
Unit information: High Performance Computing in 2014/15

Unit name High Performance Computing
Unit code COMS30004
Credit points 20
Level of study H/6
Teaching block(s) Teaching Block 1 (weeks 1 - 12)
Unit director Professor McIntosh-Smith
Open unit status Not open
Pre-requisites

Ability to program competently in C.

Co-requisites

None

School/department Department of Computer Science
Faculty Faculty of Engineering

Description including Unit Aims

The aim of this unit is to introduce and explore technologies for high performance, high throughput and high availability computing, and to offer practical, hands-on experience with those technologies. Students completing the unit will have had the opportunity to integrate content from other units in the programme, for example by implementing high performance parallel versions of algorithms from COMS21103 or COMS21202 based on theory introduced in COMS22101. The syllabus will include:

  • Algorithmic models (the view from Berkeley).
  • Computational models (PRAM, Flynn's taxonomy).
  • Communication models (interconnects, message passing).
  • Memory models (NUMA, COMA).
  • Single-computer technologies (vector computing via SSE, multi-core computing via OpenMP, stream computing via CUDA/OpenCL); a brief OpenMP sketch follows this list.
  • Multi-computer technologies (cluster computing via MPI, cloud/grid computing via Hadoop/MapReduce).
  • Other approaches (batch processing via Condor, distributed computing via BOINC, distributed and redundant file systems such as RAID/GFS, load balancing, checkpointing).
  • Design and implementation of parallel algorithms and libraries.
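
As a brief illustration of the multi-core approach listed above, the following is a minimal sketch in C, assuming a compiler with OpenMP support (e.g. gcc -fopenmp); the array size and contents are arbitrary placeholders, not coursework material:

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000  /* arbitrary placeholder size */

    int main(void)
    {
        static double a[N];
        double sum = 0.0;

        /* Threads split the iteration space between them. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = (double)i;

        /* reduction(+:sum) gives each thread a private partial sum
           and combines them safely when the loop ends. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %.0f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }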

Intended Learning Outcomes

On successful completion of this unit, students will be able to:

  • Understand state-of-the-art high performance computing technologies, and select the right one for a given task;
  • Utilise said technologies through appropriate programming interfaces (e.g., specialist languages, extensions to standard languages, libraries, or compiler assistance);
  • Analyse, implement, debug and profile high performance algorithms as realised in software.

Specific learning outcomes will be tackled through focused coursework activities, including:

  • Mastering shared memory multi-core parallelisation through approaches such as OpenMP and Ct;
  • Message passing parallel programming through APIs such as MPI (see the sketch after this list);
  • Many-core parallel programming through stream languages such as OpenCL and CUDA.
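
As a flavour of the message-passing style named above, here is a minimal sketch in C, assuming an MPI implementation such as Open MPI or MPICH (compile with mpicc, launch with mpirun); the values being summed are arbitrary placeholders:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each process contributes one value; MPI_Reduce combines
           them with MPI_SUM and delivers the result to rank 0. */
        int local = rank;  /* arbitrary per-process value */
        int total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks 0..%d = %d\n", size - 1, total);

        MPI_Finalize();
        return 0;
    }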

Teaching Information

Two hours per week in lecture format and two hours per week in laboratory or problem class format. Assessment for the unit is 100% coursework; students are expected to dedicate a significant amount of time to self-directed learning to complete the assessments.

Assessment Information

Assessment for the unit is 100% via coursework assignments based on hands-on use of high performance computing platforms (e.g., BlueCrystal phase 1 or similar). The assignments will turn the theory developed in this and previous units into practical experience.

Reading and References

David A. Patterson and John L. Hennessy. Computer Organization and Design: The Hardware/Software Interface. Morgan Kaufmann, 2004. ISBN: 1-558-60604-1. Background.

A. Grama, G. Karypis, V. Kumar and A. Gupta. Introduction to Parallel Computing. Addison Wesley, 2nd edition. ISBN: 0201648652. Background.

B. Chapman, G. Jost and R. van der Pas. Using OpenMP: Portable Shared Memory Parallel Programming. MIT Press. ISBN: 0262533022. Background.

P. Pacheco. Parallel Programming with MPI. Morgan Kaufmann. ISBN: 1558603395. Background.