Lecture on

Parallel Systems

Winter Term 2018/19, LVA 703650

Vienna Scientific Cluster 3
Photo: TU Wien/Matthias Heisler
LEO 3  University of Innsbruck
Photo: Wolfgang Kapferer

MACH2  Linz

Room:       HSB 2

Time:        Fridays, 10:15 - 12:00 (begin Oct. 5)

Instructor: Prof. T. Fahringer

Distributed and Parallel Systems Group, Institute for Computer Science, University of Innsbruck

office hours (ICT building, 2nd floor): Wednesday 1 - 2 pm, email: Thomas.Fahringer at uibk.ac.at

Notice:       All lectures and lecture material including exams and laboratories are given in English only.


    Parallel processing has matured to the point where it has a considerable impact on the computer marketplace. All major microprocessor vendors (Intel, AMD, ARM, etc.) now exclusively develop microprocessors with multiple cores per chip. Systems ranging from desktop computers, notebooks, PDAs, smartphones, servers, and game consoles to supercomputers, industrial computers, and embedded devices are equipped with CPUs that contain multiple compute cores. The potential of these systems, however, can only be fully realized through explicit parallel programming. Understanding the benefits, challenges, and limits of parallel computing is therefore increasingly a mandatory qualification for IT professionals, and the only way forward towards new IT infrastructures and modern computer programming.

    The ultimate efficiency in parallel systems is to achieve a computation speedup factor of p with p processors. Although this ideal often cannot be achieved, some speedup is generally possible by using a multiprocessor-based architecture. The actual speed gain depends on the system's architecture and the algorithm run on it.

    This course serves as an introduction to the area of parallel systems, with a special focus on programming for parallel architectures. Basic concepts and important techniques will be presented. Major approaches to parallel programming, including shared-memory multiprocessing and message passing, will be covered in detail. Students will gain programming experience in each of these paradigms through accompanying practical laboratory exercises (proseminar). Architectural considerations, parallelization techniques, program analysis, and measures of performance will be covered. We will not follow any particular text throughout the entire class. Instead, we will use several textbooks as the general guideline of the lecture, covering both basic concepts and programming skills.
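The speedup notion mentioned above can be made concrete with the standard definitions S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p. A minimal sketch (the timing values below are made-up illustration numbers, not course data):

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """S(p) = T(1) / T(p); the ideal on p processors is S(p) = p."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, p: int) -> float:
    """E(p) = S(p) / p; 1.0 would mean perfect (linear) speedup."""
    return speedup(t_serial, t_parallel) / p

if __name__ == "__main__":
    # Hypothetical measured run times: 64 s serially, 10 s on 8 processors.
    t1, tp, p = 64.0, 10.0, 8
    print(f"S = {speedup(t1, tp):.2f}, E = {efficiency(t1, tp, p):.2f}")
    # S = 6.40, E = 0.80 -- below the ideal S = 8, as is typical in practice
```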

    As part of this lecture we thus offer an introduction to the most important basic concepts of parallel processing, which is crucial know-how for dealing with essentially any new computer put on the market. This course is designed for all graduate students interested in parallel processing and high-performance computing.

Lecture Foils

Can be found in OLAT one day before each lecture.

Example Exam

Here you can find an example exam.

Course Outline

Introduction to Parallel Systems
Parallel Programming Models
Message Passing Programming
Dependence Analysis
OpenMP Programming
Evaluation of Programs
Optimizations for Scalar Architectures
Models for Parallel Computing

External material

Further Reading

perfbook Paul E. McKenney (Ed.): Is Parallel Programming Hard, And, If So, What Can You Do About It? (online)
parcomp Ananth Grama et al.: Introduction to Parallel Computing (2nd Ed.)
parcomparch David Culler and Jaswinder Pal Singh: Parallel Computer Architecture: A Hardware/Software Approach
comparch John Hennessy and David Patterson: Computer Architecture: A Quantitative Approach (5th Ed.)
hpcforeng Georg Hager and Gerhard Wellein: Introduction to High Performance Computing for Scientists and Engineers
usingopenmp Barbara Chapman et al.: Using OpenMP
usingmpi William Gropp, Ewing Lusk, Anthony Skjellum: Using MPI
usingadvancedmpi William Gropp, Torsten Hoefler, Ewing Lusk: Using Advanced MPI