The efficient use of modern parallel computers depends on exploiting parallelism at all levels: hardware, programming, and algorithms. After a brief overview of the basic concepts of parallel processing, the course presents in detail the specific concepts and language features of the Message Passing Interface (MPI) for programming parallel applications. The most important parallelization constructs of MPI are explained and applied in hands-on exercises. The parallelization of algorithms is demonstrated with simple examples, and their implementation as MPI programs is studied in practical exercises.
Contents: Fundamentals of parallel processing (computer architectures and programming models); introduction to the Message Passing Interface (MPI); the main language constructs of MPI-1 and MPI-2 (point-to-point communication, collective communication including synchronization, parallel operations, data structures, parallel I/O, process management); demonstrations and practical exercises with Fortran, C, and Python source code for all topics; parallelization of sample programs; analysis and optimization of parallel efficiency.
Objective: Use of MPI to parallelize algorithms in order to run parallel calculations across several compute nodes.
Prerequisites: The course "Using the GWDG Scientific Compute Cluster - An Introduction", or equivalent knowledge. Practical experience with Fortran, C, or Python. For the practical exercises: a GWDG account (preferred) or a course account (available upon request), and your own notebook.