Intermediate MPI Programming

This section covers more advanced aspects of MPI, including non-blocking communication (commonly used to avoid deadlock and to overlap communication with computation), collective communication patterns, virtual topologies and derived datatypes. The relevant exercises are numbers 5, 6, 7 and 8; exercise 9 is entirely optional, but hopefully fun to attempt. This block also includes simple example solutions to the basic exercises.


Learning Objectives

After completing this section, the student will:

  • have gained familiarity with intermediate aspects of MPI such as non-blocking and collective communication, virtual topologies and derived datatypes

  • have gained practical experience by solving various exercises


Non-Blocking Communication

This subsection covers non-blocking communication functions (e.g. MPI_Isend and MPI_Irecv) that can be used to avoid deadlock (slides).
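
As an illustration, here is a minimal sketch in C (illustrative only, not one of the course's example solutions) of a deadlock-free ring exchange: every rank posts MPI_Irecv and MPI_Isend before waiting on both, so the cyclic dependency that can deadlock a naive blocking MPI_Send ring never arises.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size, sendbuf, recvbuf;
      MPI_Request reqs[2];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      int right = (rank + 1) % size;          /* neighbour to send to   */
      int left  = (rank - 1 + size) % size;   /* neighbour to recv from */
      sendbuf = rank;

      /* Post both operations before waiting: neither call blocks, so a
         cyclic send/receive dependency cannot cause deadlock. */
      MPI_Irecv(&recvbuf, 1, MPI_INT, left,  0, MPI_COMM_WORLD, &reqs[0]);
      MPI_Isend(&sendbuf, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &reqs[1]);
      MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

      printf("Rank %d received %d from rank %d\n", rank, recvbuf, left);
      MPI_Finalize();
      return 0;
  }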

Collectives

This subsection covers collective communication patterns such as MPI_Barrier, MPI_Bcast, MPI_Scatter, MPI_Gather, MPI_Scan, MPI_Reduce and MPI_Allreduce (slides).
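
For illustration, the following minimal sketch in C (not one of the course's example solutions) combines two common collectives: MPI_Bcast distributes a value held only by rank 0 to all ranks, and MPI_Allreduce forms a global sum that every rank receives.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size, n = 0, local, total;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      if (rank == 0) n = 100;                 /* value known only on root */
      MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

      local = n + rank;                       /* arbitrary per-rank value */
      MPI_Allreduce(&local, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

      printf("Rank %d: n = %d, global sum = %d\n", rank, n, total);
      MPI_Finalize();
      return 0;
  }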

Virtual Topologies

This subsection covers creation and use of virtual topologies in MPI to simplify structured communication (slides).
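
As a minimal illustrative sketch in C (not one of the course's example solutions), the code below builds a periodic 2D Cartesian topology and asks MPI for each rank's grid coordinates and its neighbours along the first dimension, the calls that typically replace hand-written neighbour arithmetic.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size, left, right;
      int dims[2] = {0, 0}, periods[2] = {1, 1}, coords[2];
      MPI_Comm cart;

      MPI_Init(&argc, &argv);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      MPI_Dims_create(size, 2, dims);     /* choose a balanced 2D grid  */
      MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);

      MPI_Comm_rank(cart, &rank);         /* rank within the grid       */
      MPI_Cart_coords(cart, rank, 2, coords);
      MPI_Cart_shift(cart, 0, 1, &left, &right);  /* neighbours, dim 0  */

      printf("Rank %d at (%d,%d): left=%d right=%d\n",
             rank, coords[0], coords[1], left, right);

      MPI_Comm_free(&cart);
      MPI_Finalize();
      return 0;
  }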

Derived Datatypes

This subsection covers derived datatypes, which facilitate communication of composite data structures and non-contiguous data (slides).
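
The following minimal sketch in C (illustrative, not one of the course's example solutions; run with at least two ranks) uses MPI_Type_vector to describe one column of a row-major matrix, so the non-contiguous column can be sent with a single MPI_Send.

  #include <mpi.h>
  #include <stdio.h>

  #define N 4

  int main(int argc, char **argv)
  {
      int rank, size;
      double a[N][N] = {{0.0}}, col[N];
      MPI_Datatype column;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      if (size < 2) { MPI_Finalize(); return 1; }  /* needs two ranks */

      /* N blocks of 1 element, stride N: one column of a row-major NxN */
      MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column);
      MPI_Type_commit(&column);

      if (rank == 0) {
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++)
                  a[i][j] = 10.0 * i + j;
          /* send the second column as a single message */
          MPI_Send(&a[0][1], 1, column, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          /* the receiver may use a plain contiguous buffer of N doubles */
          MPI_Recv(col, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
          for (int i = 0; i < N; i++)
              printf("col[%d] = %.1f\n", i, col[i]);
      }

      MPI_Type_free(&column);
      MPI_Finalize();
      return 0;
  }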

Simple example solutions using MPI in C and Fortran can be downloaded here.