The Message Passing Interface (MPI) is arguably the primary programming model for internode parallelism in scientific applications. Developed by the NCI Training Team, this Introduction to MPI course demonstrates MPI procedures based on the latest MPI Standard, version 4.0, with hands-on finite difference exercises.
Course Information
Prerequisites
This course demonstrates examples in C. Only basic experience with C/C++ is required: knowledge of C functions, pointers, and memory management is sufficient.
Serial codes will be provided for the exercises. The training focuses on MPI programming; C programming is secondary.
The training session is run on the Australian Research Environment (ARE) and Gadi. Attendees are encouraged to review the following page for background information.
Objectives
The training is designed as a first MPI programming course for scientists. As such, it aims to help attendees:
- Understand the MPI programming model,
- Familiarise themselves with the semantic terms in the MPI Standard,
- Perform various MPI communication operations (a minimal example follows this list).
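To give a concrete flavour of such operations, here is a minimal sketch of an MPI program in C: it initialises the runtime, queries the rank and size, and performs one blocking point-to-point exchange. The file name, message value, and compile/run commands are illustrative assumptions rather than course material.

```c
/* hello_mpi.c -- a minimal, hypothetical sketch of the MPI lifecycle
 * plus one blocking point-to-point exchange.
 * Compile: mpicc hello_mpi.c -o hello_mpi
 * Run:     mpirun -np 2 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                /* start the MPI runtime  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks  */

    if (rank == 0) {
        int msg = 42;                      /* illustrative payload   */
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int msg;
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank %d of %d received %d\n", rank, size, msg);
    }

    MPI_Finalize();                        /* shut down cleanly      */
    return 0;
}
```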
Learning outcomes
At the completion of this training session, you will be able to:
- Decide when to use MPI for parallelisation,
- Prepare buffers for communication calls,
- Distinguish and use blocking and nonblocking communications,
- Understand the different communication modes,
- Overlap communication and computation (see the sketch after this list),
- Perform basic one-sided communications,
- Output data in parallel with MPI-IO,
- Profile MPI applications,
- Feel confident tackling more advanced parallel programming.
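As one illustration of the blocking/nonblocking distinction and of overlapping communication with computation, below is a hedged sketch of a ring exchange built on MPI_Isend and MPI_Irecv. The ring neighbour pattern and the dummy work loop are assumptions for illustration, not the course's exercise code.

```c
/* overlap.c -- a hypothetical sketch: post nonblocking communication,
 * do independent computation while messages are in flight, then wait. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;         /* ring neighbours (assumed) */
    int left  = (rank - 1 + size) % size;

    double send_val = (double)rank, recv_val = 0.0;
    MPI_Request reqs[2];

    /* post the communication first ... */
    MPI_Irecv(&recv_val, 1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&send_val, 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... do independent work while messages are in flight ... */
    double local = 0.0;
    for (int i = 0; i < 1000000; i++)      /* stand-in for real computation */
        local += 1e-6;

    /* ... then wait before reusing either buffer */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    printf("rank %d got %.0f from rank %d (local work = %.3f)\n",
           rank, recv_val, left, local);

    MPI_Finalize();
    return 0;
}
```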
Covered topics
- MPI semantics
- Point-to-point communication
- Blocking communication
- Nonblocking communication
- Persistent communication
- Collective communication
- One-sided communication
- Overlapping communication and computation
- MPI-IO (a short sketch follows this list)
- Profiling MPI codes
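The MPI-IO topic covers outputting data in parallel to a single shared file. Below is a minimal, hypothetical sketch of that pattern: every rank writes its own contiguous block at a rank-dependent offset using a collective write. The file name output.dat and the per-rank block size are assumptions for illustration.

```c
/* parallel_io.c -- a hypothetical sketch of parallel output with MPI-IO:
 * each rank writes its own block of a shared file at a rank-dependent
 * byte offset, using a collective write call. */
#include <mpi.h>

#define N 4  /* values written per rank (assumption for illustration) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int data[N];
    for (int i = 0; i < N; i++)
        data[i] = rank * N + i;            /* rank-specific payload */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* each rank writes N ints at its own byte offset, collectively */
    MPI_Offset offset = (MPI_Offset)rank * N * sizeof(int);
    MPI_File_write_at_all(fh, offset, data, N, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```

Run with, for example, mpirun -np 4 ./parallel_io; under these assumptions output.dat then holds the integers 0 through 15 in rank order.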
Course outline
- Course Information
- Content Information
- Using JupyterLab on ARE (6:59)
- Overview of the MPI Standard and a Simple Example (62:14)
- Syntax and Semantics (26:42)
- Blocking Communication Operation (55:19)
- Nonblocking Communication Operation (45:16)
- Persistent Communication Operation (19:02)
- Collective Communication Operation (22:21)
- Basic MPI-IO (22:14)
- MPI Profiling Interface (16:03)