The Open MPI Project is an open source MPI-2 implementation developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine expertise, technologies, and resources from across the High Performance Computing community to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers, and computer science researchers. Documentation for Open MPI can be found on its official website.

Currently, versions 1.2.9, 1.10.1, 3.1.4, 4.0.1, and 4.0.3 are available on the cluster, and Open MPI can be compiled with the GCC, Intel, Open64, and PGI compilers.

Basics

To use Open MPI, you must first load the Open MPI module with the compiler of your choice. For example, if you want to use the GCC compiler, use the command

module load openmpi/gcc
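If multiple Open MPI versions or compilers are installed, the exact module names can vary by cluster. The following commands are standard in the Environment Modules and Lmod systems and can be used to locate and verify a module; the module name shown is the one from the example above:

```shell
# list the Open MPI modules available on the cluster
module avail openmpi

# load the chosen module (check the `module avail` output for
# the exact name used on your cluster)
module load openmpi/gcc

# confirm which modules are currently loaded
module list
```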

To compile a file, use the Open MPI compiler wrapper that matches your source language. The C wrapper is named mpicc; C++ code can be compiled with mpicxx, mpiCC, or mpic++.

For example, to compile a C file, you can use the command:

mpicc <filename>.c -o <filename>

where <filename> is the name of the file to be compiled. The -o <filename> flag tells the wrapper to create an executable named <filename>, which can then be run with ./<filename>.

To run an Open MPI program, use the command

mpirun ./<filename>

where <filename> is the name of the executable created during compilation. mpirun accepts command-line options such as -np 5, which launches 5 copies of the executable, one per MPI process.
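As a sketch of common mpirun invocations (the flags are standard Open MPI options; the process counts are arbitrary examples):

```shell
# launch 4 MPI processes
mpirun -np 4 ./test_mpi

# launch 2 processes per node across the allocated nodes
# (ppr = processes per resource)
mpirun --map-by ppr:2:node ./test_mpi
```

When launched inside a Slurm job, mpirun with Slurm support typically detects the allocation and starts one process per allocated task, so these flags are often unnecessary in a batch script.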

Running Open MPI through a job script

1. Create a program. This repository provides a simple example, test_mpi.c, which runs on several CPU cores spanning several compute nodes.

test_mpi.c


#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {

        MPI_Init(&argc, &argv);    // initialize the MPI environment

        int world_size;            // total number of processes
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);

        int world_rank;            // rank of this process
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        char processor_name[MPI_MAX_PROCESSOR_NAME];
        int name_len;
        MPI_Get_processor_name(processor_name, &name_len); // name of the node running this process

        printf("Hello world from processor %s, rank %d out of %d processors\n",
                processor_name, world_rank, world_size);

        MPI_Finalize();            // clean up the MPI environment
        return 0;
}

2. Compile the program using the correct compiler wrapper. In this case, compile with mpicc test_mpi.c -o test_mpi, which creates an executable called test_mpi.

3. Prepare the submission script, which is submitted to the Slurm scheduler in order to run the Open MPI program as a job. This repository provides the script job.sh as an example.

In this example, the batch script requests 3 nodes with 2 tasks per node, for a total of 6 MPI processes.

job.sh


#!/bin/bash

#SBATCH --job-name=mpi_test
#SBATCH -o mpi_out%j.out
#SBATCH -e mpi_err%j.err
#SBATCH -N 3
#SBATCH --ntasks-per-node=2

echo -e '\n submitted Open MPI job'
echo 'hostname'
hostname

# load Open MPI module
module load openmpi/gcc

# compile the C file
mpicc test_mpi.c -o test_mpi

# run compiled test_mpi.c file
mpirun ./test_mpi

4. Submit the job using sbatch job.sh
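After submission, the job can be monitored with standard Slurm commands; sbatch prints the assigned job ID, and <jobid> below is a placeholder for that number:

```shell
# submit the job; Slurm prints the assigned job ID
sbatch job.sh

# check the state of that job (PD = pending, R = running)
squeue -j <jobid>

# or list all of your own jobs
squeue -u $USER
```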

5. Examine the results.
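Based on the #SBATCH -o and -e lines in job.sh above, standard output and standard error land in mpi_out<jobid>.out and mpi_err<jobid>.err, where Slurm substitutes the job ID for %j. A quick way to inspect them (<jobid> is a placeholder):

```shell
# print the job's standard output; each MPI process prints one line
# of the form:
#   Hello world from processor <nodename>, rank <k> out of 6 processors
# and the order of the lines is nondeterministic
cat mpi_out<jobid>.out

# check for errors
cat mpi_err<jobid>.err
```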