MPI Odds & Ends

MPI Implementations

"MPI" is not a particular piece of code, but a standard. Like programming language standards, there are multiple possible implementations of this standard.

Vendors of high performance computing systems have proprietary implementations. There are two popular free implementations, however:

- MPICH
- Open MPI

MPI Barriers

Recall that a barrier is a point in a parallel program that all threads or processes must reach before any of them may continue past it.

A barrier is easy to achieve in an MPI program with the MPI_Barrier function:


int MPI_Barrier(MPI_Comm comm)

The only parameter is the communicator group. A process that calls MPI_Barrier waits inside the barrier until every process in the communicator group has also called it.

The following program uses MPI_Barrier to synchronize a group of processes that finish their work at different times:


#include <unistd.h>
#include <stdlib.h>
#include <mpi.h>
#include <stdio.h>


int main(int argc, char** argv) {
    int rank, size;

    /* initialize MPI */
    MPI_Init(&argc, &argv);

    /* get the rank (process rank) and size (number of processes) */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* we start to work */
    printf("Process %d starting!\n", rank);

    /* simulate the processes taking slightly different amounts of time by sleeping
     * for our process rank seconds */
    sleep(rank);
    printf("Process %d is done its work!\n", rank);

    /* a barrier */
    MPI_Barrier(MPI_COMM_WORLD);

    printf("Process %d is past the barrier!\n", rank); 

    /* quit MPI */
    MPI_Finalize();
    return 0;
}
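
Assuming the program is saved as barrier.c (the filename here is just for illustration), it can be compiled and launched with the standard MPI wrapper commands. The exact interleaving of lines varies from run to run, but every "past the barrier" line is guaranteed to appear only after the slowest process has finished its work:


$ mpicc barrier.c -o barrier
$ mpirun -np 4 ./barrier
Process 0 starting!
Process 1 starting!
Process 2 starting!
Process 3 starting!
Process 0 has finished its work!
Process 1 has finished its work!
Process 2 has finished its work!
Process 3 has finished its work!
Process 0 is past the barrier!
Process 2 is past the barrier!
Process 1 is past the barrier!
Process 3 is past the barrier!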

Creating Communicator Groups

We have seen that most MPI functions deal with a communicator group. Often, just using the MPI_COMM_WORLD communicator group is sufficient. Other times, we may want to have a subset of processes work on something.

To do this, we first need to create an MPI_Group structure to hold a subset of our processes. The MPI_Group_incl function is used to specify a group:


int MPI_Group_incl(MPI_Group group, int n, const int ranks[], MPI_Group* newgroup)

Here, group is the existing group to draw from, ranks is an array of n process ranks to include, and newgroup receives the resulting group.

An MPI_Group simply names a set of processes; an MPI_Comm is what actually allows communication amongst them. So in order to communicate within our new group, we also need to create a communicator for it with MPI_Comm_create:


int MPI_Comm_create(MPI_Comm comm, MPI_Group group, MPI_Comm* newcomm)

This is a collective call: every process in comm must make it. Each process receives the new communicator in newcomm, or MPI_COMM_NULL if it is not a member of group.

The following program uses these functions to create two new communicator groups: one for the first half of processes, and one for the second:


#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    /* our rank and size */
    int rank, size;

    /* we need to create groups of processes */
    MPI_Group orig_group, new_group;

    /* find our rank and size in MPI_COMM_WORLD */
    MPI_Init(&argc,&argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* make arrays storing the ranks of the processes in group1 and group2
     * (group2 gets the extra process when size is odd) */
    int half = size / 2;
    int ranks1[half], ranks2[size - half], i;
    for (i = 0; i < half; i++) {
        ranks1[i] = i;
    }
    for (i = half; i < size; i++) {
        ranks2[i - half] = i;
    }

    /* find the original group */
    MPI_Comm_group(MPI_COMM_WORLD, &orig_group);

    /* divide the processes into two distinct groups based upon rank */
    if (rank < half) {
        MPI_Group_incl(orig_group, half, ranks1, &new_group);
    } else {
        MPI_Group_incl(orig_group, size - half, ranks2, &new_group);
    }

    /* Create new communicator for our group */
    MPI_Comm new_comm;
    MPI_Comm_create(MPI_COMM_WORLD, new_group, &new_comm);

    /* have the processes sum the ranks of each group */
    int send = rank, recv;
    MPI_Allreduce(&send, &recv, 1, MPI_INT, MPI_SUM, new_comm);

    /* get our rank within the new group */
    int grank;
    MPI_Comm_rank(new_comm, &grank);

    /* print the results */
    printf("Process %d (%d in sub-group) has %d!\n", rank, grank, recv);

    /* quit */
    MPI_Finalize();
    return 0;
}
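
With four processes, ranks 0 and 1 form one group (sum 0 + 1 = 1) and ranks 2 and 3 form the other (sum 2 + 3 = 5), so the output, in some order, is:


Process 0 (0 in sub-group) has 1!
Process 1 (1 in sub-group) has 1!
Process 2 (0 in sub-group) has 5!
Process 3 (1 in sub-group) has 5!

As an aside, the group/communicator dance above can also be done in a single call with MPI_Comm_split, which assigns each process to a sub-communicator based on a "color" value it passes. The following is a minimal sketch of the same first-half/second-half split written this way; it is an alternative to the program above, not part of it:


#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* processes passing the same color end up in the same communicator:
     * the first half pass 0, the second half pass 1 */
    int color = (rank < size / 2) ? 0 : 1;

    /* the "key" argument (here, our world rank) orders the ranks
     * within each new communicator */
    MPI_Comm new_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &new_comm);

    /* sum the world ranks within each sub-communicator, as before */
    int send = rank, recv;
    MPI_Allreduce(&send, &recv, 1, MPI_INT, MPI_SUM, new_comm);

    /* get our rank within the new communicator and print the results */
    int grank;
    MPI_Comm_rank(new_comm, &grank);
    printf("Process %d (%d in sub-group) has %d!\n", rank, grank, recv);

    MPI_Comm_free(&new_comm);
    MPI_Finalize();
    return 0;
}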

Copyright © 2018 Ian Finlayson | Licensed under a Creative Commons Attribution 4.0 International License.