Open MPI
Introduction
The Open MPI Project is an open source MPI-2 implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from all across the High Performance Computing community in order to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers and computer science researchers.
Features implemented or in short-term development for Open MPI are listed on the project's website (see the links below).
Open MPI's website
Documentation
Download
Running OpenMPI Programs on the Lab's Cluster
OpenMPI programs are MPI programs written to the MPI-2 specification and built against the Open MPI library. This section provides the information needed to run such programs with OpenMPI as implemented in Scyld ClusterWare.
Prerequisites to Running OpenMPI
A number of commands, such as mpirun, are duplicated between OpenMPI and other MPI implementations. Scyld ClusterWare provides the env-modules package, which gives users a convenient way to switch between the various implementations. Be sure to load an OpenMPI module so that the OpenMPI commands, located in /usr/openmpi/, take precedence over the MPICH commands located in /usr/. Each module bundles together the compiler-specific environment variables needed to configure your shell for building and running your application, and for accessing compiler-specific manpages. Be sure to load the module that matches the compiler used to build the application you wish to run. For example, to load the OpenMPI module for use with the Intel compiler, do the following:
[user@cluster user] $ module load openmpi/intel
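After loading a module, you may want to confirm that the OpenMPI commands are the ones found first in your PATH. A quick check might look like the following; the module names and paths shown are illustrative and depend on your installation:
[user@cluster user] $ module list        # illustrative output; exact entries vary
Currently Loaded Modulefiles:
  1) openmpi/intel
[user@cluster user] $ which mpirun       # path depends on where OpenMPI is installed
/usr/openmpi/intel/bin/mpirun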
Currently, there are modules for the GNU, Intel, PGI, and PathScale compilers. To see a list of all available modules:
[user@cluster user] $ module avail openmpi
------------------------------- /opt/modulefiles -------------------------------
openmpi/gnu(default)    openmpi/path
openmpi/intel           openmpi/pgi
For more information about creating your own modules, see http://modules.sourceforge.net and the manpages man module and man modulefile.
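If you are curious what a given module actually changes in your environment, or want a starting point for writing your own modulefile, the module show command prints the commands a modulefile would execute. A sketch of what this might display follows; the variables and paths are assumptions and depend on how the modulefile was written:
[user@cluster user] $ module show openmpi/gnu    # output is illustrative
-------------------------------------------------------------------
/opt/modulefiles/openmpi/gnu:

prepend-path     PATH /usr/openmpi/gnu/bin
prepend-path     MANPATH /usr/openmpi/gnu/man
prepend-path     LD_LIBRARY_PATH /usr/openmpi/gnu/lib
-------------------------------------------------------------------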
Using OpenMPI
Unlike the Scyld ClusterWare MPICH implementation, OpenMPI does not honor the Scyld Beowulf job mapping environment variables. You must either specify the list of hosts on the command line or in a hostfile (an example hostfile appears below). To specify the list of hosts on the command line, use the -H option. The argument following -H is a comma-separated list of hostnames, not node numbers. For example, to run a two-process job with one process running on node 0 and one on node 1:
[user@cluster user] $ mpirun -H n0,n1 -np 2 ./mpiprog
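Alternatively, the same hosts can be listed in a hostfile and passed to mpirun with the --hostfile option. A minimal sketch, assuming an example file named myhosts (the file name and slot counts are illustrative):
[user@cluster user] $ cat myhosts      # example file; adjust hosts and slots for your cluster
n0 slots=1
n1 slots=1
[user@cluster user] $ mpirun --hostfile myhosts -np 2 ./mpiprog
The slots value tells Open MPI how many processes it may place on that host.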
Support for running jobs over InfiniBand using the OpenIB transport is included with the OpenMPI distributed with Scyld ClusterWare. Much like running a job with MPICH over InfiniBand, you must specifically request the use of OpenIB. For example:
[user@cluster user] $ mpirun --mca btl openib,sm,self -H n0,n1 -np 2 ./myprog
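To confirm that your OpenMPI build actually includes the openib BTL component before requesting it, you can ask ompi_info to list the available BTLs. The component names and version numbers below are illustrative and will vary with your build:
[user@cluster user] $ ompi_info | grep btl      # illustrative output; versions vary
                 MCA btl: openib (MCA v2.0, API v2.0, Component v1.4)
                 MCA btl: self (MCA v2.0, API v2.0, Component v1.4)
                 MCA btl: sm (MCA v2.0, API v2.0, Component v1.4)
                 MCA btl: tcp (MCA v2.0, API v2.0, Component v1.4)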
Read the OpenMPI mpirun manpage for more information about using a hostfile and about the other tunable options available through mpirun.