MVAPICH

Introduction

The MVAPICH/MVAPICH2 software delivers high performance, scalability and fault tolerance for high-end computing systems and servers using InfiniBand, 10GigE/iWARP and RDMAoE networking technologies. It is used by more than 1,070 organizations worldwide to exploit the potential of these emerging networking technologies for modern systems, and it is distributed by many InfiniBand, 10GigE/iWARP and RDMAoE vendors in their software distributions. MVAPICH and MVAPICH2 are also available with the OpenFabrics Enterprise Distribution (OFED) stack, and they power several supercomputers in the TOP500 list.

  • MVAPICH is an MPI-1 implementation based on MPICH and MVICH; it is pronounced "em-vah-pich". The latest release is MVAPICH 1.2 (which includes MPICH 1.2.7) and is available under BSD licensing. MVAPICH 1.2 supports the following underlying transport interfaces (a minimal usage sketch appears after the interface lists below):
    • High-performance support with scalability for the OpenFabrics/Gen2 interface, developed by OpenFabrics, to work with InfiniBand and other RDMA interconnects.
    • (NEW) High-performance support with scalability for the OpenFabrics/Gen2-RDMAoE interface, developed by OpenFabrics.
    • High-performance support with scalability (for clusters with multi-thousand cores) for the OpenFabrics/Gen2-Hybrid interface, developed by OpenFabrics, to work with InfiniBand.
    • Shared-memory-only channel: useful for running MPI jobs on multi-processor systems without using any high-performance network, for example multi-core servers, desktops and laptops, and clusters with serial nodes.
    • The InfiniPath interface for InfiniPath adapters from QLogic.
    • The standard TCP/IP interface (provided by MPICH) to work with a range of networks. This interface can also be used with the IPoIB support of InfiniBand; however, it will not deliver good performance or scalability compared to the lower-level (OpenFabrics/Gen2 or OpenFabrics/Gen2-Hybrid) interfaces.
  • MVAPICH2 is an MPI-2 implementation (conforming to the MPI 2.1 standard) which includes all MPI-1 features. It is based on MPICH2 and MVICH. The latest release is MVAPICH2 1.4 (which includes MPICH2 1.0.8p1) and is available under BSD licensing. The current release supports the following six underlying transport interfaces (an MPI-2 one-sided example appears after this list):
    • OpenFabrics-IB: This interface supports all InfiniBand-compliant devices based on the OpenFabrics Gen2 layer. It has the most features and is the most widely used. For example, this interface can be used over all Mellanox InfiniBand adapters, IBM eHCA adapters and QLogic adapters.
    • OpenFabrics-iWARP: This interface supports all iWARP compliant devices supported by OpenFabrics. For example, this layer supports Chelsio T3 adapters with the native iWARP mode.
    • OpenFabrics-RDMAoE: This interface supports the emerging RDMAoE (RDMA over Ethernet) interface for Mellanox ConnectX-EN adapters with 10GigE switches.
    • QLogic InfiniPath: This interface provides native support for InfiniPath adapters from QLogic over the PSM interface. It provides high-performance point-to-point communication for both one-sided and two-sided operations.
    • uDAPL: This interface supports all network adapters and software stacks which implement the portable DAPL interface from the DAT Collaborative. For example, this interface can be used over all Mellanox adapters, Chelsio adapters and NetEffect adapters. It can also be used with the Solaris uDAPL-IBTL implementation over InfiniBand adapters.
    • TCP/IP: The standard TCP/IP interface (provided by MPICH2) to work with a range of network adapters supporting the TCP/IP interface. This interface can also be used with the IPoIB (TCP/IP over InfiniBand) support of InfiniBand; however, it will not deliver good performance or scalability compared to the other interfaces.
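
Whichever of the interfaces above MVAPICH is built against, application code uses the standard MPI API; the transport is selected when the library itself is configured. The following is a minimal MPI-1 point-to-point sketch. The compile and launch commands in the leading comment (mpicc, mpirun_rsh, the host names and the file name) describe a typical MVAPICH installation and are illustrative assumptions, not output of this document.

/*
 * hello_mpi.c -- minimal MPI-1 point-to-point sketch (file and host names
 * are illustrative assumptions, not part of the MVAPICH distribution).
 *
 * Typical build and launch with MVAPICH's compiler wrapper and launcher:
 *   mpicc -o hello_mpi hello_mpi.c
 *   mpirun_rsh -np 2 node01 node02 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, msg, dest;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        msg = 42;
        /* Rank 0 sends one integer to every other rank. */
        for (dest = 1; dest < size; dest++)
            MPI_Send(&msg, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
        printf("Rank 0 of %d sent %d\n", size, msg);
    } else {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Rank %d received %d\n", rank, msg);
    }

    MPI_Finalize();
    return 0;
}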
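
To illustrate what MVAPICH2 adds beyond MPI-1, here is a small sketch of MPI-2 one-sided (RMA) communication using MPI_Win_create, MPI_Put and MPI_Win_fence. It is generic MPI-2 code rather than anything MVAPICH2-specific, and the build and launch steps are assumed to mirror the MPI-1 example above.

/*
 * rma_put.c -- sketch of MPI-2 one-sided communication (MPI_Put inside a
 * fence epoch).  Generic MPI-2 code; nothing here is MVAPICH2-specific.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, target;
    int local = 0;      /* each rank exposes this integer as an RMA window */
    int value = 100;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Every rank contributes one int of window memory. */
    MPI_Win_create(&local, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);             /* open the access/exposure epoch  */
    if (rank == 0) {
        /* Rank 0 writes 100 directly into every other rank's window. */
        for (target = 1; target < size; target++)
            MPI_Put(&value, 1, MPI_INT, target, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);             /* close the epoch; puts complete  */

    printf("Rank %d: window value = %d\n", rank, local);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}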

MVAPICH's website

http://mvapich.cse.ohio-state.edu/

Download

Documentation

Documentation of MVAPICH

MPI Commands & MPI Routines of MVAPICH

List of MPI Commands & MPI Routines