FLOSS Weekly 50


FLOSS Weekly 50: Open MPI

Open MPI, an open source implementation of the Message Passing Interface (MPI) standard.

Guest

Jeff Squyres

Shownotes

Jeff Squyres works at Cisco and is one of the core developers and the chief evangelist of Open MPI.

MPI is a specification of about 600 pages that describes a communication protocol for programming parallel computers. Besides Open MPI there are many closed source MPI implementations.

The core of MPI is message passing (send and receive) with an abstraction over the actual network transport (sockets, InfiniBand, OPI).
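
As a concrete illustration of that send-and-receive core, here is a minimal sketch in C (a hypothetical example, not code from the Open MPI project itself): rank 0 sends a short message to rank 1 using only standard MPI calls, so any MPI implementation can run it.

  /* Minimal MPI point-to-point sketch: rank 0 sends, rank 1 receives. */
  #include <mpi.h>
  #include <stdio.h>
  #include <string.h>

  int main(int argc, char *argv[])
  {
      int rank;
      MPI_Init(&argc, &argv);                 /* start the MPI runtime   */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process are we?   */

      if (rank == 0) {
          const char msg[] = "hello from rank 0";
          /* send the buffer (including the trailing NUL) to rank 1, tag 0 */
          MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          char buf[64];
          /* receive the matching message from rank 0 */
          MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
          printf("rank 1 received: %s\n", buf);
      }

      MPI_Finalize();                         /* shut down the MPI runtime */
      return 0;
  }

With Open MPI installed, such a program is typically compiled with the mpicc wrapper and launched with mpirun (for example, mpirun -np 2 ./hello); the MPI abstraction then selects the actual transport, such as shared memory, sockets, or InfiniBand, at run time.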

List of TOP500 supercomputers

There are about 20 stakeholders in the Open MPI Project; half of them are academic and the other half are vendors.

Bindings built on top of MPI make it usable from other programming languages:

  • bcMPI: bcMPI is a software package that implements MPI extensions for MATLAB and GNU Octave. It consists of a core library (libbcmpi) that interfaces to the MPI library, a toolbox for MATLAB (mexmpi), and a toolbox for Octave (octmpi).
  • MPI Toolbox for Octave (MPITB): lets Octave users on a Linux cluster of several PCs call MPI library routines from within the Octave environment.
  • Parallel::MPI: Perl bindings for MPI.
  • mpi4py: MPI for Python (or mpi4py) provides bindings of the Message Passing Interface (MPI) standard for the Python programming language, allowing any Python program to exploit multiple processors.
  • pyMPI: a project integrating the Message Passing Interface (MPI) into the Python interpreter.
  • Boost MPI: C++ bindings for MPI, part of the Boost C++ class libraries.

The core of Open MPI is written in C with some scripting in Bash and Perl.

Open MPI is licensed under the BSD License.

They use Subversion as their primary source control management system and are considering a switch to Mercurial. For bug tracking they use Trac.

The current fastest machine in the TOP500 is the IBM Roadrunner at Los Alamos National Laboratory in New Mexico, USA, and it runs Open MPI. The system is a hybrid design with 12,960 IBM PowerXCell 8i CPUs and 6,480 dual-core AMD Opteron processors.

A tutorial is available at the National Center for Supercomputing Applications (NCSA) at the University of Illinois.


Previous Show - Next Show