MPI for Python
- Author: Lisandro Dalcin
- Date: Apr 14, 2024
Introduction
In recent years, high performance computing has become an affordable resource to many more researchers in the scientific community than ever before. The conjunction of quality open source software and commodity hardware strongly influenced the now widespread popularity of Beowulf class clusters and clusters of workstations.
Among many parallel computational models, message-passing has proven to be an effective one. This paradigm is especially suited for (but not limited to) distributed memory architectures and is used in today's most demanding scientific and engineering applications related to modeling, simulation, design, and signal processing. However, portable message-passing parallel programming used to be a nightmare in the past because of the many incompatible options developers were faced with. Fortunately, this situation definitely changed after the MPI Forum released its standard specification.
High performance computing is traditionally associated with software development using compiled languages. However, in typical applications programs, only a small part of the code is time-critical enough to require the efficiency of compiled languages. The rest of the code is generally related to memory management, error handling, input/output, and user interaction, and those are usually the most error prone and time-consuming lines of code to write and debug in the whole development process. Interpreted high-level languages can be really advantageous for this kind of tasks.
For implementing general-purpose numerical computations, MATLAB [1] is the dominant interpreted programming language. On the open source side, Octave and Scilab are well known, freely distributed software packages providing compatibility with the MATLAB language. In this work, we present MPI for Python, a new package enabling applications to exploit multiple processors using standard MPI "look and feel" in Python scripts.
What is MPI?
MPI, [mpi-using] [mpi-ref] the Message Passing Interface, is a standardized and portable message-passing system designed to function on a wide variety of parallel computers. The standard defines the syntax and semantics of library routines and allows users to write portable programs in the main scientific programming languages (Fortran, C, or C++).
Since its release, the MPI specification [mpi-std1] [mpi-std2] has become the leading standard for message-passing libraries for parallel computers. Implementations are available from vendors of high-performance computers and from well known open source projects like MPICH [mpi-mpich] and Open MPI [mpi-openmpi].
What is Python?
Python is a modern, easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming with dynamic typing and dynamic binding. It supports modules and packages, which encourages program modularity and code reuse. Python’s elegant syntax, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms.
The Python interpreter and the extensive standard library are available in source or binary form without charge for all major platforms, and can be freely distributed. It is easily extended with new functions and data types implemented in C or C++. Python is also suitable as an extension language for customizable applications.
Python is an ideal candidate for writing the higher-level parts of large-scale scientific applications [Hinsen97] and driving simulations in parallel architectures [Beazley97] like clusters of PC’s or SMP’s. Python codes are quickly developed, easily maintained, and can achieve a high degree of integration with other libraries written in compiled languages.
Overview
MPI for Python provides an object-oriented approach to message passing based on the standard MPI-2 C++ bindings. The interface was designed with a focus on translating the syntax and semantics of the standard MPI-2 C++ bindings to Python. Any user of the standard C/C++ MPI bindings should be able to use this module without needing to learn a new interface.
Communicating Python Objects and Array Data
The Python standard library supports different mechanisms for data persistence. Many of them rely on disk storage, but pickling and marshaling can also work with memory buffers.
The pickle module provides user-extensible facilities to serialize general Python objects using ASCII or binary formats. The marshal module provides facilities to serialize built-in Python objects using a binary format specific to Python, but independent of machine architecture issues.
MPI for Python can communicate any built-in or user-defined Python object by taking advantage of the features provided by the pickle module. These facilities are routinely used to build binary representations of objects to communicate (at sending processes), and to restore them back (at receiving processes).
Although simple and general, the serialization approach (i.e., pickling and unpickling) previously discussed imposes important overheads in memory as well as processor usage, especially in the scenario of objects with large memory footprints being communicated. Pickling general Python objects, ranging from primitive or container built-in types to user-defined classes, necessarily requires computer resources. Processing is also needed for dispatching the appropriate serialization method (that depends on the type of the object) and doing the actual packing. Additional memory is always needed, and if its total amount is not known a priori, many reallocations can occur. Indeed, in the case of large numeric arrays, this is certainly unacceptable and precludes communication of objects occupying half or more of the available memory resources.
MPI for Python supports direct communication of any object exporting the single-segment buffer interface. This interface is a standard Python mechanism provided by some types (e.g., strings and numeric arrays), allowing access in the C side to a contiguous memory buffer (i.e., address and length) containing the relevant data. This feature, in conjunction with the capability of constructing user-defined MPI datatypes describing complicated memory layouts, enables the implementation of many algorithms involving multidimensional numeric arrays (e.g., image processing, fast Fourier transforms, finite difference schemes on structured Cartesian grids) directly in Python, with negligible overhead, and almost as fast as compiled Fortran, C, or C++ codes.
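For illustration, a minimal sketch (not part of the original text, assuming a run with at least two processes): a user-defined vector datatype describes one column of a C-ordered 2-D NumPy array, so the strided data is communicated directly from the array buffer without copying or pickling:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

rows, cols = 4, 5
# count=rows, blocklength=1, stride=cols: picks one element per row
column = MPI.DOUBLE.Create_vector(rows, 1, cols)
column.Commit()

if rank == 0:
    a = np.arange(rows * cols, dtype='d').reshape(rows, cols)
    comm.Send([a, 1, column], dest=1, tag=0)   # sends the first column a[:, 0]
elif rank == 1:
    col = np.empty(rows, dtype='d')
    comm.Recv([col, rows, MPI.DOUBLE], source=0, tag=0)

column.Free()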
Communicators
In MPI for Python, Comm is the base class of communicators. The Intracomm and Intercomm classes are subclasses of the Comm class. The Comm.Is_inter method (and Comm.Is_intra, provided for convenience but not part of the MPI specification) is defined for communicator objects and can be used to determine the particular communicator class.
Two predefined intracommunicator instances are available: COMM_SELF and COMM_WORLD. From them, new communicators can be created as needed.
The number of processes in a communicator and the calling process rank can be respectively obtained with the Comm.Get_size and Comm.Get_rank methods. The associated process group can be retrieved from a communicator by calling the Comm.Get_group method, which returns an instance of the Group class. Set operations with Group objects like Group.Union, Group.Intersection and Group.Difference are fully supported, as well as the creation of new communicators from these groups using Comm.Create and Comm.Create_group.
New communicator instances can be obtained with the Comm.Clone, Comm.Dup and Comm.Split methods, as well as with the Intracomm.Create_intercomm and Intercomm.Merge methods.
Virtual topologies (Cartcomm, Graphcomm and Distgraphcomm classes, which are specializations of the Intracomm class) are fully supported. New instances can be obtained from intracommunicator instances with the factory methods Intracomm.Create_cart and Intracomm.Create_graph.
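As a minimal sketch of these facilities (assuming any number of processes), the world communicator is queried and then split into two new intracommunicators holding the even and odd ranks:

from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

color = rank % 2                       # 0 for even ranks, 1 for odd ranks
newcomm = comm.Split(color, key=rank)  # processes sharing a color share a communicator
print(f"world rank {rank}/{size} -> new rank {newcomm.Get_rank()}/{newcomm.Get_size()}")
newcomm.Free()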
Point-to-Point Communications
Point-to-point communication is a fundamental capability of message passing systems. This mechanism enables the transmission of data between a pair of processes, one side sending, the other receiving.
MPI provides a set of send and receive functions allowing the communication of typed data with an associated tag. The type information enables the conversion of data representation from one architecture to another in the case of heterogeneous computing environments; additionally, it allows the representation of non-contiguous data layouts and user-defined datatypes, thus avoiding the overhead of (otherwise unavoidable) packing/unpacking operations. The tag information allows selectivity of messages at the receiving end.
Blocking Communications
MPI provides basic send and receive functions that are blocking. These functions block the caller until the data buffers involved in the communication can be safely reused by the application program.
In MPI for Python, the Comm.Send, Comm.Recv and Comm.Sendrecv methods of communicator objects provide support for blocking point-to-point communications within Intracomm and Intercomm instances. These methods can communicate memory buffers. The variants Comm.send, Comm.recv and Comm.sendrecv can communicate general Python objects.
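For example, a minimal sketch of a deadlock-free ring exchange with Comm.Sendrecv (assuming any number of processes): each process sends its rank to the right neighbor while receiving from the left neighbor in a single blocking call:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

right = (rank + 1) % size
left = (rank - 1) % size

sendbuf = np.array([rank], dtype='i')
recvbuf = np.empty(1, dtype='i')
comm.Sendrecv(sendbuf, dest=right, sendtag=0,
              recvbuf=recvbuf, source=left, recvtag=0)
assert recvbuf[0] == left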
Nonblocking Communications
On many systems, performance can be significantly increased by overlapping communication and computation. This is particularly true on systems where communication can be executed autonomously by an intelligent, dedicated communication controller.
MPI provides nonblocking send and receive functions. They allow the possible overlap of communication and computation. Nonblocking communication always comes in two parts: posting functions, which begin the requested operation; and test-for-completion functions, which allow the user to discover whether the requested operation has completed.
In MPI for Python, the Comm.Isend and Comm.Irecv methods initiate send and receive operations, respectively. These methods return a Request instance, uniquely identifying the started operation. Its completion can be managed using the Request.Test, Request.Wait and Request.Cancel methods. The management of Request objects and associated memory buffers involved in communication requires a careful, rather low-level coordination. Users must ensure that objects exposing their memory buffers are not accessed at the Python level while they are involved in nonblocking message-passing operations.
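A minimal sketch of buffer-based nonblocking communication (assuming at least two processes); note that the NumPy arrays are not touched until Wait() returns:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = np.arange(10, dtype='d')
    req = comm.Isend([data, MPI.DOUBLE], dest=1, tag=7)
    req.Wait()   # data may be reused only after completion
elif rank == 1:
    data = np.empty(10, dtype='d')
    req = comm.Irecv([data, MPI.DOUBLE], source=0, tag=7)
    req.Wait()   # data is valid only after completion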
Persistent Communications
Often a communication with the same argument list is repeatedly executed within an inner loop. In such cases, communication can be further optimized by using persistent communication, a particular case of nonblocking communication allowing the reduction of the overhead between processes and communication controllers. Furthermore, this kind of optimization can also alleviate the extra call overheads associated with interpreted, dynamic languages like Python.
In MPI for Python, the Comm.Send_init and Comm.Recv_init methods create persistent requests for send and receive operations, respectively. These methods return an instance of the Prequest class, a subclass of the Request class. The actual communication can be started using the Prequest.Start method, and its completion can be managed as previously described.
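A minimal sketch of persistent requests (assuming at least two processes): the same argument list is set up once and restarted inside an inner loop:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank in (0, 1):
    buf = np.zeros(5, dtype='d')
    if rank == 0:
        req = comm.Send_init([buf, MPI.DOUBLE], dest=1, tag=0)
    else:
        req = comm.Recv_init([buf, MPI.DOUBLE], source=0, tag=0)
    for step in range(3):
        if rank == 0:
            buf[:] = step    # update the outgoing data in place
        req.Start()          # start the persistent operation
        req.Wait()           # complete it like any other request
    req.Free()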
Collective Communications
Collective communications allow the transmittal of data between multiple processes of a group simultaneously. The syntax and semantics of collective functions is consistent with point-to-point communication. Collective functions communicate typed data, but messages are not paired with an associated tag; selectivity of messages is implied in the calling order. Additionally, collective functions come in blocking versions only.
The most commonly used collective communication operations are the following:
- Barrier synchronization across all group members.
- Global communication functions:
  - Broadcast data from one member to all members of a group.
  - Gather data from all members to one member of a group.
  - Scatter data from one member to all members of a group.
- Global reduction operations such as sum, maximum, minimum, etc.
In MPI for Python, the Comm.Bcast, Comm.Scatter, Comm.Gather, Comm.Allgather and Comm.Alltoall methods provide support for collective communications of memory buffers. The lower-case variants Comm.bcast, Comm.scatter, Comm.gather, Comm.allgather and Comm.alltoall can communicate general Python objects. The vector variants (which can communicate different amounts of data to each process) Comm.Scatterv, Comm.Gatherv, Comm.Allgatherv, Comm.Alltoallv and Comm.Alltoallw are also supported; they can only communicate objects exposing memory buffers.
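For illustration, a minimal sketch of the vector variant Comm.Scatterv (assuming any number of processes): the root distributes a different number of elements to each process:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

counts = [i + 1 for i in range(size)]            # rank i receives i+1 items
displs = [sum(counts[:i]) for i in range(size)]  # displacements into the send buffer

sendbuf = None
if rank == 0:
    sendbuf = np.arange(sum(counts), dtype='d')
recvbuf = np.empty(counts[rank], dtype='d')

comm.Scatterv([sendbuf, counts, displs, MPI.DOUBLE], recvbuf, root=0)
assert len(recvbuf) == rank + 1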
Global reduction operations on memory buffers are accessible through the Comm.Reduce, Comm.Reduce_scatter, Comm.Allreduce, Intracomm.Scan and Intracomm.Exscan methods. The lower-case variants Comm.reduce, Comm.allreduce, Intracomm.scan and Intracomm.exscan can communicate general Python objects; however, the actual required reduction computations are performed sequentially at some process. All the predefined (i.e., SUM, PROD, MAX, etc.) reduction operations can be applied.
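A minimal sketch of a buffer-based reduction (assuming any number of processes): the element-wise sum of per-process arrays is collected at rank 0 with the predefined SUM operation:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

sendbuf = np.full(4, rank, dtype='i')
recvbuf = np.empty(4, dtype='i') if rank == 0 else None

comm.Reduce(sendbuf, recvbuf, op=MPI.SUM, root=0)
if rank == 0:
    assert np.all(recvbuf == sum(range(size)))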
Support for GPU-aware MPI
Several MPI implementations, including Open MPI and MVAPICH, support passing GPU pointers to MPI calls to avoid explicit data movement between the host and the device. On the Python side, GPU arrays have been implemented by many libraries that need GPU computation, such as CuPy, Numba, PyTorch, and PyArrow. In order to increase library interoperability, two kinds of zero-copy data exchange protocols are defined and agreed upon: DLPack and CUDA Array Interface. For example, a CuPy array can be passed to a Numba CUDA-jit kernel.
MPI for Python provides experimental support for GPU-aware MPI. This feature requires:
- mpi4py is built against a GPU-aware MPI library.
- The Python GPU arrays are compliant with either of the protocols.
See the Tutorial section for further information. Note that whether an MPI call can work with GPU arrays depends on the underlying MPI implementation, not on mpi4py. This support is currently experimental and subject to change in the future.
Dynamic Process Management
In the context of the MPI-1 specification, a parallel application is static; that is, no processes can be added to or deleted from a running application after it has been started. Fortunately, this limitation was addressed in MPI-2. The new specification added a process management model providing a basic interface between an application and external resources and process managers.
This MPI-2 extension can be really useful, especially for sequential applications built on top of parallel modules, or parallel applications with a client/server model. The MPI-2 process model provides a mechanism to create new processes and establish communication between them and the existing MPI application. It also provides mechanisms to establish communication between two existing MPI applications, even when one did not start the other.
In MPI for Python, new independent process groups can be created by calling the Intracomm.Spawn method within an intracommunicator. This call returns a new intercommunicator (i.e., an Intercomm instance) at the parent process group. The child process group can retrieve the matching intercommunicator by calling the Comm.Get_parent class method. At each side, the new intercommunicator can be used to perform point-to-point and collective communications between the parent and child groups of processes.
Alternatively, disjoint groups of processes can establish communication using a client/server approach. Any server application must first call the Open_port function to open a port and the Publish_name function to publish a provided service, and next call the Intracomm.Accept method. Any client application can first find a published service by calling the Lookup_name function, which returns the port where a server can be contacted; and next call the Intracomm.Connect method. Both the Intracomm.Accept and Intracomm.Connect methods return an Intercomm instance. When the connection between client/server processes is no longer needed, all of them must cooperatively call the Comm.Disconnect method. Additionally, server applications should release resources by calling the Unpublish_name and Close_port functions.
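A hedged sketch of the client/server pattern follows. The service name and the command-line role selection are made up for illustration, and name publishing typically requires an implementation-specific name server (e.g., ompi-server for Open MPI) to be running:

from mpi4py import MPI
import sys

comm = MPI.COMM_WORLD
service = 'ping-service'            # hypothetical service name

if sys.argv[1:] == ['server']:
    port = MPI.Open_port()
    MPI.Publish_name(service, port)
    inter = comm.Accept(port)       # wait for a client to connect
    inter.send('pong', dest=0, tag=0)
    inter.Disconnect()
    MPI.Unpublish_name(service, port)
    MPI.Close_port(port)
else:
    port = MPI.Lookup_name(service)
    inter = comm.Connect(port)
    reply = inter.recv(source=0, tag=0)
    inter.Disconnect()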
One-Sided Communications
One-sided communications (also called Remote Memory Access, RMA) supplement the traditional two-sided, send/receive based MPI communication model with a one-sided, put/get based interface. One-sided communication can take advantage of the capabilities of highly specialized network hardware. Additionally, this extension lowers latency and software overhead in applications written using a shared-memory-like paradigm.
The MPI specification revolves around the use of objects called windows; they intuitively specify regions of a process's memory that have been made available for remote read and write operations. The published memory blocks can be accessed through three functions for put (remote write), get (remote read), and accumulate (remote update or reduction) data items. A much larger number of functions support different synchronization styles; the semantics of these synchronization operations are fairly complex.
In MPI for Python, one-sided operations are available by using instances of the Win class. New window objects are created by calling the Win.Create method at all processes within a communicator and specifying a memory buffer. When a window instance is no longer needed, the Win.Free method should be called.

The three one-sided MPI operations for remote write, read and reduction are available through calling the methods Win.Put, Win.Get, and Win.Accumulate respectively within a Win instance. These methods need an integer rank identifying the target process and an integer offset relative to the base address of the remote memory block being accessed.
The one-sided operations read, write, and reduction are implicitly nonblocking, and must be synchronized by using two primary modes. Active target synchronization requires the origin process to call the Win.Start and Win.Complete methods, and the target process to cooperate by calling the Win.Post and Win.Wait methods. There is also a collective variant provided by the Win.Fence method. Passive target synchronization is more lenient; only the origin process calls the Win.Lock and Win.Unlock methods. Locks are used to protect remote accesses to the locked remote window and to protect local load/store accesses to a locked local window.
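A minimal sketch of active target synchronization with Win.Fence (assuming any number of processes): every process puts its rank into the window memory of its right neighbor:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

mem = np.zeros(1, dtype='i')
win = MPI.Win.Create(mem, disp_unit=MPI.INT.Get_size(), comm=comm)

src = np.array([rank], dtype='i')
win.Fence()                               # open the access/exposure epoch
win.Put(src, target_rank=(rank + 1) % size)
win.Fence()                               # close it; all Puts are now complete
assert mem[0] == (rank - 1) % size
win.Free()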
Parallel Input/Output
The POSIX standard provides a model of a widely portable file system. However, the optimization needed for parallel input/output cannot be achieved with this generic interface. In order to ensure efficiency and scalability, the underlying parallel input/output system must provide a high-level interface supporting partitioning of file data among processes and a collective interface supporting complete transfers of global data structures between process memories and files. Additionally, further efficiencies can be gained via support for asynchronous input/output, strided accesses to data, and control over physical file layout on storage devices. This scenario motivated the inclusion in the MPI-2 standard of a custom interface in order to support more elaborate parallel input/output operations.
The MPI specification for parallel input/output revolves around the use of objects called files. As defined by MPI, files are not just contiguous byte streams. Instead, they are regarded as ordered collections of typed data items. MPI supports sequential or random access to any integral set of these items. Furthermore, files are opened collectively by a group of processes.
The common patterns for accessing a shared file (broadcast, scatter, gather, reduction) are expressed by using user-defined datatypes. Compared to the communication patterns of point-to-point and collective communications, this approach has the advantage of added flexibility and expressiveness. Data access operations (read and write) are defined for different kinds of positioning (using explicit offsets, individual file pointers, and shared file pointers), coordination (non-collective and collective), and synchronism (blocking, nonblocking, and split collective with begin/end phases).
In MPI for Python, all MPI input/output operations are performed through instances of the File class. File handles are obtained by calling the File.Open method at all processes within a communicator and providing a file name and the intended access mode. After use, they must be closed by calling the File.Close method. Files can even be deleted by calling the File.Delete method.

After creation, files are typically associated with a per-process view. The view defines the current set of data visible and accessible from an open file as an ordered set of elementary datatypes. This data layout can be set and queried with the File.Set_view and File.Get_view methods, respectively.
Actual input/output operations are achieved by many methods combining read and write calls with different behavior regarding positioning, coordination, and synchronism. Summing up, MPI for Python provides the thirty (30) methods defined in MPI-2 for reading from or writing to files using explicit offsets or file pointers (individual or shared), in blocking or nonblocking and collective or noncollective versions.
Environmental Management
Initialization and Exit
Module functions Init or Init_thread and Finalize provide MPI initialization and finalization, respectively. Module functions Is_initialized and Is_finalized provide the respective tests for initialization and finalization.
Note
MPI_Init() or MPI_Init_thread() is actually called when you import the MPI module from the mpi4py package, but only if MPI is not already initialized. In such case, calling Init or Init_thread from Python is expected to generate an MPI error, and in turn an exception will be raised.
Note
MPI_Finalize() is registered (by using the Python C/API function Py_AtExit()) to be automatically called when Python processes exit, but only if mpi4py actually initialized MPI. Therefore, there is no need to call Finalize from Python to ensure MPI finalization.
Implementation Information
The MPI version number can be retrieved from the module function Get_version. It returns a two-integer tuple (version, subversion). The Get_processor_name function can be used to access the processor name. The values of predefined attributes attached to the world communicator can be obtained by calling the Comm.Get_attr method within the COMM_WORLD instance.
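A minimal sketch querying this information:

from mpi4py import MPI

version, subversion = MPI.Get_version()
print(f"MPI standard version: {version}.{subversion}")
print(f"Processor name: {MPI.Get_processor_name()}")
# A predefined attribute attached to the world communicator
print("Maximum tag value:", MPI.COMM_WORLD.Get_attr(MPI.TAG_UB))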
Timers
MPI timer functionalities are available through the Wtime and Wtick functions.
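For example:

from mpi4py import MPI

tic = MPI.Wtime()
# ... work to be timed ...
toc = MPI.Wtime()
print(f"elapsed: {toc - tic:.6f} s (timer resolution {MPI.Wtick():.3e} s)")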
Error Handling
In order to facilitate handle sharing with other Python modules interfacing MPI-based parallel libraries, the predefined MPI error handlers ERRORS_RETURN and ERRORS_ARE_FATAL can be assigned to and retrieved from communicators using the Comm.Set_errhandler and Comm.Get_errhandler methods, and similarly for windows and files.

When the predefined error handler ERRORS_RETURN is set, errors returned from MPI calls within Python code will raise an instance of the exception class Exception, which is a subclass of the standard Python exception RuntimeError.
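A minimal sketch (the invalid destination rank is chosen on purpose to trigger an error): with ERRORS_RETURN in effect, the failing MPI call raises an MPI Exception that can be caught like any other Python exception:

from mpi4py import MPI

comm = MPI.COMM_WORLD
comm.Set_errhandler(MPI.ERRORS_RETURN)       # already the mpi4py default, set here for clarity
try:
    comm.Send(b"data", dest=comm.Get_size())  # invalid rank on purpose
except MPI.Exception as err:
    print("MPI error class:", err.Get_error_class())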
Note
After import, mpi4py overrides the default MPI rules governing inheritance of error handlers. The ERRORS_RETURN error handler is set in the predefined COMM_SELF and COMM_WORLD communicators, as well as any new Comm, Win, or File instance created through mpi4py. If you ever pass such handles to C/C++/Fortran library code, it is recommended to set the ERRORS_ARE_FATAL error handler on them to ensure MPI errors do not pass silently.
Warning
Importing with from mpi4py.MPI import * will cause a name clash with the standard Python Exception base class.
Tutorial
Warning
Under construction. Contributions very welcome!
Tip
Rolf Rabenseifner at HLRS developed a comprehensive MPI-3.1/4.0 course with slides and a large set of exercises including solutions. This material is available online for self-study. The slides and exercises show the C, Fortran, and Python (mpi4py) interfaces. For performance reasons, most Python exercises use NumPy arrays and communication routines involving buffer-like objects.
Tip
Victor Eijkhout at TACC authored the book Parallel Programming for Science and Engineering. This book is available online in PDF and HTML formats. The book covers parallel programming with MPI and OpenMP in C/C++ and Fortran, and MPI in Python using mpi4py.
MPI for Python supports convenient, pickle-based communication of generic Python objects as well as fast, near C-speed, direct array data communication of buffer-provider objects (e.g., NumPy arrays).
Communication of generic Python objects
- You have to use methods with all-lowercase names, like Comm.send, Comm.recv, Comm.bcast, Comm.scatter, Comm.gather. An object to be sent is passed as a parameter to the communication call, and the received object is simply the return value.
- The Comm.isend and Comm.irecv methods return Request instances; completion of these methods can be managed using the Request.test and Request.wait methods.
- The Comm.recv and Comm.irecv methods may be passed a buffer object that can be repeatedly used to receive messages, avoiding internal memory allocation. This buffer must be sufficiently large to accommodate the transmitted messages; hence, any buffer passed to Comm.recv or Comm.irecv must be at least as long as the pickled data transmitted to the receiver.
- Collective calls like Comm.scatter, Comm.gather, Comm.allgather, Comm.alltoall expect a single value or a sequence of Comm.size elements at the root or all processes. They return a single value, a list of Comm.size elements, or None.

Note
MPI for Python uses the highest protocol version available in the Python runtime (see the HIGHEST_PROTOCOL constant in the pickle module). The default protocol can be changed at import time by setting the MPI4PY_PICKLE_PROTOCOL environment variable, or at runtime by assigning a different value to the PROTOCOL attribute of the pickle object within the MPI module.

Communication of buffer-like objects
- You have to use method names starting with an upper-case letter, like Comm.Send, Comm.Recv, Comm.Bcast, Comm.Scatter, Comm.Gather.
- In general, buffer arguments to these calls must be explicitly specified by using a 2/3-list/tuple like [data, MPI.DOUBLE], or [data, count, MPI.DOUBLE] (the former one uses the byte-size of data and the extent of the MPI datatype to define count).
- For vector collective communication operations like Comm.Scatterv and Comm.Gatherv, buffer arguments are specified as [data, count, displ, datatype], where count and displ are sequences of integral values.
- Automatic MPI datatype discovery for NumPy/GPU arrays and PEP-3118 buffers is supported, but limited to basic C types (all C/C99-native signed/unsigned integral types and single/double precision real/complex floating types) and availability of matching datatypes in the underlying MPI implementation. In this case, the buffer-provider object can be passed directly as a buffer argument, and the count and MPI datatype will be inferred.
- If mpi4py is built against a GPU-aware MPI implementation, GPU arrays can be passed to upper-case methods as long as they have either the __dlpack__ and __dlpack_device__ methods or the __cuda_array_interface__ attribute that are compliant with the respective standard specifications. Moreover, only C-contiguous or Fortran-contiguous GPU arrays are supported. It is important to note that GPU buffers must be fully ready before any MPI routines operate on them to avoid race conditions. This can be ensured by using the synchronization API of your array library. mpi4py does not have access to any GPU-specific functionality and thus cannot perform this operation automatically for users.
Running Python scripts with MPI
Most MPI programs can be run with the command mpiexec. In practice, running Python programs looks like:
$ mpiexec -n 4 python script.py
to run the program with 4 processes.
Point-to-Point Communication
Python objects (pickle under the hood):

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {'a': 7, 'b': 3.14}
    comm.send(data, dest=1, tag=11)
elif rank == 1:
    data = comm.recv(source=0, tag=11)
Python objects with non-blocking communication:
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {'a': 7, 'b': 3.14}
    req = comm.isend(data, dest=1, tag=11)
    req.wait()
elif rank == 1:
    req = comm.irecv(source=0, tag=11)
    data = req.wait()
NumPy arrays (the fast way!):
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# passing MPI datatypes explicitly
if rank == 0:
    data = numpy.arange(1000, dtype='i')
    comm.Send([data, MPI.INT], dest=1, tag=77)
elif rank == 1:
    data = numpy.empty(1000, dtype='i')
    comm.Recv([data, MPI.INT], source=0, tag=77)

# automatic MPI datatype discovery
if rank == 0:
    data = numpy.arange(100, dtype=numpy.float64)
    comm.Send(data, dest=1, tag=13)
elif rank == 1:
    data = numpy.empty(100, dtype=numpy.float64)
    comm.Recv(data, source=0, tag=13)
Collective Communication
Broadcasting a Python dictionary:
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {'key1' : [7, 2.72, 2+3j],
            'key2' : ( 'abc', 'xyz')}
else:
    data = None
data = comm.bcast(data, root=0)
Scattering Python objects:
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

if rank == 0:
    data = [(i+1)**2 for i in range(size)]
else:
    data = None
data = comm.scatter(data, root=0)
assert data == (rank+1)**2
Gathering Python objects:
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

data = (rank+1)**2
data = comm.gather(data, root=0)
if rank == 0:
    for i in range(size):
        assert data[i] == (i+1)**2
else:
    assert data is None
Broadcasting a NumPy array:
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = np.arange(100, dtype='i')
else:
    data = np.empty(100, dtype='i')
comm.Bcast(data, root=0)
for i in range(100):
    assert data[i] == i
Scattering NumPy arrays:
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

sendbuf = None
if rank == 0:
    sendbuf = np.empty([size, 100], dtype='i')
    sendbuf.T[:,:] = range(size)
recvbuf = np.empty(100, dtype='i')
comm.Scatter(sendbuf, recvbuf, root=0)
assert np.allclose(recvbuf, rank)
Gathering NumPy arrays:
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

sendbuf = np.zeros(100, dtype='i') + rank
recvbuf = None
if rank == 0:
    recvbuf = np.empty([size, 100], dtype='i')
comm.Gather(sendbuf, recvbuf, root=0)
if rank == 0:
    for i in range(size):
        assert np.allclose(recvbuf[i,:], i)
Parallel matrix-vector product:
from mpi4py import MPI
import numpy

def matvec(comm, A, x):
    m = A.shape[0]  # local rows
    p = comm.Get_size()
    xg = numpy.zeros(m*p, dtype='d')
    comm.Allgather([x, MPI.DOUBLE], [xg, MPI.DOUBLE])
    y = numpy.dot(A, xg)
    return y
MPI-IO
Collective I/O with NumPy arrays:
from mpi4py import MPI
import numpy as np

amode = MPI.MODE_WRONLY|MPI.MODE_CREATE
comm = MPI.COMM_WORLD
fh = MPI.File.Open(comm, "./datafile.contig", amode)

buffer = np.empty(10, dtype=np.intc)  # np.int is no longer a valid NumPy dtype
buffer[:] = comm.Get_rank()

offset = comm.Get_rank()*buffer.nbytes
fh.Write_at_all(offset, buffer)

fh.Close()
Non-contiguous Collective I/O with NumPy arrays and datatypes:
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

amode = MPI.MODE_WRONLY|MPI.MODE_CREATE
fh = MPI.File.Open(comm, "./datafile.noncontig", amode)

item_count = 10

buffer = np.empty(item_count, dtype='i')
buffer[:] = rank

filetype = MPI.INT.Create_vector(item_count, 1, size)
filetype.Commit()

displacement = MPI.INT.Get_size()*rank
fh.Set_view(displacement, filetype=filetype)

fh.Write_all(buffer)
filetype.Free()
fh.Close()
Dynamic Process Management
Compute Pi - Master (or parent, or client) side:
#!/usr/bin/env python
from mpi4py import MPI
import numpy
import sys

comm = MPI.COMM_SELF.Spawn(sys.executable,
                           args=['cpi.py'],
                           maxprocs=5)

N = numpy.array(100, 'i')
comm.Bcast([N, MPI.INT], root=MPI.ROOT)
PI = numpy.array(0.0, 'd')
comm.Reduce(None, [PI, MPI.DOUBLE],
            op=MPI.SUM, root=MPI.ROOT)
print(PI)

comm.Disconnect()
Compute Pi - Worker (or child, or server) side:
#!/usr/bin/env python
from mpi4py import MPI
import numpy

comm = MPI.Comm.Get_parent()
size = comm.Get_size()
rank = comm.Get_rank()

N = numpy.array(0, dtype='i')
comm.Bcast([N, MPI.INT], root=0)
h = 1.0 / N
s = 0.0
for i in range(rank, N, size):
    x = h * (i + 0.5)
    s += 4.0 / (1.0 + x**2)
PI = numpy.array(s * h, dtype='d')
comm.Reduce([PI, MPI.DOUBLE], None,
            op=MPI.SUM, root=0)

comm.Disconnect()
CUDA-aware MPI + Python GPU arrays
Reduce-to-all CuPy arrays:
from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

sendbuf = cp.arange(10, dtype='i')
recvbuf = cp.empty_like(sendbuf)
assert hasattr(sendbuf, '__cuda_array_interface__')
assert hasattr(recvbuf, '__cuda_array_interface__')
cp.cuda.get_current_stream().synchronize()
comm.Allreduce(sendbuf, recvbuf)

assert cp.allclose(recvbuf, sendbuf*size)
One-Sided Communications
Read from (write to) the entire RMA window:
import numpy as np
from mpi4py import MPI
from mpi4py.util import dtlib

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

datatype = MPI.FLOAT
np_dtype = dtlib.to_numpy_dtype(datatype)
itemsize = datatype.Get_size()

N = 10
win_size = N * itemsize if rank == 0 else 0
win = MPI.Win.Allocate(win_size, comm=comm)

buf = np.empty(N, dtype=np_dtype)
if rank == 0:
    buf.fill(42)
    win.Lock(rank=0)
    win.Put(buf, target_rank=0)
    win.Unlock(rank=0)
    comm.Barrier()
else:
    comm.Barrier()
    win.Lock(rank=0)
    win.Get(buf, target_rank=0)
    win.Unlock(rank=0)
    assert np.all(buf == 42)
Accessing a part of the RMA window using the target argument, which is defined as (offset, count, datatype):

import numpy as np
from mpi4py import MPI
from mpi4py.util import dtlib

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

datatype = MPI.FLOAT
np_dtype = dtlib.to_numpy_dtype(datatype)
itemsize = datatype.Get_size()

N = comm.Get_size() + 1
win_size = N * itemsize if rank == 0 else 0
win = MPI.Win.Allocate(
    size=win_size,
    disp_unit=itemsize,
    comm=comm,
)
if rank == 0:
    mem = np.frombuffer(win, dtype=np_dtype)
    mem[:] = np.arange(len(mem), dtype=np_dtype)
comm.Barrier()

buf = np.zeros(3, dtype=np_dtype)
target = (rank, 2, datatype)
win.Lock(rank=0)
win.Get(buf, target_rank=0, target=target)
win.Unlock(rank=0)
assert np.all(buf == [rank, rank+1, 0])
Wrapping with SWIG
C source:
/* file: helloworld.c */
#include <stdio.h>

void sayhello(MPI_Comm comm)
{
  int size, rank;
  MPI_Comm_size(comm, &size);
  MPI_Comm_rank(comm, &rank);
  printf("Hello, World! "
         "I am process %d of %d.\n",
         rank, size);
}
SWIG interface file:
// file: helloworld.i
%module helloworld
%{
#include <mpi.h>
#include "helloworld.c"
%}

%include mpi4py/mpi4py.i
%mpi4py_typemap(Comm, MPI_Comm);

void sayhello(MPI_Comm comm);
Try it in the Python prompt:
>>> from mpi4py import MPI
>>> import helloworld
>>> helloworld.sayhello(MPI.COMM_WORLD)
Hello, World! I am process 0 of 1.
Wrapping with F2Py
Fortran 90 source:
! file: helloworld.f90
subroutine sayhello(comm)
  use mpi
  implicit none
  integer :: comm, rank, size, ierr
  call MPI_Comm_size(comm, size, ierr)
  call MPI_Comm_rank(comm, rank, ierr)
  print *, 'Hello, World! I am process ', rank, ' of ', size, '.'
end subroutine sayhello
Compiling the example using f2py:
$ f2py -c --f90exec=mpif90 helloworld.f90 -m helloworld
Try it in the Python prompt:
>>> from mpi4py import MPI
>>> import helloworld
>>> fcomm = MPI.COMM_WORLD.py2f()
>>> helloworld.sayhello(fcomm)
Hello, World! I am process 0 of 1.
mpi4py
This is the MPI for Python package.
The Message Passing Interface (MPI) is a standardized and portable message-passing system designed to function on a wide variety of parallel computers. The MPI standard defines the syntax and semantics of library routines and allows users to write portable programs in the main scientific programming languages (Fortran, C, or C++). Since its release, the MPI specification has become the leading standard for message-passing libraries for parallel computers.
MPI for Python provides MPI bindings for the Python programming language, allowing any Python program to exploit multiple processors. This package builds on the MPI specification and provides an object-oriented interface which closely follows the MPI-2 C++ bindings.
Runtime configuration options
- mpi4py.rc
This object has attributes exposing runtime configuration options that become effective at import time of the MPI module.
Attributes Summary
- initialize: Automatic MPI initialization at import
- threads: Request initialization with thread support
- thread_level: Level of thread support to request
- finalize: Automatic MPI finalization at exit
- fast_reduce: Use tree-based reductions for objects
- recv_mprobe: Use matched probes to receive objects
- errors: Error handling policy
Attributes Documentation
- mpi4py.rc.initialize
Automatic MPI initialization at import.
- mpi4py.rc.threads
Request initialization with thread support.
- mpi4py.rc.thread_level
Level of thread support to request.
- Type: str
- Default: "multiple"
- Choices: "multiple", "serialized", "funneled", "single"
- mpi4py.rc.finalize
Automatic MPI finalization at exit.
- mpi4py.rc.fast_reduce
Use tree-based reductions for objects.
- mpi4py.rc.recv_mprobe
Use matched probes to receive objects.
- mpi4py.rc.errors
Error handling policy.
- Type: str
- Default: "exception"
- Choices: "exception", "default", "fatal"
Example
MPI for Python features automatic initialization and finalization of the MPI execution environment. By using the mpi4py.rc object, MPI initialization and finalization can be handled programmatically:
import mpi4py
mpi4py.rc.initialize = False # do not initialize MPI automatically
mpi4py.rc.finalize = False # do not finalize MPI automatically
from mpi4py import MPI # import the 'MPI' module
MPI.Init() # manual initialization of the MPI environment
... # your finest code here ...
MPI.Finalize() # manual finalization of the MPI environment
Environment variables
The following environment variables override the corresponding attributes of the mpi4py.rc and MPI.pickle objects at import time of the MPI module.
Note
For variables of boolean type, accepted values are 0 and 1 (interpreted as False and True, respectively), and strings specifying a YAML boolean value (case-insensitive).
- MPI4PY_RC_INITIALIZE
Whether to automatically initialize MPI at import time of the mpi4py.MPI module.
New in version 3.1.0.
- MPI4PY_RC_FINALIZE
Whether to automatically finalize MPI at exit time of the Python process.
New in version 3.1.0.
- MPI4PY_RC_THREADS
Whether to initialize MPI with thread support.
New in version 3.1.0.
- MPI4PY_RC_THREAD_LEVEL
- Default: "multiple"
- Choices: "single", "funneled", "serialized", "multiple"
The level of required thread support.
New in version 3.1.0.
- MPI4PY_RC_FAST_REDUCE
Whether to use tree-based reductions for objects.
New in version 3.1.0.
- MPI4PY_RC_RECV_MPROBE
Whether to use matched probes to receive objects.
- MPI4PY_RC_ERRORS
- Default: "exception"
- Choices: "exception", "default", "fatal"
Controls the default MPI error handling policy.
New in version 3.1.0.
- MPI4PY_PICKLE_PROTOCOL
- Type: int
- Default: HIGHEST_PROTOCOL
Controls the default pickle protocol to use when communicating Python objects.
See also: the PROTOCOL attribute of the MPI.pickle object within the MPI module.
New in version 3.1.0.
- MPI4PY_PICKLE_THRESHOLD
- Type: int
- Default: 262144
Controls the default buffer size threshold for switching from in-band to out-of-band buffer handling when using pickle protocol version 5 or higher.
See also: module mpi4py.util.pkl5.
New in version 3.1.2.
Miscellaneous functions
- mpi4py.profile(name, *, path=None, logfile=None)
Support for the MPI profiling interface.
- mpi4py.get_include()
Return the directory in the package that contains header files.
Extension modules that need to compile against mpi4py should use this function to locate the appropriate include directory. Using Python distutils (or perhaps NumPy distutils):
import mpi4py
Extension('extension_name', ...,
          include_dirs=[..., mpi4py.get_include()])
- Return type: str
mpi4py.MPI
Classes
Ancillary
- Datatype: Datatype object
- Status: Status object
- Request: Request handle
- Prequest: Persistent request handle
- Grequest: Generalized request handle
- Op: Operation object
- Group: Group of processes
- Info: Info object
Communication
- Comm: Communicator
- Intracomm: Intracommunicator
- Topocomm: Topology intracommunicator
- Cartcomm: Cartesian topology intracommunicator
- Graphcomm: General graph topology intracommunicator
- Distgraphcomm: Distributed graph topology intracommunicator
- Intercomm: Intercommunicator
- Message: Matched message handle
One-sided operations
- Win: Window handle
Input/Output
- File: File handle
Error handling
- Errhandler: Error handler
- Exception: Exception class
Auxiliary
- Pickle: Pickle/unpickle Python objects
- memory: Memory buffer
Functions
Version inquiry
- Get_version: Obtain the version number of the MPI standard supported by the implementation as a tuple
- Get_library_version: Obtain the version string of the MPI library
Initialization and finalization
- Init: Initialize the MPI execution environment
- Init_thread: Initialize the MPI execution environment
- Finalize: Terminate the MPI execution environment
- Is_initialized: Indicates whether Init has been called
- Is_finalized: Indicates whether Finalize has completed
- Query_thread: Return the level of thread support provided by the MPI library
- Is_thread_main: Indicate whether this thread called Init or Init_thread
Memory allocation
- Alloc_mem: Allocate memory for message passing and RMA
- Free_mem: Free memory allocated with Alloc_mem
Address manipulation
- Get_address: Get the address of a location in memory
- Aint_add: Return the sum of base address and displacement
- Aint_diff: Return the difference between absolute addresses
Timer
- Wtick: Return the resolution of Wtime
- Wtime: Return an elapsed time on the calling processor
Error handling
- Get_error_class: Convert an error code into an error class
- Get_error_string: Return the error string for a given error class or error code
- Add_error_class: Add an error class to the known error classes
- Add_error_code: Add an error code to an error class
- Add_error_string: Associate an error string with an error class or error code
Dynamic process management
- Open_port: Return an address that can be used to establish connections between groups of MPI processes
- Close_port: Close a port
- Publish_name: Publish a service name
- Unpublish_name: Unpublish a service name
- Lookup_name: Lookup a port name given a service name
Miscellanea
- Attach_buffer: Attach a user-provided buffer for sending in buffered mode
- Detach_buffer: Remove an existing attached buffer
- Compute_dims: Return a balanced distribution of processes per coordinate direction
- Get_processor_name: Obtain the name of the calling processor
- Register_datarep: Register user-defined data representations
- Pcontrol: Control profiling
Utilities
- get_vendor: Information about the underlying MPI implementation
mpi4py.futures
New in version 3.0.0.
This package provides a high-level interface for asynchronously executing callables on a pool of worker processes using MPI for inter-process communication.
concurrent.futures
The mpi4py.futures package is based on concurrent.futures from the Python standard library. More precisely, mpi4py.futures provides the MPIPoolExecutor class as a concrete implementation of the abstract class Executor. The submit() interface schedules a callable to be executed asynchronously and returns a Future object representing the execution of the callable. Future instances can be queried for the call result or exception. Sets of Future instances can be passed to the wait() and as_completed() functions.
Note
The concurrent.futures package was introduced in Python 3.2. A backport targeting Python 2.7 is available on PyPI. The mpi4py.futures package uses concurrent.futures if available, either from the Python 3 standard library or the Python 2.7 backport if installed. Otherwise, mpi4py.futures uses a bundled copy of core functionality backported from Python 3.5 to work with Python 2.7.
See also
- Module concurrent.futures
Documentation of the concurrent.futures standard module.
MPIPoolExecutor
The MPIPoolExecutor class uses a pool of MPI processes to execute calls asynchronously. By performing computations in separate processes, it allows side-stepping the global interpreter lock, but also means that only picklable objects can be executed and returned. The __main__ module must be importable by worker processes, thus MPIPoolExecutor instances may not work in the interactive interpreter.
MPIPoolExecutor takes advantage of the dynamic process management features introduced in the MPI-2 standard. In particular, the MPI.Intracomm.Spawn method of MPI.COMM_SELF is used in the master (or parent) process to spawn new worker (or child) processes running a Python interpreter. The master process uses a separate thread (one for each MPIPoolExecutor instance) to communicate back and forth with the workers. The worker processes serve the execution of tasks in the main (and only) thread until they are signaled for completion.
Note
The worker processes must import the main script in order to unpickle any callable defined in the __main__ module and submitted from the master process. Furthermore, the callables may need access to other global variables. At the worker processes, mpi4py.futures executes the main script code (using the runpy module) under the __worker__ namespace to define the __main__ module. The __main__ and __worker__ modules are added to sys.modules (both at the master and worker processes) to ensure proper pickling and unpickling.
Warning
During the initial import phase at the workers, the main script cannot create and use new MPIPoolExecutor instances. Otherwise, each worker would attempt to spawn a new pool of workers, leading to infinite recursion. mpi4py.futures detects such recursive attempts to spawn new workers and aborts the MPI execution environment. As the main script code is run under the __worker__ namespace, the easiest way to avoid spawn recursion is using the idiom if __name__ == '__main__': ... in the main script, as in the sketch below.
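A minimal sketch of this idiom (the square function is made up for illustration): executor creation is guarded so that worker processes importing the script do not attempt to spawn workers themselves:

from mpi4py.futures import MPIPoolExecutor

def square(x):
    return x * x

if __name__ == '__main__':
    with MPIPoolExecutor(max_workers=4) as executor:
        results = list(executor.map(square, range(8)))
        print(results)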
- class mpi4py.futures.MPIPoolExecutor(max_workers=None, initializer=None, initargs=(), **kwargs)
An Executor subclass that executes calls asynchronously using a pool of at most max_workers processes. If max_workers is None or not given, its value is determined from the MPI4PY_FUTURES_MAX_WORKERS environment variable if set, or the MPI universe size if set; otherwise a single worker process is spawned. If max_workers is lower than or equal to 0, then a ValueError will be raised.
initializer is an optional callable that is called at the start of each worker process before executing any tasks; initargs is a tuple of arguments passed to the initializer. If initializer raises an exception, all pending tasks and any attempt to submit new tasks to the pool will raise a BrokenExecutor exception.
Other parameters:
- python_exe: Path to the Python interpreter executable used to spawn worker processes, otherwise sys.executable is used.
- python_args: list or iterable with additional command line flags to pass to the Python executable. Command line flags determined from inspection of sys.flags, sys.warnoptions and sys._xoptions are passed unconditionally.
- mpi_info: dict or iterable yielding (key, value) pairs. These (key, value) pairs are passed (through an MPI.Info object) to the MPI.Intracomm.Spawn call used to spawn worker processes. This mechanism allows telling the MPI runtime system where and how to start the processes. Check the documentation of the backend MPI implementation about the set of keys it interprets and the corresponding format for values.
- globals: dict or iterable yielding (name, value) pairs to initialize the main module namespace in worker processes.
- main: If set to False, do not import the __main__ module in worker processes. Setting main to False prevents worker processes from accessing definitions in the parent __main__ namespace.
- path: list or iterable with paths to append to sys.path in worker processes to extend the module search path.
- wdir: Path to set the current working directory in worker processes using os.chdir(). The initial working directory is set by the MPI implementation. Quality MPI implementations should honor a wdir info key passed through mpi_info, although such feature is not mandatory.
- env: dict or iterable yielding (name, value) pairs with environment variables to update os.environ in worker processes. The initial environment is set by the MPI implementation. MPI implementations may allow setting the initial environment through mpi_info, however such feature is not required nor recommended by the MPI standard.
- submit(func, *args, **kwargs)
Schedule the callable, func, to be executed as func(*args, **kwargs) and return a Future object representing the execution of the callable.

executor = MPIPoolExecutor(max_workers=1)
future = executor.submit(pow, 321, 1234)
print(future.result())
- map(func, *iterables, timeout=None, chunksize=1, **kwargs)
Equivalent to map(func, *iterables) except func is executed asynchronously and several calls to func may be made concurrently, out-of-order, in separate processes. The returned iterator raises a TimeoutError if __next__() is called and the result isn't available after timeout seconds from the original call to map(). timeout can be an int or a float. If timeout is not specified or None, there is no limit to the wait time. If a call raises an exception, then that exception will be raised when its value is retrieved from the iterator. This method chops iterables into a number of chunks which it submits to the pool as separate tasks. The (approximate) size of these chunks can be specified by setting chunksize to a positive integer. For very long iterables, using a large value for chunksize can significantly improve performance compared to the default size of one. By default, the returned iterator yields results in-order, waiting for successive tasks to complete. This behavior can be changed by passing the keyword argument unordered as True; then the result iterator will yield a result as soon as any of the tasks complete.

executor = MPIPoolExecutor(max_workers=3)
for result in executor.map(pow, [2]*32, range(32)):
    print(result)
- starmap(func, iterable, timeout=None, chunksize=1, **kwargs)
Equivalent to itertools.starmap(func, iterable). Used instead of map() when argument parameters are already grouped in tuples from a single iterable (the data has been "pre-zipped"). map(func, *iterable) is equivalent to starmap(func, zip(*iterable)).

executor = MPIPoolExecutor(max_workers=3)
iterable = ((2, n) for n in range(32))
for result in executor.starmap(pow, iterable):
    print(result)
- shutdown(wait=True, cancel_futures=False)
Signal the executor that it should free any resources that it is using when the currently pending futures are done executing. Calls to submit() and map() made after shutdown() will raise RuntimeError.
If wait is True then this method will not return until all the pending futures are done executing and the resources associated with the executor have been freed. If wait is False then this method will return immediately and the resources associated with the executor will be freed when all pending futures are done executing. Regardless of the value of wait, the entire Python program will not exit until all pending futures are done executing.
If cancel_futures is True, this method will cancel all pending futures that the executor has not started running. Any futures that are completed or running won't be cancelled, regardless of the value of cancel_futures.
You can avoid having to call this method explicitly if you use the with statement, which will shutdown the executor instance (waiting as if shutdown() were called with wait set to True).

import time
with MPIPoolExecutor(max_workers=1) as executor:
    future = executor.submit(time.sleep, 2)
assert future.done()
- bootup(wait=True)
Signal the executor that it should allocate eagerly any required resources (in particular, MPI worker processes). If wait is True, then bootup() will not return until the executor resources are ready to process submissions. Resources are automatically allocated in the first call to submit(), thus calling bootup() explicitly is seldom needed.
- MPI4PY_FUTURES_MAX_WORKERS
If the max_workers parameter to MPIPoolExecutor is None or not given, the MPI4PY_FUTURES_MAX_WORKERS environment variable provides a fallback value for the maximum number of MPI worker processes to spawn.
Note
As the master process uses a separate thread to perform MPI communication with the workers, the backend MPI implementation should provide support for MPI.THREAD_MULTIPLE. However, some popular MPI implementations do not yet support concurrent MPI calls from multiple threads. Additionally, users may decide to initialize MPI with a lower level of thread support. If the level of thread support in the backend MPI is less than MPI.THREAD_MULTIPLE, mpi4py.futures will use a global lock to serialize MPI calls. If the level of thread support is less than MPI.THREAD_SERIALIZED, mpi4py.futures will emit a RuntimeWarning.
Warning
If the level of thread support in the backend MPI is less than MPI.THREAD_SERIALIZED (i.e., it is either MPI.THREAD_SINGLE or MPI.THREAD_FUNNELED), in theory mpi4py.futures cannot be used. Rather than raising an exception, mpi4py.futures emits a warning and takes a "cross-fingers" attitude to continue execution in the hope that serializing MPI calls with a global lock will actually work.
MPICommExecutor
Legacy MPI-1 implementations (as well as some vendor MPI-2 implementations) do not support the dynamic process management features introduced in the MPI-2 standard. Additionally, job schedulers and batch systems in supercomputing facilities may pose additional complications to applications using the MPI_Comm_spawn() routine.
With these issues in mind, mpi4py.futures supports an additional, more traditional, SPMD-like usage pattern requiring MPI-1 calls only. Python applications are started the usual way, e.g., using the mpiexec command. Python code should make a collective call to the MPICommExecutor context manager to partition the set of MPI processes within an MPI communicator into one master process and many worker processes. The master process gets access to an MPIPoolExecutor instance to submit tasks. Meanwhile, the worker processes follow a different execution path and team up to execute the tasks submitted from the master.
Besides alleviating the lack of dynamic process management features in legacy MPI-1 or partial MPI-2 implementations, the MPICommExecutor context manager may be useful in classic MPI-based Python applications willing to take advantage of the simple, task-based, master/worker approach available in the mpi4py.futures package.
- class mpi4py.futures.MPICommExecutor(comm=None, root=0)
Context manager for MPIPoolExecutor. This context manager splits an MPI (intra)communicator comm (defaults to MPI.COMM_WORLD if not provided or None) in two disjoint sets: a single master process (with rank root in comm) and the remaining worker processes. These sets are then connected through an intercommunicator. The target of the with statement is assigned either an MPIPoolExecutor instance (at the master) or None (at the workers).

from mpi4py import MPI
from mpi4py.futures import MPICommExecutor

with MPICommExecutor(MPI.COMM_WORLD, root=0) as executor:
    if executor is not None:
        future = executor.submit(abs, -42)
        assert future.result() == 42
        answer = set(executor.map(abs, [-42, 42]))
        assert answer == {42}
Warning
If MPICommExecutor is passed a communicator of size one (e.g., MPI.COMM_SELF), then the executor instance assigned to the target of the with statement will execute all submitted tasks in a single worker thread, thus ensuring that task execution still progresses asynchronously. However, the GIL will prevent the main and worker threads from running concurrently in multicore processors. Moreover, the thread context switching may noticeably harm the performance of CPU-bound tasks. In the case of I/O-bound tasks the GIL is not usually an issue; however, as a single worker thread is used, it progresses one task at a time. We advise against using MPICommExecutor with communicators of size one and suggest refactoring your code to use a ThreadPoolExecutor instead.
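For single-process runs, a sketch of the suggested alternative using the standard library executor (the worker count is chosen arbitrarily):
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=4) as executor:
    future = executor.submit(abs, -42)
    assert future.result() == 42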
Command line
Recalling the issues related to the lack of support for dynamic process management features in MPI implementations, mpi4py.futures supports an alternative usage pattern where Python code (either from scripts, modules, or zip files) is run under command line control of the mpi4py.futures package by passing -m mpi4py.futures to the python executable. The mpi4py.futures invocation should be passed a pyfile path to a script (or a zipfile/directory containing a __main__.py file). Additionally, mpi4py.futures accepts -m mod to execute a module named mod, -c cmd to execute a command string cmd, or even - to read commands from standard input (sys.stdin).
Summarizing, mpi4py.futures
can be invoked in the following ways:
$ mpiexec -n numprocs python -m mpi4py.futures pyfile [arg] ...
$ mpiexec -n numprocs python -m mpi4py.futures -m mod [arg] ...
$ mpiexec -n numprocs python -m mpi4py.futures -c cmd [arg] ...
$ mpiexec -n numprocs python -m mpi4py.futures - [arg] ...
Before starting the main script execution, mpi4py.futures splits MPI.COMM_WORLD into one master (the process with rank 0 in MPI.COMM_WORLD) and numprocs - 1 workers and connects them through an MPI intercommunicator. Afterwards, the master process proceeds with the execution of the user script code, which eventually creates MPIPoolExecutor instances to submit tasks. Meanwhile, the worker processes follow a different execution path to serve the master. Upon successful termination of the main script at the master, the entire MPI execution environment exits gracefully. In case of any unhandled exception in the main script, the master process calls MPI.COMM_WORLD.Abort(1) to prevent deadlocks and force termination of the entire MPI execution environment.
Warning
Running scripts under command line control of mpi4py.futures is quite similar to executing a single-process application that spawns additional workers as required. However, there is a very important difference users should be aware of. All MPIPoolExecutor instances created at the master will share the pool of workers. Tasks submitted at the master from many different executors will be scheduled for execution in random order as soon as a worker is idle. Any executor can easily starve all the workers (e.g., by calling MPIPoolExecutor.map() with long iterables). If that ever happens, submissions from other executors will not be serviced until free workers become available.
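A short sketch of the scenario described above, assuming the script runs under python -m mpi4py.futures; the task functions and iterable sizes are arbitrary illustrations:
from mpi4py.futures import MPIPoolExecutor

executor_a = MPIPoolExecutor()  # both executors share the same worker pool
executor_b = MPIPoolExecutor()
# A long iterable submitted through executor_a may keep every worker busy,
# delaying the task submitted through executor_b.
squares = executor_a.map(pow, range(10000), [2] * 10000)
future = executor_b.submit(abs, -1)
print(future.result(), sum(squares))
executor_a.shutdown()
executor_b.shutdown()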
See also
- Command line
Documentation on Python command line interface.
Examples
The following julia.py script computes the Julia set and dumps an image to disk in binary PGM format. The code starts by importing MPIPoolExecutor from the mpi4py.futures package. Next, some global constants and functions implement the computation of the Julia set. The computations are protected with the standard if __name__ == '__main__': ... idiom. The image is computed by whole scanlines; all the scanline tasks are submitted at once using the map method. The result iterator yields scanlines in order as the tasks complete. Finally, each scanline is dumped to disk.
julia.py
from mpi4py.futures import MPIPoolExecutor

x0, x1, w = -2.0, +2.0, 640*2
y0, y1, h = -1.5, +1.5, 480*2
dx = (x1 - x0) / w
dy = (y1 - y0) / h

c = complex(0, 0.65)

def julia(x, y):
    z = complex(x, y)
    n = 255
    while abs(z) < 3 and n > 1:
        z = z**2 + c
        n -= 1
    return n

def julia_line(k):
    line = bytearray(w)
    y = y1 - k * dy
    for j in range(w):
        x = x0 + j * dx
        line[j] = julia(x, y)
    return line

if __name__ == '__main__':

    with MPIPoolExecutor() as executor:
        image = executor.map(julia_line, range(h))
        with open('julia.pgm', 'wb') as f:
            f.write(b'P5 %d %d %d\n' % (w, h, 255))
            for line in image:
                f.write(line)
The recommended way to execute the script is by using the mpiexec
command specifying one MPI process (master) and (optional but recommended) the
desired MPI universe size, which determines the number of additional
dynamically spawned processes (workers). The MPI universe size is provided
either by a batch system or set by the user via command-line arguments to
mpiexec or environment variables. Below we provide examples for
MPICH and Open MPI implementations [1]. In all of these examples, the
mpiexec command launches a single master process running the Python
interpreter and executing the main script. When required, mpi4py.futures
spawns the pool of 16 worker processes. The master submits tasks to the workers
and waits for the results. The workers receive incoming tasks, execute them,
and send back the results to the master.
When using the MPICH implementation or its derivatives based on the Hydra process manager, users can set the MPI universe size via the -usize argument to mpiexec:
$ mpiexec -n 1 -usize 17 python julia.py
or, alternatively, by setting the MPIEXEC_UNIVERSE_SIZE
environment
variable:
$ MPIEXEC_UNIVERSE_SIZE=17 mpiexec -n 1 python julia.py
In the Open MPI implementation, the MPI universe size can be set via the
-host
argument to mpiexec:
$ mpiexec -n 1 -host <hostname>:17 python julia.py
Another way to specify the number of workers is to use the
mpi4py.futures
-specific environment variable
MPI4PY_FUTURES_MAX_WORKERS
:
$ MPI4PY_FUTURES_MAX_WORKERS=16 mpiexec -n 1 python julia.py
Note that in this case, the MPI universe size is ignored.
Alternatively, users may decide to execute the script in a more traditional
way, that is, all the MPI processes are started at once. The user script is run
under command-line control of mpi4py.futures
passing the -m flag to the python executable:
$ mpiexec -n 17 python -m mpi4py.futures julia.py
As explained previously, the 17 processes are partitioned into one master and 16 workers. The master process executes the main script while the workers execute the tasks submitted by the master.
When using an MPI implementation other than MPICH or Open MPI, please check the documentation of the implementation and/or batch system for the ways to specify the desired MPI universe size.
mpi4py.util
New in version 3.1.0.
The mpi4py.util
package collects miscellaneous utilities
within the intersection of Python and MPI.
mpi4py.util.pkl5
New in version 3.1.0.
pickle
protocol 5 (see PEP 574) introduced support for out-of-band
buffers, allowing for more efficient handling of certain object types with
large memory footprints.
MPI for Python uses the traditional in-band handling of buffers. This approach is appropriate for communicating non-buffer Python objects, or buffer-like objects with small memory footprints. For point-to-point communication, in-band buffer handling allows for the communication of a pickled stream with a single MPI message, at the expense of additional CPU and memory overhead in the pickling and unpickling steps.
The mpi4py.util.pkl5 module provides communicator wrapper classes reimplementing pickle-based point-to-point communication methods using pickle protocol 5. Handling out-of-band buffers necessarily involves multiple MPI messages, thus increasing latency and hurting performance in the case of small data. However, for large data, the zero-copy savings of out-of-band buffer handling more than offset the extra latency costs. Additionally, these wrapper methods overcome the infamous 2 GiB message count limit (MPI-1 to MPI-3).
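A minimal sketch of the drop-in wrapper usage (assuming NumPy is installed and the script runs with at least two MPI processes); fuller examples are given below:
import numpy as np
from mpi4py import MPI
from mpi4py.util import pkl5

comm = pkl5.Intracomm(MPI.COMM_WORLD)  # drop-in communicator wrapper
rank = comm.Get_rank()
if rank == 0:
    data = np.ones(1 << 20, dtype='f8')
    comm.send(data, dest=1, tag=0)     # pickle 5: buffers travel out-of-band
elif rank == 1:
    data = comm.recv(source=0, tag=0)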
Note
Support for pickle protocol 5 is available in the pickle
module
within the Python standard library since Python 3.8. Previous Python 3
releases can use the pickle5
backport, which is available on PyPI and can be installed with:
python -m pip install pickle5
- class mpi4py.util.pkl5.Request(request=None)
Request.
Custom request class for nonblocking communications.
Note
Request is not a subclass of mpi4py.MPI.Request.
- Parameters:
request (Iterable[MPI.Request]) –
- Return type:
- Free()
Free a communication request.
- Return type:
None
- cancel()
Cancel a communication request.
- Return type:
None
- get_status(status=None)
Non-destructive test for the completion of a request.
- test(status=None)
Test for the completion of a request.
- wait(status=None)
Wait for a request to complete.
- Parameters:
status (Optional[Status]) –
- Return type:
Any
- classmethod testall(requests, statuses=None)
Test for the completion of all requests.
- Classmethod:
- classmethod waitall(requests, statuses=None)
Wait for all requests to complete.
- Classmethod:
- class mpi4py.util.pkl5.Message(message=None)
Message.
Custom message class for matching probes.
Note
Message is not a subclass of mpi4py.MPI.Message.
- Parameters:
message (Iterable[MPI.Message]) –
- Return type:
- recv(status=None)
Blocking receive of matched message.
- Parameters:
status (Optional[Status]) –
- Return type:
Any
- classmethod probe(comm, source=ANY_SOURCE, tag=ANY_TAG, status=None)
Blocking test for a matched message.
- Classmethod:
- classmethod iprobe(comm, source=ANY_SOURCE, tag=ANY_TAG, status=None)
Nonblocking test for a matched message.
- Classmethod:
- class mpi4py.util.pkl5.Comm
Communicator.
Base communicator wrapper class.
- send(obj, dest, tag=0)
Blocking send in standard mode.
- bsend(obj, dest, tag=0)
Blocking send in buffered mode.
- ssend(obj, dest, tag=0)
Blocking send in synchronous mode.
- isend(obj, dest, tag=0)
Nonblocking send in standard mode.
- ibsend(obj, dest, tag=0)
Nonblocking send in buffered mode.
- issend(obj, dest, tag=0)
Nonblocking send in synchronous mode.
- recv(buf=None, source=ANY_SOURCE, tag=ANY_TAG, status=None)
Blocking receive.
- irecv(buf=None, source=ANY_SOURCE, tag=ANY_TAG)
Nonblocking receive.
Warning
This method cannot be supported reliably and raises
RuntimeError
.
- sendrecv(sendobj, dest, sendtag=0, recvbuf=None, source=ANY_SOURCE, recvtag=ANY_TAG, status=None)
Send and receive.
- mprobe(source=ANY_SOURCE, tag=ANY_TAG, status=None)
Blocking test for a matched message.
- improbe(source=ANY_SOURCE, tag=ANY_TAG, status=None)
Nonblocking test for a matched message.
- class mpi4py.util.pkl5.Intracomm
Intracommunicator.
Intracommunicator wrapper class.
- class mpi4py.util.pkl5.Intercomm
Intercommunicator.
Intercommunicator wrapper class.
Examples
test-pkl5-1.py
import numpy as np
from mpi4py import MPI
from mpi4py.util import pkl5

comm = pkl5.Intracomm(MPI.COMM_WORLD)  # comm wrapper
size = comm.Get_size()
rank = comm.Get_rank()
dst = (rank + 1) % size
src = (rank - 1) % size

sobj = np.full(1024**3, rank, dtype='i4')  # > 4 GiB
sreq = comm.isend(sobj, dst, tag=42)
robj = comm.recv(None, src, tag=42)
sreq.Free()

assert np.min(robj) == src
assert np.max(robj) == src
test-pkl5-2.py
import numpy as np
from mpi4py import MPI
from mpi4py.util import pkl5

comm = pkl5.Intracomm(MPI.COMM_WORLD)  # comm wrapper
size = comm.Get_size()
rank = comm.Get_rank()
dst = (rank + 1) % size
src = (rank - 1) % size

sobj = np.full(1024**3, rank, dtype='i4')  # > 4 GiB
sreq = comm.isend(sobj, dst, tag=42)

status = MPI.Status()
rmsg = comm.mprobe(status=status)
assert status.Get_source() == src
assert status.Get_tag() == 42
rreq = rmsg.irecv()
robj = rreq.wait()

sreq.Free()
assert np.max(robj) == src
assert np.min(robj) == src
mpi4py.util.dtlib
New in version 3.1.0.
The mpi4py.util.dtlib
module provides converter routines between NumPy
and MPI datatypes.
- mpi4py.util.dtlib.from_numpy_dtype(dtype)
Convert NumPy datatype to MPI datatype.
- Parameters:
dtype (numpy.typing.DTypeLike) – NumPy dtype-like object.
- Return type:
Datatype
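A brief sketch of converting NumPy dtypes to MPI datatypes; the structured dtype and the broadcast call are illustrative assumptions, not part of the reference above:
import numpy as np
from mpi4py import MPI
from mpi4py.util.dtlib import from_numpy_dtype

# A basic dtype maps to a predefined MPI datatype.
inttype = from_numpy_dtype(np.dtype('i4'))

# A structured dtype yields a derived MPI datatype; commit it before use
# in communication calls and free it afterwards.
particle = np.dtype([('x', 'f8'), ('y', 'f8'), ('id', 'i8')])
ptype = from_numpy_dtype(particle)
ptype.Commit()
buf = np.zeros(4, dtype=particle)
MPI.COMM_WORLD.Bcast([buf, ptype], root=0)
ptype.Free()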
mpi4py.run
New in version 3.0.0.
At import time, mpi4py initializes the MPI execution environment calling MPI_Init_thread() and installs an exit hook to automatically call MPI_Finalize() just before the Python process terminates. Additionally, mpi4py overrides the default ERRORS_ARE_FATAL error handler in favor of ERRORS_RETURN, which allows translating MPI errors into Python exceptions. These departures from standard MPI behavior may be controversial, but are quite convenient within the highly dynamic Python programming environment. Third-party code using mpi4py can simply from mpi4py import MPI and perform MPI calls without the tedious initialization/finalization handling. MPI errors, once translated automatically to Python exceptions, can be dealt with using the common try … except … finally clauses; unhandled MPI exceptions will print a traceback which helps in locating problems in source code.
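As a hedged illustration of the error-handling behavior described above (the invalid destination rank is a contrived trigger):
from mpi4py import MPI

comm = MPI.COMM_WORLD
try:
    # Sending to a nonexistent rank triggers an MPI error; with the
    # ERRORS_RETURN handler it surfaces as an MPI.Exception.
    comm.send(None, dest=comm.Get_size(), tag=0)
except MPI.Exception as exc:
    print("MPI error:", exc.Get_error_string())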
Unfortunately, the interplay of automatic MPI finalization and unhandled exceptions may lead to deadlocks. In unattended runs, these deadlocks will drain the battery of your laptop, or burn precious allocation hours in your supercomputing facility.
Consider the following snippet of Python code. Assume this code is stored in a standard Python script file and run with mpiexec in two or more processes.
from mpi4py import MPI
assert MPI.COMM_WORLD.Get_size() > 1
rank = MPI.COMM_WORLD.Get_rank()
if rank == 0:
    1/0
    MPI.COMM_WORLD.send(None, dest=1, tag=42)
elif rank == 1:
    MPI.COMM_WORLD.recv(source=0, tag=42)
Process 0 raises a ZeroDivisionError exception before performing a send call to process 1. As the exception is not handled, the Python interpreter running in process 0 will proceed to exit with non-zero status. However, as mpi4py installed a finalizer hook to call MPI_Finalize() before exit, process 0 will block waiting for the other processes to also enter the MPI_Finalize() call. Meanwhile, process 1 will block waiting for a message to arrive from process 0, thus never reaching MPI_Finalize(). The whole MPI execution environment is irremediably in a deadlock state.
To alleviate this issue, mpi4py offers a simple, alternative command line execution mechanism based on using the -m flag and implemented with the runpy module. To use this feature, Python code should be run passing -m mpi4py in the command line invoking the Python interpreter. In case of unhandled exceptions, the finalizer hook will call MPI_Abort() on the MPI_COMM_WORLD communicator, thus effectively aborting the MPI execution environment.
Warning
When a process is forced to abort, resources (e.g. open files) are not
cleaned-up and any registered finalizers (either with the atexit
module, the Python C/API function Py_AtExit()
, or even the C
standard library function atexit()
) will not be executed. Thus,
aborting execution is an extremely impolite way of ensuring process
termination. However, MPI provides no other mechanism to recover from a
deadlock state.
Interface options
The use of -m mpi4py
to execute Python code on the command line resembles
that of the Python interpreter.
mpiexec -n numprocs python -m mpi4py pyfile [arg] ...
mpiexec -n numprocs python -m mpi4py -m mod [arg] ...
mpiexec -n numprocs python -m mpi4py -c cmd [arg] ...
mpiexec -n numprocs python -m mpi4py - [arg] ...
- <pyfile>
Execute the Python code contained in pyfile, which must be a filesystem path referring to either a Python file, a directory containing a __main__.py file, or a zipfile containing a __main__.py file.
- -c <cmd>
Execute the Python code in the cmd string command.
- -
Read commands from standard input (
sys.stdin
).
See also
- Command line
Documentation on Python command line interface.
Reference
Message Passing Interface.
Citation
If MPI for Python has been significant to a project that leads to an academic publication, please acknowledge that fact by citing the project.
L. Dalcin and Y.-L. L. Fang, mpi4py: Status Update After 12 Years of Development, Computing in Science & Engineering, 23(4):47-54, 2021. https://doi.org/10.1109/MCSE.2021.3083216
L. Dalcin, P. Kler, R. Paz, and A. Cosimo, Parallel Distributed Computing using Python, Advances in Water Resources, 34(9):1124-1139, 2011. https://doi.org/10.1016/j.advwatres.2011.04.013
L. Dalcin, R. Paz, M. Storti, and J. D’Elia, MPI for Python: performance improvements and MPI-2 extensions, Journal of Parallel and Distributed Computing, 68(5):655-662, 2008. https://doi.org/10.1016/j.jpdc.2007.09.005
L. Dalcin, R. Paz, and M. Storti, MPI for Python, Journal of Parallel and Distributed Computing, 65(9):1108-1115, 2005. https://doi.org/10.1016/j.jpdc.2005.03.010
Installation
Requirements
You need to have the following software properly installed in order to build MPI for Python:
A working MPI implementation, preferably supporting MPI-3 and built with shared/dynamic libraries.
Note
If you want to build some MPI implementation from sources, check the instructions at Building MPI from sources in the appendix.
Python 2.7, 3.5 or above.
Note
Some MPI-1 implementations do require the actual command line arguments to be passed in
MPI_Init()
. In this case, you will need to use a rebuilt, MPI-enabled, Python interpreter executable. MPI for Python has some support for alleviating you from this task. Check the instructions at MPI-enabled Python interpreter in the appendix.
Using pip
If you already have a working MPI (whether you installed it from sources or used a pre-built package from your favourite GNU/Linux distribution) and the mpicc compiler wrapper is on your search path, you can use pip:
$ python -m pip install mpi4py
Note
If the mpicc compiler wrapper is not on your
search path (or if it has a different name) you can use
env to pass the environment variable MPICC
providing the full path to the MPI compiler wrapper executable:
$ env MPICC=/path/to/mpicc python -m pip install mpi4py
Warning
pip keeps previously built wheel files in its cache for future reuse. If you want to reinstall the mpi4py package using a different or updated MPI implementation, you have to either first remove the cached wheel file with:
$ python -m pip cache remove mpi4py
or ask pip to disable the cache:
$ python -m pip install --no-cache-dir mpi4py
Using distutils
The MPI for Python package is available for download at the project website generously hosted by GitHub. You can use curl or wget to get a release tarball.
Using curl:
$ curl -O https://github.com/mpi4py/mpi4py/releases/download/X.Y.Z/mpi4py-X.Y.Z.tar.gz
Using wget:
$ wget https://github.com/mpi4py/mpi4py/releases/download/X.Y.Z/mpi4py-X.Y.Z.tar.gz
After unpacking the release tarball:
$ tar -zxf mpi4py-X.Y.Z.tar.gz
$ cd mpi4py-X.Y.Z
the package is ready for building.
MPI for Python uses a standard distutils-based build system. However, some distutils commands (like build) have additional options:
- --mpicc=
Lets you specify a special location or name for the mpicc compiler wrapper.
- --mpi=
Lets you pass a section with MPI configuration within a special configuration file.
- --configure
Runs exhaustive tests for checking about missing MPI types, constants, and functions. This option should be passed in order to build MPI for Python against old MPI-1 or MPI-2 implementations, possibly providing a subset of MPI-3.
If you use an MPI implementation providing an mpicc compiler wrapper (e.g., MPICH, Open MPI), it will be used for compilation and linking. This is the preferred and easiest way of building MPI for Python.
If mpicc is located somewhere in your search path, simply run the build command:
$ python setup.py build
If mpicc is not in your search path or the compiler wrapper has a different name, you can run the build command specifying its location:
$ python setup.py build --mpicc=/where/you/have/mpicc
Alternatively, you can provide all the relevant information about your
MPI implementation by editing the file called mpi.cfg
. You can
use the default section [mpi]
or add a new, custom section, for
example [other_mpi]
(see the examples provided in the
mpi.cfg
file as a starting point to write your own section):
[mpi]
include_dirs = /usr/local/mpi/include
libraries = mpi
library_dirs = /usr/local/mpi/lib
runtime_library_dirs = /usr/local/mpi/lib
[other_mpi]
include_dirs = /opt/mpi/include ...
libraries = mpi ...
library_dirs = /opt/mpi/lib ...
runtime_library_dirs = /opt/mpi/lib ...
...
and then run the build command, perhaps specifying your custom configuration section:
$ python setup.py build --mpi=other_mpi
After building, the package is ready for installation.
If you have root privileges (either by logging in as the root user or by using sudo) and you want to install MPI for Python in your system for all users, just do:
$ python setup.py install
The previous steps will install the mpi4py package at the standard location prefix/lib/pythonX.X/site-packages.
If you do not have root privileges or you want to install MPI for Python for your private use, just do:
$ python setup.py install --user
Testing
To quickly test the installation:
$ mpiexec -n 5 python -m mpi4py.bench helloworld
Hello, World! I am process 0 of 5 on localhost.
Hello, World! I am process 1 of 5 on localhost.
Hello, World! I am process 2 of 5 on localhost.
Hello, World! I am process 3 of 5 on localhost.
Hello, World! I am process 4 of 5 on localhost.
If you installed from source, issuing at the command line:
$ mpiexec -n 5 python demo/helloworld.py
or (in the case of ancient MPI-1 implementations):
$ mpirun -np 5 python `pwd`/demo/helloworld.py
will launch a five-process run of the Python interpreter and run the
test script demo/helloworld.py
from the source distribution.
You can also run all the unittest scripts:
$ mpiexec -n 5 python test/runtests.py
or, if you have the nose unit testing framework installed:
$ mpiexec -n 5 nosetests -w test
or, if you have the py.test unit testing framework installed:
$ mpiexec -n 5 py.test test/
Appendix
MPI-enabled Python interpreter
Warning
These days it is no longer required to use the MPI-enabled Python interpreter in most cases, and, therefore, it is not built by default anymore because it is too difficult to reliably build a Python interpreter across different distributions. If you know that you still really need it, see below on how to use the
build_exe
andinstall_exe
commands.
Some MPI-1 implementations (notably, MPICH 1) do require the
actual command line arguments to be passed at the time
MPI_Init()
is called. In this case, you will need to use a
re-built, MPI-enabled, Python interpreter binary executable. A basic
implementation (targeting Python 2.X) of what is required is shown
below:
#include <Python.h>
#include <mpi.h>
int main(int argc, char *argv[])
{
  int status, flag;
  MPI_Init(&argc, &argv);
  status = Py_Main(argc, argv);
  MPI_Finalized(&flag);
  if (!flag) MPI_Finalize();
  return status;
}
The source code above is straightforward; compiling it should also be. However, the linking step is trickier: special flags have to be passed to the linker depending on your platform. In order to alleviate you from such low-level details, MPI for Python provides some pure-distutils based support to build and install an MPI-enabled Python interpreter executable:
$ cd mpi4py-X.X.X
$ python setup.py build_exe [--mpi=<name>|--mpicc=/path/to/mpicc]
$ [sudo] python setup.py install_exe [--install-dir=$HOME/bin]
After the above steps you should have the MPI-enabled interpreter
installed as prefix/bin/pythonX.X-mpi
(or
$HOME/bin/pythonX.X-mpi
). Assuming that
prefix/bin
(or $HOME/bin
) is listed on your
PATH
, you should be able to enter your MPI-enabled Python
interactively, for example:
$ python2.7-mpi
Python 2.7.8 (default, Nov 10 2014, 08:19:18)
[GCC 4.9.2 20141101 (Red Hat 4.9.2-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.executable
'/usr/bin/python2.7-mpi'
>>>
Building MPI from sources
The list below gives quick instructions for building some of the open-source MPI implementations with support for shared/dynamic libraries on POSIX environments.
MPICH
$ tar -zxf mpich-X.X.X.tar.gz
$ cd mpich-X.X.X
$ ./configure --enable-shared --prefix=/usr/local/mpich
$ make
$ make install
Open MPI
$ tar -zxf openmpi-X.X.X.tar.gz
$ cd openmpi-X.X.X
$ ./configure --prefix=/usr/local/openmpi
$ make all
$ make install
MPICH 1
$ tar -zxf mpich-X.X.X.tar.gz
$ cd mpich-X.X.X
$ ./configure --enable-sharedlib --prefix=/usr/local/mpich1
$ make
$ make install
Perhaps you will need to set the LD_LIBRARY_PATH environment variable (using export, setenv, or whatever applies to your system) pointing to the directory containing the MPI libraries. If you get runtime linking errors when running MPI programs, the following lines can be added to the user login shell script (.profile, .bashrc, etc.).
MPICH
MPI_DIR=/usr/local/mpich
export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH
Open MPI
MPI_DIR=/usr/local/openmpi
export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH
MPICH 1
MPI_DIR=/usr/local/mpich1
export LD_LIBRARY_PATH=$MPI_DIR/lib/shared:$LD_LIBRARY_PATH
export MPICH_USE_SHLIB=yes
Warning
MPICH 1 support for dynamic libraries is not completely transparent. Users should set the environment variable MPICH_USE_SHLIB to yes in order to avoid link problems when using the mpicc compiler wrapper.