mpi4py.MPI.Intracomm
- class mpi4py.MPI.Intracomm
Bases: Comm
Intracommunicator
Methods Summary
- Accept(port_name[, info, root])
Accept a request to form a new intercommunicator
- Cart_map(dims[, periods])
Return an optimal placement for the calling process on the physical machine
- Connect(port_name[, info, root])
Make a request to form a new intercommunicator
- Create_cart(dims[, periods, reorder])
Create cartesian communicator
- Create_dist_graph(sources, degrees, destinations)
Create distributed graph communicator
- Create_dist_graph_adjacent(sources, destinations)
Create distributed graph communicator
- Create_from_group(group[, stringtag, info, ...])
Create communicator from group
- Create_graph(index, edges[, reorder])
Create graph communicator
- Create_group(group[, tag])
Create communicator from group
- Create_intercomm(local_leader, peer_comm, ...)
Create intercommunicator
- Exscan(sendbuf, recvbuf[, op])
Exclusive Scan
- Exscan_init(sendbuf, recvbuf[, op, info])
Exclusive Scan
- Graph_map(index, edges)
Return an optimal placement for the calling process on the physical machine
- Iexscan(sendbuf, recvbuf[, op])
Exclusive Scan
- Iscan(sendbuf, recvbuf[, op])
Inclusive Scan
- Scan(sendbuf, recvbuf[, op])
Inclusive Scan
- Scan_init(sendbuf, recvbuf[, op, info])
Inclusive Scan
- Spawn(command[, args, maxprocs, info, root, ...])
Spawn instances of a single MPI application
- Spawn_multiple(command[, args, maxprocs, ...])
Spawn instances of multiple MPI applications
- exscan(sendobj[, op])
Exclusive Scan
- scan(sendobj[, op])
Inclusive Scan
Methods Documentation
- Accept(port_name, info=INFO_NULL, root=0)
Accept a request to form a new intercommunicator
- Cart_map(dims, periods=None)
Return an optimal placement for the calling process on the physical machine
- Connect(port_name, info=INFO_NULL, root=0)
Make a request to form a new intercommunicator
- Create_cart(dims, periods=None, reorder=False)
Create cartesian communicator
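A Cartesian communicator gives each rank grid coordinates. As a sketch of the rank-to-coordinates mapping such a topology maintains (assuming MPI's row-major convention; the helper names below are hypothetical, not mpi4py API):

```python
# Illustration only: the rank <-> coordinate mapping of a Cartesian
# topology, assuming MPI's row-major ordering. Not mpi4py code.

def cart_coords(rank, dims):
    """Row-major coordinates of `rank` in a grid of shape `dims`."""
    coords = []
    for extent in reversed(dims):
        coords.append(rank % extent)
        rank //= extent
    return list(reversed(coords))

def cart_rank(coords, dims):
    """Inverse mapping: row-major rank of `coords` in a grid `dims`."""
    rank = 0
    for c, extent in zip(coords, dims):
        rank = rank * extent + c
    return rank

dims = [2, 3]  # a 2 x 3 process grid (6 ranks)
for r in range(6):
    assert cart_rank(cart_coords(r, dims), dims) == r
print(cart_coords(4, dims))  # rank 4 sits at row 1, column 1
```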
- Create_dist_graph(sources, degrees, destinations, weights=None, info=INFO_NULL, reorder=False)
Create distributed graph communicator
- Create_dist_graph_adjacent(sources, destinations, sourceweights=None, destweights=None, info=INFO_NULL, reorder=False)
Create distributed graph communicator
- classmethod Create_from_group(group, stringtag='org.mpi4py', info=INFO_NULL, errhandler=None)
Create communicator from group
- Create_graph(index, edges, reorder=False)
Create graph communicator
- Create_group(group, tag=0)
Create communicator from group
- Create_intercomm(local_leader, peer_comm, remote_leader, tag=0)
Create intercommunicator
- Exscan(sendbuf, recvbuf, op=SUM)
Exclusive Scan
- Exscan_init(sendbuf, recvbuf, op=SUM, info=INFO_NULL)
Exclusive Scan
- Graph_map(index, edges)
Return an optimal placement for the calling process on the physical machine
- Iexscan(sendbuf, recvbuf, op=SUM)
Exclusive Scan
- Iscan(sendbuf, recvbuf, op=SUM)
Inclusive Scan
- Scan(sendbuf, recvbuf, op=SUM)
Inclusive Scan
- Scan_init(sendbuf, recvbuf, op=SUM, info=INFO_NULL)
Inclusive Scan
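The difference between the inclusive and exclusive variants is which contributions each rank's result covers. A minimal sketch of the semantics for op=SUM, simulated in pure Python without an MPI runtime (the helper names are hypothetical):

```python
# Illustration only: what Scan (inclusive) and Exscan (exclusive)
# compute across ranks, simulated for op=SUM. No MPI runtime needed.

def inclusive_scan(values):
    """Rank i receives the reduction over values[0..i] (like Scan)."""
    out, acc = [], 0
    for v in values:
        acc += v
        out.append(acc)
    return out

def exclusive_scan(values):
    """Rank i receives the reduction over values[0..i-1] (like Exscan).
    The receive buffer on rank 0 is undefined in MPI; None stands in."""
    out, acc = [None], 0
    for v in values[:-1]:
        acc += v
        out.append(acc)
    return out

contributions = [1, 2, 3, 4]          # one value per rank
print(inclusive_scan(contributions))   # [1, 3, 6, 10]
print(exclusive_scan(contributions))   # [None, 1, 3, 6]
```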
- Spawn(command, args=None, maxprocs=1, info=INFO_NULL, root=0, errcodes=None)
Spawn instances of a single MPI application
- Spawn_multiple(command, args=None, maxprocs=None, info=INFO_NULL, root=0, errcodes=None)
Spawn instances of multiple MPI applications
- exscan(sendobj, op=SUM)
Exclusive Scan
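Unlike the uppercase buffer-based Scan/Exscan, the lowercase scan/exscan communicate arbitrary picklable Python objects and apply op at the Python level. A hedged sketch of that behavior, simulated without an MPI runtime (object_scan is a hypothetical helper, not mpi4py API):

```python
# Illustration only: the lowercase methods pickle arbitrary Python
# objects and reduce them with a Python-level `op`. Simulated here.

import pickle

def object_scan(sendobjs, op):
    """What an inclusive object scan yields on each rank, simulated."""
    results, acc = [], None
    for obj in sendobjs:
        obj = pickle.loads(pickle.dumps(obj))  # objects travel pickled
        acc = obj if acc is None else op(acc, obj)
        results.append(acc)
    return results

# Each "rank" contributes a list; `op` concatenates them.
per_rank = [["a"], ["b"], ["c"]]
print(object_scan(per_rank, lambda x, y: x + y))
# [['a'], ['a', 'b'], ['a', 'b', 'c']]
```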