Copyright | (C) 2018, 2019, 2020 Erik Schnetter |
---|---|
License | Apache-2.0 |
Maintainer | Erik Schnetter <schnetter@gmail.com> |
Stability | experimental |
Portability | Requires an externally installed MPI library |
Safe Haskell | None |
Language | Haskell2010 |
MPI (the Message Passing Interface) is a widely used standard for distributed-memory programming on HPC (High Performance Computing) systems. MPI allows exchanging data (_messages_) between programs running in parallel. There are several high-quality open-source MPI implementations (e.g. MPICH, MVAPICH, OpenMPI) as well as a variety of closed-source implementations. These libraries can typically make use of high-bandwidth, low-latency communication hardware such as InfiniBand.
This library, mpi-hs, provides Haskell bindings for MPI. It is based on ideas taken from haskell-mpi, Boost.MPI, and MPI for Python.
mpi-hs provides two API levels: a low-level API gives rather direct access to the MPI API, apart from certain "reasonable" mappings from C to Haskell (e.g. output arguments that are stored via a pointer in C become regular return values in Haskell). A high-level API simplifies exchanging arbitrary values that can be serialized.
This module, MPI, is the low-level interface.
In general, the MPI C API is translated to Haskell in the following way, greatly aided by c2hs:
- Names of constants and functions have the MPI_ prefix removed, and underscores are replaced by camelCase. The MPI module is intended to be imported qualified, as in 'import qualified Control.Distributed.MPI as MPI'.
- Opaque types such as MPI_Request are wrapped via newtypes.
- The MPI error return code is omitted. Currently error codes are ignored, since the default MPI behaviour is to terminate the application instead of actually returning error codes. In the future, error codes might be reported via exceptions.
- Output arguments that are written via pointers in C are returned. Some functions now return tuples. If the output argument is a boolean value that indicates whether another output argument is valid, then this is translated into a Maybe.
- MPI has a facility to pass MPI_STATUS_IGNORE to indicate that no message status should be returned. This is instead handled by providing alternative functions ending with an underscore (e.g. recv_) that return () instead of Status.
- Datatype arguments are hidden. Instead, the correct MPI datatypes are inferred from the pointer type specifying the communication buffers. (This translation could be relaxed, and the original MPI functions could be exposed as well when needed.)
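Putting these conventions together, a minimal program using the low-level interface might look like the following sketch (launched with an MPI runner such as `mpirun`; the printed message text is illustrative):

```haskell
import qualified Control.Distributed.MPI as MPI

-- Launch with e.g. `mpirun -np 2 ./hello`
main :: IO ()
main = do
  MPI.init
  rank <- MPI.commRank MPI.commWorld
  size <- MPI.commSize MPI.commWorld
  putStrLn $ "Hello from rank " ++ show (MPI.fromRank rank :: Int)
           ++ " of " ++ show (MPI.fromRank size :: Int)
  MPI.finalize
```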
Synopsis
- class Buffer buf where
- newtype Comm = Comm CComm
- data ComparisonResult
- commCompare :: Comm -> Comm -> IO ComparisonResult
- commRank :: Comm -> IO Rank
- commSize :: Comm -> IO Rank
- commNull :: Comm
- commSelf :: Comm
- commWorld :: Comm
- newtype Count = Count CInt
- fromCount :: Integral i => Count -> i
- toCount :: Integral i => i -> Count
- countUndefined :: Count
- newtype Datatype = Datatype CDatatype
- datatypeNull :: Datatype
- datatypeByte :: Datatype
- datatypeChar :: Datatype
- datatypeDouble :: Datatype
- datatypeFloat :: Datatype
- datatypeInt :: Datatype
- datatypeLong :: Datatype
- datatypeLongDouble :: Datatype
- datatypeLongLong :: Datatype
- datatypeLongLongInt :: Datatype
- datatypeShort :: Datatype
- datatypeUnsigned :: Datatype
- datatypeUnsignedChar :: Datatype
- datatypeUnsignedLong :: Datatype
- datatypeUnsignedLongLong :: Datatype
- datatypeUnsignedShort :: Datatype
- class HasDatatype a where
- newtype Op = Op COp
- opNull :: Op
- opBand :: Op
- opBor :: Op
- opBxor :: Op
- opLand :: Op
- opLor :: Op
- opLxor :: Op
- opMax :: Op
- opMaxloc :: Op
- opMin :: Op
- opMinloc :: Op
- opProd :: Op
- opSum :: Op
- newtype Rank = Rank CInt
- fromRank :: Enum e => Rank -> e
- rootRank :: Rank
- toRank :: Enum e => e -> Rank
- anySource :: Rank
- newtype Request = Request CRequest
- requestNull :: IO Request
- newtype Status = Status (ForeignPtr Status)
- getSource :: Status -> IO Rank
- getTag :: Status -> IO Tag
- getCount :: Status -> Datatype -> IO Count
- getElements :: Status -> Datatype -> IO Int
- newtype Tag = Tag CInt
- fromTag :: Enum e => Tag -> e
- toTag :: Enum e => e -> Tag
- unitTag :: Tag
- anyTag :: Tag
- data ThreadSupport
- threadSupport :: IO (Maybe ThreadSupport)
- abort :: Comm -> Int -> IO ()
- finalize :: IO ()
- finalized :: IO Bool
- init :: IO ()
- initThread :: ThreadSupport -> IO ThreadSupport
- initialized :: IO Bool
- getLibraryVersion :: IO String
- getProcessorName :: IO String
- getVersion :: IO Version
- probe :: Rank -> Tag -> Comm -> IO Status
- probe_ :: Rank -> Tag -> Comm -> IO ()
- recv :: Buffer rb => rb -> Rank -> Tag -> Comm -> IO Status
- recv_ :: Buffer rb => rb -> Rank -> Tag -> Comm -> IO ()
- send :: Buffer sb => sb -> Rank -> Tag -> Comm -> IO ()
- sendrecv :: (Buffer sb, Buffer rb) => sb -> Rank -> Tag -> rb -> Rank -> Tag -> Comm -> IO Status
- sendrecv_ :: (Buffer sb, Buffer rb) => sb -> Rank -> Tag -> rb -> Rank -> Tag -> Comm -> IO ()
- wait :: Request -> IO Status
- wait_ :: Request -> IO ()
- iprobe :: Rank -> Tag -> Comm -> IO (Maybe Status)
- iprobe_ :: Rank -> Tag -> Comm -> IO Bool
- irecv :: Buffer rb => rb -> Rank -> Tag -> Comm -> IO Request
- isend :: Buffer sb => sb -> Rank -> Tag -> Comm -> IO Request
- requestGetStatus :: Request -> IO (Maybe Status)
- requestGetStatus_ :: Request -> IO Bool
- test :: Request -> IO (Maybe Status)
- test_ :: Request -> IO Bool
- allgather :: (Buffer sb, Buffer rb) => sb -> rb -> Comm -> IO ()
- allreduce :: (Buffer sb, Buffer rb) => sb -> rb -> Op -> Comm -> IO ()
- alltoall :: (Buffer sb, Buffer rb) => sb -> rb -> Comm -> IO ()
- barrier :: Comm -> IO ()
- bcast :: Buffer b => b -> Rank -> Comm -> IO ()
- exscan :: (Buffer sb, Buffer rb) => sb -> rb -> Op -> Comm -> IO ()
- gather :: (Buffer sb, Buffer rb) => sb -> rb -> Rank -> Comm -> IO ()
- reduce :: (Buffer sb, Buffer rb) => sb -> rb -> Op -> Rank -> Comm -> IO ()
- scan :: (Buffer sb, Buffer rb) => sb -> rb -> Op -> Comm -> IO ()
- scatter :: (Buffer sb, Buffer rb) => sb -> rb -> Rank -> Comm -> IO ()
- iallgather :: (Buffer sb, Buffer rb) => sb -> rb -> Comm -> IO Request
- iallreduce :: (Buffer sb, Buffer rb) => sb -> rb -> Op -> Comm -> IO Request
- ialltoall :: (Buffer sb, Buffer rb) => sb -> rb -> Comm -> IO Request
- ibarrier :: Comm -> IO Request
- ibcast :: Buffer b => b -> Rank -> Comm -> IO Request
- iexscan :: (Buffer sb, Buffer rb) => sb -> rb -> Op -> Comm -> IO Request
- igather :: (Buffer rb, Buffer sb) => sb -> rb -> Rank -> Comm -> IO Request
- ireduce :: (Buffer sb, Buffer rb) => sb -> rb -> Op -> Rank -> Comm -> IO Request
- iscan :: (Buffer sb, Buffer rb) => sb -> rb -> Op -> Comm -> IO Request
- iscatter :: (Buffer sb, Buffer rb) => sb -> rb -> Rank -> Comm -> IO Request
- wtick :: IO Double
- wtime :: IO Double
Types, and associated functions and constants
Communication buffers
class Buffer buf where Source #
A generic pointer-like type that supports converting to a Ptr
,
and which knows the type and number of its elements. This class
describes the MPI buffers used to send and receive messages.
Instances
Buffer ByteString Source # | |
Defined in Control.Distributed.MPI type Elem ByteString Source # withPtrLenType :: ByteString -> (Ptr (Elem ByteString) -> Count -> Datatype -> IO a) -> IO a Source # | |
(Storable a, HasDatatype a, Integral i) => Buffer (StablePtr a, i) Source # | |
(Storable a, HasDatatype a, Integral i) => Buffer (Ptr a, i) Source # | |
(Storable a, HasDatatype a, Integral i) => Buffer (ForeignPtr a, i) Source # | |
Defined in Control.Distributed.MPI type Elem (ForeignPtr a, i) Source # withPtrLenType :: (ForeignPtr a, i) -> (Ptr (Elem (ForeignPtr a, i)) -> Count -> Datatype -> IO a0) -> IO a0 Source # |
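As the tuple instances suggest, a buffer is specified as a pointer-like value paired with an element count; the MPI datatype is inferred from the pointed-to element type. A sketch of using a (ForeignPtr CInt, Int) pair as a receive buffer (`recvThreeInts` is an illustrative helper, not part of the API):

```haskell
import qualified Control.Distributed.MPI as MPI
import Foreign (ForeignPtr, mallocForeignPtrArray, withForeignPtr, peekArray)
import Foreign.C.Types (CInt)

-- (ForeignPtr CInt, Int) is a Buffer instance: the pointer supplies the
-- element type (and hence the MPI datatype), the integer the element count.
recvThreeInts :: MPI.Rank -> MPI.Comm -> IO [CInt]
recvThreeInts source comm = do
  buf <- mallocForeignPtrArray 3 :: IO (ForeignPtr CInt)
  MPI.recv_ (buf, 3 :: Int) source MPI.unitTag comm
  withForeignPtr buf $ peekArray 3
```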
Communicators
An MPI communicator, wrapping MPI_Comm
. A communicator defines
an independent communication channel between a group of processes.
Communicators need to be explicitly created and freed by the MPI
library. commWorld
is a communicator that is always available,
and which includes all processes.
Comm CComm |
data ComparisonResult Source #
The result of comparing two MPI communicators (see commCompare).
Instances
:: Comm | Communicator |
-> Comm | Other communicator |
-> IO ComparisonResult |
Compare two communicators
(MPI_Comm_compare
).
Return this process's rank in a communicator
(MPI_Comm_rank
).
Return the number of processes in a communicator
(MPI_Comm_size
).
The self communicator (MPI_COMM_SELF
). Each process has its own
self communicator that includes only this process.
Message sizes
Instances
Enum Count Source # | |
Defined in Control.Distributed.MPI | |
Eq Count Source # | |
Integral Count Source # | |
Num Count Source # | |
Ord Count Source # | |
Read Count Source # | |
Real Count Source # | |
Defined in Control.Distributed.MPI toRational :: Count -> Rational # | |
Show Count Source # | |
Generic Count Source # | |
Storable Count Source # | |
type Rep Count Source # | |
Defined in Control.Distributed.MPI |
countUndefined :: Count Source #
Error value returned by getCount
if the message is too large,
or if the message size is not an integer multiple of the provided
datatype (MPI_UNDEFINED
).
Datatypes
An MPI datatype, wrapping MPI_Datatype
. Datatypes need to be
explicitly created and freed by the MPI library. Predefined
datatypes exist for most simple C types such as CInt
or
CDouble
.
Datatype CDatatype |
datatypeNull :: Datatype Source #
A null (invalid) datatype.
datatypeByte :: Datatype Source #
MPI datatype for a byte (essentially CUChar
) (MPI_BYTE
).
datatypeChar :: Datatype Source #
MPI datatype for CChar
(MPI_CHAR
).
datatypeDouble :: Datatype Source #
MPI datatype for CDouble
(MPI_DOUBLE
).
datatypeFloat :: Datatype Source #
MPI datatype for CFloat
(MPI_FLOAT
).
datatypeInt :: Datatype Source #
MPI datatype for CInt
(MPI_INT
).
datatypeLong :: Datatype Source #
MPI datatype for CLong
(MPI_LONG
).
datatypeLongDouble :: Datatype Source #
MPI datatype for the C type 'long double' (MPI_LONG_DOUBLE
).
datatypeLongLong :: Datatype Source #
MPI datatype for CLLong
(MPI_LONG_LONG
).
datatypeLongLongInt :: Datatype Source #
MPI datatype for CLLong
(MPI_LONG_LONG_INT
).
datatypeShort :: Datatype Source #
MPI datatype for CShort
(MPI_SHORT
).
datatypeUnsigned :: Datatype Source #
MPI datatype for CUInt
(MPI_UNSIGNED
).
datatypeUnsignedChar :: Datatype Source #
MPI datatype for CUChar
(MPI_UNSIGNED_CHAR
).
datatypeUnsignedLong :: Datatype Source #
MPI datatype for CULong
(MPI_UNSIGNED_LONG
).
datatypeUnsignedLongLong :: Datatype Source #
MPI datatype for CULLong
(MPI_UNSIGNED_LONG_LONG
).
datatypeUnsignedShort :: Datatype Source #
MPI datatype for CUShort
(MPI_UNSIGNED_SHORT
).
class HasDatatype a where Source #
A type class mapping Haskell types to MPI datatypes. This is used to automatically determine the MPI datatype for communication buffers.
Instances
Reduction operations
An MPI reduction operation, wrapping MPI_Op
. Reduction
operations need to be explicitly created and freed by the MPI
library. Predefined operations exist for simple semigroups such as
sum, maximum, or minimum.
An MPI reduction operation corresponds to a Semigroup, not a Monoid, i.e. MPI has no notion of a corresponding neutral element.
Op COp |
The argmax reduction operation to find the maximum and its rank
(MPI_MAXLOC
).
The argmin reduction operation to find the minimum and its rank
(MPI_MINLOC
).
Process ranks
A newtype wrapper describing the source or destination of a
message, i.e. a process. Each communicator numbers its processes
sequentially starting from zero. Use toRank
and fromRank
to
convert between Rank
and other integral types. rootRank
is the
root (first) process of a communicator.
The association between a rank and a communicator is not explicitly tracked. From MPI's point of view, ranks are simply integers. The same rank might correspond to different processes in different communicators.
Instances
Enum Rank Source # | |
Eq Rank Source # | |
Integral Rank Source # | |
Num Rank Source # | |
Ord Rank Source # | |
Read Rank Source # | |
Real Rank Source # | |
Defined in Control.Distributed.MPI toRational :: Rank -> Rational # | |
Show Rank Source # | |
Ix Rank Source # | |
Generic Rank Source # | |
Storable Rank Source # | |
Defined in Control.Distributed.MPI | |
type Rep Rank Source # | |
Defined in Control.Distributed.MPI |
Communication requests
An MPI request, wrapping MPI_Request
. A request describes a
communication that is currently in progress. Each request must be
explicitly freed via cancel
, test
, or wait
.
Some MPI functions modify existing requests. The new requests are never interesting, and will not be returned.
TODO: Handle Comm
, Datatype
etc. in this way as well (all
except Status
).
Request CRequest |
requestNull :: IO Request Source #
A null (invalid) request (MPI_REQUEST_NULL
).
Message status
An MPI status, wrapping MPI_Status
. The status describes
certain properties of a message. It contains information such as
the source of a communication (getSource
), the message tag
(getTag
), or the size of the message (getCount
, getElements
).
In many cases, the status is not interesting. In this case, you can
use alternative functions ending with an underscore (e.g. recv_
)
that do not calculate a status.
The status is particularly interesting when using probe
or
iprobe
, as it describes a message that is ready to be received,
but which has not been received yet.
Get the size of a message, in terms of objects of type Datatype
(MPI_Get_count
).
To determine the MPI datatype for a given Haskell type, use
datatype
(call e.g. as 'datatype @CInt').
Get the number of elements in message, in terms of sub-object of
the type datatype
(MPI_Get_elements
).
This is useful when a message contains partial objects of type
datatype
. To determine the MPI datatype for a given Haskell type,
use datatype
(call e.g. as 'datatype @CInt').
Message tags
A newtype wrapper describing a message tag. A tag defines a
sub-channel within a communicator. While communicators are
heavy-weight objects that are expensive to set up and tear down, a
tag is a lightweight mechanism using an integer. Use toTag and
fromTag to convert between Tag and other enum types.
unitTag defines a standard tag that can be used as a default.
Thread support
data ThreadSupport Source #
Thread support levels for MPI (see initThread
):
- ThreadSingle (MPI_THREAD_SINGLE): The application must be single-threaded
- ThreadFunneled (MPI_THREAD_FUNNELED): The application might be multi-threaded, but only a single thread will call MPI
- ThreadSerialized (MPI_THREAD_SERIALIZED): The application might be multi-threaded, but the application guarantees that only one thread at a time will call MPI
- ThreadMultiple (MPI_THREAD_MULTIPLE): The application is multi-threaded, and different threads might call MPI at the same time
Instances
threadSupport :: IO (Maybe ThreadSupport) Source #
When MPI is initialized with this library, then it will remember the provided level of thread support. (This might be less than the requested level.)
Functions
Initialization and shutdown
Terminate the MPI execution environment
(MPI_Abort
).
Finalize (shut down) the MPI library (collective, MPI_Finalize
).
Initialize the MPI library (collective,
MPI_Init
).
This corresponds to calling initThread
ThreadSingle
.
:: ThreadSupport | required level of thread support |
-> IO ThreadSupport | provided level of thread support |
Initialize the MPI library (collective,
MPI_Init_thread
).
Note that the provided level of thread support might be less than
(!) the required level.
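A sketch of initializing with thread support and checking the provided level (this assumes ThreadSupport's Ord instance orders the levels as listed above; the warning text is illustrative):

```haskell
import qualified Control.Distributed.MPI as MPI
import Control.Monad (when)

main :: IO ()
main = do
  -- Request the highest level; MPI may provide less than requested.
  provided <- MPI.initThread MPI.ThreadMultiple
  when (provided < MPI.ThreadMultiple) $
    putStrLn "note: full multi-threaded MPI is not available"
  MPI.finalize
```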
initialized :: IO Bool Source #
Return whether the MPI library has been initialized
(MPI_Initialized
).
Inquiry
getLibraryVersion :: IO String Source #
Return the version of the MPI library
(MPI_Get_library_version
).
Note that the version of the MPI standard that this library
implements is returned by getVersion
.
getProcessorName :: IO String Source #
Return the name of the current process
(MPI_Get_processor_name
).
This should uniquely identify the hardware on which this process is
running.
getVersion :: IO Version Source #
Return the version of the MPI standard implemented by this
library
(MPI_Get_version
).
Note that the version of the MPI library itself is returned by
getLibraryVersion
.
Point-to-point (blocking)
:: Rank | Source rank (may be anySource) |
-> Tag | Message tag (may be anyTag) |
-> Comm | Communicator |
-> IO Status | Message status |
Probe (wait) for an incoming message
(MPI_Probe
).
Probe (wait) for an incoming message
(MPI_Probe
).
This function does not return a status, which might be more
efficient if the status is not needed.
:: Buffer rb | |
=> rb | Receive buffer |
-> Rank | Source rank (may be anySource) |
-> Tag | Message tag (may be anyTag) |
-> Comm | Communicator |
-> IO Status | Message status |
Receive a message
(MPI_Recv
).
The MPI datatype is determined automatically from the buffer
pointer type.
:: Buffer rb | |
=> rb | Receive buffer |
-> Rank | Source rank (may be anySource) |
-> Tag | Message tag (may be anyTag) |
-> Comm | Communicator |
-> IO () |
Receive a message
(MPI_Recv
).
The MPI datatype is determined automatically from the buffer
pointer type. This function does not return a status, which might
be more efficient if the status is not needed.
Send a message
(MPI_Send
).
The MPI datatype is determined automatically from the buffer
pointer type.
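Combining send and recv_, a sketch of a two-process exchange (the buffer contents and types are chosen for illustration; run with at least two processes):

```haskell
import qualified Control.Distributed.MPI as MPI
import Control.Monad (when)
import Foreign (ForeignPtr, mallocForeignPtrArray, withForeignPtr, pokeArray, peekArray)
import Foreign.C.Types (CDouble)

-- Rank 0 sends three doubles to rank 1.
main :: IO ()
main = do
  MPI.init
  rank <- MPI.commRank MPI.commWorld
  buf <- mallocForeignPtrArray 3 :: IO (ForeignPtr CDouble)
  if rank == MPI.toRank (0 :: Int)
    then do
      withForeignPtr buf $ \p -> pokeArray p [1, 2, 3]
      MPI.send (buf, 3 :: Int) (MPI.toRank (1 :: Int)) MPI.unitTag MPI.commWorld
    else when (rank == MPI.toRank (1 :: Int)) $ do
      MPI.recv_ (buf, 3 :: Int) (MPI.toRank (0 :: Int)) MPI.unitTag MPI.commWorld
      print =<< withForeignPtr buf (peekArray 3)
  MPI.finalize
```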
:: (Buffer sb, Buffer rb) | |
=> sb | Send buffer |
-> Rank | Destination rank |
-> Tag | Sent message tag |
-> rb | Receive buffer |
-> Rank | Source rank (may be anySource) |
-> Tag | Received message tag (may be anyTag) |
-> Comm | Communicator |
-> IO Status | Status for received message |
Send and receive a message with a single call
(MPI_Sendrecv
).
The MPI datatypes are determined automatically from the buffer
pointer types.
:: (Buffer sb, Buffer rb) | |
=> sb | Send buffer |
-> Rank | Destination rank |
-> Tag | Sent message tag |
-> rb | Receive buffer |
-> Rank | Source rank (may be anySource) |
-> Tag | Received message tag (may be anyTag) |
-> Comm | Communicator |
-> IO () |
Send and receive a message with a single call
(MPI_Sendrecv
).
The MPI datatypes are determined automatically from the buffer
pointer types. This function does not return a status, which might
be more efficient if the status is not needed.
Wait for a communication request to complete, then free the
request
(MPI_Wait
).
Wait for a communication request to complete, then free the
request
(MPI_Wait
).
This function does not return a status, which might be more
efficient if the status is not needed.
Point-to-point (non-blocking)
:: Rank | Source rank (may be anySource) |
-> Tag | Message tag (may be anyTag) |
-> Comm | Communicator |
-> IO (Maybe Status) |
Probe (check) for incoming messages without waiting
(non-blocking,
MPI_Iprobe
).
:: Rank | Source rank (may be anySource) |
-> Tag | Message tag (may be anyTag) |
-> Comm | Communicator |
-> IO Bool | Whether a message is available |
Probe (check) for an incoming message without waiting
(MPI_Iprobe
).
This function does not return a status, which might be more
efficient if the status is not needed.
Check whether a communication has completed without freeing the
communication request
(MPI_Request_get_status
).
requestGetStatus_ :: Request -> IO Bool Source #
Check whether a communication has completed without freeing the
communication request
(MPI_Request_get_status
).
This function does not return a status, which might be more
efficient if the status is not needed.
Check whether a communication has completed, and free the
communication request if so
(MPI_Test
).
test_ :: Request -> IO Bool Source #
Check whether a communication has completed, and free the
communication request if so
(MPI_Test
).
This function does not return a status, which might be more
efficient if the status is not needed.
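A sketch combining irecv, send, and wait_: each rank posts a non-blocking receive from itself and then sends to itself, so the example runs on any number of processes (the payload is illustrative):

```haskell
import qualified Control.Distributed.MPI as MPI
import Foreign (ForeignPtr, mallocForeignPtrArray, withForeignPtr, pokeArray, peekArray)
import Foreign.C.Types (CInt)

-- Post a non-blocking receive, send, then wait; 'wait_' also frees the
-- request (use 'wait' if the Status is needed).
main :: IO ()
main = do
  MPI.init
  rank <- MPI.commRank MPI.commWorld
  sbuf <- mallocForeignPtrArray 1 :: IO (ForeignPtr CInt)
  rbuf <- mallocForeignPtrArray 1 :: IO (ForeignPtr CInt)
  withForeignPtr sbuf $ \p -> pokeArray p [42]
  req <- MPI.irecv (rbuf, 1 :: Int) rank MPI.unitTag MPI.commWorld
  MPI.send (sbuf, 1 :: Int) rank MPI.unitTag MPI.commWorld
  MPI.wait_ req
  print =<< withForeignPtr rbuf (peekArray 1)
  MPI.finalize
```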
Collective (blocking)
Gather data from all processes and broadcast the result
(collective,
MPI_Allgather
).
:: (Buffer sb, Buffer rb) | |
=> sb | Source buffer |
-> rb | Destination buffer |
-> Op | Reduction operation |
-> Comm | Communicator |
-> IO () |
Reduce data from all processes and broadcast the result
(collective,
MPI_Allreduce
).
The MPI datatype is determined automatically from the buffer
pointer types.
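A sketch of a sum reduction with allreduce (each rank contributes its own rank number, so on n ranks every process receives 0 + 1 + ... + (n-1); the buffer layout is illustrative):

```haskell
import qualified Control.Distributed.MPI as MPI
import Foreign (ForeignPtr, mallocForeignPtrArray, withForeignPtr, pokeArray, peekArray)
import Foreign.C.Types (CInt)

-- Sum each rank's contribution; every rank receives the total.
main :: IO ()
main = do
  MPI.init
  rank <- MPI.commRank MPI.commWorld
  sbuf <- mallocForeignPtrArray 1 :: IO (ForeignPtr CInt)
  rbuf <- mallocForeignPtrArray 1 :: IO (ForeignPtr CInt)
  withForeignPtr sbuf $ \p -> pokeArray p [MPI.fromRank rank]
  MPI.allreduce (sbuf, 1 :: Int) (rbuf, 1 :: Int) MPI.opSum MPI.commWorld
  print =<< withForeignPtr rbuf (peekArray 1)
  MPI.finalize
```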
Send data from all processes to all processes (collective,
MPI_Alltoall
).
The MPI datatypes are determined automatically from the buffer
pointer types.
:: Buffer b | |
=> b | Buffer (read on the root process, written on all other processes) |
-> Rank | Root rank (sending process) |
-> Comm | Communicator |
-> IO () |
Broadcast data from one process to all processes (collective,
MPI_Bcast
).
The MPI datatype is determined automatically from the buffer
pointer type.
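A sketch of bcast: only the root fills the buffer before the call, and afterwards every rank holds the same data (the values are illustrative):

```haskell
import qualified Control.Distributed.MPI as MPI
import Control.Monad (when)
import Foreign (ForeignPtr, mallocForeignPtrArray, withForeignPtr, pokeArray, peekArray)
import Foreign.C.Types (CInt)

-- The root writes the buffer; bcast copies it to all other ranks.
main :: IO ()
main = do
  MPI.init
  rank <- MPI.commRank MPI.commWorld
  buf <- mallocForeignPtrArray 2 :: IO (ForeignPtr CInt)
  when (rank == MPI.rootRank) $
    withForeignPtr buf $ \p -> pokeArray p [10, 20]
  MPI.bcast (buf, 2 :: Int) MPI.rootRank MPI.commWorld
  print =<< withForeignPtr buf (peekArray 2)
  MPI.finalize
```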
:: (Buffer sb, Buffer rb) | |
=> sb | Source buffer |
-> rb | Destination buffer |
-> Op | Reduction operation |
-> Comm | Communicator |
-> IO () |
Reduce data from all processes via an exclusive (prefix) scan
(collective,
MPI_Exscan
).
Each process with rank r
receives the result of reducing data
from rank 0
to rank r-1
(inclusive). Rank 0 should logically
receive a neutral element of the reduction operation, but instead
receives an undefined value since MPI is not aware of neutral
values for reductions.
The MPI datatype is determined automatically from the buffer pointer type.
:: (Buffer sb, Buffer rb) | |
=> sb | Source buffer |
-> rb | Destination buffer (only used on the root process) |
-> Rank | Root rank |
-> Comm | Communicator |
-> IO () |
Gather data from all processes to the root process (collective,
MPI_Gather
).
The MPI datatypes are determined automatically from the buffer
pointer types.
:: (Buffer sb, Buffer rb) | |
=> sb | Source buffer |
-> rb | Destination buffer |
-> Op | Reduction operation |
-> Rank | Root rank |
-> Comm | Communicator |
-> IO () |
Reduce data from all processes (collective,
MPI_Reduce
).
The result is only available on the root process. The MPI datatypes
are determined automatically from the buffer pointer types.
:: (Buffer sb, Buffer rb) | |
=> sb | Source buffer |
-> rb | Destination buffer |
-> Op | Reduction operation |
-> Comm | Communicator |
-> IO () |
Reduce data from all processes via an (inclusive) scan
(collective,
MPI_Scan
).
Each process with rank r
receives the result of reducing data
from rank 0
to rank r
(inclusive). The MPI datatype is
determined automatically from the buffer pointer type.
:: (Buffer sb, Buffer rb) | |
=> sb | Source buffer (only used on the root process) |
-> rb | Destination buffer |
-> Rank | Root rank |
-> Comm | Communicator |
-> IO () |
Scatter data from the root process to all processes (collective,
MPI_Scatter
).
The MPI datatypes are determined automatically from the buffer
pointer types.
Collective (non-blocking)
:: (Buffer sb, Buffer rb) | |
=> sb | Source buffer |
-> rb | Destination buffer |
-> Comm | Communicator |
-> IO Request | Communication request |
Begin to gather data from all processes and broadcast the result,
and return a handle to the communication request (collective,
non-blocking,
MPI_Iallgather
).
The request must be freed by calling test
, wait
, or similar.
The MPI datatypes are determined automatically from the buffer
pointer types.
:: (Buffer sb, Buffer rb) | |
=> sb | Source buffer |
-> rb | Destination buffer |
-> Op | Reduction operation |
-> Comm | Communicator |
-> IO Request | Communication request |
Begin to reduce data from all processes and broadcast the result,
and return a handle to the communication request (collective,
non-blocking,
MPI_Iallreduce
).
The request must be freed by calling test
, wait
, or similar.
The MPI datatype is determined automatically from the buffer
pointer types.
:: (Buffer sb, Buffer rb) | |
=> sb | Source buffer |
-> rb | Destination buffer |
-> Comm | Communicator |
-> IO Request | Communication request |
Begin to send data from all processes to all processes, and
return a handle to the communication request (collective,
non-blocking,
MPI_Ialltoall
).
The request must be freed by calling test
, wait
, or similar.
The MPI datatypes are determined automatically from the buffer
pointer types.
Start a barrier, and return a handle to the communication request
(collective, non-blocking,
MPI_Ibarrier
).
The request must be freed by calling test
, wait
, or similar.
:: Buffer b | |
=> b | Buffer (read on the root process, written on all other processes) |
-> Rank | Root rank (sending process) |
-> Comm | Communicator |
-> IO Request | Communication request |
Begin to broadcast data from one process to all processes, and
return a handle to the communication request (collective,
non-blocking,
MPI_Ibcast
).
The request must be freed by calling test
, wait
, or similar.
The MPI datatype is determined automatically from the buffer
pointer type.
:: (Buffer sb, Buffer rb) | |
=> sb | Source buffer |
-> rb | Destination buffer |
-> Op | Reduction operation |
-> Comm | Communicator |
-> IO Request | Communication request |
Begin to reduce data from all processes via an exclusive (prefix)
scan, and return a handle to the communication request (collective,
non-blocking,
MPI_Iexscan
).
Each process with rank r
receives the result of reducing data
from rank 0
to rank r-1
(inclusive). Rank 0 should logically
receive a neutral element of the reduction operation, but instead
receives an undefined value since MPI is not aware of neutral
values for reductions.
The request must be freed by calling test
, wait
, or similar.
The MPI datatype is determined automatically from the buffer
pointer type.
:: (Buffer rb, Buffer sb) | |
=> sb | Source buffer |
-> rb | Destination buffer (relevant only on the root process) |
-> Rank | Root rank |
-> Comm | Communicator |
-> IO Request | Communication request |
Begin to gather data from all processes to the root process, and
return a handle to the communication request (collective,
non-blocking,
MPI_Igather
).
The request must be freed by calling test
, wait
, or similar.
The MPI datatypes are determined automatically from the buffer
pointer types.
:: (Buffer sb, Buffer rb) | |
=> sb | Source buffer |
-> rb | Destination buffer |
-> Op | Reduction operation |
-> Rank | Root rank |
-> Comm | Communicator |
-> IO Request | Communication request |
Begin to reduce data from all processes, and return a handle to
the communication request (collective, non-blocking,
MPI_Ireduce
).
The result is only available on the root process. The request must
be freed by calling test
, wait
, or similar. The MPI datatypes
are determined automatically from the buffer pointer types.
:: (Buffer sb, Buffer rb) | |
=> sb | Source buffer |
-> rb | Destination buffer |
-> Op | Reduction operation |
-> Comm | Communicator |
-> IO Request | Communication request |
Begin to reduce data from all processes via an (inclusive) scan,
and return a handle to the communication request (collective,
non-blocking,
MPI_Iscan
).
Each process with rank r
receives the result of reducing data
from rank 0
to rank r
(inclusive). The request must be freed by
calling test
, wait
, or similar. The MPI datatype is determined
automatically from the buffer pointer type.
:: (Buffer sb, Buffer rb) | |
=> sb | Source buffer (only used on the root process) |
-> rb | Destination buffer |
-> Rank | Root rank |
-> Comm | Communicator |
-> IO Request | Communication request |
Begin to scatter data from the root process to all processes, and
return a handle to the communication request (collective,
non-blocking,
MPI_Iscatter
).
The request must be freed by calling test
, wait
, or similar.
The MPI datatypes are determined automatically from the buffer
pointer types.