NAME
MPI_Allreduce - Combines values from all processes and distributes the
result back to all processes.
SYNTAX
C Syntax
#include <mpi.h>
int MPI_Allreduce(void *sendbuf, void *recvbuf, int count,
MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
Fortran Syntax
INCLUDE 'mpif.h'
MPI_ALLREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP,
COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER COUNT, DATATYPE, OP, COMM, IERROR
C++ Syntax
#include <mpi.h>
void MPI::Comm::Allreduce(const void* sendbuf, void* recvbuf,
int count, const MPI::Datatype& datatype, const
MPI::Op& op) const=0
INPUT PARAMETERS
sendbuf Starting address of send buffer (choice).
count Number of elements in send buffer (integer).
datatype Datatype of elements of send buffer (handle).
op Operation (handle).
comm Communicator (handle).
OUTPUT PARAMETERS
recvbuf Starting address of receive buffer (choice).
IERROR Fortran only: Error status (integer).
DESCRIPTION
Same as MPI_Reduce except that the result appears in the receive buffer
of all the group members.
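A minimal C sketch of the call described above (illustrative only; the
variable names are arbitrary): each process contributes one integer,
and every process receives the sum.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each process contributes its rank; after the call every
       process holds the sum of all ranks in the communicator. */
    MPI_Allreduce(&rank, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d: total = %d\n", rank, total);

    MPI_Finalize();
    return 0;
}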
Example 1: A routine that computes the product of a vector and an array
that are distributed across a group of processes and returns the answer
at all nodes (compare with Example 2, with MPI_Reduce, below).
SUBROUTINE PAR_BLAS2(m, n, a, b, c, comm)
REAL a(m), b(m,n)    ! local slice of array
REAL c(n)            ! result
REAL sum(n)
INTEGER n, comm, i, j, ierr
! local sum
DO j = 1, n
  sum(j) = 0.0
  DO i = 1, m
    sum(j) = sum(j) + a(i)*b(i,j)
  END DO
END DO
! global sum
CALL MPI_ALLREDUCE(sum, c, n, MPI_REAL, MPI_SUM, comm, ierr)
! return result at all nodes
RETURN
Example 2: A routine that computes the product of a vector and an array
that are distributed across a group of processes and returns the answer
at node zero.
SUBROUTINE PAR_BLAS2(m, n, a, b, c, comm)
REAL a(m), b(m,n) ! local slice of array
REAL c(n) ! result
REAL sum(n)
INTEGER n, comm, i, j, ierr
! local sum
DO j = 1, n
  sum(j) = 0.0
  DO i = 1, m
    sum(j) = sum(j) + a(i)*b(i,j)
  END DO
END DO
! global sum
CALL MPI_REDUCE(sum, c, n, MPI_REAL, MPI_SUM, 0, comm, ierr)
! return result at node zero (and garbage at the other nodes)
RETURN
USE OF IN-PLACE OPTION
When the communicator is an intracommunicator, you can perform an all-
reduce operation in-place (the output buffer is used as the input
buffer). Use the variable MPI_IN_PLACE as the value of sendbuf at all
processes.
Note that MPI_IN_PLACE is a special kind of value; it has the same
restrictions on its use as MPI_BOTTOM.
Because the in-place option converts the receive buffer into a send-
and-receive buffer, a Fortran binding that includes INTENT must mark
these as INOUT, not OUT.
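A minimal C sketch of the in-place form (illustrative; the variable
names are arbitrary): the local contribution is written into the
receive buffer and MPI_IN_PLACE is passed as sendbuf.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &value);   /* local contribution */

    /* In-place form: the receive buffer supplies the input and
       receives the result; MPI_IN_PLACE is passed as sendbuf. */
    MPI_Allreduce(MPI_IN_PLACE, &value, 1, MPI_INT, MPI_SUM,
                  MPI_COMM_WORLD);

    printf("buffer now holds the global sum: %d\n", value);

    MPI_Finalize();
    return 0;
}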
WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
When the communicator is an inter-communicator, the reduce operation
occurs in two phases. The data is reduced from all the members of the
first group and received by all the members of the second group. Then
the data is reduced from all the members of the second group and
received by all the members of the first. The operation exhibits a
symmetric, full-duplex behavior.
The first group defines the root process. The root process uses
MPI_ROOT as the value of root. All other processes in the first group
use MPI_PROC_NULL as the value of root. All processes in the second
group use the rank of the root process in the first group as the value
of root.
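The following is an illustrative C sketch of the inter-communicator
case, assuming at least two processes; the even/odd split, the leader
ranks, and tag 0 are arbitrary choices made here for illustration.
Each process receives the sum contributed by the remote group.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int world_rank, local_sum, remote_sum;
    MPI_Comm split_comm, inter_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split the world into two groups: even and odd ranks. */
    int color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &split_comm);

    /* Build an inter-communicator between the two groups. Rank 0 of
       each group is its leader; world ranks 0 and 1 are the peer
       leaders (requires at least two processes). */
    int remote_leader = (color == 0) ? 1 : 0;
    MPI_Intercomm_create(split_comm, 0, MPI_COMM_WORLD, remote_leader,
                         0, &inter_comm);

    /* Each process contributes its world rank; every process then
       receives the sum computed over the *other* group. */
    local_sum = world_rank;
    MPI_Allreduce(&local_sum, &remote_sum, 1, MPI_INT, MPI_SUM,
                  inter_comm);

    printf("world rank %d (group %d): remote group's sum = %d\n",
           world_rank, color, remote_sum);

    MPI_Comm_free(&inter_comm);
    MPI_Comm_free(&split_comm);
    MPI_Finalize();
    return 0;
}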
NOTES ON COLLECTIVE OPERATIONS
The reduction functions (MPI_Op) do not return an error value. As a
result, if the functions detect an error, all they can do is either
call MPI_Abort or silently skip the problem. Thus, if you change the
error handler from MPI_ERRORS_ARE_FATAL to something else, for example,
MPI_ERRORS_RETURN, then no error may be indicated.
ERRORS
Almost all MPI routines return an error value; C routines as the value
of the function and Fortran routines in the last argument. C++ func-
tions do not return errors. If the default error handler is set to
MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
will be used to throw an MPI::Exception object.
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
may be used to cause error values to be returned. Note that MPI does
not guarantee that an MPI program can continue past an error.
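A minimal C sketch of this pattern (illustrative; names are arbitrary):
the communicator's error handler is switched to MPI_ERRORS_RETURN and
the return code of MPI_Allreduce is checked explicitly.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, sum, rc;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Ask for error codes to be returned instead of aborting. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    rc = MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "MPI_Allreduce failed: %s\n", msg);
        MPI_Abort(MPI_COMM_WORLD, rc);
    }

    MPI_Finalize();
    return 0;
}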