MPI_Win_complete
Completes an RMA access epoch begun with a call to MPI_Win_start.
int MPI_Win_complete( MPI_Win win );
Parameters
- win
- [in] window object (handle)
Remarks
Completes an RMA access epoch on win started by a call to MPI_WIN_START. All RMA communication calls issued on win during this epoch will have completed at the origin when the call returns.
MPI_WIN_COMPLETE enforces completion of preceding RMA calls at the origin, but not at the target. A put or accumulate call may not have completed at the target when it has completed at the origin.
Consider the sequence of calls in the example below:
MPI_Win_start(group, flag, win);
MPI_Put(..., win);
MPI_Win_complete(win);
The call to MPI_WIN_COMPLETE does not return until the put call has completed at the origin, and the target window will be accessed by the put operation only after the call to MPI_WIN_START has matched a call to MPI_WIN_POST by the target process. This still leaves much choice to implementors. The call to MPI_WIN_START can block until the matching call to MPI_WIN_POST occurs at all target processes. One can also have implementations where the call to MPI_WIN_START is nonblocking, but the call to MPI_PUT blocks until the matching call to MPI_WIN_POST has occurred; or implementations where the first two calls are nonblocking, but the call to MPI_WIN_COMPLETE blocks until the call to MPI_WIN_POST has occurred; or even implementations where all three calls can complete before any target process has called MPI_WIN_POST. In this last case, the data put must be buffered so as to allow the put to complete at the origin ahead of its completion at the target. However, once the call to MPI_WIN_POST is issued, the sequence above must complete, without further dependencies.
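For reference, the matching exposure epoch on the target side pairs MPI_WIN_POST with MPI_WIN_WAIT. The sketch below is illustrative only; origin_group is assumed to be a group containing the rank(s) that issue the put.
/* Target side (illustrative sketch; origin_group is an assumed group of origin ranks) */
MPI_Win_post(origin_group, 0, win);  /* expose the local window to the origin          */
/* the origin's MPI_Put may update the window at any point during this epoch           */
MPI_Win_wait(win);                   /* returns after the origin calls MPI_Win_complete */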
Thread and Interrupt Safety
This routine is thread-safe. This means that this routine may be safely used by multiple threads without the need for any user-provided thread locks. However, the routine is not interrupt safe. Typically, this is due to the use of memory allocation routines such as malloc or other non-MPICH runtime routines that are themselves not interrupt-safe.
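As a minimal sketch (assuming the MPI library provides MPI_THREAD_MULTIPLE and that argc/argv are main's arguments), an application that intends to issue RMA calls such as MPI_Win_complete from several threads would first request that level of thread support:
int provided;
MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
if (provided < MPI_THREAD_MULTIPLE) {
    /* full thread support is unavailable; restrict MPI calls to a single thread */
}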
Notes for Fortran
All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK) have an additional argument ierr at the end of the argument list. ierr is an integer and has the same meaning as the return value of the routine in C. In Fortran, MPI routines are subroutines, and are invoked with the call statement. All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER in Fortran.
Errors
All MPI routines (except MPI_Wtime and MPI_Wtick) return an error value; C routines as the value of the function and Fortran routines in the last argument. Before the value is returned, the current MPI error handler is called. By default, this error handler aborts the MPI job. The error handler may be changed with MPI_Comm_set_errhandler (for communicators), MPI_File_set_errhandler (for files), and MPI_Win_set_errhandler (for RMA windows). The MPI-1 routine MPI_Errhandler_set may be used but its use is deprecated. The predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error; however, MPI implementations will attempt to continue whenever possible.
- MPI_SUCCESS
- No error; MPI routine completed successfully.
- MPI_ERR_WIN
- Invalid MPI window object
- MPI_ERR_OTHER
- Other error; use MPI_Error_string to get more information about this error code.
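The sketch below illustrates one way to obtain these error values rather than aborting: install MPI_ERRORS_RETURN on the window and inspect the return code of MPI_Win_complete. Installing an error handler is an assumption of this sketch, not a requirement of the routine.
MPI_Win_set_errhandler(win, MPI_ERRORS_RETURN);  /* return errors instead of aborting */
int err = MPI_Win_complete(win);
if (err != MPI_SUCCESS) {
    char msg[MPI_MAX_ERROR_STRING];
    int len;
    MPI_Error_string(err, msg, &len);            /* translate the error code to text  */
    printf("MPI_Win_complete failed: %s\n", msg);
}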
Example Code
The following sample code illustrates MPI_Win_complete.
#include "mpi.h"
#include "stdio.h"
/* tests put and get with post/start/complete/wait on 2 processes */
#define SIZE1 100
#define SIZE2 200
int main(int
argc, char *argv[])
{
int rank,
destrank, nprocs, *A, *B, i;
MPI_Group comm_group, group;
MPI_Win win;
int errs = 0;
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD,&nprocs);
MPI_Comm_rank(MPI_COMM_WORLD,&rank);
if (nprocs != 2)
{
printf("Run this program with 2 processes\n");fflush(stdout);
MPI_Abort(MPI_COMM_WORLD,1);
}
i = MPI_Alloc_mem(SIZE2 * sizeof(int),
MPI_INFO_NULL, &A);
if (i) {
printf("Can't allocate memory in test program\n");fflush(stdout);
MPI_Abort(MPI_COMM_WORLD, 1);
}
i = MPI_Alloc_mem(SIZE2 * sizeof(int),
MPI_INFO_NULL, &B);
if (i) {
printf("Can't allocate memory in test program\n");fflush(stdout);
MPI_Abort(MPI_COMM_WORLD, 1);
}
MPI_Comm_group(MPI_COMM_WORLD, &comm_group);
if (rank == 0) {
for (i=0;
i<SIZE2; i++) A[i] = B[i] = i;
MPI_Win_create(NULL, 0, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);
destrank = 1;
MPI_Group_incl(comm_group, 1, &destrank, &group);
MPI_Win_start(group, 0, win);
for (i=0;
i<SIZE1; i++)
MPI_Put(A+i, 1, MPI_INT, 1, i, 1, MPI_INT, win);
for (i=0; i<SIZE1;
i++)
MPI_Get(B+i, 1, MPI_INT, 1, SIZE1+i, 1, MPI_INT, win);
MPI_Win_complete(win);
for (i=0; i<SIZE1;
i++)
if (B[i] !=
(-4)*(i+SIZE1)) {
printf("Get Error: B[i] is %d, should be %d\n", B[i], (-4)*(i+SIZE1));fflush(stdout);
errs++;
}
}
else {
/* rank=1 */
for (i=0; i<SIZE2;
i++) B[i] = (-4)*i;
MPI_Win_create(B, SIZE2*sizeof(int),
sizeof(int),
MPI_INFO_NULL, MPI_COMM_WORLD, &win);
destrank = 0;
MPI_Group_incl(comm_group, 1, &destrank, &group);
MPI_Win_post(group, 0, win);
MPI_Win_wait(win);
for (i=0;
i<SIZE1; i++) {
if (B[i] != i) {
printf("Put Error: B[i] is %d, should be %d\n", B[i], i);fflush(stdout);
errs++;
}
}
}
MPI_Group_free(&group);
MPI_Group_free(&comm_group);
MPI_Win_free(&win);
MPI_Free_mem(A);
MPI_Free_mem(B);
MPI_Finalize();
return errs;
}