HyPar  1.0
Finite-Difference Hyperbolic-Parabolic PDE Solver on Cartesian Grids
MPIPartitionArray1D.c File Reference

Partition an essentially 1D array. More...

#include <stdio.h>
#include <stdlib.h>
#include <basic.h>
#include <arrayfunctions.h>
#include <mpivars.h>


Functions

int MPIPartitionArray1D (void *m, double *xg, double *x, int istart, int iend, int N_local, int ghosts)
 

Detailed Description

Partition an essentially 1D array.

Author
Debojyoti Ghosh

Definition in file MPIPartitionArray1D.c.

Function Documentation

int MPIPartitionArray1D ( void *  m,
double *  xg,
double *  x,
int  istart,
int  iend,
int  N_local,
int  ghosts 
)

Partitions the contents of a global 1D array on the root rank (rank 0) into local arrays on all the MPI ranks. See the documentation of MPIExchangeBoundaries1D() for what a "1D array" means in the context of a multidimensional simulation. The 1D array must be the same along the spatial dimensions normal to the one it represents.

Notes:

  • The global array must not have ghost points.
  • The global array must be preallocated on only rank 0. On other ranks, it must be NULL.
  • Since this function deals with a 1D array, more than one rank may receive the same piece of data from rank 0 (i.e., if there is more than one MPI rank along the dimensions normal to the one corresponding to x).
Parameters
m	MPI object of type MPIVariables
xg	Global 1D array (allocated only on rank 0) without ghost points
x	Local 1D array into which this rank's portion of the global array is copied
istart	Starting index (global) of this rank's portion of the array
iend	Ending index (global) of this rank's portion of the array + 1
N_local	Local size of the array (iend - istart)
ghosts	Number of ghost points

Definition at line 25 of file MPIPartitionArray1D.c.

{
  MPIVariables *mpi = (MPIVariables*) m;
  int ierr = 0;
#ifndef serial
  MPI_Status status;
#endif

  /* xg should be non-null only on root */
  if (mpi->rank && xg) {
    fprintf(stderr,"Error in MPIPartitionArray1D(): global array exists on non-root processors (rank %d).\n",
            mpi->rank);
    ierr = 1;
  }
  if ((!mpi->rank) && (!xg)) {
    fprintf(stderr,"Error in MPIPartitionArray1D(): global array is not allocated on root processor.\n");
    ierr = 1;
  }

  if (!mpi->rank) {
    int proc;
    for (proc = 0; proc < mpi->nproc; proc++) {
      /* find out the domain limits for each process */
      int is, ie;
      if (proc) {
#ifndef serial
        MPI_Recv(&is,1,MPI_INT,proc,1442,mpi->world,&status);
        MPI_Recv(&ie,1,MPI_INT,proc,1443,mpi->world,&status);
#endif
      } else {
        is = istart;
        ie = iend;
      }
      int size = ie - is;
      /* copy this rank's slice of the global array into a send buffer */
      double *buffer = (double*) calloc (size, sizeof(double));
      _ArrayCopy1D_((xg+is),buffer,size);
      if (proc) {
#ifndef serial
        MPI_Send(buffer,size,MPI_DOUBLE,proc,1539,mpi->world);
#endif
      } else _ArrayCopy1D_(buffer,x,N_local);
      free(buffer);
    }

  } else {
#ifndef serial
    /* meanwhile, on the other processes: send the local start and end
       indices to root, then receive this rank's portion of the data */
    MPI_Send(&istart,1,MPI_INT,0,1442,mpi->world);
    MPI_Send(&iend  ,1,MPI_INT,0,1443,mpi->world);
    double *buffer = (double*) calloc (N_local, sizeof(double));
    MPI_Recv(buffer,N_local,MPI_DOUBLE,0,1539,mpi->world,&status);
    _ArrayCopy1D_(buffer,x,N_local);
    free(buffer);
#endif
  }
  return(ierr);
}