HyPar 1.0: Finite-Difference Hyperbolic-Parabolic PDE Solver on Cartesian Grids

mpivars.h File Reference
MPI related function definitions.
#include <mpivars_struct.h>
Functions:
  int   MPIBroadcast_double (double *, int, int, void *)
  int   MPIBroadcast_integer (int *, int, int, void *)
  int   MPIBroadcast_character (char *, int, int, void *)
  int   MPICreateCommunicators (int, void *)
  int   MPIFreeCommunicators (int, void *)
  int   MPICreateIOGroups (void *)
  int   MPIExchangeBoundaries1D (void *, double *, int, int, int, int)
  int   MPIExchangeBoundariesnD (int, int, int *, int, void *, double *)
  int   MPIGatherArray1D (void *, double *, double *, int, int, int, int)
  int   MPIGatherArraynD (int, void *, double *, double *, int *, int *, int, int)
  int   MPIGatherArraynDwGhosts (int, void *, double *, double *, int *, int *, int, int)
  int   MPIPartitionArraynD (int, void *, double *, double *, int *, int *, int, int)
  int   MPIPartitionArraynDwGhosts (int, void *, double *, double *, int *, int *, int, int)
  int   MPIPartitionArray1D (void *, double *, double *, int, int, int, int)
  int   MPIGetArrayDatanD (double *, double *, int *, int *, int *, int *, int, int, int, void *)
  int   MPILocalDomainLimits (int, int, void *, int *, int *, int *)
  int   MPIMax_integer (int *, int *, int, void *)
  int   MPIMax_long (long *, long *, int, void *)
  int   MPIMax_double (double *, double *, int, void *)
  int   MPIMin_integer (int *, int *, int, void *)
  int   MPIMin_double (double *, double *, int, void *)
  int   MPISum_double (double *, double *, int, void *)
  int   MPISum_integer (int *, int *, int, void *)
  int   MPIPartition1D (int, int, int)
  int   MPIRank1D (int, int *, int *)
  int   MPIRanknD (int, int, int *, int *)
  void  MPIGetFilename (char *, void *, char *)
  int   gpuMPIExchangeBoundariesnD (int, int, const int *, int, void *, double *)
MPI related function definitions.
Definition in file mpivars.h.
int MPIBroadcast_double(double *x, int size, int root, void *comm)
Broadcast a double to all ranks
Broadcast an array of type double to all MPI ranks
Parameters:
  x      array to broadcast to all ranks
  size   size of array to broadcast
  root   rank from which to broadcast
  comm   MPI communicator within which to broadcast
Definition at line 9 of file MPIBroadcast.c.
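The void* communicator argument keeps MPI types out of the header. A minimal sketch of such a wrapper, assuming the pointer carries an MPI_Comm (consistent with the comm parameter description above, but not necessarily the exact code in MPIBroadcast.c):

```c
#include <mpi.h>

/* Broadcast "size" doubles from rank "root" to all ranks in "comm".
   The void* is cast back to the MPI_Comm it is assumed to point to. */
int MPIBroadcast_double(double *x, int size, int root, void *comm)
{
  MPI_Comm *mpi_comm = (MPI_Comm*) comm;
  MPI_Bcast(x, size, MPI_DOUBLE, root, *mpi_comm);
  return 0;
}
```

The integer and character variants below follow the same pattern with MPI_INT and MPI_CHAR.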
int MPIBroadcast_integer(int *x, int size, int root, void *comm)
Broadcast an integer to all ranks
Broadcast an array of type int to all MPI ranks
Parameters:
  x      array to broadcast to all ranks
  size   size of array to broadcast
  root   rank from which to broadcast
  comm   MPI communicator within which to broadcast
Definition at line 23 of file MPIBroadcast.c.
int MPIBroadcast_character(char *x, int size, int root, void *comm)
Broadcast a character to all ranks
Broadcast an array of type char to all MPI ranks
Parameters:
  x      array to broadcast to all ranks
  size   size of array to broadcast
  root   rank from which to broadcast
  comm   MPI communicator within which to broadcast
Definition at line 37 of file MPIBroadcast.c.
int MPICreateCommunicators(int ndims, void *m)
Create communicators required for the tridiagonal solver in compact schemes
Create subcommunicators from MPI_COMM_WORLD, where each subcommunicator contains the MPI ranks along one spatial dimension. Consider a two-dimensional problem partitioned across 21 MPI ranks as a grid of subdomains, 7 ranks along dimension 0 and 3 along dimension 1. This function will create 10 subcommunicators: 3 containing the 7 ranks along each grid line in dimension 0, and 7 containing the 3 ranks along each grid line in dimension 1.
These subcommunicators are useful for parallel computations along grid lines. For example, a compact finite-difference scheme solves implicit systems along grid lines in every spatial dimension, so the subcommunicator can be passed to the parallel systems solver instead of MPI_COMM_WORLD.
Parameters:
  ndims   Number of spatial dimensions
  m       MPI object of type MPIVariables
Definition at line 35 of file MPICommunicators.c.
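A sketch of how such subcommunicators can be built with MPI_Comm_split(); ip (this rank's n-dimensional rank) and the helper name are illustrative, not the actual code in MPICommunicators.c:

```c
#include <mpi.h>

/* For each dimension d, ranks on the same grid line along d (i.e.,
   sharing the rank in every other dimension) get the same color and
   end up in the same subcommunicator, ordered by ip[d]. */
void create_line_communicators(int ndims, const int *ip, MPI_Comm *sub_comm)
{
  for (int d = 0; d < ndims; d++) {
    int color = 0;
    for (int n = 0; n < ndims; n++)
      if (n != d) color = color*1000 + ip[n];  /* assumes <1000 ranks/dim */
    MPI_Comm_split(MPI_COMM_WORLD, color, ip[d], &sub_comm[d]);
  }
}
```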
int MPIFreeCommunicators(int ndims, void *m)
Free/destroy communicators created
Free the subcommunicators created in MPICreateCommunicators().
Parameters:
  ndims   Number of spatial dimensions
  m       MPI object of type MPIVariables
Definition at line 75 of file MPICommunicators.c.
int MPICreateIOGroups(void *m)
Create I/O groups for file reading and writing. Group leaders gather data from all other ranks in the group and write it to file, and read from file and send the data to the appropriate ranks in the group. Thus, the number of ranks participating in file I/O is equal to the number of groups (which is an input) and can be set to the number of I/O nodes available.
Create I/O groups of MPI ranks: a scalable approach to file I/O when running simulations on a large number of processors (>10,000) is to partition all the MPI ranks into I/O groups. Each group has a "leader" that:
- gathers data from the other ranks in its group and writes it to file, and
- reads data from file and sends it to the appropriate ranks in its group.
The number of I/O groups (and hence the number of ranks reading from and writing to files) is specified through MPIVariables::N_IORanks. Ideally, this corresponds to the number of I/O nodes available for the total number of compute nodes being used on an HPC platform. Two extreme cases are:
- every rank is its own group (all ranks read and write their own data), and
- a single group containing all ranks (one rank does all the I/O).
Neither extreme case is scalable.
Parameters:
  m   MPI object of type MPIVariables
Definition at line 37 of file MPIIOGroups.c.
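A sketch of one way the group bookkeeping can be computed; it assumes the number of ranks is divisible by N_IORanks, and the names are illustrative rather than HyPar's:

```c
/* Assign "rank" to an I/O group and identify the group's leader.
   Returns nonzero if this rank is a leader. */
int io_group_setup(int rank, int nproc, int N_IORanks,
                   int *group, int *leader)
{
  int group_size = nproc / N_IORanks;   /* ranks per I/O group          */
  *group  = rank / group_size;          /* which group this rank is in  */
  *leader = (*group) * group_size;      /* lowest rank acts as leader   */
  return (rank == *leader);
}
```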
int MPIExchangeBoundaries1D(void *m, double *x, int N, int ghosts, int dir, int ndims)
Exchange boundary (ghost point) values for an essentially 1D array (like grid coordinates)
Exchange data across MPI ranks and fill in ghost points for a 1D array. In a multidimensional simulation, a 1D array is an array of data along one of the spatial dimensions, i.e., it is an array storing a variable that varies in only one of the spatial dimensions. For example, for a 2D problem on a Cartesian grid (with spatial dimensions x and y), the array of x-coordinates is a 1D array along x, and the array of y-coordinates is a 1D array along y. Thus, the size of the 1D array is equal to the size of the domain along the spatial dimension corresponding to that array.
Consider a two-dimensional problem partitioned on 21 MPI ranks (7 along dimension 0, 3 along dimension 1). If the argument dir is 0, the 1D array lies along dimension 0 and each rank exchanges ghost-point data with its neighboring ranks along dimension 0; if dir is 1, the array lies along dimension 1 and the exchange is with the neighbors along dimension 1.
Parameters:
  m        MPI object of type MPIVariables
  x        The 1D array for which to exchange data
  N        Size of the array
  ghosts   Number of ghost points
  dir      Spatial dimension corresponding to the 1D array
  ndims    Number of spatial dimensions in the simulation
Definition at line 32 of file MPIExchangeBoundaries1D.c.
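A hypothetical usage sketch for a 2D simulation; mpi, x_coords, dim_local, and ghosts are assumed names for the MPIVariables object, the local x-coordinate array, the local sizes, and the ghost-point count:

```c
/* Fill the ghost points of the x-coordinate array (dir = 0) */
int ierr = MPIExchangeBoundaries1D(&mpi, x_coords, dim_local[0],
                                   ghosts, 0, 2);
```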
int MPIExchangeBoundariesnD(int ndims, int nvars, int *dim, int ghosts, void *m, double *var)
Exchange boundary (ghost point) values for an n-dimensional array (like the solution array)
Exchange data across MPI ranks, and fill in ghost points for an n-dimensional array (where n is the total number of spatial dimensions). If any of the physical boundaries are periodic, this function also exchanges data and fills in the ghost points for these boundaries.
The n-dimensional array must be stored in memory as a single-index array, with the following order of mapping: the nvars vector components at a given grid point are stored contiguously, and the grid points themselves are ordered with spatial dimension 0 as the fastest-varying index, followed by dimension 1, and so on.
For example, consider a 2D simulation (ndims = 2) of size \(7 \times 3\) with \(4\) vector components (nvars = 4). In this layout (ignoring ghost points), elements 40, 41, 42, and 43 of the array are the 1st, 2nd, 3rd, and 4th vector components at grid location (1,3).
If \({\bf i}\left[{\rm ndims}\right]\) is an integer array representing an n-dimensional index (for example, \(\left(5,4\right)\) in 2D, \(\left(3,5,2\right)\) in 3D), and the number of vector components is nvars, then the v-th vector component at \({\bf i}\) is stored at 1D index \({\rm nvars}\left[i_0 + d_0\left(i_1 + d_1\left(i_2 + \cdots\right)\right)\right] + v\), where \(d_k\) is the array size along dimension \(k\) (for arrays with ghost points, each \(i_k\) is offset by the number of ghost points and each \(d_k\) is increased by twice that number). A sketch of this computation in code follows.
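A minimal C sketch of the index computation, consistent with the layout just described (HyPar implements this via array-indexing macros; this standalone function is illustrative):

```c
/* 1D offset of vector component v at grid point i[] in an array with
   local interior sizes dim[] and "ghosts" ghost points per side.
   Dimension 0 is the fastest-varying spatial index. */
static inline int array_index_1d(int ndims, const int *dim, const int *i,
                                 int ghosts, int nvars, int v)
{
  int p = i[ndims-1] + ghosts;
  for (int d = ndims-2; d >= 0; d--)
    p = p * (dim[d] + 2*ghosts) + (i[d] + ghosts);
  return nvars * p + v;
}
```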
Parameters:
  ndims    Number of spatial dimensions
  nvars    Number of variables (vector components) at each grid location
  dim      Integer array whose elements are the local size along each spatial dimension
  ghosts   Number of ghost points
  m        MPI object of type MPIVariables
  var      The array for which to exchange data and fill in ghost points
Definition at line 42 of file MPIExchangeBoundariesnD.c.
int MPIGatherArray1D(void *m, double *xg, double *x, int istart, int iend, int N_local, int ghosts)
Gather local arrays into a global array for an essentially 1D array
Gathers the contents of a 1D array (partitioned amongst MPI ranks) into a global 1D array on the root rank (rank 0). See documentation of MPIExchangeBoundaries1D() on what a "1D array" is in the context of a multidimensional simulation. The 1D array must be the same along spatial dimensions normal to the one it represents.
Parameters:
  m         MPI object of type MPIVariables
  xg        Global 1D array (must be preallocated) without ghost points
  x         Local 1D array to be gathered
  istart    Starting index (global) of this rank's portion of the array
  iend      Ending index (global) of this rank's portion of the array + 1
  N_local   Local size of the array
  ghosts    Number of ghost points
Definition at line 26 of file MPIGatherArray1D.c.
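A sketch of the send-to-root pattern such a gather can follow; the tags and the explicit exchange of each rank's extents are illustrative choices, and the actual code in MPIGatherArray1D.c may differ:

```c
#include <mpi.h>
#include <string.h>

/* Gather each rank's interior points x[ghosts .. ghosts+n_local-1]
   into xg[istart .. iend-1] on rank 0. */
void gather_1d(double *xg, const double *x, int istart, int iend,
               int ghosts, int rank, int nproc)
{
  int n_local = iend - istart;
  if (rank) {
    MPI_Send(&istart, 1, MPI_INT, 0, 1637, MPI_COMM_WORLD);
    MPI_Send(&iend,   1, MPI_INT, 0, 1638, MPI_COMM_WORLD);
    MPI_Send(x + ghosts, n_local, MPI_DOUBLE, 0, 1639, MPI_COMM_WORLD);
  } else {
    memcpy(xg + istart, x + ghosts, n_local * sizeof(double));
    for (int r = 1; r < nproc; r++) {
      int is, ie;
      MPI_Recv(&is, 1, MPI_INT, r, 1637, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      MPI_Recv(&ie, 1, MPI_INT, r, 1638, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      MPI_Recv(xg + is, ie - is, MPI_DOUBLE, r, 1639, MPI_COMM_WORLD,
               MPI_STATUS_IGNORE);
    }
  }
}
```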
int MPIGatherArraynD(int ndims, void *m, double *xg, double *x, int *dim_global, int *dim_local, int ghosts, int nvars)
Gather local arrays into a global array for an n-dimensional array
Gathers the contents of an n-dimensional array, partitioned amongst the MPI ranks, into a global array on rank 0. See the documentation of MPIExchangeBoundariesnD() for how the n-dimensional array is stored in memory as a single-index array.
Parameters:
  ndims        Number of spatial dimensions
  m            MPI object of type MPIVariables
  xg           Global array (preallocated) without ghost points
  x            Local array
  dim_global   Integer array with elements as global size along each spatial dimension
  dim_local    Integer array with elements as local size along each spatial dimension
  ghosts       Number of ghost points
  nvars        Number of variables (vector components)
Definition at line 21 of file MPIGatherArraynD.c.
int MPIGatherArraynDwGhosts(int ndims, void *m, double *xg, double *x, int *dim_global, int *dim_local, int ghosts, int nvars)
Gather local arrays into a global array for an n-dimensional array (with ghosts)
Gathers the contents of an n-dimensional array, partitioned amongst the MPI ranks, into a global array on rank 0. See the documentation of MPIExchangeBoundariesnD() for how the n-dimensional array is stored in memory as a single-index array. This is the same as MPIGatherArraynD(), but the global array has ghost points.
Parameters:
  ndims        Number of spatial dimensions
  m            MPI object of type MPIVariables
  xg           Global array (preallocated) with ghost points
  x            Local array
  dim_global   Integer array with elements as global size along each spatial dimension
  dim_local    Integer array with elements as local size along each spatial dimension
  ghosts       Number of ghost points
  nvars        Number of variables (vector components)
Definition at line 115 of file MPIGatherArraynD.c.
int MPIPartitionArraynD(int ndims, void *m, double *xg, double *x, int *dim_global, int *dim_local, int ghosts, int nvars)
Partition a global array into local arrays for an n-dimensional array
Partitions the contents of a global n-dimensional array on rank 0 (root) to local n-dimensional arrays on all the MPI ranks. See the documentation of MPIExchangeBoundariesnD() for how the n-dimensional array is stored in memory as a single-index array.
Parameters:
  ndims        Number of spatial dimensions
  m            MPI object of type MPIVariables
  xg           Global array (preallocated) without ghost points
  x            Local array
  dim_global   Integer array with elements as global size along each spatial dimension
  dim_local    Integer array with elements as local size along each spatial dimension
  ghosts       Number of ghost points
  nvars        Number of variables (vector components)
Definition at line 21 of file MPIPartitionArraynD.c.
int MPIPartitionArraynDwGhosts(int ndims, void *m, double *xg, double *x, int *dim_global, int *dim_local, int ghosts, int nvars)
Partition a global array into local arrays for an n-dimensional array (with ghosts)
Partitions the contents of a global n-dimensional array on rank 0 (root) to local n-dimensional arrays on all the MPI ranks. See the documentation of MPIExchangeBoundariesnD() for how the n-dimensional array is stored in memory as a single-index array. This is the same as MPIPartitionArraynD(), but the global array has ghost points.
Parameters:
  ndims        Number of spatial dimensions
  m            MPI object of type MPIVariables
  xg           Global array (preallocated) with ghost points
  x            Local array
  dim_global   Integer array with elements as global size along each spatial dimension
  dim_local    Integer array with elements as local size along each spatial dimension
  ghosts       Number of ghost points
  nvars        Number of variables (vector components)
Definition at line 114 of file MPIPartitionArraynD.c.
int MPIPartitionArray1D(void *m, double *xg, double *x, int istart, int iend, int N_local, int ghosts)
Partition a global array into local arrays for an essentially 1D array
Partitions the contents of a global 1D array on the root rank (rank 0) to local arrays on all the MPI ranks. See documentation of MPIExchangeBoundaries1D() on what a "1D array" is in the context of a multidimensional simulation. The 1D array must be the same along spatial dimensions normal to the one it represents.
Parameters:
  m         MPI object of type MPIVariables
  xg        Global 1D array (must be preallocated) without ghost points
  x         Local 1D array (to be filled with this rank's portion)
  istart    Starting index (global) of this rank's portion of the array
  iend      Ending index (global) of this rank's portion of the array + 1
  N_local   Local size of the array
  ghosts    Number of ghost points
Definition at line 25 of file MPIPartitionArray1D.c.
int MPIGetArrayDatanD(double *xbuf, double *x, int *source, int *dest, int *limits, int *dim, int ghosts, int ndims, int nvars, void *m)
Fetch data from an n-dimensional local array on another rank
This function lets one rank get a portion of a local n-dimensional array on another rank. The n-dimensional array must be stored in the memory as a single-index array as described in the documentation of MPIExchangeBoundariesnD(). The source rank sends to the dest rank a logically rectangular n-dimensional portion of its local copy of an array x. The extent of this logically rectangular portion is defined by limits.
Parameters:
  xbuf     preallocated memory on destination rank to hold the received data
  x        local array of which a part is needed
  source   MPI rank of the source
  dest     MPI rank of the destination
  limits   Integer array (of size 2*ndims) with the start and end indices along each spatial dimension of the desired portion of the array
  dim      Integer array whose elements are the local size of x in each spatial dimension
  ghosts   Number of ghost points
  ndims    Number of spatial dimensions
  nvars    Number of variables (vector components)
  m        MPI object of type MPIVariables
Definition at line 21 of file MPIGetArrayDatanD.c.
int MPILocalDomainLimits(int ndims, int p, void *m, int *dim_global, int *is, int *ie)
Calculate the local domain limits/extent in terms of the global domain
Computes the local domain limits: given an MPI rank p, this function computes the starting and ending indices of the local domain on this rank. These indices are global.
Parameters:
  ndims        Number of spatial dimensions
  p            MPI rank
  m            MPI object of type MPIVariables
  dim_global   Integer array with elements as global size in each spatial dimension
  is           Integer array whose elements will contain the starting index of the local domain on rank p
  ie           Integer array whose elements will contain the ending index of the local domain on rank p
Definition at line 18 of file MPILocalDomainLimits.c.
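A sketch of how these limits can be computed from MPIPartition1D(); ip and iproc follow the MPIVariables naming (rank p's n-dimensional rank and the number of ranks per dimension), but this is illustrative rather than the exact code in MPILocalDomainLimits.c:

```c
#include <mpivars.h>  /* for MPIPartition1D() */

void local_domain_limits(int ndims, const int *ip, const int *iproc,
                         const int *dim_global, int *is, int *ie)
{
  for (int d = 0; d < ndims; d++) {
    int start = 0;
    /* sum the local sizes of all subdomains preceding this rank along d */
    for (int r = 0; r < ip[d]; r++)
      start += MPIPartition1D(dim_global[d], iproc[d], r);
    is[d] = start;
    ie[d] = start + MPIPartition1D(dim_global[d], iproc[d], ip[d]);
  }
}
```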
int MPIMax_integer(int *global, int *var, int size, void *comm)
Find the maximum in an integer array over all ranks
Compute the global maximum over all MPI ranks in a given communicator for int datatype.
Parameters:
  global   array to contain the global maximums
  var      the local array
  size     size of the local array
  comm     MPI communicator
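Since the result is described as a global maximum over all ranks, these wrappers plausibly reduce with MPI_Allreduce() so that every rank receives it; a minimal sketch (the use of MPI_Allreduce, rather than MPI_Reduce plus a broadcast, is an assumption):

```c
#include <mpi.h>

int MPIMax_integer(int *global, int *var, int size, void *comm)
{
  MPI_Comm *mpi_comm = (MPI_Comm*) comm;
  MPI_Allreduce(var, global, size, MPI_INT, MPI_MAX, *mpi_comm);
  return 0;
}
```

The remaining reductions below differ only in datatype (MPI_LONG, MPI_DOUBLE) and operation (MPI_MIN, MPI_SUM).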
int MPIMax_long(long *, long *, int, void *)
Find the maximum in a long integer array over all ranks
int MPIMax_double(double *global, double *var, int size, void *comm)
Find the maximum in a double array over all ranks
Compute the global maximum over all MPI ranks in a given communicator for double datatype.
Parameters:
  global   array to contain the global maximums
  var      the local array
  size     size of the local array
  comm     MPI communicator
int MPIMin_integer(int *global, int *var, int size, void *comm)
Find the minimum in an integer array over all ranks
Compute the global minimum over all MPI ranks in a given communicator for int datatype.
Parameters:
  global   array to contain the global minimums
  var      the local array
  size     size of the local array
  comm     MPI communicator
int MPIMin_double(double *global, double *var, int size, void *comm)
Find the minimum in a double array over all ranks
Compute the global minimum over all MPI ranks in a given communicator for double datatype.
Parameters:
  global   array to contain the global minimums
  var      the local array
  size     size of the local array
  comm     MPI communicator
int MPISum_double(double *global, double *var, int size, void *comm)
Calculate the sum of an array of doubles over all ranks
Compute the global sum over all MPI ranks in a given communicator for double datatype.
Parameters:
  global   array to contain the global sums
  var      the local array
  size     size of the local array
  comm     MPI communicator
int MPISum_integer(int *global, int *var, int size, void *comm)
Calculate the sum of an array of integers over all ranks
Compute the global sum over all MPI ranks in a given communicator for int datatype.
Parameters:
  global   array to contain the global sums
  var      the local array
  size     size of the local array
  comm     MPI communicator
int MPIPartition1D(int nglobal, int nproc, int rank)
Partition (along a dimension) the domain given global size and number of ranks
Given a 1D array of global size nglobal, and the total number of MPI ranks nproc on which it will be partitioned, this function computes the size of the local part of the 1D array on the rank given by rank.
Parameters:
  nglobal   Global size
  nproc     Total number of ranks
  rank      Rank
Definition at line 14 of file MPIPartition1D.c.
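A sketch of a common partitioning rule consistent with this description; how the remainder points are distributed is an assumption, not necessarily what MPIPartition1D.c does:

```c
/* Local size on "rank": nglobal/nproc points each, with the remainder
   spread over the first nglobal%nproc ranks (remainder policy assumed). */
int partition_1d(int nglobal, int nproc, int rank)
{
  return nglobal/nproc + (rank < nglobal % nproc ? 1 : 0);
}
```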
int MPIRank1D(int ndims, int *iproc, int *ip)
Calculate 1D rank from the n-dimensional rank
This function returns the 1D rank, given the n-dimensional rank and the total number of MPI ranks along each spatial dimension.
1D Rank: This is the rank of the process in the communicator.
n-Dimensional Rank: This represents an integer array, where each element is the rank of the process along a spatial dimension.
Consider a 2D simulation running with 21 MPI ranks: 7 along the x direction and 3 along the y direction. With ranks ordered x-fastest (consistent with the array layout described in MPIExchangeBoundariesnD()), the process with n-dimensional rank (3,1) has 1D rank \(1 \times 7 + 3 = 10\).
Parameters:
  ndims   Number of spatial dimensions
  iproc   Integer array whose elements are the number of MPI ranks along each dimension
  ip      Integer array whose elements are the rank of this process along each dimension
Definition at line 26 of file MPIRank1D.c.
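A sketch of this mapping, using the dimension-0-fastest ordering assumed in the example above:

```c
/* 1D rank from the n-dimensional rank ip[], given iproc[] ranks
   along each dimension. */
int rank_1d(int ndims, const int *iproc, const int *ip)
{
  int rank = ip[ndims-1];
  for (int d = ndims-2; d >= 0; d--) rank = rank * iproc[d] + ip[d];
  return rank;
}
```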
int MPIRanknD(int ndims, int rank, int *iproc, int *ip)
Calculate the n-dimensional rank from the 1D rank
This function computes the n-dimensional rank, given the 1D rank and the total number of MPI ranks along each spatial dimension.
1D Rank: This is the rank of the process in the communicator.
n-Dimensional Rank: This represents an integer array, where each element is the rank of the process along a spatial dimension.
Consider a 2D simulation running with 21 MPI ranks: 7 along the x direction and 3 along the y direction. With ranks ordered x-fastest, 1D rank 10 corresponds to the n-dimensional rank (3,1).
Parameters:
  ndims   Number of spatial dimensions
  rank    1D rank
  iproc   Integer array whose elements are the number of MPI ranks along each dimension
  ip      Integer array whose elements are the rank of this process along each dimension
Definition at line 27 of file MPIRanknD.c.
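A sketch of the inverse mapping by successive division (the counterpart of the rank_1d sketch above):

```c
/* Recover the n-dimensional rank ip[] of 1D rank "rank". */
int rank_nd(int ndims, int rank, const int *iproc, int *ip)
{
  for (int d = 0; d < ndims; d++) {
    ip[d] = rank % iproc[d];
    rank /= iproc[d];
  }
  return 0;
}
```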
void MPIGetFilename(char *root, void *c, char *filename)
Generate a unique filename given the rank of the process to let that process write to its own file
Get a string representing a filename indexed by the MPI rank: filename = root.index, where index is the string corresponding to the MPI rank.
Parameters:
  root       filename root
  c          MPI communicator
  filename   the generated filename
Definition at line 19 of file MPIGetFilename.c.
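A sketch of the construction; the zero-padded width and the exact format of the index string are assumptions, not necessarily what MPIGetFilename.c produces:

```c
#include <stdio.h>
#include <mpi.h>

/* filename = root.index, where index is built from the MPI rank */
void get_filename(const char *root, MPI_Comm comm, char *filename)
{
  int rank;
  MPI_Comm_rank(comm, &rank);
  sprintf(filename, "%s.%05d", root, rank);
}
```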
int gpuMPIExchangeBoundariesnD(int, int, const int *, int, void *, double *)
GPU version of MPIExchangeBoundariesnD()