HyPar 1.0: Finite-Difference Hyperbolic-Parabolic PDE Solver on Cartesian Grids
ReadArray.c File Reference
Read in a vector field from file.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <basic.h>
#include <arrayfunctions.h>
#include <mpivars.h>
#include <hypar.h>
Functions
static int ReadArraySerial (int, int, int *, int *, int, void *, void *, double *, double *, char *, int *)
static int ReadArrayParallel (int, int, int *, int *, int, void *, void *, double *, double *, char *, int *)
static int ReadArrayMPI_IO (int, int, int *, int *, int, void *, void *, double *, double *, char *, int *)
int ReadArray (int ndims, int nvars, int *dim_global, int *dim_local, int ghosts, void *s, void *m, double *x, double *u, char *fname_root, int *read_flag)
Read in a vector field from file.
Definition in file ReadArray.c.
static int ReadArraySerial (int ndims, int nvars, int *dim_global, int *dim_local, int ghosts, void *s, void *m, double *x, double *u, char *fname_root, int *read_flag)
Read an array in a serial fashion: for a multi-processor simulation, only rank 0 reads the entire solution from the file and then distributes the relevant portions to each of the other processors. This requires allocating memory for the global domain on rank 0, so do not use this mode for large domains. This approach is also not very scalable when running with a very large number of processors (> ~1000). Supports both binary and ASCII formats.
The name of the file being read is <fname_root>.inp
ASCII format:
The input file should contain the ASCII data as follows (a small example program that writes such a file is sketched after this layout):
x0_i (0 <= i < dim_global[0])
x1_i (0 <= i < dim_global[1])
...
x{ndims-1}_i (0 <= i < dim_global[ndims-1])
u0_p (0 <= p < N)
u1_p (0 <= p < N)
...
u{nvars-1}_p (0 <= p < N)
where
x0, x1, ..., x{ndims-1} represent the spatial dimensions (for a 3D problem, x0 = x, x1 = y, x2 = z),
u0, u1, ..., u{nvars-1} are each component of the vector u,
N = dim_global[0]*dim_global[1]*...*dim_global[ndims-1] is the total number of points,
and p = i0 + dim_global[0]*( i1 + dim_global[1]*( i2 + dim_global[2]*( ... + dim_global[ndims-2]*i{ndims-1} ))) (see _ArrayIndexnD_)
with i0, i1, i2, etc representing grid indices along each spatial dimension, i.e.,
0 <= i0 < dim_global[0]
0 <= i1 < dim_global[1]
...
0 <= i{ndims-1} < dim_global[ndims-1]
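As an illustration only (not part of HyPar), the following minimal sketch is a standalone C program that writes a 2D, single-component field (ndims = 2, nvars = 1) in the ASCII layout above. The grid sizes, the file name "initial.inp" (i.e. assuming fname_root is "initial"), and the initial values are all assumptions made for this example:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
  const int NI = 64, NJ = 32;  /* assumed dim_global[0], dim_global[1] */
  double *x = (double*) calloc(NI,    sizeof(double));
  double *y = (double*) calloc(NJ,    sizeof(double));
  double *u = (double*) calloc(NI*NJ, sizeof(double));

  /* assumed grid and initial solution */
  for (int i = 0; i < NI; i++) x[i] = (double)i / (double)NI;
  for (int j = 0; j < NJ; j++) y[j] = (double)j / (double)NJ;
  for (int j = 0; j < NJ; j++) {
    for (int i = 0; i < NI; i++) {
      int p = i + NI*j;        /* p = i0 + dim_global[0]*i1 (see _ArrayIndexnD_) */
      u[p] = x[i]*y[j];
    }
  }

  FILE *out = fopen("initial.inp", "w");
  if (!out) return 1;
  for (int i = 0; i < NI;    i++) fprintf(out, "%1.16e ", x[i]);  /* x0_i */
  fprintf(out, "\n");
  for (int j = 0; j < NJ;    j++) fprintf(out, "%1.16e ", y[j]);  /* x1_i */
  fprintf(out, "\n");
  for (int p = 0; p < NI*NJ; p++) fprintf(out, "%1.16e ", u[p]);  /* u0_p */
  fprintf(out, "\n");
  fclose(out);

  free(x); free(y); free(u);
  return 0;
}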
Binary format:
The input file should contain the binary data as follows (again, a writer sketch follows the layout):
x0_i (0 <= i < dim_global[0])
x1_i (0 <= i < dim_global[1])
...
x{ndims-1}_i (0 <= i < dim_global[ndims-1])
[u0,u1,...,u{nvars-1}]_p (0 <= p < N) (with no commas)
where
x0, x1, ..., x{ndims-1} represent the spatial dimensions (for a 3D problem, x0 = x, x1 = y, x2 = z),
u0, u1, ..., u{nvars-1} are each component of the vector u at a grid point,
N = dim_global[0]*dim_global[1]*...*dim_global[ndims-1] is the total number of points,
and p = i0 + dim_global[0]*( i1 + dim_global[1]*( i2 + dim_global[2]*( ... + dim_global[ndims-2]*i{ndims-1} ))) (see _ArrayIndexnD_)
with i0, i1, i2, etc representing grid indices along each spatial dimension, i.e.,
0 <= i0 < dim_global[0]
0 <= i1 < dim_global[1]
...
0 <= i{ndims-1} < dim_global[ndims-1]
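Correspondingly, here is a minimal sketch that writes the binary layout with fwrite(), under the same assumptions as the ASCII example above (2D, single component, fname_root assumed to be "initial"):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
  const int NI = 64, NJ = 32;  /* assumed dim_global[0], dim_global[1] */
  double *x = (double*) calloc(NI,    sizeof(double));
  double *y = (double*) calloc(NJ,    sizeof(double));
  double *u = (double*) calloc(NI*NJ, sizeof(double));
  for (int i = 0; i < NI; i++) x[i] = (double)i / (double)NI;
  for (int j = 0; j < NJ; j++) y[j] = (double)j / (double)NJ;
  for (int j = 0; j < NJ; j++)
    for (int i = 0; i < NI; i++)
      u[i + NI*j] = x[i]*y[j];           /* p = i0 + dim_global[0]*i1 */

  FILE *out = fopen("initial.inp", "wb");
  if (!out) return 1;
  fwrite(x, sizeof(double), NI,    out);  /* x0_i, 0 <= i < dim_global[0] */
  fwrite(y, sizeof(double), NJ,    out);  /* x1_i, 0 <= i < dim_global[1] */
  fwrite(u, sizeof(double), NI*NJ, out);  /* [u0]_p; interleave components if nvars > 1 */
  fclose(out);

  free(x); free(y); free(u);
  return 0;
}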
For serial runs, this is the only input mode (of course!).
Parameters:
    ndims       Number of spatial dimensions
    nvars       Number of variables per grid point
    dim_global  Integer array of size ndims with global grid size in each dimension
    dim_local   Integer array of size ndims with local grid size in each dimension
    ghosts      Number of ghost points
    s           Solver object of type HyPar
    m           MPI object of type MPIVariables
    x           Grid associated with the array (can be NULL)
    u           Array to hold the vector field being read
    fname_root  Filename root
    read_flag   Flag to indicate if the file was read
Definition at line 150 of file ReadArray.c.
static int ReadArrayParallel (int ndims, int nvars, int *dim_global, int *dim_local, int ghosts, void *s, void *m, double *x, double *u, char *fname_root, int *read_flag)
Read in a vector field in a parallel fashion: The number of MPI ranks participating in file I/O is specified as an input (MPIVariables::N_IORanks). All the MPI ranks are divided into that many I/O groups, with one rank in each group as the "leader" that does the file reading and writing. For reading in the solution, the leader of an I/O group reads its own file and distributes the solution to the processors in its group. The number of I/O groups is typically specified as the number of I/O nodes available on the HPC platform, given the number of compute nodes the code is running on. This is a good balance between all the processors serially reading from the same file, and having as many files (with the local solution) as the number of processors. This approach has been observed to be very scalable (up to ~ 100,000 - 1,000,000 processors).
There should be as many files as the number of I/O ranks (MPIVariables::N_IORanks). The files should be named <fname_root>_par.inp.<nnnn>, where <nnnn> is the string of format "%04d" corresponding to the integer n, 0 <= n < MPIVariables::N_IORanks.
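For illustration, a small sketch of how such a filename could be constructed for I/O group n (the helper function name is hypothetical, not part of HyPar):

#include <stdio.h>

/* Hypothetical helper: build the name of the file read by I/O group n,
   following the <fname_root>_par.inp.<nnnn> convention described above. */
void GetParallelInputFilename(const char *fname_root, int n, char *filename, size_t size)
{
  snprintf(filename, size, "%s_par.inp.%04d", fname_root, n);
  /* e.g. fname_root = "initial", n = 3  ->  "initial_par.inp.0003" */
}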
Each file should contain the following data:
{
x0_i (0 <= i < dim_local[0])
x1_i (0 <= i < dim_local[1])
...
x{ndims-1}_i (0 <= i < dim_local[ndims-1])
[u0,u1,...,u{nvars-1}]_p (0 <= p < N) (with no commas)
where
x0, x1, ..., x{ndims-1} represent the spatial dimensions (for a 3D problem, x0 = x, x1 = y, x2 = z),
u0, u1, ..., u{nvars-1} are each component of the vector u at a grid point,
N = dim_local[0]*dim_local[1]*...*dim_local[ndims-1] is the total number of points,
and p = i0 + dim_local[0]*( i1 + dim_local[1]*( i2 + dim_local[2]*( ... + dim_local[ndims-2]*i{ndims-1} ))) (see _ArrayIndexnD_)
with i0, i1, i2, etc representing grid indices along each spatial dimension, i.e.,
0 <= i0 < dim_local[0]
0 <= i1 < dim_local[1]
...
0 <= i{ndims-1} < dim_local[ndims-1]
}
for each rank in the IO group corresponding to the file being read.
Parameters:
    ndims       Number of spatial dimensions
    nvars       Number of variables per grid point
    dim_global  Integer array of size ndims with global grid size in each dimension
    dim_local   Integer array of size ndims with local grid size in each dimension
    ghosts      Number of ghost points
    s           Solver object of type HyPar
    m           MPI object of type MPIVariables
    x           Grid associated with the array (can be NULL)
    u           Array to hold the vector field being read
    fname_root  Filename root
    read_flag   Flag to indicate if the file was read
Definition at line 341 of file ReadArray.c.
static int ReadArrayMPI_IO (int ndims, int nvars, int *dim_global, int *dim_local, int ghosts, void *s, void *m, double *x, double *u, char *fname_root, int *read_flag)
Read in an array in a parallel fashion using MPI-IO: Similar to ReadArrayParallel(), except that the I/O leaders read from the same file using the MPI I/O routines, by calculating their respective offsets and reading the correct chunk of data from that offset. The MPI-IO functions (part of MPICH) are constantly being developed to be scalable on the latest and greatest HPC platforms.
There should be one file, named <fname_root>_mpi.inp. It should contain the following data (a sketch of the per-rank offsets implied by this layout is given after it):
{
x0_i (0 <= i < dim_local[0])
x1_i (0 <= i < dim_local[1])
...
x{ndims-1}_i (0 <= i < dim_local[ndims-1])
[u0,u1,...,u{nvars-1}]_p (0 <= p < N) (with no commas)
where
x0, x1, ..., x{ndims-1} represent the spatial dimensions (for a 3D problem, x0 = x, x1 = y, x2 = z),
u0, u1, ..., u{nvars-1} are each component of the vector u at a grid point,
N = dim_local[0]*dim_local[1]*...*dim_local[ndims-1] is the total number of points,
and p = i0 + dim_local[0]*( i1 + dim_local[1]*( i2 + dim_local[2]*( ... + dim_local[ndims-2]*i{ndims-1} ))) (see _ArrayIndexnD_)
with i0, i1, i2, etc representing grid indices along each spatial dimension, i.e.,
0 <= i0 < dim_local[0]
0 <= i1 < dim_local[1]
...
0 <= i{ndims-1} < dim_local[ndims-1]
}
for each rank, in the order of rank number (0 to nproc-1).
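The layout above implies a straightforward way to compute each rank's offset into the single file: the offset (in number of double-precision values) of rank r is the sum of the block sizes of ranks 0 through r-1. A minimal sketch of that block-size computation follows; this is a hypothetical helper illustrating the arithmetic, not HyPar's actual implementation, which may organize it differently:

/* Hypothetical helper: number of doubles occupied by one rank's block in
   <fname_root>_mpi.inp, following the layout above (the ndims grid
   segments followed by the interleaved solution). */
long BlockSizeInDoubles(int ndims, int nvars, const int *dim_local)
{
  long grid_size = 0, npoints = 1;
  for (int d = 0; d < ndims; d++) {
    grid_size += dim_local[d];   /* x0_i, x1_i, ..., x{ndims-1}_i segments */
    npoints   *= dim_local[d];   /* N = dim_local[0]*...*dim_local[ndims-1] */
  }
  return grid_size + (long)nvars * npoints;  /* + [u0,...,u{nvars-1}]_p */
}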
Parameters:
    ndims       Number of spatial dimensions
    nvars       Number of variables per grid point
    dim_global  Integer array of size ndims with global grid size in each dimension
    dim_local   Integer array of size ndims with local grid size in each dimension
    ghosts      Number of ghost points
    s           Solver object of type HyPar
    m           MPI object of type MPIVariables
    x           Grid associated with the array (can be NULL)
    u           Array to hold the vector field being read
    fname_root  Filename root
    read_flag   Flag to indicate if the file was read
Definition at line 513 of file ReadArray.c.
int ReadArray(
    int     ndims,
    int     nvars,
    int    *dim_global,
    int    *dim_local,
    int     ghosts,
    void   *s,
    void   *m,
    double *x,
    double *u,
    char   *fname_root,
    int    *read_flag
)
Read in a vector field from file: wrapper function that calls the appropriate function depending on input mode (HyPar::input_mode).
The mode and type of input are specified through HyPar::input_mode and HyPar::ip_file_type. A vector field is read from file and stored in an array.
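In rough terms, the wrapper's body is a dispatch on the input mode string. The following sketch assumes the HyPar::input_mode strings are "serial", "parallel", and "mpi-io"; see ReadArray.c itself for the authoritative logic:

/* Sketch only: dispatch inside ReadArray(), assuming HyPar::input_mode
   holds one of the strings "serial", "parallel", or "mpi-io". */
HyPar *solver = (HyPar*) s;
int ierr = 0;
if      (!strcmp(solver->input_mode, "serial"  )) ierr = ReadArraySerial  (ndims, nvars, dim_global, dim_local, ghosts, s, m, x, u, fname_root, read_flag);
else if (!strcmp(solver->input_mode, "parallel")) ierr = ReadArrayParallel(ndims, nvars, dim_global, dim_local, ghosts, s, m, x, u, fname_root, read_flag);
else if (!strcmp(solver->input_mode, "mpi-io"  )) ierr = ReadArrayMPI_IO  (ndims, nvars, dim_global, dim_local, ghosts, s, m, x, u, fname_root, read_flag);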
Parameters:
    ndims       Number of spatial dimensions
    nvars       Number of variables per grid point
    dim_global  Integer array of size ndims with global grid size in each dimension
    dim_local   Integer array of size ndims with local grid size in each dimension
    ghosts      Number of ghost points
    s           Solver object of type HyPar
    m           MPI object of type MPIVariables
    x           Grid associated with the array (can be NULL)
    u           Array to hold the vector field
    fname_root  Filename root
    read_flag   Flag to indicate if the file was read
Definition at line 25 of file ReadArray.c.