HyPar  1.0
Finite-Difference Hyperbolic-Parabolic PDE Solver on Cartesian Grids
Interp1PrimFifthOrderCompactUpwindChar.c File Reference

Characteristic-based 5th order Compact Upwind Scheme.

#include <stdio.h>
#include <basic.h>
#include <arrayfunctions.h>
#include <matmult_native.h>
#include <mathfunctions.h>
#include <interpolation.h>
#include <tridiagLU.h>
#include <mpivars.h>
#include <hypar.h>


Macros

#define _MINIMUM_GHOSTS_   3
 

Functions

int Interp1PrimFifthOrderCompactUpwindChar (double *fI, double *fC, double *u, double *x, int upw, int dir, void *s, void *m, int uflag)
 5th order compact upwind reconstruction (characteristic-based) on a uniform grid
 

Detailed Description

Characteristic-based 5th order Compact Upwind Scheme.

Author
Debojyoti Ghosh

Definition in file Interp1PrimFifthOrderCompactUpwindChar.c.

Macro Definition Documentation

#define _MINIMUM_GHOSTS_   3

Minimum number of ghost points required by this interpolation method: at an interface on the physical boundary, the five-point stencil reaches three cells beyond the domain.

Definition at line 25 of file Interp1PrimFifthOrderCompactUpwindChar.c.

Function Documentation

int Interp1PrimFifthOrderCompactUpwindChar ( double *fI,
                                             double *fC,
                                             double *u,
                                             double *x,
                                             int     upw,
                                             int     dir,
                                             void   *s,
                                             void   *m,
                                             int     uflag )

5th order compact upwind reconstruction (characteristic-based) on a uniform grid

Computes the interpolated values of the first primitive of a function \({\bf f}\left({\bf u}\right)\) at the interfaces from the cell-centered values of the function using the fifth order compact upwind scheme on a uniform grid. The first primitive is defined as a function \({\bf h}\left({\bf u}\right)\) that satisfies:

\begin{equation} {\bf f}\left({\bf u}\left(x\right)\right) = \frac{1}{\Delta x} \int_{x-\Delta x/2}^{x+\Delta x/2} {\bf h}\left({\bf u}\left(\zeta\right)\right)d\zeta, \end{equation}

where \(x\) is the spatial coordinate along the dimension of the interpolation. Differentiating this relation shows that \(\partial {\bf f}/\partial x\) at a cell center is given exactly by \(\left({\bf h}_{j+1/2}-{\bf h}_{j-1/2}\right)/\Delta x\), which is why interpolating \({\bf h}\) at the interfaces yields a conservative finite-difference approximation of the flux derivative. This function computes the 5th order compact upwind numerical approximation \(\hat{\bf f}_{j+1/2} \approx {\bf h}_{j+1/2}\) as:

\begin{align} \frac{3}{10}\hat{\alpha}^k_{j-1/2} + \frac{6}{10}\hat{\alpha}^k_{j+1/2} + \frac{1}{10}\hat{\alpha}^k_{j+3/2} = \frac{1}{30}{\alpha}^k_{j-1} + \frac{19}{30}{\alpha}^k_j + \frac{1}{3}{\alpha}^k_{j+1}, \end{align}

where

\begin{equation} \alpha^k = {\bf l}_k \cdot {\bf f},\ k=1,\cdots,n \end{equation}

is the \(k\)-th characteristic quantity, where \({\bf l}_k\) and \({\bf r}_k\) are the \(k\)-th left and right eigenvectors, and \(n\) is HyPar::nvars. The resulting block tridiagonal system is solved using blocktridiagLU() (see also TridiagLU, tridiagLU.h). The final interpolated function is computed from the interpolated characteristic quantities as:

\begin{equation} \hat{\bf f}_{j+1/2} = \sum_{k=1}^n \hat{\alpha}^k_{j+1/2} {\bf r}_k \end{equation}
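
At an interface lying on a physical domain boundary, the implicit relation above is replaced by an explicit fifth-order upwind formula; this is the boundary closure implemented in the source listing below:

\begin{equation} \hat{\alpha}^k_{j+1/2} = \frac{1}{30}\alpha^k_{j-2} - \frac{13}{60}\alpha^k_{j-1} + \frac{47}{60}\alpha^k_{j} + \frac{27}{60}\alpha^k_{j+1} - \frac{1}{20}\alpha^k_{j+2}. \end{equation}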

Implementation Notes:

  • This method assumes a uniform grid in the spatial dimension corresponding to the interpolation.
  • The method described above corresponds to a left-biased interpolation. The corresponding right-biased interpolation can be obtained by reflecting the equations about interface j+1/2.
  • The left and right eigenvectors are computed at an averaged state at interface j+1/2. This function therefore requires the physical model to provide functions that compute the average state and the left and right eigenvectors: HyPar::AveragingFunction, HyPar::GetLeftEigenvectors, and HyPar::GetRightEigenvectors (see the sketch after this list for how the eigenvectors enter the reconstruction).

    If the physical model does not provide these functions, a characteristic-based interpolation cannot be used.

  • The function computes the interpolant for the entire grid in one call. It loops over all the grid lines along the interpolation direction and carries out the 1D interpolation along these grid lines.
  • Location of cell-centers and cell interfaces along the spatial dimension of the interpolation is shown in the following figure:
    [Figure chap1_1Ddomain.png: cell centers and interfaces along a 1D grid line in the interpolation dimension]
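
As an illustration of the characteristic projection used by this scheme, the following standalone sketch (hypothetical helper routines, not part of HyPar's API) projects a flux vector onto the characteristic variables with the left eigenvectors and reconstructs the physical components with the right eigenvectors, using the same row-major nvars-by-nvars eigenvector layout as the listing below:

/* alpha^v = l_v . f : project the flux components onto the
   characteristic variables; L stores the left eigenvectors row-major. */
void project_to_characteristic(const double *L, const double *f,
                               double *alpha, int nvars)
{
  for (int v = 0; v < nvars; v++) {
    alpha[v] = 0.0;
    for (int k = 0; k < nvars; k++) alpha[v] += L[v*nvars+k] * f[k];
  }
}

/* f = sum_k alpha^k r_k : reconstruct the physical components; the
   k-th right eigenvector is the k-th column of R. */
void project_from_characteristic(const double *R, const double *alpha,
                                 double *f, int nvars)
{
  for (int v = 0; v < nvars; v++) {
    f[v] = 0.0;
    for (int k = 0; k < nvars; k++) f[v] += R[v*nvars+k] * alpha[k];
  }
}

Note that the implementation below folds the left eigenvectors directly into the block tridiagonal matrix, so the solve returns the physical interface flux and no explicit multiplication by the right eigenvectors is needed when copying the result into fI.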

Function arguments:

Argument Type Explanation
fI double* Array to hold the computed interpolant at the grid interfaces. This array must have the same layout as the solution, but with no ghost points. Its size should be the same as u in all dimensions, except dir (the dimension along which to interpolate) along which it should be larger by 1 (number of interfaces is 1 more than the number of interior cell centers).
fC double* Array with the cell-centered values of the flux function \({\bf f}\left({\bf u}\right)\). This array must have the same layout and size as the solution, with ghost points.
u double* The solution array \({\bf u}\) (with ghost points). If the interpolation is characteristic based, this is needed to compute the eigendecomposition. For a multidimensional problem, the layout is as follows: u is a contiguous 1D array of size (nvars*dim[0]*dim[1]*...*dim[D-1]) corresponding to the multi-dimensional solution, with the following ordering - nvars, dim[0], dim[1], ..., dim[D-1], where nvars is the number of solution components (HyPar::nvars), dim is the local size (HyPar::dim_local), D is the number of spatial dimensions.
x double* The grid array (with ghost points). This is used only by non-uniform-grid interpolation methods. For multidimensional problems, the layout is as follows: x is a contiguous 1D array of size (dim[0]+dim[1]+...+dim[D-1]), with the spatial coordinates along dim[0] stored from 0,...,dim[0]-1, the spatial coordinates along dim[1] stored along dim[0],...,dim[0]+dim[1]-1, and so forth.
upw int Upwinding direction: if positive, a left-biased interpolant will be computed; if negative, a right-biased interpolant will be computed. If the interpolation method is central, then this has no effect.
dir int Spatial dimension along which to interpolate (e.g., 0 for 1D; 0 or 1 for 2D; 0, 1, or 2 for 3D)
s void* Solver object of type HyPar: the following variables are needed - HyPar::ghosts, HyPar::ndims, HyPar::nvars, HyPar::dim_local.
m void* MPI object of type MPIVariables: this is needed only by compact interpolation methods that need to solve a global implicit system across MPI ranks.
uflag int A flag indicating if the function being interpolated \({\bf f}\) is the solution itself \({\bf u}\) (if 1, \({\bf f}\left({\bf u}\right) \equiv {\bf u}\)).
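
A minimal usage sketch (hypothetical driver, not part of HyPar; it assumes fully initialized HyPar and MPIVariables objects, with ghost-padded fC, u, and x arrays and the compact-scheme work arrays already allocated):

#include <stdlib.h>
#include <interpolation.h>
#include <mpivars.h>
#include <hypar.h>

/* Compute the left-biased interface interpolant of the flux fC along
   dimension "dir"; returns a freshly allocated array sized as in the
   table above (one extra point along "dir", no ghost points). */
double *InterpolateFluxAlongDim(HyPar *solver, MPIVariables *mpi,
                                double *fC, double *u, double *x, int dir)
{
  int d, ninterfaces = 1;
  for (d = 0; d < solver->ndims; d++)
    ninterfaces *= (d == dir ? solver->dim_local[d]+1 : solver->dim_local[d]);
  double *fI = (double*) calloc((size_t)solver->nvars*ninterfaces, sizeof(double));
  /* upw = 1 -> left-biased interpolant; uflag = 0 -> f(u) is a generic flux */
  int ierr = Interp1PrimFifthOrderCompactUpwindChar(fI, fC, u, x, 1, dir,
                                                    solver, mpi, 0);
  if (ierr) { free(fI); return NULL; }
  return fI;
}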

References:

  • Ghosh, D., Baeder, J. D., Compact Reconstruction Schemes with Weighted ENO Limiting for Hyperbolic Conservation Laws, SIAM Journal on Scientific Computing, 34 (3), 2012, A1678–A1706, http://dx.doi.org/10.1137/110857659
  • Ghosh, D., Constantinescu, E. M., Brown, J., Efficient Implementation of Nonlinear Compact Schemes on Massively Parallel Platforms, SIAM Journal on Scientific Computing, 37 (3), 2015, C354–C383, http://dx.doi.org/10.1137/140989261
Parameters
fI      Array of interpolated function values at the interfaces
fC      Array of cell-centered values of the function \({\bf f}\left({\bf u}\right)\)
u       Array of cell-centered values of the solution \({\bf u}\)
x       Grid coordinates
upw     Upwind direction (left or right biased)
dir     Spatial dimension along which to interpolate
s       Object of type HyPar containing solver-related variables
m       Object of type MPIVariables containing MPI-related variables
uflag   Flag to indicate if \(f(u) \equiv u\), i.e., if the solution is being reconstructed

Definition at line 90 of file Interp1PrimFifthOrderCompactUpwindChar.c.

{
  HyPar         *solver  = (HyPar*)         s;
  MPIVariables  *mpi     = (MPIVariables*)  m;
  CompactScheme *compact = (CompactScheme*) solver->compact;
  TridiagLU     *lu      = (TridiagLU*)     solver->lusolver;
  int           sys,Nsys,d,v,k;
  _DECLARE_IERR_;

  int ghosts = solver->ghosts;
  int ndims  = solver->ndims;
  int nvars  = solver->nvars;
  int *dim   = solver->dim_local;

  /* define some constants */
  static const double three_by_ten         = 3.0/10.0,
                      six_by_ten           = 6.0/10.0,
                      one_by_ten           = 1.0/10.0,
                      one_by_thirty        = 1.0/30.0,
                      nineteen_by_thirty   = 19.0/30.0,
                      one_third            = 1.0/3.0,
                      thirteen_by_sixty    = 13.0/60.0,
                      fortyseven_by_sixty  = 47.0/60.0,
                      twentyseven_by_sixty = 27.0/60.0,
                      one_by_twenty        = 1.0/20.0;

  /* create index and bounds for the outer loop, i.e., to loop over all 1D lines along
     dimension "dir" */
  int indexC[ndims], indexI[ndims], index_outer[ndims], bounds_outer[ndims], bounds_inter[ndims];
  _ArrayCopy1D_(dim,bounds_outer,ndims); bounds_outer[dir] =  1;
  _ArrayCopy1D_(dim,bounds_inter,ndims); bounds_inter[dir] += 1;

  /* calculate total number of block tridiagonal systems to solve */
  _ArrayProduct1D_(bounds_outer,ndims,Nsys);

  /* allocate arrays for the averaged state, eigenvectors and characteristic interpolated f */
  double R[nvars*nvars], L[nvars*nvars], uavg[nvars];

  /* Allocate arrays for tridiagonal system */
  double *A = compact->A;
  double *B = compact->B;
  double *C = compact->C;
  double *F = compact->R;

#pragma omp parallel for schedule(auto) default(shared) private(sys,d,v,k,R,L,uavg,index_outer,indexC,indexI)
  for (sys=0; sys<Nsys; sys++) {
    _ArrayIndexnD_(ndims,sys,bounds_outer,index_outer,0);
    _ArrayCopy1D_(index_outer,indexC,ndims);
    _ArrayCopy1D_(index_outer,indexI,ndims);
    for (indexI[dir] = 0; indexI[dir] < dim[dir]+1; indexI[dir]++) {
      int qm1,qm2,qm3,qp1,qp2;
      if (upw > 0) {
        indexC[dir] = indexI[dir]-3; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qm3);
        indexC[dir] = indexI[dir]-2; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qm2);
        indexC[dir] = indexI[dir]-1; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qm1);
        indexC[dir] = indexI[dir]  ; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qp1);
        indexC[dir] = indexI[dir]+1; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qp2);
      } else {
        indexC[dir] = indexI[dir]+2; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qm3);
        indexC[dir] = indexI[dir]+1; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qm2);
        indexC[dir] = indexI[dir]  ; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qm1);
        indexC[dir] = indexI[dir]-1; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qp1);
        indexC[dir] = indexI[dir]-2; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qp2);
      }

      int p; /* 1D index of the interface */
      _ArrayIndex1D_(ndims,bounds_inter,indexI,0,p);

      /* find averaged state at this interface */
      IERR solver->AveragingFunction(uavg,&u[nvars*qm1],&u[nvars*qp1],solver->physics); CHECKERR(ierr);

      /* Get the left and right eigenvectors */
      IERR solver->GetLeftEigenvectors (uavg,L,solver->physics,dir); CHECKERR(ierr);
      IERR solver->GetRightEigenvectors(uavg,R,solver->physics,dir); CHECKERR(ierr);

      for (v=0; v<nvars; v++) {

        /* calculate the characteristic flux components along this characteristic */
        double fm3, fm2, fm1, fp1, fp2;
        fm3 = fm2 = fm1 = fp1 = fp2 = 0;
        for (k = 0; k < nvars; k++) {
          fm3 += L[v*nvars+k] * fC[qm3*nvars+k];
          fm2 += L[v*nvars+k] * fC[qm2*nvars+k];
          fm1 += L[v*nvars+k] * fC[qm1*nvars+k];
          fp1 += L[v*nvars+k] * fC[qp1*nvars+k];
          fp2 += L[v*nvars+k] * fC[qp2*nvars+k];
        }

        if (   ((mpi->ip[dir] == 0                ) && (indexI[dir] == 0       ))
            || ((mpi->ip[dir] == mpi->iproc[dir]-1) && (indexI[dir] == dim[dir])) ) {
          /* Use 5th order upwind at the physical boundaries */
          for (k=0; k<nvars; k++) {
            A[(Nsys*indexI[dir]+sys)*nvars*nvars+v*nvars+k] = 0.0;
            C[(Nsys*indexI[dir]+sys)*nvars*nvars+v*nvars+k] = 0.0;
            B[(Nsys*indexI[dir]+sys)*nvars*nvars+v*nvars+k] = L[v*nvars+k];
          }
          F[(Nsys*indexI[dir]+sys)*nvars+v] =   one_by_thirty        * fm3
                                              - thirteen_by_sixty    * fm2
                                              + fortyseven_by_sixty  * fm1
                                              + twentyseven_by_sixty * fp1
                                              - one_by_twenty        * fp2;
        } else {
          /* 5th order compact upwind at the interior points */
          if (upw > 0) {
            for (k=0; k<nvars; k++) {
              A[(Nsys*indexI[dir]+sys)*nvars*nvars+v*nvars+k] = three_by_ten * L[v*nvars+k];
              B[(Nsys*indexI[dir]+sys)*nvars*nvars+v*nvars+k] = six_by_ten   * L[v*nvars+k];
              C[(Nsys*indexI[dir]+sys)*nvars*nvars+v*nvars+k] = one_by_ten   * L[v*nvars+k];
            }
          } else {
            for (k=0; k<nvars; k++) {
              C[(Nsys*indexI[dir]+sys)*nvars*nvars+v*nvars+k] = three_by_ten * L[v*nvars+k];
              B[(Nsys*indexI[dir]+sys)*nvars*nvars+v*nvars+k] = six_by_ten   * L[v*nvars+k];
              A[(Nsys*indexI[dir]+sys)*nvars*nvars+v*nvars+k] = one_by_ten   * L[v*nvars+k];
            }
          }
          F[(Nsys*indexI[dir]+sys)*nvars+v] =   one_by_thirty      * fm2
                                              + nineteen_by_thirty * fm1
                                              + one_third          * fp1;
        }
      }
    }
  }

#ifdef serial

  /* Solve the tridiagonal system */
  IERR blocktridiagLU(A,B,C,F,dim[dir]+1,Nsys,nvars,lu,NULL); CHECKERR(ierr);

#else

  /* Solve the tridiagonal system */
  /* all processes except the last will solve without the last interface to avoid overlap */
  if (mpi->ip[dir] != mpi->iproc[dir]-1) {
    IERR blocktridiagLU(A,B,C,F,dim[dir]  ,Nsys,nvars,lu,&mpi->comm[dir]); CHECKERR(ierr);
  } else {
    IERR blocktridiagLU(A,B,C,F,dim[dir]+1,Nsys,nvars,lu,&mpi->comm[dir]); CHECKERR(ierr);
  }

  /* Now get the solution to the last interface from the next proc */
  double *sendbuf = compact->sendbuf;
  double *recvbuf = compact->recvbuf;
  MPI_Request req[2] = {MPI_REQUEST_NULL,MPI_REQUEST_NULL};
  if (mpi->ip[dir]) for (d=0; d<Nsys*nvars; d++) sendbuf[d] = F[d];
  if (mpi->ip[dir] != mpi->iproc[dir]-1) MPI_Irecv(recvbuf,Nsys*nvars,MPI_DOUBLE,mpi->ip[dir]+1,214,mpi->comm[dir],&req[0]);
  if (mpi->ip[dir])                      MPI_Isend(sendbuf,Nsys*nvars,MPI_DOUBLE,mpi->ip[dir]-1,214,mpi->comm[dir],&req[1]);
  MPI_Status status_arr[2];
  MPI_Waitall(2,&req[0],status_arr);
  if (mpi->ip[dir] != mpi->iproc[dir]-1) for (d=0; d<Nsys*nvars; d++) F[d+Nsys*nvars*dim[dir]] = recvbuf[d];

#endif

  /* save the solution to fI */
#pragma omp parallel for schedule(auto) default(shared) private(sys,d,v,k,R,L,uavg,index_outer,indexC,indexI)
  for (sys=0; sys<Nsys; sys++) {
    _ArrayIndexnD_(ndims,sys,bounds_outer,index_outer,0);
    _ArrayCopy1D_(index_outer,indexI,ndims);
    for (indexI[dir] = 0; indexI[dir] < dim[dir]+1; indexI[dir]++) {
      int p; _ArrayIndex1D_(ndims,bounds_inter,indexI,0,p);
      /* the left eigenvectors were folded into the matrix, so F already
         holds the physical interface flux; copy it straight into fI */
      _ArrayCopy1D_((F+sys*nvars+Nsys*nvars*indexI[dir]),(fI+nvars*p),nvars);
    }
  }

  return(0);
}