HyPar  1.0
Finite-Difference Hyperbolic-Parabolic PDE Solver on Cartesian Grids
Interp1PrimFifthOrderHCWENO.c File Reference

Hybrid compact-WENO5 scheme (component-wise application to vectors)

#include <stdio.h>
#include <basic.h>
#include <arrayfunctions.h>
#include <mathfunctions.h>
#include <interpolation.h>
#include <tridiagLU.h>
#include <mpivars.h>
#include <hypar.h>


Macros

#define _MINIMUM_GHOSTS_   3
 

Functions

int Interp1PrimFifthOrderHCWENO (double *fI, double *fC, double *u, double *x, int upw, int dir, void *s, void *m, int uflag)
 5th order hybrid compact-WENO reconstruction (component-wise) on a uniform grid
 

Detailed Description

Hybrid compact-WENO5 scheme (component-wise application to vectors)

Author
Debojyoti Ghosh

Definition in file Interp1PrimFifthOrderHCWENO.c.

Macro Definition Documentation

◆ _MINIMUM_GHOSTS_

#define _MINIMUM_GHOSTS_   3

Minimum number of ghost points required for this interpolation method: the farthest stencil point lies three cells beyond a boundary interface.

Definition at line 24 of file Interp1PrimFifthOrderHCWENO.c.

Function Documentation

◆ Interp1PrimFifthOrderHCWENO()

int Interp1PrimFifthOrderHCWENO ( double *fI,
                                  double *fC,
                                  double *u,
                                  double *x,
                                  int    upw,
                                  int    dir,
                                  void   *s,
                                  void   *m,
                                  int    uflag )

5th order hybrid compact-WENO reconstruction (component-wise) on a uniform grid

Computes the interpolated values of the first primitive of a function \({\bf f}\left({\bf u}\right)\) at the interfaces from the cell-centered values of the function using the fifth order hybrid compact-WENO scheme on a uniform grid. The tridiagonal system is solved using tridiagLU() (see also TridiagLU, tridiagLU.h). See the reference below for a complete description of the method implemented here.
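
Schematically (as read off the implementation below, written here for the left-biased case upw > 0, with interfaces indexed by half-integers), each interface value satisfies the tridiagonal relation

\[ \frac{\sigma}{2}\,\hat{f}_{j-1/2} + \hat{f}_{j+1/2} + \frac{\sigma}{6}\,\hat{f}_{j+3/2} = \frac{\sigma}{18}\left( f_{j-1} + 19\,f_j + 10\,f_{j+1} \right) + \left(1-\sigma\right)\hat{f}^{\rm WENO}_{j+1/2}, \]

where \(\sigma \in [0,1]\) is a smoothness-based hybridization parameter: \(\sigma = 1\) recovers the fifth-order upwind compact scheme, \(\sigma = 0\) reduces to the standard (non-compact) WENO5 reconstruction, and \(\sigma\) is set to 0 at physical boundaries.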

Implementation Notes:

  • This method assumes a uniform grid in the spatial dimension corresponding to the interpolation.
  • The scalar interpolation method is applied to the vector function in a component-wise manner.
  • The WENO weights are computed in WENOFifthOrderCalculateWeights(); the candidate interpolants these weights combine are written out after this list.
  • The function computes the interpolant for the entire grid in one call. It loops over all the grid lines along the interpolation direction and carries out the 1D interpolation along these grid lines.
  • Location of cell-centers and cell interfaces along the spatial dimension of the interpolation is shown in the following figure:
    [Figure chap1_1Ddomain.png: cell centers and interfaces along a grid line in the interpolation dimension]
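
For reference, the three third-order candidate interpolants combined by the WENO weights (left-biased case, matching f1, f2, and f3 in the listing below) are

\[
\begin{aligned}
\hat{f}^1_{j+1/2} &= \tfrac{1}{3}\,f_{j-2} - \tfrac{7}{6}\,f_{j-1} + \tfrac{11}{6}\,f_j,\\
\hat{f}^2_{j+1/2} &= -\tfrac{1}{6}\,f_{j-1} + \tfrac{5}{6}\,f_j + \tfrac{1}{3}\,f_{j+1},\\
\hat{f}^3_{j+1/2} &= \tfrac{1}{3}\,f_j + \tfrac{5}{6}\,f_{j+1} - \tfrac{1}{6}\,f_{j+2},
\end{aligned}
\]

with \(\hat{f}^{\rm WENO}_{j+1/2} = w_1\hat{f}^1_{j+1/2} + w_2\hat{f}^2_{j+1/2} + w_3\hat{f}^3_{j+1/2}\). In smooth regions the weights approach the optimal values \((c_1,c_2,c_3) = (1/10,\,6/10,\,3/10)\), for which this combination is the fifth-order upwind interpolant.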

Function arguments:

Argument | Type    | Explanation
---------|---------|------------
fI       | double* | Array to hold the computed interpolant at the grid interfaces. This array must have the same layout as the solution, but with no ghost points. Its size should be the same as that of u in all dimensions, except along dir (the dimension along which to interpolate), where it should be larger by 1 (the number of interfaces is one more than the number of interior cell centers).
fC       | double* | Array with the cell-centered values of the flux function \({\bf f}\left({\bf u}\right)\). This array must have the same layout and size as the solution, with ghost points.
u        | double* | The solution array \({\bf u}\) (with ghost points). If the interpolation is characteristic-based, this is needed to compute the eigendecomposition. For a multidimensional problem, u is a contiguous 1D array of size (nvars*dim[0]*dim[1]*...*dim[D-1]) corresponding to the multi-dimensional solution, ordered with nvars varying fastest, followed by dim[0], dim[1], ..., dim[D-1], where nvars is the number of solution components (HyPar::nvars), dim is the local size (HyPar::dim_local), and D is the number of spatial dimensions.
x        | double* | The grid array (with ghost points). This is used only by non-uniform-grid interpolation methods. For multidimensional problems, x is a contiguous 1D array of size (dim[0]+dim[1]+...+dim[D-1]), with the spatial coordinates along dim[0] stored at indices 0, ..., dim[0]-1, the coordinates along dim[1] at indices dim[0], ..., dim[0]+dim[1]-1, and so forth.
upw      | int     | Upwinding direction: if positive, a left-biased interpolant is computed; if negative, a right-biased one. If the interpolation method is central, this has no effect.
dir      | int     | Spatial dimension along which to interpolate (e.g., 0 for 1D; 0 or 1 for 2D; 0, 1, or 2 for 3D).
s        | void*   | Solver object of type HyPar; the following variables are needed: HyPar::ghosts, HyPar::ndims, HyPar::nvars, HyPar::dim_local.
m        | void*   | MPI object of type MPIVariables; this is needed only by compact interpolation methods, which solve a global implicit system across MPI ranks.
uflag    | int     | A flag indicating whether the function being interpolated, \({\bf f}\), is the solution itself, \({\bf u}\) (if 1, \({\bf f}\left({\bf u}\right) \equiv {\bf u}\)).
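
As a concrete illustration of these layouts, the following minimal, self-contained sketch (not HyPar code; the sizes, indices, and dir value are made up for the example) computes where a given component of a cell-centered or interface value lives in the flat arrays described above:

#include <stdio.h>

int main(void)
{
  int nvars = 3, ghosts = 3, dir = 0;
  int dim[2] = {64, 32};            /* local interior size, no ghosts */

  int i = 10, j = 5, v = 2;         /* an interior cell and a component */

  /* cell-centered arrays (u, fC): dimension 0 is fastest-varying,
     each dimension padded by ghost points on both sides */
  int stride0 = dim[0] + 2*ghosts;
  int q = (j + ghosts)*stride0 + (i + ghosts);
  printf("u/fC: component %d of cell (%d,%d) is at index %d\n",
         v, i, j, q*nvars + v);

  /* interface array (fI): no ghosts, one extra point along dir */
  int stride0_I = dim[0] + (dir == 0 ? 1 : 0);
  int pI = j*stride0_I + i;         /* interface to the left of cell (i,j) */
  printf("fI: component %d of interface (%d,%d) is at index %d\n",
         v, i, j, pI*nvars + v);
  return 0;
}

This mirrors the index arithmetic that the macros _ArrayIndex1D_ (with and without ghost offset) carry out inside the function below.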

Reference:

  • Pirozzoli, S., "Conservative Hybrid Compact-WENO Schemes for Shock-Turbulence Interaction", J. Comput. Phys., 178 (1), 2002, pp. 81-117, http://dx.doi.org/10.1006/jcph.2002.7021

Parameters
fI      Array of interpolated function values at the interfaces
fC      Array of cell-centered values of the function \({\bf f}\left({\bf u}\right)\)
u       Array of cell-centered values of the solution \({\bf u}\)
x       Grid coordinates
upw     Upwind direction (left- or right-biased)
dir     Spatial dimension along which to interpolate
s       Object of type HyPar containing solver-related variables
m       Object of type MPIVariables containing MPI-related variables
uflag   Flag to indicate if \(f(u) \equiv u\), i.e., if the solution is being reconstructed

Definition at line 63 of file Interp1PrimFifthOrderHCWENO.c.

int Interp1PrimFifthOrderHCWENO(double *fI, double *fC, double *u, double *x,
                                int upw, int dir, void *s, void *m, int uflag)
{
  HyPar          *solver  = (HyPar*)          s;
  MPIVariables   *mpi     = (MPIVariables*)   m;
  CompactScheme  *compact = (CompactScheme*)  solver->compact;
  WENOParameters *weno    = (WENOParameters*) solver->interp;
  TridiagLU      *lu      = (TridiagLU*)      solver->lusolver;
  int            sys,Nsys,d;
  _DECLARE_IERR_;

  int ghosts = solver->ghosts;
  int ndims  = solver->ndims;
  int nvars  = solver->nvars;
  int *dim   = solver->dim_local;

  /* define some constants */
  static const double one_half  = 1.0/2.0;
  static const double one_third = 1.0/3.0;
  static const double one_sixth = 1.0/6.0;

  /* select the precomputed WENO weights for this upwinding direction,
     function type (flux or solution), and spatial dimension */
  double *ww1, *ww2, *ww3;
  ww1 = weno->w1 + (upw < 0 ? 2*weno->size : 0) + (uflag ? weno->size : 0) + weno->offset[dir];
  ww2 = weno->w2 + (upw < 0 ? 2*weno->size : 0) + (uflag ? weno->size : 0) + weno->offset[dir];
  ww3 = weno->w3 + (upw < 0 ? 2*weno->size : 0) + (uflag ? weno->size : 0) + weno->offset[dir];

  /* create index and bounds for the outer loop, i.e., to loop over all 1D lines along
     dimension "dir" */
  int indexC[ndims], indexI[ndims], index_outer[ndims], bounds_outer[ndims], bounds_inter[ndims];
  _ArrayCopy1D_(dim,bounds_outer,ndims); bounds_outer[dir] =  1;
  _ArrayCopy1D_(dim,bounds_inter,ndims); bounds_inter[dir] += 1;
  int N_outer; _ArrayProduct1D_(bounds_outer,ndims,N_outer);

  /* calculate total number of tridiagonal systems to solve */
  _ArrayProduct1D_(bounds_outer,ndims,Nsys); Nsys *= nvars;

  /* arrays for the tridiagonal system (preallocated in the compact-scheme object) */
  double *A = compact->A;
  double *B = compact->B;
  double *C = compact->C;
  double *R = compact->R;

#pragma omp parallel for schedule(auto) default(shared) private(sys,d,index_outer,indexC,indexI)
  for (sys=0; sys < N_outer; sys++) {
    _ArrayIndexnD_(ndims,sys,bounds_outer,index_outer,0);
    _ArrayCopy1D_(index_outer,indexC,ndims);
    _ArrayCopy1D_(index_outer,indexI,ndims);
    for (indexI[dir] = 0; indexI[dir] < dim[dir]+1; indexI[dir]++) {
      int qm1,qm2,qm3,qp1,qp2,p;
      _ArrayIndex1D_(ndims,bounds_inter,indexI,0,p);
      if (upw > 0) {
        indexC[dir] = indexI[dir]-3; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qm3);
        indexC[dir] = indexI[dir]-2; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qm2);
        indexC[dir] = indexI[dir]-1; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qm1);
        indexC[dir] = indexI[dir]  ; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qp1);
        indexC[dir] = indexI[dir]+1; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qp2);
      } else {
        indexC[dir] = indexI[dir]+2; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qm3);
        indexC[dir] = indexI[dir]+1; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qm2);
        indexC[dir] = indexI[dir]  ; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qm1);
        indexC[dir] = indexI[dir]-1; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qp1);
        indexC[dir] = indexI[dir]-2; _ArrayIndex1D_(ndims,dim,indexC,ghosts,qp2);
      }
      int v;
      for (v=0; v<nvars; v++) {
        /* Defining stencil points */
        double fm3, fm2, fm1, fp1, fp2;
        fm3 = fC[qm3*nvars+v];
        fm2 = fC[qm2*nvars+v];
        fm1 = fC[qm1*nvars+v];
        fp1 = fC[qp1*nvars+v];
        fp2 = fC[qp2*nvars+v];

        /* Candidate stencils and their optimal weights */
        double f1, f2, f3;
        f1 = (2*one_sixth)*fm3 - (7.0*one_sixth)*fm2 + (11.0*one_sixth)*fm1;
        f2 = (-one_sixth)*fm2 + (5.0*one_sixth)*fm1 + (2*one_sixth)*fp1;
        f3 = (2*one_sixth)*fm1 + (5*one_sixth)*fp1 - (one_sixth)*fp2;

        /* calculate WENO weights */
        double w1,w2,w3;
        w1 = *(ww1+p*nvars+v);
        w2 = *(ww2+p*nvars+v);
        w3 = *(ww3+p*nvars+v);

        /* calculate the hybridization parameter */
        double sigma;
        if (   ((mpi->ip[dir] == 0                ) && (indexI[dir] == 0       ))
            || ((mpi->ip[dir] == mpi->iproc[dir]-1) && (indexI[dir] == dim[dir])) ) {
          /* Standard WENO at physical boundaries */
          sigma = 0.0;
        } else {
          double cuckoo, df_jm12, df_jp12, df_jp32, r_j, r_jp1, r_int;
          cuckoo  = (0.9*weno->rc / (1.0-0.9*weno->rc)) * weno->xi * weno->xi;
          df_jm12 = fm1 - fm2;
          df_jp12 = fp1 - fm1;
          df_jp32 = fp2 - fp1;
          r_j   = (absolute(2*df_jp12*df_jm12)+cuckoo)/(df_jp12*df_jp12+df_jm12*df_jm12+cuckoo);
          r_jp1 = (absolute(2*df_jp32*df_jp12)+cuckoo)/(df_jp32*df_jp32+df_jp12*df_jp12+cuckoo);
          r_int = min(r_j, r_jp1);
          sigma = min((r_int/weno->rc), 1.0);
        }

        /* left-hand side: tridiagonal coefficients scaled by sigma */
        if (upw > 0) {
          A[sys*nvars+v+Nsys*indexI[dir]] = one_half  * sigma;
          B[sys*nvars+v+Nsys*indexI[dir]] = 1.0;
          C[sys*nvars+v+Nsys*indexI[dir]] = one_sixth * sigma;
        } else {
          C[sys*nvars+v+Nsys*indexI[dir]] = one_half  * sigma;
          B[sys*nvars+v+Nsys*indexI[dir]] = 1.0;
          A[sys*nvars+v+Nsys*indexI[dir]] = one_sixth * sigma;
        }

        /* right-hand side: blend of the compact and WENO interface values */
        double fWENO, fCompact;
        fWENO    = w1*f1 + w2*f2 + w3*f3;
        fCompact = one_sixth * (one_third*fm2 + 19.0*one_third*fm1 + 10.0*one_third*fp1);
        R[sys*nvars+v+Nsys*indexI[dir]] = sigma*fCompact + (1.0-sigma)*fWENO;
      }
    }
  }

#ifdef serial

  /* Solve the tridiagonal system */
  IERR tridiagLU(A,B,C,R,dim[dir]+1,Nsys,lu,NULL); CHECKERR(ierr);

#else

  /* Solve the tridiagonal system */
  /* all processes except the last will solve without the last interface to avoid overlap */
  if (mpi->ip[dir] != mpi->iproc[dir]-1) { IERR tridiagLU(A,B,C,R,dim[dir]  ,Nsys,lu,&mpi->comm[dir]); CHECKERR(ierr); }
  else                                   { IERR tridiagLU(A,B,C,R,dim[dir]+1,Nsys,lu,&mpi->comm[dir]); CHECKERR(ierr); }

  /* Now get the solution to the last interface from the next proc */
  double *sendbuf = compact->sendbuf;
  double *recvbuf = compact->recvbuf;
  MPI_Request req[2] = {MPI_REQUEST_NULL,MPI_REQUEST_NULL};
  if (mpi->ip[dir])                      for (d=0; d<Nsys; d++) sendbuf[d] = R[d];
  if (mpi->ip[dir] != mpi->iproc[dir]-1) MPI_Irecv(recvbuf,Nsys,MPI_DOUBLE,mpi->ip[dir]+1,214,mpi->comm[dir],&req[0]);
  if (mpi->ip[dir])                      MPI_Isend(sendbuf,Nsys,MPI_DOUBLE,mpi->ip[dir]-1,214,mpi->comm[dir],&req[1]);
  MPI_Status status_arr[2];
  MPI_Waitall(2,&req[0],status_arr);
  if (mpi->ip[dir] != mpi->iproc[dir]-1) for (d=0; d<Nsys; d++) R[d+Nsys*dim[dir]] = recvbuf[d];

#endif

  /* save the solution to fI */
#pragma omp parallel for schedule(auto) default(shared) private(sys,d,index_outer,indexC,indexI)
  for (sys=0; sys < N_outer; sys++) {
    _ArrayIndexnD_(ndims,sys,bounds_outer,index_outer,0);
    _ArrayCopy1D_(index_outer,indexI,ndims);
    for (indexI[dir] = 0; indexI[dir] < dim[dir]+1; indexI[dir]++) {
      int p; _ArrayIndex1D_(ndims,bounds_inter,indexI,0,p);
      _ArrayCopy1D_((R+sys*nvars+Nsys*indexI[dir]),(fI+nvars*p),nvars);
    }
  }

  return(0);
}
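
The hybridization logic above is compact enough to isolate. The following is a standalone sketch (hcweno_sigma is a hypothetical helper, not part of HyPar; rc and xi correspond to weno->rc and weno->xi, and the sample values in main are made up) of the sigma computation for a single interface:

#include <math.h>
#include <stdio.h>

static double hcweno_sigma(double fm2, double fm1, double fp1, double fp2,
                           double rc, double xi)
{
  double eps     = (0.9*rc / (1.0 - 0.9*rc)) * xi * xi;  /* "cuckoo" above */
  double df_jm12 = fm1 - fm2;   /* backward difference  */
  double df_jp12 = fp1 - fm1;   /* interface difference */
  double df_jp32 = fp2 - fp1;   /* forward difference   */

  /* smoothness ratios: near 1 when consecutive differences are comparable,
     near 0 when one difference dominates (a discontinuity) */
  double r_j   = (fabs(2*df_jp12*df_jm12) + eps)
                 / (df_jp12*df_jp12 + df_jm12*df_jm12 + eps);
  double r_jp1 = (fabs(2*df_jp32*df_jp12) + eps)
                 / (df_jp32*df_jp32 + df_jp12*df_jp12 + eps);
  double r_int = (r_j < r_jp1 ? r_j : r_jp1);

  double sigma = r_int / rc;
  return (sigma < 1.0 ? sigma : 1.0);
}

int main(void)
{
  /* smooth data: sigma is 1 (pure compact scheme) */
  printf("smooth: sigma = %f\n", hcweno_sigma(1.0, 1.1, 1.2, 1.3, 0.3, 0.001));
  /* a jump between fm1 and fp1: sigma drops toward 0 (WENO fallback) */
  printf("jump:   sigma = %f\n", hcweno_sigma(1.0, 1.0, 5.0, 5.0, 0.3, 0.001));
  return 0;
}

So sigma selects the compact scheme where the data is smooth and falls back to the WENO reconstruction near discontinuities; the listing above additionally forces sigma = 0 at physical boundaries, where the compact stencil cannot be closed.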