Block-Structured AMR Software Framework
amrex Namespace Reference

Namespaces

 algoim
 
 Amrvis
 
 AsyncOut
 
 BGColor
 
 BinPolicy
 
 Cuda
 
 detail
 
 disabled
 
 EB2
 
 experimental
 
 Extrapolater
 
 FFT
 
 FGColor
 
 FileSystem
 
 Font
 
 fudetail
 
 Gpu
 
 HostDevice
 
 InSituUtils
 
 interp_detail
 
 Lazy
 
 LongParticleIds
 
 loop_detail
 
 machine
 
 Math
 
 MFUtil
 
 mlndts_detail
 
 Morton
 
 MPMD
 
 nodelap_detail
 
 NonLocalBC
 
 openbc
 
 OpenMP
 
 ParallelAllGather
 
 ParallelAllReduce
 
 ParallelContext
 
 ParallelDescriptor
 Parallel frontend that abstracts functionalities needed to spawn processes and handle communication.
 
 ParallelGather
 
 ParallelReduce
 
 particle_detail
 
 ParticleIdCpus
 
 ParticleInterpolator
 
 pp_detail
 
 ppdetail
 
 Reduce
 
 RungeKutta
 Functions for Runge-Kutta methods.
 
 Scan
 
 sundials
 
 SundialsUserFun
 
 system
 
 tri_geom_ops
 
 TwoD
 
 VectorGrowthStrategy
 

Classes

class  Amr
 Manage hierarchy of levels for time-dependent AMR computations. More...
 
class  AmrLevel
 Virtual base class for managing individual levels. AmrLevel serves both as a container for state data on a level and as the manager of that data's advancement in time. More...
 
class  FillPatchIterator
 
class  FillPatchIteratorHelper
 
class  DeriveRec
 Derived Type Record. More...
 
class  DeriveList
 A list of DeriveRecs. More...
 
class  LevelBld
 Builds problem-specific AmrLevels. More...
 
class  StateData
 Current and previous level-time data. More...
 
class  StateDataPhysBCFunct
 
class  StateDescriptor
 Attributes of StateData. More...
 
class  DescriptorList
 
class  AmrCore
 Provide basic functionalities to set up an AMR hierarchy. More...
 
struct  AmrInfo
 
class  AmrMesh
 
class  AmrParGDB
 
class  AmrParticleContainer_impl
 
class  AmrTracerParticleContainer
 
class  Cluster
 A cluster of tagged cells. More...
 
class  ClusterList
 A list of Cluster objects. More...
 
class  ErrorRec
 Error Record. More...
 
class  ErrorList
 A list of ErrorRecs. More...
 
struct  AMRErrorTagInfo
 
class  AMRErrorTag
 
class  FillPatcher
 FillPatcher is for filling a fine level MultiFab/FabArray. More...
 
struct  NullInterpHook
 
class  FluxRegister
 Flux Register. More...
 
class  InterpolaterBoxCoarsener
 
class  InterpBase
 
class  InterpFaceRegister
 InterpFaceRegister is a coarse/fine boundary register for interpolation of face data at the coarse/fine boundary. More...
 
class  Interpolater
 Virtual base class for interpolaters. More...
 
class  NodeBilinear
 Bilinear interpolation on node centered data. More...
 
class  CellBilinear
 Bilinear interpolation on cell centered data. More...
 
class  CellConservativeLinear
 Linear conservative interpolation on cell centered data. More...
 
class  CellConservativeProtected
 Linear conservative interpolation on cell-centered data, with protection against undershoots and overshoots. More...
 
class  CellQuadratic
 Quadratic interpolation on cell centered data. More...
 
class  PCInterp
 Piecewise Constant interpolation on cell centered data. More...
 
class  CellConservativeQuartic
 Conservative quartic interpolation on cell averaged data. More...
 
class  FaceDivFree
 Divergence-preserving interpolation on face centered data. More...
 
class  FaceLinear
 Piecewise constant tangential interpolation / linear normal interpolation of face data. More...
 
class  FaceConservativeLinear
 Bilinear tangential interpolation / linear normal interpolation of face data. More...
 
class  CellQuartic
 Quartic interpolation on cell centered data. More...
 
class  MFInterpolater
 
class  MFPCInterp
 Piecewise constant interpolation on cell-centered data. More...
 
class  MFCellConsLinInterp
 Linear conservative interpolation on cell centered data. More...
 
class  MFCellConsLinMinmaxLimitInterp
 Linear conservative interpolation on cell centered data. More...
 
class  MFCellBilinear
 [Bi|Tri]linear interpolation on cell centered data. More...
 
class  MFNodeBilinear
 
class  TagBox
 Tagged cells in a Box. More...
 
class  TagBoxArray
 An array of TagBoxes. More...
 
class  AMReX
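 A minimal sketch of the usual program scaffold that creates and destroys the global AMReX object through amrex::Initialize/amrex::Finalize; the inner scope is a convention that ensures all AMReX objects are destroyed before Finalize() runs:

```cpp
// Minimal sketch: Initialize() constructs the global AMReX object
// (setting up MPI, GPUs, and ParmParse); Finalize() destroys it.
#include <AMReX.H>

int main (int argc, char* argv[])
{
    amrex::Initialize(argc, argv);
    {
        // ... build grids, allocate data, run the simulation ...
    }
    amrex::Finalize();
}
```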
 
class  Any
 
struct  MemStat
 
struct  ArenaInfo
 
class  Arena
 A virtual base class for objects that manage their own dynamic memory allocation. More...
 
struct  GpuArray
 
struct  Array1D
 
struct  Array2D
 
struct  Array3D
 
struct  CellData
 
struct  Array4
 
struct  HasMultiComp
 
struct  PolymorphicArray4
 
class  BackgroundThread
 
class  BArena
 A Concrete Class for Dynamic Memory Management. This is the simplest dynamic memory management class derived from Arena; it calls std::malloc and std::free directly. More...
 
struct  SrcComp
 
struct  DestComp
 
struct  NumComps
 
class  BaseFab
 A FortranArrayBox (FAB)-like object. More...
 
class  FabArray
 An Array of FortranArrayBox (FAB)-like Objects. More...
 
class  LayoutData
 A distributed object holding one item per box. More...
 
class  BoxND
 A Rectangular Domain on an Integer Lattice. More...
 
class  IntVectND
 
class  IndexTypeND
 Cell-Based or Node-Based Indices. More...
 
class  FabFactory
 
struct  ParserExecutor
 
class  BCRec
 Boundary Condition Records. Necessary information and functions for computing boundary conditions. More...
 
struct  BLBackTrace
 
class  BLBTer
 
struct  BlockMutex
 
class  BLProfiler
 
class  BoxCommHelper
 
class  BoxConverter
 
struct  BoxIndexerND
 
struct  BoxIndexerND< 1 >
 
struct  BARef
 
struct  BATnull
 
struct  BATindexType
 
struct  BATcoarsenRatio
 
struct  BATindexType_coarsenRatio
 
struct  BATbndryReg
 
struct  BATransformer
 
class  BoxArray
 A collection of Boxes stored in an Array. More...
 
class  BoxDomain
 A List of Disjoint Boxes. More...
 
class  BoxIterator
 Iterates through the IntVects of a Box. More...
 
class  BoxList
 A class for managing a List of Boxes that share a common IndexType. This class implements operations for sets of Boxes. This is a concrete class, not a polymorphic one. More...
 
class  CArena
 A Concrete Class for Dynamic Memory Management using first fit. This is a coalescing memory manager: it allocates (possibly large) chunks of heap space and apportions them out as requested, merging neighboring chunks on each free(). More...
 
class  CoordSys
 Coordinate System. More...
 
struct  CompileTimeOptions
 
struct  DataAllocator
 
struct  DataDeleter
 
struct  Dim3
 
struct  XDim3
 
class  WeightedBox
 
struct  WeightedBoxList
 
class  DistributionMapping
 Calculates the distribution of FABs to MPI processes. More...
 
struct  MFInfo
 FabArray memory allocation information. More...
 
struct  TheFaArenaDeleter
 
struct  FBData
 
struct  PCData
 
struct  MultiArray4
 
class  FabArrayBase
 Base class for FabArray. More...
 
class  IntDescriptor
 A Descriptor of the Long Integer type. More...
 
class  RealDescriptor
 A Descriptor of the Real Type. More...
 
struct  FabDataType
 
struct  FabDataType< T, std::enable_if_t< IsMultiFabLike_v< T > > >
 
struct  FabDataType< T, std::enable_if_t< IsMultiFabLike_v< typename T::value_type > > >
 
struct  FabInfo
 
class  DefaultFabFactory
 
class  FillBoxId
 
class  FabArrayId
 
struct  FabCopyDescriptor
 
class  FabArrayCopyDescriptor
 This class orchestrates filling a destination fab of size destFabBox from fabarray on the local processor (myProc). More...
 
class  FABio_8bit
 
class  FABio_ascii
 
class  FABio
 A Class Facilitating I/O for Fabs. More...
 
class  FABio_binary
 
class  FArrayBox
 A Fortran Array of REALs. More...
 
class  FEIntegrator
 
struct  FilccCell
 
struct  FilfcFace
 
class  ForkJoin
 
class  FPC
 A Collection of Floating-Point Constants Supporting FAB I/O. More...
 
struct  Plus
 
struct  Minus
 
struct  Minimum
 
struct  Maximum
 
struct  LogicalAnd
 
struct  LogicalOr
 
struct  Multiplies
 
struct  Divides
 
struct  GeometryData
 
class  Geometry
 Rectangular problem domain geometry. More...
 
struct  FatPtr
 
struct  ArenaAllocatorBase
 
struct  ArenaWrapper
 
struct  DeviceArenaWrapper
 
struct  PinnedArenaWrapper
 
struct  ManagedArenaWrapper
 
struct  AsyncArenaWrapper
 
struct  PolymorphicArenaWrapper
 
class  ArenaAllocator
 
class  DeviceArenaAllocator
 
class  PinnedArenaAllocator
 
class  ManagedArenaAllocator
 
class  AsyncArenaAllocator
 
class  PolymorphicArenaAllocator
 
struct  RunOnGpu
 
struct  IsArenaAllocator
 
struct  IsArenaAllocator< T, std::enable_if_t< std::is_base_of_v< ArenaAllocatorBase< typename T::value_type, typename T::arena_wrapper_type >, T > > >
 
struct  IsPolymorphicArenaAllocator
 
struct  RunOnGpu< ArenaAllocator< T > >
 
struct  RunOnGpu< DeviceArenaAllocator< T > >
 
struct  RunOnGpu< ManagedArenaAllocator< T > >
 
struct  RunOnGpu< AsyncArenaAllocator< T > >
 
struct  IsPolymorphicArenaAllocator< PolymorphicArenaAllocator< T > >
 
struct  GpuComplex
 A host/device complex number type, because std::complex does not yet work in CUDA device code. More...
 
class  IFABio
 
class  IArrayBox
 A Fortran Array of ints. More...
 
class  iMultiFab
 
struct  CellIndexEnum
 Type for defining CellIndex so that IndexTypeND instantiations of different dimensions share the same CellIndex type. More...
 
struct  IntegratorOps
 
struct  IntegratorOps< T, std::enable_if_t< std::is_same_v< amrex::Vector< amrex::MultiFab >, T > > >
 
struct  IntegratorOps< T, std::enable_if_t< std::is_same_v< amrex::MultiFab, T > > >
 
class  IntegratorBase
 
class  IOFormatSaver
 
class  LUSolver
 
class  MemProfiler
 
class  MultiFabCopyDescriptor
 
struct  MFItInfo
 
class  MFIter
 
struct  TileSize
 
struct  DynamicTiling
 
class  MultiFab
 A collection (stored as an array) of FArrayBox objects. More...
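 A minimal sketch of allocating and filling a MultiFab, assuming a BoxArray ba and DistributionMapping dm were built elsewhere:

```cpp
// Allocate a single-component MultiFab with no ghost cells, then set
// every valid cell with MFIter + ParallelFor (GPU-capable).
amrex::MultiFab mf(ba, dm, /*ncomp=*/1, /*ngrow=*/0);
for (amrex::MFIter mfi(mf); mfi.isValid(); ++mfi) {
    const amrex::Box& bx = mfi.validbox();
    amrex::Array4<amrex::Real> const& a = mf.array(mfi);
    amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
    {
        a(i,j,k) = 1.0;
    });
}
```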
 
class  NFilesIter
 This class encapsulates writing to nfiles. More...
 
class  Orientation
 Encapsulation of the Orientation of the Faces of a Box. More...
 
class  OrientationIter
 An Iterator over the Orientation of Faces of a Box. More...
 
class  PArena
 This arena uses the CUDA stream-ordered memory allocator if available; otherwise it falls back to The_Arena(). More...
 
class  ParmParse
 Parse Parameters From Command Line and Input Files. More...
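 A short sketch, assuming an inputs file (or command line) that defines the hypothetical keys prob.ncells and prob.dx:

```cpp
// All queries on this ParmParse object are prefixed with "prob.".
amrex::ParmParse pp("prob");
int ncells = 32;            // default is kept if the key is absent
pp.query("ncells", ncells); // optional parameter
amrex::Real dx;
pp.get("dx", dx);           // required parameter; aborts if missing
```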
 
class  Periodicity
 Provides the period length of a periodic domain in each direction; 0 means the domain is not periodic in that direction. The periodic domain is assumed to start at index 0. More...
 
class  BndryFuncArray
 This version calls a function that operates on an array. More...
 
class  GpuBndryFuncFab
 
struct  FabFillNoOp
 
class  CpuBndryFuncFab
 This CPU version calls a function that operates on an FArrayBox. More...
 
class  PhysBCFunctNoOp
 
class  PhysBCFunctUseCoarseGhost
 
class  PhysBCFunct
 
class  PlotFileDataImpl
 
class  PlotFileData
 
class  PODVector
 
class  Print
 This class provides the user with a few print options. More...
 
class  AllPrint
 Print on all processors of the default communicator. More...
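 A sketch contrasting the two, where step and time are assumed to be defined by the caller:

```cpp
// Print writes on the I/O processor only; AllPrint writes on every rank.
amrex::Print() << "Step " << step << ", time = " << time << "\n";
amrex::AllPrint() << "Rank " << amrex::ParallelDescriptor::MyProc()
                  << " reporting\n";
```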
 
class  PrintToFile
 This class prints to a file with a given base name. More...
 
class  AllPrintToFile
 Print on all processors of the default communicator. More...
 
struct  RandomEngine
 
class  RealBox
 A Box with real dimensions. A RealBox is OK iff volume >= 0. More...
 
class  RealVect
 A Real vector in SpaceDim-dimensional space. More...
 
struct  ReduceOpSum
 
struct  ReduceOpMin
 
struct  ReduceOpMax
 
struct  ReduceOpLogicalAnd
 
struct  ReduceOpLogicalOr
 
class  ReduceOps
 
class  ReduceData
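 A hedged sketch of a GPU-capable sum reduction over a MultiFab mf (assumed defined elsewhere) using ReduceOps/ReduceData:

```cpp
// Sum a(i,j,k) over all valid cells owned by this rank; a cross-rank
// MPI reduction would still be needed for a global sum.
amrex::ReduceOps<amrex::ReduceOpSum> reduce_op;
amrex::ReduceData<amrex::Real> reduce_data(reduce_op);
using ReduceTuple = typename decltype(reduce_data)::Type;
for (amrex::MFIter mfi(mf); mfi.isValid(); ++mfi) {
    auto const& a = mf.const_array(mfi);
    reduce_op.eval(mfi.validbox(), reduce_data,
        [=] AMREX_GPU_DEVICE (int i, int j, int k) -> ReduceTuple
        {
            return { a(i,j,k) };
        });
}
amrex::Real local_sum = amrex::get<0>(reduce_data.value(reduce_op));
```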
 
class  RKIntegrator
 
struct  SmallMatrix
 Matrix class with compile-time size. More...
 
struct  Stack
 
struct  Table1D
 
struct  Table2D
 
struct  Table3D
 
struct  Table4D
 
class  TableData
 Multi-dimensional array class. More...
 
struct  Array4PairTag
 
struct  Array4CopyTag
 
struct  Array4MaskCopyTag
 
struct  Array4Tag
 
struct  Array4BoxTag
 
struct  Array4BoxValTag
 
struct  Array4BoxOrientationTag
 
struct  Array4BoxOffsetTag
 
struct  VectorTag
 
class  TimeIntegrator
 
class  TinyProfiler
 A simple profiler that returns basic performance information (e.g., min, max, and average running time). More...
 
class  TinyProfileRegion
 
class  GpuTuple
 
struct  GpuTupleSize
 
struct  GpuTupleSize< GpuTuple< Ts... > >
 
struct  GpuTupleElement
 
struct  GpuTupleElement< I, GpuTuple< Head, Tail... > >
 
struct  GpuTupleElement< 0, GpuTuple< Head, Tail... > >
 
struct  TypeList
 Struct for holding types. More...
 
struct  IsBaseFab
 
struct  IsBaseFab< D, std::enable_if_t< std::is_base_of_v< BaseFab< typename D::value_type >, D > > >
 
struct  IsFabArray
 
struct  IsFabArray< D, std::enable_if_t< std::is_base_of_v< FabArray< typename D::FABType::value_type >, D > > >
 
struct  IsMultiFabLike
 
struct  IsMultiFabLike< M, std::enable_if_t< IsFabArray_v< M > &&IsBaseFab_v< typename M::fab_type > > >
 
struct  HasAtomicAdd
 
struct  HasAtomicAdd< int >
 
struct  HasAtomicAdd< long >
 
struct  HasAtomicAdd< unsigned int >
 
struct  HasAtomicAdd< unsigned long long >
 
struct  HasAtomicAdd< float >
 
struct  HasAtomicAdd< double >
 
struct  IsMultiFabIterator
 
struct  MaybeDeviceRunnable
 
struct  MaybeHostDeviceRunnable
 
struct  DefinitelyNotHostRunnable
 
struct  Same
 
struct  Same< T, U >
 
struct  IsCallable
 Test if a given type T is callable with arguments of type Args... More...
 
struct  IsCallableR
 Test if a given type T is callable with arguments of type Args... More...
 
struct  Conjunction
 Logical traits let us combine multiple type requirements in one enable_if_t clause. More...
 
struct  Conjunction< B1 >
 
struct  Conjunction< B1, Bn... >
 
struct  Disjunction
 
struct  Disjunction< B1 >
 
struct  Disjunction< B1, Bn... >
 
struct  IsConvertible
 Test if all the types Args... are automatically convertible to type T. More...
 
struct  IsStoreAtomic
 
class  expect
 
class  StreamRetry
 
struct  ValLocPair
 
class  Vector
 This class is a thin wrapper around std::vector. Unlike std::vector, Vector::operator[] provides bounds checking when compiled with DEBUG=TRUE. More...
 
class  VisMF
 File I/O for FabArray<FArrayBox>. Wrapper class for reading/writing FabArray<FArrayBox> objects to disk in various "smart" ways. More...
 
class  VisMFBuffer
 
struct  IParserExecutor
 
class  IParser
 
struct  IParserExeNull
 
struct  IParserExeNumber
 
struct  IParserExeSymbol
 
struct  IParserExeADD
 
struct  IParserExeSUB
 
struct  IParserExeMUL
 
struct  IParserExeDIV_F
 
struct  IParserExeDIV_B
 
struct  IParserExeNEG
 
struct  IParserExeF1
 
struct  IParserExeF2_F
 
struct  IParserExeF2_B
 
struct  IParserExeADD_VP
 
struct  IParserExeSUB_VP
 
struct  IParserExeMUL_VP
 
struct  IParserExeDIV_VP
 
struct  IParserExeDIV_PV
 
struct  IParserExeADD_PP
 
struct  IParserExeSUB_PP
 
struct  IParserExeMUL_PP
 
struct  IParserExeDIV_PP
 
struct  IParserExeNEG_P
 
struct  IParserExeADD_VN
 
struct  IParserExeSUB_VN
 
struct  IParserExeMUL_VN
 
struct  IParserExeDIV_VN
 
struct  IParserExeDIV_NV
 
struct  IParserExeADD_PN
 
struct  IParserExeSUB_PN
 
struct  IParserExeMUL_PN
 
struct  IParserExeDIV_PN
 
struct  IParserExeIF
 
struct  IParserExeJUMP
 
union  iparser_nvp
 
struct  iparser_node
 
struct  iparser_number
 
struct  iparser_symbol
 
struct  iparser_f1
 
struct  iparser_f2
 
struct  iparser_f3
 
struct  iparser_assign
 
struct  amrex_iparser
 
class  Parser
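 A sketch of compiling and evaluating an expression with Parser; the expression and symbol names here are illustrative:

```cpp
// Compile "a*sin(x)" into a callable, with "a" bound as a constant
// and "x" as a runtime variable.
amrex::Parser parser("a*sin(x)");
parser.setConstant("a", 2.0);
parser.registerVariables({"x"});
auto f = parser.compile<1>();   // expects exactly 1 argument
amrex::Real y = f(0.5);         // evaluates 2*sin(0.5)
```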
 
struct  ParserExeNull
 
struct  ParserExeNumber
 
struct  ParserExeSymbol
 
struct  ParserExeADD
 
struct  ParserExeSUB_F
 
struct  ParserExeSUB_B
 
struct  ParserExeMUL
 
struct  ParserExeDIV_F
 
struct  ParserExeDIV_B
 
struct  ParserExeF1
 
struct  ParserExeF2_F
 
struct  ParserExeF2_B
 
struct  ParserExeADD_VP
 
struct  ParserExeSUB_VP
 
struct  ParserExeMUL_VP
 
struct  ParserExeDIV_VP
 
struct  ParserExeADD_PP
 
struct  ParserExeSUB_PP
 
struct  ParserExeMUL_PP
 
struct  ParserExeDIV_PP
 
struct  ParserExeADD_VN
 
struct  ParserExeSUB_VN
 
struct  ParserExeMUL_VN
 
struct  ParserExeDIV_VN
 
struct  ParserExeADD_PN
 
struct  ParserExeSUB_PN
 
struct  ParserExeMUL_PN
 
struct  ParserExeDIV_PN
 
struct  ParserExeSquare
 
struct  ParserExePOWI
 
struct  ParserExeIF
 
struct  ParserExeJUMP
 
struct  parser_node
 
struct  parser_number
 
struct  parser_symbol
 
struct  parser_f1
 
struct  parser_f2
 
struct  parser_f3
 
struct  parser_assign
 
struct  amrex_parser
 
class  BndryDataT
 A BndryData stores and manipulates boundary data information on each side of each box in a BoxArray. More...
 
class  BndryRegisterT
 A BndryRegister organizes FabSets bounding each grid in a BoxArray. A FabSet is maintained for each boundary orientation, as well as the BoxArray domain of definition. More...
 
class  BoundCond
 Maintain an identifier for boundary condition types. More...
 
class  EdgeFluxRegister
 
class  FabSetT
 A FabSet is a group of FArrayBoxes. The grouping is designed specifically to represent regions along the boundaries of Boxes, and is used to implement boundary conditions for discretized partial differential equations. More...
 
class  FabSetIter
 
class  InterpBndryDataT
 An InterpBndryData object adds to a BndryData object the ability to manipulate and set the data stored in the boundary cells. More...
 
class  Mask
 
class  MultiMask
 
class  MultiMaskIter
 
class  YAFluxRegisterT
 
class  distFcnElement2d
 
class  LineDistFcnElement2d
 
class  SplineDistFcnElement2d
 
struct  GPUable
 
class  STLtools
 
class  EBCellFlag
 
struct  IsStoreAtomic< EBCellFlag >
 
class  EBCellFlagFab
 
struct  EBData
 
class  EBDataCollection
 
class  EBFArrayBoxFactory
 
class  EBFArrayBox
 
class  EBFluxRegister
 
class  EBCellConservativeLinear
 
class  EBMFCellConsLinInterp
 
class  EBToPVD
 
class  CutFab
 
class  MultiCutFab
 
class  AmrData
 
class  DataServices
 
class  OrderedBoxes
 
class  XYPlotDataListLink
 
class  XYPlotDataList
 
class  btUnit
 
class  Hypre
 
class  HypreABecLap
 
class  HypreABecLap2
 
class  HypreABecLap3
 
class  HypreIJIface
 
class  HypreMLABecLap
 
class  HypreNodeLap
 
class  HypreSolver
 Solve Ax = b using HYPRE's generic IJ matrix format where A is a sparse matrix specified using the compressed sparse row (CSR) format. More...
 
struct  amrex_KSP
 
struct  amrex_Mat
 
struct  amrex_Vec
 
class  PETScABecLap
 
class  AmrDataAdaptor
 
class  AmrInSituBridge
 Contains the bridge code for simulations that use amrex::Amr. More...
 
class  AmrMeshDataAdaptor
 
class  AmrMeshInSituBridge
 SENSEI bridge for simulations that use amrex::AmrMesh/Core. More...
 
class  InSituBridge
 A base class for coupling to the SENSEI in situ library. More...
 
struct  SundialsUserData
 
class  SundialsIntegrator
 
class  AlgPartition
 
class  AlgVector
 
struct  IsAlgVector
 
struct  IsAlgVector< V, std::enable_if_t< std::is_same_v< AlgVector< typename V::value_type, typename V::allocator_type >, V > > >
 
class  GMRES
 GMRES. More...
 
class  GMRESMLMGT
 Solve using GMRES with multigrid as preconditioner. More...
 
class  GMRES_MV
 
class  JacobiSmoother
 
class  SpMatrix
 
class  MLABecLaplacianT
 
class  MLALaplacianT
 
class  MLCellABecLapT
 
class  MLCellLinOpT
 
struct  MLMGABCTag
 
struct  MLMGPSTag
 
class  MLCGSolverT
 
class  MLCurlCurl
 curl (alpha curl E) + beta E = rhs More...
 
struct  CurlCurlDirichletInfo
 
struct  CurlCurlSymmetryInfo
 
class  MLEBABecLap
 
class  MLEBNodeFDLaplacian
 
class  MLEBTensorOp
 
struct  LPInfo
 
struct  LinOpEnumType
 
class  MLMGT
 
class  MLPoissonT
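 A hedged sketch of a single-level Poisson solve with MLPoisson and MLMG, assuming geom, ba, dm, phi, and rhs exist and the domain is fully periodic:

```cpp
amrex::MLPoisson mlpoisson({geom}, {ba}, {dm});
mlpoisson.setDomainBC({AMREX_D_DECL(amrex::LinOpBCType::Periodic,
                                    amrex::LinOpBCType::Periodic,
                                    amrex::LinOpBCType::Periodic)},
                      {AMREX_D_DECL(amrex::LinOpBCType::Periodic,
                                    amrex::LinOpBCType::Periodic,
                                    amrex::LinOpBCType::Periodic)});
mlpoisson.setLevelBC(0, nullptr);  // no level BC data needed when periodic
amrex::MLMG mlmg(mlpoisson);
mlmg.solve({&phi}, {&rhs}, /*rel_tol=*/1.0e-10, /*abs_tol=*/0.0);
```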
 
class  MLLinOpT
 
class  MLMGBndryT
 
class  MLNodeABecLaplacian
 
class  MLNodeLaplacian
 
class  MLNodeLinOp
 
class  MLNodeTensorLaplacian
 
class  MLTensorOp
 
class  OpenBCSolver
 Open Boundary Poisson Solver. More...
 
class  ArrayOfStructs
 
struct  BinIterator
 
struct  DenseBinIteratorFactory
 
class  DenseBins
 A container for storing items in a set of bins. More...
 
struct  Neighbors
 
struct  NeighborData
 
class  NeighborList
 
struct  NeighborCode
 
class  NeighborParticleContainer
 
class  ParGDBBase
 
class  ParGDB
 Used for non-Amr particle code. More...
 
class  ParticleContainer_impl
 A distributed container for Particles sorted onto the levels, grids, and tiles of a block-structured AMR hierarchy. More...
 
struct  Particle
 The struct used to store particles. More...
 
struct  SoAParticle
 
class  ParIterBase_impl
 
class  ParIter_impl
 
class  ParConstIter_impl
 
struct  ParticleIDWrapper
 
struct  ParticleCPUWrapper
 
struct  ConstParticleIDWrapper
 
struct  ConstParticleCPUWrapper
 
struct  ParticleBase
 
struct  ParticleBase< T, 0, NInt >
 
struct  ParticleBase< T, NReal, 0 >
 
struct  ParticleBase< T, 0, 0 >
 
struct  SoAParticleBase
 
struct  DataLayoutPolicy
 
struct  DataLayoutPolicyRaw
 
struct  ParticleArrayAccessor
 
class  ref_wrapper
 
struct  DataLayoutPolicy< ContainerType, ParticleType< Types... >, DataLayout::AoS >
 
struct  DataLayoutPolicyRaw< ParticleType< Types... >, DataLayout::AoS >
 
struct  DataLayoutPolicy< ContainerType, ParticleType< Types... >, DataLayout::SoA >
 
struct  DataLayoutPolicyRaw< ParticleType< Types... >, DataLayout::SoA >
 
struct  ParticleArray
 
struct  GetPID
 
struct  GetBucket
 
class  ParticleBufferMap
 
struct  NeighborUnpackPolicy
 
struct  RedistributeUnpackPolicy
 
struct  ParticleCopyOp
 
struct  ParticleCopyPlan
 
struct  GetSendBufferOffset
 
struct  ParticleCommData
 A struct used for communicating particle data across processes during multi-level operations. More...
 
struct  ParticleLocData
 A struct used for storing a particle's position in the AMR hierarchy. More...
 
struct  ParticleInitType
 A struct used to pass initial data into the various Init methods of the particle container. The data should be initialized in the order: real struct data, int struct data, real array data, int array data. If fewer components are specified for a given member than the template parameters call for, the extra values are set to zero; specifying more components is a compile-time error. More...
 
class  ParticleContainerBase
 
struct  AssignGrid
 
class  ParticleLocator
 
struct  AmrAssignGrid
 
class  AmrParticleLocator
 
struct  ConstSoAParticle
 
struct  ConstParticleTileData
 
struct  ParticleTileData
 
struct  ThisParticleTileHasNoParticleVector
 
struct  ThisParticleTileHasNoAoS
 
struct  ParticleTile
 
struct  BinMapper
 
struct  GetParticleBin
 
struct  DefaultAssignor
 
struct  SparseBinIteratorFactory
 
class  SparseBins
 A container for storing items in a set of bins using "sparse" storage. More...
 
struct  StructOfArrays
 
class  TracerParticleContainer
 

Typedefs

using DeriveFunc = void(*)(amrex::Real *data, AMREX_ARLIM_P(dlo), AMREX_ARLIM_P(dhi), const int *nvar, const amrex::Real *compdat, AMREX_ARLIM_P(compdat_lo), AMREX_ARLIM_P(compdat_hi), const int *ncomp, const int *lo, const int *hi, const int *domain_lo, const int *domain_hi, const amrex::Real *delta, const amrex::Real *xlo, const amrex::Real *time, const amrex::Real *dt, const int *bcrec, const int *level, const int *grid_no)
 Type of extern "C" function called by DeriveRec to compute derived quantity. More...
 
using DeriveFunc3D = void(*)(amrex::Real *data, const int *dlo, const int *dhi, const int *nvar, const amrex::Real *compdat, const int *clo, const int *chi, const int *ncomp, const int *lo, const int *hi, const int *domain_lo, const int *domain_hi, const amrex::Real *delta, const amrex::Real *xlo, const amrex::Real *time, const amrex::Real *dt, const int *bcrec, const int *level, const int *grid_no)
 This is dimension-agnostic. For example, dlo always has three elements. More...
 
using DeriveFuncFab = std::function< void(const amrex::Box &bx, amrex::FArrayBox &derfab, int dcomp, int ncomp, const amrex::FArrayBox &datafab, const amrex::Geometry &geomdata, amrex::Real time, const int *bcrec, int level)>
 
using DeriveFuncMF = std::function< void(amrex::MultiFab &der_mf, int dcomp, int ncomp, const amrex::MultiFab &data_mf, const amrex::Geometry &geomdata, amrex::Real time, const int *bcrec, int level)>
 
using BndryFuncFabDefault = std::function< void(Box const &bx, FArrayBox &data, int dcomp, int numcomp, Geometry const &geom, Real time, const Vector< BCRec > &bcr, int bcomp, int scomp)>
 
template<int T_NStructReal, int T_NStructInt = 0, int T_NArrayReal = 0, int T_NArrayInt = 0, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using AmrParticleContainer = AmrParticleContainer_impl< Particle< T_NStructReal, T_NStructInt >, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor >
 
using ErrorFuncDefault = void(*)(int *tag, AMREX_ARLIM_P(tlo), AMREX_ARLIM_P(thi), const int *tagval, const int *clearval, amrex::Real *data, AMREX_ARLIM_P(data_lo), AMREX_ARLIM_P(data_hi), const int *lo, const int *hi, const int *nvar, const int *domain_lo, const int *domain_hi, const amrex::Real *dx, const amrex::Real *xlo, const amrex::Real *prob_lo, const amrex::Real *time, const int *level)
 Type of extern "C" function called by ErrorRec to do tagging of cells for refinement. More...
 
using ErrorFunc2Default = void(*)(int *tag, AMREX_ARLIM_P(tlo), AMREX_ARLIM_P(thi), const int *tagval, const int *clearval, amrex::Real *data, AMREX_ARLIM_P(data_lo), AMREX_ARLIM_P(data_hi), const int *lo, const int *hi, const int *nvar, const int *domain_lo, const int *domain_hi, const amrex::Real *dx, const int *level, const amrex::Real *avg)
 
using ErrorFunc3DDefault = void(*)(int *tag, const int *tlo, const int *thi, const int *tagval, const int *clearval, amrex::Real *data, const int *data_lo, const int *data_hi, const int *lo, const int *hi, const int *nvar, const int *domain_lo, const int *domain_hi, const amrex::Real *dx, const amrex::Real *xlo, const amrex::Real *prob_lo, const amrex::Real *time, const int *level)
 Dimension-agnostic version whose index arrays always have three elements. Note that this is only implemented for the ErrorFunc class, not ErrorFunc2. More...
 
using PTR_TO_VOID_FUNC = void(*)()
 
using ErrorHandler = void(*)(const char *)
 
template<class T , std::size_t N>
using Array = std::array< T, N >
 
using RealArray = Array< Real, AMREX_SPACEDIM >
 
using IntArray = Array< int, AMREX_SPACEDIM >
 
using Box = BoxND< AMREX_SPACEDIM >
 
using IntVect = IntVectND< AMREX_SPACEDIM >
 
using IndexType = IndexTypeND< AMREX_SPACEDIM >
 
using BoxIndexer = BoxIndexerND< AMREX_SPACEDIM >
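 These aliases fix the dimension of the *ND class templates at AMREX_SPACEDIM. A small sketch:

```cpp
amrex::IntVect lo(0);                 // all components set to 0
amrex::IntVect hi(63);                // all components set to 63
amrex::Box bx(lo, hi);                // cell-centered index box
amrex::Box gbx = amrex::grow(bx, 1);  // grown by one cell on each side
```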
 
using BndryBATransformer = BATransformer
 
using DMRef = DistributionMapping::Ref
 
using RuntimeError = std::runtime_error
 
using TheFaArenaPointer = std::unique_ptr< char, TheFaArenaDeleter >
 
using cMultiFab = FabArray< BaseFab< GpuComplex< Real > > >
 
using FArrayBoxFactory = DefaultFabFactory< FArrayBox >
 
template<class T >
using DefaultAllocator = amrex::ArenaAllocator< T >
 
using gpuStream_t = cudaStream_t
 
using gpuDeviceProp_t = cudaDeviceProp
 
using gpuError_t = cudaError_t
 
using MultiFabId = FabArrayId
 
using fMultiFab = FabArray< BaseFab< float > >
 
using BndryFuncDefault = void(*)(Real *data, AMREX_ARLIM_P(lo), AMREX_ARLIM_P(hi), const int *dom_lo, const int *dom_hi, const Real *dx, const Real *grd_lo, const Real *time, const int *bc)
 
using BndryFunc3DDefault = void(*)(Real *data, const int *lo, const int *hi, const int *dom_lo, const int *dom_hi, const Real *dx, const Real *grd_lo, const Real *time, const int *bc)
 
using UserFillBox = void(*)(Box const &bx, Array4< Real > const &dest, int dcomp, int numcomp, GeometryData const &geom, Real time, const BCRec *bcr, int bcomp, int orig_comp)
 
using randState_t = curandState_t
 
using randGenerator_t = curandGenerator_t
 
template<class T , int N, int StartIndex = 0>
using SmallVector = SmallMatrix< T, N, 1, Order::F, StartIndex >
 
template<class T , int N, int StartIndex = 0>
using SmallRowVector = SmallMatrix< T, 1, N, Order::F, StartIndex >
 
template<class... Ts>
using Tuple = std::tuple< Ts... >
 
template<std::size_t I, typename T >
using TypeAt = typename detail::TypeListGet< I, T >::type
 Type at position I of a TypeList. More...
 
template<template< class... > class TParam, class... Types>
using TypeMultiplier = TypeAt< 0, decltype(detail::TApply< TParam >((TypeList<>{}+...+detail::SingleTypeMultiplier(Types{}))))>
 Return the first template argument with the later arguments applied to it. Types of the form T[N] are expanded to T, T, T, T, ... (N times with N >= 1). More...
 
template<bool B, class T = void>
using EnableIf_t = std::enable_if_t< B, T >
 
template<template< class... > class Op, class... Args>
using IsDetected = typename detail::Detector< detail::Nonesuch, void, Op, Args... >::value_t
 
template<template< class... > class Op, class... Args>
using Detected_t = typename detail::Detector< detail::Nonesuch, void, Op, Args... >::type
 
template<class Default , template< class... > class Op, class... Args>
using DetectedOr = typename detail::Detector< Default, void, Op, Args... >::type
 
template<class Expected , template< typename... > class Op, class... Args>
using IsDetectedExact = std::is_same< Expected, Detected_t< Op, Args... > >
 
template<class B >
using Negation = std::integral_constant< bool, !bool(B::value)>
 
using MaxResSteadyClock = std::conditional_t< std::chrono::high_resolution_clock::is_steady, std::chrono::high_resolution_clock, std::chrono::steady_clock >
 
template<typename K , typename V >
using KeyValuePair = ValLocPair< K, V >
 
using BndryData = BndryDataT< MultiFab >
 
using fBndryData = BndryDataT< fMultiFab >
 
using BndryRegister = BndryRegisterT< MultiFab >
 
using fBndryRegister = BndryRegisterT< fMultiFab >
 
using FabSet = FabSetT< MultiFab >
 
using fFabSet = FabSetT< fMultiFab >
 
using InterpBndryData = InterpBndryDataT< MultiFab >
 
using fInterpBndryData = InterpBndryDataT< fMultiFab >
 
using YAFluxRegister = YAFluxRegisterT< MultiFab >
 
using GMRESMLMG = GMRESMLMGT< MultiFab >
 
using MLABecLaplacian = MLABecLaplacianT< MultiFab >
 
using MLALaplacian = MLALaplacianT< MultiFab >
 
using MLCellABecLap = MLCellABecLapT< MultiFab >
 
using MLCellLinOp = MLCellLinOpT< MultiFab >
 
using MLCGSolver = MLCGSolverT< MultiFab >
 
using MLLinOp = MLLinOpT< MultiFab >
 
using MLMG = MLMGT< MultiFab >
 
using MLMGBndry = MLMGBndryT< MultiFab >
 
using MLPoisson = MLPoissonT< MultiFab >
 
template<int T_NStructReal, int T_NStructInt = 0, int T_NArrayReal = 0, int T_NArrayInt = 0, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using ParticleContainer = ParticleContainer_impl< Particle< T_NStructReal, T_NStructInt >, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor >
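 A sketch of instantiating the alias; the component counts are illustrative, and geom, dm, ba are assumed to exist:

```cpp
// 2 extra real struct members, 0 int struct members,
// 0 real SoA components, 1 int SoA component.
using MyPC = amrex::ParticleContainer<2, 0, 0, 1>;
MyPC pc(geom, dm, ba);
```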
 
template<bool is_const, int T_NStructReal, int T_NStructInt, int T_NArrayReal = 0, int T_NArrayInt = 0, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using ParIterBase = ParIterBase_impl< is_const, Particle< T_NStructReal, T_NStructInt >, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor >
 
template<bool is_const, int T_NArrayReal = 0, int T_NArrayInt = 0, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using ParIterBaseSoA = ParIterBase_impl< is_const, SoAParticle< T_NArrayReal, T_NArrayInt >, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor >
 
template<int T_NStructReal, int T_NStructInt = 0, int T_NArrayReal = 0, int T_NArrayInt = 0, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using ParConstIter = ParConstIter_impl< Particle< T_NStructReal, T_NStructInt >, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor >
 
template<int T_NArrayReal, int T_NArrayInt, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using ParConstIterSoA = ParConstIter_impl< SoAParticle< T_NArrayReal, T_NArrayInt >, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor >
 
template<int T_NStructReal, int T_NStructInt = 0, int T_NArrayReal = 0, int T_NArrayInt = 0, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using ParIter = ParIter_impl< Particle< T_NStructReal, T_NStructInt >, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor >
 
template<int T_NArrayReal, int T_NArrayInt, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using ParIterSoA = ParIter_impl< SoAParticle< T_NArrayReal, T_NArrayInt >, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor >
 
template<int T_NArrayReal, int T_NArrayInt, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using ParticleContainerPureSoA = ParticleContainer_impl< SoAParticle< T_NArrayReal, T_NArrayInt >, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor >
 
using TracerParIter = ParIter< AMREX_SPACEDIM >
 

Enumerations

enum  InterpEM_t { InterpE , InterpB }
 
enum class  FPExcept : std::uint8_t {
  none = 0B0000 , invalid = 0B0001 , zero = 0B0010 , overflow = 0B0100 ,
  all = 0B0111
}
 
enum class  FabType : int {
  covered = -1 , regular = 0 , singlevalued = 1 , multivalued = 2 ,
  undefined = 100
}
 
enum  FillType { FillLocally , FillRemotely , Unfillable }
 
enum class  RunOn { Gpu , Cpu , Device =Gpu , Host =Cpu }
 
enum  MakeType { make_alias , make_deep_copy }
 
enum class  Direction : int { AMREX_D_DECL =(x = 0, y = 1, z = 2) }
 
enum class  ButcherTableauTypes {
  User = 0 , ForwardEuler , Trapezoid , SSPRK3 ,
  RK4 , NumTypes
}
 
enum class  Order { C , F , RowMajor =C , ColumnMajor =F }
 
enum class  IntegratorTypes { ForwardEuler = 0 , ExplicitRungeKutta , Sundials }
 
enum  iparser_exe_t {
  IPARSER_EXE_NULL = 0 , IPARSER_EXE_NUMBER , IPARSER_EXE_SYMBOL , IPARSER_EXE_ADD ,
  IPARSER_EXE_SUB , IPARSER_EXE_MUL , IPARSER_EXE_DIV_F , IPARSER_EXE_DIV_B ,
  IPARSER_EXE_NEG , IPARSER_EXE_F1 , IPARSER_EXE_F2_F , IPARSER_EXE_F2_B ,
  IPARSER_EXE_ADD_VP , IPARSER_EXE_SUB_VP , IPARSER_EXE_MUL_VP , IPARSER_EXE_DIV_VP ,
  IPARSER_EXE_DIV_PV , IPARSER_EXE_ADD_PP , IPARSER_EXE_SUB_PP , IPARSER_EXE_MUL_PP ,
  IPARSER_EXE_DIV_PP , IPARSER_EXE_NEG_P , IPARSER_EXE_ADD_VN , IPARSER_EXE_SUB_VN ,
  IPARSER_EXE_MUL_VN , IPARSER_EXE_DIV_NV , IPARSER_EXE_DIV_VN , IPARSER_EXE_ADD_PN ,
  IPARSER_EXE_SUB_PN , IPARSER_EXE_MUL_PN , IPARSER_EXE_DIV_PN , IPARSER_EXE_IF ,
  IPARSER_EXE_JUMP
}
 
enum  iparser_f1_t { IPARSER_ABS = 1 }
 
enum  iparser_f2_t {
  IPARSER_FLRDIV = 1 , IPARSER_POW , IPARSER_GT , IPARSER_LT ,
  IPARSER_GEQ , IPARSER_LEQ , IPARSER_EQ , IPARSER_NEQ ,
  IPARSER_AND , IPARSER_OR , IPARSER_MIN , IPARSER_MAX
}
 
enum  iparser_f3_t { IPARSER_IF }
 
enum  iparser_node_t {
  IPARSER_NUMBER = 1 , IPARSER_SYMBOL , IPARSER_ADD , IPARSER_SUB ,
  IPARSER_MUL , IPARSER_DIV , IPARSER_NEG , IPARSER_F1 ,
  IPARSER_F2 , IPARSER_F3 , IPARSER_ASSIGN , IPARSER_LIST ,
  IPARSER_ADD_VP , IPARSER_ADD_PP , IPARSER_SUB_VP , IPARSER_SUB_PP ,
  IPARSER_MUL_VP , IPARSER_MUL_PP , IPARSER_DIV_VP , IPARSER_DIV_PV ,
  IPARSER_DIV_PP , IPARSER_NEG_P
}
 
enum  parser_exe_t {
  PARSER_EXE_NULL = 0 , PARSER_EXE_NUMBER , PARSER_EXE_SYMBOL , PARSER_EXE_ADD ,
  PARSER_EXE_SUB_F , PARSER_EXE_SUB_B , PARSER_EXE_MUL , PARSER_EXE_DIV_F ,
  PARSER_EXE_DIV_B , PARSER_EXE_F1 , PARSER_EXE_F2_F , PARSER_EXE_F2_B ,
  PARSER_EXE_ADD_VP , PARSER_EXE_SUB_VP , PARSER_EXE_MUL_VP , PARSER_EXE_DIV_VP ,
  PARSER_EXE_ADD_PP , PARSER_EXE_SUB_PP , PARSER_EXE_MUL_PP , PARSER_EXE_DIV_PP ,
  PARSER_EXE_ADD_VN , PARSER_EXE_SUB_VN , PARSER_EXE_MUL_VN , PARSER_EXE_DIV_VN ,
  PARSER_EXE_ADD_PN , PARSER_EXE_SUB_PN , PARSER_EXE_MUL_PN , PARSER_EXE_DIV_PN ,
  PARSER_EXE_SQUARE , PARSER_EXE_POWI , PARSER_EXE_IF , PARSER_EXE_JUMP
}
 
enum  parser_f1_t {
  PARSER_SQRT , PARSER_EXP , PARSER_LOG , PARSER_LOG10 ,
  PARSER_SIN , PARSER_COS , PARSER_TAN , PARSER_ASIN ,
  PARSER_ACOS , PARSER_ATAN , PARSER_SINH , PARSER_COSH ,
  PARSER_TANH , PARSER_ASINH , PARSER_ACOSH , PARSER_ATANH ,
  PARSER_ABS , PARSER_FLOOR , PARSER_CEIL , PARSER_COMP_ELLINT_1 ,
  PARSER_COMP_ELLINT_2 , PARSER_ERF
}
 
enum  parser_f2_t {
  PARSER_POW , PARSER_ATAN2 , PARSER_GT , PARSER_LT ,
  PARSER_GEQ , PARSER_LEQ , PARSER_EQ , PARSER_NEQ ,
  PARSER_AND , PARSER_OR , PARSER_HEAVISIDE , PARSER_JN ,
  PARSER_YN , PARSER_MIN , PARSER_MAX , PARSER_FMOD
}
 
enum  parser_f3_t { PARSER_IF }
 
enum  parser_node_t {
  PARSER_NUMBER , PARSER_SYMBOL , PARSER_ADD , PARSER_SUB ,
  PARSER_MUL , PARSER_DIV , PARSER_F1 , PARSER_F2 ,
  PARSER_F3 , PARSER_ASSIGN , PARSER_LIST
}
 
enum class  EBData_t : int {
  levelset , volfrac , centroid , bndrycent ,
  bndrynorm , bndryarea , AMREX_D_DECL =(apx, apy, apz) , AMREX_D_DECL =(fcx, fcy, fcz) ,
  AMREX_D_DECL =(ecx, ecy, ecz) , cellflag
}
 
enum class  EBSupport : int { none = 0 , basic = 1 , volume = 2 , full = 3 }
 
enum class  HypreSolverID { BoomerAMG , SSAMG }
 
enum class  CurlCurlStateType { x , b , r }
 
enum class  BottomSolver : int {
  Default , smoother , bicgstab , cg ,
  bicgcg , cgbicg , hypre , petsc
}
 
enum class  DataLayout { AoS = 0 , SoA }
 

Functions

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_first_order_extrap_cpu (amrex::Box const &bx, int nComp, amrex::Array4< const int > const &mask, amrex::Array4< amrex::Real > const &data) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_first_order_extrap_gpu (int i, int j, int k, int n, amrex::Box const &bx, amrex::Array4< const int > const &mask, amrex::Array4< amrex::Real > const &data) noexcept
 
std::ostream & operator<< (std::ostream &os, AmrMesh const &amr_mesh)
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
void ParticleToMesh (PC const &pc, const Vector< MultiFab * > &mf, int lev_min, int lev_max, F &&f, bool zero_out_input=true, bool vol_weight=true)
 
std::ostream & operator<< (std::ostream &os, const ErrorList &elst)
 
void InterpCrseFineBndryEMfield (InterpEM_t interp_type, const Array< MultiFab, AMREX_SPACEDIM > &crse, Array< MultiFab, AMREX_SPACEDIM > &fine, const Geometry &cgeom, const Geometry &fgeom, int ref_ratio)
 
void InterpCrseFineBndryEMfield (InterpEM_t interp_type, const Array< MultiFab const *, AMREX_SPACEDIM > &crse, const Array< MultiFab *, AMREX_SPACEDIM > &fine, const Geometry &cgeom, const Geometry &fgeom, int ref_ratio)
 
void FillPatchInterp (MultiFab &mf_fine_patch, int fcomp, MultiFab const &mf_crse_patch, int ccomp, int ncomp, IntVect const &ng, const Geometry &cgeom, const Geometry &fgeom, Box const &dest_domain, const IntVect &ratio, MFInterpolater *mapper, const Vector< BCRec > &bcs, int bcscomp)
 
template<typename Interp >
bool ProperlyNested (const IntVect &ratio, const IntVect &blocking_factor, int ngrow, const IndexType &boxType, Interp *mapper)
 Test if AMR grids are properly nested. More...
 
template<typename MF , typename BC >
std::enable_if_t< IsFabArray< MF >::value > FillPatchSingleLevel (MF &mf, IntVect const &nghost, Real time, const Vector< MF * > &smf, const Vector< Real > &stime, int scomp, int dcomp, int ncomp, const Geometry &geom, BC &physbcf, int bcfcomp)
 FillPatch with data from the current level. More...
 
template<typename MF , typename BC >
std::enable_if_t< IsFabArray< MF >::value > FillPatchSingleLevel (MF &mf, Real time, const Vector< MF * > &smf, const Vector< Real > &stime, int scomp, int dcomp, int ncomp, const Geometry &geom, BC &physbcf, int bcfcomp)
 FillPatch with data from the current level. More...
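 A hedged sketch of calling the overload above, assuming mf, geom, and source data S_old/S_new at times t_old/t_new are defined elsewhere:

```cpp
// Fill mf at time t by interpolating in time between S_old and S_new.
amrex::Vector<amrex::MultiFab*> smf{&S_old, &S_new};
amrex::Vector<amrex::Real> stime{t_old, t_new};
amrex::PhysBCFunctNoOp physbc;   // no physical BCs applied in this sketch
amrex::FillPatchSingleLevel(mf, t, smf, stime,
                            /*scomp=*/0, /*dcomp=*/0, /*ncomp=*/mf.nComp(),
                            geom, physbc, /*bcfcomp=*/0);
```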
 
template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > FillPatchTwoLevels (MF &mf, IntVect const &nghost, Real time, const Vector< MF * > &cmf, const Vector< Real > &ct, const Vector< MF * > &fmf, const Vector< Real > &ft, int scomp, int dcomp, int ncomp, const Geometry &cgeom, const Geometry &fgeom, BC &cbc, int cbccomp, BC &fbc, int fbccomp, const IntVect &ratio, Interp *mapper, const Vector< BCRec > &bcs, int bcscomp, const PreInterpHook &pre_interp={}, const PostInterpHook &post_interp={})
 FillPatch with data from the current level and the level below. More...
 
template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > FillPatchTwoLevels (MF &mf, Real time, const Vector< MF * > &cmf, const Vector< Real > &ct, const Vector< MF * > &fmf, const Vector< Real > &ft, int scomp, int dcomp, int ncomp, const Geometry &cgeom, const Geometry &fgeom, BC &cbc, int cbccomp, BC &fbc, int fbccomp, const IntVect &ratio, Interp *mapper, const Vector< BCRec > &bcs, int bcscomp, const PreInterpHook &pre_interp={}, const PostInterpHook &post_interp={})
 FillPatch with data from the current level and the level below. More...
 
template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > FillPatchTwoLevels (Array< MF *, AMREX_SPACEDIM > const &mf, IntVect const &nghost, Real time, const Vector< Array< MF *, AMREX_SPACEDIM > > &cmf, const Vector< Real > &ct, const Vector< Array< MF *, AMREX_SPACEDIM > > &fmf, const Vector< Real > &ft, int scomp, int dcomp, int ncomp, const Geometry &cgeom, const Geometry &fgeom, Array< BC, AMREX_SPACEDIM > &cbc, const Array< int, AMREX_SPACEDIM > &cbccomp, Array< BC, AMREX_SPACEDIM > &fbc, const Array< int, AMREX_SPACEDIM > &fbccomp, const IntVect &ratio, Interp *mapper, const Array< Vector< BCRec >, AMREX_SPACEDIM > &bcs, const Array< int, AMREX_SPACEDIM > &bcscomp, const PreInterpHook &pre_interp={}, const PostInterpHook &post_interp={})
 FillPatch for face variables with data from the current level and the level below. Sometimes we need to fillpatch all AMREX_SPACEDIM face MultiFabs together to satisfy constraints such as divergence preservation. More...
 
template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > FillPatchTwoLevels (Array< MF *, AMREX_SPACEDIM > const &mf, IntVect const &nghost, Real time, const Vector< Array< MF *, AMREX_SPACEDIM > > &cmf, const Vector< Real > &ct, const Vector< Array< MF *, AMREX_SPACEDIM > > &fmf, const Vector< Real > &ft, int scomp, int dcomp, int ncomp, const Geometry &cgeom, const Geometry &fgeom, Array< BC, AMREX_SPACEDIM > &cbc, int cbccomp, Array< BC, AMREX_SPACEDIM > &fbc, int fbccomp, const IntVect &ratio, Interp *mapper, const Array< Vector< BCRec >, AMREX_SPACEDIM > &bcs, int bcscomp, const PreInterpHook &pre_interp={}, const PostInterpHook &post_interp={})
 FillPatch for face variables with data from the current level and the level below. Sometimes we need to fillpatch all AMREX_SPACEDIM face MultiFabs together to satisfy constraints such as divergence preservation. More...
 
template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > FillPatchTwoLevels (Array< MF *, AMREX_SPACEDIM > const &mf, Real time, const Vector< Array< MF *, AMREX_SPACEDIM > > &cmf, const Vector< Real > &ct, const Vector< Array< MF *, AMREX_SPACEDIM > > &fmf, const Vector< Real > &ft, int scomp, int dcomp, int ncomp, const Geometry &cgeom, const Geometry &fgeom, Array< BC, AMREX_SPACEDIM > &cbc, int cbccomp, Array< BC, AMREX_SPACEDIM > &fbc, int fbccomp, const IntVect &ratio, Interp *mapper, const Array< Vector< BCRec >, AMREX_SPACEDIM > &bcs, int bcscomp, const PreInterpHook &pre_interp={}, const PostInterpHook &post_interp={})
 FillPatch for face variables with data from the current level and the level below. Sometimes we need to fillpatch all AMREX_SPACEDIM face MultiFabs together to satisfy constraints such as divergence preservation. More...
 
template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > InterpFromCoarseLevel (MF &mf, Real time, const MF &cmf, int scomp, int dcomp, int ncomp, const Geometry &cgeom, const Geometry &fgeom, BC &cbc, int cbccomp, BC &fbc, int fbccomp, const IntVect &ratio, Interp *mapper, const Vector< BCRec > &bcs, int bcscomp, const PreInterpHook &pre_interp={}, const PostInterpHook &post_interp={})
 Fill with interpolation of coarse level data. More...
 
template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > InterpFromCoarseLevel (MF &mf, IntVect const &nghost, Real time, const MF &cmf, int scomp, int dcomp, int ncomp, const Geometry &cgeom, const Geometry &fgeom, BC &cbc, int cbccomp, BC &fbc, int fbccomp, const IntVect &ratio, Interp *mapper, const Vector< BCRec > &bcs, int bcscomp, const PreInterpHook &pre_interp={}, const PostInterpHook &post_interp={})
 Fill with interpolation of coarse level data. More...
 
template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > InterpFromCoarseLevel (Array< MF *, AMREX_SPACEDIM > const &mf, Real time, const Array< MF *, AMREX_SPACEDIM > &cmf, int scomp, int dcomp, int ncomp, const Geometry &cgeom, const Geometry &fgeom, Array< BC, AMREX_SPACEDIM > &cbc, int cbccomp, Array< BC, AMREX_SPACEDIM > &fbc, int fbccomp, const IntVect &ratio, Interp *mapper, const Array< Vector< BCRec >, AMREX_SPACEDIM > &bcs, int bcscomp, const PreInterpHook &pre_interp={}, const PostInterpHook &post_interp={})
 Fill face variables with data from the coarse level. Sometimes we need to fillpatch all AMREX_SPACEDIM face MultiFabs together to satisfy constraints such as divergence preservation. More...
 
template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > InterpFromCoarseLevel (Array< MF *, AMREX_SPACEDIM > const &mf, IntVect const &nghost, Real time, const Array< MF *, AMREX_SPACEDIM > &cmf, int scomp, int dcomp, int ncomp, const Geometry &cgeom, const Geometry &fgeom, Array< BC, AMREX_SPACEDIM > &cbc, int cbccomp, Array< BC, AMREX_SPACEDIM > &fbc, int fbccomp, const IntVect &ratio, Interp *mapper, const Array< Vector< BCRec >, AMREX_SPACEDIM > &bcs, int bcscomp, const PreInterpHook &pre_interp={}, const PostInterpHook &post_interp={})
 Fill face variables with data from the coarse level. Sometimes we need to fillpatch all AMREX_SPACEDIM face MultiFabs together to satisfy constraints such as divergence preservation. More...
 
template<typename MF , typename Interp >
std::enable_if_t< IsFabArray< MF >::value > InterpFromCoarseLevel (MF &mf, IntVect const &nghost, IntVect const &nghost_outside_domain, const MF &cmf, int scomp, int dcomp, int ncomp, const Geometry &cgeom, const Geometry &fgeom, const IntVect &ratio, Interp *mapper, const Vector< BCRec > &bcs, int bcscomp)
 Fill with interpolation of coarse level data. More...
 
template<typename MF >
std::enable_if_t< IsFabArray< MF >::value > FillPatchSingleLevel (MF &mf, IntVect const &nghost, Real time, const Vector< MF * > &smf, IntVect const &snghost, const Vector< Real > &stime, int scomp, int dcomp, int ncomp, const Geometry &geom)
 FillPatch with data from the current level. More...
 
template<typename MF , typename Interp >
std::enable_if_t< IsFabArray< MF >::value > FillPatchTwoLevels (MF &mf, IntVect const &nghost, IntVect const &nghost_outside_domain, Real time, const Vector< MF * > &cmf, const Vector< Real > &ct, const Vector< MF * > &fmf, const Vector< Real > &ft, int scomp, int dcomp, int ncomp, const Geometry &cgeom, const Geometry &fgeom, const IntVect &ratio, Interp *mapper, const Vector< BCRec > &bcs, int bcscomp)
 FillPatch with data from the current level and the level below. More...
 
template<typename MF , typename BC , typename Interp >
std::enable_if_t< IsFabArray< MF >::value > FillPatchNLevels (MF &mf, int level, const IntVect &nghost, Real time, const Vector< Vector< MF * >> &smf, const Vector< Vector< Real >> &st, int scomp, int dcomp, int ncomp, const Vector< Geometry > &geom, Vector< BC > &bc, int bccomp, const Vector< IntVect > &ratio, Interp *mapper, const Vector< BCRec > &bcr, int bcrcomp)
 FillPatch with data from AMR levels. More...
 
template<typename MF , typename Interp >
std::enable_if_t< IsFabArray< MF >::value &&!std::is_same_v< Interp, MFInterpolater > > FillPatchInterp (MF &mf_fine_patch, int fcomp, MF const &mf_crse_patch, int ccomp, int ncomp, IntVect const &ng, const Geometry &cgeom, const Geometry &fgeom, Box const &dest_domain, const IntVect &ratio, Interp *mapper, const Vector< BCRec > &bcs, int bcscomp)
 
template<typename MF >
std::enable_if_t< IsFabArray< MF >::value > FillPatchInterp (MF &mf_fine_patch, int fcomp, MF const &mf_crse_patch, int ccomp, int ncomp, IntVect const &ng, const Geometry &cgeom, const Geometry &fgeom, Box const &dest_domain, const IntVect &ratio, InterpBase *mapper, const Vector< BCRec > &bcs, int bcscomp)
 
template<typename MF , typename iMF , typename Interp >
std::enable_if_t< IsFabArray< MF >::value &&!std::is_same_v< Interp, MFInterpolater > > InterpFace (Interp *interp, MF const &mf_crse_patch, int crse_comp, MF &mf_refined_patch, int fine_comp, int ncomp, const IntVect &ratio, const iMF &solve_mask, const Geometry &crse_geom, const Geometry &fine_geom, int bcscomp, RunOn gpu_or_cpu, const Vector< BCRec > &bcs)
 
template<typename MF , typename iMF >
std::enable_if_t< IsFabArray< MF >::value > InterpFace (InterpBase *interp, MF const &mf_crse_patch, int crse_comp, MF &mf_refined_patch, int fine_comp, int ncomp, const IntVect &ratio, const iMF &solve_mask, const Geometry &crse_geom, const Geometry &fine_geom, int bccomp, RunOn gpu_or_cpu, const Vector< BCRec > &bcs)
 
AMREX_GPU_HOST_DEVICE void fluxreg_fineadd (Box const &bx, Array4< Real > const &reg, const int rcomp, Array4< Real const > const &flx, const int fcomp, const int ncomp, const int, Dim3 const &ratio, const Real mult) noexcept
 Add fine-grid flux to the flux register. The flux array is a fine-grid, edge-based object; the register is a coarse-grid, edge-based object. It is assumed that the coarsened flux region contains the register region. More...
 
AMREX_GPU_HOST_DEVICE void fluxreg_fineareaadd (Box const &bx, Array4< Real > const &reg, const int rcomp, Array4< Real const > const &area, Array4< Real const > const &flx, const int fcomp, const int ncomp, const int, Dim3 const &ratio, const Real mult) noexcept
 Add fine-grid flux times area to the flux register. The flux array is a fine-grid, edge-based object; the register is a coarse-grid, edge-based object. It is assumed that the coarsened flux region contains the register region. More...
 
AMREX_GPU_HOST_DEVICE void fluxreg_reflux (Box const &bx, Array4< Real > const &s, const int scomp, Array4< Real const > const &f, Array4< Real const > const &v, const int ncomp, const Real mult, const Orientation face) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void pcinterp_interp (Box const &bx, Array4< Real > const &fine, const int fcomp, const int ncomp, Array4< Real const > const &crse, const int ccomp, IntVect const &ratio) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void nodebilin_slopes (Box const &bx, Array4< T > const &slope, Array4< T const > const &u, const int icomp, const int ncomp, IntVect const &ratio) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void nodebilin_interp (Box const &bx, Array4< T > const &fine, const int fcomp, const int ncomp, Array4< T const > const &slope, Array4< T const > const &crse, const int ccomp, IntVect const &ratio) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void facediv_face_interp (int, int, int, int, int, int, Array4< T const > const &, Array4< T > const &, Array4< const int > const &, IntVect const &) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void facediv_int (int, int, int, int, GpuArray< Array4< T >, AMREX_SPACEDIM > const &, IntVect const &, GpuArray< Real, AMREX_SPACEDIM > const &) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void face_linear_interp_x (int i, int, int, int n, Array4< T > const &fine, Array4< T const > const &crse, IntVect const &ratio) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void ccquartic_interp (int i, int, int, int n, Array4< Real const > const &crse, Array4< Real > const &fine) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void face_linear_interp_y (int i, int j, int, int n, Array4< T > const &fine, Array4< T const > const &crse, IntVect const &ratio) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void ccprotect_2d (int ic, int jc, int, int nvar, Box const &fine_bx, IntVect const &ratio, GeometryData cs_geomdata, GeometryData fn_geomdata, Array4< T > const &fine, Array4< T const > const &fine_state) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void face_linear_interp_z (int i, int j, int k, int n, Array4< T > const &fine, Array4< T const > const &crse, IntVect const &ratio) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void ccprotect_3d (int ic, int jc, int kc, int nvar, Box const &fine_bx, IntVect const &ratio, Array4< T > const &fine, Array4< T const > const &fine_state) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void face_linear_face_interp_x (int fi, int fj, int fk, int n, Array4< T > const &fine, Array4< T const > const &crse, Array4< int const > const &mask, IntVect const &ratio) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void face_linear_face_interp_y (int fi, int fj, int fk, int n, Array4< T > const &fine, Array4< T const > const &crse, Array4< int const > const &mask, IntVect const &ratio) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void face_linear_face_interp_z (int fi, int fj, int fk, int n, Array4< T > const &fine, Array4< T const > const &crse, Array4< int const > const &mask, IntVect const &ratio) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void face_cons_linear_face_interp (int i, int j, int k, int n, Array4< T > const &fine, Array4< T const > const &crse, Array4< int const > const &mask, IntVect const &ratio, Box const &per_grown_domain, int dim) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void face_linear_interp_x (int i, int j, int k, int n, amrex::Array4< amrex::Real > const &fine, IntVect const &ratio) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void face_linear_interp_y (int i, int j, int k, int n, amrex::Array4< amrex::Real > const &fine, IntVect const &ratio) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void face_linear_interp_z (int i, int j, int k, int n, amrex::Array4< amrex::Real > const &fine, IntVect const &ratio) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void cell_quartic_interp_x (int i, int j, int k, int n, Array4< Real > const &fine, Array4< Real const > const &crse) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void cell_quartic_interp_y (int i, int j, int k, int n, Array4< Real > const &fine, Array4< Real const > const &crse) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void cell_quartic_interp_z (int i, int j, int k, int n, Array4< Real > const &fine, Array4< Real const > const &crse) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void interp_face_reg (int i, int j, IntVect const &rr, Array4< Real > const &fine, int scomp, Array4< Real const > const &crse, Array4< Real > const &slope, int ncomp, Box const &domface, int idim)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void interp_face_reg (int i, int j, int k, IntVect const &rr, Array4< Real > const &fine, int scomp, Array4< Real const > const &crse, Array4< Real > const &slope, int ncomp, Box const &domface, int idim)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mf_cell_cons_lin_interp_limit_minmax_llslope (int i, int, int, Array4< Real > const &slope, Array4< Real const > const &u, int scomp, int ncomp, Box const &domain, IntVect const &ratio, BCRec const *bc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mf_cell_cons_lin_interp_llslope (int i, int, int, Array4< Real > const &slope, Array4< Real const > const &u, int scomp, int ncomp, Box const &domain, IntVect const &, BCRec const *bc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mf_cell_cons_lin_interp_mcslope (int i, int, int, int ns, Array4< Real > const &slope, Array4< Real const > const &u, int scomp, int, Box const &domain, IntVect const &ratio, BCRec const *bc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mf_cell_cons_lin_interp (int i, int, int, int ns, Array4< Real > const &fine, int fcomp, Array4< Real const > const &slope, Array4< Real const > const &crse, int ccomp, int, IntVect const &ratio) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mf_cell_cons_lin_interp_mcslope_sph (int i, int ns, Array4< Real > const &slope, Array4< Real const > const &u, int scomp, int, Box const &domain, IntVect const &ratio, BCRec const *bc, Real drf, Real rlo) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mf_cell_cons_lin_interp_sph (int i, int ns, Array4< Real > const &fine, int fcomp, Array4< Real const > const &slope, Array4< Real const > const &crse, int ccomp, int, IntVect const &ratio, Real drf, Real rlo) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mf_cell_bilin_interp (int i, int, int, int n, Array4< T > const &fine, int fcomp, Array4< T const > const &crse, int ccomp, IntVect const &ratio) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mf_nodebilin_interp (int i, int, int, int n, Array4< Real > const &fine, int fcomp, Array4< Real const > const &crse, int ccomp, IntVect const &ratio) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mf_cell_cons_lin_interp_mcslope_rz (int i, int j, int ns, Array4< Real > const &slope, Array4< Real const > const &u, int scomp, int ncomp, Box const &domain, IntVect const &ratio, BCRec const *bc, Real drf, Real rlo) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mf_cell_cons_lin_interp_rz (int i, int j, int ns, Array4< Real > const &fine, int fcomp, Array4< Real const > const &slope, Array4< Real const > const &crse, int ccomp, int ncomp, IntVect const &ratio, Real drf, Real rlo) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mf_cell_quadratic_calcslope (int i, int j, int, int n, Array4< Real const > const &crse, int ccomp, Array4< Real > const &slope, Box const &domain, BCRec const *bc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mf_cell_quadratic_interp (int i, int j, int, int n, Array4< Real > const &fine, int fcomp, Array4< Real const > const &crse, int ccomp, Array4< Real const > const &slope, IntVect const &ratio) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mf_cell_quadratic_interp_rz (int i, int j, int, int n, Array4< Real > const &fine, int fcomp, Array4< Real const > const &crse, int ccomp, Array4< Real const > const &slope, IntVect const &ratio, GeometryData const &cs_geomdata, GeometryData const &fn_geomdata) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mf_compute_slopes_x (int i, int j, int k, Array4< Real const > const &u, int nu, Box const &domain, BCRec const &bc)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mf_compute_slopes_y (int i, int j, int k, Array4< Real const > const &u, int nu, Box const &domain, BCRec const &bc)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mf_compute_slopes_z (int i, int j, int k, Array4< Real const > const &u, int nu, Box const &domain, BCRec const &bc)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mf_cell_quadratic_compute_slopes_xx (int i, int j, int k, Array4< Real const > const &u, int nu, Box const &domain, BCRec const &bc)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mf_cell_quadratic_compute_slopes_yy (int i, int j, int k, Array4< Real const > const &u, int nu, Box const &domain, BCRec const &bc)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mf_cell_quadratic_compute_slopes_zz (int i, int j, int k, Array4< Real const > const &u, int nu, Box const &domain, BCRec const &bc)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mf_cell_quadratic_compute_slopes_xy (int i, int j, int k, Array4< Real const > const &u, int nu, Box const &domain, BCRec const &bc)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mf_cell_quadratic_compute_slopes_xz (int i, int j, int k, Array4< Real const > const &u, int nu, Box const &domain, BCRec const &bc)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mf_cell_quadratic_compute_slopes_yz (int i, int j, int k, Array4< Real const > const &u, int nu, Box const &domain, BCRec const &bc)
 
FPExcept getFPExcept ()
 Return currently enabled FP exceptions. Linux only. More...
 
FPExcept setFPExcept (FPExcept excepts)
 
FPExcept disableFPExcept (FPExcept excepts)
Disable FP exceptions. Linux only. More...
 
FPExcept enableFPExcept (FPExcept excepts)
Enable FP exceptions. Linux only. More...
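
A minimal sketch of guarding a critical section with FP traps (assuming the FPExcept flags invalid and zero exist, as in recent AMReX; run_critical_section is a hypothetical user function):

    FPExcept old_excepts = getFPExcept();                 // save current mask (Linux only)
    enableFPExcept(FPExcept::invalid | FPExcept::zero);   // trap NaN production and divide-by-zero
    run_critical_section();
    setFPExcept(old_excepts);                             // restore the saved mask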
 
std::string Version ()
 
AMReX * Initialize (MPI_Comm mpi_comm, std::ostream &a_osout=std::cout, std::ostream &a_oserr=std::cerr, ErrorHandler a_errhandler=nullptr)
 
AMReX * Initialize (int &argc, char **&argv, bool build_parm_parse=true, MPI_Comm mpi_comm=MPI_COMM_WORLD, const std::function< void()> &func_parm_parse={}, std::ostream &a_osout=std::cout, std::ostream &a_oserr=std::cerr, ErrorHandler a_errhandler=nullptr)
 
bool Initialized ()
Returns true if there is any currently active and initialized AMReX instance (i.e., one for which amrex::Initialize has been called and amrex::Finalize has not); otherwise false. More...
 
void Finalize (AMReX *pamrex)
 
void Finalize ()
 
void ExecOnFinalize (std::function< void()>)
 We maintain a stack of functions that need to be called in Finalize(). The functions are called in LIFO order. The idea here is to allow classes to clean up any "global" state that they maintain when we're exiting from AMReX. More...
 
void ExecOnInitialize (std::function< void()>)
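
Taken together, these form the standard lifetime pattern; a minimal sketch of a main() that brackets all AMReX work and registers a LIFO cleanup callback:

    #include <AMReX.H>

    int main (int argc, char* argv[])
    {
        amrex::Initialize(argc, argv);
        amrex::ExecOnFinalize([] () {
            // runs inside amrex::Finalize(), after any later-registered callbacks
        });
        // ... all AMReX objects live between Initialize and Finalize ...
        amrex::Finalize();
    }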
 
template<class... Ts>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void ignore_unused (const Ts &...)
 This shuts up the compiler about unused variables. More...
 
void Error (const std::string &msg)
 Print out message to cerr and exit via amrex::Abort(). More...
 
void Error_host (const char *type, const char *msg)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void Error (const char *msg=nullptr)
 
void Warning (const std::string &msg)
 Print out warning message to cerr. More...
 
void Warning_host (const char *msg)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void Warning (const char *msg)
 
void Abort (const std::string &msg)
 Print out message to cerr and exit via abort(). More...
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void Abort (const char *msg=nullptr)
 
void Assert_host (const char *EX, const char *file, int line, const char *msg)
 Prints assertion failed messages to cerr and exits via abort(). Intended for use by the BL_ASSERT() macro in <AMReX_BLassert.H>. More...
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void Assert (const char *EX, const char *file, int line, const char *msg=nullptr)
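 
A short sketch of the intended division of labor between these routines (BL_ASSERT is the macro named above; dt is a hypothetical variable):

    if (dt <= 0.0) {
        amrex::Abort("advance: non-positive dt");    // message to stderr, then abort()
    } else if (dt < 1.e-12) {
        amrex::Warning("advance: dt is very small"); // message only; execution continues
    }
    BL_ASSERT(dt > 0.0);                             // checked in debug builds via amrex::Assert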
 
void write_to_stderr_without_buffering (const char *str)
This is used by amrex::Error(), amrex::Abort(), and amrex::Assert() to ensure that no additional heap-based memory is allocated when writing the message to stderr. More...
 
void SetErrorHandler (ErrorHandler f)
 
std::ostream & OutStream ()
 
std::ostream & ErrorStream ()
 
int Verbose () noexcept
 
void SetVerbose (int v) noexcept
 
bool InitSNaN () noexcept
 
void SetInitSNaN (bool v) noexcept
 
std::string get_command ()
 
int command_argument_count ()
 
std::string get_command_argument (int number)
Get command line arguments. The executable name is the zero-th argument. Returns an empty string if there are not that many arguments. More...
 
void GccPlacater ()
 
bool any (FPExcept a)
 
FPExcept operator| (FPExcept a, FPExcept b)
 
FPExcept operator& (FPExcept a, FPExcept b)
 
template<class T >
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE const T & min (const T &a, const T &b) noexcept
 
template<class T , class ... Ts>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE const T & min (const T &a, const T &b, const Ts &... c) noexcept
 
template<class T >
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE const T & max (const T &a, const T &b) noexcept
 
template<class T , class ... Ts>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE const T & max (const T &a, const T &b, const Ts &... c) noexcept
 
template<class T >
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE T elemwiseMin (T const &a, T const &b) noexcept
 
template<class T , class ... Ts>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE T elemwiseMin (const T &a, const T &b, const Ts &... c) noexcept
 
template<class T >
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE T elemwiseMax (T const &a, T const &b) noexcept
 
template<class T , class ... Ts>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE T elemwiseMax (const T &a, const T &b, const Ts &... c) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void Swap (T &t1, T &t2) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE const T & Clamp (const T &v, const T &lo, const T &hi)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE std::enable_if_t< std::is_floating_point_v< T >, bool > almostEqual (T x, T y, int ulp=2)
 
template<class T , class F , std::enable_if_t< std::is_floating_point_v< T >, int > FOO = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE T bisect (T lo, T hi, F f, T tol=1e-12, int max_iter=100)
 
template<typename T , typename I , std::enable_if_t< std::is_integral_v< I >, int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE I bisect (T const *d, I lo, I hi, T const &v)
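
Both overloads are usable on host and device; a sketch of each (the index convention d[i] <= v < d[i+1] for the sorted-array overload is an assumption):

    // Root finding: solve x*x - 2 = 0 on [1,2] by bisection to within tol.
    Real root = bisect(Real(1.0), Real(2.0),
                       [] (Real x) { return x*x - Real(2.0); });

    // Sorted-table search: locate the interval containing v = 2.5.
    Real d[] = {0.0, 1.0, 4.0, 9.0};
    int i = bisect(d, 0, 3, Real(2.5));   // expected to yield 1 under that convention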
 
template<typename ItType , typename ValType >
AMREX_GPU_HOST_DEVICE ItType upper_bound (ItType first, ItType last, const ValType &val)
 
template<typename ItType , typename ValType >
AMREX_GPU_HOST_DEVICE ItType lower_bound (ItType first, ItType last, const ValType &val)
 
template<typename ItType , typename ValType , std::enable_if_t< std::is_floating_point_v< typename std::iterator_traits< ItType >::value_type > &&std::is_floating_point_v< ValType >, int > = 0>
AMREX_GPU_HOST_DEVICE void linspace (ItType first, const ItType &last, const ValType &start, const ValType &stop)
 
template<typename ItType , typename ValType , std::enable_if_t< std::is_floating_point_v< typename std::iterator_traits< ItType >::value_type > &&std::is_floating_point_v< ValType >, int > = 0>
AMREX_GPU_HOST_DEVICE void logspace (ItType first, const ItType &last, const ValType &start, const ValType &stop, const ValType &base)
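
A sketch (assuming NumPy-style semantics: endpoints included, and logspace filling base^x for equally spaced exponents x):

    Real xs[5];
    linspace(xs, xs + 5, Real(0.0), Real(1.0));               // 0.0, 0.25, 0.5, 0.75, 1.0

    Real ys[4];
    logspace(ys, ys + 4, Real(0.0), Real(3.0), Real(10.0));   // 1, 10, 100, 1000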
 
template<class T , std::enable_if_t< std::is_same_v< std::decay_t< T >, std::uint8_t >||std::is_same_v< std::decay_t< T >, std::uint16_t >||std::is_same_v< std::decay_t< T >, std::uint32_t >||std::is_same_v< std::decay_t< T >, std::uint64_t >, int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int clz (T x) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int clz_generic (std::uint8_t x) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int clz_generic (std::uint16_t x) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int clz_generic (std::uint32_t x) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int clz_generic (std::uint64_t x) noexcept
 
Arena * The_Arena ()
 
Arena * The_Async_Arena ()
 
Arena * The_Device_Arena ()
 
Arena * The_Managed_Arena ()
 
Arena * The_Pinned_Arena ()
 
Arena * The_Cpu_Arena ()
 
Arena * The_Comms_Arena ()
 
std::size_t aligned_size (std::size_t align_requirement, std::size_t size) noexcept
 Given a minimum required size of size bytes, this returns the next largest arena size that will align to align_requirement bytes. More...
 
bool is_aligned (const void *p, std::size_t alignment) noexcept
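
A sketch of a raw arena allocation (The_Arena() returns device memory in GPU builds and host memory otherwise; the 16-byte alignment checked here is an assumption, not a documented guarantee):

    void* p = The_Arena()->alloc(256);          // 256 bytes from the default arena
    bool ok = is_aligned(p, 16);
    The_Arena()->free(p);

    std::size_t sz = aligned_size(64, 1000);    // rounds 1000 up to 1024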
 
template<class T , typename = typename T::FABType>
std::array< T *, AMREX_SPACEDIM > GetArrOfPtrs (std::array< T, AMREX_SPACEDIM > &a) noexcept
 
template<class T >
std::array< T *, AMREX_SPACEDIM > GetArrOfPtrs (const std::array< std::unique_ptr< T >, AMREX_SPACEDIM > &a) noexcept
 
template<class T >
std::array< T const *, AMREX_SPACEDIM > GetArrOfConstPtrs (const std::array< T, AMREX_SPACEDIM > &a) noexcept
 
template<class T >
std::array< T const *, AMREX_SPACEDIM > GetArrOfConstPtrs (const std::array< T *, AMREX_SPACEDIM > &a) noexcept
 
template<class T >
std::array< T const *, AMREX_SPACEDIM > GetArrOfConstPtrs (const std::array< std::unique_ptr< T >, AMREX_SPACEDIM > &a) noexcept
 
XDim3 makeXDim3 (const Array< Real, AMREX_SPACEDIM > &a) noexcept
 
template<class Tto , class Tfrom >
AMREX_GPU_HOST_DEVICE Array4< Tto > ToArray4 (Array4< Tfrom > const &a_in) noexcept
 
template<class T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 lbound (Array4< T > const &a) noexcept
 
template<class T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 ubound (Array4< T > const &a) noexcept
 
template<class T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 length (Array4< T > const &a) noexcept
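
These bound helpers support the common pattern of looping over the cells covered by an Array4; a sketch (ptr, bx, and ncomp are assumed to be user-provided):

    Array4<Real> a = makeArray4(ptr, bx, ncomp);  // non-owning view over bx
    Dim3 lo = lbound(a);
    Dim3 hi = ubound(a);                          // inclusive upper bound
    for (int k = lo.z; k <= hi.z; ++k) {
      for (int j = lo.y; j <= hi.y; ++j) {
        for (int i = lo.x; i <= hi.x; ++i) {
            a(i,j,k) = 0.0;                       // component 0
        }
      }
    }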
 
template<typename T >
std::ostream & operator<< (std::ostream &os, const Array4< T > &a)
 
template<typename T >
PolymorphicArray4< T > makePolymorphic (Array4< T > const &a)
 
void BaseFab_Initialize ()
 
void BaseFab_Finalize ()
 
Long TotalBytesAllocatedInFabs () noexcept
 
Long TotalBytesAllocatedInFabsHWM () noexcept
 
Long TotalCellsAllocatedInFabs () noexcept
 
Long TotalCellsAllocatedInFabsHWM () noexcept
 
void ResetTotalBytesAllocatedInFabsHWM () noexcept
 
void update_fab_stats (Long n, Long s, size_t szt) noexcept
 
void update_fab_stats (Long n, Long s, std::size_t szt) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Array4< T > makeArray4 (T *p, Box const &bx, int ncomp) noexcept
 
template<typename T >
std::enable_if_t< std::is_arithmetic_v< T > > placementNew (T *const, Long)
 
template<typename T >
std::enable_if_t< std::is_trivially_default_constructible_v< T > &&!std::is_arithmetic_v< T > > placementNew (T *const ptr, Long n)
 
template<typename T >
std::enable_if_t<!std::is_trivially_default_constructible_v< T > > placementNew (T *const ptr, Long n)
 
template<typename T >
std::enable_if_t< std::is_trivially_destructible_v< T > > placementDelete (T *const, Long)
 
template<typename T >
std::enable_if_t<!std::is_trivially_destructible_v< T > > placementDelete (T *const ptr, Long n)
 
template<class Tto , class Tfrom >
AMREX_GPU_HOST_DEVICE void cast (BaseFab< Tto > &tofab, BaseFab< Tfrom > const &fromfab, Box const &bx, SrcComp scomp, DestComp dcomp, NumComps ncomp) noexcept
 
template<typename STRUCT , typename F , std::enable_if_t<(sizeof(STRUCT)<=36 *8) &&AMREX_IS_TRIVIALLY_COPYABLE(STRUCT) &&std::is_trivially_destructible_v< STRUCT >, int > FOO = 0>
void fill (BaseFab< STRUCT > &aos_fab, F const &f)
 
void setBC (const Box &bx, const Box &domain, int src_comp, int dest_comp, int ncomp, const Vector< BCRec > &bc_dom, Vector< BCRec > &bcr) noexcept
 Function for setting array of BCs. More...
 
std::ostream & operator<< (std::ostream &os, const BCRec &b)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void setBC (const Box &bx, const Box &domain, const BCRec &bc_dom, BCRec &bcr) noexcept
 Function for setting a BC. More...
 
void FillDomainBoundary (MultiFab &phi, const Geometry &geom, const Vector< BCRec > &bc)
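
A sketch of filling domain ghost cells with first-order extrapolation on every face (phi and geom are assumed to be an existing MultiFab and Geometry; BCType::foextrap is assumed from AMReX_BC_TYPES.H):

    Vector<BCRec> bcs(phi.nComp());
    for (auto& b : bcs) {
        for (int dir = 0; dir < AMREX_SPACEDIM; ++dir) {
            b.setLo(dir, BCType::foextrap);
            b.setHi(dir, BCType::foextrap);
        }
    }
    FillDomainBoundary(phi, geom, bcs);   // only ghost cells outside the domain are touched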
 
void AllGatherBoxes (Vector< Box > &bxs, int n_extra_reserve)
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > grow (const BoxND< dim > &b, int i) noexcept
 Grow BoxND in all directions by given amount. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > grow (const BoxND< dim > &b, const IntVectND< dim > &v) noexcept
 Grow BoxND in each direction by specified amount. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > grow (const BoxND< dim > &b, int idir, int n_cell) noexcept
 Grow BoxND in direction idir be n_cell cells. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > grow (const BoxND< dim > &b, Direction d, int n_cell) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > growLo (const BoxND< dim > &b, int idir, int n_cell) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > growLo (const BoxND< dim > &b, Direction d, int n_cell) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > growHi (const BoxND< dim > &b, int idir, int n_cell) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > growHi (const BoxND< dim > &b, Direction d, int n_cell) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > coarsen (const BoxND< dim > &b, int ref_ratio) noexcept
Coarsen BoxND by given (positive) refinement ratio. NOTE: if type(dir) = CELL centered: lo <- lo/ratio and hi <- hi/ratio. NOTE: if type(dir) = NODE centered: lo <- lo/ratio and hi <- hi/ratio + ((hi%ratio)==0 ? 0 : 1). That is, refinement of the coarsened BoxND must contain the original BoxND. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > coarsen (const BoxND< dim > &b, const IntVectND< dim > &ref_ratio) noexcept
Coarsen BoxND by given (positive) refinement ratio. NOTE: if type(dir) = CELL centered: lo <- lo/ratio and hi <- hi/ratio. NOTE: if type(dir) = NODE centered: lo <- lo/ratio and hi <- hi/ratio + ((hi%ratio)==0 ? 0 : 1). That is, refinement of the coarsened BoxND must contain the original BoxND. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > refine (const BoxND< dim > &b, int ref_ratio) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > refine (const BoxND< dim > &b, const IntVectND< dim > &ref_ratio) noexcept
 Refine BoxND by given (positive) refinement ratio. NOTE: if type(dir) = CELL centered: lo <- lo*ratio and hi <- (hi+1)*ratio - 1. NOTE: if type(dir) = NODE centered: lo <- lo*ratio and hi <- hi*ratio. More...
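
A sketch of how these free functions compose for a cell-centered box:

    Box b(IntVect(0), IntVect(15));        // 16 cells per direction
    Box gb = grow(b, 2);                   // 2 ghost cells on every side
    Box fb = refine(b, 2);                 // (0,...) to (31,...): lo*2, (hi+1)*2-1
    Box cb = coarsen(fb, 2);               // recovers b
    Box nb = surroundingNodes(b);          // nodal box enclosing b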
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > shift (const BoxND< dim > &b, int dir, int nzones) noexcept
 Return a BoxND with indices shifted by nzones in dir direction. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > shift (const BoxND< dim > &b, const IntVectND< dim > &nzones) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > surroundingNodes (const BoxND< dim > &b, int dir) noexcept
 Returns a BoxND with NODE based coordinates in direction dir that encloses BoxND b. NOTE: equivalent to b.convert(dir,NODE) NOTE: error if b.type(dir) == NODE. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > surroundingNodes (const BoxND< dim > &b, Direction d) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > surroundingNodes (const BoxND< dim > &b) noexcept
 Returns a BoxND with NODE based coordinates in all directions that encloses BoxND b. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > convert (const BoxND< dim > &b, const IntVectND< dim > &typ) noexcept
 Returns a BoxND with different type. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > convert (const BoxND< dim > &b, const IndexTypeND< dim > &typ) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > enclosedCells (const BoxND< dim > &b, int dir) noexcept
 Returns a BoxND with CELL based coordinates in direction dir that is enclosed by b. NOTE: equivalent to b.convert(dir,CELL) NOTE: error if b.type(dir) == CELL. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > enclosedCells (const BoxND< dim > &b, Direction d) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > enclosedCells (const BoxND< dim > &b) noexcept
 Returns a BoxND with CELL based coordinates in all directions that is enclosed by b. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > bdryLo (const BoxND< dim > &b, int dir, int len=1) noexcept
 Returns the edge-centered BoxND (in direction dir) defining the low side of BoxND b. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > bdryHi (const BoxND< dim > &b, int dir, int len=1) noexcept
 Returns the edge-centered BoxND (in direction dir) defining the high side of BoxND b. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > bdryNode (const BoxND< dim > &b, Orientation face, int len=1) noexcept
 Similar to bdryLo and bdryHi except that it operates on the given face of box b. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > adjCellLo (const BoxND< dim > &b, int dir, int len=1) noexcept
Returns the cell centered BoxND of length len adjacent to b on the low end along the coordinate direction dir. The return BoxND is identical to b in the other directions. The return BoxND and b have an empty intersection. NOTE: len >= 1. NOTE: BoxND retval = adjCellLo(b,dir,len) is equivalent to the following set of operations: BoxND retval(b); retval.convert(dir,BoxND::CELL); retval.setrange(dir,retval.smallEnd(dir)-len,len). More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > adjCellHi (const BoxND< dim > &b, int dir, int len=1) noexcept
 Similar to adjCellLo but builds an adjacent BoxND on the high end. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > adjCell (const BoxND< dim > &b, Orientation face, int len=1) noexcept
 Similar to adjCellLo and adjCellHi; operates on given face. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > minBox (const BoxND< dim > &b1, const BoxND< dim > &b2) noexcept
 Modify BoxND to that of the minimum BoxND containing both the original BoxND and the argument. Both BoxNDes must have identical type. More...
 
template<int dim>
std::ostream & operator<< (std::ostream &os, const BoxND< dim > &bx)
 Write an ASCII representation to the ostream. More...
 
template<int dim>
std::istream & operator>> (std::istream &is, BoxND< dim > &bx)
 Read from istream. More...
 
template<int d, int... dims>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE BoxND< detail::get_sum< d, dims... >()> BoxCat (const BoxND< d > &bx, const BoxND< dims > &...boxes) noexcept
 Returns a BoxND obtained by concatenating the input BoxNDs. The dimension of the return value equals the sum of the dimensions of the input BoxNDs. More...
 
template<int d, int... dims>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE GpuTuple< BoxND< d >, BoxND< dims >... > BoxSplit (const BoxND< detail::get_sum< d, dims... >()> &bx) noexcept
 Returns a tuple of BoxNDs obtained by splitting the input BoxND according to the dimensions specified by the template arguments. More...
 
template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE BoxND< new_dim > BoxShrink (const BoxND< old_dim > &bx) noexcept
Returns a new BoxND of dimension new_dim, assigning the first new_dim dimensions of this BoxND to it. More...
 
template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE BoxND< new_dim > BoxExpand (const BoxND< old_dim > &bx) noexcept
Returns a new BoxND of dimension new_dim, copying this BoxND's values into the leading dimensions and assigning (small=0, big=0, typ=CELL) to the remaining dimensions. More...
 
template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE BoxND< new_dim > BoxResize (const BoxND< old_dim > &bx) noexcept
 Returns a new BoxND of size new_dim by either shrinking or expanding this BoxND. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > lbound_iv (BoxND< dim > const &box) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > ubound_iv (BoxND< dim > const &box) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > begin_iv (BoxND< dim > const &box) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > end_iv (BoxND< dim > const &box) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > length_iv (BoxND< dim > const &box) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > max_lbound_iv (BoxND< dim > const &b1, BoxND< dim > const &b2) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > max_lbound_iv (BoxND< dim > const &b1, IntVectND< dim > const &lo) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > min_ubound_iv (BoxND< dim > const &b1, BoxND< dim > const &b2) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > min_ubound_iv (BoxND< dim > const &b1, IntVectND< dim > const &hi) noexcept
 
template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 lbound (BoxND< dim > const &box) noexcept
 
template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 ubound (BoxND< dim > const &box) noexcept
 
template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 begin (BoxND< dim > const &box) noexcept
 
template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 end (BoxND< dim > const &box) noexcept
 
template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 length (BoxND< dim > const &box) noexcept
 
template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 max_lbound (BoxND< dim > const &b1, BoxND< dim > const &b2) noexcept
 
template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 max_lbound (BoxND< dim > const &b1, Dim3 const &lo) noexcept
 
template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 min_ubound (BoxND< dim > const &b1, BoxND< dim > const &b2) noexcept
 
template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 min_ubound (BoxND< dim > const &b1, Dim3 const &hi) noexcept
 
template<int dim>
AMREX_FORCE_INLINE BoxND< dim > getIndexBounds (BoxND< dim > const &b1) noexcept
 
template<int dim>
AMREX_FORCE_INLINE BoxND< dim > getIndexBounds (BoxND< dim > const &b1, BoxND< dim > const &b2) noexcept
 
template<class T , class ... Ts>
AMREX_FORCE_INLINE auto getIndexBounds (T const &b1, T const &b2, Ts const &... b3) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > getCell (BoxND< dim > const *boxes, int nboxes, Long icell) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > makeSlab (BoxND< dim > const &b, int direction, int slab_index) noexcept
 
template<int dim = AMREX_SPACEDIM, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > makeSingleCellBox (int i, int j, int k, IndexTypeND< dim > typ=IndexTypeND< dim >::TheCellType())
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND< dim > makeSingleCellBox (IntVectND< dim > const &vect, IndexTypeND< dim > typ=IndexTypeND< dim >::TheCellType())
 
BoxArray boxComplement (const Box &b1in, const Box &b2)
Make a BoxArray from the complement of b2 in b1in. More...
 
BoxArray complementIn (const Box &b, const BoxArray &ba)
 Make a BoxArray from the complement of BoxArray ba in Box b. More...
 
BoxArray intersect (const BoxArray &ba, const Box &b, int ng=0)
 Make a BoxArray from the intersection of Box b and BoxArray(+ghostcells). More...
 
BoxArray intersect (const BoxArray &ba, const Box &b, const IntVect &ng)
 
BoxArray intersect (const BoxArray &lhs, const BoxArray &rhs)
 Make a BoxArray from the intersection of two BoxArrays. More...
 
BoxList intersect (const BoxArray &ba, const BoxList &bl)
 Make a BoxList from the intersection of BoxArray and BoxList. More...
 
BoxArray convert (const BoxArray &ba, IndexType typ)
 
BoxArray convert (const BoxArray &ba, const IntVect &typ)
 
BoxArray coarsen (const BoxArray &ba, int ratio)
 
BoxArray coarsen (const BoxArray &ba, const IntVect &ratio)
 
BoxArray refine (const BoxArray &ba, int ratio)
 
BoxArray refine (const BoxArray &ba, const IntVect &ratio)
 
BoxList GetBndryCells (const BoxArray &ba, int ngrow)
 Find the ghost cells of a given BoxArray. More...
 
void readBoxArray (BoxArray &ba, std::istream &s, bool b=false)
 Read a BoxArray from a stream. If b is true, read in a special way. More...
 
bool match (const BoxArray &x, const BoxArray &y)
 Note that two BoxArrays that match are not necessarily equal. More...
 
BoxArray decompose (Box const &domain, int nboxes, Array< bool, AMREX_SPACEDIM > const &decomp={AMREX_D_DECL(true, true, true)}, bool no_overlap=false)
 Decompose domain box into BoxArray. More...
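
A sketch of decomposing a domain into one box per MPI rank with no overlap:

    Box domain(IntVect(0), IntVect(127));
    BoxArray ba = decompose(domain, ParallelDescriptor::NProcs(),
                            {AMREX_D_DECL(true,true,true)},   // split in all directions
                            true);                            // no_overlap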
 
std::ostream & operator<< (std::ostream &os, const BoxArray &ba)
 Write a BoxArray to an ostream in ASCII format. More...
 
std::ostream & operator<< (std::ostream &os, const BoxArray::RefID &id)
 
void intersect (BoxDomain &dest, const BoxDomain &fin, const Box &b)
 Compute the intersection of BoxDomain fin with Box b and place the result into BoxDomain dest. More...
 
void refine (BoxDomain &dest, const BoxDomain &fin, int ratio)
 Refine all Boxes in the domain by the refinement ratio and return the result in dest. More...
 
void accrete (BoxDomain &dest, const BoxDomain &fin, int sz=1)
 Grow each Box in BoxDomain fin by size sz and place the result into BoxDomain dest. More...
 
void coarsen (BoxDomain &dest, const BoxDomain &fin, int ratio)
 Coarsen all Boxes in the domain by the refinement ratio. The result is placed into a new BoxDomain. More...
 
BoxDomain complementIn (const Box &b, const BoxDomain &bl)
 Returns the complement of BoxDomain bl in Box b. More...
 
std::ostream & operator<< (std::ostream &os, const BoxDomain &bd)
Output a BoxDomain to an ostream in ASCII format. More...
 
BoxList complementIn (const Box &b, const BoxList &bl)
 Returns a BoxList defining the complement of BoxList bl in Box b. More...
 
BoxList boxDiff (const Box &b1in, const Box &b2)
Returns a BoxList defining the complement of b2 in b1in. More...
 
void boxDiff (BoxList &bl_diff, const Box &b1in, const Box &b2)
 
BoxList refine (const BoxList &bl, int ratio)
 Returns a new BoxList in which each Box is refined by the given ratio. More...
 
BoxList coarsen (const BoxList &bl, int ratio)
 Returns a new BoxList in which each Box is coarsened by the given ratio. More...
 
BoxList intersect (const BoxList &bl, const Box &b)
 Returns a BoxList defining the intersection of bl with b. More...
 
BoxList accrete (const BoxList &bl, int sz)
 Returns a new BoxList in which each Box is grown by the given size. More...
 
BoxList removeOverlap (const BoxList &bl)
 Return BoxList which covers the same area but has no overlapping boxes. More...
 
std::ostream & operator<< (std::ostream &os, const BoxList &blist)
 Output a BoxList to an ostream in ASCII format. More...
 
std::ostream & operator<< (std::ostream &os, const CArena &arena)
 
template<auto I, auto N, class F >
AMREX_GPU_HOST_DEVICE constexpr AMREX_INLINE void constexpr_for (F const &f)
 
std::ostream & operator<< (std::ostream &os, const CoordSys &c)
 
std::istream & operator>> (std::istream &is, CoordSys &c)
 
AMREX_GPU_HOST_DEVICE void amrex_setvol (Box const &bx, Array4< Real > const &vol, GpuArray< Real, 1 > const &offset, GpuArray< Real, 1 > const &dx, const int coord) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_setarea (Box const &bx, Array4< Real > const &area, GpuArray< Real, 1 > const &offset, GpuArray< Real, 1 > const &dx, const int, const int coord) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_setdloga (Box const &bx, Array4< Real > const &dloga, GpuArray< Real, 1 > const &offset, GpuArray< Real, 1 > const &dx, const int, const int coord) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_setvol (Box const &bx, Array4< Real > const &vol, GpuArray< Real, 2 > const &offset, GpuArray< Real, 2 > const &dx, const int coord) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_setarea (Box const &bx, Array4< Real > const &area, GpuArray< Real, 2 > const &offset, GpuArray< Real, 2 > const &dx, const int dir, const int coord) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_setdloga (Box const &bx, Array4< Real > const &dloga, GpuArray< Real, 2 > const &offset, GpuArray< Real, 2 > const &dx, const int dir, const int coord) noexcept
 
template<class L , class... Fs, typename... CTOs>
void AnyCTO ([[maybe_unused]] TypeList< CTOs... > list_of_compile_time_options, std::array< int, sizeof...(CTOs)> const &runtime_options, L &&l, Fs &&...cto_functs)
 Compile time optimization of kernels with run time options. More...
 
template<int MT, typename T , class F , typename... CTOs>
std::enable_if_t< std::is_integral_v< T > > ParallelFor (TypeList< CTOs... > ctos, std::array< int, sizeof...(CTOs)> const &runtime_options, T N, F &&f)
 
template<int MT, class F , int dim, typename... CTOs>
void ParallelFor (TypeList< CTOs... > ctos, std::array< int, sizeof...(CTOs)> const &runtime_options, BoxND< dim > const &box, F &&f)
 
template<int MT, typename T , class F , int dim, typename... CTOs>
std::enable_if_t< std::is_integral_v< T > > ParallelFor (TypeList< CTOs... > ctos, std::array< int, sizeof...(CTOs)> const &runtime_options, BoxND< dim > const &box, T ncomp, F &&f)
 
template<typename T , class F , typename... CTOs>
std::enable_if_t< std::is_integral_v< T > > ParallelFor (TypeList< CTOs... > ctos, std::array< int, sizeof...(CTOs)> const &option, T N, F &&f)
 ParallelFor with compile time optimization of kernels with run time options. More...
 
template<class F , int dim, typename... CTOs>
void ParallelFor (TypeList< CTOs... > ctos, std::array< int, sizeof...(CTOs)> const &option, BoxND< dim > const &box, F &&f)
 ParallelFor with compile time optimization of kernels with run time options. More...
 
template<typename T , class F , int dim, typename... CTOs>
std::enable_if_t< std::is_integral_v< T > > ParallelFor (TypeList< CTOs... > ctos, std::array< int, sizeof...(CTOs)> const &option, BoxND< dim > const &box, T ncomp, F &&f)
 ParallelFor with compile time optimization of kernels with run time options. More...
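
A sketch of the compile-time-option pattern (the CompileTimeOptions spelling and the integral-constant control argument follow the AMReX GPU documentation; box is assumed):

    int use_limiter = 1;   // runtime choice restricted to the listed values {0,1}
    ParallelFor(TypeList<CompileTimeOptions<0,1>>{}, {use_limiter}, box,
        [=] AMREX_GPU_DEVICE (int i, int j, int k, auto limiter)
        {
            if constexpr (limiter.value == 1) {
                // limited update: compiled as its own kernel specialization
            } else {
                // unlimited update
            }
        });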
 
std::string demangle (const char *name)
 Demangle C++ name. More...
 
template<typename T , std::enable_if_t< std::is_same_v< T, Dim3 >||std::is_same_v< T, XDim3 >> * = nullptr>
std::ostream & operator<< (std::ostream &os, const T &d)
 
std::ostream & operator<< (std::ostream &os, const DistributionMapping &pmap)
 Our output operator. More...
 
std::ostream & operator<< (std::ostream &os, const DistributionMapping::RefID &id)
 
DistributionMapping MakeSimilarDM (const BoxArray &ba, const MultiFab &mf, const IntVect &ng)
 Function that creates a DistributionMapping "similar" to that of a MultiFab. More...
 
DistributionMapping MakeSimilarDM (const BoxArray &ba, const BoxArray &src_ba, const DistributionMapping &src_dm, const IntVect &ng)
Function that creates a DistributionMapping "similar" to that of the given source BoxArray and DistributionMapping. More...
 
template<typename T , typename ET = amrex_enum_traits<T>, std::enable_if_t< ET::value, int > = 0>
std::vector< std::pair< std::string, T > > getEnumNameValuePairs ()
 
template<typename T , typename ET = amrex_enum_traits<T>, std::enable_if_t< ET::value, int > = 0>
T getEnum (std::string_view const &s)
 
template<typename T , typename ET = amrex_enum_traits<T>, std::enable_if_t< ET::value, int > = 0>
T getEnumCaseInsensitive (std::string_view const &s)
 
template<typename T , typename ET = amrex_enum_traits<T>, std::enable_if_t< ET::value, int > = 0>
std::string getEnumNameString (T const &v)
 
template<typename T , typename ET = amrex_enum_traits<T>, std::enable_if_t< ET::value, int > = 0>
std::vector< std::string > getEnumNameStrings ()
 
template<typename T , typename ET = amrex_enum_traits<T>, std::enable_if_t< ET::value, int > = 0>
std::string getEnumClassName ()
 
template<typename T , std::enable_if_t<!IsBaseFab< T >::value, int > = 0>
Long nBytesOwned (T const &) noexcept
 
template<typename T >
Long nBytesOwned (BaseFab< T > const &fab) noexcept
 
template<class DFAB , class SFAB , std::enable_if_t< std::conjunction_v< IsBaseFab< DFAB >, IsBaseFab< SFAB >, std::is_convertible< typename SFAB::value_type, typename DFAB::value_type >>, int > BAR = 0>
void Copy (FabArray< DFAB > &dst, FabArray< SFAB > const &src, int srccomp, int dstcomp, int numcomp, int nghost)
 
template<class DFAB , class SFAB , std::enable_if_t< std::conjunction_v< IsBaseFab< DFAB >, IsBaseFab< SFAB >, std::is_convertible< typename SFAB::value_type, typename DFAB::value_type >>, int > BAR = 0>
void Copy (FabArray< DFAB > &dst, FabArray< SFAB > const &src, int srccomp, int dstcomp, int numcomp, const IntVect &nghost)
 
template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void Add (FabArray< FAB > &dst, FabArray< FAB > const &src, int srccomp, int dstcomp, int numcomp, int nghost)
 
template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void Add (FabArray< FAB > &dst, FabArray< FAB > const &src, int srccomp, int dstcomp, int numcomp, const IntVect &nghost)
 
int nComp (FabArrayBase const &fa)
 
IntVect nGrowVect (FabArrayBase const &fa)
 
BoxArray const & boxArray (FabArrayBase const &fa)
 
DistributionMapping const & DistributionMap (FabArrayBase const &fa)
 
std::ostream & operator<< (std::ostream &os, const FabArrayBase::BDKey &id)
 
template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
FAB::value_type ReduceSum (FabArray< FAB > const &fa, int nghost, F &&f)
 
template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
FAB::value_type ReduceSum (FabArray< FAB > const &fa, IntVect const &nghost, F &&f)
 
template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type ReduceSum (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, int nghost, F &&f)
 
template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type ReduceSum (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, IntVect const &nghost, F &&f)
 
template<class FAB1 , class FAB2 , class FAB3 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type ReduceSum (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, FabArray< FAB3 > const &fa3, int nghost, F &&f)
 
template<class FAB1 , class FAB2 , class FAB3 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type ReduceSum (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, FabArray< FAB3 > const &fa3, IntVect const &nghost, F &&f)
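
These reductions are local to the calling rank; a sketch of a sum over the valid cells of a MultiFab mf, followed by the usual MPI reduction (the (Box, Array4) functor signature follows common AMReX usage and is assumed here):

    Real sm = ReduceSum(mf, 0,
        [=] AMREX_GPU_HOST_DEVICE (Box const& bx, Array4<Real const> const& a) -> Real
        {
            Real t = 0.;
            Loop(bx, [&] (int i, int j, int k) { t += a(i,j,k); });
            return t;
        });
    ParallelDescriptor::ReduceRealSum(sm);   // combine per-rank partial sums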
 
template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
FAB::value_type ReduceMin (FabArray< FAB > const &fa, int nghost, F &&f)
 
template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
FAB::value_type ReduceMin (FabArray< FAB > const &fa, IntVect const &nghost, F &&f)
 
template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type ReduceMin (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, int nghost, F &&f)
 
template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type ReduceMin (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, IntVect const &nghost, F &&f)
 
template<class FAB1 , class FAB2 , class FAB3 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type ReduceMin (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, FabArray< FAB3 > const &fa3, int nghost, F &&f)
 
template<class FAB1 , class FAB2 , class FAB3 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type ReduceMin (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, FabArray< FAB3 > const &fa3, IntVect const &nghost, F &&f)
 
template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
FAB::value_type ReduceMax (FabArray< FAB > const &fa, int nghost, F &&f)
 
template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
FAB::value_type ReduceMax (FabArray< FAB > const &fa, IntVect const &nghost, F &&f)
 
template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type ReduceMax (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, int nghost, F &&f)
 
template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type ReduceMax (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, IntVect const &nghost, F &&f)
 
template<class FAB1 , class FAB2 , class FAB3 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type ReduceMax (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, FabArray< FAB3 > const &fa3, int nghost, F &&f)
 
template<class FAB1 , class FAB2 , class FAB3 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type ReduceMax (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, FabArray< FAB3 > const &fa3, IntVect const &nghost, F &&f)
 
template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
bool ReduceLogicalAnd (FabArray< FAB > const &fa, int nghost, F &&f)
 
template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
bool ReduceLogicalAnd (FabArray< FAB > const &fa, IntVect const &nghost, F &&f)
 
template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
bool ReduceLogicalAnd (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, int nghost, F &&f)
 
template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
bool ReduceLogicalAnd (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, IntVect const &nghost, F &&f)
 
template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
bool ReduceLogicalOr (FabArray< FAB > const &fa, int nghost, F &&f)
 
template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
bool ReduceLogicalOr (FabArray< FAB > const &fa, IntVect const &nghost, F &&f)
 
template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
bool ReduceLogicalOr (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, int nghost, F &&f)
 
template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
bool ReduceLogicalOr (FabArray< FAB1 > const &fa1, FabArray< FAB2 > const &fa2, IntVect const &nghost, F &&f)
 
template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void printCell (FabArray< FAB > const &mf, const IntVect &cell, int comp=-1, const IntVect &ng=IntVect::TheZeroVector())
 
template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void Subtract (FabArray< FAB > &dst, FabArray< FAB > const &src, int srccomp, int dstcomp, int numcomp, int nghost)
 
template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void Subtract (FabArray< FAB > &dst, FabArray< FAB > const &src, int srccomp, int dstcomp, int numcomp, const IntVect &nghost)
 
template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void Multiply (FabArray< FAB > &dst, FabArray< FAB > const &src, int srccomp, int dstcomp, int numcomp, int nghost)
 
template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void Multiply (FabArray< FAB > &dst, FabArray< FAB > const &src, int srccomp, int dstcomp, int numcomp, const IntVect &nghost)
 
template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void Divide (FabArray< FAB > &dst, FabArray< FAB > const &src, int srccomp, int dstcomp, int numcomp, int nghost)
 
template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void Divide (FabArray< FAB > &dst, FabArray< FAB > const &src, int srccomp, int dstcomp, int numcomp, const IntVect &nghost)
 
template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void Abs (FabArray< FAB > &fa, int icomp, int numcomp, int nghost)
 
template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void Abs (FabArray< FAB > &fa, int icomp, int numcomp, const IntVect &nghost)
 
template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void prefetchToHost (FabArray< FAB > const &fa, const bool synchronous=true)
 
template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void prefetchToDevice (FabArray< FAB > const &fa, const bool synchronous=true)
 
template<class FAB , class IFAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value && IsBaseFab<IFAB>::value>>
void OverrideSync (FabArray< FAB > &fa, FabArray< IFAB > const &msk, const Periodicity &period)
 
template<class FAB , class IFAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value && IsBaseFab<IFAB>::value>>
void OverrideSync_nowait (FabArray< FAB > &fa, FabArray< IFAB > const &msk, const Periodicity &period)
 
template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void OverrideSync_finish (FabArray< FAB > &fa)
 
template<class FAB , class foo = std::enable_if_t<IsBaseFab<FAB>::value>>
void dtoh_memcpy (FabArray< FAB > &dst, FabArray< FAB > const &src, int scomp, int dcomp, int ncomp)
 
template<class FAB , class foo = std::enable_if_t<IsBaseFab<FAB>::value>>
void dtoh_memcpy (FabArray< FAB > &dst, FabArray< FAB > const &src)
 
template<class FAB , class foo = std::enable_if_t<IsBaseFab<FAB>::value>>
void htod_memcpy (FabArray< FAB > &dst, FabArray< FAB > const &src, int scomp, int dcomp, int ncomp)
 
template<class FAB , class foo = std::enable_if_t<IsBaseFab<FAB>::value>>
void htod_memcpy (FabArray< FAB > &dst, FabArray< FAB > const &src)
 
template<class FAB , class foo = std::enable_if_t<IsBaseFab<FAB>::value>>
IntVect indexFromValue (FabArray< FAB > const &mf, int comp, IntVect const &nghost, typename FAB::value_type value)
 
template<typename FAB , std::enable_if_t< IsBaseFab< FAB >::value, int > FOO = 0>
FAB::value_type Dot (FabArray< FAB > const &x, int xcomp, FabArray< FAB > const &y, int ycomp, int ncomp, IntVect const &nghost, bool local=false)
 Compute dot products of two FabArrays. More...
 
template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void setVal (MF &dst, typename MF::value_type val)
 dst = val More...
 
template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void setBndry (MF &dst, typename MF::value_type val, int scomp, int ncomp)
 dst = val in ghost cells. More...
 
template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void Scale (MF &dst, typename MF::value_type val, int scomp, int ncomp, int nghost)
 dst *= val More...
 
template<class DMF , class SMF , std::enable_if_t< IsMultiFabLike_v< DMF > &&IsMultiFabLike_v< SMF >, int > = 0>
void LocalCopy (DMF &dst, SMF const &src, int scomp, int dcomp, int ncomp, IntVect const &nghost)
 dst = src More...
 
template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void LocalAdd (MF &dst, MF const &src, int scomp, int dcomp, int ncomp, IntVect const &nghost)
 dst += src More...
 
template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void Saxpy (MF &dst, typename MF::value_type a, MF const &src, int scomp, int dcomp, int ncomp, IntVect const &nghost)
 dst += a * src More...
 
template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void Xpay (MF &dst, typename MF::value_type a, MF const &src, int scomp, int dcomp, int ncomp, IntVect const &nghost)
 dst = src + a * dst More...
 
template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void LinComb (MF &dst, typename MF::value_type a, MF const &src_a, int acomp, typename MF::value_type b, MF const &src_b, int bcomp, int dcomp, int ncomp, IntVect const &nghost)
 dst = a*src_a + b*src_b More...
 
template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void ParallelCopy (MF &dst, MF const &src, int scomp, int dcomp, int ncomp, IntVect const &ng_src=IntVect(0), IntVect const &ng_dst=IntVect(0), Periodicity const &period=Periodicity::NonPeriodic())
 dst = src w/ MPI communication More...
 
template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
MF::value_type norminf (MF const &mf, int scomp, int ncomp, IntVect const &nghost, bool local=false)
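
A sketch composing these generic operations (x, y, u, v, d are assumed MultiFabs on the same BoxArray with ncomp components):

    Saxpy(y, Real(2.0), x, 0, 0, ncomp, IntVect(0));                     // y += 2*x, valid cells only
    LinComb(d, Real(0.5), u, 0, Real(0.5), v, 0, 0, ncomp, IntVect(0));  // d = (u+v)/2
    auto nrm = norminf(d, 0, ncomp, IntVect(0));                         // max-norm of the result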
 
template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void setVal (Array< MF, N > &dst, typename MF::value_type val)
 dst = val More...
 
template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void setBndry (Array< MF, N > &dst, typename MF::value_type val, int scomp, int ncomp)
 dst = val in ghost cells. More...
 
template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void Scale (Array< MF, N > &dst, typename MF::value_type val, int scomp, int ncomp, int nghost)
 dst *= val More...
 
template<class DMF , class SMF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< DMF > &&IsMultiFabLike_v< SMF >, int > = 0>
void LocalCopy (Array< DMF, N > &dst, Array< SMF, N > const &src, int scomp, int dcomp, int ncomp, IntVect const &nghost)
 dst = src More...
 
template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void LocalAdd (Array< MF, N > &dst, Array< MF, N > const &src, int scomp, int dcomp, int ncomp, IntVect const &nghost)
 dst += src More...
 
template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void Saxpy (Array< MF, N > &dst, typename MF::value_type a, Array< MF, N > const &src, int scomp, int dcomp, int ncomp, IntVect const &nghost)
 dst += a * src More...
 
template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void Xpay (Array< MF, N > &dst, typename MF::value_type a, Array< MF, N > const &src, int scomp, int dcomp, int ncomp, IntVect const &nghost)
 dst = src + a * dst More...
 
template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void LinComb (Array< MF, N > &dst, typename MF::value_type a, Array< MF, N > const &src_a, int acomp, typename MF::value_type b, Array< MF, N > const &src_b, int bcomp, int dcomp, int ncomp, IntVect const &nghost)
 dst = a*src_a + b*src_b More...
 
template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void ParallelCopy (Array< MF, N > &dst, Array< MF, N > const &src, int scomp, int dcomp, int ncomp, IntVect const &ng_src=IntVect(0), IntVect const &ng_dst=IntVect(0), Periodicity const &period=Periodicity::NonPeriodic())
 dst = src w/ MPI communication More...
 
template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
MF::value_type norminf (Array< MF, N > const &mf, int scomp, int ncomp, IntVect const &nghost, bool local=false)
 
template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF > &&(N > 0), int > = 0>
int nComp (Array< MF, N > const &mf)
 
template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF > &&(N > 0), int > = 0>
IntVect nGrowVect (Array< MF, N > const &mf)
 
template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF > &&(N > 0), int > = 0>
BoxArray const & boxArray (Array< MF, N > const &mf)
 
template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF > &&(N > 0), int > = 0>
DistributionMapping const & DistributionMap (Array< MF, N > const &mf)
 
std::ostream & operator<< (std::ostream &os, const IntDescriptor &id)
 
std::istream & operator>> (std::istream &is, IntDescriptor &id)
 
void ONES_COMP_NEG (Long &n, int nb, Long incr)
 
int _pd_get_bit (char const *base, int offs, int nby, const int *ord)
 
void _pd_insert_field (Long in_long, int nb, char *out, int offs, int l_order, int l_bytes)
 
void _pd_set_bit (char *base, int offs)
 
std::ostream & operator<< (std::ostream &os, const RealDescriptor &rd)
 
std::istream & operator>> (std::istream &is, RealDescriptor &rd)
 
std::ostream & operator<< (std::ostream &os, const FArrayBox &f)
 
std::istream & operator>> (std::istream &is, FArrayBox &f)
 
void fab_filcc (Box const &bx, Array4< Real > const &qn, int ncomp, Box const &domain, Real const *, Real const *, BCRec const *bcn)
 
void fab_filfc (Box const &bx, Array4< Real > const &qn, int ncomp, Box const &domain, Real const *, Real const *, BCRec const *bcn)
 
void fab_filnd (Box const &bx, Array4< Real > const &qn, int ncomp, Box const &domain, Real const *, Real const *, BCRec const *bcn)
 
std::ostream & operator<< (std::ostream &, const Geometry &)
 Nice ASCII output. More...
 
std::istream & operator>> (std::istream &, Geometry &)
 Nice ASCII input. More...
 
Geometry coarsen (Geometry const &fine, IntVect const &rr)
 
Geometry coarsen (Geometry const &fine, int rr)
 
Geometry refine (Geometry const &crse, IntVect const &rr)
 
Geometry refine (Geometry const &crse, int rr)
 
const Geometry & DefaultGeometry ()
 
template<typename A1 , typename A2 , std::enable_if_t< IsArenaAllocator< A1 >::value &&IsArenaAllocator< A2 >::value, int > = 0>
bool operator== (A1 const &a1, A2 const &a2)
 
template<typename A1 , typename A2 , std::enable_if_t< IsArenaAllocator< A1 >::value &&IsArenaAllocator< A2 >::value, int > = 0>
bool operator!= (A1 const &a1, A2 const &a2)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE T norm (const GpuComplex< T > &a_z) noexcept
 Return the norm (magnitude squared) of a complex number. More...
 
template<typename U >
std::ostream & operator<< (std::ostream &out, const GpuComplex< U > &c)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > operator+ (const GpuComplex< T > &a_x)
 Identity operation on a complex number. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > operator- (const GpuComplex< T > &a_x)
 Negate a complex number. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > operator- (const GpuComplex< T > &a_x, const GpuComplex< T > &a_y) noexcept
 Subtract two complex numbers. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > operator- (const GpuComplex< T > &a_x, const T &a_y) noexcept
 Subtract a real number from a complex one. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > operator- (const T &a_x, const GpuComplex< T > &a_y) noexcept
 Subtract a complex number from a real one. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > operator+ (const GpuComplex< T > &a_x, const GpuComplex< T > &a_y) noexcept
 Add two complex numbers. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > operator+ (const GpuComplex< T > &a_x, const T &a_y) noexcept
 Add a real number to a complex one. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > operator+ (const T &a_x, const GpuComplex< T > &a_y) noexcept
 Add a complex number to a real one. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > operator* (const GpuComplex< T > &a_x, const GpuComplex< T > &a_y) noexcept
 Multiply two complex numbers. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > operator* (const GpuComplex< T > &a_x, const T &a_y) noexcept
 Multiply a complex number by a real one. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > operator* (const T &a_x, const GpuComplex< T > &a_y) noexcept
 Multiply a real number by a complex one. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > operator/ (const GpuComplex< T > &a_x, const GpuComplex< T > &a_y) noexcept
 Divide a complex number by another one. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > operator/ (const GpuComplex< T > &a_x, const T &a_y) noexcept
 Divide a complex number by a real. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > operator/ (const T &a_x, const GpuComplex< T > &a_y) noexcept
 Divide a real number by a complex one. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > polar (const T &a_r, const T &a_theta) noexcept
 Return a complex number given its polar representation. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > exp (const GpuComplex< T > &a_z) noexcept
Complex exponential function. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE T abs (const GpuComplex< T > &a_z) noexcept
 Return the absolute value of a complex number. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > sqrt (const GpuComplex< T > &a_z) noexcept
 Return the square root of a complex number. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE T arg (const GpuComplex< T > &a_z) noexcept
 Return the angle of a complex number's polar representation. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > log (const GpuComplex< T > &a_z) noexcept
 Complex natural logarithm function. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > pow (const GpuComplex< T > &a_z, const T &a_y) noexcept
 Raise a complex number to a (real) power. More...
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex< T > pow (const GpuComplex< T > &a_z, int a_n) noexcept
 Raise a complex number to an integer power. More...
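 
These overloads make GpuComplex usable much like std::complex in both host and device code. A minimal sketch (the function name gpucomplex_demo is hypothetical; the values in comments follow from the definitions above):

    #include <AMReX.H>
    #include <AMReX_GpuComplex.H>

    void gpucomplex_demo ()
    {
        amrex::GpuComplex<amrex::Real> a(3.0, 4.0);   // 3 + 4i
        amrex::GpuComplex<amrex::Real> b(1.0, -1.0);  // 1 - i

        auto s = a + b;                          // 4 + 3i
        auto p = a * b;                          // 7 + i
        amrex::Real m = amrex::abs(a);           // 5   (magnitude)
        amrex::Real n = amrex::norm(a);          // 25  (magnitude squared)
        auto z = amrex::polar(amrex::Real(2.0), amrex::Real(0.0)); // 2 + 0i
        auto w = amrex::pow(b, 2);               // (1-i)^2 = -2i
        amrex::ignore_unused(s, p, m, n, z, w);
    }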
 
gpuError_t gpuGetLastError ()
 
const char * gpuGetErrorString (gpuError_t error)
 
template<class L , class... Lambdas>
AMREX_GPU_GLOBAL void launch_global (L f0, Lambdas... fs)
 
template<class L >
AMREX_GPU_DEVICE void call_device (L &&f0) noexcept
 
template<class L , class... Lambdas>
AMREX_GPU_DEVICE void call_device (L &&f0, Lambdas &&... fs) noexcept
 
template<class L >
void launch_host (L &&f0) noexcept
 
template<class L , class... Lambdas>
void launch_host (L &&f0, Lambdas &&... fs) noexcept
 
template<typename T , typename L >
void launch (T const &n, L &&f) noexcept
 
template<int MT, typename T , typename L >
void launch (T const &n, L &&f) noexcept
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
AMREX_ATTRIBUTE_FLATTEN_FOR void For (T n, L const &f) noexcept
 
template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void For (T n, L &&f) noexcept
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void For (Gpu::KernelInfo const &, T n, L &&f) noexcept
 
template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void For (Gpu::KernelInfo const &, T n, L &&f) noexcept
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
AMREX_ATTRIBUTE_FLATTEN_FOR void ParallelFor (T n, L const &f) noexcept
 
template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void ParallelFor (T n, L &&f) noexcept
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void ParallelFor (Gpu::KernelInfo const &, T n, L &&f) noexcept
 
template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void ParallelFor (Gpu::KernelInfo const &, T n, L &&f) noexcept
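 
The 1D overloads map the integer range [0, n) onto GPU threads, or onto a plain loop in CPU builds; the MT variants additionally fix the maximum threads per block at compile time (e.g. ParallelFor<256>). A minimal sketch, assuming p points to device-accessible memory (the helper name scale_array is illustrative):

    #include <AMReX.H>
    #include <AMReX_Gpu.H>

    void scale_array (amrex::Real* p, int n)
    {
        amrex::ParallelFor(n, [=] AMREX_GPU_DEVICE (int i) noexcept
        {
            p[i] *= 2.0;
        });
        amrex::Gpu::streamSynchronize(); // kernel launches are asynchronous
    }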
 
template<typename L , int dim>
AMREX_ATTRIBUTE_FLATTEN_FOR void For (BoxND< dim > const &box, L const &f) noexcept
 
template<int MT, typename L , int dim>
void For (BoxND< dim > const &box, L &&f) noexcept
 
template<typename L , int dim>
void For (Gpu::KernelInfo const &, BoxND< dim > const &box, L &&f) noexcept
 
template<int MT, typename L , int dim>
void For (Gpu::KernelInfo const &, BoxND< dim > const &box, L &&f) noexcept
 
template<typename L , int dim>
AMREX_ATTRIBUTE_FLATTEN_FOR void ParallelFor (BoxND< dim > const &box, L const &f) noexcept
 
template<int MT, typename L , int dim>
void ParallelFor (BoxND< dim > const &box, L &&f) noexcept
 
template<typename L , int dim>
void ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box, L &&f) noexcept
 
template<int MT, typename L , int dim>
void ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box, L &&f) noexcept
 
template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
AMREX_ATTRIBUTE_FLATTEN_FOR void For (BoxND< dim > const &box, T ncomp, L const &f) noexcept
 
template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void For (BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void For (Gpu::KernelInfo const &, BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void For (Gpu::KernelInfo const &, BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
AMREX_ATTRIBUTE_FLATTEN_FOR void ParallelFor (BoxND< dim > const &box, T ncomp, L const &f) noexcept
 
template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void ParallelFor (BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box, T ncomp, L &&f) noexcept
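 
The BoxND overloads run the lambda once per cell of the box (and per component when ncomp is given); the usual pattern pairs them with an MFIter. A sketch assuming mf is a valid MultiFab (double_all is an illustrative name):

    #include <AMReX_MultiFab.H>

    void double_all (amrex::MultiFab& mf)
    {
        int const ncomp = mf.nComp();
        for (amrex::MFIter mfi(mf, amrex::TilingIfNotGPU()); mfi.isValid(); ++mfi)
        {
            const amrex::Box& bx = mfi.tilebox();
            amrex::Array4<amrex::Real> const& a = mf.array(mfi);
            amrex::ParallelFor(bx, ncomp,
            [=] AMREX_GPU_DEVICE (int i, int j, int k, int n) noexcept
            {
                a(i,j,k,n) *= 2.0;
            });
        }
    }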
 
template<typename L1 , typename L2 , int dim>
void For (BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<int MT, typename L1 , typename L2 , int dim>
void For (BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<typename L1 , typename L2 , int dim>
void For (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<int MT, typename L1 , typename L2 , int dim>
void For (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<typename L1 , typename L2 , typename L3 , int dim>
void For (BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void For (BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<typename L1 , typename L2 , typename L3 , int dim>
void For (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void For (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void For (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void For (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void For (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void For (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void For (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void For (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void For (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void For (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<typename L1 , typename L2 , int dim>
void ParallelFor (BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<int MT, typename L1 , typename L2 , int dim>
void ParallelFor (BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<typename L1 , typename L2 , int dim>
void ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<int MT, typename L1 , typename L2 , int dim>
void ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<typename L1 , typename L2 , typename L3 , int dim>
void ParallelFor (BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void ParallelFor (BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<typename L1 , typename L2 , typename L3 , int dim>
void ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void ParallelFor (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void ParallelFor (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void ParallelFor (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void ParallelFor (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
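 
The two- and three-box overloads fuse independent loops into a single kernel launch, which can cut launch overhead when the boxes are small. A sketch with two face boxes (zero_faces, xbx, ybx, u, and v are illustrative names):

    #include <AMReX_Box.H>
    #include <AMReX_Array4.H>
    #include <AMReX_GpuLaunch.H>

    void zero_faces (amrex::Box const& xbx, amrex::Box const& ybx,
                     amrex::Array4<amrex::Real> const& u,
                     amrex::Array4<amrex::Real> const& v)
    {
        // One fused launch: f1 covers xbx, f2 covers ybx.
        amrex::ParallelFor(xbx, ybx,
        [=] AMREX_GPU_DEVICE (int i, int j, int k) noexcept { u(i,j,k) = 0.0; },
        [=] AMREX_GPU_DEVICE (int i, int j, int k) noexcept { v(i,j,k) = 0.0; });
    }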
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceParallelFor (T n, L &&f) noexcept
 
template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceParallelFor (T n, L &&f) noexcept
 
template<typename L , int dim>
void HostDeviceParallelFor (BoxND< dim > const &box, L &&f) noexcept
 
template<int MT, typename L , int dim>
void HostDeviceParallelFor (BoxND< dim > const &box, L &&f) noexcept
 
template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceParallelFor (BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceParallelFor (BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<typename L1 , typename L2 , int dim>
void HostDeviceParallelFor (BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<int MT, typename L1 , typename L2 , int dim>
void HostDeviceParallelFor (BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<typename L1 , typename L2 , typename L3 , int dim>
void HostDeviceParallelFor (BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void HostDeviceParallelFor (BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void HostDeviceParallelFor (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void HostDeviceParallelFor (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void HostDeviceParallelFor (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void HostDeviceParallelFor (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceFor (T n, L &&f) noexcept
 
template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceFor (T n, L &&f) noexcept
 
template<typename L , int dim>
void HostDeviceFor (BoxND< dim > const &box, L &&f) noexcept
 
template<int MT, typename L , int dim>
void HostDeviceFor (BoxND< dim > const &box, L &&f) noexcept
 
template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceFor (BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<int MT, typename T , int dim, typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceFor (BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<typename L1 , typename L2 , int dim>
void HostDeviceFor (BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<int MT, typename L1 , typename L2 , int dim>
void HostDeviceFor (BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<typename L1 , typename L2 , typename L3 , int dim>
void HostDeviceFor (BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void HostDeviceFor (BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void HostDeviceFor (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void HostDeviceFor (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void HostDeviceFor (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void HostDeviceFor (BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceParallelFor (Gpu::KernelInfo const &, T n, L &&f) noexcept
 
template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceParallelFor (Gpu::KernelInfo const &, T n, L &&f) noexcept
 
template<typename L , int dim>
void HostDeviceParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box, L &&f) noexcept
 
template<int MT, typename L , int dim>
void HostDeviceParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box, L &&f) noexcept
 
template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<typename L1 , typename L2 , int dim>
void HostDeviceParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<int MT, typename L1 , typename L2 , int dim>
void HostDeviceParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<typename L1 , typename L2 , typename L3 , int dim>
void HostDeviceParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void HostDeviceParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void HostDeviceParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void HostDeviceParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void HostDeviceParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void HostDeviceParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceFor (Gpu::KernelInfo const &, T n, L &&f) noexcept
 
template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceFor (Gpu::KernelInfo const &, T n, L &&f) noexcept
 
template<typename L , int dim>
void HostDeviceFor (Gpu::KernelInfo const &, BoxND< dim > const &box, L &&f) noexcept
 
template<int MT, typename L , int dim>
void HostDeviceFor (Gpu::KernelInfo const &, BoxND< dim > const &box, L &&f) noexcept
 
template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceFor (Gpu::KernelInfo const &, BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void HostDeviceFor (Gpu::KernelInfo const &, BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<typename L1 , typename L2 , int dim>
void HostDeviceFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<int MT, typename L1 , typename L2 , int dim>
void HostDeviceFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<typename L1 , typename L2 , typename L3 , int dim>
void HostDeviceFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void HostDeviceFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void HostDeviceFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void HostDeviceFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void HostDeviceFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void HostDeviceFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
AMREX_ATTRIBUTE_FLATTEN_FOR void ParallelForRNG (T n, L const &f) noexcept
 
template<typename L , int dim>
AMREX_ATTRIBUTE_FLATTEN_FOR void ParallelForRNG (BoxND< dim > const &box, L const &f) noexcept
 
template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
AMREX_ATTRIBUTE_FLATTEN_FOR void ParallelForRNG (BoxND< dim > const &box, T ncomp, L const &f) noexcept
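 
The RNG variants hand the lambda a per-thread RandomEngine so that random number generation is safe inside device code. A sketch assuming a is an Array4 covering bx (randomize is an illustrative name):

    #include <AMReX_Random.H>
    #include <AMReX_GpuLaunch.H>

    void randomize (amrex::Box const& bx, amrex::Array4<amrex::Real> const& a)
    {
        amrex::ParallelForRNG(bx,
        [=] AMREX_GPU_DEVICE (int i, int j, int k, amrex::RandomEngine const& engine) noexcept
        {
            a(i,j,k) = amrex::Random(engine); // uniform sample
        });
    }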
 
template<typename L >
void single_task (L &&f) noexcept
 
template<typename L >
void single_task (gpuStream_t stream, L const &f) noexcept
 
template<int MT, typename L >
void launch (int nblocks, std::size_t shared_mem_bytes, gpuStream_t stream, L const &f) noexcept
 
template<int MT, typename L >
void launch (int nblocks, gpuStream_t stream, L const &f) noexcept
 
template<typename L >
void launch (int nblocks, int nthreads_per_block, std::size_t shared_mem_bytes, gpuStream_t stream, L const &f) noexcept
 
template<typename L >
void launch (int nblocks, int nthreads_per_block, gpuStream_t stream, L &&f) noexcept
 
template<int MT, typename T , typename L , std::enable_if_t< std::is_integral_v< T >, int > FOO = 0>
void launch (T const &n, L const &f) noexcept
 
template<int MT, int dim, typename L >
void launch (BoxND< dim > const &box, L const &f) noexcept
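 
Unlike ParallelFor, launch hands each GPU thread its own sub-box of the launch box and leaves the per-cell iteration to the caller; the sketch below follows the thread-box pattern from the AMReX GPU documentation (fill_box and a are illustrative names):

    #include <AMReX_GpuLaunch.H>
    #include <AMReX_Loop.H>

    void fill_box (amrex::Box const& bx, amrex::Array4<amrex::Real> const& a)
    {
        amrex::launch(bx, [=] AMREX_GPU_DEVICE (amrex::Box const& tbx)
        {
            // tbx is this thread's portion of bx.
            amrex::LoopConcurrent(tbx, [=] (int i, int j, int k)
            {
                a(i,j,k) = 1.0;
            });
        });
    }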
 
template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t< MaybeDeviceRunnable< L >::value > ParallelFor (Gpu::KernelInfo const &, T n, L const &f) noexcept
 
template<int MT, typename L , int dim>
std::enable_if_t< MaybeDeviceRunnable< L >::value > ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box, L const &f) noexcept
 
template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t< MaybeDeviceRunnable< L >::value > ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box, T ncomp, L const &f) noexcept
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t< MaybeDeviceRunnable< L >::value > ParallelForRNG (T n, L const &f) noexcept
 
template<typename L , int dim>
std::enable_if_t< MaybeDeviceRunnable< L >::value > ParallelForRNG (BoxND< dim > const &box, L const &f) noexcept
 
template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t< MaybeDeviceRunnable< L >::value > ParallelForRNG (BoxND< dim > const &box, T ncomp, L const &f) noexcept
 
template<int MT, typename L1 , typename L2 , int dim>
std::enable_if_t< MaybeDeviceRunnable< L1 >::value &&MaybeDeviceRunnable< L2 >::value > ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<int MT, typename L1 , typename L2 , typename L3 , int dim>
std::enable_if_t< MaybeDeviceRunnable< L1 >::value &&MaybeDeviceRunnable< L2 >::value &&MaybeDeviceRunnable< L3 >::value > ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>>
std::enable_if_t< MaybeDeviceRunnable< L1 >::value &&MaybeDeviceRunnable< L2 >::value > ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>, typename M3 = std::enable_if_t<std::is_integral<T3>::value>>
std::enable_if_t< MaybeDeviceRunnable< L1 >::value &&MaybeDeviceRunnable< L2 >::value &&MaybeDeviceRunnable< L3 >::value > ParallelFor (Gpu::KernelInfo const &, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t< MaybeDeviceRunnable< L >::value > ParallelFor (Gpu::KernelInfo const &info, T n, L &&f) noexcept
 
template<typename L , int dim>
std::enable_if_t< MaybeDeviceRunnable< L >::value > ParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box, L &&f) noexcept
 
template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t< MaybeDeviceRunnable< L >::value > ParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<typename L1 , typename L2 , int dim>
std::enable_if_t< MaybeDeviceRunnable< L1 >::value &&MaybeDeviceRunnable< L2 >::value > ParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<typename L1 , typename L2 , typename L3 , int dim>
std::enable_if_t< MaybeDeviceRunnable< L1 >::value &&MaybeDeviceRunnable< L2 >::value &&MaybeDeviceRunnable< L3 >::value > ParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>>
std::enable_if_t< MaybeDeviceRunnable< L1 >::value &&MaybeDeviceRunnable< L2 >::value > ParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>, typename M3 = std::enable_if_t<std::is_integral<T3>::value>>
std::enable_if_t< MaybeDeviceRunnable< L1 >::value &&MaybeDeviceRunnable< L2 >::value &&MaybeDeviceRunnable< L3 >::value > ParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
void ParallelFor (T n, L &&f) noexcept
 
template<typename L , int dim>
void ParallelFor (BoxND< dim > const &box, L &&f) noexcept
 
template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral<T>::value>>
void ParallelFor (BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
void For (T n, L &&f) noexcept
 
template<typename L , int dim>
void For (BoxND< dim > const &box, L &&f) noexcept
 
template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral<T>::value>>
void For (BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t< MaybeHostDeviceRunnable< L >::value > HostDeviceParallelFor (Gpu::KernelInfo const &info, T n, L &&f) noexcept
 
template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t< MaybeHostDeviceRunnable< L >::value > HostDeviceParallelFor (Gpu::KernelInfo const &info, T n, L &&f) noexcept
 
template<typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t< MaybeHostDeviceRunnable< L >::value > HostDeviceParallelFor (T n, L &&f) noexcept
 
template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t< MaybeHostDeviceRunnable< L >::value > HostDeviceParallelFor (T n, L &&f) noexcept
 
template<typename L , int dim>
std::enable_if_t< MaybeHostDeviceRunnable< L >::value > HostDeviceParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box, L &&f) noexcept
 
template<int MT, typename L , int dim>
std::enable_if_t< MaybeHostDeviceRunnable< L >::value > HostDeviceParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box, L &&f) noexcept
 
template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t< MaybeHostDeviceRunnable< L >::value > HostDeviceParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t< MaybeHostDeviceRunnable< L >::value > HostDeviceParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box, T ncomp, L &&f) noexcept
 
template<typename L1 , typename L2 , int dim>
std::enable_if_t< MaybeHostDeviceRunnable< L1 >::value &&MaybeHostDeviceRunnable< L2 >::value > HostDeviceParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<int MT, typename L1 , typename L2 , int dim>
std::enable_if_t< MaybeHostDeviceRunnable< L1 >::value &&MaybeHostDeviceRunnable< L2 >::value > HostDeviceParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box1, BoxND< dim > const &box2, L1 &&f1, L2 &&f2) noexcept
 
template<int MT, typename L1 , typename L2 , typename L3 , int dim>
std::enable_if_t< MaybeHostDeviceRunnable< L1 >::value &&MaybeHostDeviceRunnable< L2 >::value &&MaybeHostDeviceRunnable< L3 >::value > HostDeviceParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box1, BoxND< dim > const &box2, BoxND< dim > const &box3, L1 &&f1, L2 &&f2, L3 &&f3) noexcept
 
template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>>
std::enable_if_t< MaybeHostDeviceRunnable< L1 >::value &&MaybeHostDeviceRunnable< L2 >::value > HostDeviceParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>>
std::enable_if_t< MaybeHostDeviceRunnable< L1 >::value &&MaybeHostDeviceRunnable< L2 >::value > HostDeviceParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2) noexcept
 
template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>, typename M3 = std::enable_if_t<std::is_integral<T3>::value>>
std::enable_if_t< MaybeHostDeviceRunnable< L1 >::value &&MaybeHostDeviceRunnable< L2 >::value &&MaybeHostDeviceRunnable< L3 >::value > HostDeviceParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>, typename M3 = std::enable_if_t<std::is_integral<T3>::value>>
std::enable_if_t< MaybeHostDeviceRunnable< L1 >::value &&MaybeHostDeviceRunnable< L2 >::value &&MaybeHostDeviceRunnable< L3 >::value > HostDeviceParallelFor (Gpu::KernelInfo const &info, BoxND< dim > const &box1, T1 ncomp1, L1 &&f1, BoxND< dim > const &box2, T2 ncomp2, L2 &&f2, BoxND< dim > const &box3, T3 ncomp3, L3 &&f3) noexcept
 
template<class L >
AMREX_GPU_GLOBAL void launch_global (L f0)
 
template<int amrex_launch_bounds_max_threads, class L >
 __launch_bounds__ (amrex_launch_bounds_max_threads) AMREX_GPU_GLOBAL void launch_global(L f0)
 
template<int amrex_launch_bounds_max_threads, int min_blocks, class L >
 __launch_bounds__ (amrex_launch_bounds_max_threads, min_blocks) AMREX_GPU_GLOBAL void launch_global(L f0)
 
template<typename T , std::enable_if_t< std::is_integral_v< T >, int > = 0>
bool isEmpty (T n) noexcept
 
template<int dim>
AMREX_FORCE_INLINE bool isEmpty (BoxND< dim > const &b) noexcept
 
std::ostream & operator<< (std::ostream &os, const dim3 &d)
 
std::unique_ptr< iMultiFab > OwnerMask (FabArrayBase const &mf, const Periodicity &period, const IntVect &ngrow)
 
template<int dim>
AMREX_GPU_HOST_DEVICE IndexTypeND (const IntVectND< dim > &) -> IndexTypeND< dim >
 
template<class... Args, std::enable_if_t< IsConvertible_v< IndexType::CellIndex, Args... >, int > = 0>
AMREX_GPU_HOST_DEVICE IndexTypeND (IndexType::CellIndex, Args...) -> IndexTypeND< sizeof...(Args)+1 >
 
template<int dim>
std::ostream & operator<< (std::ostream &os, const IndexTypeND< dim > &it)
 Write an IndexTypeND to an ostream in ASCII. More...
 
template<int dim>
std::istream & operator>> (std::istream &is, IndexTypeND< dim > &it)
 Read an IndexTypeND from an istream. More...
 
template<int d, int... dims>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IndexTypeND< detail::get_sum< d, dims... >()> IndexTypeCat (const IndexTypeND< d > &v, const IndexTypeND< dims > &...vects) noexcept
 Returns a IndexTypeND obtained by concatenating the input IndexTypeNDs. The dimension of the return value equals the sum of the dimensions of the inputted IndexTypeNDs. More...
 
template<int d, int... dims>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE GpuTuple< IndexTypeND< d >, IndexTypeND< dims >... > IndexTypeSplit (const IndexTypeND< detail::get_sum< d, dims... >()> &v) noexcept
 Returns a tuple of IndexTypeND obtained by splitting the input IndexTypeND according to the dimensions specified by the template arguments. More...
 
template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IndexTypeND< new_dim > IndexTypeShrink (const IndexTypeND< old_dim > &v) noexcept
 Returns a new IndexTypeND of size new_dim and assigns the first new_dim values of v to it. More...
 
template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IndexTypeND< new_dim > IndexTypeExpand (const IndexTypeND< old_dim > &v, IndexType::CellIndex fill_extra=IndexType::CellIndex::CELL) noexcept
 Returns a new IndexTypeND of size new_dim and assigns all values of v to it and fill_extra to the remaining elements. More...
 
template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IndexTypeND< new_dim > IndexTypeResize (const IndexTypeND< old_dim > &v, IndexType::CellIndex fill_extra=IndexType::CellIndex::CELL) noexcept
 Returns a new IndexTypeND of size new_dim by either shrinking or expanding v. More...
 
std::int16_t swapBytes (std::int16_t val)
 
std::int32_t swapBytes (std::int32_t val)
 
std::int64_t swapBytes (std::int64_t val)
 
std::uint16_t swapBytes (std::uint16_t val)
 
std::uint32_t swapBytes (std::uint32_t val)
 
std::uint64_t swapBytes (std::uint64_t val)
 
template<typename To , typename From >
void writeIntData (const From *data, std::size_t size, std::ostream &os, const amrex::IntDescriptor &id)
 
template<typename To , typename From >
void readIntData (To *data, std::size_t size, std::istream &is, const amrex::IntDescriptor &id)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int coarsen (int i, int ratio) noexcept
 
template<std::size_t dim>
AMREX_GPU_HOST_DEVICE IntVectND (const Array< int, dim > &) -> IntVectND< dim >
 
template<class... Args, std::enable_if_t< IsConvertible_v< int, Args... >, int > = 0>
AMREX_GPU_HOST_DEVICE IntVectND (int, int, Args...) -> IntVectND< sizeof...(Args)+2 >
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > operator+ (int s, const IntVectND< dim > &p) noexcept
 Returns p + s. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > operator- (int s, const IntVectND< dim > &p) noexcept
 Returns -p + s. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > operator* (int s, const IntVectND< dim > &p) noexcept
 Returns p * s. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > min (const IntVectND< dim > &p1, const IntVectND< dim > &p2) noexcept
 Returns the IntVectND that is the component-wise minimum of two argument IntVectNDs. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > elemwiseMin (const IntVectND< dim > &p1, const IntVectND< dim > &p2) noexcept
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > max (const IntVectND< dim > &p1, const IntVectND< dim > &p2) noexcept
 Returns the IntVectND that is the component-wise maximum of two argument IntVectNDs. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > elemwiseMax (const IntVectND< dim > &p1, const IntVectND< dim > &p2) noexcept
 
template<int dim = AMREX_SPACEDIM>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > BASISV (int dir) noexcept
 Returns a basis vector in the given coordinate direction; e.g., IntVectND<3> BASISV<3>(1) == (0,1,0). Note that the coordinate directions are zero based. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > scale (const IntVectND< dim > &p, int s) noexcept
 Returns a IntVectND obtained by multiplying each of the components of this IntVectND by s. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > reflect (const IntVectND< dim > &a, int ref_ix, int idir) noexcept
 Returns an IntVectND that is the reflection of the input in the plane that passes through ref_ix and is normal to the coordinate direction idir. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > diagShift (const IntVectND< dim > &p, int s) noexcept
 Returns IntVectND obtained by adding s to each of the components of this IntVectND. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > coarsen (const IntVectND< dim > &p, int s) noexcept
 Returns an IntVectND that is the component-wise integer projection of p by s. More...
 
template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND< dim > coarsen (const IntVectND< dim > &p1, const IntVectND< dim > &p2) noexcept
 Returns an IntVectND which is the component-wise integer projection of IntVectND p1 by IntVectND p2. More...
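 
A few worked values for the component-wise helpers, assuming 3D (intvect_demo is an illustrative name):

    #include <AMReX.H>
    #include <AMReX_IntVect.H>

    void intvect_demo ()
    {
        amrex::IntVectND<3> a(4, 8, 12);
        amrex::IntVectND<3> b(6, 2, 10);

        auto lo = amrex::min(a, b);        // (4, 2, 10)
        auto hi = amrex::max(a, b);        // (6, 8, 12)
        auto c  = amrex::coarsen(a, 2);    // (2, 4, 6)
        auto s  = amrex::scale(a, 3);      // (12, 24, 36)
        auto e  = amrex::BASISV<3>(1);     // (0, 1, 0)
        auto r  = amrex::reflect(a, 0, 0); // (-4, 8, 12), reflected across i = 0
        amrex::ignore_unused(lo, hi, c, s, e, r);
    }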
 
template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 refine (Dim3 const &coarse, IntVectND< dim > const &ratio) noexcept
 
template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 coarsen (Dim3 const &fine, IntVectND< dim > const &ratio) noexcept
 
template<int dim>
std::ostream & operator<< (std::ostream &os, const IntVectND< dim > &iv)
 
template<int dim>
std::istream & operator>> (std::istream &is, IntVectND< dim > &iv)
 
template<int d, int... dims>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IntVectND< detail::get_sum< d, dims... >()> IntVectCat (const IntVectND< d > &v, const IntVectND< dims > &...vects) noexcept
 Returns a IntVectND obtained by concatenating the input IntVectNDs. The dimension of the return value equals the sum of the dimensions of the inputted IntVectNDs. More...
 
template<int d, int... dims>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE GpuTuple< IntVectND< d >, IntVectND< dims >... > IntVectSplit (const IntVectND< detail::get_sum< d, dims... >()> &v) noexcept
 Returns a tuple of IntVectND obtained by splitting the input IntVectND according to the dimensions specified by the template arguments. More...
 
template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IntVectND< new_dim > IntVectShrink (const IntVectND< old_dim > &iv) noexcept
 Returns a new IntVectND of size new_dim and assigns the first new_dim values of iv to it. More...
 
template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IntVectND< new_dim > IntVectExpand (const IntVectND< old_dim > &iv, int fill_extra=0) noexcept
 Returns a new IntVectND of size new_dim and assigns all values of iv to it and fill_extra to the remaining elements. More...
 
template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IntVectND< new_dim > IntVectResize (const IntVectND< old_dim > &iv, int fill_extra=0) noexcept
 Returns a new IntVectND of size new_dim by either shrinking or expanding iv. More...
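 
Cat/Split/Shrink/Expand/Resize convert between IntVectNDs of different dimension, which is mainly useful in dimension-agnostic template code. A sketch (intvect_dims_demo is an illustrative name):

    #include <AMReX.H>
    #include <AMReX_IntVect.H>
    #include <AMReX_Tuple.H>

    void intvect_dims_demo ()
    {
        amrex::IntVectND<2> a(1, 2);
        amrex::IntVectND<3> b(3, 4, 5);

        auto cat = amrex::IntVectCat(a, b);       // IntVectND<5>(1,2,3,4,5)
        auto tup = amrex::IntVectSplit<2,3>(cat); // a 2D and a 3D piece again
        auto a2  = amrex::get<0>(tup);            // (1, 2)
        auto b2  = amrex::get<1>(tup);            // (3, 4, 5)
        auto sh  = amrex::IntVectShrink<1>(b);    // (3)
        auto ex  = amrex::IntVectExpand<4>(a, 0); // (1, 2, 0, 0)
        amrex::ignore_unused(a2, b2, sh, ex);
    }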
 
template<class F >
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void Loop (Dim3 lo, Dim3 hi, F const &f) noexcept
 
template<class F >
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void Loop (Dim3 lo, Dim3 hi, int ncomp, F const &f) noexcept
 
template<class F >
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void LoopConcurrent (Dim3 lo, Dim3 hi, F const &f) noexcept
 
template<class F >
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void LoopConcurrent (Dim3 lo, Dim3 hi, int ncomp, F const &f) noexcept
 
template<class F , int dim>
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void Loop (BoxND< dim > const &bx, F const &f) noexcept
 
template<class F , int dim>
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void Loop (BoxND< dim > const &bx, int ncomp, F const &f) noexcept
 
template<class F , int dim>
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void LoopConcurrent (BoxND< dim > const &bx, F const &f) noexcept
 
template<class F , int dim>
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void LoopConcurrent (BoxND< dim > const &bx, int ncomp, F const &f) noexcept
 
template<class F >
AMREX_ATTRIBUTE_FLATTEN_FOR void LoopOnCpu (Dim3 lo, Dim3 hi, F const &f) noexcept
 
template<class F >
AMREX_ATTRIBUTE_FLATTEN_FOR void LoopOnCpu (Dim3 lo, Dim3 hi, int ncomp, F const &f) noexcept
 
template<class F >
AMREX_ATTRIBUTE_FLATTEN_FOR void LoopConcurrentOnCpu (Dim3 lo, Dim3 hi, F const &f) noexcept
 
template<class F >
AMREX_ATTRIBUTE_FLATTEN_FOR void LoopConcurrentOnCpu (Dim3 lo, Dim3 hi, int ncomp, F const &f) noexcept
 
template<class F , int dim>
AMREX_ATTRIBUTE_FLATTEN_FOR void LoopOnCpu (BoxND< dim > const &bx, F const &f) noexcept
 
template<class F , int dim>
AMREX_ATTRIBUTE_FLATTEN_FOR void LoopOnCpu (BoxND< dim > const &bx, int ncomp, F const &f) noexcept
 
template<class F , int dim>
AMREX_ATTRIBUTE_FLATTEN_FOR void LoopConcurrentOnCpu (BoxND< dim > const &bx, F const &f) noexcept
 
template<class F , int dim>
AMREX_ATTRIBUTE_FLATTEN_FOR void LoopConcurrentOnCpu (BoxND< dim > const &bx, int ncomp, F const &f) noexcept
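 
The Loop family is a plain nested-loop abstraction: Loop/LoopConcurrent compile for host or device, while the *OnCpu forms always run on the host; the Concurrent variants are meant for iterations that are safe to execute in any order (e.g. vectorizable). A host-side sketch, assuming a is valid on the host (host_fill is an illustrative name):

    #include <AMReX_Loop.H>

    void host_fill (amrex::Box const& bx, amrex::Array4<amrex::Real> const& a)
    {
        amrex::LoopOnCpu(bx, [&] (int i, int j, int k)
        {
            a(i,j,k) = static_cast<amrex::Real>(i + j + k);
        });
    }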
 
AMREX_GPU_HOST_DEVICE double abs (double)
 
AMREX_GPU_HOST_DEVICE float abs (float)
 
AMREX_GPU_HOST_DEVICE long double abs (long double)
 
AMREX_GPU_HOST_DEVICE int abs (int)
 
AMREX_GPU_HOST_DEVICE long abs (long)
 
AMREX_GPU_HOST_DEVICE long long abs (long long)
 
template<RunOn run_on, typename T , std::enable_if_t< std::is_same_v< T, double >||std::is_same_v< T, float >, int > FOO = 0>
void fill_snan (T *p, std::size_t nelems)
 
std::ostream & operator<< (std::ostream &os, const MemProfiler::Bytes &bytes)
 
std::ostream & operator<< (std::ostream &os, const MemProfiler::Builds &builds)
 
void InterpAddBox (MultiFabCopyDescriptor &fabCopyDesc, BoxList *returnUnfilledBoxes, Vector< FillBoxId > &returnedFillBoxIds, const Box &subbox, MultiFabId faid1, MultiFabId faid2, Real t1, Real t2, Real t, int src_comp, int dest_comp, int num_comp, bool extrap)
 
void InterpFillFab (MultiFabCopyDescriptor &fabCopyDesc, const Vector< FillBoxId > &fillBoxIds, MultiFabId faid1, MultiFabId faid2, FArrayBox &dest, Real t1, Real t2, Real t, int src_comp, int dest_comp, int num_comp, bool extrap)
 
bool TilingIfNotGPU () noexcept
 
bool isMFIterSafe (const FabArrayBase &x, const FabArrayBase &y)
 
void GccPlacaterMF ()
 
void average_node_to_cellcenter (MultiFab &cc, int dcomp, const MultiFab &nd, int scomp, int ncomp, int ngrow=0)
 Average nodal-based MultiFab onto cell-centered MultiFab. More...
 
void average_edge_to_cellcenter (MultiFab &cc, int dcomp, const Vector< const MultiFab * > &edge, int ngrow=0)
 Average edge-based MultiFab onto cell-centered MultiFab. More...
 
void average_face_to_cellcenter (MultiFab &cc, int dcomp, const Vector< const MultiFab * > &fc, int ngrow=0)
 Average face-based MultiFab onto cell-centered MultiFab. More...
 
void average_face_to_cellcenter (MultiFab &cc, const Vector< const MultiFab * > &fc, const Geometry &geom)
 Average face-based MultiFab onto cell-centered MultiFab with geometric weighting. More...
 
void average_face_to_cellcenter (MultiFab &cc, const Array< const MultiFab *, AMREX_SPACEDIM > &fc, const Geometry &geom)
 Average face-based MultiFab onto cell-centered MultiFab with geometric weighting. More...
 
void average_cellcenter_to_face (const Vector< MultiFab * > &fc, const MultiFab &cc, const Geometry &geom, int ncomp=1, bool use_harmonic_averaging=false)
 Average cell-centered MultiFab onto face-based MultiFab with geometric weighting. More...
 
void average_cellcenter_to_face (const Array< MultiFab *, AMREX_SPACEDIM > &fc, const MultiFab &cc, const Geometry &geom, int ncomp=1, bool use_harmonic_averaging=false)
 Average cell-centered MultiFab onto face-based MultiFab with geometric weighting. More...
 
void average_down (const MultiFab &S_fine, MultiFab &S_crse, const Geometry &fgeom, const Geometry &cgeom, int scomp, int ncomp, int rr)
 
void average_down (const MultiFab &S_fine, MultiFab &S_crse, const Geometry &fgeom, const Geometry &cgeom, int scomp, int ncomp, const IntVect &ratio)
 Volume-weighted average of fine MultiFab onto coarse MultiFab. More...
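 
A typical restriction step from a fine to a coarse AMR level; a sketch assuming both MultiFabs and Geometry objects already exist and the refinement ratio is 2 in every direction (restrict_level is an illustrative name):

    #include <AMReX_MultiFabUtil.H>

    void restrict_level (amrex::MultiFab const& fine, amrex::MultiFab& crse,
                         amrex::Geometry const& fgeom, amrex::Geometry const& cgeom)
    {
        // Volume-weighted average of every component of fine onto crse.
        amrex::average_down(fine, crse, fgeom, cgeom,
                            0, fine.nComp(), amrex::IntVect(2));
    }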
 
void sum_fine_to_coarse (const MultiFab &S_fine, MultiFab &S_crse, int scomp, int ncomp, const IntVect &ratio, const Geometry &cgeom, const Geometry &)
 
void average_down_edges (const Vector< const MultiFab * > &fine, const Vector< MultiFab * > &crse, const IntVect &ratio, int ngcrse=0)
 Average fine edge-based MultiFab onto crse edge-based MultiFab. More...
 
void average_down_edges (const Array< const MultiFab *, AMREX_SPACEDIM > &fine, const Array< MultiFab *, AMREX_SPACEDIM > &crse, const IntVect &ratio, int ngcrse)
 
void average_down_edges (const MultiFab &fine, MultiFab &crse, const IntVect &ratio, int ngcrse)
 
void print_state (const MultiFab &mf, const IntVect &cell, int n=-1, const IntVect &ng=IntVect::TheZeroVector())
 Output state data for a single zone. More...
 
void writeFabs (const MultiFab &mf, const std::string &name)
 Write each fab individually. More...
 
void writeFabs (const MultiFab &mf, int comp, int ncomp, const std::string &name)
 
MultiFab ToMultiFab (const iMultiFab &imf)
 Convert iMultiFab to MultiFab. More...
 
FabArray< BaseFab< Long > > ToLongMultiFab (const iMultiFab &imf)
 Convert iMultiFab to a FabArray of Longs. More...
 
std::unique_ptr< MultiFab > get_slice_data (int dir, Real coord, const MultiFab &cc, const Geometry &geom, int start_comp, int ncomp, bool interpolate, RealBox const &bnd_rbx)
 
iMultiFab makeFineMask (const BoxArray &cba, const DistributionMapping &cdm, const BoxArray &fba, const IntVect &ratio, int crse_value, int fine_value)
 
template<typename FAB >
void makeFineMask_doit (FabArray< FAB > &mask, const BoxArray &fba, const IntVect &ratio, Periodicity const &period, typename FAB::value_type crse_value, typename FAB::value_type fine_value)
 
iMultiFab makeFineMask (const BoxArray &cba, const DistributionMapping &cdm, const IntVect &cnghost, const BoxArray &fba, const IntVect &ratio, Periodicity const &period, int crse_value, int fine_value)
 
MultiFab makeFineMask (const BoxArray &cba, const DistributionMapping &cdm, const BoxArray &fba, const IntVect &ratio, Real crse_value, Real fine_value)
 
void computeDivergence (MultiFab &divu, const Array< MultiFab const *, AMREX_SPACEDIM > &umac, const Geometry &geom)
 Computes divergence of face-data stored in the umac MultiFab. More...
 
void computeGradient (MultiFab &grad, const Array< MultiFab const *, AMREX_SPACEDIM > &umac, const Geometry &geom)
 Computes gradient of face-data stored in the umac MultiFab. More...
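 
Both routines take the face data as an Array of per-direction MultiFab pointers. A sketch assuming umac holds one face-based MultiFab per coordinate direction (divergence is an illustrative name):

    #include <AMReX_MultiFabUtil.H>

    void divergence (amrex::MultiFab& divu,
                     amrex::Array<amrex::MultiFab,AMREX_SPACEDIM> const& umac,
                     amrex::Geometry const& geom)
    {
        amrex::computeDivergence(divu, amrex::GetArrOfConstPtrs(umac), geom);
    }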
 
MultiFab periodicShift (MultiFab const &mf, IntVect const &offset, Periodicity const &period)
 Periodically shift a MultiFab. More...
 
Gpu::HostVector< Real > sumToLine (MultiFab const &mf, int icomp, int ncomp, Box const &domain, int direction, bool local=false)
 Sum MultiFab data to line. More...
 
Real volumeWeightedSum (Vector< MultiFab const * > const &mf, int icomp, Vector< Geometry > const &geom, Vector< IntVect > const &ratio, bool local=false)
 Volume weighted sum for a vector of MultiFabs. More...
 
void FourthOrderInterpFromFineToCoarse (MultiFab &cmf, int scomp, int ncomp, MultiFab const &fmf, IntVect const &ratio)
 Fourth-order interpolation from fine to coarse level. More...
 
void FillRandom (MultiFab &mf, int scomp, int ncomp)
 Fill MultiFab with random numbers from uniform distribution. More...
 
void FillRandomNormal (MultiFab &mf, int scomp, int ncomp, Real mean, Real stddev)
 Fill MultiFab with random numbers from normal distribution. More...
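 
A sketch of seeding a field with Gaussian noise, assuming mf already exists (add_noise is an illustrative name):

    #include <AMReX_MultiFabUtil.H>

    void add_noise (amrex::MultiFab& mf)
    {
        // Overwrite all components with samples from N(0, 0.01).
        amrex::FillRandomNormal(mf, 0, mf.nComp(),
                                amrex::Real(0.0), amrex::Real(0.01));
    }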
 
Vector< MultiFabconvexify (Vector< MultiFab const * > const &mf, Vector< IntVect > const &refinement_ratio)
 Convexify AMR data. More...
 
template<typename CMF , typename FMF , std::enable_if_t< IsFabArray_v< CMF > &&IsFabArray_v< FMF >, int > = 0>
void average_face_to_cellcenter (CMF &cc, int dcomp, const Array< const FMF *, AMREX_SPACEDIM > &fc, int ngrow=0)
 Average face-based FabArray onto cell-centered FabArray. More...
 
template<typename MF , std::enable_if_t< IsFabArray< MF >::value, int > = 0>
void average_down_faces (const Vector< const MF * > &fine, const Vector< MF * > &crse, const IntVect &ratio, int ngcrse=0)
 Average fine face-based FabArray onto crse face-based FabArray. More...
 
template<typename MF , std::enable_if_t< IsFabArray< MF >::value, int > = 0>
void average_down_faces (const Vector< const MF * > &fine, const Vector< MF * > &crse, int ratio, int ngcrse=0)
 Average fine face-based FabArray onto crse face-based FabArray. More...
 
template<typename MF , std::enable_if_t< IsFabArray< MF >::value, int > = 0>
void average_down_faces (const Array< const MF *, AMREX_SPACEDIM > &fine, const Array< MF *, AMREX_SPACEDIM > &crse, const IntVect &ratio, int ngcrse=0)
 Average fine face-based FabArray onto crse face-based FabArray. More...
 
template<typename MF , std::enable_if_t< IsFabArray< MF >::value, int > = 0>
void average_down_faces (const Array< const MF *, AMREX_SPACEDIM > &fine, const Array< MF *, AMREX_SPACEDIM > &crse, int ratio, int ngcrse=0)
 Average fine face-based FabArray onto crse face-based FabArray. More...
 
template<typename FAB >
void average_down_faces (const FabArray< FAB > &fine, FabArray< FAB > &crse, const IntVect &ratio, int ngcrse=0)
 This version does average down for one face direction. More...
 
template<typename MF , std::enable_if_t< IsFabArray< MF >::value, int > = 0>
void average_down_faces (const Array< const MF *, AMREX_SPACEDIM > &fine, const Array< MF *, AMREX_SPACEDIM > &crse, const IntVect &ratio, const Geometry &crse_geom)
 
template<typename FAB >
void average_down_faces (const FabArray< FAB > &fine, FabArray< FAB > &crse, const IntVect &ratio, const Geometry &crse_geom)
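
For example, a sketch averaging a fine level's face-centered data onto a coarse level (fx/fy/fz and cx/cy/cz are placeholder MultiFabs built on the matching face-centered BoxArrays):

    amrex::Array<const amrex::MultiFab*, AMREX_SPACEDIM> fine
        {AMREX_D_DECL(&fx, &fy, &fz)};
    amrex::Array<amrex::MultiFab*, AMREX_SPACEDIM> crse
        {AMREX_D_DECL(&cx, &cy, &cz)};
    amrex::average_down_faces(fine, crse, amrex::IntVect(2)); // ratio 2, ngcrse = 0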
 
template<typename FAB >
void average_down_nodal (const FabArray< FAB > &S_fine, FabArray< FAB > &S_crse, const IntVect &ratio, int ngcrse=0, bool mfiter_is_definitely_safe=false)
 Average fine node-based MultiFab onto crse node-centered MultiFab. More...
 
template<typename FAB >
void average_down (const FabArray< FAB > &S_fine, FabArray< FAB > &S_crse, int scomp, int ncomp, const IntVect &ratio)
 
template<typename FAB >
void average_down (const FabArray< FAB > &S_fine, FabArray< FAB > &S_crse, int scomp, int ncomp, int rr)
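
A sketch (S_fine and S_crse are placeholder MultiFabs on properly nested levels):

    // Average components [0, ncomp) of S_fine down onto S_crse with refinement ratio 2.
    amrex::average_down(S_fine, S_crse, 0, S_fine.nComp(), 2);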
 
template<typename MF , std::enable_if_t< IsFabArray< MF >::value, int > FOO = 0>
Vector< typename MF::value_type > get_cell_data (MF const &mf, IntVect const &cell)
 Get data in a cell of MultiFab/FabArray. More...
 
template<typename MF , std::enable_if_t< IsFabArray< MF >::value, int > FOO = 0>
MF get_line_data (MF const &mf, int dir, IntVect const &cell, Box const &bnd_bx=Box())
 Get data in a line of MultiFab/FabArray. More...
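
A sketch of probing a MultiFab at a point and along a line (the index is a placeholder; an assumption here is that ranks not containing the cell receive an empty Vector, hence the check):

    amrex::IntVect cell(AMREX_D_DECL(8, 8, 8));
    auto vals = amrex::get_cell_data(mf, cell);
    if (!vals.empty()) {
        amrex::Print() << "component 0 at cell: " << vals[0] << "\n";
    }
    auto line = amrex::get_line_data(mf, 0, cell); // all data along x through the cell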
 
template<typename FAB >
iMultiFab makeFineMask (const FabArray< FAB > &cmf, const BoxArray &fba, const IntVect &ratio, int crse_value=0, int fine_value=1)
 
template<typename FAB >
iMultiFab makeFineMask (const FabArray< FAB > &cmf, const BoxArray &fba, const IntVect &ratio, Periodicity const &period, int crse_value, int fine_value)
 
template<typename FAB >
iMultiFab makeFineMask (const FabArray< FAB > &cmf, const FabArray< FAB > &fmf, const IntVect &cnghost, const IntVect &ratio, Periodicity const &period, int crse_value, int fine_value)
 
template<typename FAB >
iMultiFab makeFineMask (const FabArray< FAB > &cmf, const FabArray< FAB > &fmf, const IntVect &cnghost, const IntVect &ratio, Periodicity const &period, int crse_value, int fine_value, LayoutData< int > &has_cf)
 
template<typename T , typename U >
T cast (U const &mf_in)
 example: auto mf = amrex::cast<MultiFab>(imf); More...
 
template<typename Op , typename T , typename FAB , typename F , std::enable_if_t< IsBaseFab< FAB >::value, int > FOO = 0>
BaseFab< T > ReduceToPlane (int direction, Box const &domain, FabArray< FAB > const &mf, F const &f)
 Reduce FabArray/MultiFab data to a plane. More...
 
template<typename F >
Real NormHelper (const MultiFab &x, int xcomp, const MultiFab &y, int ycomp, F const &f, int numcomp, IntVect nghost, bool local)
 Returns part of a norm based on two MultiFabs. More...
 
template<typename MMF , typename Pred , typename F >
Real NormHelper (const MMF &mask, const MultiFab &x, int xcomp, const MultiFab &y, int ycomp, Pred const &pf, F const &f, int numcomp, IntVect nghost, bool local)
 Returns part of a norm based on three MultiFabs. More...
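
Assuming f is applied pointwise to corresponding elements of x and y and the results are summed into the returned partial norm (the usual use, e.g. for dot products), a sketch of the two-MultiFab form:

    amrex::Real dot = amrex::NormHelper(x, 0, y, 0,
        [=] AMREX_GPU_HOST_DEVICE (amrex::Real a, amrex::Real b) { return a*b; },
        1, amrex::IntVect(0), false); // 1 component, no ghost cells, MPI-reduced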
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avg_nd_to_cc (int i, int, int, int n, Array4< Real > const &cc, Array4< Real const > const &nd, int cccomp, int ndcomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avg_eg_to_cc (int i, int, int, Array4< Real > const &cc, Array4< Real const > const &Ex, int cccomp) noexcept
 
template<typename CT , typename FT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avg_fc_to_cc (int i, int, int, Array4< CT > const &cc, Array4< FT const > const &fx, int cccomp, GeometryData const &gd) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avg_cc_to_fc (int i, int, int, int n, Box const &xbx, Array4< Real > const &fx, Array4< Real const > const &cc, GeometryData const &gd, bool use_harmonic_averaging) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avgdown_faces (Box const &bx, Array4< T > const &crse, Array4< T const > const &fine, int ccomp, int fcomp, int ncomp, IntVect const &ratio, int) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avgdown_faces (int i, int, int, int n, Array4< T > const &crse, Array4< T const > const &fine, int ccomp, int fcomp, IntVect const &ratio, int) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avgdown_edges (Box const &bx, Array4< Real > const &crse, Array4< Real const > const &fine, int ccomp, int fcomp, int ncomp, IntVect const &ratio, int) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avgdown_edges (int i, int, int, int n, Array4< Real > const &crse, Array4< Real const > const &fine, int ccomp, int fcomp, IntVect const &ratio, int) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avgdown (Box const &bx, Array4< T > const &crse, Array4< T const > const &fine, int ccomp, int fcomp, int ncomp, IntVect const &ratio) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avgdown (int i, int, int, int n, Array4< T > const &crse, Array4< T const > const &fine, int ccomp, int fcomp, IntVect const &ratio) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avgdown_with_vol (int i, int, int, int n, Array4< T > const &crse, Array4< T const > const &fine, Array4< T const > const &fv, int ccomp, int fcomp, IntVect const &ratio) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avgdown_nodes (Box const &bx, Array4< T > const &crse, Array4< T const > const &fine, int ccomp, int fcomp, int ncomp, IntVect const &ratio) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avgdown_nodes (int i, int, int, int n, Array4< T > const &crse, Array4< T const > const &fine, int ccomp, int fcomp, IntVect const &ratio) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_compute_divergence (Box const &bx, Array4< Real > const &divu, Array4< Real const > const &u, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_compute_gradient (Box const &bx, Array4< Real > const &grad, Array4< Real const > const &u, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avg_eg_to_cc (int i, int j, int, Array4< Real > const &cc, Array4< Real const > const &Ex, Array4< Real const > const &Ey, int cccomp) noexcept
 
template<typename CT , typename FT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avg_fc_to_cc (int i, int j, int, Array4< CT > const &cc, Array4< FT const > const &fx, Array4< FT const > const &fy, int cccomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avg_cc_to_fc (int i, int j, int, int n, Box const &xbx, Box const &ybx, Array4< Real > const &fx, Array4< Real > const &fy, Array4< Real const > const &cc, bool use_harmonic_averaging) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avgdown_with_vol (int i, int j, int, int n, Array4< Real > const &crse, Array4< Real const > const &fine, Array4< Real const > const &fv, int ccomp, int fcomp, IntVect const &ratio) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_compute_divergence (Box const &bx, Array4< Real > const &divu, Array4< Real const > const &u, Array4< Real const > const &v, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_compute_gradient (Box const &bx, Array4< Real > const &grad, Array4< Real const > const &u, Array4< Real const > const &v, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_compute_convective_difference (Box const &bx, Array4< amrex::Real > const &diff, Array4< Real const > const &u_face, Array4< Real const > const &v_face, Array4< Real const > const &s_on_x_face, Array4< Real const > const &s_on_y_face, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_compute_divergence_rz (Box const &bx, Array4< Real > const &divu, Array4< Real const > const &u, Array4< Real const > const &v, Array4< Real const > const &ax, Array4< Real const > const &ay, Array4< Real const > const &vol) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_compute_gradient_rz (Box const &bx, Array4< Real > const &grad, Array4< Real const > const &u, Array4< Real const > const &v, Array4< Real const > const &ax, Array4< Real const > const &ay, Array4< Real const > const &vol) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avg_eg_to_cc (int i, int j, int k, Array4< Real > const &cc, Array4< Real const > const &Ex, Array4< Real const > const &Ey, Array4< Real const > const &Ez, int cccomp) noexcept
 
template<typename CT , typename FT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avg_fc_to_cc (int i, int j, int k, Array4< CT > const &cc, Array4< FT const > const &fx, Array4< FT const > const &fy, Array4< FT const > const &fz, int cccomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_avg_cc_to_fc (int i, int j, int k, int n, Box const &xbx, Box const &ybx, Box const &zbx, Array4< Real > const &fx, Array4< Real > const &fy, Array4< Real > const &fz, Array4< Real const > const &cc, bool use_harmonic_averaging) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_compute_divergence (Box const &bx, Array4< Real > const &divu, Array4< Real const > const &u, Array4< Real const > const &v, Array4< Real const > const &w, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_compute_gradient (Box const &bx, Array4< Real > const &grad, Array4< Real const > const &u, Array4< Real const > const &v, Array4< Real const > const &w, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_compute_convective_difference (Box const &bx, Array4< Real > const &diff, Array4< Real const > const &u_face, Array4< Real const > const &v_face, Array4< Real const > const &w_face, Array4< Real const > const &s_on_x_face, Array4< Real const > const &s_on_y_face, Array4< Real const > const &s_on_z_face, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE void amrex_fill_slice_interp (Box const &bx, Array4< Real > slice, Array4< Real const > const &full, int scomp, int fcomp, int ncomp, int dir, Real coord, GeometryData const &gd) noexcept
 
int numUniquePhysicalCores ()
 
std::ostream & operator<< (std::ostream &os, const Orientation &o)
 Write to an ostream in ASCII format. More...
 
std::istream & operator>> (std::istream &is, Orientation &o)
 
template<typename... Ops, typename... Ts, typename FAB , typename F , typename foo = std::enable_if_t<IsBaseFab<FAB>::value>>
ReduceData< Ts... >::Type ParReduce (TypeList< Ops... > operation_list, TypeList< Ts... > type_list, FabArray< FAB > const &fa, IntVect const &nghost, F &&f)
 Parallel reduce for MultiFab/FabArray. More...
 
template<typename Op , typename T , typename FAB , typename F , typename foo = std::enable_if_t<IsBaseFab<FAB>::value>>
T ParReduce (TypeList< Op > operation_list, TypeList< T > type_list, FabArray< FAB > const &fa, IntVect const &nghost, F &&f)
 Parallel reduce for MultiFab/FabArray. More...
 
template<typename... Ops, typename... Ts, typename FAB , typename F , typename foo = std::enable_if_t<IsBaseFab<FAB>::value>>
ReduceData< Ts... >::Type ParReduce (TypeList< Ops... > operation_list, TypeList< Ts... > type_list, FabArray< FAB > const &fa, IntVect const &nghost, int ncomp, F &&f)
 Parallel reduce for MultiFab/FabArray. More...
 
template<typename Op , typename T , typename FAB , typename F , typename foo = std::enable_if_t<IsBaseFab<FAB>::value>>
T ParReduce (TypeList< Op > operation_list, TypeList< T > type_list, FabArray< FAB > const &fa, IntVect const &nghost, int ncomp, F &&f)
 Parallel reduce for MultiFab/FabArray. More...
 
template<typename... Ops, typename... Ts, typename FAB , typename F , typename foo = std::enable_if_t<IsBaseFab<FAB>::value>>
ReduceData< Ts... >::Type ParReduce (TypeList< Ops... > operation_list, TypeList< Ts... > type_list, FabArray< FAB > const &fa, F &&f)
 Parallel reduce for MultiFab/FabArray. More...
 
template<typename Op , typename T , typename FAB , typename F , typename foo = std::enable_if_t<IsBaseFab<FAB>::value>>
T ParReduce (TypeList< Op > operation_list, TypeList< T > type_list, FabArray< FAB > const &fa, F &&f)
 Parallel reduce for MultiFab/FabArray. More...
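
For example, a sketch summing component 0 of a MultiFab with the single-operation overload (mf is a placeholder; the callable follows the MultiArray4 idiom):

    auto const& ma = mf.const_arrays();
    amrex::Real total = amrex::ParReduce(
        amrex::TypeList<amrex::ReduceOpSum>{}, amrex::TypeList<amrex::Real>{},
        mf, amrex::IntVect(0), // no ghost cells
        [=] AMREX_GPU_DEVICE (int box_no, int i, int j, int k) noexcept
            -> amrex::GpuTuple<amrex::Real>
        {
            return { ma[box_no](i,j,k,0) };
        });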
 
std::ostream & pout ()
 The stream that all output except error messages should use. More...
 
void setPoutBaseName (const std::string &a_Name)
 Set the base name for the parallel output files used by pout(). More...
 
const std::string & poutFileName ()
 return the current filename as used by pout() More...
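
A sketch (the base name is arbitrary; in parallel runs each rank is expected to write to its own file derived from the base name):

    amrex::setPoutBaseName("run_log");
    amrex::pout() << "diagnostic output from this rank\n";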
 
template<typename T , typename F >
int Partition (T *data, int beg, int end, F &&f)
 A GPU-capable partition function for contiguous data. More...
 
template<typename T , typename F >
int Partition (T *data, int n, F &&f)
 A GPU-capable partition function for contiguous data. More...
 
template<typename T , typename F >
int Partition (Gpu::DeviceVector< T > &v, F &&f)
 A GPU-capable partition function for contiguous data. More...
 
template<typename T , typename F >
int StablePartition (T *data, int beg, int end, F &&f)
 A GPU-capable partition function for contiguous data. More...
 
template<typename T , typename F >
int StablePartition (T *data, int n, F &&f)
 A GPU-capable partition function for contiguous data. More...
 
template<typename T , typename F >
int StablePartition (Gpu::DeviceVector< T > &v, F &&f)
 A GPU-capable partition function for contiguous data. More...
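
A sketch, assuming (as with std::partition) that elements satisfying the predicate are moved to the front and their count is returned:

    amrex::Gpu::DeviceVector<int> v; // filled elsewhere
    int n_neg = amrex::Partition(v,
        [=] AMREX_GPU_DEVICE (int x) { return x < 0; });
    // All negative entries now precede the rest; StablePartition additionally
    // preserves the relative order within each group.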
 
std::string LevelPath (int level, const std::string &levelPrefix="Level_")
 return the name of the level directory, e.g., Level_5 More...
 
std::string MultiFabHeaderPath (int level, const std::string &levelPrefix="Level_", const std::string &mfPrefix="Cell")
 return the path of the multifab to write to the header, e.g., Level_5/Cell More...
 
std::string LevelFullPath (int level, const std::string &plotfilename, const std::string &levelPrefix="Level_")
 return the full path of the level directory, e.g., plt00005/Level_5 More...
 
std::string MultiFabFileFullPrefix (int level, const std::string &plotfilename, const std::string &levelPrefix="Level_", const std::string &mfPrefix="Cell")
 return the full path multifab prefix, e.g., plt00005/Level_5/Cell More...
 
void PreBuildDirectorHierarchy (const std::string &dirName, const std::string &subDirPrefix, int nSubDirs, bool callBarrier)
 Prebuild a hierarchy of directories. dirName is built first; if dirName already exists, it is renamed. Then dirName/subDirPrefix_0 .. dirName/subDirPrefix_nSubDirs-1 are built. If callBarrier is true, ParallelDescriptor::Barrier() is called after all directories are built. ParallelDescriptor::IOProcessor() creates the directories. More...
 
void WriteGenericPlotfileHeader (std::ostream &HeaderFile, int nlevels, const Vector< BoxArray > &bArray, const Vector< std::string > &varnames, const Vector< Geometry > &geom, Real time, const Vector< int > &level_steps, const Vector< IntVect > &ref_ratio, const std::string &versionName, const std::string &levelPrefix, const std::string &mfPrefix)
 
void WriteMultiLevelPlotfile (const std::string &plotfilename, int nlevels, const Vector< const MultiFab * > &mf, const Vector< std::string > &varnames, const Vector< Geometry > &geom, Real time, const Vector< int > &level_steps, const Vector< IntVect > &ref_ratio, const std::string &versionName, const std::string &levelPrefix, const std::string &mfPrefix, const Vector< std::string > &extra_dirs)
 
void WriteMLMF (const std::string &plotfilename, const Vector< const MultiFab * > &mf, const Vector< Geometry > &geom)
 Write a plotfile to disk given: a plotfile name, a vector of MultiFabs, and a vector of Geometries. Variable names are written as "Var0", "Var1", etc. The refinement ratio is computed from the Geometry vector; "time" and "level_steps" are set to zero. More...
 
void WriteMultiLevelPlotfileHeaders (const std::string &plotfilename, int nlevels, const Vector< const MultiFab * > &mf, const Vector< std::string > &varnames, const Vector< Geometry > &geom, Real time, const Vector< int > &level_steps, const Vector< IntVect > &ref_ratio, const std::string &versionName, const std::string &levelPrefix, const std::string &mfPrefix, const Vector< std::string > &extra_dirs)
 
void WriteSingleLevelPlotfile (const std::string &plotfilename, const MultiFab &mf, const Vector< std::string > &varnames, const Geometry &geom, Real time, int level_step, const std::string &versionName, const std::string &levelPrefix, const std::string &mfPrefix, const Vector< std::string > &extra_dirs)
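
A sketch of the single-level writer (mf and geom are placeholders; the two variable names assume mf has two components, and the trailing arguments keep their usual defaults):

    amrex::WriteSingleLevelPlotfile("plt00000", mf,
                                    {"density", "pressure"},
                                    geom, 0.0 /*time*/, 0 /*level_step*/);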
 
template<typename T >
std::ostream & operator<< (std::ostream &os, Array< T, AMREX_SPACEDIM > const &a)
 
template<typename T , typename S >
std::ostream & operator<< (std::ostream &os, const std::pair< T, S > &v)
 
void InitRandom (ULong cpu_seed, int nprocs=ParallelDescriptor::NProcs(), ULong gpu_seed=detail::DefaultGpuSeed())
 Set the seed of the random number generator. More...
 
Real RandomNormal (Real mean, Real stddev)
 Generate a pseudo-random double from a normal distribution. More...
 
Real Random ()
 Generate a pseudo-random double from a uniform distribution. More...
 
unsigned int RandomPoisson (Real lambda)
 Generate a pseudo-random integer from a Poisson distribution. More...
 
Real RandomGamma (Real alpha, Real beta)
 Generate a pseudo-random floating point number from the Gamma distribution. More...
 
unsigned int Random_int (unsigned int n)
 Generates one pseudorandom unsigned integer, uniformly distributed on the interval [0,n-1], for each call. More...
 
ULong Random_long (ULong n)
 Generates one pseudorandom unsigned long, uniformly distributed on the interval [0,n-1], for each call. More...
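
A sketch of host-side generation (the seed choice is illustrative):

    amrex::InitRandom(12345 + amrex::ParallelDescriptor::MyProc()); // decorrelate ranks
    amrex::Real u = amrex::Random();               // uniform between 0 and 1
    amrex::Real g = amrex::RandomNormal(0.0, 1.0); // mean 0, stddev 1
    unsigned int k = amrex::Random_int(10);        // uniform on [0,9]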
 
void SaveRandomState (std::ostream &os)
 Save the random state to the stream; use RestoreRandomState() to restore it. More...
 
void RestoreRandomState (std::istream &is, int nthreads_old, int nstep_old)
 
void UniqueRandomSubset (Vector< int > &uSet, int setSize, int poolSize, bool printSet=false)
 Create a unique subset of random numbers from a pool of integers in the range [0, poolSize - 1]. The set is stored in the order the numbers are found. setSize must be <= poolSize, and uSet is resized to setSize. If you want all processors to have the same set, call this on one processor and broadcast the array. More...
 
void ResetRandomSeed (ULong cpu_seed, ULong gpu_seed)
 
void DeallocateRandomSeedDevArray ()
 
void FillRandom (Real *p, Long N)
 
void FillRandomNormal (Real *p, Long N, Real mean, Real stddev)
 Fill random numbers from normal distribution. More...
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real Random (RandomEngine const &random_engine)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real RandomNormal (Real mean, Real stddev, RandomEngine const &random_engine)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE unsigned int RandomPoisson (Real lambda, RandomEngine const &random_engine)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real RandomGamma (Real alpha, Real beta, RandomEngine const &random_engine)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE unsigned int Random_int (unsigned int n, RandomEngine const &random_engine)
 
AMREX_FORCE_INLINE randState_t * getRandState ()
 
std::ostream & operator<< (std::ostream &, const RealBox &)
 Nice ASCII output. More...
 
std::istream & operator>> (std::istream &, RealBox &)
 Nice ASCII input. More...
 
bool AlmostEqual (const RealBox &box1, const RealBox &box2, Real eps=0.0) noexcept
 Check for equality of real boxes within a certain tolerance. More...
 
std::ostream & operator<< (std::ostream &ostr, const RealVect &p)
 
std::istream & operator>> (std::istream &is, RealVect &iv)
 
AMREX_GPU_HOST_DEVICE RealVect scale (const RealVect &p, Real s) noexcept
 
AMREX_GPU_HOST_DEVICE RealVect min (const RealVect &p1, const RealVect &p2) noexcept
 
AMREX_GPU_HOST_DEVICE RealVect max (const RealVect &p1, const RealVect &p2) noexcept
 
AMREX_GPU_HOST_DEVICE RealVect BASISREALV (int dir) noexcept
 
AMREX_GPU_HOST_DEVICE RealVect operator/ (Real s, const RealVect &p) noexcept
 
AMREX_GPU_HOST_DEVICE RealVect operator+ (Real s, const RealVect &p) noexcept
 
AMREX_GPU_HOST_DEVICE RealVect operator- (Real s, const RealVect &p) noexcept
 
AMREX_GPU_HOST_DEVICE RealVect operator* (Real s, const RealVect &p) noexcept
 
AMREX_GPU_HOST_DEVICE RealVect operator/ (const RealVect &s, const RealVect &p) noexcept
 
AMREX_GPU_HOST_DEVICE RealVect operator+ (const RealVect &s, const RealVect &p) noexcept
 
AMREX_GPU_HOST_DEVICE RealVect operator- (const RealVect &s, const RealVect &p) noexcept
 
AMREX_GPU_HOST_DEVICE RealVect operator* (const RealVect &s, const RealVect &p) noexcept
 
template<typename... Ts, typename... Ps>
constexpr AMREX_GPU_HOST_DEVICE GpuTuple< Ts... > IdentityTuple (GpuTuple< Ts... >, ReduceOps< Ps... >) noexcept
 Return a GpuTuple containing the identity element for each operation in ReduceOps. For example 0, +inf and -inf for ReduceOpSum, ReduceOpMin and ReduceOpMax respectively. More...
 
template<typename... Ts, typename... Ps>
constexpr AMREX_GPU_HOST_DEVICE GpuTuple< Ts... > IdentityTuple (GpuTuple< Ts... >, TypeList< Ps... >) noexcept
 Return a GpuTuple containing the identity element for each ReduceOp in TypeList. For example 0, +inf and -inf for ReduceOpSum, ReduceOpMin and ReduceOpMax respectively. More...
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE Real amrex_calc_xslope (int i, int j, int k, int n, int order, amrex::Array4< Real const > const &q) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE Real amrex_calc_xslope_extdir (int i, int j, int k, int n, int order, amrex::Array4< Real const > const &q, bool edlo, bool edhi, int domlo, int domhi) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE Real amrex_calc_yslope (int i, int j, int k, int n, int order, amrex::Array4< Real const > const &q) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE Real amrex_calc_yslope_extdir (int i, int j, int k, int n, int order, amrex::Array4< Real const > const &q, bool edlo, bool edhi, int domlo, int domhi) noexcept
 
template<class U , int N1, int N2, int N3, Order Ord, int SI>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE SmallMatrix< U, N1, N3, Ord, SI > operator* (SmallMatrix< U, N1, N2, Ord, SI > const &lhs, SmallMatrix< U, N2, N3, Ord, SI > const &rhs)
 
template<class T , int NRows, int NCols, Order ORDER, int SI>
std::ostream & operator<< (std::ostream &os, SmallMatrix< T, NRows, NCols, ORDER, SI > const &mat)
 
std::string toLower (std::string s)
 Converts all characters of the string into lower case based on std::locale. More...
 
std::string toUpper (std::string s)
 Converts all characters of the string into uppercase based on std::locale. More...
 
std::string trim (std::string s, std::string const &space)
 
std::string Concatenate (const std::string &root, int num, int mindigits=5)
 Returns rootNNNN where NNNN == num. More...
 
std::vector< std::string > split (std::string const &s, std::string const &sep=" \t")
 Split a string using given tokens in sep. More...
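
A few illustrative calls:

    std::string lo  = amrex::toLower("AMReX");        // "amrex"
    std::string plt = amrex::Concatenate("plt", 42);  // "plt00042" (mindigits = 5)
    std::vector<std::string> parts = amrex::split("a b\tc"); // {"a", "b", "c"}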
 
template<class TagType , class F >
std::enable_if_t< std::is_same< std::decay_t< decltype(std::declval< TagType >().box())>, Box >::value > ParallelFor (Vector< TagType > const &tags, int ncomp, F &&f)
 
template<class TagType , class F >
std::enable_if_t< std::is_same< std::decay_t< decltype(std::declval< TagType >().box())>, Box >::value > ParallelFor (Vector< TagType > const &tags, F &&f)
 
template<class TagType , class F >
std::enable_if_t< std::is_integral< std::decay_t< decltype(std::declval< TagType >().size())> >::value > ParallelFor (Vector< TagType > const &tags, F &&f)
 
template<std::size_t I, typename... Ts>
constexpr AMREX_GPU_HOST_DEVICE GpuTupleElement< I, GpuTuple< Ts... > >::type & get (GpuTuple< Ts... > &tup) noexcept
 
template<std::size_t I, typename... Ts>
constexpr AMREX_GPU_HOST_DEVICE GpuTupleElement< I, GpuTuple< Ts... > >::type const & get (GpuTuple< Ts... > const &tup) noexcept
 
template<std::size_t I, typename... Ts>
constexpr AMREX_GPU_HOST_DEVICE GpuTupleElement< I, GpuTuple< Ts... > >::type && get (GpuTuple< Ts... > &&tup) noexcept
 
template<typename... Ts>
constexpr AMREX_GPU_HOST_DEVICE GpuTuple< detail::tuple_decay_t< Ts >... > makeTuple (Ts &&... args)
 
template<typename TP >
constexpr AMREX_GPU_HOST_DEVICE auto TupleCat (TP &&a) -> typename detail::tuple_cat_result< detail::tuple_decay_t< TP > >::type
 
template<typename TP1 , typename TP2 >
constexpr AMREX_GPU_HOST_DEVICE auto TupleCat (TP1 &&a, TP2 &&b) -> typename detail::tuple_cat_result< detail::tuple_decay_t< TP1 >, detail::tuple_decay_t< TP2 > >::type
 
template<typename TP1 , typename TP2 , typename... TPs>
constexpr AMREX_GPU_HOST_DEVICE auto TupleCat (TP1 &&a, TP2 &&b, TPs &&... args) -> typename detail::tuple_cat_result< detail::tuple_decay_t< TP1 >, detail::tuple_decay_t< TP2 >, detail::tuple_decay_t< TPs >... >::type
 
template<std::size_t... Is, typename... Args>
constexpr AMREX_GPU_HOST_DEVICE auto TupleSplit (const GpuTuple< Args... > &tup) noexcept
 Returns a GpuTuple of GpuTuples obtained by splitting the input GpuTuple according to the sizes specified by the template arguments. More...
 
template<typename F , typename TP >
constexpr AMREX_GPU_HOST_DEVICE auto Apply (F &&f, TP &&t) -> typename detail::apply_result< F, detail::tuple_decay_t< TP > >::type
 
template<typename... Args>
constexpr AMREX_GPU_HOST_DEVICE GpuTuple< Args &... > Tie (Args &... args) noexcept
 
template<typename... Ts>
constexpr AMREX_GPU_HOST_DEVICE GpuTuple< Ts &&... > ForwardAsTuple (Ts &&... args) noexcept
 
template<typename... Ts>
constexpr AMREX_GPU_HOST_DEVICE GpuTuple< Ts... > MakeZeroTuple (GpuTuple< Ts... >) noexcept
 Return a GpuTuple containing all zeros. Note that a default-constructed GpuTuple can have uninitialized values. More...
 
template<typename T >
constexpr AMREX_GPU_HOST_DEVICE auto tupleToArray (GpuTuple< T > const &tup)
 
template<typename T , typename T2 , typename... Ts, std::enable_if_t< Same< T, T2, Ts... >::value, int > = 0>
constexpr AMREX_GPU_HOST_DEVICE auto tupleToArray (GpuTuple< T, T2, Ts... > const &tup)
 Convert GpuTuple<T,T2,Ts...> to GpuArray. More...
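
A sketch of the tuple helpers:

    auto t   = amrex::makeTuple(1, 2.5);
    auto x   = amrex::get<0>(t);                            // 1
    auto cat = amrex::TupleCat(t, amrex::makeTuple(3.0f));  // GpuTuple<int,double,float>
    auto sum = amrex::Apply([] (int a, double b) { return a + b; }, t); // 3.5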
 
template<typename... Ts, typename F >
constexpr void ForEach (TypeList< Ts... >, F &&f)
 For each type t in TypeList, call f(t). More...
 
template<typename... Ts, typename F >
constexpr bool ForEachUntil (TypeList< Ts... >, F &&f)
 For each type t in TypeList, call f(t) until true is returned. More...
 
template<typename... As, typename... Bs>
constexpr auto operator+ (TypeList< As... >, TypeList< Bs... >)
 Concatenate two TypeLists. More...
 
template<typename... Ls, typename A >
constexpr auto single_product (TypeList< Ls... >, A)
 
template<typename LLs , typename... As>
constexpr auto operator* (LLs, TypeList< As... >)
 
template<typename... Ls>
constexpr auto CartesianProduct (Ls...)
 Cartesian Product of TypeLists. More...
 
bool is_integer (const char *str)
 Return true if the string str represents an integer. More...
 
template<typename T >
bool is_it (std::string const &s, T &v)
 Return true and store value in v if string s is type T. More...
 
const std::vector< std::string > & Tokenize (const std::string &instr, const std::string &separators)
 Splits "instr" into separate pieces based on "separators". More...
 
bool UtilCreateDirectory (const std::string &path, mode_t mode, bool verbose=false)
 Creates the specified directories. path may be either a full pathname or a relative pathname. It will create all the directories in the pathname, if they don't already exist, so that on successful return the pathname refers to an existing directory. Returns true if successful, and also if path is NULL or "/". mode is the mode passed to mkdir() for any directories that must be created (for example: 0755). If verbose is true, the directory creation steps are printed. More...
 
void CreateDirectoryFailed (const std::string &dir)
 Output a message and abort when couldn't create the directory. More...
 
void FileOpenFailed (const std::string &file)
 Output a message and abort when couldn't open the file. More...
 
bool FileExists (const std::string &filename)
 Check if a file already exists. Return true if the filename is an existing file, directory, or link. For links, this operates on the link and not what the link points to. More...
 
std::string UniqueString ()
 Create a (probably) unique string. More...
 
void UtilCreateCleanDirectory (const std::string &path, bool callbarrier=true)
 Create a new directory, renaming the old one if it exists. More...
 
void UtilCreateDirectoryDestructive (const std::string &path, bool callbarrier=true)
 
void UtilRenameDirectoryToOld (const std::string &path, bool callbarrier=true)
 Rename a current directory if it exists. More...
 
void OutOfMemory ()
 Aborts after printing message indicating out-of-memory; i.e. operator new has failed. This is the "supported" set_new_handler() function for AMReX applications. More...
 
double InvNormDist (double p)
 This function returns an approximation of the inverse cumulative standard normal distribution function. I.e., given P, it returns an approximation to the X satisfying P = Pr{Z <= X} where Z is a random variable from the standard normal distribution. More...
 
double InvNormDistBest (double p)
 This function returns an approximation of the inverse cumulative standard normal distribution function. I.e., given P, it returns an approximation to the X satisfying P = Pr{Z <= X} where Z is a random variable from the standard normal distribution. More...
 
int CRRBetweenLevels (int fromlevel, int tolevel, const Vector< int > &refratios)
 
std::istream & operator>> (std::istream &, const expect &exp)
 
Vector< char > SerializeStringArray (const Vector< std::string > &stringArray)
 
Vector< std::string > UnSerializeStringArray (const Vector< char > &charArray)
 
void SyncStrings (const Vector< std::string > &localStrings, Vector< std::string > &syncedStrings, bool &alreadySynced)
 
template<typename T >
Long bytesOf (const std::vector< T > &v)
 
template<typename Key , typename T , class Compare >
Long bytesOf (const std::map< Key, T, Compare > &m)
 
void BroadcastBool (bool &bBool, int myLocalId, int rootId, const MPI_Comm &localComm)
 
void BroadcastString (std::string &bStr, int myLocalId, int rootId, const MPI_Comm &localComm)
 
void BroadcastStringArray (Vector< std::string > &bSA, int myLocalId, int rootId, const MPI_Comm &localComm)
 
template<class T >
void BroadcastArray (Vector< T > &aT, int myLocalId, int rootId, const MPI_Comm &localComm)
 
void Sleep (double sleepsec)
 
double second () noexcept
 
template<typename T >
void hash_combine (uint64_t &seed, const T &val) noexcept
 
template<typename T >
uint64_t hash_vector (const Vector< T > &vec, uint64_t seed=0xDEADBEEFDEADBEEF) noexcept
 
template<class T , typename = typename T::FABType>
Vector< T * > GetVecOfPtrs (Vector< T > &a)
 
template<class T >
Vector< T * > GetVecOfPtrs (const Vector< std::unique_ptr< T > > &a)
 
template<class T , typename = typename T::FABType>
Vector< const T * > GetVecOfConstPtrs (const Vector< T > &a)
 
template<class T >
Vector< const T * > GetVecOfConstPtrs (const Vector< std::unique_ptr< T > > &a)
 
template<class T , typename = typename T::FABType>
Vector< const T * > GetVecOfConstPtrs (const Vector< T * > &a)
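
A sketch (mfs is a placeholder Vector of MultiFabs owned by value):

    amrex::Vector<amrex::MultiFab> mfs; // built elsewhere, one per level
    amrex::Vector<amrex::MultiFab*>       p  = amrex::GetVecOfPtrs(mfs);
    amrex::Vector<const amrex::MultiFab*> cp = amrex::GetVecOfConstPtrs(mfs);
    // Convenient for interfaces such as WriteMultiLevelPlotfile that take
    // Vector<const MultiFab*>.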
 
template<class T >
Vector< Vector< T * > > GetVecOfVecOfPtrs (const Vector< Vector< std::unique_ptr< T > > > &a)
 
template<class T >
Vector< std::array< T *, AMREX_SPACEDIM > > GetVecOfArrOfPtrs (const Vector< std::array< std::unique_ptr< T >, AMREX_SPACEDIM > > &a)
 
template<class T >
Vector< std::array< T const *, AMREX_SPACEDIM > > GetVecOfArrOfPtrsConst (const Vector< std::array< std::unique_ptr< T >, AMREX_SPACEDIM > > &a)
 
template<class T >
Vector< std::array< T const *, AMREX_SPACEDIM > > GetVecOfArrOfConstPtrs (const Vector< std::array< std::unique_ptr< T >, AMREX_SPACEDIM > > &a)
 
template<class T , std::enable_if_t< IsFabArray< T >::value||IsBaseFab< T >::value, int > = 0>
Vector< std::array< T const *, AMREX_SPACEDIM > > GetVecOfArrOfConstPtrs (const Vector< std::array< T, AMREX_SPACEDIM > > &a)
 
template<class T , std::enable_if_t< IsFabArray< T >::value||IsBaseFab< T >::value, int > = 0>
Vector< std::array< T *, AMREX_SPACEDIM > > GetVecOfArrOfPtrs (Vector< std::array< T, AMREX_SPACEDIM > > &a)
 
template<class T >
void FillNull (Vector< T * > &a)
 
template<class T >
void FillNull (Vector< std::unique_ptr< T > > &a)
 
template<class T >
void RemoveDuplicates (Vector< T > &vec)
 
template<class T , class H >
void RemoveDuplicates (Vector< T > &vec)
 
void writeIntData (const int *data, std::size_t size, std::ostream &os, const IntDescriptor &id=FPC::NativeIntDescriptor())
 Functions for writing integer data to disk in a portable, self-describing manner. More...
 
void readIntData (int *data, std::size_t size, std::istream &is, const IntDescriptor &id)
 
void writeLongData (const Long *data, std::size_t size, std::ostream &os, const IntDescriptor &id=FPC::NativeLongDescriptor())
 
void readLongData (Long *data, std::size_t size, std::istream &is, const IntDescriptor &id)
 
void writeRealData (const Real *data, std::size_t size, std::ostream &os, const RealDescriptor &rd=FPC::NativeRealDescriptor())
 
void readRealData (Real *data, std::size_t size, std::istream &is, const RealDescriptor &rd)
 
void writeFloatData (const float *data, std::size_t size, std::ostream &os, const RealDescriptor &rd=FPC::Native32RealDescriptor())
 
void readFloatData (float *data, std::size_t size, std::istream &is, const RealDescriptor &rd)
 
void writeDoubleData (const double *data, std::size_t size, std::ostream &os, const RealDescriptor &rd=FPC::Native64RealDescriptor())
 
void readDoubleData (double *data, std::size_t size, std::istream &is, const RealDescriptor &rd)
 
void writeData (int const *data, std::size_t size, std::ostream &os)
 
void writeData (Long const *data, std::size_t size, std::ostream &os)
 
void writeData (float const *data, std::size_t size, std::ostream &os)
 
void writeData (double const *data, std::size_t size, std::ostream &os)
 
void readData (int *data, std::size_t size, std::istream &is)
 
void readData (Long *data, std::size_t size, std::istream &is)
 
void readData (float *data, std::size_t size, std::istream &is)
 
void readData (double *data, std::size_t size, std::istream &is)
 
std::ostream & operator<< (std::ostream &os, const VisMF::FabOnDisk &fod)
 Write a FabOnDisk to an ostream in ASCII. More...
 
std::istream & operator>> (std::istream &is, VisMF::FabOnDisk &fod)
 Read a FabOnDisk from an istream. More...
 
std::ostream & operator<< (std::ostream &os, const Vector< VisMF::FabOnDisk > &fa)
 Write a Vector<FabOnDisk> to an ostream in ASCII. More...
 
std::istream & operator>> (std::istream &is, Vector< VisMF::FabOnDisk > &fa)
 Read a Vector<FabOnDisk> from an istream. More...
 
std::ostream & operator<< (std::ostream &os, const VisMF::Header &hd)
 Write a VisMF::Header to an ostream in ASCII. More...
 
std::istream & operator>> (std::istream &is, VisMF::Header &hd)
 Read a VisMF::Header from an istream. More...
 
template<typename FAB >
std::enable_if_t< std::is_same_v< FAB, IArrayBox > > Write (const FabArray< FAB > &fa, const std::string &name)
 Write iMultiFab/FabArray<IArrayBox>. More...
 
template<typename FAB >
std::enable_if_t< std::is_same_v< FAB, IArrayBox > > Read (FabArray< FAB > &fa, const std::string &name)
 Read iMultiFab/FabArray<IArrayBox>. More...
 
void iparser_compile_exe_size (struct iparser_node *node, char *&p, std::size_t &exe_size, int &max_stack_size, int &stack_size, Vector< char * > &local_variables)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE long long iparser_exe_eval (const char *p, long long const *x)
 
std::size_t iparser_exe_size (struct amrex_iparser *parser, int &max_stack_size, int &stack_size)
 
void iparser_compile (struct amrex_iparser *parser, char *p)
 
void iparser_defexpr (struct iparser_node *body)
 
struct iparser_symbol * iparser_makesymbol (char *name)
 
struct iparser_node * iparser_newnode (enum iparser_node_t type, struct iparser_node *l, struct iparser_node *r)
 
struct iparser_node * iparser_newnumber (long long d)
 
struct iparser_node * iparser_newsymbol (struct iparser_symbol *symbol)
 
struct iparser_node * iparser_newf1 (enum iparser_f1_t ftype, struct iparser_node *l)
 
struct iparser_node * iparser_newf2 (enum iparser_f2_t ftype, struct iparser_node *l, struct iparser_node *r)
 
struct iparser_node * iparser_newf3 (enum iparser_f3_t ftype, struct iparser_node *n1, struct iparser_node *n2, struct iparser_node *n3)
 
struct iparser_node * iparser_newassign (struct iparser_symbol *sym, struct iparser_node *v)
 
struct iparser_node * iparser_newlist (struct iparser_node *nl, struct iparser_node *nr)
 
struct amrex_iparser * amrex_iparser_new ()
 
void amrex_iparser_delete (struct amrex_iparser *iparser)
 
struct amrex_iparser * iparser_dup (struct amrex_iparser *source)
 
std::size_t iparser_ast_size (struct iparser_node *node)
 
struct iparser_node * iparser_ast_dup (struct amrex_iparser *my_iparser, struct iparser_node *node, int move)
 
void iparser_ast_optimize (struct iparser_node *node)
 
void iparser_ast_print (struct iparser_node *node, std::string const &space, AllPrint &printer)
 
int iparser_ast_depth (struct iparser_node *node)
 
void iparser_ast_regvar (struct iparser_node *node, char const *name, int i)
 
void iparser_ast_setconst (struct iparser_node *node, char const *name, long long c)
 
void iparser_ast_get_symbols (struct iparser_node *node, std::set< std::string > &symbols, std::set< std::string > &local_symbols)
 
void iparser_regvar (struct amrex_iparser *iparser, char const *name, int i)
 
void iparser_setconst (struct amrex_iparser *iparser, char const *name, long long c)
 
void iparser_print (struct amrex_iparser *iparser)
 
std::set< std::string > iparser_get_symbols (struct amrex_iparser *iparser)
 
int iparser_depth (struct amrex_iparser *iparser)
 
long long iparser_atoll (const char *str)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE long long iparser_call_f1 (enum iparser_f1_t, long long a)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE long long iparser_call_f2 (enum iparser_f2_t type, long long a, long long b)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE long long iparser_call_f3 (enum iparser_f3_t, long long a, long long b, long long c)
 
void parser_compile_exe_size (struct parser_node *node, char *&p, std::size_t &exe_size, int &max_stack_size, int &stack_size, Vector< char const * > &local_variables)
 
void parser_exe_print (char const *p, Vector< std::string > const &vars, Vector< char const * > const &locals)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE double parser_exe_eval (const char *p, double const *x)
 
std::size_t parser_exe_size (struct amrex_parser *parser, int &max_stack_size, int &stack_size)
 
Vector< char const * > parser_compile (struct amrex_parser *parser, char *p)
 
void parser_defexpr (struct parser_node *body)
 
struct parser_symbol * parser_makesymbol (char *name)
 
struct parser_node * parser_newnode (enum parser_node_t type, struct parser_node *l, struct parser_node *r)
 
struct parser_node * parser_newneg (struct parser_node *n)
 
struct parser_node * parser_newnumber (double d)
 
struct parser_node * parser_newsymbol (struct parser_symbol *symbol)
 
struct parser_node * parser_newf1 (enum parser_f1_t ftype, struct parser_node *l)
 
struct parser_node * parser_newf2 (enum parser_f2_t ftype, struct parser_node *l, struct parser_node *r)
 
struct parser_node * parser_newf3 (enum parser_f3_t ftype, struct parser_node *n1, struct parser_node *n2, struct parser_node *n3)
 
struct parser_node * parser_newassign (struct parser_symbol *sym, struct parser_node *v)
 
struct parser_node * parser_newlist (struct parser_node *nl, struct parser_node *nr)
 
struct amrex_parser * amrex_parser_new ()
 
void amrex_parser_delete (struct amrex_parser *parser)
 
struct amrex_parser * parser_dup (struct amrex_parser *source)
 
std::size_t parser_ast_size (struct parser_node *node)
 
struct parser_node * parser_ast_dup (struct amrex_parser *my_parser, struct parser_node *node, int move)
 
bool parser_node_equal (struct parser_node *a, struct parser_node *b)
 
void parser_ast_optimize (struct parser_node *node)
 
void parser_ast_print (struct parser_node *node, std::string const &space, std::ostream &printer)
 
int parser_ast_depth (struct parser_node *node)
 
void parser_ast_sort (struct parser_node *node)
 
void parser_ast_regvar (struct parser_node *node, char const *name, int i)
 
void parser_ast_setconst (struct parser_node *node, char const *name, double c)
 
void parser_ast_get_symbols (struct parser_node *node, std::set< std::string > &symbols, std::set< std::string > &local_symbols)
 
void parser_regvar (struct amrex_parser *parser, char const *name, int i)
 
void parser_setconst (struct amrex_parser *parser, char const *name, double c)
 
void parser_print (struct amrex_parser *parser)
 
std::set< std::string > parser_get_symbols (struct amrex_parser *parser)
 
int parser_depth (struct amrex_parser *parser)
 
double parser_get_number (struct parser_node *node)
 
void parser_set_number (struct parser_node *node, double v)
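
The entries above are the low-level C-style machinery behind the expression parser; application code normally goes through the amrex::Parser class instead. A sketch of that higher-level use (the expression and names are illustrative):

    amrex::Parser parser("a*sin(x) + b");
    parser.setConstant("a", 2.0);
    parser.setConstant("b", 1.0);
    parser.registerVariables({"x"});
    auto f = parser.compile<1>(); // executor callable on host and device
    double y = f(0.5);            // 2*sin(0.5) + 1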
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_exp (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_log (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_log10 (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_sin (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_cos (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_tan (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_asin (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_acos (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_atan (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_sinh (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_cosh (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_tanh (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_asinh (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_acosh (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_atanh (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE T parser_math_comp_ellint_1 (T k)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE T parser_math_comp_ellint_2 (T k)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_erf (T a)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_pow (T a, T b)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_atan2 (T a, T b)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_jn (int a, T b)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T parser_math_yn (int a, T b)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE double parser_call_f1 (enum parser_f1_t type, double a)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE double parser_call_f2 (enum parser_f2_t type, double a, double b)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE double parser_call_f3 (enum parser_f3_t, double a, double b, double c)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void interpbndrydata_o1 (int i, int, int, int n, Array4< T > const &bdry, int nb, Array4< T const > const &crse, int nc, Dim3 const &r) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void interpbndrydata_x_o3 (int i, int, int, int n, Array4< T > const &bdry, int nb, Array4< T const > const &crse, int nc, Dim3 const &r, Array4< int const > const &, int, int) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void interpbndrydata_y_o3 (int i, int j, int, int n, Array4< T > const &bdry, int nb, Array4< T const > const &crse, int nc, Dim3 const &r, Array4< int const > const &mask, int not_covered, int max_width) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void interpbndrydata_z_o3 (int i, int j, int k, int n, Array4< T > const &bdry, int nb, Array4< T const > const &crse, int nc, Dim3 const &r, Array4< int const > const &mask, int not_covered, int) noexcept
 
std::ostream & operator<< (std::ostream &os, const LinOpBCType &t)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void poly_interp_coeff (T xInt, T const *AMREX_RESTRICT x, int N, T *AMREX_RESTRICT c) noexcept
 
template<int N, typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void poly_interp_coeff (T xInt, T const *AMREX_RESTRICT x, T *AMREX_RESTRICT c) noexcept
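
A sketch, assuming poly_interp_coeff fills c with the Lagrange-basis weights at xInt, so that sum_i c[i]*f(x[i]) is the value of the interpolating polynomial there:

    double x[3] = {0.0, 1.0, 2.0};
    double c[3];
    amrex::poly_interp_coeff(0.5, x, 3, c);
    // f(0.5) ~= c[0]*f(0.0) + c[1]*f(1.0) + c[2]*f(2.0)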
 
std::ostream & operator<< (std::ostream &os, const Mask &m)
 
std::istream & operator>> (std::istream &is, Mask &m)
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void yafluxreg_crseadd (Box const &bx, Array4< T > const &d, Array4< int const > const &flag, Array4< T const > const &fx, T dtdx, int nc) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void yafluxreg_fineadd (Box const &bx, Array4< T > const &d, Array4< T const > const &f, T dtdx, int nc, int dirside, Dim3 const &rr) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void yafluxreg_crseadd (Box const &bx, Array4< T > const &d, Array4< int const > const &flag, Array4< T const > const &fx, Array4< T const > const &fy, T dtdx, T dtdy, int nc) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void yafluxreg_crseadd (Box const &bx, Array4< T > const &d, Array4< int const > const &flag, Array4< T const > const &fx, Array4< T const > const &fy, Array4< T const > const &fz, T dtdx, T dtdy, T dtdz, int nc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void decomp_chol_np6 (Array2D< Real, 0, 5, 0, 5 > &aa)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void cholsol_np6 (Array2D< Real, 0, 11, 0, 5 > &Amatrix, Array1D< Real, 0, 5 > &b)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void cholsol_for_eb (Array2D< Real, 0, 17, 0, 5 > &Amatrix, Array1D< Real, 0, 5 > &b)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real grad_x_of_phi_on_centroids (int i, int j, int k, int n, Array4< Real const > const &phi, Array4< Real const > const &phieb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &ccent, Array4< Real const > const &bcent, Real &yloc_on_xface, bool is_eb_dirichlet, bool is_eb_inhomog)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real grad_y_of_phi_on_centroids (int i, int j, int k, int n, Array4< Real const > const &phi, Array4< Real const > const &phieb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &ccent, Array4< Real const > const &bcent, Real &xloc_on_yface, bool is_eb_dirichlet, bool is_eb_inhomog)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real grad_eb_of_phi_on_centroids (int i, int j, int k, int n, Array4< Real const > const &phi, Array4< Real const > const &phieb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &ccent, Array4< Real const > const &bcent, Real &nrmx, Real &nrmy, bool is_eb_inhomog)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real grad_x_of_phi_on_centroids_extdir (int i, int j, int k, int n, Array4< Real const > const &phi, Array4< Real const > const &phieb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &ccent, Array4< Real const > const &bcent, Array4< Real const > const &vfrac, Real &yloc_on_xface, bool is_eb_dirichlet, bool is_eb_inhomog, const bool on_x_face, const int domlo_x, const int domhi_x, const bool on_y_face, const int domlo_y, const int domhi_y)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real grad_y_of_phi_on_centroids_extdir (int i, int j, int k, int n, Array4< Real const > const &phi, Array4< Real const > const &phieb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &ccent, Array4< Real const > const &bcent, Array4< Real const > const &vfrac, Real &xloc_on_yface, bool is_eb_dirichlet, bool is_eb_inhomog, const bool on_x_face, const int domlo_x, const int domhi_x, const bool on_y_face, const int domlo_y, const int domhi_y)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real grad_eb_of_phi_on_centroids_extdir (int i, int j, int k, int n, Array4< Real const > const &phi, Array4< Real const > const &phieb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &ccent, Array4< Real const > const &bcent, Array4< Real const > const &vfrac, Real &nrmx, Real &nrmy, bool is_eb_inhomog, const bool on_x_face, const int domlo_x, const int domhi_x, const bool on_y_face, const int domlo_y, const int domhi_y)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void decomp_chol_np10 (Array2D< Real, 0, 9, 0, 9 > &aa)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void cholsol_np10 (Array2D< Real, 0, 35, 0, 9 > &Amatrix, Array1D< Real, 0, 9 > &b)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void cholsol_for_eb (Array2D< Real, 0, 53, 0, 9 > &Amatrix, Array1D< Real, 0, 9 > &b)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real grad_x_of_phi_on_centroids (int i, int j, int k, int n, Array4< Real const > const &phi, Array4< Real const > const &phieb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &ccent, Array4< Real const > const &bcent, Real &yloc_on_xface, Real &zloc_on_xface, bool is_eb_dirichlet, bool is_eb_inhomog)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real grad_y_of_phi_on_centroids (int i, int j, int k, int n, Array4< Real const > const &phi, Array4< Real const > const &phieb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &ccent, Array4< Real const > const &bcent, Real &xloc_on_yface, Real &zloc_on_yface, bool is_eb_dirichlet, bool is_eb_inhomog)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real grad_z_of_phi_on_centroids (int i, int j, int k, int n, Array4< Real const > const &phi, Array4< Real const > const &phieb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &ccent, Array4< Real const > const &bcent, Real &xloc_on_zface, Real &yloc_on_zface, bool is_eb_dirichlet, bool is_eb_inhomog)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real grad_eb_of_phi_on_centroids (int i, int j, int k, int n, Array4< Real const > const &phi, Array4< Real const > const &phieb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &ccent, Array4< Real const > const &bcent, Real &nrmx, Real &nrmy, Real &nrmz, bool is_eb_inhomog)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real grad_x_of_phi_on_centroids_extdir (int i, int j, int k, int n, Array4< Real const > const &phi, Array4< Real const > const &phieb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &ccent, Array4< Real const > const &bcent, Array4< Real const > const &vfrac, Real &yloc_on_xface, Real &zloc_on_xface, bool is_eb_dirichlet, bool is_eb_inhomog, const bool on_x_face, const int domlo_x, const int domhi_x, const bool on_y_face, const int domlo_y, const int domhi_y, const bool on_z_face, const int domlo_z, const int domhi_z)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real grad_y_of_phi_on_centroids_extdir (int i, int j, int k, int n, Array4< Real const > const &phi, Array4< Real const > const &phieb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &ccent, Array4< Real const > const &bcent, Array4< Real const > const &vfrac, Real &xloc_on_yface, Real &zloc_on_yface, bool is_eb_dirichlet, bool is_eb_inhomog, const bool on_x_face, const int domlo_x, const int domhi_x, const bool on_y_face, const int domlo_y, const int domhi_y, const bool on_z_face, const int domlo_z, const int domhi_z)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real grad_z_of_phi_on_centroids_extdir (int i, int j, int k, int n, Array4< Real const > const &phi, Array4< Real const > const &phieb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &ccent, Array4< Real const > const &bcent, Array4< Real const > const &vfrac, Real &xloc_on_zface, Real &yloc_on_zface, bool is_eb_dirichlet, bool is_eb_inhomog, const bool on_x_face, const int domlo_x, const int domhi_x, const bool on_y_face, const int domlo_y, const int domhi_y, const bool on_z_face, const int domlo_z, const int domhi_z)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real grad_eb_of_phi_on_centroids_extdir (int i, int j, int k, int n, Array4< Real const > const &phi, Array4< Real const > const &phieb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &ccent, Array4< Real const > const &bcent, Array4< Real const > const &vfrac, Real &nrmx, Real &nrmy, Real &nrmz, bool is_eb_inhomog, const bool on_x_face, const int domlo_x, const int domhi_x, const bool on_y_face, const int domlo_y, const int domhi_y, const bool on_z_face, const int domlo_z, const int domhi_z)
 
void single_level_redistribute (amrex::MultiFab &div_tmp_in, amrex::MultiFab &div_out, int div_comp, int ncomp, const amrex::Geometry &geom)
 
void single_level_weighted_redistribute (amrex::MultiFab &div_tmp_in, amrex::MultiFab &div_out, const amrex::MultiFab &weights, int div_comp, int ncomp, const amrex::Geometry &geom, bool use_wts_in_divnc)
 
void apply_flux_redistribution (const amrex::Box &bx, amrex::Array4< amrex::Real > const &div, amrex::Array4< amrex::Real const > const &divc, amrex::Array4< amrex::Real const > const &wt, int icomp, int ncomp, amrex::Array4< amrex::EBCellFlag const > const &flag_arr, amrex::Array4< amrex::Real const > const &vfrac, const amrex::Geometry &geom, bool use_wts_in_divnc)
 
void amrex_flux_redistribute (const amrex::Box &bx, amrex::Array4< amrex::Real > const &dqdt, amrex::Array4< amrex::Real const > const &divc, amrex::Array4< amrex::Real const > const &wt, amrex::Array4< amrex::Real const > const &vfrac, amrex::Array4< amrex::EBCellFlag const > const &flag, int as_crse, amrex::Array4< amrex::Real > const &rr_drho_crse, amrex::Array4< int const > const &rr_flag_crse, int as_fine, amrex::Array4< amrex::Real > const &dm_as_fine, amrex::Array4< int const > const &levmsk, const amrex::Geometry &geom, bool use_wts_in_divnc, int level_mask_not_covered, int icomp, int ncomp, amrex::Real dt)
 
void ApplyRedistribution (amrex::Box const &bx, int ncomp, amrex::Array4< amrex::Real > const &dUdt_out, amrex::Array4< amrex::Real > const &dUdt_in, amrex::Array4< amrex::Real const > const &U_in, amrex::Array4< amrex::Real > const &scratch, amrex::Array4< amrex::EBCellFlag const > const &flag, AMREX_D_DECL(amrex::Array4< amrex::Real const > const &apx, amrex::Array4< amrex::Real const > const &apy, amrex::Array4< amrex::Real const > const &apz), amrex::Array4< amrex::Real const > const &vfrac, AMREX_D_DECL(amrex::Array4< amrex::Real const > const &fcx, amrex::Array4< amrex::Real const > const &fcy, amrex::Array4< amrex::Real const > const &fcz), amrex::Array4< amrex::Real const > const &ccc, amrex::BCRec const *d_bcrec_ptr, amrex::Geometry const &lev_geom, amrex::Real dt, std::string const &redistribution_type, bool use_wts_in_divnc=false, int srd_max_order=2, amrex::Real target_volfrac=0.5_rt, amrex::Array4< amrex::Real const > const &update_scale={})
 
void ApplyMLRedistribution (amrex::Box const &bx, int ncomp, amrex::Array4< amrex::Real > const &dUdt_out, amrex::Array4< amrex::Real > const &dUdt_in, amrex::Array4< amrex::Real const > const &U_in, amrex::Array4< amrex::Real > const &scratch, amrex::Array4< amrex::EBCellFlag const > const &flag, AMREX_D_DECL(amrex::Array4< amrex::Real const > const &apx, amrex::Array4< amrex::Real const > const &apy, amrex::Array4< amrex::Real const > const &apz), amrex::Array4< amrex::Real const > const &vfrac, AMREX_D_DECL(amrex::Array4< amrex::Real const > const &fcx, amrex::Array4< amrex::Real const > const &fcy, amrex::Array4< amrex::Real const > const &fcz), amrex::Array4< amrex::Real const > const &ccc, amrex::BCRec const *d_bcrec_ptr, amrex::Geometry const &lev_geom, amrex::Real dt, std::string const &redistribution_type, int as_crse, amrex::Array4< amrex::Real > const &rr_drho_crse, amrex::Array4< int const > const &rr_flag_crse, int as_fine, amrex::Array4< amrex::Real > const &dm_as_fine, amrex::Array4< int const > const &levmsk, int level_mask_not_covered, amrex::Real fac_for_deltaR=1.0_rt, bool use_wts_in_divnc=false, int icomp=0, int srd_max_order=2, amrex::Real target_volfrac=0.5_rt, amrex::Array4< amrex::Real const > const &update_scale={})
 
void ApplyInitialRedistribution (amrex::Box const &bx, int ncomp, amrex::Array4< amrex::Real > const &U_out, amrex::Array4< amrex::Real > const &U_in, amrex::Array4< amrex::EBCellFlag const > const &flag, AMREX_D_DECL(amrex::Array4< amrex::Real const > const &apx, amrex::Array4< amrex::Real const > const &apy, amrex::Array4< amrex::Real const > const &apz), amrex::Array4< amrex::Real const > const &vfrac, AMREX_D_DECL(amrex::Array4< amrex::Real const > const &fcx, amrex::Array4< amrex::Real const > const &fcy, amrex::Array4< amrex::Real const > const &fcz), amrex::Array4< amrex::Real const > const &ccc, amrex::BCRec const *d_bcrec_ptr, amrex::Geometry const &geom, std::string const &redistribution_type, int srd_max_order=2, amrex::Real target_volfrac=0.5_rt)
 
void StateRedistribute (amrex::Box const &bx, int ncomp, amrex::Array4< amrex::Real > const &U_out, amrex::Array4< amrex::Real > const &U_in, amrex::Array4< amrex::EBCellFlag const > const &flag, amrex::Array4< amrex::Real const > const &vfrac, AMREX_D_DECL(amrex::Array4< amrex::Real const > const &fcx, amrex::Array4< amrex::Real const > const &fcy, amrex::Array4< amrex::Real const > const &fcz), amrex::Array4< amrex::Real const > const &ccent, amrex::BCRec const *d_bcrec_ptr, amrex::Array4< int const > const &itracker, amrex::Array4< amrex::Real const > const &nrs, amrex::Array4< amrex::Real const > const &alpha, amrex::Array4< amrex::Real const > const &nbhd_vol, amrex::Array4< amrex::Real const > const &cent_hat, amrex::Geometry const &geom, int max_order=2)
 
void MLStateRedistribute (amrex::Box const &bx, int ncomp, amrex::Array4< amrex::Real > const &U_out, amrex::Array4< amrex::Real > const &U_in, amrex::Array4< amrex::EBCellFlag const > const &flag, amrex::Array4< amrex::Real const > const &vfrac, AMREX_D_DECL(amrex::Array4< amrex::Real const > const &fcx, amrex::Array4< amrex::Real const > const &fcy, amrex::Array4< amrex::Real const > const &fcz), amrex::Array4< amrex::Real const > const &ccent, amrex::BCRec const *d_bcrec_ptr, amrex::Array4< int const > const &itracker, amrex::Array4< amrex::Real const > const &nrs, amrex::Array4< amrex::Real const > const &alpha, amrex::Array4< amrex::Real const > const &nbhd_vol, amrex::Array4< amrex::Real const > const &cent_hat, amrex::Geometry const &geom, int as_crse, Array4< Real > const &drho_as_crse, Array4< int const > const &flag_as_crse, int as_fine, Array4< Real > const &dm_as_fine, Array4< int const > const &levmsk, int is_ghost_cell, amrex::Real fac_for_deltaR, int max_order=2)
 
void MakeITracker (amrex::Box const &bx, AMREX_D_DECL(amrex::Array4< amrex::Real const > const &apx, amrex::Array4< amrex::Real const > const &apy, amrex::Array4< amrex::Real const > const &apz), amrex::Array4< amrex::Real const > const &vfrac, amrex::Array4< int > const &itracker, amrex::Geometry const &geom, amrex::Real target_volfrac)
 
void MakeStateRedistUtils (amrex::Box const &bx, amrex::Array4< amrex::EBCellFlag const > const &flag, amrex::Array4< amrex::Real const > const &vfrac, amrex::Array4< amrex::Real const > const &ccent, amrex::Array4< int const > const &itracker, amrex::Array4< amrex::Real > const &nrs, amrex::Array4< amrex::Real > const &alpha, amrex::Array4< amrex::Real > const &nbhd_vol, amrex::Array4< amrex::Real > const &cent_hat, amrex::Geometry const &geom, amrex::Real target_volfrac)
 
void ApplyRedistribution (Box const &bx, int ncomp, Array4< Real > const &dUdt_out, Array4< Real > const &dUdt_in, Array4< Real const > const &U_in, Array4< Real > const &scratch, Array4< EBCellFlag const > const &flag, AMREX_D_DECL(Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &apz), Array4< amrex::Real const > const &vfrac, AMREX_D_DECL(Array4< Real const > const &fcx, Array4< Real const > const &fcy, Array4< Real const > const &fcz), Array4< Real const > const &ccc, amrex::BCRec const *d_bcrec_ptr, Geometry const &lev_geom, Real dt, std::string const &redistribution_type, bool use_wts_in_divnc, int srd_max_order, amrex::Real target_volfrac, Array4< Real const > const &srd_update_scale)
 
void ApplyMLRedistribution (Box const &bx, int ncomp, Array4< Real > const &dUdt_out, Array4< Real > const &dUdt_in, Array4< Real const > const &U_in, Array4< Real > const &scratch, Array4< EBCellFlag const > const &flag, AMREX_D_DECL(Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &apz), Array4< amrex::Real const > const &vfrac, AMREX_D_DECL(Array4< Real const > const &fcx, Array4< Real const > const &fcy, Array4< Real const > const &fcz), Array4< Real const > const &ccc, amrex::BCRec const *d_bcrec_ptr, Geometry const &lev_geom, Real dt, std::string const &redistribution_type, int as_crse, Array4< Real > const &rr_drho_crse, Array4< int const > const &rr_flag_crse, int as_fine, Array4< Real > const &dm_as_fine, Array4< int const > const &levmsk, int level_mask_not_covered, Real fac_for_deltaR, bool use_wts_in_divnc, int icomp, int srd_max_order, amrex::Real target_volfrac, Array4< Real const > const &srd_update_scale)
 
void ApplyInitialRedistribution (Box const &bx, int ncomp, Array4< Real > const &U_out, Array4< Real > const &U_in, Array4< EBCellFlag const > const &flag, AMREX_D_DECL(amrex::Array4< amrex::Real const > const &apx, amrex::Array4< amrex::Real const > const &apy, amrex::Array4< amrex::Real const > const &apz), amrex::Array4< amrex::Real const > const &vfrac, AMREX_D_DECL(amrex::Array4< amrex::Real const > const &fcx, amrex::Array4< amrex::Real const > const &fcy, amrex::Array4< amrex::Real const > const &fcz), amrex::Array4< amrex::Real const > const &ccc, amrex::BCRec const *d_bcrec_ptr, Geometry const &lev_geom, std::string const &redistribution_type, int srd_max_order, amrex::Real target_volfrac)
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE amrex::Real amrex_calc_alpha_stencil (Real q_hat, Real q_max, Real q_min, Real state) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > amrex_calc_centroid_limiter (int i, int j, int k, int n, amrex::Array4< amrex::Real const > const &state, amrex::Array4< amrex::EBCellFlag const > const &flag, const amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > &slopes, amrex::Array4< amrex::Real const > const &ccent) noexcept
 
void MakeStateRedistUtils (Box const &bx, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vfrac, Array4< Real const > const &ccent, Array4< int const > const &itracker, Array4< Real > const &nrs, Array4< Real > const &alpha, Array4< Real > const &nbhd_vol, Array4< Real > const &cent_hat, Geometry const &lev_geom, Real target_vol)
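
These entry points are typically driven tile by tile from an MFIter loop, with the geometric arrays pulled out of an EBFArrayBoxFactory. A minimal sketch of flux redistribution follows; the MultiFabs, the factory `fact`, the device pointer `d_bcrec`, and the scalars `ncomp` and `dt` are assumptions for illustration, and the trailing arguments of ApplyRedistribution keep their defaults.

```cpp
// Hedged sketch: redistribute an EB advective update, one tile at a time.
for (amrex::MFIter mfi(dUdt_out_mf, amrex::TilingIfNotGPU()); mfi.isValid(); ++mfi)
{
    const amrex::Box& bx = mfi.tilebox();
    auto const& flag  = fact.getMultiEBCellFlagFab().const_array(mfi);
    auto const& vfrac = fact.getVolFrac().const_array(mfi);
    auto const& ccc   = fact.getCentroid().const_array(mfi);
    AMREX_D_TERM(auto const& apx = fact.getAreaFrac()[0]->const_array(mfi);,
                 auto const& apy = fact.getAreaFrac()[1]->const_array(mfi);,
                 auto const& apz = fact.getAreaFrac()[2]->const_array(mfi);)
    AMREX_D_TERM(auto const& fcx = fact.getFaceCent()[0]->const_array(mfi);,
                 auto const& fcy = fact.getFaceCent()[1]->const_array(mfi);,
                 auto const& fcz = fact.getFaceCent()[2]->const_array(mfi);)

    amrex::ApplyRedistribution(bx, ncomp,
        dUdt_out_mf.array(mfi), dUdt_in_mf.array(mfi),
        U_in_mf.const_array(mfi), scratch_mf.array(mfi), flag,
        AMREX_D_DECL(apx, apy, apz), vfrac,
        AMREX_D_DECL(fcx, fcy, fcz), ccc,
        d_bcrec, geom, dt, "FluxRedist"); // or "StateRedist" / "NoRedist"
}
```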
 
void FillSignedDistance (MultiFab &mf, bool fluid_has_positive_sign=true)
 Fill MultiFab with signed distance. More...
 
void FillSignedDistance (MultiFab &mf, EB2::Level const &ls_lev, EBFArrayBoxFactory const &eb_fac, int refratio, bool fluid_has_positive_sign=true)
 Fill MultiFab with signed distance. More...
 
template<typename G >
void FillImpFunc (MultiFab &mf, G const &gshop, Geometry const &geom)
 Fill MultiFab with implicit function. More...
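
Both fill helpers act on a node-centered MultiFab over existing grids. A minimal sketch, assuming `ba`, `dm`, `geom`, and a GeometryShop object `gshop` (e.g. from EB2::makeShop) are in scope:

```cpp
// Hedged sketch: nodal MultiFabs receiving the signed distance (fluid
// positive by default) and the raw implicit function.
amrex::BoxArray nba = amrex::convert(ba, amrex::IntVect::TheNodeVector());
amrex::MultiFab sdist(nba, dm, 1, 0);
amrex::FillSignedDistance(sdist);      // uses the default EB2 level

amrex::MultiFab imp(nba, dm, 1, 0);
amrex::FillImpFunc(imp, gshop, geom);
```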
 
void TagCutCells (TagBoxArray &tags, const MultiFab &state)
 
void TagVolfrac (TagBoxArray &tags, const MultiFab &volfrac, Real tol)
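
Either tagger slots naturally into an ErrorEst callback. A minimal sketch, assuming `tags`, `state`, and an EBFArrayBoxFactory `fact` are in scope; the 0.5 tolerance is illustrative:

```cpp
// Hedged sketch: refine every cut cell, plus any cell whose volume
// fraction is below 0.5.
amrex::TagCutCells(tags, state);
amrex::TagVolfrac(tags, fact.getVolFrac(), amrex::Real(0.5));
```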
 
std::ostream & operator<< (std::ostream &os, const EBCellFlag &flag)
 
std::unique_ptr< EBFArrayBoxFactorymakeEBFabFactory (const Geometry &a_geom, const BoxArray &a_ba, const DistributionMapping &a_dm, const Vector< int > &a_ngrow, EBSupport a_support)
 
std::unique_ptr< EBFArrayBoxFactorymakeEBFabFactory (const EB2::Level *eb_level, const BoxArray &a_ba, const DistributionMapping &a_dm, const Vector< int > &a_ngrow, EBSupport a_support)
 
std::unique_ptr< EBFArrayBoxFactorymakeEBFabFactory (const EB2::IndexSpace *index_space, const Geometry &a_geom, const BoxArray &a_ba, const DistributionMapping &a_dm, const Vector< int > &a_ngrow, EBSupport a_support)
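
A factory built by any of these overloads can be handed to the MultiFab constructor so the data carry their EB metadata. A minimal sketch with assumed `geom`, `ba`, `dm`; the three ghost widths are illustrative:

```cpp
// Hedged sketch: EBSupport::full keeps volume/area fractions, centroids,
// and boundary data available to the MultiFab.
std::unique_ptr<amrex::EBFArrayBoxFactory> factory =
    amrex::makeEBFabFactory(geom, ba, dm, {4, 4, 2}, amrex::EBSupport::full);
amrex::MultiFab mf(ba, dm, 1, 0, amrex::MFInfo(), *factory);
```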
 
const EBCellFlagFabgetEBCellFlagFab (const FArrayBox &fab)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_flux_reg_crseadd_va (int i, int j, int k, Array4< Real > const &d, Array4< int const > const &flag, Array4< Real const > const &fx, Array4< Real const > const &fy, Array4< Real const > const &vfrac, Array4< Real const > const &ax, Array4< Real const > const &ay, Real dtdx, Real dtdy, int ncomp)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real eb_flux_reg_cvol (int i, int j, Array4< Real const > const &vfrac, Dim3 const &ratio, Real threshold) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_flux_reg_fineadd_va_xlo (int i, int j, int k, int n, Array4< Real > const &d, Array4< Real const > const &f, Array4< Real const > const &vfrac, Array4< Real const > const &a, Real fac, Dim3 const &ratio)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_flux_reg_fineadd_va_xhi (int i, int j, int k, int n, Array4< Real > const &d, Array4< Real const > const &f, Array4< Real const > const &vfrac, Array4< Real const > const &a, Real fac, Dim3 const &ratio)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_flux_reg_fineadd_va_ylo (int i, int j, int k, int n, Array4< Real > const &d, Array4< Real const > const &f, Array4< Real const > const &vfrac, Array4< Real const > const &a, Real fac, Dim3 const &ratio)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_flux_reg_fineadd_va_yhi (int i, int j, int k, int n, Array4< Real > const &d, Array4< Real const > const &f, Array4< Real const > const &vfrac, Array4< Real const > const &a, Real fac, Dim3 const &ratio)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_flux_reg_fineadd_dm (int i, int j, int k, int n, Box const &dmbx, Array4< Real > const &d, Array4< Real const > const &dm, Array4< Real const > const &vfrac, Dim3 const &ratio, Real threshold)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_rereflux_from_crse (int i, int j, int k, int n, Box const &bx, Array4< Real > const &d, Array4< Real const > const &s, Array4< int const > const &amrflg, Array4< EBCellFlag const > const &ebflg, Array4< Real const > const &vfrac)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_rereflux_to_fine (int i, int j, int, int n, Array4< Real > const &d, Array4< Real const > const &s, Array4< int const > const &msk, Dim3 ratio)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_flux_reg_crseadd_va (int i, int j, int k, Array4< Real > const &d, Array4< int const > const &flag, Array4< Real const > const &fx, Array4< Real const > const &fy, Array4< Real const > const &fz, Array4< Real const > const &vfrac, Array4< Real const > const &ax, Array4< Real const > const &ay, Array4< Real const > const &az, Real dtdx, Real dtdy, Real dtdz, int ncomp)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real eb_flux_reg_cvol (int i, int j, int k, Array4< Real const > const &vfrac, Dim3 const &ratio, Real small) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_flux_reg_fineadd_va_zlo (int i, int j, int k, int n, Array4< Real > const &d, Array4< Real const > const &f, Array4< Real const > const &vfrac, Array4< Real const > const &a, Real fac, Dim3 const &ratio)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_flux_reg_fineadd_va_zhi (int i, int j, int k, int n, Array4< Real > const &d, Array4< Real const > const &f, Array4< Real const > const &vfrac, Array4< Real const > const &a, Real fac, Dim3 const &ratio)
 
void EB_set_covered (MultiFab &mf, Real val)
 
void EB_set_covered (MultiFab &mf, int icomp, int ncomp, int ngrow, Real val)
 
void EB_set_covered (MultiFab &mf, int icomp, int ncomp, const Vector< Real > &vals)
 
void EB_set_covered (MultiFab &mf, int icomp, int ncomp, int ngrow, const Vector< Real > &a_vals)
 
void EB_set_covered_faces (const Array< MultiFab *, AMREX_SPACEDIM > &umac, Real val)
 
void EB_set_covered_faces (const Array< MultiFab *, AMREX_SPACEDIM > &umac, const int scomp, const int ncomp, const Vector< Real > &a_vals)
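
Covered cells and faces hold no physical data, so solvers and I/O commonly reset them. A minimal sketch on an EB-aware MultiFab `mf`; the sentinel value is illustrative:

```cpp
// Hedged sketch: zero all components in covered cells, then stamp component 0
// (including one ghost layer) with a large sentinel instead.
amrex::EB_set_covered(mf, 0.0);
amrex::EB_set_covered(mf, 0, 1, 1, 1.0e40);
```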
 
void EB_average_down (const MultiFab &S_fine, MultiFab &S_crse, const MultiFab &vol_fine, const MultiFab &vfrac_fine, int scomp, int ncomp, const IntVect &ratio)
 
void EB_average_down (const MultiFab &S_fine, MultiFab &S_crse, int scomp, int ncomp, int ratio)
 
void EB_average_down (const MultiFab &S_fine, MultiFab &S_crse, int scomp, int ncomp, const IntVect &ratio)
 
void EB_average_down_faces (const Array< const MultiFab *, AMREX_SPACEDIM > &fine, const Array< MultiFab *, AMREX_SPACEDIM > &crse, int ratio, int ngcrse)
 
void EB_average_down_faces (const Array< const MultiFab *, AMREX_SPACEDIM > &fine, const Array< MultiFab *, AMREX_SPACEDIM > &crse, const IntVect &ratio, int ngcrse)
 
void EB_average_down_faces (const Array< const MultiFab *, AMREX_SPACEDIM > &fine, const Array< MultiFab *, AMREX_SPACEDIM > &crse, const IntVect &ratio, const Geometry &crse_geom)
 
void EB_average_down_boundaries (const MultiFab &fine, MultiFab &crse, int ratio, int ngcrse)
 
void EB_average_down_boundaries (const MultiFab &fine, MultiFab &crse, const IntVect &ratio, int ngcrse)
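
The EB coarsening routines weight cell data by volume fraction, face data by area fraction, and boundary data by boundary area. A minimal sketch for a refinement ratio of 2, assuming fine/coarse MultiFabs and Array<MultiFab,AMREX_SPACEDIM> MAC data:

```cpp
// Hedged sketch: vfrac-weighted coarsening of cell data, then area-weighted
// coarsening of the face-centered velocities with zero coarse ghost cells.
amrex::EB_average_down(S_fine, S_crse, 0, S_fine.nComp(), amrex::IntVect(2));
amrex::EB_average_down_faces(amrex::GetArrOfConstPtrs(umac_fine),
                             amrex::GetArrOfPtrs(umac_crse),
                             amrex::IntVect(2), 0);
```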
 
void EB_computeDivergence (MultiFab &divu, const Array< MultiFab const *, AMREX_SPACEDIM > &umac, const Geometry &geom, bool already_on_centroids)
 
void EB_computeDivergence (MultiFab &divu, const Array< MultiFab const *, AMREX_SPACEDIM > &umac, const Geometry &geom, bool already_on_centroids, const MultiFab &vel_eb)
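
A minimal sketch of the divergence helper, assuming `divu`, MAC velocities `umac` (Array<MultiFab,AMREX_SPACEDIM>), and `geom`:

```cpp
// Hedged sketch: cell divergence of face velocities; pass false if the face
// data still live at face centers rather than face centroids.
amrex::EB_computeDivergence(divu, amrex::GetArrOfConstPtrs(umac), geom, true);
```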
 
void EB_average_face_to_cellcenter (MultiFab &ccmf, int dcomp, const Array< MultiFab const *, AMREX_SPACEDIM > &fmf)
 
void EB_interp_CC_to_Centroid (MultiFab &cent, const MultiFab &cc, int scomp, int dcomp, int ncomp, const Geometry &geom)
 
void EB_interp_CC_to_FaceCentroid (const MultiFab &cc, AMREX_D_DECL(MultiFab &fc_x, MultiFab &fc_y, MultiFab &fc_z), int scomp, int dcomp, int ncomp, const Geometry &a_geom, const Vector< BCRec > &a_bcs)
 
void EB_interp_CellCentroid_to_FaceCentroid (const MultiFab &phi_centroid, const Array< MultiFab *, AMREX_SPACEDIM > &phi_faces, int scomp, int dcomp, int nc, const Geometry &geom, const amrex::Vector< amrex::BCRec > &a_bcs)
 
void EB_interp_CellCentroid_to_FaceCentroid (const MultiFab &phi_centroid, const Vector< MultiFab * > &phi_faces, int scomp, int dcomp, int nc, const Geometry &geom, const amrex::Vector< amrex::BCRec > &a_bcs)
 
void EB_interp_CellCentroid_to_FaceCentroid (const MultiFab &phi_centroid, AMREX_D_DECL(MultiFab &phi_xface, MultiFab &phi_yface, MultiFab &phi_zface), int scomp, int dcomp, int ncomp, const Geometry &a_geom, const Vector< BCRec > &a_bcs)
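
A minimal sketch of centroid-to-face interpolation, assuming a cell-centroid field `phi_ctr`, face MultiFabs `phi_fc` (Array<MultiFab,AMREX_SPACEDIM>), and boundary conditions `bcs`:

```cpp
// Hedged sketch: interpolate all components from cell centroids to face
// centroids, honoring the supplied BCRecs at the domain boundary.
amrex::EB_interp_CellCentroid_to_FaceCentroid(
    phi_ctr, amrex::GetArrOfPtrs(phi_fc),
    0, 0, phi_ctr.nComp(), geom, bcs);
```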
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_set_covered_nodes (int i, int j, int k, int n, int icomp, Array4< Real > const &d, Array4< EBCellFlag const > const &f, Real v)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_set_covered_nodes (int i, int j, int k, int n, int icomp, Array4< Real > const &d, Array4< EBCellFlag const > const &f, Real const *AMREX_RESTRICT v)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_avgdown_with_vol (int i, int j, int k, Array4< Real const > const &fine, int fcomp, Array4< Real > const &crse, int ccomp, Array4< Real const > const &fv, Array4< Real const > const &vfrc, Dim3 const &ratio, int ncomp)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_avgdown (int i, int j, int k, Array4< Real const > const &fine, int fcomp, Array4< Real > const &crse, int ccomp, Array4< Real const > const &vfrc, Dim3 const &ratio, int ncomp)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_avgdown_face_x (int i, int j, int k, Array4< Real const > const &fine, int fcomp, Array4< Real > const &crse, int ccomp, Array4< Real const > const &area, Dim3 const &ratio, int ncomp)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_avgdown_face_y (int i, int j, int k, Array4< Real const > const &fine, int fcomp, Array4< Real > const &crse, int ccomp, Array4< Real const > const &area, Dim3 const &ratio, int ncomp)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_avgdown_boundaries (int i, int j, int k, Array4< Real const > const &fine, int fcomp, Array4< Real > const &crse, int ccomp, Array4< Real const > const &ba, Dim3 const &ratio, int ncomp)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_compute_divergence (int i, int j, int k, int n, Array4< Real > const &divu, Array4< Real const > const &u, Array4< Real const > const &v, Array4< int const > const &ccm, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vfrc, Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &fcx, Array4< Real const > const &fcy, GpuArray< Real, 2 > const &dxinv, bool already_on_centroids)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_avg_fc_to_cc (int i, int j, int k, int n, Array4< Real > const &cc, Array4< Real const > const &fx, Array4< Real const > const &fy, Array4< Real const > const &ax, Array4< Real const > const &ay, Array4< EBCellFlag const > const &flag)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_interp_cc2cent (Box const &box, const Array4< Real > &phicent, Array4< Real const > const &phicc, Array4< EBCellFlag const > const &flag, Array4< Real const > const &cent, int ncomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_interp_cc2facecent_x (Box const &ubx, Array4< Real const > const &phi, Array4< Real const > const &apx, Array4< Real const > const &fcx, Array4< Real > const &edg_x, int ncomp, const Box &domain, const BCRec *bc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_interp_cc2facecent_y (Box const &vbx, Array4< Real const > const &phi, Array4< Real const > const &apy, Array4< Real const > const &fcy, Array4< Real > const &edg_y, int ncomp, const Box &domain, const BCRec *bc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_interp_centroid2facecent_x (Box const &ubx, Array4< Real const > const &phi, Array4< Real const > const &apx, Array4< Real const > const &cvol, Array4< Real const > const &ccent, Array4< Real const > const &fcx, Array4< Real > const &edg_x, int ncomp, const Box &domain, const BCRec *bc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_interp_centroid2facecent_y (Box const &vbx, Array4< Real const > const &phi, Array4< Real const > const &apy, Array4< Real const > const &cvol, Array4< Real const > const &ccent, Array4< Real const > const &fcy, Array4< Real > const &edg_y, int ncomp, const Box &domain, const BCRec *bc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_interp_cc2face_x (Box const &ubx, Array4< Real const > const &phi, Array4< Real > const &edg_x, int ncomp, const Box &domain, const BCRec *bc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_interp_cc2face_y (Box const &vbx, Array4< Real const > const &phi, Array4< Real > const &edg_y, int ncomp, const Box &domain, const BCRec *bc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_add_divergence_from_flow (int i, int j, int k, int n, Array4< Real > const &divu, Array4< Real const > const &vel_eb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vfrc, Array4< Real const > const &bnorm, Array4< Real const > const &barea, GpuArray< Real, 2 > const &dxinv)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real EB_interp_in_quad (Real xint, Real yint, Real v0, Real v1, Real v2, Real v3, Real x0, Real y0, Real x1, Real y1, Real x2, Real y2, Real x3, Real y3)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_avgdown_face_z (int i, int j, int k, Array4< Real const > const &fine, int fcomp, Array4< Real > const &crse, int ccomp, Array4< Real const > const &area, Dim3 const &ratio, int ncomp)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_compute_divergence (int i, int j, int k, int n, Array4< Real > const &divu, Array4< Real const > const &u, Array4< Real const > const &v, Array4< Real const > const &w, Array4< int const > const &ccm, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vfrc, Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &apz, Array4< Real const > const &fcx, Array4< Real const > const &fcy, Array4< Real const > const &fcz, GpuArray< Real, 3 > const &dxinv, bool already_on_centroids)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_avg_fc_to_cc (int i, int j, int k, int n, Array4< Real > const &cc, Array4< Real const > const &fx, Array4< Real const > const &fy, Array4< Real const > const &fz, Array4< Real const > const &ax, Array4< Real const > const &ay, Array4< Real const > const &az, Array4< EBCellFlag const > const &flag)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_interp_cc2facecent_z (Box const &wbx, Array4< Real const > const &phi, Array4< Real const > const &apz, Array4< Real const > const &fcz, Array4< Real > const &edg_z, int ncomp, const Box &domain, const BCRec *bc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_interp_centroid2facecent_z (Box const &wbx, Array4< Real const > const &phi, Array4< Real const > const &apz, Array4< Real const > const &cvol, Array4< Real const > const &ccent, Array4< Real const > const &fcz, Array4< Real > const &phi_z, int ncomp, const Box &domain, const BCRec *bc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_interp_cc2face_z (Box const &wbx, Array4< Real const > const &phi, Array4< Real > const &edg_z, int ncomp, const Box &domain, const BCRec *bc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void eb_add_divergence_from_flow (int i, int j, int k, int n, Array4< Real > const &divu, Array4< Real const > const &vel_eb, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vfrc, Array4< Real const > const &bnorm, Array4< Real const > const &barea, GpuArray< Real, 3 > const &dxinv)
 
void WriteEBSurface (const BoxArray &ba, const DistributionMapping &dmap, const Geometry &geom, const EBFArrayBoxFactory *ebf)
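
A minimal sketch, reusing a factory created earlier; the call writes the embedded-boundary surface triangulation for visualization:

```cpp
// Hedged sketch: dump the EB surface described by `factory`.
amrex::WriteEBSurface(ba, dm, geom, factory.get());
```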
 
static std::string thePlotFileType ()
 
void writePlotFile (const std::string &dir, std::ostream &os, int level, const MultiFab &mf, const Geometry &geom, const IntVect &refRatio, Real bgVal, const Vector< std::string > &names)
 
void writePlotFile (const char *name, const MultiFab &mf, const Geometry &geom, const IntVect &refRatio, Real bgVal, const Vector< std::string > &names)
 
void WritePlotFile (const Vector< MultiFab * > &mfa, const Vector< Box > &probDomain, AmrData &amrdToMimic, const std::string &oFile, bool verbose, const Vector< std::string > &varNames)
 
void WritePlotFile (const Vector< MultiFab * > &mfa, AmrData &amrdToMimic, const std::string &oFile, bool verbose, const Vector< std::string > &varNames)
 
void writePlotFile (const char *name, const amrex::MultiFab &mf, const amrex::Geometry &geom, const amrex::IntVect &refRatio, amrex::Real bgVal, const amrex::Vector< std::string > &names)
 
bool Nestsets (const int level, const int n_levels, const FArrayBox &fab, const Vector< const BoxArray * > box_arrays, const Vector< IntVect > &ref_ratio, const Vector< int > &domain_offsets, conduit::Node &nestset)
 
void FabToBlueprintTopology (const Geometry &geom, const FArrayBox &fab, Node &res)
 
void AddFabGhostIndicatorField (const FArrayBox &fab, int ngrow, Node &res)
 
void FabToBlueprintFields (const FArrayBox &fab, const Vector< std::string > &varnames, Node &res)
 
void SingleLevelToBlueprint (const MultiFab &mf, const Vector< std::string > &varnames, const Geometry &geom, Real time_value, int level_step, Node &res)
 
void MultiLevelToBlueprint (int n_levels, const Vector< const MultiFab * > &mfs, const Vector< std::string > &varnames, const Vector< Geometry > &geoms, Real time_value, const Vector< int > &level_steps, const Vector< IntVect > &ref_ratio, Node &res)
 
void WriteBlueprintFiles (const conduit::Node &bp_mesh, const std::string &fname_base, int step, const std::string &protocol)
 
void SingleLevelToBlueprint (const MultiFab &mf, const Vector< std::string > &varnames, const Geometry &geom, Real time_value, int level_step, conduit::Node &bp_mesh)
 
void MultiLevelToBlueprint (int n_levels, const Vector< const MultiFab * > &mfs, const Vector< std::string > &varnames, const Vector< Geometry > &geoms, Real time_value, const Vector< int > &level_steps, const Vector< IntVect > &ref_ratio, conduit::Node &bp_mesh)
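
The Blueprint wrappers require AMReX to be built with Conduit. A minimal sketch for one level, assuming `mf`, `varnames`, `geom`, `time`, and `step` are in scope; "hdf5" is one of Conduit's I/O protocols:

```cpp
// Hedged sketch: describe the level as a Conduit Blueprint mesh, then write it.
conduit::Node bp_mesh;
amrex::SingleLevelToBlueprint(mf, varnames, geom, time, step, bp_mesh);
amrex::WriteBlueprintFiles(bp_mesh, "bp_plt_", step, "hdf5");
```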
 
template<typename ParticleType , int NArrayReal, int NArrayInt>
void ParticleTileToBlueprint (const ParticleTile< ParticleType, NArrayReal, NArrayInt > &ptile, const Vector< std::string > &real_comp_names, const Vector< std::string > &int_comp_names, conduit::Node &res, const std::string &topology_name)
 
template<typename ParticleType , int NArrayReal, int NArrayInt>
void ParticleContainerToBlueprint (const ParticleContainer_impl< ParticleType, NArrayReal, NArrayInt > &pc, const Vector< std::string > &real_comp_names, const Vector< std::string > &int_comp_names, conduit::Node &res, const std::string &topology_name)
 
static int CreateWriteHDF5AttrDouble (hid_t loc, const char *name, hsize_t n, const double *data)
 
static int CreateWriteHDF5AttrInt (hid_t loc, const char *name, hsize_t n, const int *data)
 
static int CreateWriteHDF5AttrString (hid_t loc, const char *name, const char *str)
 
static void SetHDF5fapl (hid_t fapl, MPI_Comm comm)
 
static void WriteGenericPlotfileHeaderHDF5 (hid_t fid, int nlevels, const Vector< const MultiFab * > &mf, const Vector< BoxArray > &bArray, const Vector< std::string > &varnames, const Vector< Geometry > &geom, Real time, const Vector< int > &level_steps, const Vector< IntVect > &ref_ratio, const std::string &versionName, const std::string &levelPrefix, const std::string &mfPrefix, const Vector< std::string > &extra_dirs)
 
void WriteMultiLevelPlotfileHDF5SingleDset (const std::string &plotfilename, int nlevels, const Vector< const MultiFab * > &mf, const Vector< std::string > &varnames, const Vector< Geometry > &geom, Real time, const Vector< int > &level_steps, const Vector< IntVect > &ref_ratio, const std::string &compression, const std::string &versionName, const std::string &levelPrefix, const std::string &mfPrefix, const Vector< std::string > &extra_dirs)
 
void WriteMultiLevelPlotfileHDF5MultiDset (const std::string &plotfilename, int nlevels, const Vector< const MultiFab * > &mf, const Vector< std::string > &varnames, const Vector< Geometry > &geom, Real time, const Vector< int > &level_steps, const Vector< IntVect > &ref_ratio, const std::string &compression, const std::string &versionName, const std::string &levelPrefix, const std::string &mfPrefix, const Vector< std::string > &extra_dirs)
 
void WriteSingleLevelPlotfileHDF5 (const std::string &plotfilename, const MultiFab &mf, const Vector< std::string > &varnames, const Geometry &geom, Real time, int level_step, const std::string &compression, const std::string &versionName, const std::string &levelPrefix, const std::string &mfPrefix, const Vector< std::string > &extra_dirs)
 
void WriteSingleLevelPlotfileHDF5SingleDset (const std::string &plotfilename, const MultiFab &mf, const Vector< std::string > &varnames, const Geometry &geom, Real time, int level_step, const std::string &compression, const std::string &versionName, const std::string &levelPrefix, const std::string &mfPrefix, const Vector< std::string > &extra_dirs)
 
void WriteSingleLevelPlotfileHDF5MultiDset (const std::string &plotfilename, const MultiFab &mf, const Vector< std::string > &varnames, const Geometry &geom, Real time, int level_step, const std::string &compression, const std::string &versionName, const std::string &levelPrefix, const std::string &mfPrefix, const Vector< std::string > &extra_dirs)
 
void WriteMultiLevelPlotfileHDF5 (const std::string &plotfilename, int nlevels, const Vector< const MultiFab * > &mf, const Vector< std::string > &varnames, const Vector< Geometry > &geom, Real time, const Vector< int > &level_steps, const Vector< IntVect > &ref_ratio, const std::string &compression, const std::string &versionName, const std::string &levelPrefix, const std::string &mfPrefix, const Vector< std::string > &extra_dirs)
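
The HDF5 writers mirror the native plotfile API and require an HDF5-enabled build. A minimal single-level sketch, assuming `mf`, `varnames`, `geom`, `time`, and `step`, and assuming the trailing compression/version/prefix arguments carry defaults in the header (they are shown without defaults above):

```cpp
// Hedged sketch: one-level HDF5 plotfile; the multi-level variant takes
// vectors of MultiFabs, Geometries, steps, and refinement ratios instead.
amrex::WriteSingleLevelPlotfileHDF5("plt00000", mf, varnames, geom, time, step);
```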
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void habec_mat (GpuArray< Real, 2 *AMREX_SPACEDIM+1 > &sten, int i, int j, int k, Dim3 const &boxlo, Dim3 const &boxhi, Real sa, Array4< Real const > const &a, Real sb, GpuArray< Real, AMREX_SPACEDIM > const &dx, GpuArray< Array4< Real const >, AMREX_SPACEDIM > const &b, GpuArray< int, AMREX_SPACEDIM *2 > const &bctype, GpuArray< Real, AMREX_SPACEDIM *2 > const &bcl, int bho, GpuArray< Array4< int const >, AMREX_SPACEDIM *2 > const &msk, Array4< Real > const &diaginv)
 
template<typename Int >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void habec_ijmat (GpuArray< Real, 2 *AMREX_SPACEDIM+1 > &sten, Array4< Int > const &ncols, Array4< Real > const &diaginv, int i, int j, int k, Array4< Int const > const &cell_id, Real sa, Array4< Real const > const &a, Real sb, GpuArray< Real, AMREX_SPACEDIM > const &dx, GpuArray< Array4< Real const >, AMREX_SPACEDIM > const &b, GpuArray< int, AMREX_SPACEDIM *2 > const &bctype, GpuArray< Real, AMREX_SPACEDIM *2 > const &bcl, int bho, Array4< int const > const &osm)
 
template<typename Int >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void habec_cols (GpuArray< Int, 2 *AMREX_SPACEDIM+1 > &sten, int i, int j, int, Array4< Int const > const &cell_id)
 
std::unique_ptr< HypremakeHypre (const BoxArray &grids, const DistributionMapping &dmap, const Geometry &geom, MPI_Comm comm_, Hypre::Interface interface, const iMultiFab *overset_mask)
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void hypmlabeclap_f2c_set_values (IntVect const &cell, Real *values, GpuArray< Real, AMREX_SPACEDIM > const &dx, Real sb, GpuArray< Array4< Real const >, AMREX_SPACEDIM > const &b, GpuArray< Array4< int const >, AMREX_SPACEDIM *2 > const &bmask, IntVect const &refratio, int not_covered)
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void hypmlabeclap_c2f (int i, int j, int k, Array4< GpuArray< Real, 2 *AMREX_SPACEDIM+1 >> const &stencil, GpuArray< HYPRE_Int, AMREX_SPACEDIM > *civ, HYPRE_Int *nentries, int *entry_offset, Real *entry_values, Array4< int const > const &offset_from, Array4< int const > const &nentries_to, Array4< int const > const &offset_to, GpuArray< Real, AMREX_SPACEDIM > const &dx, Real sb, Array4< int const > const &offset_bx, Array4< int const > const &offset_by, Real const *bx, Real const *by, Array4< int const > const &fine_mask, IntVect const &rr, Array4< int const > const &crse_mask)
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void hypmlabeclap_c2f (int i, int j, int k, Array4< GpuArray< Real, 2 *AMREX_SPACEDIM+1 >> const &stencil, GpuArray< HYPRE_Int, AMREX_SPACEDIM > *civ, HYPRE_Int *nentries, int *entry_offset, Real *entry_values, Array4< int const > const &offset_from, Array4< int const > const &nentries_to, Array4< int const > const &offset_to, GpuArray< Real, AMREX_SPACEDIM > const &dx, Real sb, Array4< int const > const &offset_bx, Array4< int const > const &offset_by, Array4< int const > const &offset_bz, Real const *bx, Real const *by, Real const *bz, Array4< int const > const &fine_mask, IntVect const &rr, Array4< int const > const &crse_mask)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void hypmlabeclap_mat (GpuArray< Real, 2 *AMREX_SPACEDIM+1 > &sten, int i, int j, int k, Dim3 const &boxlo, Dim3 const &boxhi, Real sa, Array4< Real const > const &a, Real sb, GpuArray< Real, AMREX_SPACEDIM > const &dx, GpuArray< Array4< Real const >, AMREX_SPACEDIM > const &b, GpuArray< int, AMREX_SPACEDIM *2 > const &bctype, GpuArray< Real, AMREX_SPACEDIM *2 > const &bcl, GpuArray< Array4< int const >, AMREX_SPACEDIM *2 > const &bcmsk, GpuArray< Array4< Real const >, AMREX_SPACEDIM *2 > const &bcval, GpuArray< Array4< Real >, AMREX_SPACEDIM *2 > const &bcrhs, int level, IntVect const &fixed_pt)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void hypmlabeclap_rhs (int i, int j, int k, Dim3 const &boxlo, Dim3 const &boxhi, Array4< Real > const &rhs1, Array4< Real const > const &rhs0, GpuArray< Array4< int const >, AMREX_SPACEDIM *2 > const &bcmsk, GpuArray< Array4< Real const >, AMREX_SPACEDIM *2 > const &bcrhs)
 
std::unique_ptr< PETScABecLapmakePetsc (const BoxArray &grids, const DistributionMapping &dmap, const Geometry &geom, MPI_Comm comm_)
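
Both factories return ownership of a solver wrapper and require the corresponding external library at configure time. A minimal sketch with assumed `grids`, `dmap`, `geom`:

```cpp
// Hedged sketch: a structured-grid Hypre interface with no overset mask, and
// a PETSc ABecLaplacian wrapper on the same communicator.
auto hypre = amrex::makeHypre(grids, dmap, geom, MPI_COMM_WORLD,
                              amrex::Hypre::Interface::structed, nullptr);
auto petsc = amrex::makePetsc(grids, dmap, geom, MPI_COMM_WORLD);
```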
 
std::string SanitizeName (const std::string &sname)
 
void SimpleRemoveOverlap (BoxArray &ba)
 
void avgDown_doit (const FArrayBox &fine_fab, FArrayBox &crse_fab, const Box &ovlp, int scomp, int dcomp, int ncomp, Vector< int > &ratio)
 
Box FixCoarseBoxSize (const Box &fineBox, int rr)
 
void avgDown (MultiFab &S_crse, MultiFab &S_fine, int scomp, int dcomp, int ncomp, Vector< int > &ratio)
 
void PrintTimeRangeList (const std::list< RegionsProfStats::TimeRange > &trList)
 
void RedistFiles ()
 
int NHops (const Box &tbox, const IntVect &ivfrom, const IntVect &ivto)
 
void Write2DFab (const string &filenameprefix, const int xdim, const int ydim, const double *data)
 
void Write2DText (const string &filenameprefix, const int xdim, const int ydim, const double *data)
 
void Write3DFab (const string &filenameprefix, const int xdim, const int ydim, const int zdim, const double *data)
 
void WriteFab (const string &filenameprefix, const int xdim, const int ydim, const double *data)
 
long FileSize (const std::string &filename)
 
void MakeFuncPctTimesMF (const Vector< Vector< BLProfStats::FuncStat > > &funcStats, const Vector< std::string > &blpFNames, const std::map< std::string, BLProfiler::ProfStats > &mProfStats, Real runTime, int dataNProcs)
 
void CollectMProfStats (std::map< std::string, BLProfiler::ProfStats > &mProfStats, const Vector< Vector< BLProfStats::FuncStat > > &funcStats, const Vector< std::string > &fNames, Real runTime, int whichProc)
 
void GraphTopPct (const std::map< std::string, BLProfiler::ProfStats > &mProfStats, const Vector< Vector< BLProfStats::FuncStat > > &funcStats, const Vector< std::string > &fNames, Real runTime, int dataNProcs, Real gPercent)
 
void WritePlotfile (const std::string &pfversion, const Vector< MultiFab > &data, const Real time, const Vector< Real > &probLo, const Vector< Real > &probHi, const Vector< int > &refRatio, const Vector< Box > &probDomain, const Vector< Vector< Real > > &dxLevel, const int coordSys, const std::string &oFile, const Vector< std::string > &names, const bool verbose, const bool isCartGrid, const Real *vfeps, const int *levelSteps)
 
std::string VisMFBaseName (const std::string &filename)
 
void Write2DBoxFrom3D (const Box &box, std::ostream &os, int whichPlane)
 
VisMF::FabOnDisk VisMFWrite (const FArrayBox &fabIn, const std::string &filename, std::ostream &os, long &bytes, int whichPlane)
 
static std::ostream & operator<< (std::ostream &os, const Vector< Vector< Real > > &ar)
 
long VisMFWriteHeader (const std::string &mf_name, VisMF::Header &hdr, int whichPlane)
 
void WritePlotfile2DFrom3D (const std::string &pfversion, const Vector< MultiFab > &data, const Real time, const Vector< Real > &probLo, const Vector< Real > &probHi, const Vector< int > &refRatio, const Vector< Box > &probDomain, const Vector< Vector< Real > > &dxLevel, const int coordSys, const std::string &oFile, const Vector< std::string > &names, const bool verbose, const bool isCartGrid, const Real *vfeps, const int *levelSteps)
 
 senseiNewMacro (AmrDataAdaptor)
 
 senseiNewMacro (AmrMeshDataAdaptor)
 
template<typename V1 , typename F >
std::enable_if_t< IsAlgVector< std::decay_t< V1 > >::value > ForEach (V1 &x, F const &f)
 
template<typename V1 , typename V2 , typename F >
std::enable_if_t< IsAlgVector< std::decay_t< V1 > >::value &&IsAlgVector< std::decay_t< V2 > >::value > ForEach (V1 &x, V2 &y, F const &f)
 
template<typename V1 , typename V2 , typename V3 , typename F >
std::enable_if_t< IsAlgVector< std::decay_t< V1 > >::value &&IsAlgVector< std::decay_t< V2 > >::value &&IsAlgVector< std::decay_t< V3 > >::value > ForEach (V1 &x, V2 &y, V3 &z, F const &f)
 
template<typename V1 , typename V2 , typename V3 , typename V4 , typename F >
std::enable_if_t< IsAlgVector< std::decay_t< V1 > >::value &&IsAlgVector< std::decay_t< V2 > >::value &&IsAlgVector< std::decay_t< V3 > >::value &&IsAlgVector< std::decay_t< V4 > >::value > ForEach (V1 &x, V2 &y, V3 &z, V4 &a, F const &f)
 
template<typename V1 , typename V2 , typename V3 , typename V4 , typename V5 , typename F >
std::enable_if_t< IsAlgVector< std::decay_t< V1 > >::value &&IsAlgVector< std::decay_t< V2 > >::value &&IsAlgVector< std::decay_t< V3 > >::value &&IsAlgVector< std::decay_t< V4 > >::value &&IsAlgVector< std::decay_t< V5 > >::value > ForEach (V1 &x, V2 &y, V3 &z, V4 &a, V5 &b, F const &f)
 
template<typename T >
T Dot (AlgVector< T > const &x, AlgVector< T > const &y, bool local=false)
 
template<typename T >
void Axpy (AlgVector< T > &y, T a, AlgVector< T > const &x, bool async=false)
 
template<typename T >
void LinComb (AlgVector< T > &y, T a, AlgVector< T > const &xa, T b, AlgVector< T > const &xb, bool async=false)
 
template<typename T >
void SpMV (AlgVector< T > &y, SpMatrix< T > const &A, AlgVector< T > const &x)
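
The ForEach overloads visit the local entries of up to five AlgVectors with one functor, and the BLAS-like helpers compose with them. A minimal sketch, assuming AlgVector<Real> objects `x`, `y`, `z` on a shared partition, a SpMatrix<Real> `A`, and that the ForEach functor receives entry references:

```cpp
// Hedged sketch: z = A*x, y += 2*x, a global (MPI-reduced) dot product,
// and an elementwise update via ForEach.
amrex::SpMV(z, A, x);
amrex::Axpy(y, amrex::Real(2.0), x);
amrex::Real r = amrex::Dot(x, y);    // local=false reduces across ranks
amrex::ForEach(x, y, [=] AMREX_GPU_DEVICE (amrex::Real& xi, amrex::Real& yi)
{
    yi += xi * xi;
});
```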
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_adotx (int i, int, int, int n, Array4< T > const &y, Array4< T const > const &x, Array4< T const > const &a, Array4< T const > const &bX, GpuArray< T, AMREX_SPACEDIM > const &dxinv, T alpha, T beta) noexcept
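
These cell-wise kernels are meant to be launched over a box. A minimal sketch applying the 1D operator with ParallelFor, assuming the Array4s `y`, `x`, `a`, `bX` and the scalars `dxinv`, `alpha`, `beta` have already been extracted:

```cpp
// Hedged sketch: y = (alpha*a - beta*del.(bX grad)) x, one cell per thread.
amrex::ParallelFor(bx, ncomp,
[=] AMREX_GPU_DEVICE (int i, int j, int k, int n) noexcept
{
    amrex::mlabeclap_adotx(i, j, k, n, y, x, a, bX, dxinv, alpha, beta);
});
```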
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_adotx_os (int i, int, int, int n, Array4< T > const &y, Array4< T const > const &x, Array4< T const > const &a, Array4< T const > const &bX, Array4< int const > const &osm, GpuArray< T, AMREX_SPACEDIM > const &dxinv, T alpha, T beta) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_normalize (int i, int, int, int n, Array4< T > const &x, Array4< T const > const &a, Array4< T const > const &bX, GpuArray< T, AMREX_SPACEDIM > const &dxinv, T alpha, T beta) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_flux_x (Box const &box, Array4< T > const &fx, Array4< T const > const &sol, Array4< T const > const &bx, T fac, int ncomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_flux_xface (Box const &box, Array4< T > const &fx, Array4< T const > const &sol, Array4< T const > const &bx, T fac, int xlen, int ncomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void abec_gsrb (int i, int, int, int n, Array4< T > const &phi, Array4< T const > const &rhs, T alpha, Array4< T const > const &a, T dhx, Array4< T const > const &bX, Array4< int const > const &m0, Array4< int const > const &m1, Array4< T const > const &f0, Array4< T const > const &f1, Box const &vbox, int redblack) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void abec_gsrb_os (int i, int, int, int n, Array4< T > const &phi, Array4< T const > const &rhs, T alpha, Array4< T const > const &a, T dhx, Array4< T const > const &bX, Array4< int const > const &m0, Array4< int const > const &m1, Array4< T const > const &f0, Array4< T const > const &f1, Array4< int const > const &osm, Box const &vbox, int redblack) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void abec_jacobi (int i, int, int, int n, Array4< T > const &phi, Array4< T const > const &rhs, Array4< T const > const &Ax, T alpha, Array4< T const > const &a, T dhx, Array4< T const > const &bX, Array4< int const > const &m0, Array4< int const > const &m1, Array4< T const > const &f0, Array4< T const > const &f1, Box const &vbox) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void abec_jacobi_os (int i, int, int, int n, Array4< T > const &phi, Array4< T const > const &rhs, Array4< T const > const &Ax, T alpha, Array4< T const > const &a, T dhx, Array4< T const > const &bX, Array4< int const > const &m0, Array4< int const > const &m1, Array4< T const > const &f0, Array4< T const > const &f1, Array4< int const > const &osm, Box const &vbox) noexcept
 
template<typename T >
AMREX_FORCE_INLINE void abec_gsrb_with_line_solve (Box const &, Array4< T > const &, Array4< T const > const &, T, Array4< T const > const &, T, Array4< T const > const &, Array4< int const > const &, Array4< int const > const &, Array4< T const > const &, Array4< T const > const &, Box const &, int, int) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void overset_rescale_bcoef_x (Box const &box, Array4< T > const &bX, Array4< int const > const &osm, int ncomp, T osfac) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_adotx (int i, int j, int, int n, Array4< T > const &y, Array4< T const > const &x, Array4< T const > const &a, Array4< T const > const &bX, Array4< T const > const &bY, GpuArray< T, AMREX_SPACEDIM > const &dxinv, T alpha, T beta) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_adotx_os (int i, int j, int, int n, Array4< T > const &y, Array4< T const > const &x, Array4< T const > const &a, Array4< T const > const &bX, Array4< T const > const &bY, Array4< int const > const &osm, GpuArray< T, AMREX_SPACEDIM > const &dxinv, T alpha, T beta) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_normalize (int i, int j, int, int n, Array4< T > const &x, Array4< T const > const &a, Array4< T const > const &bX, Array4< T const > const &bY, GpuArray< T, AMREX_SPACEDIM > const &dxinv, T alpha, T beta) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_flux_y (Box const &box, Array4< T > const &fy, Array4< T const > const &sol, Array4< T const > const &by, T fac, int ncomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_flux_yface (Box const &box, Array4< T > const &fy, Array4< T const > const &sol, Array4< T const > const &by, T fac, int ylen, int ncomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void abec_gsrb (int i, int j, int, int n, Array4< T > const &phi, Array4< T const > const &rhs, T alpha, Array4< T const > const &a, T dhx, T dhy, Array4< T const > const &bX, Array4< T const > const &bY, Array4< int const > const &m0, Array4< int const > const &m2, Array4< int const > const &m1, Array4< int const > const &m3, Array4< T const > const &f0, Array4< T const > const &f2, Array4< T const > const &f1, Array4< T const > const &f3, Box const &vbox, int redblack) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void abec_gsrb_os (int i, int j, int, int n, Array4< T > const &phi, Array4< T const > const &rhs, T alpha, Array4< T const > const &a, T dhx, T dhy, Array4< T const > const &bX, Array4< T const > const &bY, Array4< int const > const &m0, Array4< int const > const &m2, Array4< int const > const &m1, Array4< int const > const &m3, Array4< T const > const &f0, Array4< T const > const &f2, Array4< T const > const &f1, Array4< T const > const &f3, Array4< int const > const &osm, Box const &vbox, int redblack) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void abec_jacobi (int i, int j, int, int n, Array4< T > const &phi, Array4< T const > const &rhs, Array4< T const > const &Ax, T alpha, Array4< T const > const &a, T dhx, T dhy, Array4< T const > const &bX, Array4< T const > const &bY, Array4< int const > const &m0, Array4< int const > const &m2, Array4< int const > const &m1, Array4< int const > const &m3, Array4< T const > const &f0, Array4< T const > const &f2, Array4< T const > const &f1, Array4< T const > const &f3, Box const &vbox) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void abec_jacobi_os (int i, int j, int, int n, Array4< T > const &phi, Array4< T const > const &rhs, Array4< T const > const &Ax, T alpha, Array4< T const > const &a, T dhx, T dhy, Array4< T const > const &bX, Array4< T const > const &bY, Array4< int const > const &m0, Array4< int const > const &m2, Array4< int const > const &m1, Array4< int const > const &m3, Array4< T const > const &f0, Array4< T const > const &f2, Array4< T const > const &f1, Array4< T const > const &f3, Array4< int const > const &osm, Box const &vbox) noexcept
 
template<typename T >
AMREX_FORCE_INLINE void abec_gsrb_with_line_solve (Box const &box, Array4< T > const &phi, Array4< T const > const &rhs, T alpha, Array4< T const > const &a, T dhx, T dhy, Array4< T const > const &bX, Array4< T const > const &bY, Array4< int const > const &m0, Array4< int const > const &m2, Array4< int const > const &m1, Array4< int const > const &m3, Array4< T const > const &f0, Array4< T const > const &f2, Array4< T const > const &f1, Array4< T const > const &f3, Box const &vbox, int redblack, int nc) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void overset_rescale_bcoef_y (Box const &box, Array4< T > const &bY, Array4< int const > const &osm, int ncomp, T osfac) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_adotx (int i, int j, int k, int n, Array4< T > const &y, Array4< T const > const &x, Array4< T const > const &a, Array4< T const > const &bX, Array4< T const > const &bY, Array4< T const > const &bZ, GpuArray< T, AMREX_SPACEDIM > const &dxinv, T alpha, T beta) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_adotx_os (int i, int j, int k, int n, Array4< T > const &y, Array4< T const > const &x, Array4< T const > const &a, Array4< T const > const &bX, Array4< T const > const &bY, Array4< T const > const &bZ, Array4< int const > const &osm, GpuArray< T, AMREX_SPACEDIM > const &dxinv, T alpha, T beta) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_normalize (int i, int j, int k, int n, Array4< T > const &x, Array4< T const > const &a, Array4< T const > const &bX, Array4< T const > const &bY, Array4< T const > const &bZ, GpuArray< T, AMREX_SPACEDIM > const &dxinv, T alpha, T beta) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_flux_z (Box const &box, Array4< T > const &fz, Array4< T const > const &sol, Array4< T const > const &bz, T fac, int ncomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlabeclap_flux_zface (Box const &box, Array4< T > const &fz, Array4< T const > const &sol, Array4< T const > const &bz, T fac, int zlen, int ncomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void abec_gsrb (int i, int j, int k, int n, Array4< T > const &phi, Array4< T const > const &rhs, T alpha, Array4< T const > const &a, T dhx, T dhy, T dhz, Array4< T const > const &bX, Array4< T const > const &bY, Array4< T const > const &bZ, Array4< int const > const &m0, Array4< int const > const &m2, Array4< int const > const &m4, Array4< int const > const &m1, Array4< int const > const &m3, Array4< int const > const &m5, Array4< T const > const &f0, Array4< T const > const &f2, Array4< T const > const &f4, Array4< T const > const &f1, Array4< T const > const &f3, Array4< T const > const &f5, Box const &vbox, int redblack) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void abec_gsrb_os (int i, int j, int k, int n, Array4< T > const &phi, Array4< T const > const &rhs, T alpha, Array4< T const > const &a, T dhx, T dhy, T dhz, Array4< T const > const &bX, Array4< T const > const &bY, Array4< T const > const &bZ, Array4< int const > const &m0, Array4< int const > const &m2, Array4< int const > const &m4, Array4< int const > const &m1, Array4< int const > const &m3, Array4< int const > const &m5, Array4< T const > const &f0, Array4< T const > const &f2, Array4< T const > const &f4, Array4< T const > const &f1, Array4< T const > const &f3, Array4< T const > const &f5, Array4< int const > const &osm, Box const &vbox, int redblack) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void abec_jacobi (int i, int j, int k, int n, Array4< T > const &phi, Array4< T const > const &rhs, Array4< T const > const &Ax, T alpha, Array4< T const > const &a, T dhx, T dhy, T dhz, Array4< T const > const &bX, Array4< T const > const &bY, Array4< T const > const &bZ, Array4< int const > const &m0, Array4< int const > const &m2, Array4< int const > const &m4, Array4< int const > const &m1, Array4< int const > const &m3, Array4< int const > const &m5, Array4< T const > const &f0, Array4< T const > const &f2, Array4< T const > const &f4, Array4< T const > const &f1, Array4< T const > const &f3, Array4< T const > const &f5, Box const &vbox) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void abec_jacobi_os (int i, int j, int k, int n, Array4< T > const &phi, Array4< T const > const &rhs, Array4< T const > const &Ax, T alpha, Array4< T const > const &a, T dhx, T dhy, T dhz, Array4< T const > const &bX, Array4< T const > const &bY, Array4< T const > const &bZ, Array4< int const > const &m0, Array4< int const > const &m2, Array4< int const > const &m4, Array4< int const > const &m1, Array4< int const > const &m3, Array4< int const > const &m5, Array4< T const > const &f0, Array4< T const > const &f2, Array4< T const > const &f4, Array4< T const > const &f1, Array4< T const > const &f3, Array4< T const > const &f5, Array4< int const > const &osm, Box const &vbox) noexcept
 
template<typename T >
AMREX_FORCE_INLINE void tridiagonal_solve (Array1D< T, 0, 31 > &a_ls, Array1D< T, 0, 31 > &b_ls, Array1D< T, 0, 31 > &c_ls, Array1D< T, 0, 31 > &r_ls, Array1D< T, 0, 31 > &u_ls, Array1D< T, 0, 31 > &gam, int ilen) noexcept
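
The line-solve smoother relies on this fixed-size Thomas-algorithm helper. A minimal standalone sketch, assuming `ilen` names the last valid index (at most 31) and that `gam` is scratch space for the forward sweep:

```cpp
// Hedged sketch: solve a 9-entry tridiagonal system with sub/main/super
// diagonals (a, b, c) and right-hand side r; the solution lands in u.
amrex::Array1D<amrex::Real,0,31> a, b, c, r, u, gam;
int ilen = 8;
for (int m = 0; m <= ilen; ++m) {
    a(m) = -1.0; b(m) = 2.0; c(m) = -1.0; r(m) = 1.0;
}
amrex::tridiagonal_solve(a, b, c, r, u, gam, ilen);
```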
 
template<typename T >
AMREX_FORCE_INLINE void abec_gsrb_with_line_solve (Box const &box, Array4< T > const &phi, Array4< T const > const &rhs, T alpha, Array4< T const > const &a, T dhx, T dhy, T dhz, Array4< T const > const &bX, Array4< T const > const &bY, Array4< T const > const &bZ, Array4< int const > const &m0, Array4< int const > const &m2, Array4< int const > const &m4, Array4< int const > const &m1, Array4< int const > const &m3, Array4< int const > const &m5, Array4< T const > const &f0, Array4< T const > const &f2, Array4< T const > const &f4, Array4< T const > const &f1, Array4< T const > const &f3, Array4< T const > const &f5, Box const &vbox, int redblack, int nc) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void overset_rescale_bcoef_z (Box const &box, Array4< T > const &bZ, Array4< int const > const &osm, int ncomp, T osfac) noexcept
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_adotx (Box const &box, Array4< RT > const &y, Array4< RT const > const &x, Array4< RT const > const &a, GpuArray< RT, AMREX_SPACEDIM > const &dxinv, RT alpha, RT beta, int ncomp) noexcept
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_adotx_m (Box const &box, Array4< RT > const &y, Array4< RT const > const &x, Array4< RT const > const &a, GpuArray< RT, AMREX_SPACEDIM > const &dxinv, RT alpha, RT beta, RT dx, RT probxlo, int ncomp) noexcept
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_normalize (Box const &box, Array4< RT > const &x, Array4< RT const > const &a, GpuArray< RT, AMREX_SPACEDIM > const &dxinv, RT alpha, RT beta, int ncomp) noexcept
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_normalize_m (Box const &box, Array4< RT > const &x, Array4< RT const > const &a, GpuArray< RT, AMREX_SPACEDIM > const &dxinv, RT alpha, RT beta, RT dx, RT probxlo, int ncomp) noexcept
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_flux_x (Box const &box, Array4< RT > const &fx, Array4< RT const > const &sol, RT fac, int ncomp) noexcept
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_flux_x_m (Box const &box, Array4< RT > const &fx, Array4< RT const > const &sol, RT fac, RT dx, RT probxlo, int ncomp) noexcept
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_flux_xface (Box const &box, Array4< RT > const &fx, Array4< RT const > const &sol, RT fac, int xlen, int ncomp) noexcept
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_flux_xface_m (Box const &box, Array4< RT > const &fx, Array4< RT const > const &sol, RT fac, int xlen, RT dx, RT probxlo, int ncomp) noexcept
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_gsrb (Box const &box, Array4< RT > const &phi, Array4< RT const > const &rhs, RT alpha, RT dhx, Array4< RT const > const &a, Array4< RT const > const &f0, Array4< int const > const &m0, Array4< RT const > const &f1, Array4< int const > const &m1, Box const &vbox, int redblack, int ncomp) noexcept
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_gsrb_m (Box const &box, Array4< RT > const &phi, Array4< RT const > const &rhs, RT alpha, RT dhx, Array4< RT const > const &a, Array4< RT const > const &f0, Array4< int const > const &m0, Array4< RT const > const &f1, Array4< int const > const &m1, Box const &vbox, int redblack, RT dx, RT probxlo, int ncomp)
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_flux_y (Box const &box, Array4< RT > const &fy, Array4< RT const > const &sol, RT fac, int ncomp) noexcept
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_flux_yface (Box const &box, Array4< RT > const &fy, Array4< RT const > const &sol, RT fac, int ylen, int ncomp) noexcept
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_flux_z (Box const &box, Array4< RT > const &fz, Array4< RT const > const &sol, RT fac, int ncomp) noexcept
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_flux_zface (Box const &box, Array4< RT > const &fz, Array4< RT const > const &sol, RT fac, int zlen, int ncomp) noexcept
 
template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlalap_gsrb (Box const &box, Array4< RT > const &phi, Array4< RT const > const &rhs, RT alpha, RT dhx, RT dhy, RT dhz, Array4< RT const > const &a, Array4< RT const > const &f0, Array4< int const > const &m0, Array4< RT const > const &f1, Array4< int const > const &m1, Array4< RT const > const &f2, Array4< int const > const &m2, Array4< RT const > const &f3, Array4< int const > const &m3, Array4< RT const > const &f4, Array4< int const > const &m4, Array4< RT const > const &f5, Array4< int const > const &m5, Box const &vbox, int redblack, int ncomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int coarsen_overset_mask (Box const &bx, Array4< int > const &cmsk, Array4< int const > const &fmsk) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void coarsen_overset_mask (int i, int, int, Array4< int > const &cmsk, Array4< int const > const &fmsk) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlcurlcurl_adotx_x (int i, int j, int k, Array4< Real > const &Ax, Array4< Real const > const &ex, Array4< Real const > const &ey, Array4< Real const > const &ez, Real beta, GpuArray< Real, AMREX_SPACEDIM > const &adxinv)
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlcurlcurl_adotx_y (int i, int j, int k, Array4< Real > const &Ay, Array4< Real const > const &ex, Array4< Real const > const &ey, Array4< Real const > const &ez, Real beta, GpuArray< Real, AMREX_SPACEDIM > const &adxinv)
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlcurlcurl_adotx_z (int i, int j, int k, Array4< Real > const &Az, Array4< Real const > const &ex, Array4< Real const > const &ey, Array4< Real const > const &ez, Real beta, GpuArray< Real, AMREX_SPACEDIM > const &adxinv)
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlcurlcurl_1D (int i, int j, int k, Array4< Real > const &ex, Array4< Real > const &ey, Array4< Real > const &ez, Array4< Real const > const &rhsx, Array4< Real const > const &rhsy, Array4< Real const > const &rhsz, Real beta, GpuArray< Real, AMREX_SPACEDIM > const &adxinv, int color, CurlCurlDirichletInfo const &dinfo)
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlcurlcurl_1D (int i, int j, int k, Array4< Real > const &ex, Array4< Real > const &ey, Array4< Real > const &ez, Array4< Real const > const &rhsx, Array4< Real const > const &rhsy, Array4< Real const > const &rhsz, Array4< Real const > const &betax, Array4< Real const > const &betay, Array4< Real const > const &betaz, GpuArray< Real, AMREX_SPACEDIM > const &adxinv, int color, CurlCurlDirichletInfo const &dinfo)
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlcurlcurl_interpadd (int dir, int i, int j, int k, Array4< Real > const &fine, Array4< Real const > const &crse)
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlcurlcurl_restriction (int dir, int i, int j, int k, Array4< Real > const &crse, Array4< Real const > const &fine, CurlCurlDirichletInfo const &dinfo)
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlcurlcurl_bc_symmetry (int i, int j, int k, Orientation face, IndexType it, Array4< Real > const &a)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_adotx_centroid (Box const &box, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &a, Array4< Real const > const &bX, Array4< Real const > const &bY, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vfrc, Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &fcx, Array4< Real const > const &fcy, Array4< Real const > const &ccent, Array4< Real const > const &ba, Array4< Real const > const &bcent, Array4< Real const > const &beb, Array4< Real const > const &phieb, const int &domlo_x, const int &domlo_y, const int &domhi_x, const int &domhi_y, const bool &on_x_face, const bool &on_y_face, bool is_eb_dirichlet, bool is_eb_inhomog, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Real alpha, Real beta, int ncomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_adotx (Box const &box, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &a, Array4< Real const > const &bX, Array4< Real const > const &bY, Array4< const int > const &ccm, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vfrc, Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &fcx, Array4< Real const > const &fcy, Array4< Real const > const &ba, Array4< Real const > const &bc, Array4< Real const > const &beb, bool is_dirichlet, Array4< Real const > const &phieb, bool is_inhomog, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Real alpha, Real beta, int ncomp, bool beta_on_centroid, bool phi_on_centroid) noexcept
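
Both adotx variants apply the embedded-boundary discretization of the ABecLaplacian operator,

    \[ A\phi \;=\; \alpha\,a\,\phi \;-\; \beta\,\nabla\cdot(b\,\nabla\phi), \]

where, in cut cells, the face fluxes are weighted by the area fractions (apx, apy) and the divergence by the volume fraction (vfrc); the centroid variant additionally interpolates fluxes from face centers to face centroids (fcx, fcy). The Dirichlet flags select whether an EB boundary value (phieb) contributes to the stencil.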
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_ebflux (int i, int j, int k, int n, Array4< Real > const &feb, Array4< Real const > const &x, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vfrc, Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &bc, Array4< Real const > const &beb, Array4< Real const > const &phieb, bool is_inhomog, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_gsrb (Box const &box, Array4< Real > const &phi, Array4< Real const > const &rhs, Real alpha, Array4< Real const > const &a, Real dhx, Real dhy, Real dh, GpuArray< Real, AMREX_SPACEDIM > const &dx, Array4< Real const > const &bX, Array4< Real const > const &bY, Array4< int const > const &m0, Array4< int const > const &m2, Array4< int const > const &m1, Array4< int const > const &m3, Array4< Real const > const &f0, Array4< Real const > const &f2, Array4< Real const > const &f1, Array4< Real const > const &f3, Array4< const int > const &ccm, Array4< Real const > const &beb, EBData const &ebdata, bool is_dirichlet, bool beta_on_centroid, bool phi_on_centroid, Box const &vbox, int redblack, int ncomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_flux_x (Box const &box, Array4< Real > const &fx, Array4< Real const > const &apx, Array4< Real const > const &fcx, Array4< Real const > const &sol, Array4< Real const > const &bX, Array4< int const > const &ccm, Real dhx, int face_only, int ncomp, Box const &xbox, bool beta_on_centroid, bool phi_on_centroid) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_flux_y (Box const &box, Array4< Real > const &fy, Array4< Real const > const &apy, Array4< Real const > const &fcy, Array4< Real const > const &sol, Array4< Real const > const &bY, Array4< int const > const &ccm, Real dhy, int face_only, int ncomp, Box const &ybox, bool beta_on_centroid, bool phi_on_centroid) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_flux_x_0 (Box const &box, Array4< Real > const &fx, Array4< Real const > const &apx, Array4< Real const > const &sol, Array4< Real const > const &bX, Real dhx, int face_only, int ncomp, Box const &xbox) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_flux_y_0 (Box const &box, Array4< Real > const &fy, Array4< Real const > const &apy, Array4< Real const > const &sol, Array4< Real const > const &bY, Real dhy, int face_only, int ncomp, Box const &ybox) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_grad_x (Box const &box, Array4< Real > const &gx, Array4< Real const > const &sol, Array4< Real const > const &apx, Array4< Real const > const &fcx, Array4< int const > const &ccm, Real dxi, int ncomp, bool phi_on_centroid) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_grad_y (Box const &box, Array4< Real > const &gy, Array4< Real const > const &sol, Array4< Real const > const &apy, Array4< Real const > const &fcy, Array4< int const > const &ccm, Real dyi, int ncomp, bool phi_on_centroid) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_grad_x_0 (Box const &box, Array4< Real > const &gx, Array4< Real const > const &sol, Array4< Real const > const &apx, Real dxi, int ncomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_grad_y_0 (Box const &box, Array4< Real > const &gy, Array4< Real const > const &sol, Array4< Real const > const &apy, Real dyi, int ncomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_normalize (Box const &box, Array4< Real > const &phi, Real alpha, Array4< Real const > const &a, Real dhx, Real dhy, Real dh, const amrex::GpuArray< Real, AMREX_SPACEDIM > &dx, Array4< Real const > const &bX, Array4< Real const > const &bY, Array4< const int > const &ccm, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vfrc, Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &fcx, Array4< Real const > const &fcy, Array4< Real const > const &ba, Array4< Real const > const &bc, Array4< Real const > const &beb, bool is_dirichlet, bool beta_on_centroid, int ncomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_adotx_centroid (Box const &box, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &a, Array4< Real const > const &bX, Array4< Real const > const &bY, Array4< Real const > const &bZ, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vfrc, Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &apz, Array4< Real const > const &fcx, Array4< Real const > const &fcy, Array4< Real const > const &fcz, Array4< Real const > const &ccent, Array4< Real const > const &ba, Array4< Real const > const &bcent, Array4< Real const > const &beb, Array4< Real const > const &phieb, const int &domlo_x, const int &domlo_y, const int &domlo_z, const int &domhi_x, const int &domhi_y, const int &domhi_z, const bool &on_x_face, const bool &on_y_face, const bool &on_z_face, bool is_eb_dirichlet, bool is_eb_inhomog, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Real alpha, Real beta, int ncomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_adotx (Box const &box, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &a, Array4< Real const > const &bX, Array4< Real const > const &bY, Array4< Real const > const &bZ, Array4< const int > const &ccm, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vfrc, Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &apz, Array4< Real const > const &fcx, Array4< Real const > const &fcy, Array4< Real const > const &fcz, Array4< Real const > const &ba, Array4< Real const > const &bc, Array4< Real const > const &beb, bool is_dirichlet, Array4< Real const > const &phieb, bool is_inhomog, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Real alpha, Real beta, int ncomp, bool beta_on_centroid, bool phi_on_centroid) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_ebflux (int i, int j, int k, int n, Array4< Real > const &feb, Array4< Real const > const &x, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vfrc, Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &apz, Array4< Real const > const &bc, Array4< Real const > const &beb, Array4< Real const > const &phieb, bool is_inhomog, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_gsrb (Box const &box, Array4< Real > const &phi, Array4< Real const > const &rhs, Real alpha, Array4< Real const > const &a, Real dhx, Real dhy, Real dhz, Array4< Real const > const &bX, Array4< Real const > const &bY, Array4< Real const > const &bZ, Array4< int const > const &m0, Array4< int const > const &m2, Array4< int const > const &m4, Array4< int const > const &m1, Array4< int const > const &m3, Array4< int const > const &m5, Array4< Real const > const &f0, Array4< Real const > const &f2, Array4< Real const > const &f4, Array4< Real const > const &f1, Array4< Real const > const &f3, Array4< Real const > const &f5, Array4< const int > const &ccm, Array4< Real const > const &beb, EBData const &ebdata, bool is_dirichlet, bool beta_on_centroid, bool phi_on_centroid, Box const &vbox, int redblack, int ncomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_flux_z (Box const &box, Array4< Real > const &fz, Array4< Real const > const &apz, Array4< Real const > const &fcz, Array4< Real const > const &sol, Array4< Real const > const &bZ, Array4< int const > const &ccm, Real dhz, int face_only, int ncomp, Box const &zbox, bool beta_on_centroid, bool phi_on_centroid) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_flux_z_0 (Box const &box, Array4< Real > const &fz, Array4< Real const > const &apz, Array4< Real const > const &sol, Array4< Real const > const &bZ, Real dhz, int face_only, int ncomp, Box const &zbox) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_grad_z (Box const &box, Array4< Real > const &gz, Array4< Real const > const &sol, Array4< Real const > const &apz, Array4< Real const > const &fcz, Array4< int const > const &ccm, Real dzi, int ncomp, bool phi_on_centroid) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_grad_z_0 (Box const &box, Array4< Real > const &gz, Array4< Real const > const &sol, Array4< Real const > const &apz, Real dzi, int ncomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_normalize (Box const &box, Array4< Real > const &phi, Real alpha, Array4< Real const > const &a, Real dhx, Real dhy, Real dhz, Array4< Real const > const &bX, Array4< Real const > const &bY, Array4< Real const > const &bZ, Array4< const int > const &ccm, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vfrc, Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &apz, Array4< Real const > const &fcx, Array4< Real const > const &fcy, Array4< Real const > const &fcz, Array4< Real const > const &ba, Array4< Real const > const &bc, Array4< Real const > const &beb, bool is_dirichlet, bool beta_on_centroid, int ncomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_apply_bc_x (int side, Box const &box, int blen, Array4< Real > const &phi, Array4< int const > const &mask, Array4< Real const > const &area, BoundCond bct, Real bcl, Array4< Real const > const &bcval, int maxorder, Real dxinv, int inhomog, int icomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_apply_bc_y (int side, Box const &box, int blen, Array4< Real > const &phi, Array4< int const > const &mask, Array4< Real const > const &area, BoundCond bct, Real bcl, Array4< Real const > const &bcval, int maxorder, Real dyinv, int inhomog, int icomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebabeclap_apply_bc_z (int side, Box const &box, int blen, Array4< Real > const &phi, Array4< int const > const &mask, Array4< Real const > const &area, BoundCond bct, Real bcl, Array4< Real const > const &bcval, int maxorder, Real dzinv, int inhomog, int icomp) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_adotx (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< int const > const &, Real) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_gsrb (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< int const > const &, Real, int) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_sig_adotx (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< int const > const &, Array4< Real const > const &, Real) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_sig_gsrb (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< int const > const &, Array4< Real const > const &, Real, int) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_scale_rhs (int i, int j, int, Array4< Real > const &rhs, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy) noexcept
 
template<typename F >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_adotx_eb_doit (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, F const &xeb, Real bx, Real by) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_adotx_eb (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Real xeb, Real bx, Real by) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_adotx_eb (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &xeb, Real bx, Real by) noexcept
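
The _doit template receives the EB Dirichlet value as a callable F, so the scalar and Array4 overloads above can share a single implementation. A self-contained sketch of that forwarding pattern (names hypothetical, not the AMReX code):

    // One template does the work, taking the boundary value as a
    // callable; thin overloads adapt a scalar or an array to it.
    template <typename F>
    double eval_doit (int i, F const& xeb) { return 2.0 * xeb(i); }

    double eval (int i, double xeb)           // uniform boundary value
    { return eval_doit(i, [=] (int) { return xeb; }); }

    double eval (int i, const double* xeb)    // per-point boundary value
    { return eval_doit(i, [=] (int ii) { return xeb[ii]; }); }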
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_adotx (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< int const > const &dmsk, Real bx, Real by) noexcept
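
In this non-EB overload (and away from the embedded boundary in the EB variants), the stencil is presumably the standard second-order finite-difference Laplacian; in 2D, with the 1/dx^2 and 1/dy^2 factors absorbed into bx and by:

    \[ y_{i,j} \;=\; b_x\,(x_{i-1,j} - 2x_{i,j} + x_{i+1,j}) \;+\; b_y\,(x_{i,j-1} - 2x_{i,j} + x_{i,j+1}). \]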
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_gsrb_eb (int i, int j, int k, Array4< Real > const &x, Array4< Real const > const &rhs, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Real bx, Real by, int redblack) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_gsrb (int i, int j, int k, Array4< Real > const &x, Array4< Real const > const &rhs, Array4< int const > const &dmsk, Real bx, Real by, int redblack) noexcept
 
template<typename F >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_adotx_rz_eb_doit (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, F const &xeb, Real sigr, Real dr, Real dz, Real rlo, Real alpha) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_adotx_rz_eb (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Real xeb, Real sigr, Real dr, Real dz, Real rlo, Real alpha) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_adotx_rz_eb (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &xeb, Real sigr, Real dr, Real dz, Real rlo, Real alpha) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_adotx_rz (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< int const > const &dmsk, Real sigr, Real dr, Real dz, Real rlo, Real alpha) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_gsrb_rz_eb (int i, int j, int k, Array4< Real > const &x, Array4< Real const > const &rhs, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Real sigr, Real dr, Real dz, Real rlo, int redblack, Real alpha) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_gsrb_rz (int i, int j, int k, Array4< Real > const &x, Array4< Real const > const &rhs, Array4< int const > const &dmsk, Real sigr, Real dr, Real dz, Real rlo, int redblack, Real alpha) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_sig_adotx (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< int const > const &dmsk, Array4< Real const > const &sig, Real bx, Real by) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_sig_gsrb (int i, int j, int k, Array4< Real > const &x, Array4< Real const > const &rhs, Array4< int const > const &dmsk, Array4< Real const > const &sig, Real bx, Real by, int redblack) noexcept
 
template<typename F >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_sig_adotx_eb_doit (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &sig, Array4< Real const > const &vfrc, F const &xeb, Real bx, Real by) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_sig_adotx_eb (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &sig, Array4< Real const > const &vfrc, Real xeb, Real bx, Real by) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_sig_adotx_eb (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &sig, Array4< Real const > const &vfrc, Array4< Real const > const &xeb, Real bx, Real by) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_sig_gsrb_eb (int i, int j, int k, Array4< Real > const &x, Array4< Real const > const &rhs, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &sig, Array4< Real const > const &vfrc, Real bx, Real by, int redblack) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_scale_rhs (int i, int j, int k, Array4< Real > const &rhs, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &ecz) noexcept
 
template<typename F >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_adotx_eb_doit (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &ecz, F const &xeb, Real bx, Real by, Real bz) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_adotx_eb (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &ecz, Real xeb, Real bx, Real by, Real bz) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_adotx_eb (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &ecz, Array4< Real const > const &xeb, Real bx, Real by, Real bz) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_adotx (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< int const > const &dmsk, Real bx, Real by, Real bz) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_gsrb_eb (int i, int j, int k, Array4< Real > const &x, Array4< Real const > const &rhs, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &ecz, Real bx, Real by, Real bz, int redblack) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_gsrb (int i, int j, int k, Array4< Real > const &x, Array4< Real const > const &rhs, Array4< int const > const &dmsk, Real bx, Real by, Real bz, int redblack) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_sig_adotx (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< int const > const &dmsk, Array4< Real const > const &sig, Real bx, Real by, Real bz) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_sig_gsrb (int i, int j, int k, Array4< Real > const &x, Array4< Real const > const &rhs, Array4< int const > const &dmsk, Array4< Real const > const &sig, Real bx, Real by, Real bz, int redblack) noexcept
 
template<typename F >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_sig_adotx_eb_doit (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &ecz, Array4< Real const > const &sig, Array4< Real const > const &vfrc, F const &xeb, Real bx, Real by, Real bz) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_sig_adotx_eb (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &ecz, Array4< Real const > const &sig, Array4< Real const > const &vfrc, Real xeb, Real bx, Real by, Real bz) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_sig_adotx_eb (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &ecz, Array4< Real const > const &sig, Array4< Real const > const &vfrc, Array4< Real const > const &xeb, Real bx, Real by, Real bz) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_sig_gsrb_eb (int i, int j, int k, Array4< Real > const &x, Array4< Real const > const &rhs, Array4< Real const > const &levset, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &ecy, Array4< Real const > const &ecz, Array4< Real const > const &sig, Array4< Real const > const &vfrc, Real bx, Real by, Real bz, int redblack) noexcept
 
template<typename F >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_grad_x_doit (int i, int j, int k, Array4< Real > const &px, Array4< Real const > const &p, Array4< int const > const &dmsk, Array4< Real const > const &ecx, F const &phieb, Real dxi)
 
template<typename F >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_grad_y_doit (int i, int j, int k, Array4< Real > const &py, Array4< Real const > const &p, Array4< int const > const &dmsk, Array4< Real const > const &ecy, F const &phieb, Real dyi)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_grad_x (Box const &b, Array4< Real > const &px, Array4< Real const > const &p, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Real phieb, Real dxi)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_grad_x (Box const &b, Array4< Real > const &px, Array4< Real const > const &p, Array4< int const > const &dmsk, Array4< Real const > const &ecx, Array4< Real const > const &phieb, Real dxi)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_grad_y (Box const &b, Array4< Real > const &py, Array4< Real const > const &p, Array4< int const > const &dmsk, Array4< Real const > const &ecy, Real phieb, Real dyi)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_grad_y (Box const &b, Array4< Real > const &py, Array4< Real const > const &p, Array4< int const > const &dmsk, Array4< Real const > const &ecy, Array4< Real const > const &phieb, Real dyi)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebndfdlap_grad_x (Box const &b, Array4< Real > const &px, Array4< Real const > const &p, Real dxi)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_cross_terms_fx (Box const &box, Array4< Real > const &fx, Array4< Real const > const &vel, Array4< Real const > const &etax, Array4< Real const > const &kapx, Array4< Real const > const &apx, Array4< EBCellFlag const > const &flag, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_cross_terms_fy (Box const &box, Array4< Real > const &fy, Array4< Real const > const &vel, Array4< Real const > const &etay, Array4< Real const > const &kapy, Array4< Real const > const &apy, Array4< EBCellFlag const > const &flag, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_cross_terms_fx (Box const &box, Array4< Real > const &fx, Array4< Real const > const &vel, Array4< Real const > const &etax, Array4< Real const > const &kapx, Array4< Real const > const &apx, Array4< EBCellFlag const > const &flag, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvxlo, Array4< Real const > const &bvxhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_cross_terms_fy (Box const &box, Array4< Real > const &fy, Array4< Real const > const &vel, Array4< Real const > const &etay, Array4< Real const > const &kapy, Array4< Real const > const &apy, Array4< EBCellFlag const > const &flag, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvylo, Array4< Real const > const &bvyhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_cross_terms (Box const &box, Array4< Real > const &Ax, Array4< Real const > const &fx, Array4< Real const > const &fy, Array4< Real const > const &vel, Array4< Real const > const &velb, Array4< Real const > const &etab, Array4< Real const > const &kapb, Array4< int const > const &ccm, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vol, Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &fcx, Array4< Real const > const &fcy, Array4< Real const > const &bc, bool is_dirichlet, bool is_inhomog, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Real bscalar) noexcept
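
For context, as a hedged summary consistent with the tensor-solver naming: the viscous stress being applied is

    \[ \tau \;=\; \eta\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{T}\right) \;+\; \left(\kappa - \tfrac{2}{3}\eta\right)(\nabla\cdot\mathbf{u})\,I, \]

and the cross_terms kernels add the transpose-gradient and divergence contributions that the component-wise ABecLaplacian fluxes (fx, fy) do not already contain.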
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_flux_0 (Box const &box, Array4< Real > const &Ax, Array4< Real const > const &fx, Array4< Real const > const &ap, Real bscalar) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_flux_x (Box const &box, Array4< Real > const &Ax, Array4< Real const > const &fx, Array4< Real const > const &apx, Array4< Real const > const &fcx, Real const bscalar, Array4< int const > const &ccm, int face_only, Box const &xbox) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_flux_y (Box const &box, Array4< Real > const &Ay, Array4< Real const > const &fy, Array4< Real const > const &apy, Array4< Real const > const &fcy, Real const bscalar, Array4< int const > const &ccm, int face_only, Box const &ybox) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_vel_grads_fx (Box const &box, Array4< Real > const &fx, Array4< Real const > const &vel, Array4< Real const > const &apx, Array4< EBCellFlag const > const &flag, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_vel_grads_fy (Box const &box, Array4< Real > const &fy, Array4< Real const > const &vel, Array4< Real const > const &apy, Array4< EBCellFlag const > const &flag, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_vel_grads_fx (Box const &box, Array4< Real > const &fx, Array4< Real const > const &vel, Array4< Real const > const &apx, Array4< EBCellFlag const > const &flag, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvxlo, Array4< Real const > const &bvxhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_vel_grads_fy (Box const &box, Array4< Real > const &fy, Array4< Real const > const &vel, Array4< Real const > const &apy, Array4< EBCellFlag const > const &flag, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvylo, Array4< Real const > const &bvyhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_vel_grads_fx (Box const &box, Array4< Real > const &fx, Array4< Real const > const &vel, Array4< Real const > const &apx, Array4< EBCellFlag const > const &flag, Array4< int const > const &ccm, Array4< Real const > const &fcx, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_vel_grads_fy (Box const &box, Array4< Real > const &fy, Array4< Real const > const &vel, Array4< Real const > const &apy, Array4< EBCellFlag const > const &flag, Array4< int const > const &ccm, Array4< Real const > const &fcy, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_vel_grads_fx (Box const &box, Array4< Real > const &fx, Array4< Real const > const &vel, Array4< Real const > const &apx, Array4< EBCellFlag const > const &flag, Array4< int const > const &ccm, Array4< Real const > const &fcx, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvxlo, Array4< Real const > const &bvxhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_vel_grads_fy (Box const &box, Array4< Real > const &fy, Array4< Real const > const &vel, Array4< Real const > const &apy, Array4< EBCellFlag const > const &flag, Array4< int const > const &ccm, Array4< Real const > const &fcy, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvylo, Array4< Real const > const &bvyhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlebtensor_dz_on_xface (int i, int j, int, int n, Array4< Real const > const &vel, Real dzi, Real whi, Real wlo, int khip, int khim, int klop, int klom) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlebtensor_dz_on_yface (int i, int j, int, int n, Array4< Real const > const &vel, Real dzi, Real whi, Real wlo, int khip, int khim, int klop, int klom) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlebtensor_dx_on_zface (int, int j, int k, int n, Array4< Real const > const &vel, Real dxi, Real whi, Real wlo, int ihip, int ihim, int ilop, int ilom) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlebtensor_dy_on_zface (int i, int, int k, int n, Array4< Real const > const &vel, Real dyi, Real whi, Real wlo, int jhip, int jhim, int jlop, int jlom) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_cross_terms_fz (Box const &box, Array4< Real > const &fz, Array4< Real const > const &vel, Array4< Real const > const &etaz, Array4< Real const > const &kapz, Array4< Real const > const &apz, Array4< EBCellFlag const > const &flag, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlebtensor_dz_on_xface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dzi, Array4< Real const > const &bvxlo, Array4< Real const > const &bvxhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi, Real whi, Real wlo, int khip, int khim, int klop, int klom) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlebtensor_dz_on_yface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dzi, Array4< Real const > const &bvylo, Array4< Real const > const &bvyhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi, Real whi, Real wlo, int khip, int khim, int klop, int klom) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlebtensor_dx_on_zface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dxi, Array4< Real const > const &bvzlo, Array4< Real const > const &bvzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi, Real whi, Real wlo, int ihip, int ihim, int ilop, int ilom) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlebtensor_dy_on_zface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dyi, Array4< Real const > const &bvzlo, Array4< Real const > const &bvzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi, Real whi, Real wlo, int jhip, int jhim, int jlop, int jlom) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_cross_terms_fz (Box const &box, Array4< Real > const &fz, Array4< Real const > const &vel, Array4< Real const > const &etaz, Array4< Real const > const &kapz, Array4< Real const > const &apz, Array4< EBCellFlag const > const &flag, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvzlo, Array4< Real const > const &bvzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_cross_terms (Box const &box, Array4< Real > const &Ax, Array4< Real const > const &fx, Array4< Real const > const &fy, Array4< Real const > const &fz, Array4< Real const > const &vel, Array4< Real const > const &velb, Array4< Real const > const &etab, Array4< Real const > const &kapb, Array4< int const > const &ccm, Array4< EBCellFlag const > const &flag, Array4< Real const > const &vol, Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &apz, Array4< Real const > const &fcx, Array4< Real const > const &fcy, Array4< Real const > const &fcz, Array4< Real const > const &bc, bool is_dirichlet, bool is_inhomog, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Real bscalar) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_flux_z (Box const &box, Array4< Real > const &Az, Array4< Real const > const &fz, Array4< Real const > const &apz, Array4< Real const > const &fcz, Real const bscalar, Array4< int const > const &ccm, int face_only, Box const &zbox) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_vel_grads_fz (Box const &box, Array4< Real > const &fz, Array4< Real const > const &vel, Array4< Real const > const &apz, Array4< EBCellFlag const > const &flag, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_vel_grads_fz (Box const &box, Array4< Real > const &fz, Array4< Real const > const &vel, Array4< Real const > const &apz, Array4< EBCellFlag const > const &flag, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvzlo, Array4< Real const > const &bvzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_vel_grads_fz (Box const &box, Array4< Real > const &fz, Array4< Real const > const &vel, Array4< Real const > const &apz, Array4< EBCellFlag const > const &flag, Array4< int const > const &ccm, Array4< Real const > const &fcz, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlebtensor_vel_grads_fz (Box const &box, Array4< Real > const &fz, Array4< Real const > const &vel, Array4< Real const > const &apz, Array4< EBCellFlag const > const &flag, Array4< int const > const &ccm, Array4< Real const > const &fcz, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvzlo, Array4< Real const > const &bvzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlebtensor_weight (int d)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlebtensor_dy_on_xface (int i, int, int k, int n, Array4< Real const > const &vel, Real dyi, Real whi, Real wlo, int jhip, int jhim, int jlop, int jlom) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlebtensor_dx_on_yface (int, int j, int k, int n, Array4< Real const > const &vel, Real dxi, Real whi, Real wlo, int ihip, int ihim, int ilop, int ilom) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlebtensor_dy_on_xface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dyi, Array4< Real const > const &bvxlo, Array4< Real const > const &bvxhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi, Real whi, Real wlo, int jhip, int jhim, int jlop, int jlom) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlebtensor_dx_on_yface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dxi, Array4< Real const > const &bvylo, Array4< Real const > const &bvyhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi, Real whi, Real wlo, int ihip, int ihim, int ilop, int ilom) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_apply_bc_x (int side, Box const &box, int blen, Array4< T > const &phi, Array4< int const > const &mask, BoundCond bct, T bcl, Array4< T const > const &bcval, int maxorder, T dxinv, int inhomog, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_apply_bc_x (int side, int i, int j, int k, int blen, Array4< T > const &phi, Array4< int const > const &mask, BoundCond bct, T bcl, Array4< T const > const &bcval, int maxorder, T dxinv, int inhomog, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_apply_bc_y (int side, Box const &box, int blen, Array4< T > const &phi, Array4< int const > const &mask, BoundCond bct, T bcl, Array4< T const > const &bcval, int maxorder, T dyinv, int inhomog, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_apply_bc_y (int side, int i, int j, int k, int blen, Array4< T > const &phi, Array4< int const > const &mask, BoundCond bct, T bcl, Array4< T const > const &bcval, int maxorder, T dyinv, int inhomog, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_apply_bc_z (int side, Box const &box, int blen, Array4< T > const &phi, Array4< int const > const &mask, BoundCond bct, T bcl, Array4< T const > const &bcval, int maxorder, T dzinv, int inhomog, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_apply_bc_z (int side, int i, int j, int k, int blen, Array4< T > const &phi, Array4< int const > const &mask, BoundCond bct, T bcl, Array4< T const > const &bcval, int maxorder, T dzinv, int inhomog, int icomp) noexcept
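
The Box overloads loop internally; the (i, j, k) overloads are meant to be called from inside a kernel launch. A hedged usage sketch follows (the header paths and the side encoding are assumptions; consult the MLLinOp call sites for the real usage):

    #include <AMReX_GpuLaunch.H>
    #include <AMReX_MLLinOp_K.H>

    // Fill ghost cells along the low-x face of a box on the GPU.
    void apply_xlo_bc (amrex::Box const& blo, int blen,
                       amrex::Array4<amrex::Real> const& phi,
                       amrex::Array4<int const> const& mask,
                       amrex::BoundCond bct, amrex::Real bcl,
                       amrex::Array4<amrex::Real const> const& bcval,
                       int maxorder, amrex::Real dxinv,
                       int inhomog, int icomp)
    {
        amrex::ParallelFor(blo, [=] AMREX_GPU_DEVICE (int i, int j, int k)
        {
            // side = 0 taken to mean the low side here (assumption)
            amrex::mllinop_apply_bc_x(0, i, j, k, blen, phi, mask, bct,
                                      bcl, bcval, maxorder, dxinv,
                                      inhomog, icomp);
        });
    }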
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_comp_interp_coef0_x (int side, Box const &box, int blen, Array4< T > const &f, Array4< int const > const &mask, BoundCond bct, T bcl, int maxorder, T dxinv, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_comp_interp_coef0_x (int side, int i, int j, int k, int blen, Array4< T > const &f, Array4< int const > const &mask, BoundCond bct, T bcl, int maxorder, T dxinv, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_comp_interp_coef0_y (int side, Box const &box, int blen, Array4< T > const &f, Array4< int const > const &mask, BoundCond bct, T bcl, int maxorder, T dyinv, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_comp_interp_coef0_y (int side, int i, int j, int k, int blen, Array4< T > const &f, Array4< int const > const &mask, BoundCond bct, T bcl, int maxorder, T dyinv, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_comp_interp_coef0_z (int side, Box const &box, int blen, Array4< T > const &f, Array4< int const > const &mask, BoundCond bct, T bcl, int maxorder, T dzinv, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_comp_interp_coef0_z (int side, int i, int j, int k, int blen, Array4< T > const &f, Array4< int const > const &mask, BoundCond bct, T bcl, int maxorder, T dzinv, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_apply_innu_xlo (int i, int j, int k, Array4< T > const &rhs, Array4< int const > const &mask, Array4< T const > const &bcoef, BoundCond bct, T, Array4< T const > const &bcval, T fac, bool has_bcoef, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_apply_innu_xhi (int i, int j, int k, Array4< T > const &rhs, Array4< int const > const &mask, Array4< T const > const &bcoef, BoundCond bct, T, Array4< T const > const &bcval, T fac, bool has_bcoef, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_apply_innu_ylo (int i, int j, int k, Array4< T > const &rhs, Array4< int const > const &mask, Array4< T const > const &bcoef, BoundCond bct, T, Array4< T const > const &bcval, T fac, bool has_bcoef, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_apply_innu_ylo_m (int i, int j, int k, Array4< T > const &rhs, Array4< int const > const &mask, BoundCond bct, T, Array4< T const > const &bcval, T fac, T xlo, T dx, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_apply_innu_yhi (int i, int j, int k, Array4< T > const &rhs, Array4< int const > const &mask, Array4< T const > const &bcoef, BoundCond bct, T, Array4< T const > const &bcval, T fac, bool has_bcoef, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_apply_innu_yhi_m (int i, int j, int k, Array4< T > const &rhs, Array4< int const > const &mask, BoundCond bct, T, Array4< T const > const &bcval, T fac, T xlo, T dx, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_apply_innu_zlo (int i, int j, int k, Array4< T > const &rhs, Array4< int const > const &mask, Array4< T const > const &bcoef, BoundCond bct, T, Array4< T const > const &bcval, T fac, bool has_bcoef, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mllinop_apply_innu_zhi (int i, int j, int k, Array4< T > const &rhs, Array4< int const > const &mask, Array4< T const > const &bcoef, BoundCond bct, T, Array4< T const > const &bcval, T fac, bool has_bcoef, int icomp) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlmg_lin_cc_interp_r2 (Box const &bx, Array4< T > const &ff, Array4< T const > const &cc, int nc) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlmg_lin_cc_interp_r4 (Box const &bx, Array4< T > const &ff, Array4< T const > const &cc, int nc) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlmg_lin_nd_interp_r2 (int i, int, int, int n, Array4< T > const &fine, Array4< T const > const &crse) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlmg_lin_nd_interp_r4 (int i, int, int, int n, Array4< T > const &fine, Array4< T const > const &crse) noexcept
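
A hedged 1D illustration of the r2 nodal case: fine nodes coincident with coarse nodes copy the coarse value, and the in-between nodes average their two neighbors (the actual kernels also handle multiple components and higher dimensions):

    #include <vector>

    // 2:1 nodal linear interpolation in 1D. fine must have
    // 2*crse.size() - 1 entries.
    void nd_interp_r2_1d (std::vector<double>& fine,
                          const std::vector<double>& crse)
    {
        const int nc = static_cast<int>(crse.size());
        for (int i = 0; i < nc; ++i) {
            fine[2*i] = crse[i];
            if (i+1 < nc) { fine[2*i+1] = 0.5*(crse[i] + crse[i+1]); }
        }
    }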
 
void mlndabeclap_gauss_seidel_aa (Box const &, Array4< Real > const &, Array4< Real const > const &, Real, Real, Array4< Real const > const &, Array4< Real const > const &, Array4< int const > const &, GpuArray< Real, AMREX_SPACEDIM > const &) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlndabeclap_jacobi_aa (int, int, int, Array4< Real > const &, Real, Array4< Real const > const &, Real, Real, Array4< Real const > const &, Array4< Real const > const &, Array4< int const > const &, GpuArray< Real, AMREX_SPACEDIM > const &) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_zero_fine (int i, int, int, Array4< Real > const &phi, Array4< int const > const &msk, int fine_flag) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_avgdown_coeff_x (int i, int, int, Array4< Real > const &crse, Array4< Real const > const &fine) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_semi_avgdown_coeff (int i, int j, int k, Array4< Real > const &crse, Array4< Real const > const &fine, int) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlndlap_adotx_c (int i, int, int, Array4< Real const > const &x, Real sigma, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlndlap_adotx_ha (int i, int, int, Array4< Real const > const &x, Array4< Real const > const &sx, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlndlap_adotx_aa (int i, int j, int k, Array4< Real const > const &x, Array4< Real const > const &sx, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_normalize_ha (int i, int, int, Array4< Real > const &x, Array4< Real const > const &sx, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_normalize_aa (int i, int j, int k, Array4< Real > const &x, Array4< Real const > const &sx, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlndlap_jacobi_ha (int i, int, int, Array4< Real > const &sol, Real Ax, Array4< Real const > const &rhs, Array4< Real const > const &sx, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
void mlndlap_jacobi_ha (Box const &bx, Array4< Real > const &sol, Array4< Real const > const &Ax, Array4< Real const > const &rhs, Array4< Real const > const &sx, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlndlap_jacobi_aa (int i, int j, int k, Array4< Real > const &sol, Real Ax, Array4< Real const > const &rhs, Array4< Real const > const &sig, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
void mlndlap_jacobi_aa (Box const &bx, Array4< Real > const &sol, Array4< Real const > const &Ax, Array4< Real const > const &rhs, Array4< Real const > const &sig, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlndlap_jacobi_c (int i, int, int, Array4< Real > const &sol, Real Ax, Array4< Real const > const &rhs, Real sig, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
void mlndlap_jacobi_c (Box const &bx, Array4< Real > const &sol, Array4< Real const > const &Ax, Array4< Real const > const &rhs, Real sig, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
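
All of the jacobi kernels take a precomputed operator application Ax and perform a damped-Jacobi update; schematically, with a_ii the diagonal of the nodal operator and omega a damping factor (the exact factor lives in the sources):

    \[ x_i \;\leftarrow\; x_i \;+\; \omega\,\frac{\mathrm{rhs}_i - (Ax)_i}{a_{ii}}. \]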
 
void mlndlap_gauss_seidel_ha (Box const &bx, Array4< Real > const &sol, Array4< Real const > const &rhs, Array4< Real const > const &sx, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
void mlndlap_gauss_seidel_aa (Box const &bx, Array4< Real > const &sol, Array4< Real const > const &rhs, Array4< Real const > const &sx, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
void mlndlap_gauss_seidel_c (Box const &bx, Array4< Real > const &sol, Array4< Real const > const &rhs, Real sig, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
void mlndlap_gauss_seidel_with_line_solve_aa (Box const &, Array4< Real > const &, Array4< Real const > const &, Array4< Real const > const &, Array4< int const > const &, GpuArray< Real, AMREX_SPACEDIM > const &) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_interpadd_c (int i, int, int, Array4< Real > const &fine, Array4< Real const > const &crse, Array4< int const > const &msk) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_interpadd_aa (int i, int, int, Array4< Real > const &fine, Array4< Real const > const &crse, Array4< Real const > const &sig, Array4< int const > const &msk) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_semi_interpadd_aa (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< Real const > const &, Array4< int const > const &, int) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_interpadd_ha (int i, int j, int k, Array4< Real > const &fine, Array4< Real const > const &crse, Array4< Real const > const &sig, Array4< int const > const &msk) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_divu (int i, int, int, Array4< Real > const &rhs, Array4< Real const > const &vel, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Box const &, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlndlap_rhcc (int i, int, int, Array4< Real const > const &rhcc, Array4< int const > const &msk) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_mknewu (int i, int, int, Array4< Real > const &u, Array4< Real const > const &p, Array4< Real const > const &sig, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_mknewu_c (int i, int, int, Array4< Real > const &u, Array4< Real const > const &p, Real sig, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_rhcc_fine_contrib (int, int, int, Box const &, Array4< Real > const &, Array4< Real const > const &, Array4< int const > const &) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_divu_cf_contrib (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< Real const > const &, Array4< Real const > const &, Array4< int const > const &, Array4< int const > const &, Array4< int const > const &, GpuArray< Real, AMREX_SPACEDIM > const &, Box const &, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &, bool) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_crse_resid (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< int const > const &, Box const &, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &, bool) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_res_cf_contrib (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< Real const > const &, Array4< Real const > const &, Array4< int const > const &, Array4< int const > const &, Array4< int const > const &, Array4< Real const > const &, GpuArray< Real, AMREX_SPACEDIM > const &, Box const &, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &, bool) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_res_cf_contrib_cs (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< Real const > const &, Real, Array4< int const > const &, Array4< int const > const &, Array4< int const > const &, Array4< Real const > const &, GpuArray< Real, AMREX_SPACEDIM > const &, Box const &, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &, bool) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_set_stencil (Box const &, Array4< Real > const &, Array4< Real const > const &, GpuArray< Real, AMREX_SPACEDIM > const &) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_set_stencil_s0 (int, int, int, Array4< Real > const &) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_stencil_rap (int, int, int, Array4< Real > const &, Array4< Real const > const &) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlndlap_adotx_sten (int, int, int, Array4< Real const > const &, Array4< Real const > const &, Array4< int const > const &) noexcept
 
void mlndlap_gauss_seidel_sten (Box const &, Array4< Real > const &, Array4< Real const > const &, Array4< Real const > const &, Array4< int const > const &) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_interpadd_rap (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< Real const > const &, Array4< int const > const &) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_restriction_rap (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< Real const > const &, Array4< int const > const &) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE int mlndlap_color (int i, int, int)
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlndlap_gscolor_ha (int i, int j, int k, Array4< Real > const &sol, Array4< Real const > const &rhs, Array4< Real const > const &sx, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, int color) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlndlap_gscolor_aa (int i, int j, int k, Array4< Real > const &sol, Array4< Real const > const &rhs, Array4< Real const > const &sx, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, int color) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlndlap_gscolor_c (int i, int j, int k, Array4< Real > const &sol, Array4< Real const > const &rhs, Real sig, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, int color) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_gscolor_sten (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< Real const > const &, Array4< int const > const &, int) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_avgdown_coeff_y (int i, int j, int k, Array4< Real > const &crse, Array4< Real const > const &fine) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlndlap_adotx_ha (int i, int j, int k, Array4< Real const > const &x, Array4< Real const > const &sx, Array4< Real const > const &sy, Array4< int const > const &msk, bool is_rz, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlndlap_adotx_aa (int i, int j, int k, Array4< Real const > const &x, Array4< Real const > const &sig, Array4< int const > const &msk, bool is_rz, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlndlap_adotx_c (int i, int j, int k, Array4< Real const > const &x, Real sigma, Array4< int const > const &msk, bool is_rz, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_normalize_ha (int i, int j, int k, Array4< Real > const &x, Array4< Real const > const &sx, Array4< Real const > const &sy, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlndlap_jacobi_ha (int i, int j, int k, Array4< Real > const &sol, Real Ax, Array4< Real const > const &rhs, Array4< Real const > const &sx, Array4< Real const > const &sy, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
void mlndlap_jacobi_ha (Box const &bx, Array4< Real > const &sol, Array4< Real const > const &Ax, Array4< Real const > const &rhs, Array4< Real const > const &sx, Array4< Real const > const &sy, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
void mlndlap_gauss_seidel_ha (Box const &bx, Array4< Real > const &sol, Array4< Real const > const &rhs, Array4< Real const > const &sx, Array4< Real const > const &sy, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool is_rz) noexcept
 
void mlndlap_gauss_seidel_aa (Box const &bx, Array4< Real > const &sol, Array4< Real const > const &rhs, Array4< Real const > const &sig, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool is_rz) noexcept
 
void mlndlap_gauss_seidel_c (Box const &bx, Array4< Real > const &sol, Array4< Real const > const &rhs, Real sig, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool is_rz) noexcept
 
AMREX_FORCE_INLINE void tridiagonal_solve (Array1D< Real, 0, 31 > &a_ls, Array1D< Real, 0, 31 > &b_ls, Array1D< Real, 0, 31 > &c_ls, Array1D< Real, 0, 31 > &r_ls, Array1D< Real, 0, 31 > &u_ls, Array1D< Real, 0, 31 > &gam, int ilen) noexcept
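
 The line solver below is built on tridiagonal_solve. For orientation, here is a minimal sketch of the classic Thomas algorithm it appears to implement, assuming a_ls, b_ls, and c_ls hold the sub-, main-, and super-diagonals, r_ls the right-hand side, u_ls the solution, and gam a scratch array (the library version uses fixed-capacity Array1D<Real,0,31> storage); the exact conventions should be checked against the header.

// Hedged sketch of the Thomas algorithm (not the library routine itself).
// a = sub-diagonal, b = main diagonal, c = super-diagonal, r = rhs,
// u = solution, gam = scratch, n = system length; no pivoting, so the
// eliminated diagonal entries must stay nonzero.
#include <vector>

void thomas_solve (std::vector<double> const& a, std::vector<double> const& b,
                   std::vector<double> const& c, std::vector<double> const& r,
                   std::vector<double>& u, std::vector<double>& gam, int n)
{
    double bet = b[0];
    u[0] = r[0] / bet;
    for (int i = 1; i < n; ++i) {       // forward elimination
        gam[i] = c[i-1] / bet;
        bet = b[i] - a[i]*gam[i];
        u[i] = (r[i] - a[i]*u[i-1]) / bet;
    }
    for (int i = n-2; i >= 0; --i) {    // back substitution
        u[i] -= gam[i+1]*u[i+1];
    }
}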
 
void mlndlap_gauss_seidel_with_line_solve_aa (Box const &bx, Array4< Real > const &sol, Array4< Real const > const &rhs, Array4< Real const > const &sig, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool is_rz) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real aa_interp_line_x (Array4< Real const > const &crse, Array4< Real const > const &sig, int i, int j, int ic, int jc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real aa_interp_line_y (Array4< Real const > const &crse, Array4< Real const > const &sig, int i, int j, int ic, int jc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real aa_interp_face_xy (Array4< Real const > const &crse, Array4< Real const > const &sig, int i, int j, int ic, int jc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real ha_interp_face_xy (Array4< Real const > const &crse, Array4< Real const > const &sigx, Array4< Real const > const &sigy, int i, int j, int ic, int jc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_interpadd_ha (int i, int j, int, Array4< Real > const &fine, Array4< Real const > const &crse, Array4< Real const > const &sigx, Array4< Real const > const &sigy, Array4< int const > const &msk) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_divu (int i, int j, int k, Array4< Real > const &rhs, Array4< Real const > const &vel, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Box const &nodal_domain, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bclo, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bchi, bool is_rz) noexcept
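
 mlndlap_divu forms the nodal right-hand side as the discrete divergence of a cell-based velocity field. Away from boundaries in 2D this reduces to averaging one-sided differences over the four cells surrounding a node; a hedged interior-node sketch (the real kernel also applies the mask, the domain boundary conditions, and the RZ metric terms):

// Hedged 2D interior sketch: node (i,j) touches cells (i-1,j-1), (i,j-1),
// (i-1,j), (i,j); component 0 of vel is u, component 1 is v.
double nodal_divu (int i, int j,
                   amrex::Array4<amrex::Real const> const& vel,
                   double dxinv_x, double dxinv_y)
{
    double facx = 0.5*dxinv_x;
    double facy = 0.5*dxinv_y;
    return facx*(-vel(i-1,j-1,0,0) + vel(i,j-1,0,0)
                 -vel(i-1,j  ,0,0) + vel(i,j  ,0,0))
         + facy*(-vel(i-1,j-1,0,1) - vel(i,j-1,0,1)
                 +vel(i-1,j  ,0,1) + vel(i,j  ,0,1));
}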
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_mknewu (int i, int j, int k, Array4< Real > const &u, Array4< Real const > const &p, Array4< Real const > const &sig, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool is_rz) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_mknewu_c (int i, int j, int k, Array4< Real > const &u, Array4< Real const > const &p, Real sig, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool is_rz) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlndlap_sum_Df (int ii, int jj, Real facx, Real facy, Array4< Real const > const &vel, Box const &velbx, bool is_rz) noexcept
 
template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_divu_fine_contrib (int i, int j, int, Box const &fvbx, Box const &velbx, Array4< Real > const &rhs, Array4< Real const > const &vel, Array4< Real const > const &frhs, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool is_rz) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real neumann_scale (int i, int j, Box const &nddom, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bclo, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bchi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_divu_cf_contrib (int i, int j, int, Array4< Real > const &rhs, Array4< Real const > const &vel, Array4< Real const > const &fc, Array4< Real const > const &rhcc, Array4< int const > const &dmsk, Array4< int const > const &ndmsk, Array4< int const > const &ccmsk, bool is_rz, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Box const &ccdom_p, Box const &veldom, Box const &nddom, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bclo, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bchi) noexcept
 
template<typename P , typename S >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlndlap_sum_Ax (P const &pred, S const &sig, int i, int j, Real facx, Real facy, Array4< Real const > const &phi, bool is_rz) noexcept
 
template<int rr, typename S >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_Ax_fine_contrib_doit (S const &sig, int i, int j, Box const &ndbx, Box const &ccbx, Array4< Real > const &f, Array4< Real const > const &res, Array4< Real const > const &rhs, Array4< Real const > const &phi, Array4< int const > const &msk, bool is_rz, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_Ax_fine_contrib (int i, int j, int, Box const &ndbx, Box const &ccbx, Array4< Real > const &f, Array4< Real const > const &res, Array4< Real const > const &rhs, Array4< Real const > const &phi, Array4< Real const > const &sig, Array4< int const > const &msk, bool is_rz, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_Ax_fine_contrib_cs (int i, int j, int, Box const &ndbx, Box const &ccbx, Array4< Real > const &f, Array4< Real const > const &res, Array4< Real const > const &rhs, Array4< Real const > const &phi, Real const sig, Array4< int const > const &msk, bool is_rz, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_res_cf_contrib (int i, int j, int, Array4< Real > const &res, Array4< Real const > const &phi, Array4< Real const > const &rhs, Array4< Real const > const &sig, Array4< int const > const &dmsk, Array4< int const > const &ndmsk, Array4< int const > const &ccmsk, Array4< Real const > const &fc, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Box const &ccdom_p, Box const &nddom, bool is_rz, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bclo, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bchi, bool neumann_doubling) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_res_cf_contrib_cs (int i, int j, int, Array4< Real > const &res, Array4< Real const > const &phi, Array4< Real const > const &rhs, Real const sig, Array4< int const > const &dmsk, Array4< int const > const &ndmsk, Array4< int const > const &ccmsk, Array4< Real const > const &fc, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Box const &ccdom_p, Box const &nddom, bool is_rz, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bclo, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bchi, bool neumann_doubling) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlndlap_adotx_sten_doit (int i, int j, int k, Array4< Real const > const &x, Array4< Real const > const &sten) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_gauss_seidel_sten (int i, int j, int k, Array4< Real > const &sol, Array4< Real const > const &rhs, Array4< Real const > const &sten, Array4< int const > const &msk) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlndlap_gscolor_ha (int i, int j, int k, Array4< Real > const &sol, Array4< Real const > const &rhs, Array4< Real const > const &sx, Array4< Real const > const &sy, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, int color, bool is_rz) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlndlap_gscolor_aa (int i, int j, int k, Array4< Real > const &sol, Array4< Real const > const &rhs, Array4< Real const > const &sig, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, int color, bool is_rz) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlndlap_gscolor_c (int i, int j, int k, Array4< Real > const &sol, Array4< Real const > const &rhs, Real sig, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, int color, bool is_rz) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_avgdown_coeff_z (int i, int j, int k, Array4< Real > const &crse, Array4< Real const > const &fine) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlndlap_adotx_ha (int i, int j, int k, Array4< Real const > const &x, Array4< Real const > const &sx, Array4< Real const > const &sy, Array4< Real const > const &sz, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_normalize_ha (int i, int j, int k, Array4< Real > const &x, Array4< Real const > const &sx, Array4< Real const > const &sy, Array4< Real const > const &sz, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlndlap_jacobi_ha (int i, int j, int k, Array4< Real > const &sol, Real Ax, Array4< Real const > const &rhs, Array4< Real const > const &sx, Array4< Real const > const &sy, Array4< Real const > const &sz, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
void mlndlap_jacobi_ha (Box const &bx, Array4< Real > const &sol, Array4< Real const > const &Ax, Array4< Real const > const &rhs, Array4< Real const > const &sx, Array4< Real const > const &sy, Array4< Real const > const &sz, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
void mlndlap_gauss_seidel_ha (Box const &bx, Array4< Real > const &sol, Array4< Real const > const &rhs, Array4< Real const > const &sx, Array4< Real const > const &sy, Array4< Real const > const &sz, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real aa_interp_line_x (Array4< Real const > const &crse, Array4< Real const > const &sig, int i, int j, int k, int ic, int jc, int kc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real aa_interp_line_y (Array4< Real const > const &crse, Array4< Real const > const &sig, int i, int j, int k, int ic, int jc, int kc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real aa_interp_line_z (Array4< Real const > const &crse, Array4< Real const > const &sig, int i, int j, int k, int ic, int jc, int kc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real aa_interp_face_xy (Array4< Real const > const &crse, Array4< Real const > const &sig, int i, int j, int k, int ic, int jc, int kc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real aa_interp_face_xz (Array4< Real const > const &crse, Array4< Real const > const &sig, int i, int j, int k, int ic, int jc, int kc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real aa_interp_face_yz (Array4< Real const > const &crse, Array4< Real const > const &sig, int i, int j, int k, int ic, int jc, int kc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real ha_interp_face_xy (Array4< Real const > const &crse, Array4< Real const > const &sigx, Array4< Real const > const &sigy, int i, int j, int k, int ic, int jc, int kc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real ha_interp_face_xz (Array4< Real const > const &crse, Array4< Real const > const &sigx, Array4< Real const > const &sigz, int i, int j, int k, int ic, int jc, int kc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real ha_interp_face_yz (Array4< Real const > const &crse, Array4< Real const > const &sigy, Array4< Real const > const &sigz, int i, int j, int k, int ic, int jc, int kc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_interpadd_ha (int i, int j, int k, Array4< Real > const &fine, Array4< Real const > const &crse, Array4< Real const > const &sigx, Array4< Real const > const &sigy, Array4< Real const > const &sigz, Array4< int const > const &msk) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlndlap_sum_Df (int ii, int jj, int kk, Real facx, Real facy, Real facz, Array4< Real const > const &vel, Box const &velbx) noexcept
 
template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_divu_fine_contrib (int i, int j, int k, Box const &fvbx, Box const &velbx, Array4< Real > const &rhs, Array4< Real const > const &vel, Array4< Real const > const &frhs, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real neumann_scale (int i, int j, int k, Box const &nddom, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bclo, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bchi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_divu_cf_contrib (int i, int j, int k, Array4< Real > const &rhs, Array4< Real const > const &vel, Array4< Real const > const &fc, Array4< Real const > const &rhcc, Array4< int const > const &dmsk, Array4< int const > const &ndmsk, Array4< int const > const &ccmsk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Box const &ccdom_p, Box const &veldom, Box const &nddom, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bclo, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bchi) noexcept
 
template<typename P , typename S >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mlndlap_sum_Ax (P const &pred, S const &sig, int i, int j, int k, Real facx, Real facy, Real facz, Array4< Real const > const &phi) noexcept
 
template<int rr, typename S >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_Ax_fine_contrib_doit (S const &sig, int i, int j, int k, Box const &ndbx, Box const &ccbx, Array4< Real > const &f, Array4< Real const > const &res, Array4< Real const > const &rhs, Array4< Real const > const &phi, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_Ax_fine_contrib (int i, int j, int k, Box const &ndbx, Box const &ccbx, Array4< Real > const &f, Array4< Real const > const &res, Array4< Real const > const &rhs, Array4< Real const > const &phi, Array4< Real const > const &sig, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_Ax_fine_contrib_cs (int i, int j, int k, Box const &ndbx, Box const &ccbx, Array4< Real > const &f, Array4< Real const > const &res, Array4< Real const > const &rhs, Array4< Real const > const &phi, Real const sig, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_res_cf_contrib (int i, int j, int k, Array4< Real > const &res, Array4< Real const > const &phi, Array4< Real const > const &rhs, Array4< Real const > const &sig, Array4< int const > const &dmsk, Array4< int const > const &ndmsk, Array4< int const > const &ccmsk, Array4< Real const > const &fc, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Box const &ccdom_p, Box const &nddom, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bclo, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bchi, bool neumann_doubling) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_res_cf_contrib_cs (int i, int j, int k, Array4< Real > const &res, Array4< Real const > const &phi, Array4< Real const > const &rhs, Real const sig, Array4< int const > const &dmsk, Array4< int const > const &ndmsk, Array4< int const > const &ccmsk, Array4< Real const > const &fc, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Box const &ccdom_p, Box const &nddom, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bclo, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bchi, bool neumann_doubling) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mlndlap_gscolor_ha (int i, int j, int k, Array4< Real > const &sol, Array4< Real const > const &rhs, Array4< Real const > const &sx, Array4< Real const > const &sy, Array4< Real const > const &sz, Array4< int const > const &msk, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, int color) noexcept
 
void mlndlap_scale_neumann_bc (Real s, Box const &bx, Array4< Real > const &rhs, Box const &nddom, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &lobc, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &hibc) noexcept
 
void mlndlap_impose_neumann_bc (Box const &bx, Array4< Real > const &rhs, Box const &nddom, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &lobc, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &hibc) noexcept
 
void mlndlap_unimpose_neumann_bc (Box const &bx, Array4< Real > const &rhs, Box const &nddom, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &lobc, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &hibc) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_normalize_sten (int i, int j, int k, Array4< Real > const &x, Array4< Real const > const &sten, Array4< int const > const &msk, Real s0_norm0) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_jacobi_sten (int i, int j, int k, Array4< Real > const &sol, Real Ax, Array4< Real const > const &rhs, Array4< Real const > const &sten, Array4< int const > const &msk) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_jacobi_sten (Box const &bx, Array4< Real > const &sol, Array4< Real const > const &Ax, Array4< Real const > const &rhs, Array4< Real const > const &sten, Array4< int const > const &msk) noexcept
 
AMREX_FORCE_INLINE bool mlndlap_any_fine_sync_cells (Box const &bx, Array4< int const > const &msk, int fine_flag) noexcept
 
template<typename T >
void mlndlap_bc_doit (Box const &vbx, Array4< T > const &a, Box const &domain, GpuArray< bool, AMREX_SPACEDIM > const &bflo, GpuArray< bool, AMREX_SPACEDIM > const &bfhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_restriction (int i, int, int, Array4< Real > const &crse, Array4< Real const > const &fine, Array4< int const > const &msk) noexcept
 
template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_restriction (int i, int, int, Array4< Real > const &crse, Array4< Real const > const &fine, Array4< int const > const &msk, Box const &fdom, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bclo, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bchi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_semi_restriction (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< int const > const &, int) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_set_nodal_mask (int i, int, int, Array4< int > const &nmsk, Array4< int const > const &cmsk) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_set_dirichlet_mask (Box const &bx, Array4< int > const &dmsk, Array4< int const > const &omsk, Box const &dom, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bclo, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bchi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndlap_set_dot_mask (Box const &bx, Array4< Real > const &dmsk, Array4< int const > const &omsk, Box const &dom, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bclo, GpuArray< LinOpBCType, AMREX_SPACEDIM > const &bchi) noexcept
 
template<typename T >
void mlndlap_fillbc_cc (Box const &vbx, Array4< T > const &sigma, Box const &domain, GpuArray< LinOpBCType, AMREX_SPACEDIM > bclo, GpuArray< LinOpBCType, AMREX_SPACEDIM > bchi) noexcept
 
template<typename T >
void mlndlap_applybc (Box const &vbx, Array4< T > const &phi, Box const &domain, GpuArray< LinOpBCType, AMREX_SPACEDIM > bclo, GpuArray< LinOpBCType, AMREX_SPACEDIM > bchi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndtslap_interpadd (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< int const > const &) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndtslap_semi_interpadd (int, int, int, Array4< Real > const &, Array4< Real const > const &, Array4< int const > const &, int) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndtslap_adotx (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< int const > const &msk, GpuArray< Real, 3 > const &s) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndtslap_gauss_seidel (int i, int j, int k, Array4< Real > const &sol, Array4< Real const > const &rhs, Array4< int const > const &msk, GpuArray< Real, 3 > const &s) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndtslap_adotx (int i, int j, int k, Array4< Real > const &y, Array4< Real const > const &x, Array4< int const > const &msk, GpuArray< Real, 6 > const &s) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlndtslap_gauss_seidel (int i, int j, int k, Array4< Real > const &sol, Array4< Real const > const &rhs, Array4< int const > const &msk, GpuArray< Real, 6 > const &s) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_adotx (int i, Array4< T > const &y, Array4< T const > const &x, T dhx) noexcept
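
 For orientation, the 1D operator apply above is the standard three-point Laplacian stencil, with dhx playing the role of the 1/dx^2 scaling. A minimal sketch under that assumption:

// Hedged sketch of a 1D three-point Laplacian apply; the library kernel
// additionally handles the Array4 layout and any metric variants (_m, _os).
template <typename T>
void adotx_1d (T* y, const T* x, T dhx, int ilo, int ihi)
{
    for (int i = ilo; i <= ihi; ++i) {
        y[i] = dhx * (x[i-1] - T(2)*x[i] + x[i+1]);  // assumes ghost cells exist
    }
}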
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_adotx_os (int i, Array4< T > const &y, Array4< T const > const &x, Array4< int const > const &osm, T dhx) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_adotx_m (int i, Array4< T > const &y, Array4< T const > const &x, T dhx, T dx, T probxlo) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_flux_x (Box const &box, Array4< T > const &fx, Array4< T const > const &sol, T dxinv) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_flux_x_m (Box const &box, Array4< T > const &fx, Array4< T const > const &sol, T dxinv, T dx, T probxlo) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_flux_xface (Box const &box, Array4< T > const &fx, Array4< T const > const &sol, T dxinv, int xlen) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_flux_xface_m (Box const &box, Array4< T > const &fx, Array4< T const > const &sol, T dxinv, int xlen, T dx, T probxlo) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_gsrb (int i, int, int, Array4< T > const &phi, Array4< T const > const &rhs, T dhx, Array4< T const > const &f0, Array4< int const > const &m0, Array4< T const > const &f1, Array4< int const > const &m1, Box const &vbox, int redblack) noexcept
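
 The gsrb kernels perform red-black Gauss-Seidel relaxation: each sweep updates only the points whose parity matches redblack, so the two colors can be relaxed concurrently. A hedged 1D sketch of the coloring and update logic (the real kernel also folds in the boundary operator terms f0/m0 and f1/m1):

// Hedged sketch of one red-black sweep for the 1D Poisson problem.
template <typename T>
void gsrb_1d (T* phi, const T* rhs, T dhx, int ilo, int ihi, int redblack)
{
    for (int i = ilo; i <= ihi; ++i) {
        if ((i + redblack) % 2 == 0) {              // update one color only
            T gamma = T(-2)*dhx;                    // diagonal of the operator
            T res   = rhs[i] - dhx*(phi[i-1] + phi[i+1]);
            phi[i]  = res / gamma;
        }
    }
}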
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_gsrb_os (int i, int, int, Array4< T > const &phi, Array4< T const > const &rhs, Array4< int const > const &osm, T dhx, Array4< T const > const &f0, Array4< int const > const &m0, Array4< T const > const &f1, Array4< int const > const &m1, Box const &vbox, int redblack) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_gsrb_m (int i, int, int, Array4< T > const &phi, Array4< T const > const &rhs, T dhx, Array4< T const > const &f0, Array4< int const > const &m0, Array4< T const > const &f1, Array4< int const > const &m1, Box const &vbox, int redblack, T dx, T probxlo) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_jacobi (int i, int, int, Array4< T > const &phi, Array4< T const > const &rhs, Array4< T const > const &Ax, T dhx, Array4< T const > const &f0, Array4< int const > const &m0, Array4< T const > const &f1, Array4< int const > const &m1, Box const &vbox) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_jacobi_os (int i, int, int, Array4< T > const &phi, Array4< T const > const &rhs, Array4< T const > const &Ax, Array4< int const > const &osm, T dhx, Array4< T const > const &f0, Array4< int const > const &m0, Array4< T const > const &f1, Array4< int const > const &m1, Box const &vbox) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_jacobi_m (int i, int, int, Array4< T > const &phi, Array4< T const > const &rhs, Array4< T const > const &Ax, T dhx, Array4< T const > const &f0, Array4< int const > const &m0, Array4< T const > const &f1, Array4< int const > const &m1, Box const &vbox, T dx, T probxlo) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_normalize (int i, int, int, Array4< T > const &x, T dhx, T dx, T probxlo) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_adotx (int i, int j, int k, Array4< T > const &y, Array4< T const > const &x, T dhx, T dhy, T dhz) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_adotx_os (int i, int j, int k, Array4< T > const &y, Array4< T const > const &x, Array4< int const > const &osm, T dhx, T dhy, T dhz) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_flux_y (Box const &box, Array4< T > const &fy, Array4< T const > const &sol, T dyinv) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_flux_yface (Box const &box, Array4< T > const &fy, Array4< T const > const &sol, T dyinv, int ylen) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_flux_z (Box const &box, Array4< T > const &fz, Array4< T const > const &sol, T dzinv) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_flux_zface (Box const &box, Array4< T > const &fz, Array4< T const > const &sol, T dzinv, int zlen) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_gsrb (int i, int j, int k, Array4< T > const &phi, Array4< T const > const &rhs, T dhx, T dhy, T dhz, Array4< T const > const &f0, Array4< int const > const &m0, Array4< T const > const &f1, Array4< int const > const &m1, Array4< T const > const &f2, Array4< int const > const &m2, Array4< T const > const &f3, Array4< int const > const &m3, Array4< T const > const &f4, Array4< int const > const &m4, Array4< T const > const &f5, Array4< int const > const &m5, Box const &vbox, int redblack) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_gsrb_os (int i, int j, int k, Array4< T > const &phi, Array4< T const > const &rhs, Array4< int const > const &osm, T dhx, T dhy, T dhz, Array4< T const > const &f0, Array4< int const > const &m0, Array4< T const > const &f1, Array4< int const > const &m1, Array4< T const > const &f2, Array4< int const > const &m2, Array4< T const > const &f3, Array4< int const > const &m3, Array4< T const > const &f4, Array4< int const > const &m4, Array4< T const > const &f5, Array4< int const > const &m5, Box const &vbox, int redblack) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_jacobi (int i, int j, int k, Array4< T > const &phi, Array4< T const > const &rhs, Array4< T const > const &Ax, T dhx, T dhy, T dhz, Array4< T const > const &f0, Array4< int const > const &m0, Array4< T const > const &f1, Array4< int const > const &m1, Array4< T const > const &f2, Array4< int const > const &m2, Array4< T const > const &f3, Array4< int const > const &m3, Array4< T const > const &f4, Array4< int const > const &m4, Array4< T const > const &f5, Array4< int const > const &m5, Box const &vbox) noexcept
 
template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mlpoisson_jacobi_os (int i, int j, int k, Array4< T > const &phi, Array4< T const > const &rhs, Array4< T const > const &Ax, Array4< int const > const &osm, T dhx, T dhy, T dhz, Array4< T const > const &f0, Array4< int const > const &m0, Array4< T const > const &f1, Array4< int const > const &m1, Array4< T const > const &f2, Array4< int const > const &m2, Array4< T const > const &f3, Array4< int const > const &m3, Array4< T const > const &f4, Array4< int const > const &m4, Array4< T const > const &f5, Array4< int const > const &m5, Box const &vbox) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_fill_corners (int icorner, Box const &vbox, Array4< Real > const &vel, Array4< int const > const &mxlo, Array4< int const > const &mylo, Array4< int const > const &mxhi, Array4< int const > const &myhi, Array4< Real const > const &bcvalxlo, Array4< Real const > const &bcvalylo, Array4< Real const > const &bcvalxhi, Array4< Real const > const &bcvalyhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_cross_terms_fx (Box const &box, Array4< Real > const &fx, Array4< Real const > const &vel, Array4< Real const > const &etax, Array4< Real const > const &kapx, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_cross_terms_fy (Box const &box, Array4< Real > const &fy, Array4< Real const > const &vel, Array4< Real const > const &etay, Array4< Real const > const &kapy, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_cross_terms_fx (Box const &box, Array4< Real > const &fx, Array4< Real const > const &vel, Array4< Real const > const &etax, Array4< Real const > const &kapx, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvxlo, Array4< Real const > const &bvxhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_cross_terms_fy (Box const &box, Array4< Real > const &fy, Array4< Real const > const &vel, Array4< Real const > const &etay, Array4< Real const > const &kapy, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvylo, Array4< Real const > const &bvyhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_cross_terms (Box const &box, Array4< Real > const &Ax, Array4< Real const > const &fx, Array4< Real const > const &fy, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Real bscalar) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_cross_terms_os (Box const &box, Array4< Real > const &Ax, Array4< Real const > const &fx, Array4< Real const > const &fy, Array4< int const > const &osm, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Real bscalar) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_vel_grads_fx (Box const &box, Array4< Real > const &fx, Array4< Real const > const &vel, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_vel_grads_fy (Box const &box, Array4< Real > const &fy, Array4< Real const > const &vel, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_vel_grads_fx (Box const &box, Array4< Real > const &fx, Array4< Real const > const &vel, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvxlo, Array4< Real const > const &bvxhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_vel_grads_fy (Box const &box, Array4< Real > const &fy, Array4< Real const > const &vel, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvylo, Array4< Real const > const &bvyhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_fill_edges_xlo_ylo (int const i, int const j, int const k, Dim3 const &blen, Array4< Real > const &vel, Array4< int const > const &mxlo, Array4< int const > const &mylo, Array4< Real const > const &bcvalxlo, Array4< Real const > const &bcvalylo, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool xlo_domain, bool ylo_domain) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_fill_edges_xhi_ylo (int const i, int const j, int const k, Dim3 const &blen, Array4< Real > const &vel, Array4< int const > const &mxhi, Array4< int const > const &mylo, Array4< Real const > const &bcvalxhi, Array4< Real const > const &bcvalylo, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool xhi_domain, bool ylo_domain) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_fill_edges_xlo_yhi (int const i, int const j, int const k, Dim3 const &blen, Array4< Real > const &vel, Array4< int const > const &mxlo, Array4< int const > const &myhi, Array4< Real const > const &bcvalxlo, Array4< Real const > const &bcvalyhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool xlo_domain, bool yhi_domain) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_fill_edges_xhi_yhi (int const i, int const j, int const k, Dim3 const &blen, Array4< Real > const &vel, Array4< int const > const &mxhi, Array4< int const > const &myhi, Array4< Real const > const &bcvalxhi, Array4< Real const > const &bcvalyhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool xhi_domain, bool yhi_domain) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_fill_edges_xlo_zlo (int const i, int const j, int const k, Dim3 const &blen, Array4< Real > const &vel, Array4< int const > const &mxlo, Array4< int const > const &mzlo, Array4< Real const > const &bcvalxlo, Array4< Real const > const &bcvalzlo, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool xlo_domain, bool zlo_domain) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_fill_edges_xhi_zlo (int const i, int const j, int const k, Dim3 const &blen, Array4< Real > const &vel, Array4< int const > const &mxhi, Array4< int const > const &mzlo, Array4< Real const > const &bcvalxhi, Array4< Real const > const &bcvalzlo, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool xhi_domain, bool zlo_domain) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_fill_edges_xlo_zhi (int const i, int const j, int const k, Dim3 const &blen, Array4< Real > const &vel, Array4< int const > const &mxlo, Array4< int const > const &mzhi, Array4< Real const > const &bcvalxlo, Array4< Real const > const &bcvalzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool xlo_domain, bool zhi_domain) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_fill_edges_xhi_zhi (int const i, int const j, int const k, Dim3 const &blen, Array4< Real > const &vel, Array4< int const > const &mxhi, Array4< int const > const &mzhi, Array4< Real const > const &bcvalxhi, Array4< Real const > const &bcvalzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool xhi_domain, bool zhi_domain) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_fill_edges_ylo_zlo (int const i, int const j, int const k, Dim3 const &blen, Array4< Real > const &vel, Array4< int const > const &mylo, Array4< int const > const &mzlo, Array4< Real const > const &bcvalylo, Array4< Real const > const &bcvalzlo, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool ylo_domain, bool zlo_domain) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_fill_edges_yhi_zlo (int const i, int const j, int const k, Dim3 const &blen, Array4< Real > const &vel, Array4< int const > const &myhi, Array4< int const > const &mzlo, Array4< Real const > const &bcvalyhi, Array4< Real const > const &bcvalzlo, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool yhi_domain, bool zlo_domain) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_fill_edges_ylo_zhi (int const i, int const j, int const k, Dim3 const &blen, Array4< Real > const &vel, Array4< int const > const &mylo, Array4< int const > const &mzhi, Array4< Real const > const &bcvalylo, Array4< Real const > const &bcvalzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool ylo_domain, bool zhi_domain) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_fill_edges_yhi_zhi (int const i, int const j, int const k, Dim3 const &blen, Array4< Real > const &vel, Array4< int const > const &myhi, Array4< int const > const &mzhi, Array4< Real const > const &bcvalyhi, Array4< Real const > const &bcvalzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, bool yhi_domain, bool zhi_domain) noexcept
 
void mltensor_fill_edges (Box const &vbox, Array4< Real > const &vel, Array4< int const > const &mxlo, Array4< int const > const &mylo, Array4< int const > const &mzlo, Array4< int const > const &mxhi, Array4< int const > const &myhi, Array4< int const > const &mzhi, Array4< Real const > const &bcvalxlo, Array4< Real const > const &bcvalylo, Array4< Real const > const &bcvalzlo, Array4< Real const > const &bcvalxhi, Array4< Real const > const &bcvalyhi, Array4< Real const > const &bcvalzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_DEVICE AMREX_FORCE_INLINE void mltensor_fill_edges (int const bid, int const tid, int const bdim, Box const &vbox, Array4< Real > const &vel, Array4< int const > const &mxlo, Array4< int const > const &mylo, Array4< int const > const &mzlo, Array4< int const > const &mxhi, Array4< int const > const &myhi, Array4< int const > const &mzhi, Array4< Real const > const &bcvalxlo, Array4< Real const > const &bcvalylo, Array4< Real const > const &bcvalzlo, Array4< Real const > const &bcvalxhi, Array4< Real const > const &bcvalyhi, Array4< Real const > const &bcvalzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bcl, int inhomog, int maxorder, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mltensor_dz_on_xface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dzi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mltensor_dz_on_yface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dzi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mltensor_dx_on_zface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dxi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mltensor_dy_on_zface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dyi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_cross_terms_fz (Box const &box, Array4< Real > const &fz, Array4< Real const > const &vel, Array4< Real const > const &etaz, Array4< Real const > const &kapz, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mltensor_dz_on_xface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dzi, Array4< Real const > const &bvxlo, Array4< Real const > const &bvxhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mltensor_dz_on_yface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dzi, Array4< Real const > const &bvylo, Array4< Real const > const &bvyhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mltensor_dx_on_zface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dxi, Array4< Real const > const &bvzlo, Array4< Real const > const &bvzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mltensor_dy_on_zface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dyi, Array4< Real const > const &bvzlo, Array4< Real const > const &bvzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_cross_terms_fz (Box const &box, Array4< Real > const &fz, Array4< Real const > const &vel, Array4< Real const > const &etaz, Array4< Real const > const &kapz, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvzlo, Array4< Real const > const &bvzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_cross_terms (Box const &box, Array4< Real > const &Ax, Array4< Real const > const &fx, Array4< Real const > const &fy, Array4< Real const > const &fz, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Real bscalar) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_cross_terms_os (Box const &box, Array4< Real > const &Ax, Array4< Real const > const &fx, Array4< Real const > const &fy, Array4< Real const > const &fz, Array4< int const > const &osm, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Real bscalar) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_vel_grads_fz (Box const &box, Array4< Real > const &fz, Array4< Real const > const &vel, GpuArray< Real, AMREX_SPACEDIM > const &dxinv) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mltensor_vel_grads_fz (Box const &box, Array4< Real > const &fz, Array4< Real const > const &vel, GpuArray< Real, AMREX_SPACEDIM > const &dxinv, Array4< Real const > const &bvzlo, Array4< Real const > const &bvzhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mltensor_dy_on_xface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dyi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mltensor_dx_on_yface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dxi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mltensor_dy_on_xface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dyi, Array4< Real const > const &bvxlo, Array4< Real const > const &bvxhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real mltensor_dx_on_yface (int i, int j, int k, int n, Array4< Real const > const &vel, Real dxi, Array4< Real const > const &bvylo, Array4< Real const > const &bvyhi, Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &bct, Dim3 const &dlo, Dim3 const &dhi) noexcept
 
template<int N, typename T , typename M , typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int pcg_solve (T *AMREX_RESTRICT x, T *AMREX_RESTRICT r, M const &mat, P const &precond, int maxiter, T rel_tol)
 Preconditioned conjugate gradient solver. More...
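
 As a point of reference, a self-contained sketch of the preconditioned CG recurrences follows. It illustrates the method, not the exact functor interface of pcg_solve; the assumed callable forms mat(y, x) (apply the matrix) and precond(z, r) (apply the preconditioner) should be checked against the header.

// Hedged sketch of preconditioned CG on an N-dimensional system.
#include <cmath>

template <int N, typename T, typename M, typename P>
int pcg_sketch (T* x, T* r, M const& mat, P const& precond,
                int maxiter, T rel_tol)
{
    T z[N], p[N], q[N];
    precond(z, r);                                  // z = M^{-1} r
    for (int i = 0; i < N; ++i) { p[i] = z[i]; }
    T rz = T(0), r0 = T(0);
    for (int i = 0; i < N; ++i) { rz += r[i]*z[i]; r0 += r[i]*r[i]; }
    r0 = std::sqrt(r0);
    for (int iter = 0; iter < maxiter; ++iter) {
        mat(q, p);                                  // q = A p
        T pq = T(0);
        for (int i = 0; i < N; ++i) { pq += p[i]*q[i]; }
        T alpha = rz / pq;
        T rnorm = T(0);
        for (int i = 0; i < N; ++i) {
            x[i] += alpha*p[i];
            r[i] -= alpha*q[i];
            rnorm += r[i]*r[i];
        }
        if (std::sqrt(rnorm) <= rel_tol*r0) { return iter+1; }  // converged
        precond(z, r);
        T rz_new = T(0);
        for (int i = 0; i < N; ++i) { rz_new += r[i]*z[i]; }
        T beta = rz_new / rz;
        rz = rz_new;
        for (int i = 0; i < N; ++i) { p[i] = z[i] + beta*p[i]; }
    }
    return -1;  // not converged within maxiter
}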
 
template<class T >
constexpr decltype(T::is_particle_tile_data) IsParticleTileData ()
 
template<class T , class... Args>
constexpr bool IsParticleTileData (Args...)
 
template<typename A , typename B , std::enable_if_t< std::is_same_v< std::remove_cv_t< A >, std::remove_cv_t< B > >, int > = 0>
bool isSame (A const *pa, B const *pb)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE std::uint64_t SetParticleIDandCPU (Long id, int cpu) noexcept
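
 SetParticleIDandCPU packs a particle's id and originating rank into a single 64-bit word. A hedged sketch of one plausible packing, assuming a sign bit, 39 bits of |id|, and 24 bits of cpu; the authoritative layout is defined in AMReX_Particle.H.

// Hedged sketch only; the bit layout here is an assumption.
#include <cstdint>

std::uint64_t pack_id_cpu (long long id, int cpu)
{
    std::uint64_t sign = (id >= 0) ? 1ull : 0ull;   // bit 63 flags id >= 0
    std::uint64_t mag  = static_cast<std::uint64_t>(id >= 0 ? id : -id);
    return (sign << 63) | (mag << 24) | static_cast<std::uint64_t>(cpu);
}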
 
template<int NReal, int NInt>
std::ostream & operator<< (std::ostream &os, const Particle< NReal, NInt > &p)
 
template<int NReal>
std::ostream & operator<< (std::ostream &os, const Particle< NReal, 0 > &p)
 
template<int NInt>
std::ostream & operator<< (std::ostream &os, const Particle< 0, NInt > &p)
 
template<int NReal = 0, int NInt = 0>
std::ostream & operator<< (std::ostream &os, const Particle< 0, 0 > &p)
 
template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_deposit_cic (P const &p, int nc, amrex::Array4< amrex::Real > const &rho, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &plo, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &dxi)
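
 amrex_deposit_cic spreads each particle's contribution to the nearest cells with cloud-in-cell weights. The 1D weighting logic, sketched under the usual CIC conventions (plo the domain lower corner, dxi the inverse cell size):

// Hedged 1D sketch of cloud-in-cell deposition; the library kernel does
// this in AMREX_SPACEDIM dimensions and accumulates with atomic adds on GPU.
#include <cmath>

void deposit_cic_1d (double xp, double wp, double* rho,
                     double plo, double dxi)
{
    double lx = (xp - plo)*dxi + 0.5;      // shift so weights straddle faces
    int i = static_cast<int>(std::floor(lx));
    double xint = lx - i;                  // fraction toward cell i
    rho[i-1] += wp*(1.0 - xint);           // left neighbor
    rho[i]   += wp*xint;                   // right neighbor
}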
 
template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex_deposit_particle_dx_cic (P const &p, int nc, amrex::Array4< amrex::Real > const &rho, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &plo, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &dxi, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &pdxi)
 
template<class PC , class Buffer , std::enable_if_t< IsParticleContainer< PC >::value &&std::is_base_of_v< PolymorphicArenaAllocator< typename Buffer::value_type >, Buffer >, int > foo = 0>
void packBuffer (const PC &pc, const ParticleCopyOp &op, const ParticleCopyPlan &plan, Buffer &snd_buffer)
 
template<class PC , class Buffer , class UnpackPolicy , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
void unpackBuffer (PC &pc, const ParticleCopyPlan &plan, const Buffer &snd_buffer, UnpackPolicy const &policy)
 
template<class PC , class SndBuffer , class RcvBuffer , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
void communicateParticlesStart (const PC &pc, ParticleCopyPlan &plan, const SndBuffer &snd_buffer, RcvBuffer &rcv_buffer)
 
void communicateParticlesFinish (const ParticleCopyPlan &plan)
 
template<class PC , class Buffer , class UnpackPolicy , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
void unpackRemotes (PC &pc, const ParticleCopyPlan &plan, Buffer &rcv_buffer, UnpackPolicy const &policy)
 
template<class PC , class MF , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
void ParticleToMesh (PC const &pc, MF &mf, int lev, F const &f, bool zero_out_input=true)
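
 ParticleToMesh pairs naturally with the deposition helpers above. A hedged usage sketch, in which MyPC, pc, rho_mf, lev, plo, and dxi are assumed to be set up by the application:

// Hedged sketch: deposit one component per particle with CIC weights.
amrex::ParticleToMesh(pc, rho_mf, lev,
    [=] AMREX_GPU_DEVICE (const MyPC::ParticleType& p,
                          amrex::Array4<amrex::Real> const& rho)
    {
        amrex_deposit_cic(p, 1, rho, plo, dxi);  // nc = 1 component
    });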
 
template<class PC , class MF , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
void MeshToParticle (PC &pc, MF const &mf, int lev, F const &f)
 
Long CountSnds (const std::map< int, Vector< char > > &not_ours, Vector< Long > &Snds)
 
Long doHandShake (const std::map< int, Vector< char > > &not_ours, Vector< Long > &Snds, Vector< Long > &Rcvs)
 
Long doHandShakeLocal (const std::map< int, Vector< char > > &not_ours, const Vector< int > &neighbor_procs, Vector< Long > &Snds, Vector< Long > &Rcvs)
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto ReduceSum (PC const &pc, F &&f) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates over all particles on all levels. More...
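
 A typical call, sketched for a hypothetical container pc whose first real component stores a mass; the particle-argument lambda shown is one of the forms accepted by the particle_detail::call_f dispatch.

// Hedged usage sketch; MyPC and the meaning of rdata(0) are assumptions.
using ParticleType = MyPC::ParticleType;
amrex::Real total_mass = amrex::ReduceSum(pc,
    [=] AMREX_GPU_HOST_DEVICE (const ParticleType& p) -> amrex::Real
    {
        return p.rdata(0);   // sum the first real component
    });
// On multi-rank runs, combine the per-rank partial sums afterwards:
amrex::ParallelDescriptor::ReduceRealSum(total_mass);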
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto ReduceSum (PC const &pc, int lev, F &&f) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates only on the specified level. More...
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto ReduceSum (PC const &pc, int lev_min, int lev_max, F const &f) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates from the specified lev_min to lev_max. More...
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto ReduceMax (PC const &pc, F &&f) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates over all particles on all levels. More...
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto ReduceMax (PC const &pc, int lev, F &&f) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates only on the specified level. More...
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto ReduceMax (PC const &pc, int lev_min, int lev_max, F const &f) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates from the specified lev_min to lev_max. More...
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto ReduceMin (PC const &pc, F &&f) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates over all particles on all levels. More...
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto ReduceMin (PC const &pc, int lev, F &&f) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates only on the specified level. More...
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto ReduceMin (PC const &pc, int lev_min, int lev_max, F const &f) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates from the specified lev_min to lev_max. More...
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
bool ReduceLogicalAnd (PC const &pc, F &&f)
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates over all particles on all levels. More...
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
bool ReduceLogicalAnd (PC const &pc, int lev, F &&f)
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates only on the specified level. More...
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
bool ReduceLogicalAnd (PC const &pc, int lev_min, int lev_max, F const &f)
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates from the specified lev_min to lev_max. More...
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
bool ReduceLogicalOr (PC const &pc, F &&f)
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates over all particles on all levels. More...
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
bool ReduceLogicalOr (PC const &pc, int lev, F &&f)
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates only on the specified level. More...
 
template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
bool ReduceLogicalOr (PC const &pc, int lev_min, int lev_max, F const &f)
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates from the specified lev_min to lev_max. More...
 
template<class RD , class PC , class F , class ReduceOps , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
RD::Type ParticleReduce (PC const &pc, F &&f, ReduceOps &reduce_ops)
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates over all particles on all levels. More...
 
template<class RD , class PC , class F , class ReduceOps , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
RD::Type ParticleReduce (PC const &pc, int lev, F &&f, ReduceOps &reduce_ops)
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates only on the specified level. More...
 
template<class RD , class PC , class F , class ReduceOps , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
RD::Type ParticleReduce (PC const &pc, int lev_min, int lev_max, F const &f, ReduceOps &reduce_ops)
 A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates from the specified lev_min to lev_max. More...
 
template<typename T_ParticleType , int NAR, int NAI>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void copyParticle (const ParticleTileData< T_ParticleType, NAR, NAI > &dst, const ConstParticleTileData< T_ParticleType, NAR, NAI > &src, int src_i, int dst_i) noexcept
 A general single particle copying routine that can run on the GPU. More...
 
template<typename T_ParticleType , int NAR, int NAI>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void copyParticle (const ParticleTileData< T_ParticleType, NAR, NAI > &dst, const ParticleTileData< T_ParticleType, NAR, NAI > &src, int src_i, int dst_i) noexcept
 A general single particle copying routine that can run on the GPU. More...
 
template<typename T_ParticleType , int NAR, int NAI>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void swapParticle (const ParticleTileData< T_ParticleType, NAR, NAI > &dst, const ParticleTileData< T_ParticleType, NAR, NAI > &src, int src_i, int dst_i) noexcept
 A general single particle swapping routine that can run on the GPU. More...
 
template<typename DstTile , typename SrcTile >
void copyParticles (DstTile &dst, const SrcTile &src) noexcept
 Copy particles from src to dst. This version copies all the particles, writing them to the beginning of dst. More...
 
template<typename DstTile , typename SrcTile , typename Index , typename N , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
void copyParticles (DstTile &dst, const SrcTile &src, Index src_start, Index dst_start, N n) noexcept
 Copy particles from src to dst. This version copies n particles starting at index src_start, writing the result starting at dst_start. More...
 
template<typename DstTile , typename SrcTile , typename F >
void transformParticles (DstTile &dst, const SrcTile &src, F &&f) noexcept
 Apply the function f to all the particles in src, writing the result to dst. This version does all the particles in src. More...
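
 A hedged usage sketch; the functor is assumed to receive the destination and source tile data plus the two indices as (dst, src, src_i, dst_i), and the m_aos member access assumes a legacy AoS container. Check the particle_detail::call_f dispatch for the forms actually accepted.

// Hedged sketch: copy each particle while doubling its first real component.
amrex::transformParticles(dst_tile, src_tile,
    [=] AMREX_GPU_DEVICE (auto& dst, const auto& src, int src_i, int dst_i)
    {
        amrex::copyParticle(dst, src, src_i, dst_i);   // copy all data
        dst.m_aos[dst_i].rdata(0) *= 2.0;              // then transform
    });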
 
template<typename DstTile , typename SrcTile , typename Index , typename N , typename F , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
void transformParticles (DstTile &dst, const SrcTile &src, Index src_start, Index dst_start, N n, F const &f) noexcept
 Apply the function f to particles in src, writing the result to dst. This version applies the function to n particles starting at index src_start, writing the result starting at dst_start. More...
 
template<typename DstTile1 , typename DstTile2 , typename SrcTile , typename F >
void transformParticles (DstTile1 &dst1, DstTile2 &dst2, const SrcTile &src, F &&f) noexcept
 Apply the function f to all the particles in src, writing the results to dst1 and dst2. This version does all the particles in src. More...
 
template<typename DstTile1 , typename DstTile2 , typename SrcTile , typename Index , typename N , typename F , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
void transformParticles (DstTile1 &dst1, DstTile2 &dst2, const SrcTile &src, Index src_start, Index dst1_start, Index dst2_start, N n, F const &f) noexcept
 Apply the function f to particles in src, writing the results to dst1 and dst2. This version applies the function to n particles starting at index src_start, writing the result starting at dst1_start and dst2_start. More...
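A minimal sketch of the single-destination overload (the tile names and the AoS component access are assumptions; the callable receives the destination and source tile data plus source and destination indices):

    // Copy src_tile into dst_tile, doubling the first real component on the way.
    amrex::transformParticles(dst_tile, src_tile,
        [=] AMREX_GPU_DEVICE (auto& dst, const auto& src, int src_i, int dst_i) noexcept
        {
            amrex::copyParticle(dst, src, src_i, dst_i);
            dst.m_aos[dst_i].rdata(0) *= 2.0;
        });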
 
template<typename DstTile , typename SrcTile , typename Index , typename N , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
Index filterParticles (DstTile &dst, const SrcTile &src, const Index *mask) noexcept
 Conditionally copy particles from src to dst based on the value of mask. More...
 
template<typename DstTile , typename SrcTile , typename Index , typename N , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
Index filterParticles (DstTile &dst, const SrcTile &src, const Index *mask, Index src_start, Index dst_start, N n) noexcept
 Conditionally copy particles from src to dst based on the value of mask. This version conditionally copies n particles starting at index src_start, writing the result starting at dst_start. More...
 
template<typename DstTile , typename SrcTile , typename Pred , std::enable_if_t<!std::is_pointer_v< std::decay_t< Pred >>, int > foo = 0>
int filterParticles (DstTile &dst, const SrcTile &src, Pred &&p) noexcept
 Conditionally copy particles from src to dst based on a predicate. More...
 
template<typename DstTile , typename SrcTile , typename Pred , typename Index , typename N , std::enable_if_t<!std::is_pointer_v< std::decay_t< Pred >>, Index > nvccfoo = 0>
Index filterParticles (DstTile &dst, const SrcTile &src, Pred const &p, Index src_start, Index dst_start, N n) noexcept
 Conditionally copy particles from src to dst based on a predicate. This version conditionally copies n particles starting at index src_start, writing the result starting at dst_start. More...
 
template<typename DstTile , typename SrcTile , typename Index , typename F , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
Index filterAndTransformParticles (DstTile &dst, const SrcTile &src, Index *mask, F const &f, Index src_start, Index dst_start) noexcept
 Conditionally copy particles from src to dst based on the value of mask. A transformation will also be applied to the particles on copy. More...
 
template<typename DstTile , typename SrcTile , typename Index , typename F , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
Index filterAndTransformParticles (DstTile &dst, const SrcTile &src, Index *mask, F &&f) noexcept
 Conditionally copy particles from src to dst based on the value of mask. A transformation will also be applied to the particles on copy. More...
 
template<typename DstTile , typename SrcTile , typename Pred , typename F , std::enable_if_t<!std::is_pointer_v< std::decay_t< Pred >>, int > foo = 0>
int filterAndTransformParticles (DstTile &dst, const SrcTile &src, Pred &&p, F &&f) noexcept
 Conditionally copy particles from src to dst based on a predicate. A transformation will also be applied to the particles on copy. More...
 
template<typename DstTile1 , typename DstTile2 , typename SrcTile , typename Index , typename F , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
Index filterAndTransformParticles (DstTile1 &dst1, DstTile2 &dst2, const SrcTile &src, Index *mask, F const &f) noexcept
 Conditionally copy particles from src to dst1 and dst2 based on the value of mask. A transformation will also be applied to the particles on copy. More...
 
template<typename DstTile1 , typename DstTile2 , typename SrcTile , typename Pred , typename F , std::enable_if_t<!std::is_pointer_v< std::decay_t< Pred >>, int > foo = 0>
int filterAndTransformParticles (DstTile1 &dst1, DstTile2 &dst2, const SrcTile &src, Pred const &p, F &&f) noexcept
 Conditionally copy particles from src to dst1 and dst2 based on a predicate. A transformation will also be applied to the particles on copy. More...
 
template<typename DstTile , typename SrcTile , typename Pred , typename F , typename Index , std::enable_if_t<!std::is_pointer_v< std::decay_t< Pred >>, Index > nvccfoo = 0>
Index filterAndTransformParticles (DstTile &dst, const SrcTile &src, Pred const &p, F &&f, Index src_start, Index dst_start) noexcept
 Conditionally copy particles from src to dst based on a predicate, applying a transformation on copy. This version reads from src starting at index src_start and writes the result starting at dst_start. More...
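A minimal sketch of predicate-based filtering (the tile names and the id() test are illustrative; the predicate is assumed to receive the source tile data and a particle index):

    // Keep only particles with a positive (valid) id.
    int n_kept = amrex::filterParticles(dst_tile, src_tile,
        [=] AMREX_GPU_DEVICE (const auto& ptd, int i) noexcept -> int
        {
            return ptd.m_aos[i].id() > 0;
        });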
 
template<typename PTile , typename N , typename Index , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
void gatherParticles (PTile &dst, const PTile &src, N np, const Index *inds)
 Gathers particles into contiguous order from an arbitrary order. Specifically, the particle at index inds[i] in src will be copied to index i in dst. More...
 
template<typename PTile , typename N , typename Index , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
void scatterParticles (PTile &dst, const PTile &src, N np, const Index *inds)
 Scatters particles from contiguous order into an arbitrary order. Specifically, the particle at index i in src will be copied to index inds[i] in dst. More...
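A sketch of gatherParticles, assuming inds is a Gpu::DeviceVector<int> already holding the desired source indices and dst_tile has been resized to hold them:

    int np = static_cast<int>(inds.size());
    amrex::gatherParticles(dst_tile, src_tile, np, inds.data());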
 
IntVect computeRefFac (const ParGDBBase *a_gdb, int src_lev, int lev)
 
Vector< int > computeNeighborProcs (const ParGDBBase *a_gdb, int ngrow)
 
template<class Iterator , std::enable_if_t< IsParticleIterator< Iterator >::value, int > foo = 0>
int numParticlesOutOfRange (Iterator const &pti, int nGrow)
 Returns the number of particles that are more than nGrow cells from the box corresponding to the input iterator. More...
 
template<class Iterator , std::enable_if_t< IsParticleIterator< Iterator >::value &&!Iterator::ContainerType::ParticleType::is_soa_particle, int > foo = 0>
int numParticlesOutOfRange (Iterator const &pti, IntVect nGrow)
 Returns the number of particles that are more than nGrow cells from the box corresponding to the input iterator. More...
 
template<class PC , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
int numParticlesOutOfRange (PC const &pc, int nGrow)
 Returns the number of particles that are more than nGrow cells from their assigned box. More...
 
template<class PC , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
int numParticlesOutOfRange (PC const &pc, IntVect nGrow)
 Returns the number of particles that are more than nGrow cells from their assigned box. More...
 
template<class PC , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
int numParticlesOutOfRange (PC const &pc, int lev_min, int lev_max, int nGrow)
 Returns the number of particles that are more than nGrow cells from their assigned box. More...
 
template<class PC , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
int numParticlesOutOfRange (PC const &pc, int lev_min, int lev_max, IntVect nGrow)
 Returns the number of particles that are more than nGrow cells from their assigned box. More...
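These checks are convenient in debug assertions, e.g. (pc is an assumed ParticleContainer):

    // Verify that no particle has moved more than one cell from its assigned box.
    AMREX_ALWAYS_ASSERT(amrex::numParticlesOutOfRange(pc, 1) == 0);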
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int getTileIndex (const IntVect &iv, const Box &box, const bool a_do_tiling, const IntVect &a_tile_size, Box &tbx)
 
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int numTilesInBox (const Box &box, const bool a_do_tiling, const IntVect &a_tile_size)
 
template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVect getParticleCell (P const &p, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &plo, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &dxi) noexcept
 Returns the cell index for a given particle using the provided lower bounds and cell sizes. More...
 
template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVect getParticleCell (P const &p, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &plo, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &dxi, const Box &domain) noexcept
 Returns the cell index for a given particle using the provided lower bounds, cell sizes and global domain offset. More...
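A sketch of locating a particle's cell from a level's Geometry (geom and p are assumptions):

    auto plo = geom.ProbLoArray();       // physical lower corner of the domain
    auto dxi = geom.InvCellSizeArray();  // 1/dx in each direction
    amrex::IntVect iv = amrex::getParticleCell(p, plo, dxi);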
 
template<typename PTD >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVect getParticleCell (PTD const &ptd, int i, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &plo, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &dxi, const Box &domain) noexcept
 
template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int getParticleGrid (P const &p, amrex::Array4< int > const &mask, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &plo, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &dxi, const Box &domain) noexcept
 
template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE bool enforcePeriodic (P &p, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &plo, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &phi, amrex::GpuArray< amrex::ParticleReal, AMREX_SPACEDIM > const &rlo, amrex::GpuArray< amrex::ParticleReal, AMREX_SPACEDIM > const &rhi, amrex::GpuArray< int, AMREX_SPACEDIM > const &is_per) noexcept
 
template<typename PTile , typename ParFunc >
int partitionParticles (PTile &ptile, ParFunc const &is_left)
 Reorders the ParticleTile into two partitions left [0, num_left-1] and right [num_left, ptile.numParticles()-1] and returns the number of particles in the left partition. More...
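A sketch, assuming a ParticleTile ptile whose tile data exposes an id(i) accessor (as in recent AMReX versions):

    // Move particles with non-positive (invalid) ids into the right partition.
    auto ptd = ptile.getParticleTileData();
    int num_left = amrex::partitionParticles(ptile,
        [=] AMREX_GPU_DEVICE (int i) noexcept { return ptd.id(i) > 0; });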
 
template<typename PTile >
void removeInvalidParticles (PTile &ptile)
 
template<typename PTile , typename PLocator , typename CellAssignor >
int partitionParticlesByDest (PTile &ptile, const PLocator &ploc, CellAssignor const &assignor, const ParticleBufferMap &pmap, const GpuArray< Real, AMREX_SPACEDIM > &plo, const GpuArray< Real, AMREX_SPACEDIM > &phi, const GpuArray< ParticleReal, AMREX_SPACEDIM > &rlo, const GpuArray< ParticleReal, AMREX_SPACEDIM > &rhi, const GpuArray< int, AMREX_SPACEDIM > &is_per, int lev, int gid, int, int lev_min, int lev_max, int nGrow, bool remove_negative)
 
template<class PC1 , class PC2 >
bool SameIteratorsOK (const PC1 &pc1, const PC2 &pc2)
 
template<class PC >
void EnsureThreadSafeTiles (PC &pc)
 
template<class index_type , typename F >
void PermutationForDeposition (Gpu::DeviceVector< index_type > &perm, index_type nitems, index_type nbins, F const &f)
 
template<class index_type , class PTile >
void PermutationForDeposition (Gpu::DeviceVector< index_type > &perm, index_type nitems, const PTile &ptile, Box bx, Geometry geom, const IntVect idx_type)
 
template<typename P >
std::string getDefaultCompNameReal (const int i)
 
template<typename P >
std::string getDefaultCompNameInt (const int i)
 
template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void cic_interpolate (const P &p, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &plo, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &dxi, const amrex::Array4< amrex::Real const > &data_arr, amrex::ParticleReal *val, int M=AMREX_SPACEDIM)
 Linearly interpolates the mesh data to the particle position from cell-centered data. More...
 
template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void cic_interpolate_cc (const P &p, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &plo, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &dxi, const amrex::Array4< amrex::Real const > &data_arr, amrex::ParticleReal *val, int M=AMREX_SPACEDIM)
 
template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void cic_interpolate_nd (const P &p, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &plo, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &dxi, const amrex::Array4< amrex::Real const > &data_arr, amrex::ParticleReal *val, int M=AMREX_SPACEDIM)
 Linearly interpolates the mesh data to the particle position from node-centered data. More...
 
template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void mac_interpolate (const P &p, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &plo, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &dxi, amrex::GpuArray< amrex::Array4< amrex::Real const >, AMREX_SPACEDIM > const &data_arr, amrex::ParticleReal *val)
 Linearly interpolates the mesh data to the particle position from face-centered data. The nth component of the data_arr array is nodal in the nth direction, and cell-centered in the others. More...
 
template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void linear_interpolate_to_particle (const P &p, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &plo, amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &dxi, const Array4< amrex::Real const > *data_arr, amrex::ParticleReal *val, const IntVect *is_nodal, int start_comp, int ncomp, int num_arrays)
 Linearly interpolates the mesh data to the particle position from mesh data. This general form can handle an arbitrary number of Array4s, each with different staggerings. More...
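A sketch of the cell-centered variant above for a single component (geom, p, and the Array4 rho are assumptions):

    amrex::ParticleReal val;
    amrex::cic_interpolate(p, geom.ProbLoArray(), geom.InvCellSizeArray(),
                           rho, &val, 1); // M = 1 component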
 

Variables

static constexpr Real INVALID_TIME = -1.0e200_rt
 
static constexpr int MFNEWDATA = 0
 
static constexpr int MFOLDDATA = 1
 
PCInterp pc_interp
 Construct a global object of each version. More...
 
NodeBilinear node_bilinear_interp
 
FaceLinear face_linear_interp
 
FaceConservativeLinear face_cons_linear_interp
 
FaceDivFree face_divfree_interp
 
CellConservativeLinear lincc_interp
 
CellConservativeLinear cell_cons_interp (false)
 
CellConservativeProtected protected_interp
 
CellConservativeQuartic quartic_interp
 
CellBilinear cell_bilinear_interp
 
CellQuadratic quadratic_interp
 
CellQuartic cell_quartic_interp
 
MFPCInterp mf_pc_interp
 
MFCellConsLinInterp mf_cell_cons_interp (false)
 
MFCellConsLinInterp mf_lincc_interp (true)
 
MFCellConsLinMinmaxLimitInterp mf_linear_slope_minmax_interp
 
MFCellBilinear mf_cell_bilinear_interp
 
MFNodeBilinear mf_node_bilinear_interp
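These globals are typically passed by address as the Interpolater* mapper argument of the FillPatch routines, e.g.:

    amrex::Interpolater* mapper = &amrex::cell_cons_interp;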
 
constexpr char ResetDisplay [] = "\033[0m"
 
std::atomic< Long > atomic_total_bytes_allocated_in_fabs {0L}
 
std::atomic< Long > atomic_total_bytes_allocated_in_fabs_hwm {0L}
 
std::atomic< Long > atomic_total_cells_allocated_in_fabs {0L}
 
std::atomic< Long > atomic_total_cells_allocated_in_fabs_hwm {0L}
 
Long private_total_bytes_allocated_in_fabs = 0L
 total bytes at any given time More...
 
Long private_total_bytes_allocated_in_fabs_hwm = 0L
 high-water-mark over a given interval More...
 
Long private_total_cells_allocated_in_fabs = 0L
 total cells at any given time More...
 
Long private_total_cells_allocated_in_fabs_hwm = 0L
 high-water-mark over a given interval More...
 
const int []
 
bool initialized = false
 
int verbose
 
int sfc_threshold
 
Real max_efficiency
 
int node_size
 
static const char sys_name [] = "IEEE"
 
constexpr gpuError_t gpuSuccess = cudaSuccess
 
amrex::randState_t * gpu_rand_state = nullptr
 
template<class A >
constexpr bool IsBaseFab_v = IsBaseFab<A>::value
 
template<class A >
constexpr bool IsFabArray_v = IsFabArray<A>::value
 
template<class M >
constexpr bool IsMultiFabLike_v = IsMultiFabLike<M>::value
 
template<typename T , typename... Args>
constexpr bool IsConvertible_v = IsConvertible<T, Args...>::value
 
static const Long gcc_map_node_extra_bytes = 32L
 
static constexpr std::string_view parser_f1_s []
 
static constexpr std::string_view parser_f2_s []
 
static constexpr std::string_view parser_f3_s []
 
static constexpr std::string_view parser_node_s []
 
static constexpr amrex::Real eb_covered_val = amrex::Real(1.e40)
 
EBCellConservativeLinear eb_lincc_interp
 
EBCellConservativeLinear eb_cell_cons_interp (false)
 
EBMFCellConsLinInterp eb_mf_cell_cons_interp (false)
 
EBMFCellConsLinInterp eb_mf_lincc_interp (true)
 
static constexpr auto K1D = int(AMREX_SPACEDIM>=1)
 
static constexpr auto K2D = int(AMREX_SPACEDIM>=2)
 
static constexpr auto K3D = int(AMREX_SPACEDIM>=3)
 

Detailed Description

Current Support:

  • 2D + 3D
  • single + multi-level (w/o nesting)
  • ghosts (indicator field created using grow)
  • particles

TODO:

  • AMR nesting

Typedef Documentation

◆ AmrParticleContainer

template<int T_NStructReal, int T_NStructInt = 0, int T_NArrayReal = 0, int T_NArrayInt = 0, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using amrex::AmrParticleContainer = typedef AmrParticleContainer_impl<Particle<T_NStructReal, T_NStructInt>, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor>

◆ Array

template<class T , std::size_t N>
using amrex::Array = typedef std::array<T,N>

◆ BndryBATransformer

◆ BndryData

◆ BndryFunc3DDefault

using amrex::BndryFunc3DDefault = typedef void (*)(Real* data, const int* lo, const int* hi, const int* dom_lo, const int* dom_hi, const Real* dx, const Real* grd_lo, const Real* time, const int* bc)

◆ BndryFuncDefault

using amrex::BndryFuncDefault = typedef void (*)(Real* data, AMREX_ARLIM_P(lo), AMREX_ARLIM_P(hi), const int* dom_lo, const int* dom_hi, const Real* dx, const Real* grd_lo, const Real* time, const int* bc)

◆ BndryFuncFabDefault

using amrex::BndryFuncFabDefault = typedef std::function<void(Box const& bx, FArrayBox& data, int dcomp, int numcomp, Geometry const& geom, Real time, const Vector<BCRec>& bcr, int bcomp, int scomp)>

◆ BndryRegister

◆ Box

typedef BoxND< AMREX_SPACEDIM > amrex::Box

◆ BoxIndexer

using amrex::BoxIndexer = typedef BoxIndexerND<AMREX_SPACEDIM>

◆ cMultiFab

using amrex::cMultiFab = typedef FabArray<BaseFab<GpuComplex<Real> > >

◆ DefaultAllocator

template<class T >
using amrex::DefaultAllocator = typedef amrex::ArenaAllocator<T>

◆ DeriveFunc

using amrex::DeriveFunc = typedef void (*)(amrex::Real* data, AMREX_ARLIM_P(dlo), AMREX_ARLIM_P(dhi), const int* nvar, const amrex::Real* compdat, AMREX_ARLIM_P(compdat_lo), AMREX_ARLIM_P(compdat_hi), const int* ncomp, const int* lo, const int* hi, const int* domain_lo, const int* domain_hi, const amrex::Real* delta, const amrex::Real* xlo, const amrex::Real* time, const amrex::Real* dt, const int* bcrec, const int* level, const int* grid_no)

Type of extern "C" function called by DeriveRec to compute a derived quantity.

Note that AMREX_ARLIM_P will be preprocessed into DIM const int&'s.


◆ DeriveFunc3D

using amrex::DeriveFunc3D = typedef void (*)(amrex::Real* data, const int* dlo, const int* dhi, const int* nvar, const amrex::Real* compdat, const int* clo, const int* chi, const int* ncomp, const int* lo, const int* hi, const int* domain_lo, const int* domain_hi, const amrex::Real* delta, const amrex::Real* xlo, const amrex::Real* time, const amrex::Real* dt, const int* bcrec, const int* level, const int* grid_no)

This is dimension agnostic. For example, dlo always has three elements.


◆ DeriveFuncFab

using amrex::DeriveFuncFab = typedef std::function<void(const amrex::Box& bx, amrex::FArrayBox& derfab, int dcomp, int ncomp, const amrex::FArrayBox& datafab, const amrex::Geometry& geomdata, amrex::Real time, const int* bcrec, int level)>
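A sketch of a callable matching this signature (the derived quantity computed here, a copy of component 0, is purely illustrative):

    amrex::DeriveFuncFab my_der =
        [] (const amrex::Box& bx, amrex::FArrayBox& derfab, int dcomp, int /*ncomp*/,
            const amrex::FArrayBox& datafab, const amrex::Geometry& /*geom*/,
            amrex::Real /*time*/, const int* /*bcrec*/, int /*level*/)
    {
        auto const& dst = derfab.array();
        auto const& src = datafab.const_array();
        amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
        {
            dst(i,j,k,dcomp) = src(i,j,k,0);
        });
    };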

◆ DeriveFuncMF

using amrex::DeriveFuncMF = typedef std::function<void(amrex::MultiFab& der_mf, int dcomp, int ncomp, const amrex::MultiFab& data_mf, const amrex::Geometry& geomdata, amrex::Real time, const int* bcrec, int level)>

◆ Detected_t

template<template< class... > class Op, class... Args>
using amrex::Detected_t = typedef typename detail::Detector<detail::Nonesuch, void, Op, Args...>::type

◆ DetectedOr

template<class Default , template< class... > class Op, class... Args>
using amrex::DetectedOr = typedef typename detail::Detector<Default, void, Op, Args...>::type

◆ DMRef

◆ EnableIf_t

template<bool B, class T = void>
using amrex::EnableIf_t = typedef std::enable_if_t<B,T>

◆ ErrorFunc2Default

using amrex::ErrorFunc2Default = typedef void (*)(int* tag, AMREX_ARLIM_P(tlo), AMREX_ARLIM_P(thi), const int* tagval, const int* clearval, amrex::Real* data, AMREX_ARLIM_P(data_lo), AMREX_ARLIM_P(data_hi), const int* lo, const int * hi, const int* nvar, const int* domain_lo, const int* domain_hi, const amrex::Real* dx, const int* level, const amrex::Real* avg)

◆ ErrorFunc3DDefault

using amrex::ErrorFunc3DDefault = typedef void (*)(int* tag, const int* tlo, const int* thi, const int* tagval, const int* clearval, amrex::Real* data, const int* data_lo, const int* data_hi, const int* lo, const int * hi, const int* nvar, const int* domain_lo, const int* domain_hi, const amrex::Real* dx, const amrex::Real* xlo, const amrex::Real* prob_lo, const amrex::Real* time, const int* level)

Dimension-agnostic version in which the arrays always have three elements. Note that this is only implemented for the ErrorFunc class, not ErrorFunc2.


◆ ErrorFuncDefault

using amrex::ErrorFuncDefault = typedef void (*)(int* tag, AMREX_ARLIM_P(tlo), AMREX_ARLIM_P(thi), const int* tagval, const int* clearval, amrex::Real* data, AMREX_ARLIM_P(data_lo), AMREX_ARLIM_P(data_hi), const int* lo, const int * hi, const int* nvar, const int* domain_lo, const int* domain_hi, const amrex::Real* dx, const amrex::Real* xlo, const amrex::Real* prob_lo, const amrex::Real* time, const int* level)

Type of extern "C" function called by ErrorRec to do tagging of cells for refinement.


◆ ErrorHandler

using amrex::ErrorHandler = typedef void (*)(const char*)

◆ FabSet

using amrex::FabSet = typedef FabSetT<MultiFab>

◆ FArrayBoxFactory

◆ fBndryData

◆ fBndryRegister

◆ fFabSet

using amrex::fFabSet = typedef FabSetT<fMultiFab>

◆ fInterpBndryData

◆ fMultiFab

using amrex::fMultiFab = typedef FabArray<BaseFab<float> >

◆ GMRESMLMG

◆ gpuDeviceProp_t

using amrex::gpuDeviceProp_t = typedef cudaDeviceProp

◆ gpuError_t

using amrex::gpuError_t = typedef cudaError_t

◆ gpuStream_t

using amrex::gpuStream_t = typedef cudaStream_t

◆ IndexType

typedef IndexTypeND< AMREX_SPACEDIM > amrex::IndexType

◆ IntArray

using amrex::IntArray = typedef Array<int , AMREX_SPACEDIM>

◆ InterpBndryData

◆ IntVect

typedef IntVectND< AMREX_SPACEDIM > amrex::IntVect

◆ IsDetected

template<template< class... > class Op, class... Args>
using amrex::IsDetected = typedef typename detail::Detector<detail::Nonesuch, void, Op, Args...>::value_t

◆ IsDetectedExact

template<class Expected , template< typename... > class Op, class... Args>
using amrex::IsDetectedExact = typedef std::is_same<Expected, Detected_t<Op, Args...> >

◆ KeyValuePair

template<typename K , typename V >
using amrex::KeyValuePair = typedef ValLocPair<K,V>

◆ MaxResSteadyClock

using amrex::MaxResSteadyClock = typedef std::conditional_t<std::chrono::high_resolution_clock::is_steady, std::chrono::high_resolution_clock, std::chrono::steady_clock>

◆ MLABecLaplacian

◆ MLALaplacian

◆ MLCellABecLap

◆ MLCellLinOp

◆ MLCGSolver

◆ MLLinOp

using amrex::MLLinOp = typedef MLLinOpT<MultiFab>

◆ MLMG

using amrex::MLMG = typedef MLMGT<MultiFab>

◆ MLMGBndry

◆ MLPoisson

◆ MultiFabId

using amrex::MultiFabId = typedef FabArrayId

◆ Negation

template<class B >
using amrex::Negation = typedef std::integral_constant<bool, !bool(B::value)>

◆ ParConstIter

template<int T_NStructReal, int T_NStructInt = 0, int T_NArrayReal = 0, int T_NArrayInt = 0, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using amrex::ParConstIter = typedef ParConstIter_impl<Particle<T_NStructReal, T_NStructInt>, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor>

◆ ParConstIterSoA

template<int T_NArrayReal, int T_NArrayInt, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using amrex::ParConstIterSoA = typedef ParConstIter_impl<SoAParticle<T_NArrayReal, T_NArrayInt>, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor>

◆ ParIter

template<int T_NStructReal, int T_NStructInt = 0, int T_NArrayReal = 0, int T_NArrayInt = 0, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using amrex::ParIter = typedef ParIter_impl<Particle<T_NStructReal, T_NStructInt>, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor>

◆ ParIterBase

template<bool is_const, int T_NStructReal, int T_NStructInt, int T_NArrayReal = 0, int T_NArrayInt = 0, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using amrex::ParIterBase = typedef ParIterBase_impl<is_const, Particle<T_NStructReal, T_NStructInt>, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor>

◆ ParIterBaseSoA

template<bool is_const, int T_NArrayReal = 0, int T_NArrayInt = 0, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using amrex::ParIterBaseSoA = typedef ParIterBase_impl<is_const,SoAParticle<T_NArrayReal, T_NArrayInt>, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor>

◆ ParIterSoA

template<int T_NArrayReal, int T_NArrayInt, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using amrex::ParIterSoA = typedef ParIter_impl<SoAParticle<T_NArrayReal, T_NArrayInt>, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor>

◆ ParticleContainer

template<int T_NStructReal, int T_NStructInt = 0, int T_NArrayReal = 0, int T_NArrayInt = 0, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using amrex::ParticleContainer = typedef ParticleContainer_impl<Particle<T_NStructReal, T_NStructInt>, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor>

◆ ParticleContainerPureSoA

template<int T_NArrayReal, int T_NArrayInt, template< class > class Allocator = DefaultAllocator, class CellAssignor = DefaultAssignor>
using amrex::ParticleContainerPureSoA = typedef ParticleContainer_impl<SoAParticle<T_NArrayReal, T_NArrayInt>, T_NArrayReal, T_NArrayInt, Allocator, CellAssignor>

◆ PTR_TO_VOID_FUNC

using amrex::PTR_TO_VOID_FUNC = typedef void (*)()

◆ randGenerator_t

using amrex::randGenerator_t = typedef curandGenerator_t

◆ randState_t

using amrex::randState_t = typedef curandState_t

◆ RealArray

using amrex::RealArray = typedef Array<Real, AMREX_SPACEDIM>

◆ RuntimeError

using amrex::RuntimeError = typedef std::runtime_error

◆ SmallRowVector

template<class T , int N, int StartIndex = 0>
using amrex::SmallRowVector = typedef SmallMatrix<T,1,N,Order::F,StartIndex>

◆ SmallVector

template<class T , int N, int StartIndex = 0>
using amrex::SmallVector = typedef SmallMatrix<T,N,1,Order::F,StartIndex>

◆ TheFaArenaPointer

using amrex::TheFaArenaPointer = typedef std::unique_ptr<char, TheFaArenaDeleter>

◆ TracerParIter

using amrex::TracerParIter = typedef ParIter<AMREX_SPACEDIM>

◆ Tuple

template<class... Ts>
using amrex::Tuple = typedef std::tuple<Ts...>

◆ TypeAt

template<std::size_t I, typename T >
using amrex::TypeAt = typedef typename detail::TypeListGet<I,T>::type

Type at position I of a TypeList.

◆ TypeMultiplier

template<template< class... > class TParam, class... Types>
using amrex::TypeMultiplier = typedef TypeAt<0, decltype(detail::TApply<TParam>( (TypeList<>{} + ... + detail::SingleTypeMultiplier(Types{})) ))>

Return the first template argument with the later arguments applied to it. Types of the form T[N] are expanded to T, T, T, T, ... (N times with N >= 1).

For example, TypeMultiplier<ReduceData, Real[4], int[2], Long> is an alias to the type ReduceData<Real, Real, Real, Real, int, int, Long>.
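The same example, checked at compile time (requires <type_traits>):

    static_assert(std::is_same_v<
        amrex::TypeMultiplier<amrex::ReduceData, amrex::Real[4], int[2], amrex::Long>,
        amrex::ReduceData<amrex::Real, amrex::Real, amrex::Real, amrex::Real,
                          int, int, amrex::Long>>);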

◆ UserFillBox

using amrex::UserFillBox = typedef void (*)(Box const& bx, Array4<Real> const& dest, int dcomp, int numcomp, GeometryData const& geom, Real time, const BCRec* bcr, int bcomp, int orig_comp)

◆ YAFluxRegister

Enumeration Type Documentation

◆ BottomSolver

enum amrex::BottomSolver : int
strong
Enumerator
Default 
smoother 
bicgstab 
cg 
bicgcg 
cgbicg 
hypre 
petsc 
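A sketch of selecting one of these on an MLMG instance (mlmg is an assumption):

    mlmg.setBottomSolver(amrex::BottomSolver::bicgstab);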

◆ ButcherTableauTypes

Enumerator
User 
ForwardEuler 
Trapezoid 
SSPRK3 
RK4 
NumTypes 

◆ CurlCurlStateType

Enumerator

◆ DataLayout

enum amrex::DataLayout
strong

A tag that defines the data layout policy used by particle tiles.

Enumerator
AoS 
SoA 

◆ Direction

enum amrex::Direction : int
strong
Enumerator
AMREX_D_DECL 

◆ EBData_t

enum amrex::EBData_t : int
strong
Enumerator
levelset 
volfrac 
centroid 
bndrycent 
bndrynorm 
bndryarea 
AMREX_D_DECL 
AMREX_D_DECL 
AMREX_D_DECL 
cellflag 

◆ EBSupport

enum amrex::EBSupport : int
strong
Enumerator
none 
basic 
  • EBCellFlag
volume 
  • volume fraction
full 
  • area fraction, boundary centroids and face centroids

◆ FabType

enum amrex::FabType : int
strong
Enumerator
covered 
regular 
singlevalued 
multivalued 
undefined 

◆ FillType

This enum and the FabCopyDescriptor class should really be nested in FabArrayCopyDescriptor (not done for portability reasons).

Enumerator
FillLocally 
FillRemotely 
Unfillable 

◆ FPExcept

enum amrex::FPExcept : std::uint8_t
strong
Enumerator
none 
invalid 
zero 
overflow 
all 

◆ HypreSolverID

enum amrex::HypreSolverID
strong
Enumerator
BoomerAMG 
SSAMG 

◆ IntegratorTypes

Enumerator
ForwardEuler 
ExplicitRungeKutta 
Sundials 

◆ InterpEM_t

Enumerator
InterpE 
InterpB 

◆ iparser_exe_t

Enumerator
IPARSER_EXE_NULL 
IPARSER_EXE_NUMBER 
IPARSER_EXE_SYMBOL 
IPARSER_EXE_ADD 
IPARSER_EXE_SUB 
IPARSER_EXE_MUL 
IPARSER_EXE_DIV_F 
IPARSER_EXE_DIV_B 
IPARSER_EXE_NEG 
IPARSER_EXE_F1 
IPARSER_EXE_F2_F 
IPARSER_EXE_F2_B 
IPARSER_EXE_ADD_VP 
IPARSER_EXE_SUB_VP 
IPARSER_EXE_MUL_VP 
IPARSER_EXE_DIV_VP 
IPARSER_EXE_DIV_PV 
IPARSER_EXE_ADD_PP 
IPARSER_EXE_SUB_PP 
IPARSER_EXE_MUL_PP 
IPARSER_EXE_DIV_PP 
IPARSER_EXE_NEG_P 
IPARSER_EXE_ADD_VN 
IPARSER_EXE_SUB_VN 
IPARSER_EXE_MUL_VN 
IPARSER_EXE_DIV_NV 
IPARSER_EXE_DIV_VN 
IPARSER_EXE_ADD_PN 
IPARSER_EXE_SUB_PN 
IPARSER_EXE_MUL_PN 
IPARSER_EXE_DIV_PN 
IPARSER_EXE_IF 
IPARSER_EXE_JUMP 

◆ iparser_f1_t

Enumerator
IPARSER_ABS 

◆ iparser_f2_t

Enumerator
IPARSER_FLRDIV 
IPARSER_POW 
IPARSER_GT 
IPARSER_LT 
IPARSER_GEQ 
IPARSER_LEQ 
IPARSER_EQ 
IPARSER_NEQ 
IPARSER_AND 
IPARSER_OR 
IPARSER_MIN 
IPARSER_MAX 

◆ iparser_f3_t

Enumerator
IPARSER_IF 

◆ iparser_node_t

Enumerator
IPARSER_NUMBER 
IPARSER_SYMBOL 
IPARSER_ADD 
IPARSER_SUB 
IPARSER_MUL 
IPARSER_DIV 
IPARSER_NEG 
IPARSER_F1 
IPARSER_F2 
IPARSER_F3 
IPARSER_ASSIGN 
IPARSER_LIST 
IPARSER_ADD_VP 
IPARSER_ADD_PP 
IPARSER_SUB_VP 
IPARSER_SUB_PP 
IPARSER_MUL_VP 
IPARSER_MUL_PP 
IPARSER_DIV_VP 
IPARSER_DIV_PV 
IPARSER_DIV_PP 
IPARSER_NEG_P 

◆ MakeType

Enumerator
make_alias 
make_deep_copy 

◆ Order

enum amrex::Order
strong
Enumerator
RowMajor 
ColumnMajor 

◆ parser_exe_t

Enumerator
PARSER_EXE_NULL 
PARSER_EXE_NUMBER 
PARSER_EXE_SYMBOL 
PARSER_EXE_ADD 
PARSER_EXE_SUB_F 
PARSER_EXE_SUB_B 
PARSER_EXE_MUL 
PARSER_EXE_DIV_F 
PARSER_EXE_DIV_B 
PARSER_EXE_F1 
PARSER_EXE_F2_F 
PARSER_EXE_F2_B 
PARSER_EXE_ADD_VP 
PARSER_EXE_SUB_VP 
PARSER_EXE_MUL_VP 
PARSER_EXE_DIV_VP 
PARSER_EXE_ADD_PP 
PARSER_EXE_SUB_PP 
PARSER_EXE_MUL_PP 
PARSER_EXE_DIV_PP 
PARSER_EXE_ADD_VN 
PARSER_EXE_SUB_VN 
PARSER_EXE_MUL_VN 
PARSER_EXE_DIV_VN 
PARSER_EXE_ADD_PN 
PARSER_EXE_SUB_PN 
PARSER_EXE_MUL_PN 
PARSER_EXE_DIV_PN 
PARSER_EXE_SQUARE 
PARSER_EXE_POWI 
PARSER_EXE_IF 
PARSER_EXE_JUMP 

◆ parser_f1_t

Enumerator
PARSER_SQRT 
PARSER_EXP 
PARSER_LOG 
PARSER_LOG10 
PARSER_SIN 
PARSER_COS 
PARSER_TAN 
PARSER_ASIN 
PARSER_ACOS 
PARSER_ATAN 
PARSER_SINH 
PARSER_COSH 
PARSER_TANH 
PARSER_ASINH 
PARSER_ACOSH 
PARSER_ATANH 
PARSER_ABS 
PARSER_FLOOR 
PARSER_CEIL 
PARSER_COMP_ELLINT_1 
PARSER_COMP_ELLINT_2 
PARSER_ERF 

◆ parser_f2_t

Enumerator
PARSER_POW 
PARSER_ATAN2 
PARSER_GT 
PARSER_LT 
PARSER_GEQ 
PARSER_LEQ 
PARSER_EQ 
PARSER_NEQ 
PARSER_AND 
PARSER_OR 
PARSER_HEAVISIDE 
PARSER_JN 
PARSER_YN 
PARSER_MIN 
PARSER_MAX 
PARSER_FMOD 

◆ parser_f3_t

Enumerator
PARSER_IF 

◆ parser_node_t

Enumerator
PARSER_NUMBER 
PARSER_SYMBOL 
PARSER_ADD 
PARSER_SUB 
PARSER_MUL 
PARSER_DIV 
PARSER_F1 
PARSER_F2 
PARSER_F3 
PARSER_ASSIGN 
PARSER_LIST 

◆ RunOn

enum amrex::RunOn
strong
Enumerator
Gpu 
Cpu 
Device 
Host 
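RunOn is passed as a template argument to fab operations to select where they execute, e.g. (fab is an assumed FArrayBox backed by device-accessible memory):

    fab.setVal<amrex::RunOn::Device>(0.0); // runs on the GPU when GPU support is enabled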

Function Documentation

◆ __launch_bounds__() [1/2]

template<int amrex_launch_bounds_max_threads, class L >
amrex::__launch_bounds__ ( amrex_launch_bounds_max_threads  )

◆ __launch_bounds__() [2/2]

template<int amrex_launch_bounds_max_threads, int min_blocks, class L >
amrex::__launch_bounds__ ( amrex_launch_bounds_max_threads  ,
min_blocks   
)

◆ _pd_get_bit()

int amrex::_pd_get_bit ( char const *  base,
int  offs,
int  nby,
const int *  ord 
)
inline

◆ _pd_insert_field()

void amrex::_pd_insert_field ( Long  in_long,
int  nb,
char *  out,
int  offs,
int  l_order,
int  l_bytes 
)
inline

◆ _pd_set_bit()

void amrex::_pd_set_bit ( char *  base,
int  offs 
)
inline

◆ aa_interp_face_xy() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::aa_interp_face_xy ( Array4< Real const > const &  crse,
Array4< Real const > const &  sig,
int  i,
int  j,
int  ic,
int  jc 
)
noexcept

◆ aa_interp_face_xy() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::aa_interp_face_xy ( Array4< Real const > const &  crse,
Array4< Real const > const &  sig,
int  i,
int  j,
int  k,
int  ic,
int  jc,
int  kc 
)
noexcept

◆ aa_interp_face_xz()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::aa_interp_face_xz ( Array4< Real const > const &  crse,
Array4< Real const > const &  sig,
int  i,
int  j,
int  k,
int  ic,
int  jc,
int  kc 
)
noexcept

◆ aa_interp_face_yz()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::aa_interp_face_yz ( Array4< Real const > const &  crse,
Array4< Real const > const &  sig,
int  i,
int  j,
int  k,
int  ic,
int  jc,
int  kc 
)
noexcept

◆ aa_interp_line_x() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::aa_interp_line_x ( Array4< Real const > const &  crse,
Array4< Real const > const &  sig,
int  i,
int  j,
int  ic,
int  jc 
)
noexcept

◆ aa_interp_line_x() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::aa_interp_line_x ( Array4< Real const > const &  crse,
Array4< Real const > const &  sig,
int  i,
int  j,
int  k,
int  ic,
int  jc,
int  kc 
)
noexcept

◆ aa_interp_line_y() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::aa_interp_line_y ( Array4< Real const > const &  crse,
Array4< Real const > const &  sig,
int  i,
int  j,
int  ic,
int  jc 
)
noexcept

◆ aa_interp_line_y() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::aa_interp_line_y ( Array4< Real const > const &  crse,
Array4< Real const > const &  sig,
int  i,
int  j,
int  k,
int  ic,
int  jc,
int  kc 
)
noexcept

◆ aa_interp_line_z()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::aa_interp_line_z ( Array4< Real const > const &  crse,
Array4< Real const > const &  sig,
int  i,
int  j,
int  k,
int  ic,
int  jc,
int  kc 
)
noexcept

◆ abec_gsrb() [1/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::abec_gsrb ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
T  alpha,
Array4< T const > const &  a,
T  dhx,
T  dhy,
T  dhz,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
Array4< T const > const &  bZ,
Array4< int const > const &  m0,
Array4< int const > const &  m2,
Array4< int const > const &  m4,
Array4< int const > const &  m1,
Array4< int const > const &  m3,
Array4< int const > const &  m5,
Array4< T const > const &  f0,
Array4< T const > const &  f2,
Array4< T const > const &  f4,
Array4< T const > const &  f1,
Array4< T const > const &  f3,
Array4< T const > const &  f5,
Box const &  vbox,
int  redblack 
)
noexcept

◆ abec_gsrb() [2/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::abec_gsrb ( int  i,
int  j,
int  ,
int  n,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
T  alpha,
Array4< T const > const &  a,
T  dhx,
T  dhy,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
Array4< int const > const &  m0,
Array4< int const > const &  m2,
Array4< int const > const &  m1,
Array4< int const > const &  m3,
Array4< T const > const &  f0,
Array4< T const > const &  f2,
Array4< T const > const &  f1,
Array4< T const > const &  f3,
Box const &  vbox,
int  redblack 
)
noexcept

◆ abec_gsrb() [3/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::abec_gsrb ( int  i,
int  ,
int  ,
int  n,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
T  alpha,
Array4< T const > const &  a,
T  dhx,
Array4< T const > const &  bX,
Array4< int const > const &  m0,
Array4< int const > const &  m1,
Array4< T const > const &  f0,
Array4< T const > const &  f1,
Box const &  vbox,
int  redblack 
)
noexcept

◆ abec_gsrb_os() [1/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::abec_gsrb_os ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
T  alpha,
Array4< T const > const &  a,
T  dhx,
T  dhy,
T  dhz,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
Array4< T const > const &  bZ,
Array4< int const > const &  m0,
Array4< int const > const &  m2,
Array4< int const > const &  m4,
Array4< int const > const &  m1,
Array4< int const > const &  m3,
Array4< int const > const &  m5,
Array4< T const > const &  f0,
Array4< T const > const &  f2,
Array4< T const > const &  f4,
Array4< T const > const &  f1,
Array4< T const > const &  f3,
Array4< T const > const &  f5,
Array4< int const > const &  osm,
Box const &  vbox,
int  redblack 
)
noexcept

◆ abec_gsrb_os() [2/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::abec_gsrb_os ( int  i,
int  j,
int  ,
int  n,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
T  alpha,
Array4< T const > const &  a,
T  dhx,
T  dhy,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
Array4< int const > const &  m0,
Array4< int const > const &  m2,
Array4< int const > const &  m1,
Array4< int const > const &  m3,
Array4< T const > const &  f0,
Array4< T const > const &  f2,
Array4< T const > const &  f1,
Array4< T const > const &  f3,
Array4< int const > const &  osm,
Box const &  vbox,
int  redblack 
)
noexcept

◆ abec_gsrb_os() [3/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::abec_gsrb_os ( int  i,
int  ,
int  ,
int  n,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
T  alpha,
Array4< T const > const &  a,
T  dhx,
Array4< T const > const &  bX,
Array4< int const > const &  m0,
Array4< int const > const &  m1,
Array4< T const > const &  f0,
Array4< T const > const &  f1,
Array4< int const > const &  osm,
Box const &  vbox,
int  redblack 
)
noexcept

◆ abec_gsrb_with_line_solve() [1/3]

template<typename T >
AMREX_FORCE_INLINE void amrex::abec_gsrb_with_line_solve ( Box const &  ,
Array4< T > const &  ,
Array4< T const > const &  ,
T  ,
Array4< T const > const &  ,
T  ,
Array4< T const > const &  ,
Array4< int const > const &  ,
Array4< int const > const &  ,
Array4< T const > const &  ,
Array4< T const > const &  ,
Box const &  ,
int  ,
int   
)
noexcept

◆ abec_gsrb_with_line_solve() [2/3]

template<typename T >
AMREX_FORCE_INLINE void amrex::abec_gsrb_with_line_solve ( Box const &  box,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
T  alpha,
Array4< T const > const &  a,
T  dhx,
T  dhy,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
Array4< int const > const &  m0,
Array4< int const > const &  m2,
Array4< int const > const &  m1,
Array4< int const > const &  m3,
Array4< T const > const &  f0,
Array4< T const > const &  f2,
Array4< T const > const &  f1,
Array4< T const > const &  f3,
Box const &  vbox,
int  redblack,
int  nc 
)
noexcept

◆ abec_gsrb_with_line_solve() [3/3]

template<typename T >
AMREX_FORCE_INLINE void amrex::abec_gsrb_with_line_solve ( Box const &  box,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
T  alpha,
Array4< T const > const &  a,
T  dhx,
T  dhy,
T  dhz,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
Array4< T const > const &  bZ,
Array4< int const > const &  m0,
Array4< int const > const &  m2,
Array4< int const > const &  m4,
Array4< int const > const &  m1,
Array4< int const > const &  m3,
Array4< int const > const &  m5,
Array4< T const > const &  f0,
Array4< T const > const &  f2,
Array4< T const > const &  f4,
Array4< T const > const &  f1,
Array4< T const > const &  f3,
Array4< T const > const &  f5,
Box const &  vbox,
int  redblack,
int  nc 
)
noexcept

◆ abec_jacobi() [1/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::abec_jacobi ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
Array4< T const > const &  Ax,
T  alpha,
Array4< T const > const &  a,
T  dhx,
T  dhy,
T  dhz,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
Array4< T const > const &  bZ,
Array4< int const > const &  m0,
Array4< int const > const &  m2,
Array4< int const > const &  m4,
Array4< int const > const &  m1,
Array4< int const > const &  m3,
Array4< int const > const &  m5,
Array4< T const > const &  f0,
Array4< T const > const &  f2,
Array4< T const > const &  f4,
Array4< T const > const &  f1,
Array4< T const > const &  f3,
Array4< T const > const &  f5,
Box const &  vbox 
)
noexcept

◆ abec_jacobi() [2/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::abec_jacobi ( int  i,
int  j,
int  ,
int  n,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
Array4< T const > const &  Ax,
T  alpha,
Array4< T const > const &  a,
T  dhx,
T  dhy,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
Array4< int const > const &  m0,
Array4< int const > const &  m2,
Array4< int const > const &  m1,
Array4< int const > const &  m3,
Array4< T const > const &  f0,
Array4< T const > const &  f2,
Array4< T const > const &  f1,
Array4< T const > const &  f3,
Box const &  vbox 
)
noexcept

◆ abec_jacobi() [3/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::abec_jacobi ( int  i,
int  ,
int  ,
int  n,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
Array4< T const > const &  Ax,
T  alpha,
Array4< T const > const &  a,
T  dhx,
Array4< T const > const &  bX,
Array4< int const > const &  m0,
Array4< int const > const &  m1,
Array4< T const > const &  f0,
Array4< T const > const &  f1,
Box const &  vbox 
)
noexcept

◆ abec_jacobi_os() [1/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::abec_jacobi_os ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
Array4< T const > const &  Ax,
T  alpha,
Array4< T const > const &  a,
T  dhx,
T  dhy,
T  dhz,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
Array4< T const > const &  bZ,
Array4< int const > const &  m0,
Array4< int const > const &  m2,
Array4< int const > const &  m4,
Array4< int const > const &  m1,
Array4< int const > const &  m3,
Array4< int const > const &  m5,
Array4< T const > const &  f0,
Array4< T const > const &  f2,
Array4< T const > const &  f4,
Array4< T const > const &  f1,
Array4< T const > const &  f3,
Array4< T const > const &  f5,
Array4< int const > const &  osm,
Box const &  vbox 
)
noexcept

◆ abec_jacobi_os() [2/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::abec_jacobi_os ( int  i,
int  j,
int  ,
int  n,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
Array4< T const > const &  Ax,
T  alpha,
Array4< T const > const &  a,
T  dhx,
T  dhy,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
Array4< int const > const &  m0,
Array4< int const > const &  m2,
Array4< int const > const &  m1,
Array4< int const > const &  m3,
Array4< T const > const &  f0,
Array4< T const > const &  f2,
Array4< T const > const &  f1,
Array4< T const > const &  f3,
Array4< int const > const &  osm,
Box const &  vbox 
)
noexcept

◆ abec_jacobi_os() [3/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::abec_jacobi_os ( int  i,
int  ,
int  ,
int  n,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
Array4< T const > const &  Ax,
T  alpha,
Array4< T const > const &  a,
T  dhx,
Array4< T const > const &  bX,
Array4< int const > const &  m0,
Array4< int const > const &  m1,
Array4< T const > const &  f0,
Array4< T const > const &  f1,
Array4< int const > const &  osm,
Box const &  vbox 
)
noexcept

◆ Abort() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::Abort ( const char *  msg = nullptr)

◆ Abort() [2/2]

void amrex::Abort ( const std::string &  msg)

Print out message to cerr and exit via abort().

◆ abs() [1/7]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE T amrex::abs ( const GpuComplex< T > &  a_z)
noexcept

Return the absolute value of a complex number.

◆ Abs() [1/2]

template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::Abs ( FabArray< FAB > &  fa,
int  icomp,
int  numcomp,
const IntVect nghost 
)

◆ Abs() [2/2]

template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::Abs ( FabArray< FAB > &  fa,
int  icomp,
int  numcomp,
int  nghost 
)

◆ accrete() [1/2]

void amrex::accrete ( BoxDomain dest,
const BoxDomain fin,
int  sz 
)

Grow each Box in BoxDomain fin by size sz and place the result into BoxDomain dest.

◆ accrete() [2/2]

BoxList amrex::accrete ( const BoxList bl,
int  sz 
)

Returns a new BoxList in which each Box is grown by the given size.

◆ Add() [1/2]

template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::Add ( FabArray< FAB > &  dst,
FabArray< FAB > const &  src,
int  srccomp,
int  dstcomp,
int  numcomp,
const IntVect nghost 
)

◆ Add() [2/2]

template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::Add ( FabArray< FAB > &  dst,
FabArray< FAB > const &  src,
int  srccomp,
int  dstcomp,
int  numcomp,
int  nghost 
)

◆ AddFabGhostIndicatorField()

void amrex::AddFabGhostIndicatorField ( const FArrayBox fab,
int  ngrow,
Node &  res 
)

◆ adjCell()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::adjCell ( const BoxND< dim > &  b,
Orientation  face,
int  len = 1 
)
noexcept

Similar to adjCellLo and adjCellHi; operates on given face.

◆ adjCellHi()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::adjCellHi ( const BoxND< dim > &  b,
int  dir,
int  len = 1 
)
noexcept

Similar to adjCellLo but builds an adjacent BoxND on the high end.

◆ adjCellLo()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::adjCellLo ( const BoxND< dim > &  b,
int  dir,
int  len = 1 
)
noexcept

Returns the cell-centered BoxND of length len adjacent to b on the low end along the coordinate direction dir. The returned BoxND is identical to b in the other directions and has an empty intersection with b. NOTE: len >= 1 is required. NOTE: BoxND retval = adjCellLo(b,dir,len) is equivalent to the following set of operations: BoxND retval(b); retval.convert(dir,BoxND::CELL); retval.setRange(dir,retval.smallEnd(dir)-len,len);.
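For example (a hypothetical box):

    amrex::Box b(amrex::IntVect(0), amrex::IntVect(15));
    amrex::Box lo = amrex::adjCellLo(b, 0, 2); // covers i = -2..-1 in x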

◆ aligned_size()

std::size_t amrex::aligned_size ( std::size_t  align_requirement,
std::size_t  size 
)
inlinenoexcept

Given a minimum required size of size bytes, this returns the smallest arena size that is at least size bytes and a multiple of align_requirement bytes (i.e., size rounded up to the alignment requirement).
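For example, a 100-byte request with a 64-byte alignment requirement rounds up to the next multiple of 64:

    std::size_t sz = amrex::aligned_size(64, 100); // sz == 128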

◆ AllGatherBoxes()

void amrex::AllGatherBoxes ( Vector< Box > &  bxs,
int  n_extra_reserve 
)

◆ AlmostEqual()

bool amrex::AlmostEqual ( const RealBox box1,
const RealBox box2,
Real  eps 
)
noexcept

Check for equality of real boxes within a certain tolerance.

◆ almostEqual()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE std::enable_if_t<std::is_floating_point_v<T>,bool> amrex::almostEqual ( T  x,
T  y,
int  ulp = 2 
)
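A sketch (0.1 + 0.2 lands 1 ULP away from 0.3 in double precision, so it passes the default 2-ULP test):

    bool close = amrex::almostEqual(0.1 + 0.2, 0.3); // true with the default ulp = 2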

◆ amrex_avg_cc_to_fc() [1/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avg_cc_to_fc ( int  i,
int  j,
int  k,
int  n,
Box const &  xbx,
Box const &  ybx,
Box const &  zbx,
Array4< Real > const &  fx,
Array4< Real > const &  fy,
Array4< Real > const &  fz,
Array4< Real const > const &  cc,
bool  use_harmonic_averaging 
)
noexcept

◆ amrex_avg_cc_to_fc() [2/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avg_cc_to_fc ( int  i,
int  j,
int  ,
int  n,
Box const &  xbx,
Box const &  ybx,
Array4< Real > const &  fx,
Array4< Real > const &  fy,
Array4< Real const > const &  cc,
bool  use_harmonic_averaging 
)
noexcept

◆ amrex_avg_cc_to_fc() [3/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avg_cc_to_fc ( int  i,
int  ,
int  ,
int  n,
Box const &  xbx,
Array4< Real > const &  fx,
Array4< Real const > const &  cc,
GeometryData const &  gd,
bool  use_harmonic_averaging 
)
noexcept

◆ amrex_avg_eg_to_cc() [1/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avg_eg_to_cc ( int  i,
int  j,
int  k,
Array4< Real > const &  cc,
Array4< Real const > const &  Ex,
Array4< Real const > const &  Ey,
Array4< Real const > const &  Ez,
int  cccomp 
)
noexcept

◆ amrex_avg_eg_to_cc() [2/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avg_eg_to_cc ( int  i,
int  j,
int  ,
Array4< Real > const &  cc,
Array4< Real const > const &  Ex,
Array4< Real const > const &  Ey,
int  cccomp 
)
noexcept

◆ amrex_avg_eg_to_cc() [3/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avg_eg_to_cc ( int  i,
int  ,
int  ,
Array4< Real > const &  cc,
Array4< Real const > const &  Ex,
int  cccomp 
)
noexcept

◆ amrex_avg_fc_to_cc() [1/3]

template<typename CT , typename FT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avg_fc_to_cc ( int  i,
int  j,
int  k,
Array4< CT > const &  cc,
Array4< FT const > const &  fx,
Array4< FT const > const &  fy,
Array4< FT const > const &  fz,
int  cccomp 
)
noexcept

◆ amrex_avg_fc_to_cc() [2/3]

template<typename CT , typename FT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avg_fc_to_cc ( int  i,
int  j,
int  ,
Array4< CT > const &  cc,
Array4< FT const > const &  fx,
Array4< FT const > const &  fy,
int  cccomp 
)
noexcept

◆ amrex_avg_fc_to_cc() [3/3]

template<typename CT , typename FT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avg_fc_to_cc ( int  i,
int  ,
int  ,
Array4< CT > const &  cc,
Array4< FT const > const &  fx,
int  cccomp,
GeometryData const &  gd 
)
noexcept

◆ amrex_avg_nd_to_cc()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avg_nd_to_cc ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  cc,
Array4< Real const > const &  nd,
int  cccomp,
int  ndcomp 
)
noexcept

◆ amrex_avgdown() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avgdown ( Box const &  bx,
Array4< T > const &  crse,
Array4< T const > const &  fine,
int  ccomp,
int  fcomp,
int  ncomp,
IntVect const &  ratio 
)
noexcept

◆ amrex_avgdown() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avgdown ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  crse,
Array4< T const > const &  fine,
int  ccomp,
int  fcomp,
IntVect const &  ratio 
)
noexcept

◆ amrex_avgdown_edges() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avgdown_edges ( Box const &  bx,
Array4< Real > const &  crse,
Array4< Real const > const &  fine,
int  ccomp,
int  fcomp,
int  ncomp,
IntVect const &  ratio,
int  idir 
)
noexcept

◆ amrex_avgdown_edges() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avgdown_edges ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  crse,
Array4< Real const > const &  fine,
int  ccomp,
int  fcomp,
IntVect const &  ratio,
int  idir 
)
noexcept

◆ amrex_avgdown_faces() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avgdown_faces ( Box const &  bx,
Array4< T > const &  crse,
Array4< T const > const &  fine,
int  ccomp,
int  fcomp,
int  ncomp,
IntVect const &  ratio,
int  idir 
)
noexcept

◆ amrex_avgdown_faces() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avgdown_faces ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  crse,
Array4< T const > const &  fine,
int  ccomp,
int  fcomp,
IntVect const &  ratio,
int  idir 
)
noexcept

◆ amrex_avgdown_nodes() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avgdown_nodes ( Box const &  bx,
Array4< T > const &  crse,
Array4< T const > const &  fine,
int  ccomp,
int  fcomp,
int  ncomp,
IntVect const &  ratio 
)
noexcept

◆ amrex_avgdown_nodes() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avgdown_nodes ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  crse,
Array4< T const > const &  fine,
int  ccomp,
int  fcomp,
IntVect const &  ratio 
)
noexcept

◆ amrex_avgdown_with_vol() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avgdown_with_vol ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  crse,
Array4< Real const > const &  fine,
Array4< Real const > const &  fv,
int  ccomp,
int  fcomp,
IntVect const &  ratio 
)
noexcept

◆ amrex_avgdown_with_vol() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_avgdown_with_vol ( int  i,
int  ,
int  ,
int  n,
Array4< T > const &  crse,
Array4< T const > const &  fine,
Array4< T const > const &  fv,
int  ccomp,
int  fcomp,
IntVect const &  ratio 
)
noexcept

◆ amrex_calc_alpha_stencil()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE amrex::Real amrex::amrex_calc_alpha_stencil ( Real  q_hat,
Real  q_max,
Real  q_min,
Real  state 
)
noexcept

◆ amrex_calc_centroid_limiter()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE amrex::GpuArray<amrex::Real,AMREX_SPACEDIM> amrex::amrex_calc_centroid_limiter ( int  i,
int  j,
int  k,
int  n,
amrex::Array4< amrex::Real const > const &  state,
amrex::Array4< amrex::EBCellFlag const > const &  flag,
const amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > &  slopes,
amrex::Array4< amrex::Real const > const &  ccent 
)
noexcept

◆ amrex_calc_xslope()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE Real amrex::amrex_calc_xslope ( int  i,
int  j,
int  k,
int  n,
int  order,
amrex::Array4< Real const > const &  q 
)
noexcept

◆ amrex_calc_xslope_extdir()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE Real amrex::amrex_calc_xslope_extdir ( int  i,
int  j,
int  k,
int  n,
int  order,
amrex::Array4< Real const > const &  q,
bool  edlo,
bool  edhi,
int  domlo,
int  domhi 
)
noexcept

◆ amrex_calc_yslope()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE Real amrex::amrex_calc_yslope ( int  i,
int  j,
int  k,
int  n,
int  order,
amrex::Array4< Real const > const &  q 
)
noexcept

◆ amrex_calc_yslope_extdir()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE Real amrex::amrex_calc_yslope_extdir ( int  i,
int  j,
int  k,
int  n,
int  order,
amrex::Array4< Real const > const &  q,
bool  edlo,
bool  edhi,
int  domlo,
int  domhi 
)
noexcept

◆ amrex_compute_convective_difference() [1/2]

AMREX_GPU_HOST_DEVICE void amrex::amrex_compute_convective_difference ( Box const &  bx,
Array4< amrex::Real > const &  diff,
Array4< Real const > const &  u_face,
Array4< Real const > const &  v_face,
Array4< Real const > const &  s_on_x_face,
Array4< Real const > const &  s_on_y_face,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inlinenoexcept

◆ amrex_compute_convective_difference() [2/2]

AMREX_GPU_HOST_DEVICE void amrex::amrex_compute_convective_difference ( Box const &  bx,
Array4< Real > const &  diff,
Array4< Real const > const &  u_face,
Array4< Real const > const &  v_face,
Array4< Real const > const &  w_face,
Array4< Real const > const &  s_on_x_face,
Array4< Real const > const &  s_on_y_face,
Array4< Real const > const &  s_on_z_face,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inlinenoexcept

◆ amrex_compute_divergence() [1/3]

AMREX_GPU_HOST_DEVICE void amrex::amrex_compute_divergence ( Box const &  bx,
Array4< Real > const &  divu,
Array4< Real const > const &  u,
Array4< Real const > const &  v,
Array4< Real const > const &  w,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inlinenoexcept

◆ amrex_compute_divergence() [2/3]

AMREX_GPU_HOST_DEVICE void amrex::amrex_compute_divergence ( Box const &  bx,
Array4< Real > const &  divu,
Array4< Real const > const &  u,
Array4< Real const > const &  v,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inlinenoexcept

◆ amrex_compute_divergence() [3/3]

AMREX_GPU_HOST_DEVICE void amrex::amrex_compute_divergence ( Box const &  bx,
Array4< Real > const &  divu,
Array4< Real const > const &  u,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ amrex_compute_divergence_rz()

AMREX_GPU_HOST_DEVICE void amrex::amrex_compute_divergence_rz ( Box const &  bx,
Array4< Real > const &  divu,
Array4< Real const > const &  u,
Array4< Real const > const &  v,
Array4< Real const > const &  ax,
Array4< Real const > const &  ay,
Array4< Real const > const &  vol 
)
inline noexcept

◆ amrex_compute_gradient() [1/3]

AMREX_GPU_HOST_DEVICE void amrex::amrex_compute_gradient ( Box const &  bx,
Array4< Real > const &  grad,
Array4< Real const > const &  u,
Array4< Real const > const &  v,
Array4< Real const > const &  w,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ amrex_compute_gradient() [2/3]

AMREX_GPU_HOST_DEVICE void amrex::amrex_compute_gradient ( Box const &  bx,
Array4< Real > const &  grad,
Array4< Real const > const &  u,
Array4< Real const > const &  v,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ amrex_compute_gradient() [3/3]

AMREX_GPU_HOST_DEVICE void amrex::amrex_compute_gradient ( Box const &  bx,
Array4< Real > const &  grad,
Array4< Real const > const &  u,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ amrex_compute_gradient_rz()

AMREX_GPU_HOST_DEVICE void amrex::amrex_compute_gradient_rz ( Box const &  bx,
Array4< Real > const &  grad,
Array4< Real const > const &  u,
Array4< Real const > const &  v,
Array4< Real const > const &  ax,
Array4< Real const > const &  ay,
Array4< Real const > const &  vol 
)
inline noexcept

◆ amrex_deposit_cic()

template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_deposit_cic ( P const &  p,
int  nc,
amrex::Array4< amrex::Real > const &  rho,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  plo,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  dxi 
)

◆ amrex_deposit_particle_dx_cic()

template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_deposit_particle_dx_cic ( P const &  p,
int  nc,
amrex::Array4< amrex::Real > const &  rho,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  plo,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  dxi,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  pdxi 
)

◆ amrex_fill_slice_interp()

AMREX_GPU_HOST_DEVICE void amrex::amrex_fill_slice_interp ( Box const &  bx,
Array4< Real >  slice,
Array4< Real const > const &  full,
int  scomp,
int  fcomp,
int  ncomp,
int  dir,
Real  coord,
GeometryData const &  gd 
)
inline noexcept

◆ amrex_first_order_extrap_cpu()

AMREX_GPU_HOST AMREX_FORCE_INLINE void amrex::amrex_first_order_extrap_cpu ( amrex::Box const &  bx,
int  nComp,
amrex::Array4< const int > const &  mask,
amrex::Array4< amrex::Real > const &  data 
)
noexcept

◆ amrex_first_order_extrap_gpu()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::amrex_first_order_extrap_gpu ( int  i,
int  j,
int  k,
int  n,
amrex::Box const &  bx,
amrex::Array4< const int > const &  mask,
amrex::Array4< amrex::Real > const &  data 
)
noexcept

◆ amrex_flux_redistribute()

void amrex::amrex_flux_redistribute ( const amrex::Box &  bx,
amrex::Array4< amrex::Real > const &  dqdt,
amrex::Array4< amrex::Real const > const &  divc,
amrex::Array4< amrex::Real const > const &  wt,
amrex::Array4< amrex::Real const > const &  vfrac,
amrex::Array4< amrex::EBCellFlag const > const &  flag,
int  as_crse,
amrex::Array4< amrex::Real > const &  rr_drho_crse,
amrex::Array4< int const > const &  rr_flag_crse,
int  as_fine,
amrex::Array4< amrex::Real > const &  dm_as_fine,
amrex::Array4< int const > const &  levmsk,
const amrex::Geometry &  geom,
bool  use_wts_in_divnc,
int  level_mask_not_covered,
int  icomp,
int  ncomp,
amrex::Real  dt 
)

◆ amrex_iparser_delete()

void amrex::amrex_iparser_delete ( struct amrex_iparser *  iparser)

◆ amrex_iparser_new()

struct amrex_iparser * amrex::amrex_iparser_new ( )

◆ amrex_parser_delete()

void amrex::amrex_parser_delete ( struct amrex_parser *  parser)

◆ amrex_parser_new()

struct amrex_parser * amrex::amrex_parser_new ( )

◆ amrex_setarea() [1/2]

AMREX_GPU_HOST_DEVICE void amrex::amrex_setarea ( Box const &  bx,
Array4< Real > const &  area,
GpuArray< Real, 1 > const &  offset,
GpuArray< Real, 1 > const &  dx,
const int  ,
const int  coord 
)
inline noexcept

◆ amrex_setarea() [2/2]

AMREX_GPU_HOST_DEVICE void amrex::amrex_setarea ( Box const &  bx,
Array4< Real > const &  area,
GpuArray< Real, 2 > const &  offset,
GpuArray< Real, 2 > const &  dx,
const int  dir,
const int  coord 
)
inline noexcept

◆ amrex_setdloga() [1/2]

AMREX_GPU_HOST_DEVICE void amrex::amrex_setdloga ( Box const &  bx,
Array4< Real > const &  dloga,
GpuArray< Real, 1 > const &  offset,
GpuArray< Real, 1 > const &  dx,
const int  ,
const int  coord 
)
inline noexcept

◆ amrex_setdloga() [2/2]

AMREX_GPU_HOST_DEVICE void amrex::amrex_setdloga ( Box const &  bx,
Array4< Real > const &  dloga,
GpuArray< Real, 2 > const &  offset,
GpuArray< Real, 2 > const &  dx,
const int  dir,
const int  coord 
)
inline noexcept

◆ amrex_setvol() [1/2]

AMREX_GPU_HOST_DEVICE void amrex::amrex_setvol ( Box const &  bx,
Array4< Real > const &  vol,
GpuArray< Real, 1 > const &  offset,
GpuArray< Real, 1 > const &  dx,
const int  coord 
)
inline noexcept

◆ amrex_setvol() [2/2]

AMREX_GPU_HOST_DEVICE void amrex::amrex_setvol ( Box const &  bx,
Array4< Real > const &  vol,
GpuArray< Real, 2 > const &  offset,
GpuArray< Real, 2 > const &  dx,
const int  coord 
)
inline noexcept

◆ any()

bool amrex::any ( FPExcept  a)
inline

◆ AnyCTO()

template<class L , class... Fs, typename... CTOs>
void amrex::AnyCTO ( [[maybe_unused]] TypeList< CTOs... >  list_of_compile_time_options,
std::array< int, sizeof...(CTOs)> const &  runtime_options,
L &&  l,
Fs &&...  cto_functs 
)

Compile time optimization of kernels with run time options.

This is a generalized version of ParallelFor with CTOs that can support any function taking a single lambda to launch a GPU kernel, such as ParallelFor, ParallelForRNG, launch, etc. It uses fold expressions to generate kernel launches for all combinations of the run time options. The kernel function can use constexpr if to discard unused code blocks for better run time performance. In the first example below, the code is expanded into 4*2 = 8 ordinary ParallelForRNG calls, one for each combination of the run time parameters.

   int A_runtime_option = ...;
   int B_runtime_option = ...;
   enum A_options : int { A0, A1, A2, A3 };
   enum B_options : int { B0, B1 };
   AnyCTO(TypeList<CompileTimeOptions<A0,A1,A2,A3>,
                   CompileTimeOptions<B0,B1>>{},
       {A_runtime_option, B_runtime_option},
       [&](auto cto_func){
           ParallelForRNG(N, cto_func);
       },
       [=] AMREX_GPU_DEVICE (int i, const RandomEngine& engine,
                             auto A_control, auto B_control)
       {
           ...
           if constexpr (A_control.value == A0) {
               ...
           } else if constexpr (A_control.value == A1) {
               ...
           } else if constexpr (A_control.value == A2) {
               ...
           } else {
               ...
           }
           if constexpr (A_control.value != A3 && B_control.value == B1) {
               ...
           }
           ...
       }
   );

   constexpr int nthreads_per_block = ...;
   int nblocks = ...;
   AnyCTO(TypeList<CompileTimeOptions<A0,A1,A2,A3>,
                   CompileTimeOptions<B0,B1>>{},
       {A_runtime_option, B_runtime_option},
       [&](auto cto_func){
           launch<nthreads_per_block>(nblocks, Gpu::gpuStream(), cto_func);
       },
       [=] AMREX_GPU_DEVICE (auto A_control, auto B_control){
           ...
       }
   );

The static member function cto_func.GetOptions() can be used to obtain, at compile time, the runtime_options passed into AnyCTO. This enables some advanced use cases, such as changing the number of threads per block or the dimensionality of ParallelFor at run time. In the second example below, the trailing return type -> decltype(void(intvect.size())) is necessary to disambiguate IntVectND<1> and int for the first argument of the kernel function.

   int nthreads_per_block = ...;
   AnyCTO(TypeList<CompileTimeOptions<128,256,512,1024>>{},
       {nthreads_per_block},
       [&](auto cto_func){
           constexpr std::array<int, 1> ctos = cto_func.GetOptions();
           constexpr int c_nthreads_per_block = ctos[0];
           ParallelFor<c_nthreads_per_block>(N, cto_func);
       },
       [=] AMREX_GPU_DEVICE (int i, auto){
           ...
       }
   );

   BoxND<6> box6D = ...;
   int dims_needed = ...;
   AnyCTO(TypeList<CompileTimeOptions<1,2,3,4,5,6>>{},
       {dims_needed},
       [&](auto cto_func){
           constexpr std::array<int, 1> ctos = cto_func.GetOptions();
           constexpr int c_dims_needed = ctos[0];
           const auto box = BoxShrink<c_dims_needed>(box6D);
           ParallelFor(box, cto_func);
       },
       [=] AMREX_GPU_DEVICE (auto intvect, auto) -> decltype(void(intvect.size())) {
           ...
       }
   );

Note that due to a limitation of CUDA's extended device lambda, the constexpr if block cannot be the one that captures a variable first. If nvcc complains about it, you will have to manually capture it outside constexpr if. Alternatively, the constexpr if can be replaced with a regular if. Compilers can still perform the same optimizations since the condition is known at compile time. The data type for the parameters is int.

Parameters
list_of_compile_time_options: list of all possible values of the parameters.
runtime_options: the run time parameters.
l: a callable object containing a CPU function that launches the provided GPU kernel.
cto_functs: a callable object containing the GPU kernel with optimizations.

◆ Apply()

template<typename F , typename TP >
constexpr AMREX_GPU_HOST_DEVICE auto amrex::Apply ( F &&  f,
TP &&  t 
) -> typename detail::apply_result<F,detail::tuple_decay_t<TP> >::type
constexpr

◆ apply_flux_redistribution()

void amrex::apply_flux_redistribution ( const amrex::Box &  bx,
amrex::Array4< amrex::Real > const &  div,
amrex::Array4< amrex::Real const > const &  divc,
amrex::Array4< amrex::Real const > const &  wt,
int  icomp,
int  ncomp,
amrex::Array4< amrex::EBCellFlag const > const &  flag_arr,
amrex::Array4< amrex::Real const > const &  vfrac,
const amrex::Geometry &  geom,
bool  use_wts_in_divnc 
)

◆ ApplyInitialRedistribution() [1/2]

void amrex::ApplyInitialRedistribution ( amrex::Box const &  bx,
int  ncomp,
amrex::Array4< amrex::Real > const &  U_out,
amrex::Array4< amrex::Real > const &  U_in,
amrex::Array4< amrex::EBCellFlag const > const &  flag,
AMREX_D_DECL(amrex::Array4< amrex::Real const > const &apx, amrex::Array4< amrex::Real const > const &apy, amrex::Array4< amrex::Real const > const &apz)  ,
amrex::Array4< amrex::Real const > const &  vfrac,
AMREX_D_DECL(amrex::Array4< amrex::Real const > const &fcx, amrex::Array4< amrex::Real const > const &fcy, amrex::Array4< amrex::Real const > const &fcz)  ,
amrex::Array4< amrex::Real const > const &  ccc,
amrex::BCRec const *  d_bcrec_ptr,
amrex::Geometry const &  geom,
std::string const &  redistribution_type,
int  srd_max_order = 2,
amrex::Real  target_volfrac = 0.5_rt 
)

◆ ApplyInitialRedistribution() [2/2]

void amrex::ApplyInitialRedistribution ( Box const &  bx,
int  ncomp,
Array4< Real > const &  U_out,
Array4< Real > const &  U_in,
Array4< EBCellFlag const > const &  flag,
AMREX_D_DECL(amrex::Array4< amrex::Real const > const &apx, amrex::Array4< amrex::Real const > const &apy, amrex::Array4< amrex::Real const > const &apz)  ,
amrex::Array4< amrex::Real const > const &  vfrac,
AMREX_D_DECL(amrex::Array4< amrex::Real const > const &fcx, amrex::Array4< amrex::Real const > const &fcy, amrex::Array4< amrex::Real const > const &fcz)  ,
amrex::Array4< amrex::Real const > const &  ccc,
amrex::BCRec const *  d_bcrec_ptr,
Geometry const &  lev_geom,
std::string const &  redistribution_type,
int  srd_max_order,
amrex::Real  target_volfrac 
)

◆ ApplyMLRedistribution() [1/2]

void amrex::ApplyMLRedistribution ( amrex::Box const &  bx,
int  ncomp,
amrex::Array4< amrex::Real > const &  dUdt_out,
amrex::Array4< amrex::Real > const &  dUdt_in,
amrex::Array4< amrex::Real const > const &  U_in,
amrex::Array4< amrex::Real > const &  scratch,
amrex::Array4< amrex::EBCellFlag const > const &  flag,
AMREX_D_DECL(amrex::Array4< amrex::Real const > const &apx, amrex::Array4< amrex::Real const > const &apy, amrex::Array4< amrex::Real const > const &apz)  ,
amrex::Array4< amrex::Real const > const &  vfrac,
AMREX_D_DECL(amrex::Array4< amrex::Real const > const &fcx, amrex::Array4< amrex::Real const > const &fcy, amrex::Array4< amrex::Real const > const &fcz)  ,
amrex::Array4< amrex::Real const > const &  ccc,
amrex::BCRec const *  d_bcrec_ptr,
amrex::Geometry const &  lev_geom,
amrex::Real  dt,
std::string const &  redistribution_type,
int  as_crse,
amrex::Array4< amrex::Real > const &  rr_drho_crse,
amrex::Array4< int const > const &  rr_flag_crse,
int  as_fine,
amrex::Array4< amrex::Real > const &  dm_as_fine,
amrex::Array4< int const > const &  levmsk,
int  level_mask_not_covered,
amrex::Real  fac_for_deltaR = 1.0_rt,
bool  use_wts_in_divnc = false,
int  icomp = 0,
int  srd_max_order = 2,
amrex::Real  target_volfrac = 0.5_rt,
amrex::Array4< amrex::Real const > const &  update_scale = {} 
)

◆ ApplyMLRedistribution() [2/2]

void amrex::ApplyMLRedistribution ( Box const &  bx,
int  ncomp,
Array4< Real > const &  dUdt_out,
Array4< Real > const &  dUdt_in,
Array4< Real const > const &  U_in,
Array4< Real > const &  scratch,
Array4< EBCellFlag const > const &  flag,
AMREX_D_DECL(Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &apz)  ,
Array4< amrex::Real const > const &  vfrac,
AMREX_D_DECL(Array4< Real const > const &fcx, Array4< Real const > const &fcy, Array4< Real const > const &fcz)  ,
Array4< Real const > const &  ccc,
amrex::BCRec const *  d_bcrec_ptr,
Geometry const &  lev_geom,
Real  dt,
std::string const &  redistribution_type,
int  as_crse,
Array4< Real > const &  rr_drho_crse,
Array4< int const > const &  rr_flag_crse,
int  as_fine,
Array4< Real > const &  dm_as_fine,
Array4< int const > const &  levmsk,
int  level_mask_not_covered,
Real  fac_for_deltaR,
bool  use_wts_in_divnc,
int  icomp,
int  srd_max_order,
amrex::Real  target_volfrac,
Array4< Real const > const &  srd_update_scale 
)

◆ ApplyRedistribution() [1/2]

void amrex::ApplyRedistribution ( amrex::Box const &  bx,
int  ncomp,
amrex::Array4< amrex::Real > const &  dUdt_out,
amrex::Array4< amrex::Real > const &  dUdt_in,
amrex::Array4< amrex::Real const > const &  U_in,
amrex::Array4< amrex::Real > const &  scratch,
amrex::Array4< amrex::EBCellFlag const > const &  flag,
AMREX_D_DECL(amrex::Array4< amrex::Real const > const &apx, amrex::Array4< amrex::Real const > const &apy, amrex::Array4< amrex::Real const > const &apz)  ,
amrex::Array4< amrex::Real const > const &  vfrac,
AMREX_D_DECL(amrex::Array4< amrex::Real const > const &fcx, amrex::Array4< amrex::Real const > const &fcy, amrex::Array4< amrex::Real const > const &fcz)  ,
amrex::Array4< amrex::Real const > const &  ccc,
amrex::BCRec const *  d_bcrec_ptr,
amrex::Geometry const &  lev_geom,
amrex::Real  dt,
std::string const &  redistribution_type,
bool  use_wts_in_divnc = false,
int  srd_max_order = 2,
amrex::Real  target_volfrac = 0.5_rt,
amrex::Array4< amrex::Real const > const &  update_scale = {} 
)

◆ ApplyRedistribution() [2/2]

void amrex::ApplyRedistribution ( Box const &  bx,
int  ncomp,
Array4< Real > const &  dUdt_out,
Array4< Real > const &  dUdt_in,
Array4< Real const > const &  U_in,
Array4< Real > const &  scratch,
Array4< EBCellFlag const > const &  flag,
AMREX_D_DECL(Array4< Real const > const &apx, Array4< Real const > const &apy, Array4< Real const > const &apz)  ,
Array4< amrex::Real const > const &  vfrac,
AMREX_D_DECL(Array4< Real const > const &fcx, Array4< Real const > const &fcy, Array4< Real const > const &fcz)  ,
Array4< Real const > const &  ccc,
amrex::BCRec const *  d_bcrec_ptr,
Geometry const &  lev_geom,
Real  dt,
std::string const &  redistribution_type,
bool  use_wts_in_divnc,
int  srd_max_order,
amrex::Real  target_volfrac,
Array4< Real const > const &  srd_update_scale 
)

◆ arg()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE T amrex::arg ( const GpuComplex< T > &  a_z)
noexcept

Return the angle of a complex number's polar representation.

◆ Assert()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::Assert ( const char *  EX,
const char *  file,
int  line,
const char *  msg = nullptr 
)

◆ Assert_host()

void amrex::Assert_host ( const char *  EX,
const char *  file,
int  line,
const char *  msg 
)

Prints assertion failed messages to cerr and exits via abort(). Intended for use by the BL_ASSERT() macro in <AMReX_BLassert.H>.

◆ average_cellcenter_to_face() [1/2]

void amrex::average_cellcenter_to_face ( const Array< MultiFab *, AMREX_SPACEDIM > &  fc,
const MultiFab &  cc,
const Geometry &  geom,
int  ncomp,
bool  use_harmonic_averaging 
)

Average cell-centered MultiFab onto face-based MultiFab with geometric weighting.

◆ average_cellcenter_to_face() [2/2]

void amrex::average_cellcenter_to_face ( const Vector< MultiFab * > &  fc,
const MultiFab &  cc,
const Geometry &  geom,
int  ncomp,
bool  use_harmonic_averaging 
)

Average cell-centered MultiFab onto face-based MultiFab with geometric weighting.

◆ average_down() [1/4]

template<typename FAB >
void amrex::average_down ( const FabArray< FAB > &  S_fine,
FabArray< FAB > &  S_crse,
int  scomp,
int  ncomp,
const IntVect &  ratio 
)

Average fine MultiFab onto crse MultiFab without volume weighting. This routine DOES NOT assume that the crse BoxArray is a coarsened version of the fine BoxArray. Works for both cell-centered and nodal MultiFabs.

◆ average_down() [2/4]

template<typename FAB >
void amrex::average_down ( const FabArray< FAB > &  S_fine,
FabArray< FAB > &  S_crse,
int  scomp,
int  ncomp,
int  rr 
)

◆ average_down() [3/4]

void amrex::average_down ( const MultiFab &  S_fine,
MultiFab &  S_crse,
const Geometry &  fgeom,
const Geometry &  cgeom,
int  scomp,
int  ncomp,
const IntVect &  ratio 
)

Volume-weighted average of fine MultiFab onto coarse MultiFab.

Both MultiFabs are assumed to be cell-centered. This routine DOES NOT assume that the crse BoxArray is a coarsened version of the fine BoxArray.
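
A minimal usage sketch (S_fine, S_crse, fgeom, cgeom and ncomp are assumed to be an existing fine/coarse MultiFab pair, their Geometry objects, and a component count):

    // Volume-weighted average down, refinement ratio 2 in each direction.
    amrex::average_down(S_fine, S_crse, fgeom, cgeom, 0, ncomp, amrex::IntVect(2));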

◆ average_down() [4/4]

void amrex::average_down ( const MultiFab &  S_fine,
MultiFab &  S_crse,
const Geometry &  fgeom,
const Geometry &  cgeom,
int  scomp,
int  ncomp,
int  rr 
)

◆ average_down_edges() [1/3]

void amrex::average_down_edges ( const Array< const MultiFab *, AMREX_SPACEDIM > &  fine,
const Array< MultiFab *, AMREX_SPACEDIM > &  crse,
const IntVect &  ratio,
int  ngcrse 
)

◆ average_down_edges() [2/3]

void amrex::average_down_edges ( const MultiFab &  fine,
MultiFab &  crse,
const IntVect &  ratio,
int  ngcrse = 0 
)

This version averages down in one direction only. It uses the IndexType of the MultiFabs to determine the direction. It is expected that one direction is cell-centered and the rest are nodal.

◆ average_down_edges() [3/3]

void amrex::average_down_edges ( const Vector< const MultiFab * > &  fine,
const Vector< MultiFab * > &  crse,
const IntVect &  ratio,
int  ngcrse 
)

Average fine edge-based MultiFab onto crse edge-based MultiFab.

This routine does NOT assume that the crse BoxArray is a coarsened version of the fine BoxArray.

◆ average_down_faces() [1/7]

template<typename MF , std::enable_if_t< IsFabArray< MF >::value, int > = 0>
void amrex::average_down_faces ( const Array< const MF *, AMREX_SPACEDIM > &  fine,
const Array< MF *, AMREX_SPACEDIM > &  crse,
const IntVect &  ratio,
const Geometry &  crse_geom 
)

◆ average_down_faces() [2/7]

template<typename MF , std::enable_if_t< IsFabArray< MF >::value, int > = 0>
void amrex::average_down_faces ( const Array< const MF *, AMREX_SPACEDIM > &  fine,
const Array< MF *, AMREX_SPACEDIM > &  crse,
const IntVect &  ratio,
int  ngcrse = 0 
)

Average fine face-based FabArray onto crse face-based FabArray.

◆ average_down_faces() [3/7]

template<typename MF , std::enable_if_t< IsFabArray< MF >::value, int > = 0>
void amrex::average_down_faces ( const Array< const MF *, AMREX_SPACEDIM > &  fine,
const Array< MF *, AMREX_SPACEDIM > &  crse,
int  ratio,
int  ngcrse = 0 
)

Average fine face-based FabArray onto crse face-based FabArray.

◆ average_down_faces() [4/7]

template<typename FAB >
void amrex::average_down_faces ( const FabArray< FAB > &  fine,
FabArray< FAB > &  crse,
const IntVect &  ratio,
const Geometry &  crse_geom 
)

◆ average_down_faces() [5/7]

template<typename FAB >
void amrex::average_down_faces ( const FabArray< FAB > &  fine,
FabArray< FAB > &  crse,
const IntVect &  ratio,
int  ngcrse = 0 
)

This version averages down in one face direction only.

It uses the IndexType of MultiFabs to determine the direction. It is expected that one direction is nodal and the rest are cell-centered.

◆ average_down_faces() [6/7]

template<typename MF , std::enable_if_t< IsFabArray< MF >::value, int > = 0>
void amrex::average_down_faces ( const Vector< const MF * > &  fine,
const Vector< MF * > &  crse,
const IntVect &  ratio,
int  ngcrse = 0 
)

Average fine face-based FabArray onto crse face-based FabArray.

◆ average_down_faces() [7/7]

template<typename MF , std::enable_if_t< IsFabArray< MF >::value, int > = 0>
void amrex::average_down_faces ( const Vector< const MF * > &  fine,
const Vector< MF * > &  crse,
int  ratio,
int  ngcrse = 0 
)

Average fine face-based FabArray onto crse face-based FabArray.

◆ average_down_nodal()

template<typename FAB >
void amrex::average_down_nodal ( const FabArray< FAB > &  fine,
FabArray< FAB > &  crse,
const IntVect &  ratio,
int  ngcrse,
bool  mfiter_is_definitely_safe 
)

Average fine node-based MultiFab onto crse node-based MultiFab.

This routine assumes that the crse BoxArray is a coarsened version of the fine BoxArray.

◆ average_edge_to_cellcenter()

void amrex::average_edge_to_cellcenter ( MultiFab &  cc,
int  dcomp,
const Vector< const MultiFab * > &  edge,
int  ngrow = 0 
)

Average edge-based MultiFab onto cell-centered MultiFab.

This fills in ngrow ghost cells in the cell-centered MultiFab. Both the cell-centered and edge-based MultiFabs must have at least ngrow ghost cells.

◆ average_face_to_cellcenter() [1/4]

template<typename CMF , typename FMF , std::enable_if_t< IsFabArray_v< CMF > &&IsFabArray_v< FMF >, int > = 0>
void amrex::average_face_to_cellcenter ( CMF &  cc,
int  dcomp,
const Array< const FMF *, AMREX_SPACEDIM > &  fc,
int  ngrow = 0 
)

Average face-based FabArray onto cell-centered FabArray.

◆ average_face_to_cellcenter() [2/4]

void amrex::average_face_to_cellcenter ( MultiFab &  cc,
const Array< const MultiFab *, AMREX_SPACEDIM > &  fc,
const Geometry &  geom 
)

Average face-based MultiFab onto cell-centered MultiFab with geometric weighting.
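
A short usage sketch of this overload (cc, umac, vmac, wmac and geom are assumed to exist; cc needs AMREX_SPACEDIM components):

    // Gather the face-based MultiFabs and average them to cell centers
    // with geometric weighting.
    amrex::Array<const amrex::MultiFab*, AMREX_SPACEDIM> fc
        {{AMREX_D_DECL(&umac, &vmac, &wmac)}};
    amrex::average_face_to_cellcenter(cc, fc, geom);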

◆ average_face_to_cellcenter() [3/4]

void amrex::average_face_to_cellcenter ( MultiFab &  cc,
const Vector< const MultiFab * > &  fc,
const Geometry &  geom 
)

Average face-based MultiFab onto cell-centered MultiFab with geometric weighting.

◆ average_face_to_cellcenter() [4/4]

void amrex::average_face_to_cellcenter ( MultiFab &  cc,
int  dcomp,
const Vector< const MultiFab * > &  fc,
int  ngrow 
)

Average face-based MultiFab onto cell-centered MultiFab.

◆ average_node_to_cellcenter()

void amrex::average_node_to_cellcenter ( MultiFab &  cc,
int  dcomp,
const MultiFab &  nd,
int  scomp,
int  ncomp,
int  ngrow 
)

Average nodal-based MultiFab onto cell-centered MultiFab.

◆ avgDown()

void amrex::avgDown ( MultiFab &  S_crse,
MultiFab &  S_fine,
int  scomp,
int  dcomp,
int  ncomp,
Vector< int > &  ratio 
)

◆ avgDown_doit()

void amrex::avgDown_doit ( const FArrayBox &  fine_fab,
FArrayBox &  crse_fab,
const Box &  ovlp,
int  scomp,
int  dcomp,
int  ncomp,
Vector< int > &  ratio 
)

◆ Axpy()

template<typename T >
void amrex::Axpy ( AlgVector< T > &  y,
T  a,
AlgVector< T > const &  x,
bool  async = false 
)

◆ BaseFab_Finalize()

void amrex::BaseFab_Finalize ( )

◆ BaseFab_Initialize()

void amrex::BaseFab_Initialize ( )

◆ BASISREALV()

AMREX_GPU_HOST_DEVICE RealVect amrex::BASISREALV ( int  dir)
inline noexcept

Returns a basis vector in the given coordinate direction.
In 2-D:
BASISREALV(0) == (1.,0.); BASISREALV(1) == (0.,1.).
In 3-D:
BASISREALV(0) == (1.,0.,0.); BASISREALV(1) == (0.,1.,0.); BASISREALV(2) == (0.,0.,1.).
Note that the coordinate directions are based at zero.

◆ BASISV()

template<int dim = AMREX_SPACEDIM>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::BASISV ( int  dir)
noexcept

Returns a basis vector in the given coordinate direction; eg. IntVectND<3> BASISV<3>(1) == (0,1,0). Note that the coordinate directions are zero based.

◆ bdryHi()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::bdryHi ( const BoxND< dim > &  b,
int  dir,
int  len = 1 
)
noexcept

Returns the edge-centered BoxND (in direction dir) defining the high side of BoxND b.

◆ bdryLo()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::bdryLo ( const BoxND< dim > &  b,
int  dir,
int  len = 1 
)
noexcept

Returns the edge-centered BoxND (in direction dir) defining the low side of BoxND b.
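
For illustration, a small sketch using both bdryLo and bdryHi:

    amrex::Box b(amrex::IntVect(0), amrex::IntVect(15)); // cell-centered Box
    // Node-centered in direction 0, one index wide, on the low/high x side of b.
    amrex::Box xlo = amrex::bdryLo(b, 0);
    amrex::Box xhi = amrex::bdryHi(b, 0);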

◆ bdryNode()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::bdryNode ( const BoxND< dim > &  b,
Orientation  face,
int  len = 1 
)
noexcept

Similar to bdryLo and bdryHi except that it operates on the given face of box b.

◆ begin()

template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 amrex::begin ( BoxND< dim > const &  box)
noexcept

◆ begin_iv()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::begin_iv ( BoxND< dim > const &  box)
noexcept

◆ bisect() [1/2]

template<typename T , typename I , std::enable_if_t< std::is_integral_v< I >, int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE I amrex::bisect ( T const *  d,
I  lo,
I  hi,
T const &  v 
)

◆ bisect() [2/2]

template<class T , class F , std::enable_if_t< std::is_floating_point_v< T >, int > FOO = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE T amrex::bisect ( T  lo,
T  hi,
F  f,
T  tol = 1e-12,
int  max_iter = 100 
)
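
A small usage sketch of the floating-point overload, which performs a bisection search for a root of f, assuming f changes sign on [lo, hi]:

    // Approximates sqrt(2) by finding the root of x^2 - 2 on [0, 2].
    double r = amrex::bisect(0.0, 2.0,
                             [] (double x) { return x*x - 2.0; });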

◆ boxArray() [1/2]

template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF > &&(N > 0), int > = 0>
BoxArray const& amrex::boxArray ( Array< MF, N > const &  mf)

◆ boxArray() [2/2]

BoxArray const& amrex::boxArray ( FabArrayBase const &  fa)

◆ BoxCat()

template<int d, int... dims>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE BoxND<detail::get_sum<d, dims...>()> amrex::BoxCat ( const BoxND< d > &  bx,
const BoxND< dims > &...  boxes 
)
constexpr noexcept

Returns a BoxND obtained by concatenating the input BoxNDs. The dimension of the return value equals the sum of the dimensions of the input BoxNDs.

◆ boxComplement()

BoxArray amrex::boxComplement ( const Box &  b1in,
const Box &  b2 
)

Make a BoxArray from the complement of b2 in b1in.

◆ boxDiff() [1/2]

void amrex::boxDiff ( BoxList &  bl_diff,
const Box &  b1in,
const Box &  b2 
)

◆ boxDiff() [2/2]

BoxList amrex::boxDiff ( const Box &  b1in,
const Box &  b2 
)

Returns a BoxList defining the complement of b2 in b1in.
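
For example, a sketch in 3D:

    amrex::Box b1(amrex::IntVect(0), amrex::IntVect(7));
    amrex::Box b2(amrex::IntVect(4), amrex::IntVect(7));
    // bl holds Boxes covering the part of b1 not covered by b2.
    amrex::BoxList bl = amrex::boxDiff(b1, b2);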

◆ BoxExpand()

template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE BoxND<new_dim> amrex::BoxExpand ( const BoxND< old_dim > &  bx)
constexpr noexcept

Returns a new BoxND of dimension new_dim, copying the values of this BoxND into the leading dimensions and assigning (small=0, big=0, typ=CELL) to the remaining elements.

◆ BoxResize()

template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE BoxND<new_dim> amrex::BoxResize ( const BoxND< old_dim > &  bx)
constexpr noexcept

Returns a new BoxND of size new_dim by either shrinking or expanding this BoxND.

◆ BoxShrink()

template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE BoxND<new_dim> amrex::BoxShrink ( const BoxND< old_dim > &  bx)
constexpr noexcept

Returns a new BoxND of dimension new_dim and assigns the first new_dim dimensions of this BoxND to it.

◆ BoxSplit()

template<int d, int... dims>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE GpuTuple<BoxND<d>, BoxND<dims>...> amrex::BoxSplit ( const BoxND< detail::get_sum< d, dims... >()> &  bx)
constexpr noexcept

Returns a tuple of BoxNDs obtained by splitting the input BoxND according to the dimensions specified by the template arguments.
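
A small sketch of BoxSplit, with BoxCat (above) reversing the split:

    amrex::BoxND<3> b3(amrex::IntVectND<3>(0), amrex::IntVectND<3>(7));
    // Split into a 2D box (the first two dimensions) and a 1D box (the last).
    auto pieces = amrex::BoxSplit<2,1>(b3);
    auto bxy = amrex::get<0>(pieces); // BoxND<2>
    auto bz  = amrex::get<1>(pieces); // BoxND<1>
    amrex::BoxND<3> b3_again = amrex::BoxCat(bxy, bz);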

◆ BroadcastArray()

template<class T >
void amrex::BroadcastArray ( Vector< T > &  aT,
int  myLocalId,
int  rootId,
const MPI_Comm &  localComm 
)

◆ BroadcastBool()

void amrex::BroadcastBool ( bool &  bBool,
int  myLocalId,
int  rootId,
const MPI_Comm &  localComm 
)

◆ BroadcastString()

void amrex::BroadcastString ( std::string &  bStr,
int  myLocalId,
int  rootId,
const MPI_Comm &  localComm 
)

◆ BroadcastStringArray()

void amrex::BroadcastStringArray ( Vector< std::string > &  bSA,
int  myLocalId,
int  rootId,
const MPI_Comm &  localComm 
)

◆ bytesOf() [1/2]

template<typename Key , typename T , class Compare >
Long amrex::bytesOf ( const std::map< Key, T, Compare > &  m)

◆ bytesOf() [2/2]

template<typename T >
Long amrex::bytesOf ( const std::vector< T > &  v)

◆ call_device() [1/2]

template<class L >
AMREX_GPU_DEVICE void amrex::call_device ( L &&  f0)
noexcept

◆ call_device() [2/2]

template<class L , class... Lambdas>
AMREX_GPU_DEVICE void amrex::call_device ( L &&  f0,
Lambdas &&...  fs 
)
noexcept

◆ CartesianProduct()

template<typename... Ls>
constexpr auto amrex::CartesianProduct ( Ls...  )
constexpr

Cartesian Product of TypeLists.

For example,

    CartesianProduct(TypeList<std::integral_constant<int,0>,
                              std::integral_constant<int,1>>{},
                     TypeList<std::integral_constant<int,2>,
                              std::integral_constant<int,3>>{});

returns TypeList of TypeList of integral_constants {{0,2},{1,2},{0,3},{1,3}}.

◆ cast() [1/2]

template<class Tto , class Tfrom >
AMREX_GPU_HOST_DEVICE void amrex::cast ( BaseFab< Tto > &  tofab,
BaseFab< Tfrom > const &  fromfab,
Box const &  bx,
SrcComp  scomp,
DestComp  dcomp,
NumComps  ncomp 
)
noexcept

◆ cast() [2/2]

template<typename T , typename U >
T amrex::cast ( U const &  mf_in)

example: auto mf = amrex::cast<MultiFab>(imf);

◆ ccprotect_2d()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::ccprotect_2d ( int  ic,
int  jc,
int  ,
int  nvar,
Box const &  fine_bx,
IntVect const &  ratio,
GeometryData  cs_geomdata,
GeometryData  fn_geomdata,
Array4< T > const &  fine,
Array4< T const > const &  fine_state 
)
noexcept

◆ ccprotect_3d()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::ccprotect_3d ( int  ic,
int  jc,
int  kc,
int  nvar,
Box const &  fine_bx,
IntVect const &  ratio,
Array4< T > const &  fine,
Array4< T const > const &  fine_state 
)
noexcept

◆ ccquartic_interp()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::ccquartic_interp ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  crse,
Array4< Real > const &  fine 
)
noexcept

◆ cell_quartic_interp_x()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::cell_quartic_interp_x ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  fine,
Array4< Real const > const &  crse 
)
noexcept

◆ cell_quartic_interp_y()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::cell_quartic_interp_y ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  fine,
Array4< Real const > const &  crse 
)
noexcept

◆ cell_quartic_interp_z()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::cell_quartic_interp_z ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  fine,
Array4< Real const > const &  crse 
)
noexcept

◆ cholsol_for_eb() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::cholsol_for_eb ( Array2D< Real, 0, 17, 0, 5 > &  Amatrix,
Array1D< Real, 0, 5 > &  b 
)

◆ cholsol_for_eb() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::cholsol_for_eb ( Array2D< Real, 0, 53, 0, 9 > &  Amatrix,
Array1D< Real, 0, 9 > &  b 
)

◆ cholsol_np10()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::cholsol_np10 ( Array2D< Real, 0, 35, 0, 9 > &  Amatrix,
Array1D< Real, 0, 9 > &  b 
)

◆ cholsol_np6()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::cholsol_np6 ( Array2D< Real, 0, 11, 0, 5 > &  Amatrix,
Array1D< Real, 0, 5 > &  b 
)

◆ cic_interpolate()

template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::cic_interpolate ( const P &  p,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  plo,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  dxi,
const amrex::Array4< amrex::Real const > &  data_arr,
amrex::ParticleReal *  val,
int  M = AMREX_SPACEDIM 
)

Linearly interpolates the mesh data to the particle position from cell-centered data.

◆ cic_interpolate_cc()

template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::cic_interpolate_cc ( const P &  p,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  plo,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  dxi,
const amrex::Array4< amrex::Real const > &  data_arr,
amrex::ParticleReal *  val,
int  M = AMREX_SPACEDIM 
)

◆ cic_interpolate_nd()

template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::cic_interpolate_nd ( const P &  p,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  plo,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  dxi,
const amrex::Array4< amrex::Real const > &  data_arr,
amrex::ParticleReal *  val,
int  M = AMREX_SPACEDIM 
)

Linearly interpolates the mesh data to the particle position from node-centered data.

◆ Clamp()

template<typename T >
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE const T& amrex::Clamp ( const T &  v,
const T &  lo,
const T &  hi 
)
constexpr

◆ clz()

template<class T , std::enable_if_t< std::is_same_v< std::decay_t< T >, std::uint8_t >||std::is_same_v< std::decay_t< T >, std::uint16_t >||std::is_same_v< std::decay_t< T >, std::uint32_t >||std::is_same_v< std::decay_t< T >, std::uint64_t >, int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int amrex::clz ( T  x)
noexcept

◆ clz_generic() [1/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int amrex::clz_generic ( std::uint16_t  x)
noexcept

◆ clz_generic() [2/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int amrex::clz_generic ( std::uint32_t  x)
noexcept

◆ clz_generic() [3/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int amrex::clz_generic ( std::uint64_t  x)
noexcept

◆ clz_generic() [4/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int amrex::clz_generic ( std::uint8_t  x)
noexcept

◆ coarsen() [1/12]

void amrex::coarsen ( BoxDomain &  dest,
const BoxDomain &  fin,
int  ratio 
)

Coarsen all Boxes in the domain by the refinement ratio. The result is placed into a new BoxDomain.

◆ coarsen() [2/12]

BoxArray amrex::coarsen ( const BoxArray &  ba,
const IntVect &  ratio 
)

◆ coarsen() [3/12]

BoxArray amrex::coarsen ( const BoxArray &  ba,
int  ratio 
)

◆ coarsen() [4/12]

BoxList amrex::coarsen ( const BoxList &  bl,
int  ratio 
)

Returns a new BoxList in which each Box is coarsened by the given ratio.

◆ coarsen() [5/12]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::coarsen ( const BoxND< dim > &  b,
const IntVectND< dim > &  ref_ratio 
)
noexcept

Coarsen BoxND by given (positive) refinement ratio. NOTE: if type(dir) = CELL centered: lo <- lo/ratio and hi <- hi/ratio. NOTE: if type(dir) = NODE centered: lo <- lo/ratio and hi <- hi/ratio + ((hi%ratio)==0 ? 0 : 1). That is, refinement of the coarsened BoxND must contain the original BoxND.

◆ coarsen() [6/12]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::coarsen ( const BoxND< dim > &  b,
int  ref_ratio 
)
noexcept

Coarsen BoxND by given (positive) refinement ratio. NOTE: if type(dir) = CELL centered: lo <- lo/ratio and hi <- hi/ratio. NOTE: if type(dir) = NODE centered: lo <- lo/ratio and hi <- hi/ratio + ((hi%ratio)==0 ? 0 : 1). That is, refinement of the coarsened BoxND must contain the original BoxND.

◆ coarsen() [7/12]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::coarsen ( const IntVectND< dim > &  p,
int  s 
)
noexcept

Returns an IntVectND that is the component-wise integer projection of p by s.
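
The projection rounds toward minus infinity. For example, in 3D with s = 2:

    amrex::IntVect p(AMREX_D_DECL(5, -5, 7));
    amrex::IntVect c = amrex::coarsen(p, 2); // (2, -3, 3)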

◆ coarsen() [8/12]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::coarsen ( const IntVectND< dim > &  p1,
const IntVectND< dim > &  p2 
)
noexcept

Returns an IntVectND which is the component-wise integer projection of IntVectND p1 by IntVectND p2.

◆ coarsen() [9/12]

template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 amrex::coarsen ( Dim3 const &  fine,
IntVectND< dim > const &  ratio 
)
noexcept

◆ coarsen() [10/12]

Geometry amrex::coarsen ( Geometry const &  fine,
int  rr 
)
inline

◆ coarsen() [11/12]

Geometry amrex::coarsen ( Geometry const &  fine,
IntVect const &  rr 
)
inline

◆ coarsen() [12/12]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int amrex::coarsen ( int  i,
int  ratio 
)
noexcept

◆ coarsen_overset_mask() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int amrex::coarsen_overset_mask ( Box const &  bx,
Array4< int > const &  cmsk,
Array4< int const > const &  fmsk 
)
noexcept

◆ coarsen_overset_mask() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::coarsen_overset_mask ( int  i,
int  j,
int  k,
Array4< int > const &  cmsk,
Array4< int const > const &  fmsk 
)
noexcept

◆ CollectMProfStats()

void amrex::CollectMProfStats ( std::map< std::string, BLProfiler::ProfStats > &  mProfStats,
const Vector< Vector< BLProfStats::FuncStat > > &  funcStats,
const Vector< std::string > &  fNames,
Real  runTime,
int  whichProc 
)

◆ command_argument_count()

int amrex::command_argument_count ( )

◆ communicateParticlesFinish()

void amrex::communicateParticlesFinish ( const ParticleCopyPlan &  plan)

◆ communicateParticlesStart()

template<class PC , class SndBuffer , class RcvBuffer , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
void amrex::communicateParticlesStart ( const PC &  pc,
ParticleCopyPlan &  plan,
const SndBuffer &  snd_buffer,
RcvBuffer &  rcv_buffer 
)

◆ complementIn() [1/3]

BoxArray amrex::complementIn ( const Box &  b,
const BoxArray &  ba 
)

Make a BoxArray from the complement of BoxArray ba in Box b.

◆ complementIn() [2/3]

BoxDomain amrex::complementIn ( const Box &  b,
const BoxDomain &  bl 
)

Returns the complement of BoxDomain bl in Box b.

◆ complementIn() [3/3]

BoxList amrex::complementIn ( const Box &  b,
const BoxList &  bl 
)

Returns a BoxList defining the complement of BoxList bl in Box b.

◆ computeDivergence()

void amrex::computeDivergence ( MultiFab &  divu,
const Array< MultiFab const *, AMREX_SPACEDIM > &  umac,
const Geometry &  geom 
)

Computes divergence of face-data stored in the umac MultiFab.

◆ computeGradient()

void amrex::computeGradient ( MultiFab &  grad,
const Array< MultiFab const *, AMREX_SPACEDIM > &  umac,
const Geometry &  geom 
)

Computes gradient of face-data stored in the umac MultiFab.
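
A usage sketch covering computeDivergence and computeGradient above (divu, grad, umac, vmac, wmac and geom are assumed to exist; divu needs one component and grad AMREX_SPACEDIM components):

    amrex::Array<amrex::MultiFab const*, AMREX_SPACEDIM> faces
        {{AMREX_D_DECL(&umac, &vmac, &wmac)}};
    amrex::computeDivergence(divu, faces, geom); // cell-centered div(u)
    amrex::computeGradient(grad, faces, geom);   // cell-centered gradient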

◆ computeNeighborProcs()

Vector< int > amrex::computeNeighborProcs ( const ParGDBBase *  a_gdb,
int  ngrow 
)

◆ computeRefFac()

IntVect amrex::computeRefFac ( const ParGDBBase *  a_gdb,
int  src_lev,
int  lev 
)

◆ Concatenate()

std::string amrex::Concatenate ( const std::string &  root,
int  num,
int  mindigits 
)

Returns rootNNNN where NNNN == num.
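
For example:

    // num is zero-padded to at least mindigits digits: yields "plt00042".
    std::string name = amrex::Concatenate("plt", 42, 5);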

◆ constexpr_for()

template<auto I, auto N, class F >
AMREX_GPU_HOST_DEVICE constexpr AMREX_INLINE void amrex::constexpr_for ( F const &  f)
constexpr

◆ convert() [1/4]

BoxArray amrex::convert ( const BoxArray &  ba,
const IntVect &  typ 
)

◆ convert() [2/4]

BoxArray amrex::convert ( const BoxArray &  ba,
IndexType  typ 
)

◆ convert() [3/4]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::convert ( const BoxND< dim > &  b,
const IndexTypeND< dim > &  typ 
)
noexcept

◆ convert() [4/4]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::convert ( const BoxND< dim > &  b,
const IntVectND< dim > &  typ 
)
noexcept

Returns a BoxND with different type.

◆ convexify()

Vector< MultiFab > amrex::convexify ( Vector< MultiFab const * > const &  mf,
Vector< IntVect > const &  refinement_ratio 
)

Convexify AMR data.

This function "convexifies" the AMR data by removing cells that are covered by fine levels from coarse level MultiFabs. This could be useful for visualization. The returned MultiFabs on coarse levels have different BoxArrays from the original BoxArrays. For the finest level, the data is simply copied to the returned object. The returned MultiFabs have no ghost cells. For nodal data, the nodes on the coarse/fine interface exist on both levels.

◆ Copy() [1/2]

template<class DFAB , class SFAB , std::enable_if_t< std::conjunction_v< IsBaseFab< DFAB >, IsBaseFab< SFAB >, std::is_convertible< typename SFAB::value_type, typename DFAB::value_type >>, int > BAR = 0>
void amrex::Copy ( FabArray< DFAB > &  dst,
FabArray< SFAB > const &  src,
int  srccomp,
int  dstcomp,
int  numcomp,
const IntVect &  nghost 
)

◆ Copy() [2/2]

template<class DFAB , class SFAB , std::enable_if_t< std::conjunction_v< IsBaseFab< DFAB >, IsBaseFab< SFAB >, std::is_convertible< typename SFAB::value_type, typename DFAB::value_type >>, int > BAR = 0>
void amrex::Copy ( FabArray< DFAB > &  dst,
FabArray< SFAB > const &  src,
int  srccomp,
int  dstcomp,
int  numcomp,
int  nghost 
)

◆ copyParticle() [1/2]

template<typename T_ParticleType , int NAR, int NAI>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::copyParticle ( const ParticleTileData< T_ParticleType, NAR, NAI > &  dst,
const ConstParticleTileData< T_ParticleType, NAR, NAI > &  src,
int  src_i,
int  dst_i 
)
noexcept

A general single particle copying routine that can run on the GPU.

Template Parameters
NSR: number of extra reals in the particle struct
NSI: number of extra ints in the particle struct
NAR: number of reals in the struct-of-arrays
NAI: number of ints in the struct-of-arrays
Parameters
dst: the destination tile
src: the source tile
src_i: the index in the source to read from
dst_i: the index in the destination to write to

◆ copyParticle() [2/2]

template<typename T_ParticleType , int NAR, int NAI>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::copyParticle ( const ParticleTileData< T_ParticleType, NAR, NAI > &  dst,
const ParticleTileData< T_ParticleType, NAR, NAI > &  src,
int  src_i,
int  dst_i 
)
noexcept

A general single particle copying routine that can run on the GPU.

Template Parameters
NSR: number of extra reals in the particle struct
NSI: number of extra ints in the particle struct
NAR: number of reals in the struct-of-arrays
NAI: number of ints in the struct-of-arrays
Parameters
dst: the destination tile
src: the source tile
src_i: the index in the source to read from
dst_i: the index in the destination to write to

◆ copyParticles() [1/2]

template<typename DstTile , typename SrcTile >
void amrex::copyParticles ( DstTile &  dst,
const SrcTile &  src 
)
noexcept

Copy particles from src to dst. This version copies all the particles, writing them to the beginning of dst.

Template Parameters
DstTile: the dst particle tile type
SrcTile: the src particle tile type
Parameters
dst: the destination tile
src: the source tile

◆ copyParticles() [2/2]

template<typename DstTile , typename SrcTile , typename Index , typename N , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
void amrex::copyParticles ( DstTile &  dst,
const SrcTile &  src,
Index  src_start,
Index  dst_start,
N  n 
)
noexcept

Copy particles from src to dst. This version copies n particles starting at index src_start, writing the result starting at dst_start.

Template Parameters
DstTile: the dst particle tile type
SrcTile: the src particle tile type
Index: the index type, e.g. unsigned int
N: the size type, e.g. Long
Parameters
dst: the destination tile
src: the source tile
src_start: the offset at which to start reading particles from src
dst_start: the offset at which to start writing particles to dst
n: the number of particles to write

◆ CountSnds()

Long amrex::CountSnds ( const std::map< int, Vector< char > > &  not_ours,
Vector< Long > &  Snds 
)

◆ CreateDirectoryFailed()

void amrex::CreateDirectoryFailed ( const std::string &  dir)

Output a message and abort when the directory could not be created.

◆ CreateWriteHDF5AttrDouble()

static int amrex::CreateWriteHDF5AttrDouble ( hid_t  loc,
const char *  name,
hsize_t  n,
const double *  data 
)
static

◆ CreateWriteHDF5AttrInt()

static int amrex::CreateWriteHDF5AttrInt ( hid_t  loc,
const char *  name,
hsize_t  n,
const int *  data 
)
static

◆ CreateWriteHDF5AttrString()

static int amrex::CreateWriteHDF5AttrString ( hid_t  loc,
const char *  name,
const char *  str 
)
static

◆ CRRBetweenLevels()

int amrex::CRRBetweenLevels ( int  fromlevel,
int  tolevel,
const Vector< int > &  refratios 
)

◆ DeallocateRandomSeedDevArray()

void amrex::DeallocateRandomSeedDevArray ( )

◆ decomp_chol_np10()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::decomp_chol_np10 ( Array2D< Real, 0, 9, 0, 9 > &  aa)

◆ decomp_chol_np6()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::decomp_chol_np6 ( Array2D< Real, 0, 5, 0, 5 > &  aa)

◆ decompose()

BoxArray amrex::decompose ( Box const &  domain,
int  nboxes,
Array< bool, AMREX_SPACEDIM > const &  decomp = {AMREX_D_DECL(true, true, true)},
bool  no_overlap = false 
)

Decompose domain box into BoxArray.

The returned BoxArray has nboxes Boxes, unless the domain is too small. We aim to decompose the domain into subdomains that are as cubic as possible, even if this results in Boxes with odd numbers of cells. Thus, this function is generally not suited for applications with multiple AMR levels or for multigrid solvers.

Parameters
domain: Domain Box
nboxes: the target number of Boxes
decomp: controls whether domain decomposition should be done in each direction.
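
A short sketch:

    // Decompose a 128^3 domain into (up to) 64 roughly cubic Boxes.
    amrex::Box domain(amrex::IntVect(0), amrex::IntVect(127));
    amrex::BoxArray ba = amrex::decompose(domain, 64);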

◆ DefaultGeometry()

const Geometry& amrex::DefaultGeometry ( )
inline

◆ demangle()

std::string amrex::demangle ( const char *  name)
inline

Demangle C++ name.

Demangle C++ name if possible. For example

    amrex::Box box;
    std::cout << amrex::demangle(typeid(box).name());

Demangling turns "N5amrex3BoxE" into "amrex::Box".

◆ diagShift()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::diagShift ( const IntVectND< dim > &  p,
int  s 
)
noexcept

Returns IntVectND obtained by adding s to each of the components of this IntVectND.

◆ disableFPExcept()

FPExcept amrex::disableFPExcept ( FPExcept  excepts)

Disable FP exceptions. Linux Only.

This function disables given exception traps and keeps the status of the others. The example below disables FPE invalid and divide-by-zero, and later restores the previous settings.

    auto prev_excepts = disableFPExcept(FPExcept::invalid | FPExcept::zero);
    // ....
    setFPExcept(prev_excepts); // restore previous settings

◆ DistributionMap() [1/2]

template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF > &&(N > 0), int > = 0>
DistributionMapping const& amrex::DistributionMap ( Array< MF, N > const &  mf)

◆ DistributionMap() [2/2]

DistributionMapping const& amrex::DistributionMap ( FabArrayBase const &  fa)

◆ Divide() [1/2]

template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::Divide ( FabArray< FAB > &  dst,
FabArray< FAB > const &  src,
int  srccomp,
int  dstcomp,
int  numcomp,
const IntVect &  nghost 
)

◆ Divide() [2/2]

template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::Divide ( FabArray< FAB > &  dst,
FabArray< FAB > const &  src,
int  srccomp,
int  dstcomp,
int  numcomp,
int  nghost 
)

◆ doHandShake()

Long amrex::doHandShake ( const std::map< int, Vector< char > > &  not_ours,
Vector< Long > &  Snds,
Vector< Long > &  Rcvs 
)

◆ doHandShakeLocal()

Long amrex::doHandShakeLocal ( const std::map< int, Vector< char > > &  not_ours,
const Vector< int > &  neighbor_procs,
Vector< Long > &  Snds,
Vector< Long > &  Rcvs 
)

◆ Dot() [1/2]

template<typename T >
T amrex::Dot ( AlgVector< T > const &  x,
AlgVector< T > const &  y,
bool  local = false 
)

◆ Dot() [2/2]

template<typename FAB , std::enable_if_t< IsBaseFab< FAB >::value, int > FOO = 0>
FAB::value_type amrex::Dot ( FabArray< FAB > const &  x,
int  xcomp,
FabArray< FAB > const &  y,
int  ycomp,
int  ncomp,
IntVect const &  nghost,
bool  local = false 
)

Compute dot products of two FabArrays.

Parameters
x: first FabArray
xcomp: starting component of x
y: second FabArray
ycomp: starting component of y
ncomp: number of components
nghost: number of ghost cells
local: If true, MPI communication is skipped.
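
A usage sketch (x and y are assumed to be MultiFabs defined on the same BoxArray and DistributionMapping):

    // Dot product of component 0 of x and y over valid cells (no ghost cells).
    amrex::Real r = amrex::Dot(x, 0, y, 0, 1, amrex::IntVect(0));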

◆ dtoh_memcpy() [1/2]

template<class FAB , class foo = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::dtoh_memcpy ( FabArray< FAB > &  dst,
FabArray< FAB > const &  src 
)

◆ dtoh_memcpy() [2/2]

template<class FAB , class foo = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::dtoh_memcpy ( FabArray< FAB > &  dst,
FabArray< FAB > const &  src,
int  scomp,
int  dcomp,
int  ncomp 
)

◆ eb_add_divergence_from_flow() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_add_divergence_from_flow ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  divu,
Array4< Real const > const &  vel_eb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vfrc,
Array4< Real const > const &  bnorm,
Array4< Real const > const &  barea,
GpuArray< Real, 2 > const &  dxinv 
)

◆ eb_add_divergence_from_flow() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_add_divergence_from_flow ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  divu,
Array4< Real const > const &  vel_eb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vfrc,
Array4< Real const > const &  bnorm,
Array4< Real const > const &  barea,
GpuArray< Real, 3 > const &  dxinv 
)

◆ EB_average_down() [1/3]

void amrex::EB_average_down ( const MultiFab &  S_fine,
MultiFab &  S_crse,
const MultiFab &  vol_fine,
const MultiFab &  vfrac_fine,
int  scomp,
int  ncomp,
const IntVect &  ratio 
)

◆ EB_average_down() [2/3]

void amrex::EB_average_down ( const MultiFab &  S_fine,
MultiFab &  S_crse,
int  scomp,
int  ncomp,
const IntVect &  ratio 
)

◆ EB_average_down() [3/3]

void amrex::EB_average_down ( const MultiFab &  S_fine,
MultiFab &  S_crse,
int  scomp,
int  ncomp,
int  ratio 
)

◆ EB_average_down_boundaries() [1/2]

void amrex::EB_average_down_boundaries ( const MultiFab &  fine,
MultiFab &  crse,
const IntVect &  ratio,
int  ngcrse 
)

◆ EB_average_down_boundaries() [2/2]

void amrex::EB_average_down_boundaries ( const MultiFab &  fine,
MultiFab &  crse,
int  ratio,
int  ngcrse 
)

◆ EB_average_down_faces() [1/3]

void amrex::EB_average_down_faces ( const Array< const MultiFab *, AMREX_SPACEDIM > &  fine,
const Array< MultiFab *, AMREX_SPACEDIM > &  crse,
const IntVect &  ratio,
const Geometry &  crse_geom 
)

◆ EB_average_down_faces() [2/3]

void amrex::EB_average_down_faces ( const Array< const MultiFab *, AMREX_SPACEDIM > &  fine,
const Array< MultiFab *, AMREX_SPACEDIM > &  crse,
const IntVect &  ratio,
int  ngcrse 
)

◆ EB_average_down_faces() [3/3]

void amrex::EB_average_down_faces ( const Array< const MultiFab *, AMREX_SPACEDIM > &  fine,
const Array< MultiFab *, AMREX_SPACEDIM > &  crse,
int  ratio,
int  ngcrse 
)

◆ EB_average_face_to_cellcenter()

void amrex::EB_average_face_to_cellcenter ( MultiFab &  ccmf,
int  dcomp,
const Array< MultiFab const *, AMREX_SPACEDIM > &  fmf 
)

◆ eb_avg_fc_to_cc() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_avg_fc_to_cc ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  cc,
Array4< Real const > const &  fx,
Array4< Real const > const &  fy,
Array4< Real const > const &  ax,
Array4< Real const > const &  ay,
Array4< EBCellFlag const > const &  flag 
)

◆ eb_avg_fc_to_cc() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_avg_fc_to_cc ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  cc,
Array4< Real const > const &  fx,
Array4< Real const > const &  fy,
Array4< Real const > const &  fz,
Array4< Real const > const &  ax,
Array4< Real const > const &  ay,
Array4< Real const > const &  az,
Array4< EBCellFlag const > const &  flag 
)

◆ eb_avgdown()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_avgdown ( int  i,
int  j,
int  k,
Array4< Real const > const &  fine,
int  fcomp,
Array4< Real > const &  crse,
int  ccomp,
Array4< Real const > const &  vfrc,
Dim3 const &  ratio,
int  ncomp 
)

◆ eb_avgdown_boundaries()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_avgdown_boundaries ( int  i,
int  j,
int  k,
Array4< Real const > const &  fine,
int  fcomp,
Array4< Real > const &  crse,
int  ccomp,
Array4< Real const > const &  ba,
Dim3 const &  ratio,
int  ncomp 
)

◆ eb_avgdown_face_x()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_avgdown_face_x ( int  i,
int  j,
int  k,
Array4< Real const > const &  fine,
int  fcomp,
Array4< Real > const &  crse,
int  ccomp,
Array4< Real const > const &  area,
Dim3 const &  ratio,
int  ncomp 
)

◆ eb_avgdown_face_y()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_avgdown_face_y ( int  i,
int  j,
int  k,
Array4< Real const > const &  fine,
int  fcomp,
Array4< Real > const &  crse,
int  ccomp,
Array4< Real const > const &  area,
Dim3 const &  ratio,
int  ncomp 
)

◆ eb_avgdown_face_z()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_avgdown_face_z ( int  i,
int  j,
int  k,
Array4< Real const > const &  fine,
int  fcomp,
Array4< Real > const &  crse,
int  ccomp,
Array4< Real const > const &  area,
Dim3 const &  ratio,
int  ncomp 
)

◆ eb_avgdown_with_vol()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_avgdown_with_vol ( int  i,
int  j,
int  k,
Array4< Real const > const &  fine,
int  fcomp,
Array4< Real > const &  crse,
int  ccomp,
Array4< Real const > const &  fv,
Array4< Real const > const &  vfrc,
Dim3 const &  ratio,
int  ncomp 
)

◆ eb_compute_divergence() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_compute_divergence ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  divu,
Array4< Real const > const &  u,
Array4< Real const > const &  v,
Array4< int const > const &  ccm,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vfrc,
Array4< Real const > const &  apx,
Array4< Real const > const &  apy,
Array4< Real const > const &  fcx,
Array4< Real const > const &  fcy,
GpuArray< Real, 2 > const &  dxinv,
bool  already_on_centroids 
)

◆ eb_compute_divergence() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_compute_divergence ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  divu,
Array4< Real const > const &  u,
Array4< Real const > const &  v,
Array4< Real const > const &  w,
Array4< int const > const &  ccm,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vfrc,
Array4< Real const > const &  apx,
Array4< Real const > const &  apy,
Array4< Real const > const &  apz,
Array4< Real const > const &  fcx,
Array4< Real const > const &  fcy,
Array4< Real const > const &  fcz,
GpuArray< Real, 3 > const &  dxinv,
bool  already_on_centroids 
)

◆ EB_computeDivergence() [1/2]

void amrex::EB_computeDivergence ( MultiFab &  divu,
const Array< MultiFab const *, AMREX_SPACEDIM > &  umac,
const Geometry &  geom,
bool  already_on_centroids 
)

◆ EB_computeDivergence() [2/2]

void amrex::EB_computeDivergence ( MultiFab &  divu,
const Array< MultiFab const *, AMREX_SPACEDIM > &  umac,
const Geometry &  geom,
bool  already_on_centroids,
const MultiFab &  vel_eb 
)

◆ eb_flux_reg_crseadd_va() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_flux_reg_crseadd_va ( int  i,
int  j,
int  k,
Array4< Real > const &  d,
Array4< int const > const &  flag,
Array4< Real const > const &  fx,
Array4< Real const > const &  fy,
Array4< Real const > const &  fz,
Array4< Real const > const &  vfrac,
Array4< Real const > const &  ax,
Array4< Real const > const &  ay,
Array4< Real const > const &  az,
Real  dtdx,
Real  dtdy,
Real  dtdz,
int  ncomp 
)

◆ eb_flux_reg_crseadd_va() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_flux_reg_crseadd_va ( int  i,
int  j,
int  k,
Array4< Real > const &  d,
Array4< int const > const &  flag,
Array4< Real const > const &  fx,
Array4< Real const > const &  fy,
Array4< Real const > const &  vfrac,
Array4< Real const > const &  ax,
Array4< Real const > const &  ay,
Real  dtdx,
Real  dtdy,
int  ncomp 
)

◆ eb_flux_reg_cvol() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::eb_flux_reg_cvol ( int  i,
int  j,
Array4< Real const > const &  vfrac,
Dim3 const &  ratio,
Real  threshold 
)
noexcept

◆ eb_flux_reg_cvol() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::eb_flux_reg_cvol ( int  i,
int  j,
int  k,
Array4< Real const > const &  vfrac,
Dim3 const &  ratio,
Real  small 
)
noexcept

◆ eb_flux_reg_fineadd_dm()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_flux_reg_fineadd_dm ( int  i,
int  j,
int  k,
int  n,
Box const &  dmbx,
Array4< Real > const &  d,
Array4< Real const > const &  dm,
Array4< Real const > const &  vfrac,
Dim3 const &  ratio,
Real  threshold 
)

◆ eb_flux_reg_fineadd_va_xhi()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_flux_reg_fineadd_va_xhi ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  d,
Array4< Real const > const &  f,
Array4< Real const > const &  vfrac,
Array4< Real const > const &  a,
Real  fac,
Dim3 const &  ratio 
)

◆ eb_flux_reg_fineadd_va_xlo()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_flux_reg_fineadd_va_xlo ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  d,
Array4< Real const > const &  f,
Array4< Real const > const &  vfrac,
Array4< Real const > const &  a,
Real  fac,
Dim3 const &  ratio 
)

◆ eb_flux_reg_fineadd_va_yhi()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_flux_reg_fineadd_va_yhi ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  d,
Array4< Real const > const &  f,
Array4< Real const > const &  vfrac,
Array4< Real const > const &  a,
Real  fac,
Dim3 const &  ratio 
)

◆ eb_flux_reg_fineadd_va_ylo()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_flux_reg_fineadd_va_ylo ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  d,
Array4< Real const > const &  f,
Array4< Real const > const &  vfrac,
Array4< Real const > const &  a,
Real  fac,
Dim3 const &  ratio 
)

◆ eb_flux_reg_fineadd_va_zhi()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_flux_reg_fineadd_va_zhi ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  d,
Array4< Real const > const &  f,
Array4< Real const > const &  vfrac,
Array4< Real const > const &  a,
Real  fac,
Dim3 const &  ratio 
)

◆ eb_flux_reg_fineadd_va_zlo()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_flux_reg_fineadd_va_zlo ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  d,
Array4< Real const > const &  f,
Array4< Real const > const &  vfrac,
Array4< Real const > const &  a,
Real  fac,
Dim3 const &  ratio 
)

◆ eb_interp_cc2cent()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_interp_cc2cent ( Box const &  box,
const Array4< Real > &  phicent,
Array4< Real const > const &  phicc,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  cent,
int  ncomp 
)
noexcept

◆ eb_interp_cc2face_x()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_interp_cc2face_x ( Box const &  ubx,
Array4< Real const > const &  phi,
Array4< Real > const &  edg_x,
int  ncomp,
const Box domain,
const BCRec bc 
)
noexcept

◆ eb_interp_cc2face_y()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_interp_cc2face_y ( Box const &  vbx,
Array4< Real const > const &  phi,
Array4< Real > const &  edg_y,
int  ncomp,
const Box domain,
const BCRec bc 
)
noexcept

◆ eb_interp_cc2face_z()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_interp_cc2face_z ( Box const &  wbx,
Array4< Real const > const &  phi,
Array4< Real > const &  edg_z,
int  ncomp,
const Box domain,
const BCRec bc 
)
noexcept

◆ eb_interp_cc2facecent_x()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_interp_cc2facecent_x ( Box const &  ubx,
Array4< Real const > const &  phi,
Array4< Real const > const &  apx,
Array4< Real const > const &  fcx,
Array4< Real > const &  edg_x,
int  ncomp,
const Box domain,
const BCRec bc 
)
noexcept

◆ eb_interp_cc2facecent_y()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_interp_cc2facecent_y ( Box const &  vbx,
Array4< Real const > const &  phi,
Array4< Real const > const &  apy,
Array4< Real const > const &  fcy,
Array4< Real > const &  edg_y,
int  ncomp,
const Box domain,
const BCRec bc 
)
noexcept

◆ eb_interp_cc2facecent_z()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_interp_cc2facecent_z ( Box const &  wbx,
Array4< Real const > const &  phi,
Array4< Real const > const &  apz,
Array4< Real const > const &  fcz,
Array4< Real > const &  edg_z,
int  ncomp,
const Box domain,
const BCRec bc 
)
noexcept

◆ EB_interp_CC_to_Centroid()

void amrex::EB_interp_CC_to_Centroid ( MultiFab cent,
const MultiFab cc,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry geom 
)

◆ EB_interp_CC_to_FaceCentroid()

void amrex::EB_interp_CC_to_FaceCentroid ( const MultiFab cc,
AMREX_D_DECL(MultiFab &fc_x, MultiFab &fc_y, MultiFab &fc_z)  ,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry a_geom,
const Vector< BCRec > &  a_bcs 
)

◆ EB_interp_CellCentroid_to_FaceCentroid() [1/3]

void amrex::EB_interp_CellCentroid_to_FaceCentroid ( const MultiFab phi_centroid,
AMREX_D_DECL(MultiFab &phi_xface, MultiFab &phi_yface, MultiFab &phi_zface)  ,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry a_geom,
const Vector< BCRec > &  a_bcs 
)

◆ EB_interp_CellCentroid_to_FaceCentroid() [2/3]

void amrex::EB_interp_CellCentroid_to_FaceCentroid ( const MultiFab phi_centroid,
const Array< MultiFab *, AMREX_SPACEDIM > &  phi_faces,
int  scomp,
int  dcomp,
int  nc,
const Geometry geom,
const amrex::Vector< amrex::BCRec > &  a_bcs 
)

◆ EB_interp_CellCentroid_to_FaceCentroid() [3/3]

void amrex::EB_interp_CellCentroid_to_FaceCentroid ( const MultiFab phi_centroid,
const Vector< MultiFab * > &  phi_faces,
int  scomp,
int  dcomp,
int  nc,
const Geometry geom,
const amrex::Vector< amrex::BCRec > &  a_bcs 
)

◆ eb_interp_centroid2facecent_x()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_interp_centroid2facecent_x ( Box const &  ubx,
Array4< Real const > const &  phi,
Array4< Real const > const &  apx,
Array4< Real const > const &  cvol,
Array4< Real const > const &  ccent,
Array4< Real const > const &  fcx,
Array4< Real > const &  edg_x,
int  ncomp,
const Box domain,
const BCRec bc 
)
noexcept

◆ eb_interp_centroid2facecent_y()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_interp_centroid2facecent_y ( Box const &  vbx,
Array4< Real const > const &  phi,
Array4< Real const > const &  apy,
Array4< Real const > const &  cvol,
Array4< Real const > const &  ccent,
Array4< Real const > const &  fcy,
Array4< Real > const &  edg_y,
int  ncomp,
const Box domain,
const BCRec bc 
)
noexcept

◆ eb_interp_centroid2facecent_z()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_interp_centroid2facecent_z ( Box const &  wbx,
Array4< Real const > const &  phi,
Array4< Real const > const &  apz,
Array4< Real const > const &  cvol,
Array4< Real const > const &  ccent,
Array4< Real const > const &  fcz,
Array4< Real > const &  phi_z,
int  ncomp,
const Box domain,
const BCRec bc 
)
noexcept

◆ EB_interp_in_quad()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::EB_interp_in_quad ( Real  xint,
Real  yint,
Real  v0,
Real  v1,
Real  v2,
Real  v3,
Real  x0,
Real  y0,
Real  x1,
Real  y1,
Real  x2,
Real  y2,
Real  x3,
Real  y3 
)

◆ eb_rereflux_from_crse()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_rereflux_from_crse ( int  i,
int  j,
int  k,
int  n,
Box const &  bx,
Array4< Real > const &  d,
Array4< Real const > const &  s,
Array4< int const > const &  amrflg,
Array4< EBCellFlag const > const &  ebflg,
Array4< Real const > const &  vfrac 
)

◆ eb_rereflux_to_fine()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_rereflux_to_fine ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  d,
Array4< Real const > const &  s,
Array4< int const > const &  msk,
Dim3  ratio 
)

◆ EB_set_covered() [1/4]

void amrex::EB_set_covered ( MultiFab mf,
int  icomp,
int  ncomp,
const Vector< Real > &  vals 
)

◆ EB_set_covered() [2/4]

void amrex::EB_set_covered ( MultiFab mf,
int  icomp,
int  ncomp,
int  ngrow,
const Vector< Real > &  a_vals 
)

◆ EB_set_covered() [3/4]

void amrex::EB_set_covered ( MultiFab mf,
int  icomp,
int  ncomp,
int  ngrow,
Real  val 
)

◆ EB_set_covered() [4/4]

void amrex::EB_set_covered ( MultiFab mf,
Real  val 
)
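
The simplest overload sets every covered cell to a single value, e.g.:

    amrex::EB_set_covered(mf, 0.0); // mf assumed built with an EBFArrayBoxFactory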

◆ EB_set_covered_faces() [1/2]

void amrex::EB_set_covered_faces ( const Array< MultiFab *, AMREX_SPACEDIM > &  umac,
const int  scomp,
const int  ncomp,
const Vector< Real > &  a_vals 
)

◆ EB_set_covered_faces() [2/2]

void amrex::EB_set_covered_faces ( const Array< MultiFab *, AMREX_SPACEDIM > &  umac,
Real  val 
)

◆ eb_set_covered_nodes() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_set_covered_nodes ( int  i,
int  j,
int  k,
int  n,
int  icomp,
Array4< Real > const &  d,
Array4< EBCellFlag const > const &  f,
Real const *AMREX_RESTRICT  v 
)

◆ eb_set_covered_nodes() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::eb_set_covered_nodes ( int  i,
int  j,
int  k,
int  n,
int  icomp,
Array4< Real > const &  d,
Array4< EBCellFlag const > const &  f,
Real  v 
)

◆ elemwiseMax() [1/3]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::elemwiseMax ( const IntVectND< dim > &  p1,
const IntVectND< dim > &  p2 
)
noexcept

◆ elemwiseMax() [2/3]

template<class T , class ... Ts>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE T amrex::elemwiseMax ( const T &  a,
const T &  b,
const Ts &...  c 
)
constexpr noexcept

◆ elemwiseMax() [3/3]

template<class T >
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE T amrex::elemwiseMax ( T const &  a,
T const &  b 
)
constexpr noexcept

◆ elemwiseMin() [1/3]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::elemwiseMin ( const IntVectND< dim > &  p1,
const IntVectND< dim > &  p2 
)
noexcept

◆ elemwiseMin() [2/3]

template<class T , class ... Ts>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE T amrex::elemwiseMin ( const T &  a,
const T &  b,
const Ts &...  c 
)
constexpr noexcept

◆ elemwiseMin() [3/3]

template<class T >
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE T amrex::elemwiseMin ( T const &  a,
T const &  b 
)
constexpr noexcept
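
As a quick illustration of the IntVect overloads (a sketch assuming a 3D build; variable names are ours):

    amrex::IntVect a(1,5,3), b(4,2,9);
    amrex::IntVect lo = amrex::elemwiseMin(a, b); // lo == (1,2,3)
    amrex::IntVect hi = amrex::elemwiseMax(a, b); // hi == (4,5,9)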

◆ enableFPExcept()

FPExcept amrex::enableFPExcept ( FPExcept  excepts)

Enable FP exceptions. Linux only.

This function enables the given exception traps and keeps the status of the others. The example below enables all FPE traps and later restores the previous settings.

    auto prev_excepts = enableFPExcept(FPExcept::all);
    // ....
    setFPExcept(prev_excepts); // restore previous settings

◆ enclosedCells() [1/3]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::enclosedCells ( const BoxND< dim > &  b)
noexcept

Returns a BoxND with CELL based coordinates in all directions that is enclosed by b.

◆ enclosedCells() [2/3]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::enclosedCells ( const BoxND< dim > &  b,
Direction  d 
)
noexcept

◆ enclosedCells() [3/3]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::enclosedCells ( const BoxND< dim > &  b,
int  dir 
)
noexcept

Returns a BoxND with CELL based coordinates in direction dir that is enclosed by b. This is equivalent to b.convert(dir,CELL). It is an error if b.type(dir) == CELL already.
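
A small sketch (box extents are illustrative):

    amrex::Box cc(amrex::IntVect(0), amrex::IntVect(7)); // cell-centered
    amrex::Box nd   = amrex::surroundingNodes(cc);       // node-centered
    amrex::Box back = amrex::enclosedCells(nd);          // cell-centered again; back == cc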

◆ end()

template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 amrex::end ( BoxND< dim > const &  box)
noexcept

◆ end_iv()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::end_iv ( BoxND< dim > const &  box)
noexcept

◆ enforcePeriodic()

template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE bool amrex::enforcePeriodic ( P &  p,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  plo,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  phi,
amrex::GpuArray< amrex::ParticleReal, AMREX_SPACEDIM > const &  rlo,
amrex::GpuArray< amrex::ParticleReal, AMREX_SPACEDIM > const &  rhi,
amrex::GpuArray< int, AMREX_SPACEDIM > const &  is_per 
)
noexcept

◆ EnsureThreadSafeTiles()

template<class PC >
void amrex::EnsureThreadSafeTiles ( PC &  pc)

◆ Error() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::Error ( const char *  msg = nullptr)

◆ Error() [2/2]

void amrex::Error ( const std::string &  msg)

Print out a message to cerr and exit via amrex::Abort().

◆ Error_host()

void amrex::Error_host ( const char *  type,
const char *  msg 
)

◆ ErrorStream()

std::ostream & amrex::ErrorStream ( )

◆ ExecOnFinalize()

void amrex::ExecOnFinalize ( std::function< void()>  f)

We maintain a stack of functions that need to be called in Finalize(). The functions are called in LIFO order. The idea here is to allow classes to clean up any "global" state that they maintain when we're exiting from AMReX.
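
A minimal sketch of the intended pattern (the global workspace here is hypothetical):

    static double* g_workspace = nullptr;

    void init_workspace () {
        g_workspace = new double[1024];
        // Registered callbacks run in LIFO order during amrex::Finalize().
        amrex::ExecOnFinalize([] () {
            delete[] g_workspace;
            g_workspace = nullptr;
        });
    }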

◆ ExecOnInitialize()

void amrex::ExecOnInitialize ( std::function< void()>  f)

◆ exp()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::exp ( const GpuComplex< T > &  a_z)
noexcept

Complex exponential function.

◆ fab_filcc()

void amrex::fab_filcc ( Box const &  bx,
Array4< Real > const &  qn,
int  ncomp,
Box const &  domain,
Real const *  ,
Real const *  ,
BCRec const *  bcn 
)

◆ fab_filfc()

void amrex::fab_filfc ( Box const &  bx,
Array4< Real > const &  qn,
int  ncomp,
Box const &  domain,
Real const *  ,
Real const *  ,
BCRec const *  bcn 
)

◆ fab_filnd()

void amrex::fab_filnd ( Box const &  bx,
Array4< Real > const &  qn,
int  ncomp,
Box const &  domain,
Real const *  ,
Real const *  ,
BCRec const *  bcn 
)

◆ FabToBlueprintFields()

void amrex::FabToBlueprintFields ( const FArrayBox fab,
const Vector< std::string > &  varnames,
Node &  res 
)

◆ FabToBlueprintTopology()

void amrex::FabToBlueprintTopology ( const Geometry geom,
const FArrayBox fab,
Node &  res 
)

◆ face_cons_linear_face_interp()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::face_cons_linear_face_interp ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  fine,
Array4< T const > const &  crse,
Array4< int const > const &  mask,
IntVect const &  ratio,
Box const &  per_grown_domain,
int  dim 
)
noexcept

◆ face_linear_face_interp_x()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::face_linear_face_interp_x ( int  fi,
int  fj,
int  fk,
int  n,
Array4< T > const &  fine,
Array4< T const > const &  crse,
Array4< int const > const &  mask,
IntVect const &  ratio 
)
noexcept

◆ face_linear_face_interp_y()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::face_linear_face_interp_y ( int  fi,
int  fj,
int  fk,
int  n,
Array4< T > const &  fine,
Array4< T const > const &  crse,
Array4< int const > const &  mask,
IntVect const &  ratio 
)
noexcept

◆ face_linear_face_interp_z()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::face_linear_face_interp_z ( int  fi,
int  fj,
int  fk,
int  n,
Array4< T > const &  fine,
Array4< T const > const &  crse,
Array4< int const > const &  mask,
IntVect const &  ratio 
)
noexcept

◆ face_linear_interp_x() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::face_linear_interp_x ( int  i,
int  j,
int  k,
int  n,
amrex::Array4< amrex::Real > const &  fine,
IntVect const &  ratio 
)
noexcept

◆ face_linear_interp_x() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::face_linear_interp_x ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  fine,
Array4< T const > const &  crse,
IntVect const &  ratio 
)
noexcept

◆ face_linear_interp_y() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::face_linear_interp_y ( int  i,
int  j,
int  k,
int  n,
amrex::Array4< amrex::Real > const &  fine,
IntVect const &  ratio 
)
noexcept

◆ face_linear_interp_y() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::face_linear_interp_y ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  fine,
Array4< T const > const &  crse,
IntVect const &  ratio 
)
noexcept

◆ face_linear_interp_z() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::face_linear_interp_z ( int  i,
int  j,
int  k,
int  n,
amrex::Array4< amrex::Real > const &  fine,
IntVect const &  ratio 
)
noexcept

◆ face_linear_interp_z() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::face_linear_interp_z ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  fine,
Array4< T const > const &  crse,
IntVect const &  ratio 
)
noexcept

◆ facediv_face_interp()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::facediv_face_interp ( int  ci,
int  cj,
int  ck,
int  nc,
int  nf,
int  idir,
Array4< T const > const &  crse,
Array4< T > const &  fine,
Array4< const int > const &  mask,
IntVect const &  ratio 
)
noexcept

◆ facediv_int()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::facediv_int ( int  ci,
int  cj,
int  ck,
int  nf,
GpuArray< Array4< T >, AMREX_SPACEDIM > const &  fine,
IntVect const &  ratio,
GpuArray< Real, AMREX_SPACEDIM > const &  cellSize 
)
noexcept

◆ FileExists()

bool amrex::FileExists ( const std::string &  filename)

Check if a file already exists. Return true if the filename is an existing file, directory, or link. For links, this operates on the link and not what the link points to.

◆ FileOpenFailed()

void amrex::FileOpenFailed ( const std::string &  file)

Output a message and abort when a file could not be opened.

◆ FileSize()

long amrex::FileSize ( const std::string &  filename)

◆ fill()

template<typename STRUCT , typename F , std::enable_if_t<(sizeof(STRUCT)<=36 *8) &&AMREX_IS_TRIVIALLY_COPYABLE(STRUCT) &&std::is_trivially_destructible_v< STRUCT >, int > FOO = 0>
void amrex::fill ( BaseFab< STRUCT > &  aos_fab,
F const &  f 
)

◆ fill_snan()

template<RunOn run_on, typename T , std::enable_if_t< std::is_same_v< T, double >||std::is_same_v< T, float >, int > FOO = 0>
void amrex::fill_snan ( T *  p,
std::size_t  nelems 
)

◆ FillDomainBoundary()

void amrex::FillDomainBoundary ( MultiFab phi,
const Geometry geom,
const Vector< BCRec > &  bc 
)

◆ FillImpFunc()

template<typename G >
void amrex::FillImpFunc ( MultiFab mf,
G const &  gshop,
Geometry const &  geom 
)

Fill MultiFab with implicit function.

This function fills the nodal MultiFab with the implicit function in GeometryShop. Note that an implicit function is not necessarily a signed distance function.

Template Parameters
G is the GeometryShop type
Parameters
mf is a nodal MultiFab.
gshop is a GeometryShop object.
geom is a Geometry object.
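
A minimal sketch using a sphere implicit function (header usage and the inside/outside convention are assumptions; check AMReX's EB2 headers):

    void fill_implicit_function (amrex::BoxArray const& ba,
                                 amrex::DistributionMapping const& dm,
                                 amrex::Geometry const& geom)
    {
        amrex::EB2::SphereIF sphere(0.25, {AMREX_D_DECL(0.5,0.5,0.5)}, false);
        auto gshop = amrex::EB2::makeShop(sphere);
        amrex::MultiFab mf(amrex::convert(ba, amrex::IntVect::TheNodeVector()),
                           dm, 1, 0);
        amrex::FillImpFunc(mf, gshop, geom);
    }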

◆ FillNull() [1/2]

template<class T >
void amrex::FillNull ( Vector< std::unique_ptr< T > > &  a)

◆ FillNull() [2/2]

template<class T >
void amrex::FillNull ( Vector< T * > &  a)

◆ FillPatchInterp() [1/3]

template<typename MF , typename Interp >
std::enable_if_t<IsFabArray<MF>::value && !std::is_same_v<Interp,MFInterpolater> > amrex::FillPatchInterp ( MF &  mf_fine_patch,
int  fcomp,
MF const &  mf_crse_patch,
int  ccomp,
int  ncomp,
IntVect const &  ng,
const Geometry cgeom,
const Geometry fgeom,
Box const &  dest_domain,
const IntVect ratio,
Interp *  mapper,
const Vector< BCRec > &  bcs,
int  bcscomp 
)

◆ FillPatchInterp() [2/3]

template<typename MF >
std::enable_if_t<IsFabArray<MF>::value> amrex::FillPatchInterp ( MF &  mf_fine_patch,
int  fcomp,
MF const &  mf_crse_patch,
int  ccomp,
int  ncomp,
IntVect const &  ng,
const Geometry cgeom,
const Geometry fgeom,
Box const &  dest_domain,
const IntVect ratio,
InterpBase mapper,
const Vector< BCRec > &  bcs,
int  bcscomp 
)

◆ FillPatchInterp() [3/3]

void amrex::FillPatchInterp ( MultiFab mf_fine_patch,
int  fcomp,
MultiFab const &  mf_crse_patch,
int  ccomp,
int  ncomp,
IntVect const &  ng,
const Geometry cgeom,
const Geometry fgeom,
Box const &  dest_domain,
const IntVect ratio,
MFInterpolater mapper,
const Vector< BCRec > &  bcs,
int  bcscomp 
)

◆ FillPatchNLevels()

template<typename MF , typename BC , typename Interp >
std::enable_if_t< IsFabArray< MF >::value > amrex::FillPatchNLevels ( MF &  mf,
int  level,
const IntVect nghost,
Real  time,
const Vector< Vector< MF * >> &  smf,
const Vector< Vector< Real >> &  st,
int  scomp,
int  dcomp,
int  ncomp,
const Vector< Geometry > &  geom,
Vector< BC > &  bc,
int  bccomp,
const Vector< IntVect > &  ratio,
Interp *  mapper,
const Vector< BCRec > &  bcr,
int  bcrcomp 
)

FillPatch with data from AMR levels.

First, we try to fill the destination MultiFab/FabArray with this level's data, if available. Any unfilled region is then filled from the coarse level below, if available, and even coarser levels are used as needed until all regions are filled. This function is more expensive than FillPatchTwoLevels, so if one knows the grids are properly nested and FillPatchTwoLevels can do the job, that function should be preferred.

Template Parameters
MF: the MultiFab/FabArray type
BC: functor for filling physical boundaries
Interp: spatial interpolater
Parameters
mf: destination MF
level: AMR level associated with mf
nghost: number of ghost cells of mf needed to be filled
time: time associated with mf
smf: source MFs. The outer Vector is for AMR levels, whereas the inner Vector is for data at various times. It is not an error if the level for the destination MF is finer than the data in smf (i.e., level >= smf.size()).
st: times associated with smf
scomp: starting component of the source MFs
dcomp: starting component of the destination MF
ncomp: number of components
geom: Geometry objects for AMR levels. The size must be big enough such that level < geom.size().
bc: functors for physical boundaries on AMR levels. The size must be big enough such that level < bc.size().
bccomp: starting component for bc
ratio: refinement ratios between AMR levels. The size must be big enough such that level <= ratio.size().
mapper: spatial interpolater
bcr: boundary types for each component. We need this because some interpolaters need it.
bcrcomp: starting component for bcr

◆ FillPatchSingleLevel() [1/3]

template<typename MF , typename BC >
std::enable_if_t< IsFabArray< MF >::value > amrex::FillPatchSingleLevel ( MF &  mf,
IntVect const &  nghost,
Real  time,
const Vector< MF * > &  smf,
const Vector< Real > &  stime,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry geom,
BC &  physbcf,
int  bcfcomp 
)

FillPatch with data from the current level.

The destination MultiFab/FabArray is on the same AMR level as the source MultiFab/FabArray. Usually this can only be used on AMR level 0, because filling a fine level MF usually requires coarse level data. If needed, interpolation in time is performed.

Template Parameters
MF: the MultiFab/FabArray type
BC: functor for filling physical boundaries
Parameters
mf: destination MF
nghost: number of ghost cells of mf needed to be filled
time: time associated with mf
smf: source MFs
stime: times associated with smf
scomp: starting component of the source MFs
dcomp: starting component of the destination MF
ncomp: number of components
geom: Geometry for this level
physbcf: functor for physical boundaries
bcfcomp: starting component for physbcf
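
A minimal sketch of this overload, assuming two source states at times t_old and t_new and no special physical boundary treatment:

    void fill_level0 (amrex::MultiFab& mf, amrex::Real time,
                      amrex::MultiFab& s_old, amrex::MultiFab& s_new,
                      amrex::Real t_old, amrex::Real t_new,
                      amrex::Geometry const& geom)
    {
        amrex::PhysBCFunctNoOp bc; // no-op physical boundaries
        amrex::FillPatchSingleLevel(mf, amrex::IntVect(2), time,
                                    {&s_old, &s_new}, {t_old, t_new},
                                    0, 0, mf.nComp(), geom, bc, 0);
    }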

◆ FillPatchSingleLevel() [2/3]

template<typename MF >
std::enable_if_t< IsFabArray< MF >::value > amrex::FillPatchSingleLevel ( MF &  mf,
IntVect const &  nghost,
Real  time,
const Vector< MF * > &  smf,
IntVect const &  snghost,
const Vector< Real > &  stime,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry geom 
)

FillPatch with data from the current level.

In this version of FillPatchSingleLevel, it's the CALLER's responsibility to make sure that smf has snghost ghost cells already filled before calling this function. The destination MultiFab/FabArray is on the same AMR level as the source MultiFab/FabArray. If needed, interpolation in time is performed.

Template Parameters
MF: the MultiFab/FabArray type
Parameters
mf: destination MF
nghost: number of ghost cells of mf needed to be filled
time: time associated with mf
smf: source MFs
snghost: number of ghost cells in smf with valid data
stime: times associated with smf
scomp: starting component of the source MFs
dcomp: starting component of the destination MF
ncomp: number of components
geom: Geometry for this level

◆ FillPatchSingleLevel() [3/3]

template<typename MF , typename BC >
std::enable_if_t< IsFabArray< MF >::value > amrex::FillPatchSingleLevel ( MF &  mf,
Real  time,
const Vector< MF * > &  smf,
const Vector< Real > &  stime,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry geom,
BC &  physbcf,
int  bcfcomp 
)

FillPatch with data from the current level.

The destination MultiFab/FabArray is on the same AMR level as the source MultiFab/FabArray. Usually this can only be used on AMR level 0, because filling a fine level MF usually requires coarse level data. If needed, interpolation in time is performed. Ghost cells of the destination MF are not filled.

Template Parameters
MF: the MultiFab/FabArray type
BC: functor for filling physical boundaries
Parameters
mf: destination MF
time: time associated with mf
smf: source MFs
stime: times associated with smf
scomp: starting component of the source MFs
dcomp: starting component of the destination MF
ncomp: number of components
geom: Geometry for this level
physbcf: functor for physical boundaries
bcfcomp: starting component for physbcf

◆ FillPatchTwoLevels() [1/6]

template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > amrex::FillPatchTwoLevels ( Array< MF *, AMREX_SPACEDIM > const &  mf,
IntVect const &  nghost,
Real  time,
const Vector< Array< MF *, AMREX_SPACEDIM > > &  cmf,
const Vector< Real > &  ct,
const Vector< Array< MF *, AMREX_SPACEDIM > > &  fmf,
const Vector< Real > &  ft,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry cgeom,
const Geometry fgeom,
Array< BC, AMREX_SPACEDIM > &  cbc,
const Array< int, AMREX_SPACEDIM > &  cbccomp,
Array< BC, AMREX_SPACEDIM > &  fbc,
const Array< int, AMREX_SPACEDIM > &  fbccomp,
const IntVect ratio,
Interp *  mapper,
const Array< Vector< BCRec >, AMREX_SPACEDIM > &  bcs,
const Array< int, AMREX_SPACEDIM > &  bcscomp,
const PreInterpHook &  pre_interp = {},
const PostInterpHook &  post_interp = {} 
)

FillPatch for face variables with data from the current level and the level below. Sometimes we need to fillpatch all AMREX_SPACEDIM face MultiFabs together to satisfy certain constraints, such as divergence preservation.

First, we fill the destination MultiFabs/FabArrays with the current level data as much as possible. This may include interpolation in time. The rest of the destination MFs are filled with the coarse level data using interpolation in space (and in time if needed).

Template Parameters
MF: the MultiFab/FabArray type
BC: functor for filling physical boundaries
Interp: spatial interpolater
PreInterpHook: pre-interpolation hook
PostInterpHook: post-interpolation hook
Parameters
mf: destination MFs on the fine level
nghost: number of ghost cells of mf needed to be filled
time: time associated with mf
cmf: source MFs on the coarse level
ct: times associated with cmf
fmf: source MFs on the fine level
ft: times associated with fmf
scomp: starting component of the source MFs
dcomp: starting component of the destination MFs
ncomp: number of components
cgeom: Geometry for the coarse level
fgeom: Geometry for the fine level
cbc: functor for physical boundaries on the coarse level
cbccomp: starting component for cbc
fbc: functor for physical boundaries on the fine level
fbccomp: starting component for fbc
ratio: refinement ratio
mapper: spatial interpolater
bcs: boundary types for each component. We need this because some interpolaters need it.
bcscomp: starting component for bcs
pre_interp: pre-interpolation hook
post_interp: post-interpolation hook

◆ FillPatchTwoLevels() [2/6]

template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > amrex::FillPatchTwoLevels ( Array< MF *, AMREX_SPACEDIM > const &  mf,
IntVect const &  nghost,
Real  time,
const Vector< Array< MF *, AMREX_SPACEDIM > > &  cmf,
const Vector< Real > &  ct,
const Vector< Array< MF *, AMREX_SPACEDIM > > &  fmf,
const Vector< Real > &  ft,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry cgeom,
const Geometry fgeom,
Array< BC, AMREX_SPACEDIM > &  cbc,
int  cbccomp,
Array< BC, AMREX_SPACEDIM > &  fbc,
int  fbccomp,
const IntVect ratio,
Interp *  mapper,
const Array< Vector< BCRec >, AMREX_SPACEDIM > &  bcs,
int  bcscomp,
const PreInterpHook &  pre_interp = {},
const PostInterpHook &  post_interp = {} 
)

FillPatch for face variables with data from the current level and the level below. Sometimes we need to fillpatch all AMREX_SPACEDIM face MultiFabs together to satisfy certain constraints, such as divergence preservation.

First, we fill the destination MultiFabs/FabArrays with the current level data as much as possible. This may include interpolation in time. The rest of the destination MFs are filled with the coarse level data using interpolation in space (and in time if needed).

Template Parameters
MF: the MultiFab/FabArray type
BC: functor for filling physical boundaries
Interp: spatial interpolater
PreInterpHook: pre-interpolation hook
PostInterpHook: post-interpolation hook
Parameters
mf: destination MFs on the fine level
nghost: number of ghost cells of mf needed to be filled
time: time associated with mf
cmf: source MFs on the coarse level
ct: times associated with cmf
fmf: source MFs on the fine level
ft: times associated with fmf
scomp: starting component of the source MFs
dcomp: starting component of the destination MFs
ncomp: number of components
cgeom: Geometry for the coarse level
fgeom: Geometry for the fine level
cbc: functor for physical boundaries on the coarse level
cbccomp: starting component for cbc
fbc: functor for physical boundaries on the fine level
fbccomp: starting component for fbc
ratio: refinement ratio
mapper: spatial interpolater
bcs: boundary types for each component. We need this because some interpolaters need it.
bcscomp: starting component for bcs
pre_interp: pre-interpolation hook
post_interp: post-interpolation hook

◆ FillPatchTwoLevels() [3/6]

template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > amrex::FillPatchTwoLevels ( Array< MF *, AMREX_SPACEDIM > const &  mf,
Real  time,
const Vector< Array< MF *, AMREX_SPACEDIM > > &  cmf,
const Vector< Real > &  ct,
const Vector< Array< MF *, AMREX_SPACEDIM > > &  fmf,
const Vector< Real > &  ft,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry cgeom,
const Geometry fgeom,
Array< BC, AMREX_SPACEDIM > &  cbc,
int  cbccomp,
Array< BC, AMREX_SPACEDIM > &  fbc,
int  fbccomp,
const IntVect ratio,
Interp *  mapper,
const Array< Vector< BCRec >, AMREX_SPACEDIM > &  bcs,
int  bcscomp,
const PreInterpHook &  pre_interp = {},
const PostInterpHook &  post_interp = {} 
)

FillPatch for face variables with data from the current level and the level below. Sometimes we need to fillpatch all AMREX_SPACEDIM face MultiFabs together to satisfy certain constraints, such as divergence preservation.

First, we fill the destination MultiFabs/FabArrays with the current level data as much as possible. This may include interpolation in time. The rest of the destination MFs are filled with the coarse level data using interpolation in space (and in time if needed). Ghost cells of the destination MFs are not filled.

Template Parameters
MF: the MultiFab/FabArray type
BC: functor for filling physical boundaries
Interp: spatial interpolater
PreInterpHook: pre-interpolation hook
PostInterpHook: post-interpolation hook
Parameters
mf: destination MFs on the fine level
time: time associated with mf
cmf: source MFs on the coarse level
ct: times associated with cmf
fmf: source MFs on the fine level
ft: times associated with fmf
scomp: starting component of the source MFs
dcomp: starting component of the destination MFs
ncomp: number of components
cgeom: Geometry for the coarse level
fgeom: Geometry for the fine level
cbc: functor for physical boundaries on the coarse level
cbccomp: starting component for cbc
fbc: functor for physical boundaries on the fine level
fbccomp: starting component for fbc
ratio: refinement ratio
mapper: spatial interpolater
bcs: boundary types for each component. We need this because some interpolaters need it.
bcscomp: starting component for bcs
pre_interp: pre-interpolation hook
post_interp: post-interpolation hook

◆ FillPatchTwoLevels() [4/6]

template<typename MF , typename Interp >
std::enable_if_t< IsFabArray< MF >::value > amrex::FillPatchTwoLevels ( MF &  mf,
IntVect const &  nghost,
IntVect const &  nghost_outside_domain,
Real  time,
const Vector< MF * > &  cmf,
const Vector< Real > &  ct,
const Vector< MF * > &  fmf,
const Vector< Real > &  ft,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry cgeom,
const Geometry fgeom,
const IntVect ratio,
Interp *  mapper,
const Vector< BCRec > &  bcs,
int  bcscomp 
)

FillPatch with data from the current level and the level below.

In this version of FillPatchTwoLevels, it's the CALLER's responsibility to make sure all ghost cells of the coarse MF needed for interpolation are filled already before calling this function. It's assumed that the fine level MultiFab mf's BoxArray is coarsenable by the refinement ratio. There is no support for EB.

Template Parameters
MF: the MultiFab/FabArray type
Interp: spatial interpolater
Parameters
mf: destination MF on the fine level
nghost: number of ghost cells of mf inside the domain needed to be filled
nghost_outside_domain: number of ghost cells of mf outside the domain needed to be filled
time: time associated with mf
cmf: source MFs on the coarse level
ct: times associated with cmf
fmf: source MFs on the fine level
ft: times associated with fmf
scomp: starting component of the source MFs
dcomp: starting component of the destination MF
ncomp: number of components
cgeom: Geometry for the coarse level
fgeom: Geometry for the fine level
ratio: refinement ratio
mapper: spatial interpolater
bcs: boundary types for each component.
bcscomp: starting component for bcs

◆ FillPatchTwoLevels() [5/6]

template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > amrex::FillPatchTwoLevels ( MF &  mf,
IntVect const &  nghost,
Real  time,
const Vector< MF * > &  cmf,
const Vector< Real > &  ct,
const Vector< MF * > &  fmf,
const Vector< Real > &  ft,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry cgeom,
const Geometry fgeom,
BC &  cbc,
int  cbccomp,
BC &  fbc,
int  fbccomp,
const IntVect ratio,
Interp *  mapper,
const Vector< BCRec > &  bcs,
int  bcscomp,
const PreInterpHook &  pre_interp = {},
const PostInterpHook &  post_interp = {} 
)

FillPatch with data from the current level and the level below.

First, we fill the destination MultiFab/FabArray with the current level data as much as possible. This may include interpolation in time. The rest of the destination MF is filled with the coarse level data using interpolation in space (and in time if needed).

Template Parameters
MF: the MultiFab/FabArray type
BC: functor for filling physical boundaries
Interp: spatial interpolater
PreInterpHook: pre-interpolation hook
PostInterpHook: post-interpolation hook
Parameters
mf: destination MF on the fine level
nghost: number of ghost cells of mf needed to be filled
time: time associated with mf
cmf: source MFs on the coarse level
ct: times associated with cmf
fmf: source MFs on the fine level
ft: times associated with fmf
scomp: starting component of the source MFs
dcomp: starting component of the destination MF
ncomp: number of components
cgeom: Geometry for the coarse level
fgeom: Geometry for the fine level
cbc: functor for physical boundaries on the coarse level
cbccomp: starting component for cbc
fbc: functor for physical boundaries on the fine level
fbccomp: starting component for fbc
ratio: refinement ratio
mapper: spatial interpolater
bcs: boundary types for each component. We need this because some interpolaters need it.
bcscomp: starting component for bcs
pre_interp: pre-interpolation hook
post_interp: post-interpolation hook
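
A minimal sketch of this overload, assuming old/new states on both levels, a refinement ratio of 2, no special physical boundary treatment, and cell-conservative linear interpolation:

    void fill_fine_patch (amrex::MultiFab& mf, amrex::Real time,
                          amrex::MultiFab& c_old, amrex::MultiFab& c_new,
                          amrex::MultiFab& f_old, amrex::MultiFab& f_new,
                          amrex::Real t_old, amrex::Real t_new,
                          amrex::Geometry const& cgeom, amrex::Geometry const& fgeom,
                          amrex::Vector<amrex::BCRec> const& bcs)
    {
        amrex::PhysBCFunctNoOp cbc, fbc;
        amrex::FillPatchTwoLevels(mf, amrex::IntVect(2), time,
                                  {&c_old, &c_new}, {t_old, t_new},
                                  {&f_old, &f_new}, {t_old, t_new},
                                  0, 0, mf.nComp(), cgeom, fgeom,
                                  cbc, 0, fbc, 0,
                                  amrex::IntVect(2), &amrex::cell_cons_interp,
                                  bcs, 0);
    }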

◆ FillPatchTwoLevels() [6/6]

template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > amrex::FillPatchTwoLevels ( MF &  mf,
Real  time,
const Vector< MF * > &  cmf,
const Vector< Real > &  ct,
const Vector< MF * > &  fmf,
const Vector< Real > &  ft,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry cgeom,
const Geometry fgeom,
BC &  cbc,
int  cbccomp,
BC &  fbc,
int  fbccomp,
const IntVect ratio,
Interp *  mapper,
const Vector< BCRec > &  bcs,
int  bcscomp,
const PreInterpHook &  pre_interp = {},
const PostInterpHook &  post_interp = {} 
)

FillPatch with data from the current level and the level below.

First, we fill the destination MultiFab/FabArray with the current level data as much as possible. This may include interpolation in time. The rest of the destination MF is filled with the coarse level data using interpolation in space (and in time if needed). Ghost cells of the destination MF are not filled.

Template Parameters
MF: the MultiFab/FabArray type
BC: functor for filling physical boundaries
Interp: spatial interpolater
PreInterpHook: pre-interpolation hook
PostInterpHook: post-interpolation hook
Parameters
mf: destination MF on the fine level
time: time associated with mf
cmf: source MFs on the coarse level
ct: times associated with cmf
fmf: source MFs on the fine level
ft: times associated with fmf
scomp: starting component of the source MFs
dcomp: starting component of the destination MF
ncomp: number of components
cgeom: Geometry for the coarse level
fgeom: Geometry for the fine level
cbc: functor for physical boundaries on the coarse level
cbccomp: starting component for cbc
fbc: functor for physical boundaries on the fine level
fbccomp: starting component for fbc
ratio: refinement ratio
mapper: spatial interpolater
bcs: boundary types for each component. We need this because some interpolaters need it.
bcscomp: starting component for bcs
pre_interp: pre-interpolation hook
post_interp: post-interpolation hook

◆ FillRandom() [1/2]

void amrex::FillRandom ( MultiFab mf,
int  scomp,
int  ncomp 
)

Fill MultiFab with random numbers from a uniform distribution.

The uniform distribution range is [0.0, 1.0) for CPU and SYCL; it is (0,1] for CUDA and HIP. All cells, including ghost cells, are filled.

Parameters
mf: MultiFab
scomp: starting component
ncomp: number of components
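
For instance (ba and dm are assumed to be an existing BoxArray and DistributionMapping):

    amrex::MultiFab mf(ba, dm, 2, 1);
    amrex::FillRandom(mf, 0, mf.nComp()); // fills valid and ghost cells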

◆ FillRandom() [2/2]

void amrex::FillRandom ( Real *  p,
Long  N 
)

Fill random numbers from a uniform distribution. The range is [0,1) for CPU and SYCL, and (0,1] for CUDA and HIP.

◆ FillRandomNormal() [1/2]

void amrex::FillRandomNormal ( MultiFab mf,
int  scomp,
int  ncomp,
Real  mean,
Real  stddev 
)

Fill MultiFab with random numbers from a normal distribution.

All cells including ghost cells are filled.

Parameters
mf: MultiFab
scomp: starting component
ncomp: number of components
mean: mean of the normal distribution
stddev: standard deviation of the normal distribution

◆ FillRandomNormal() [2/2]

void amrex::FillRandomNormal ( Real *  p,
Long  N,
Real  mean,
Real  stddev 
)

Fill random numbers from a normal distribution.

◆ FillSignedDistance() [1/2]

void amrex::FillSignedDistance ( MultiFab mf,
bool  fluid_has_positive_sign = true 
)

Fill MultiFab with signed distance.

This function fills the nodal MultiFab with signed distance. Note that the distance is valid only within a few cells of the EB. The MultiFab must have been built with an EBFArrayBoxFactory.

Parameters
mf is a nodal MultiFab built with EBFArrayBoxFactory.
fluid_has_positive_sign determines the sign of the fluid.

◆ FillSignedDistance() [2/2]

void amrex::FillSignedDistance ( MultiFab mf,
EB2::Level const &  ls_lev,
EBFArrayBoxFactory const &  eb_fac,
int  refratio,
bool  fluid_has_positive_sign = true 
)

Fill MultiFab with signed distance.

This function fills the nodal MultiFab with signed distance. Note that the distance is valid only within a few cells of the EB.

Parameters
mf is a nodal MultiFab.
ls_lev is an EB2::Level object with an implicit function. This is at the same level as mf.
eb_fac is an EBFArrayBoxFactory object containing EB information.
refratio is the refinement ratio of mf to eb_fac.
fluid_has_positive_sign determines the sign of the fluid.

◆ filterAndTransformParticles() [1/6]

template<typename DstTile , typename SrcTile , typename Index , typename F , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
Index amrex::filterAndTransformParticles ( DstTile &  dst,
const SrcTile &  src,
Index *  mask,
F &&  f 
)
noexcept

Conditionally copy particles from src to dst based on the value of mask. A transformation will also be applied to the particles on copy.

Template Parameters
DstTile: the dst particle tile type
SrcTile: the src particle tile type
Index: the index type, e.g. unsigned int
F: the transform function type
Parameters
dst: the destination tile
src: the source tile
mask: pointer to the mask - 1 means copy, 0 means don't copy
f: defines the transformation that will be applied to the particles on copy

◆ filterAndTransformParticles() [2/6]

template<typename DstTile , typename SrcTile , typename Index , typename F , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
Index amrex::filterAndTransformParticles ( DstTile &  dst,
const SrcTile &  src,
Index *  mask,
F const &  f,
Index  src_start,
Index  dst_start 
)
noexcept

Conditionally copy particles from src to dst based on the value of mask. This version reads particles from src starting at index src_start and writes the result starting at dst_start. A transformation will also be applied to the particles on copy.

Template Parameters
DstTile: the dst particle tile type
SrcTile: the src particle tile type
Index: the index type, e.g. unsigned int
F: the transform function type
Parameters
dst: the destination tile
src: the source tile
mask: pointer to the mask - 1 means copy, 0 means don't copy
f: defines the transformation that will be applied to the particles on copy
src_start: the offset at which to start reading particles from src
dst_start: the offset at which to start writing particles to dst

◆ filterAndTransformParticles() [3/6]

template<typename DstTile , typename SrcTile , typename Pred , typename F , std::enable_if_t<!std::is_pointer_v< std::decay_t< Pred >>, int > foo = 0>
int amrex::filterAndTransformParticles ( DstTile &  dst,
const SrcTile &  src,
Pred &&  p,
F &&  f 
)
noexcept

Conditionally copy particles from src to dst based on a predicate. A transformation will also be applied to the particles on copy.

Template Parameters
DstTile: the dst particle tile type
SrcTile: the src particle tile type
Pred: a function object
F: the transform function type
Parameters
dst: the destination tile
src: the source tile
p: predicate function - particles will be copied if p returns true
f: defines the transformation that will be applied to the particles on copy

◆ filterAndTransformParticles() [4/6]

template<typename DstTile , typename SrcTile , typename Pred , typename F , typename Index , std::enable_if_t<!std::is_pointer_v< std::decay_t< Pred >>, Index > nvccfoo = 0>
Index amrex::filterAndTransformParticles ( DstTile &  dst,
const SrcTile &  src,
Pred const &  p,
F &&  f,
Index  src_start,
Index  dst_start 
)
noexcept

Conditionally copy particles from src to dst based on a predicate, applying a transformation on copy. This version reads particles starting at index src_start and writes the result starting at dst_start.

Template Parameters
DstTile: the dst particle tile type
SrcTile: the src particle tile type
Pred: a function object
F: the transform function type
Parameters
dst: the destination tile
src: the source tile
p: predicate function - particles will be copied if p returns true
f: defines the transformation that will be applied to the particles on copy
src_start: the offset at which to start reading particles from src
dst_start: the offset at which to start writing particles to dst

◆ filterAndTransformParticles() [5/6]

template<typename DstTile1 , typename DstTile2 , typename SrcTile , typename Index , typename F , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
Index amrex::filterAndTransformParticles ( DstTile1 &  dst1,
DstTile2 &  dst2,
const SrcTile &  src,
Index *  mask,
F const &  f 
)
noexcept

Conditionally copy particles from src to dst1 and dst2 based on the value of mask. A transformation will also be applied to the particles on copy.

Template Parameters
DstTile1: the dst1 particle tile type
DstTile2: the dst2 particle tile type
SrcTile: the src particle tile type
Index: the index type, e.g. unsigned int
F: the transform function type
Parameters
dst1: the first destination tile
dst2: the second destination tile
src: the source tile
mask: pointer to the mask - 1 means copy, 0 means don't copy
f: defines the transformation that will be applied to the particles on copy

◆ filterAndTransformParticles() [6/6]

template<typename DstTile1 , typename DstTile2 , typename SrcTile , typename Pred , typename F , std::enable_if_t<!std::is_pointer_v< std::decay_t< Pred >>, int > foo = 0>
int amrex::filterAndTransformParticles ( DstTile1 &  dst1,
DstTile2 &  dst2,
const SrcTile &  src,
Pred const &  p,
F &&  f 
)
noexcept

Conditionally copy particles from src to dst1 and dst2 based on a predicate. A transformation will also be applied to the particles on copy.

Template Parameters
DstTile1: the dst1 particle tile type
DstTile2: the dst2 particle tile type
SrcTile: the src particle tile type
Pred: a function object
F: the transform function type
Parameters
dst1: the first destination tile
dst2: the second destination tile
src: the source tile
p: predicate function - particles will be copied if p returns true
f: defines the transformation that will be applied to the particles on copy

◆ filterParticles() [1/4]

template<typename DstTile , typename SrcTile , typename Index , typename N , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
Index amrex::filterParticles ( DstTile &  dst,
const SrcTile &  src,
const Index *  mask 
)
noexcept

Conditionally copy particles from src to dst based on the value of mask.

Template Parameters
DstTile: the dst particle tile type
SrcTile: the src particle tile type
Index: the index type, e.g. unsigned int
Parameters
dst: the destination tile
src: the source tile
mask: pointer to the mask - 1 means copy, 0 means don't copy

◆ filterParticles() [2/4]

template<typename DstTile , typename SrcTile , typename Index , typename N , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
Index amrex::filterParticles ( DstTile &  dst,
const SrcTile &  src,
const Index *  mask,
Index  src_start,
Index  dst_start,
n 
)
noexcept

Conditionally copy particles from src to dst based on the value of mask. This version conditionally copies n particles starting at index src_start, writing the result starting at dst_start.

Template Parameters
DstTile: the dst particle tile type
SrcTile: the src particle tile type
Index: the index type, e.g. unsigned int
Parameters
dst: the destination tile
src: the source tile
mask: pointer to the mask - 1 means copy, 0 means don't copy
src_start: the offset at which to start reading particles from src
dst_start: the offset at which to start writing particles to dst
n: the number of particles to apply the operation to

◆ filterParticles() [3/4]

template<typename DstTile , typename SrcTile , typename Pred , std::enable_if_t<!std::is_pointer_v< std::decay_t< Pred >>, int > foo = 0>
int amrex::filterParticles ( DstTile &  dst,
const SrcTile &  src,
Pred &&  p 
)
noexcept

Conditionally copy particles from src to dst based on a predicate.

Template Parameters
DstTile: the dst particle tile type
SrcTile: the src particle tile type
Pred: a function object
Parameters
dst: the destination tile
src: the source tile
p: predicate function - particles will be copied if p returns true
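
A hedged sketch of the predicate form (we assume the predicate is invoked with the source tile's ConstParticleTileData and a particle index; the tile variables are illustrative):

    // Keep every other particle from src_tile.
    int n_kept = amrex::filterParticles(dst_tile, src_tile,
        [=] AMREX_GPU_HOST_DEVICE (auto const& ptd, int i) noexcept
        {
            amrex::ignore_unused(ptd);
            return (i % 2) == 0;
        });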

◆ filterParticles() [4/4]

template<typename DstTile , typename SrcTile , typename Pred , typename Index , typename N , std::enable_if_t<!std::is_pointer_v< std::decay_t< Pred >>, Index > nvccfoo = 0>
Index amrex::filterParticles ( DstTile &  dst,
const SrcTile &  src,
Pred const &  p,
Index  src_start,
Index  dst_start,
n 
)
noexcept

Conditionally copy particles from src to dst based on a predicate. This version conditionally copies n particles starting at index src_start, writing the result starting at dst_start.

Template Parameters
DstTile: the dst particle tile type
SrcTile: the src particle tile type
Pred: a function object
Parameters
dst: the destination tile
src: the source tile
p: predicate function - particles will be copied if p returns true
src_start: the offset at which to start reading particles from src
dst_start: the offset at which to start writing particles to dst
n: the number of particles to apply the operation to

◆ Finalize() [1/2]

void amrex::Finalize ( )

◆ Finalize() [2/2]

void amrex::Finalize ( amrex::AMReX pamrex)

◆ FixCoarseBoxSize()

amrex::Box amrex::FixCoarseBoxSize ( const Box fineBox,
int  rr 
)

◆ fluxreg_fineadd()

AMREX_GPU_HOST_DEVICE void amrex::fluxreg_fineadd ( Box const &  bx,
Array4< Real > const &  reg,
const int  rcomp,
Array4< Real const > const &  flx,
const int  fcomp,
const int  ncomp,
const int  dir,
Dim3 const &  ratio,
const Real  mult 
)
inline noexcept

Add fine grid flux to the flux register. The flux array is a fine grid edge based object, while the register is a coarse grid edge based object. It is assumed that the coarsened flux region contains the register region.

Parameters
bx: register region to update
reg: flux register array
rcomp: starting component in reg
flx: fine grid flux array
fcomp: starting component in flx
ncomp: number of components
dir: coordinate direction of the face
ratio: refinement ratio
mult: multiplier applied to the flux

◆ fluxreg_fineareaadd()

AMREX_GPU_HOST_DEVICE void amrex::fluxreg_fineareaadd ( Box const &  bx,
Array4< Real > const &  reg,
const int  rcomp,
Array4< Real const > const &  area,
Array4< Real const > const &  flx,
const int  fcomp,
const int  ncomp,
const int  dir,
Dim3 const &  ratio,
const Real  mult 
)
inline noexcept

Add fine grid flux times area to the flux register. The flux array is a fine grid edge based object, while the register is a coarse grid edge based object. It is assumed that the coarsened flux region contains the register region.

Parameters
bx: register region to update
reg: flux register array
rcomp: starting component in reg
area: fine grid face area array
flx: fine grid flux array
fcomp: starting component in flx
ncomp: number of components
dir: coordinate direction of the face
ratio: refinement ratio
mult: multiplier applied to the flux

◆ fluxreg_reflux()

AMREX_GPU_HOST_DEVICE void amrex::fluxreg_reflux ( Box const &  bx,
Array4< Real > const &  s,
const int  scomp,
Array4< Real const > const &  f,
Array4< Real const > const &  v,
const int  ncomp,
const Real  mult,
const Orientation  face 
)
inline noexcept

◆ For() [1/31]

template<int MT, typename L , int dim>
void amrex::For ( BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ For() [2/31]

template<typename L , int dim>
void amrex::For ( BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ For() [3/31]

template<typename L , int dim>
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::For ( BoxND< dim > const &  box,
L const &  f 
)
noexcept

◆ For() [4/31]

template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::For ( BoxND< dim > const &  box,
ncomp,
L &&  f 
)
noexcept

◆ For() [5/31]

template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral<T>::value>>
void amrex::For ( BoxND< dim > const &  box,
ncomp,
L &&  f 
)
noexcept

◆ For() [6/31]

template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::For ( BoxND< dim > const &  box,
ncomp,
L const &  f 
)
noexcept

◆ For() [7/31]

template<typename L1 , typename L2 , typename L3 , int dim>
void amrex::For ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ For() [8/31]

template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void amrex::For ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ For() [9/31]

template<typename L1 , typename L2 , int dim>
void amrex::For ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ For() [10/31]

template<int MT, typename L1 , typename L2 , int dim>
void amrex::For ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ For() [11/31]

template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::For ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ For() [12/31]

template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::For ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ For() [13/31]

template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::For ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ For() [14/31]

template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::For ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ For() [15/31]

template<typename L , int dim>
void amrex::For ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ For() [16/31]

template<int MT, typename L , int dim>
void amrex::For ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ For() [17/31]

template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::For ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box,
ncomp,
L &&  f 
)
noexcept

◆ For() [18/31]

template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::For ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box,
ncomp,
L &&  f 
)
noexcept

◆ For() [19/31]

template<typename L1 , typename L2 , typename L3 , int dim>
void amrex::For ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ For() [20/31]

template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void amrex::For ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ For() [21/31]

template<typename L1 , typename L2 , int dim>
void amrex::For ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ For() [22/31]

template<int MT, typename L1 , typename L2 , int dim>
void amrex::For ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ For() [23/31]

template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::For ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ For() [24/31]

template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::For ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ For() [25/31]

template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::For ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ For() [26/31]

template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::For ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ For() [27/31]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::For ( Gpu::KernelInfo const &  info,
T  n,
L &&  f 
)
noexcept

◆ For() [28/31]

template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::For ( Gpu::KernelInfo const &  info,
T  n,
L &&  f 
)
noexcept

◆ For() [29/31]

template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::For ( T  n,
L &&  f 
)
noexcept

◆ For() [30/31]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
void amrex::For ( T  n,
L &&  f 
)
noexcept

◆ For() [31/31]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::For ( T  n,
L const &  f 
)
noexcept

◆ ForEach() [1/6]

template<typename... Ts, typename F >
constexpr void amrex::ForEach ( TypeList< Ts... >  ,
F &&  f 
)
constexpr

For each type t in TypeList, call f(t)

For example, instead of

    int order = ...;
    if (order == 1) {
        interp<1>(...);
    } else if (order == 2) {
        interp<2>(...);
    } else if (order == 4) {
        interp<4>(...);
    }

we could have

    int order = ...;
    ForEach(TypeList<std::integral_constant<int,1>,
                     std::integral_constant<int,2>,
                     std::integral_constant<int,4>>{},
            [&] (auto order_const) {
                if (order_const() == order) {
                    interp<order_const()>(...);
                }
            });
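
A self-contained variant of the same pattern (dispatch and doInterp are hypothetical names used only for illustration):

    #include <AMReX_TypeList.H>
    #include <cstdio>
    #include <type_traits>

    template <int Order> void doInterp () { std::printf("order %d\n", Order); }

    void dispatch (int order) {
        amrex::ForEach(amrex::TypeList<std::integral_constant<int,1>,
                                       std::integral_constant<int,2>,
                                       std::integral_constant<int,4>>{},
                       [&] (auto oc) {
                           if (oc() == order) { doInterp<oc()>(); }
                       });
    }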

◆ ForEach() [2/6]

template<typename V1 , typename F >
std::enable_if_t<IsAlgVector<std::decay_t<V1> >::value> amrex::ForEach ( V1 &  x,
F const &  f 
)

◆ ForEach() [3/6]

template<typename V1 , typename V2 , typename F >
std::enable_if_t<IsAlgVector<std::decay_t<V1> >::value && IsAlgVector<std::decay_t<V2> >::value> amrex::ForEach ( V1 &  x,
V2 &  y,
F const &  f 
)

◆ ForEach() [4/6]

template<typename V1 , typename V2 , typename V3 , typename F >
std::enable_if_t<IsAlgVector<std::decay_t<V1> >::value && IsAlgVector<std::decay_t<V2> >::value && IsAlgVector<std::decay_t<V3> >::value> amrex::ForEach ( V1 &  x,
V2 &  y,
V3 &  z,
F const &  f 
)

◆ ForEach() [5/6]

template<typename V1 , typename V2 , typename V3 , typename V4 , typename F >
std::enable_if_t<IsAlgVector<std::decay_t<V1> >::value && IsAlgVector<std::decay_t<V2> >::value && IsAlgVector<std::decay_t<V3> >::value && IsAlgVector<std::decay_t<V4> >::value> amrex::ForEach ( V1 &  x,
V2 &  y,
V3 &  z,
V4 &  a,
F const &  f 
)

◆ ForEach() [6/6]

template<typename V1 , typename V2 , typename V3 , typename V4 , typename V5 , typename F >
std::enable_if_t<IsAlgVector<std::decay_t<V1> >::value && IsAlgVector<std::decay_t<V2> >::value && IsAlgVector<std::decay_t<V3> >::value && IsAlgVector<std::decay_t<V4> >::value && IsAlgVector<std::decay_t<V5> >::value> amrex::ForEach ( V1 &  x,
V2 &  y,
V3 &  z,
V4 &  a,
V5 &  b,
F const &  f 
)

◆ ForEachUntil()

template<typename... Ts, typename F >
constexpr bool amrex::ForEachUntil ( TypeList< Ts... >  ,
F &&  f 
)
constexpr

For each type t in TypeList, call f(t) until true is returned.

This behaves like return (f(t0) || f(t1) || f(t2) || ...). Note that short-circuiting occurs for the || operators.

An example:

    void AnyF (Any& dst, Any const& src) {
        // dst and src are either MultiFab or fMultiFab
        auto tt = CartesianProduct(TypeList<MultiFab,fMultiFab>{},
                                   TypeList<MultiFab,fMultiFab>{});
        bool r = ForEachUntil(tt, [&] (auto t) -> bool
        {
            using MF0 = TypeAt<0,decltype(t)>;
            using MF1 = TypeAt<1,decltype(t)>;
            if (dst.is<MF0>() && src.is<MF1>()) {
                MF0      & dmf = dst.get<MF0>();
                MF1 const& smf = src.get<MF1>();
                f(dmf, smf);
                return true;
            } else {
                return false;
            }
        });
        if (!r) { amrex::Abort("Unsupported types"); }
    }

◆ ForwardAsTuple()

template<typename... Ts>
constexpr AMREX_GPU_HOST_DEVICE GpuTuple<Ts&&...> amrex::ForwardAsTuple ( Ts &&...  args)
constexpr noexcept

◆ FourthOrderInterpFromFineToCoarse()

void amrex::FourthOrderInterpFromFineToCoarse ( MultiFab cmf,
int  scomp,
int  ncomp,
MultiFab const &  fmf,
IntVect const &  ratio 
)

Fourth-order interpolation from fine to coarse level.

This is for high-order "average-down" of finite-difference data. If ghost cell data are used, it's the caller's responsibility to fill the ghost cells before calling this function.

Parameters
    cmf    coarse data
    scomp  starting component
    ncomp  number of components
    fmf    fine data
    ratio  refinement ratio.
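
A minimal usage sketch (crse and fine are assumed MultiFabs on the coarse and fine levels with a refinement ratio of 2; fine ghost cells must already be filled):

    amrex::FourthOrderInterpFromFineToCoarse(crse, 0, crse.nComp(), fine,
                                             amrex::IntVect(2));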

◆ gatherParticles()

template<typename PTile , typename N , typename Index , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
void amrex::gatherParticles ( PTile &  dst,
const PTile &  src,
N  np,
const Index *  inds 
)

gatherParticles copies particles from an arbitrary order into contiguous order. Specifically, the particle at index inds[i] in src is copied to index i in dst.

Template Parameters
    PTile  the particle tile type
    N      the size type, e.g. Long
    Index  the index type, e.g. unsigned int

Parameters
    dst   the destination tile
    src   the source tile
    np    the number of particles
    inds  pointer to the permutation array
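
The permutation semantics, written out on plain arrays for clarity (a sketch, not the actual implementation):

    // dst[i] = src[inds[i]]: element i of the output is the particle that
    // the permutation array says comes i-th.
    for (N i = 0; i < np; ++i) {
        dst[i] = src[inds[i]];
    }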

◆ GccPlacater()

void amrex::GccPlacater ( )
inline

◆ GccPlacaterMF()

void amrex::GccPlacaterMF ( )
inline

◆ get() [1/3]

template<std::size_t I, typename... Ts>
constexpr AMREX_GPU_HOST_DEVICE GpuTupleElement<I, GpuTuple<Ts...> >::type&& amrex::get ( GpuTuple< Ts... > &&  tup)
constexpr noexcept

◆ get() [2/3]

template<std::size_t I, typename... Ts>
constexpr AMREX_GPU_HOST_DEVICE GpuTupleElement<I, GpuTuple<Ts...> >::type& amrex::get ( GpuTuple< Ts... > &  tup)
constexpr noexcept

◆ get() [3/3]

template<std::size_t I, typename... Ts>
constexpr AMREX_GPU_HOST_DEVICE GpuTupleElement<I, GpuTuple<Ts...> >::type const& amrex::get ( GpuTuple< Ts... > const &  tup)
constexpr noexcept

◆ get_cell_data()

template<typename MF , std::enable_if_t< IsFabArray< MF >::value, int > FOO = 0>
Vector< typename MF::value_type > amrex::get_cell_data ( MF const &  mf,
IntVect const &  cell 
)

Get data in a cell of MultiFab/FabArray.

This returns a Vector containing the data in the given cell if it is available on the calling process. The returned Vector is empty on processes that do not have the given cell.
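
For example (mf is an assumed MultiFab; only the rank that owns the cell gets a non-empty result):

    auto vals = amrex::get_cell_data(mf, amrex::IntVect(AMREX_D_DECL(8,8,8)));
    if (!vals.empty()) {
        // vals[n] holds component n of mf at that cell
    }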

◆ get_command()

std::string amrex::get_command ( )

◆ get_command_argument()

std::string amrex::get_command_argument ( int  number)

Get a command line argument. The executable name is the zeroth argument. Returns an empty string if there are not that many arguments.
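
For example:

    std::string exe  = amrex::get_command_argument(0); // executable name
    std::string arg1 = amrex::get_command_argument(1); // empty if not given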

◆ get_line_data()

template<typename MF , std::enable_if_t< IsFabArray< MF >::value, int > FOO = 0>
MF amrex::get_line_data ( MF const &  mf,
int  dir,
IntVect const &  cell,
Box const &  bnd_bx = Box() 
)

Get data in a line of MultiFab/FabArray.

This returns a MultiFab/FabArray containing the data in a line specified by a direction and a cell.
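
A usage sketch (mf is an assumed MultiFab; this extracts the line along direction 0 that passes through the assumed cell below):

    auto line = amrex::get_line_data(mf, 0, amrex::IntVect(AMREX_D_DECL(0,8,8)));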

◆ get_slice_data()

std::unique_ptr< MultiFab > amrex::get_slice_data ( int  dir,
Real  coord,
const MultiFab cc,
const Geometry geom,
int  start_comp,
int  ncomp,
bool  interpolate = false,
RealBox const &  bnd_rbx = RealBox() 
)

Extract a slice from the given cell-centered MultiFab at coordinate "coord" along direction "dir".
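
A usage sketch (cc and geom are assumed; this extracts the plane normal to direction 2 at coordinate 0.5 from component 0, with interpolation enabled):

    std::unique_ptr<amrex::MultiFab> slice =
        amrex::get_slice_data(2, amrex::Real(0.5), cc, geom, 0, 1, true);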

◆ GetArrOfConstPtrs() [1/3]

template<class T >
std::array<T const*,AMREX_SPACEDIM> amrex::GetArrOfConstPtrs ( const std::array< std::unique_ptr< T >, AMREX_SPACEDIM > &  a)
noexcept

◆ GetArrOfConstPtrs() [2/3]

template<class T >
std::array<T const*,AMREX_SPACEDIM> amrex::GetArrOfConstPtrs ( const std::array< T *, AMREX_SPACEDIM > &  a)
noexcept

◆ GetArrOfConstPtrs() [3/3]

template<class T >
std::array<T const*,AMREX_SPACEDIM> amrex::GetArrOfConstPtrs ( const std::array< T, AMREX_SPACEDIM > &  a)
noexcept

◆ GetArrOfPtrs() [1/2]

template<class T >
std::array<T*,AMREX_SPACEDIM> amrex::GetArrOfPtrs ( const std::array< std::unique_ptr< T >, AMREX_SPACEDIM > &  a)
noexcept

◆ GetArrOfPtrs() [2/2]

template<class T , typename = typename T::FABType>
std::array<T*,AMREX_SPACEDIM> amrex::GetArrOfPtrs ( std::array< T, AMREX_SPACEDIM > &  a)
noexcept

◆ GetBndryCells()

BoxList amrex::GetBndryCells ( const BoxArray ba,
int  ngrow 
)

Find the ghost cells of a given BoxArray.

◆ getCell()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::getCell ( BoxND< dim > const *  boxes,
int  nboxes,
Long  icell 
)
noexcept

◆ getDefaultCompNameInt()

template<typename P >
std::string amrex::getDefaultCompNameInt ( const int  i)

◆ getDefaultCompNameReal()

template<typename P >
std::string amrex::getDefaultCompNameReal ( const int  i)

◆ getEBCellFlagFab()

const EBCellFlagFab & amrex::getEBCellFlagFab ( const FArrayBox fab)

◆ getEnum()

template<typename T , typename ET = amrex_enum_traits<T>, std::enable_if_t< ET::value, int > = 0>
T amrex::getEnum ( std::string_view const &  s)

◆ getEnumCaseInsensitive()

template<typename T , typename ET = amrex_enum_traits<T>, std::enable_if_t< ET::value, int > = 0>
T amrex::getEnumCaseInsensitive ( std::string_view const &  s)

◆ getEnumClassName()

template<typename T , typename ET = amrex_enum_traits<T>, std::enable_if_t< ET::value, int > = 0>
std::string amrex::getEnumClassName ( )

◆ getEnumNameString()

template<typename T , typename ET = amrex_enum_traits<T>, std::enable_if_t< ET::value, int > = 0>
std::string amrex::getEnumNameString ( T const &  v)

◆ getEnumNameStrings()

template<typename T , typename ET = amrex_enum_traits<T>, std::enable_if_t< ET::value, int > = 0>
std::vector<std::string> amrex::getEnumNameStrings ( )

◆ getEnumNameValuePairs()

template<typename T , typename ET = amrex_enum_traits<T>, std::enable_if_t< ET::value, int > = 0>
std::vector<std::pair<std::string,T> > amrex::getEnumNameValuePairs ( )

◆ getFPExcept()

FPExcept amrex::getFPExcept ( )

Return currently enabled FP exceptions. Linux only.

◆ getIndexBounds() [1/3]

template<int dim>
AMREX_FORCE_INLINE BoxND<dim> amrex::getIndexBounds ( BoxND< dim > const &  b1)
noexcept

◆ getIndexBounds() [2/3]

template<int dim>
AMREX_FORCE_INLINE BoxND<dim> amrex::getIndexBounds ( BoxND< dim > const &  b1,
BoxND< dim > const &  b2 
)
noexcept

◆ getIndexBounds() [3/3]

template<class T , class ... Ts>
AMREX_FORCE_INLINE auto amrex::getIndexBounds ( T const &  b1,
T const &  b2,
Ts const &...  b3 
)
noexcept

◆ getParticleCell() [1/3]

template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVect amrex::getParticleCell ( P const &  p,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  plo,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  dxi 
)
noexcept

Returns the cell index for a given particle using the provided lower bounds and cell sizes.

This version indexes cells starting from 0 at the lower left corner of the provided lower bounds, i.e., it returns a local index.

Template Parameters
    P  a type of AMReX particle.

Parameters
    p    the particle for which the cell index is calculated
    plo  the low end of the domain
    dxi  cell sizes in each dimension

◆ getParticleCell() [2/3]

template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVect amrex::getParticleCell ( P const &  p,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  plo,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  dxi,
const Box domain 
)
noexcept

Returns the cell index for a given particle using the provided lower bounds, cell sizes and global domain offset.

This version indexes cells starting from 0 at the lower left corner of the simulation geometry, i.e., it returns a global index.

Template Parameters
    P  a type of AMReX particle.

Parameters
    p       the particle for which the cell index is calculated
    plo     the low end of the domain
    dxi     cell sizes in each dimension
    domain  AMReX box in which the given particle resides
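
The two variants differ only in the index origin, sketched here (p, plo, dxi and domain are assumed):

    // Local index: cells counted from 0 at plo.
    amrex::IntVect lcell = amrex::getParticleCell(p, plo, dxi);
    // Global index: cells counted from 0 at the lower corner of the simulation geometry.
    amrex::IntVect gcell = amrex::getParticleCell(p, plo, dxi, domain);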

◆ getParticleCell() [3/3]

template<typename PTD >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVect amrex::getParticleCell ( PTD const &  ptd,
int  i,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  plo,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  dxi,
const Box domain 
)
noexcept

◆ getParticleGrid()

template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int amrex::getParticleGrid ( P const &  p,
amrex::Array4< int > const &  mask,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  plo,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  dxi,
const Box domain 
)
noexcept

◆ getRandState()

AMREX_FORCE_INLINE randState_t* amrex::getRandState ( )

◆ getTileIndex()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int amrex::getTileIndex ( const IntVect iv,
const Box box,
const bool  a_do_tiling,
const IntVect a_tile_size,
Box tbx 
)

◆ GetVecOfArrOfConstPtrs() [1/2]

template<class T >
Vector<std::array<T const*,AMREX_SPACEDIM> > amrex::GetVecOfArrOfConstPtrs ( const Vector< std::array< std::unique_ptr< T >, AMREX_SPACEDIM > > &  a)

◆ GetVecOfArrOfConstPtrs() [2/2]

template<class T , std::enable_if_t< IsFabArray< T >::value||IsBaseFab< T >::value, int > = 0>
Vector<std::array<T const*,AMREX_SPACEDIM> > amrex::GetVecOfArrOfConstPtrs ( const Vector< std::array< T, AMREX_SPACEDIM > > &  a)

◆ GetVecOfArrOfPtrs() [1/2]

template<class T >
Vector<std::array<T*,AMREX_SPACEDIM> > amrex::GetVecOfArrOfPtrs ( const Vector< std::array< std::unique_ptr< T >, AMREX_SPACEDIM > > &  a)

◆ GetVecOfArrOfPtrs() [2/2]

template<class T , std::enable_if_t< IsFabArray< T >::value||IsBaseFab< T >::value, int > = 0>
Vector<std::array<T*, AMREX_SPACEDIM> > amrex::GetVecOfArrOfPtrs ( Vector< std::array< T, AMREX_SPACEDIM > > &  a)

◆ GetVecOfArrOfPtrsConst()

template<class T >
Vector<std::array<T const*,AMREX_SPACEDIM> > amrex::GetVecOfArrOfPtrsConst ( const Vector< std::array< std::unique_ptr< T >, AMREX_SPACEDIM > > &  a)

◆ GetVecOfConstPtrs() [1/3]

template<class T >
Vector<const T*> amrex::GetVecOfConstPtrs ( const Vector< std::unique_ptr< T > > &  a)

◆ GetVecOfConstPtrs() [2/3]

template<class T , typename = typename T::FABType>
Vector<const T*> amrex::GetVecOfConstPtrs ( const Vector< T * > &  a)

◆ GetVecOfConstPtrs() [3/3]

template<class T , typename = typename T::FABType>
Vector<const T*> amrex::GetVecOfConstPtrs ( const Vector< T > &  a)

◆ GetVecOfPtrs() [1/2]

template<class T >
Vector<T*> amrex::GetVecOfPtrs ( const Vector< std::unique_ptr< T > > &  a)

◆ GetVecOfPtrs() [2/2]

template<class T , typename = typename T::FABType>
Vector<T*> amrex::GetVecOfPtrs ( Vector< T > &  a)

◆ GetVecOfVecOfPtrs()

template<class T >
Vector<Vector<T*> > amrex::GetVecOfVecOfPtrs ( const Vector< Vector< std::unique_ptr< T > > > &  a)

◆ gpuGetErrorString()

const char* amrex::gpuGetErrorString ( gpuError_t  error)
inline

◆ gpuGetLastError()

gpuError_t amrex::gpuGetLastError ( )
inline

◆ grad_eb_of_phi_on_centroids() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::grad_eb_of_phi_on_centroids ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  phi,
Array4< Real const > const &  phieb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  ccent,
Array4< Real const > const &  bcent,
Real &  nrmx,
Real &  nrmy,
bool  is_eb_inhomog 
)

◆ grad_eb_of_phi_on_centroids() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::grad_eb_of_phi_on_centroids ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  phi,
Array4< Real const > const &  phieb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  ccent,
Array4< Real const > const &  bcent,
Real &  nrmx,
Real &  nrmy,
Real &  nrmz,
bool  is_eb_inhomog 
)

◆ grad_eb_of_phi_on_centroids_extdir() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::grad_eb_of_phi_on_centroids_extdir ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  phi,
Array4< Real const > const &  phieb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  ccent,
Array4< Real const > const &  bcent,
Array4< Real const > const &  vfrac,
Real &  nrmx,
Real &  nrmy,
bool  is_eb_inhomog,
const bool  on_x_face,
const int  domlo_x,
const int  domhi_x,
const bool  on_y_face,
const int  domlo_y,
const int  domhi_y 
)

◆ grad_eb_of_phi_on_centroids_extdir() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::grad_eb_of_phi_on_centroids_extdir ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  phi,
Array4< Real const > const &  phieb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  ccent,
Array4< Real const > const &  bcent,
Array4< Real const > const &  vfrac,
Real &  nrmx,
Real &  nrmy,
Real &  nrmz,
bool  is_eb_inhomog,
const bool  on_x_face,
const int  domlo_x,
const int  domhi_x,
const bool  on_y_face,
const int  domlo_y,
const int  domhi_y,
const bool  on_z_face,
const int  domlo_z,
const int  domhi_z 
)

◆ grad_x_of_phi_on_centroids() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::grad_x_of_phi_on_centroids ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  phi,
Array4< Real const > const &  phieb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  ccent,
Array4< Real const > const &  bcent,
Real &  yloc_on_xface,
bool  is_eb_dirichlet,
bool  is_eb_inhomog 
)

◆ grad_x_of_phi_on_centroids() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::grad_x_of_phi_on_centroids ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  phi,
Array4< Real const > const &  phieb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  ccent,
Array4< Real const > const &  bcent,
Real &  yloc_on_xface,
Real &  zloc_on_xface,
bool  is_eb_dirichlet,
bool  is_eb_inhomog 
)

◆ grad_x_of_phi_on_centroids_extdir() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::grad_x_of_phi_on_centroids_extdir ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  phi,
Array4< Real const > const &  phieb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  ccent,
Array4< Real const > const &  bcent,
Array4< Real const > const &  vfrac,
Real &  yloc_on_xface,
bool  is_eb_dirichlet,
bool  is_eb_inhomog,
const bool  on_x_face,
const int  domlo_x,
const int  domhi_x,
const bool  on_y_face,
const int  domlo_y,
const int  domhi_y 
)

◆ grad_x_of_phi_on_centroids_extdir() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::grad_x_of_phi_on_centroids_extdir ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  phi,
Array4< Real const > const &  phieb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  ccent,
Array4< Real const > const &  bcent,
Array4< Real const > const &  vfrac,
Real &  yloc_on_xface,
Real &  zloc_on_xface,
bool  is_eb_dirichlet,
bool  is_eb_inhomog,
const bool  on_x_face,
const int  domlo_x,
const int  domhi_x,
const bool  on_y_face,
const int  domlo_y,
const int  domhi_y,
const bool  on_z_face,
const int  domlo_z,
const int  domhi_z 
)

◆ grad_y_of_phi_on_centroids() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::grad_y_of_phi_on_centroids ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  phi,
Array4< Real const > const &  phieb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  ccent,
Array4< Real const > const &  bcent,
Real &  xloc_on_yface,
bool  is_eb_dirichlet,
bool  is_eb_inhomog 
)

◆ grad_y_of_phi_on_centroids() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::grad_y_of_phi_on_centroids ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  phi,
Array4< Real const > const &  phieb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  ccent,
Array4< Real const > const &  bcent,
Real &  xloc_on_yface,
Real &  zloc_on_yface,
bool  is_eb_dirichlet,
bool  is_eb_inhomog 
)

◆ grad_y_of_phi_on_centroids_extdir() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::grad_y_of_phi_on_centroids_extdir ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  phi,
Array4< Real const > const &  phieb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  ccent,
Array4< Real const > const &  bcent,
Array4< Real const > const &  vfrac,
Real &  xloc_on_yface,
bool  is_eb_dirichlet,
bool  is_eb_inhomog,
const bool  on_x_face,
const int  domlo_x,
const int  domhi_x,
const bool  on_y_face,
const int  domlo_y,
const int  domhi_y 
)

◆ grad_y_of_phi_on_centroids_extdir() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::grad_y_of_phi_on_centroids_extdir ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  phi,
Array4< Real const > const &  phieb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  ccent,
Array4< Real const > const &  bcent,
Array4< Real const > const &  vfrac,
Real &  xloc_on_yface,
Real &  zloc_on_yface,
bool  is_eb_dirichlet,
bool  is_eb_inhomog,
const bool  on_x_face,
const int  domlo_x,
const int  domhi_x,
const bool  on_y_face,
const int  domlo_y,
const int  domhi_y,
const bool  on_z_face,
const int  domlo_z,
const int  domhi_z 
)

◆ grad_z_of_phi_on_centroids()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::grad_z_of_phi_on_centroids ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  phi,
Array4< Real const > const &  phieb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  ccent,
Array4< Real const > const &  bcent,
Real &  xloc_on_zface,
Real &  yloc_on_zface,
bool  is_eb_dirichlet,
bool  is_eb_inhomog 
)

◆ grad_z_of_phi_on_centroids_extdir()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::grad_z_of_phi_on_centroids_extdir ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  phi,
Array4< Real const > const &  phieb,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  ccent,
Array4< Real const > const &  bcent,
Array4< Real const > const &  vfrac,
Real &  xloc_on_zface,
Real &  yloc_on_zface,
bool  is_eb_dirichlet,
bool  is_eb_inhomog,
const bool  on_x_face,
const int  domlo_x,
const int  domhi_x,
const bool  on_y_face,
const int  domlo_y,
const int  domhi_y,
const bool  on_z_face,
const int  domlo_z,
const int  domhi_z 
)

◆ GraphTopPct()

void amrex::GraphTopPct ( const std::map< std::string, BLProfiler::ProfStats > &  mProfStats,
const Vector< Vector< BLProfStats::FuncStat > > &  funcStats,
const Vector< std::string > &  fNames,
Real  runTime,
int  dataNProcs,
Real  gPercent 
)

◆ grow() [1/4]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::grow ( const BoxND< dim > &  b,
const IntVectND< dim > &  v 
)
noexcept

Grow BoxND in each direction by specified amount.

◆ grow() [2/4]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::grow ( const BoxND< dim > &  b,
Direction  d,
int  n_cell 
)
noexcept

◆ grow() [3/4]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::grow ( const BoxND< dim > &  b,
int  i 
)
noexcept

Grow BoxND in all directions by given amount.

NOTE: a negative value shrinks the BoxND by that number of cells.

◆ grow() [4/4]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::grow ( const BoxND< dim > &  b,
int  idir,
int  n_cell 
)
noexcept

Grow BoxND in direction idir by n_cell cells.
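
For example:

    amrex::Box b(amrex::IntVect(0), amrex::IntVect(15));
    auto g1 = amrex::grow(b, 2);      // grow by 2 cells in all directions
    auto g2 = amrex::grow(b, 1, 4);   // grow by 4 cells at both ends of direction 1
    auto g3 = amrex::growHi(b, 1, 1); // grow only the high end of direction 1
    auto g4 = amrex::grow(b, -1);     // a negative amount shrinks the box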

◆ growHi() [1/2]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::growHi ( const BoxND< dim > &  b,
Direction  d,
int  n_cell 
)
noexcept

◆ growHi() [2/2]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::growHi ( const BoxND< dim > &  b,
int  idir,
int  n_cell 
)
noexcept

◆ growLo() [1/2]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::growLo ( const BoxND< dim > &  b,
Direction  d,
int  n_cell 
)
noexcept

◆ growLo() [2/2]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::growLo ( const BoxND< dim > &  b,
int  idir,
int  n_cell 
)
noexcept

◆ ha_interp_face_xy() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::ha_interp_face_xy ( Array4< Real const > const &  crse,
Array4< Real const > const &  sigx,
Array4< Real const > const &  sigy,
int  i,
int  j,
int  ic,
int  jc 
)
noexcept

◆ ha_interp_face_xy() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::ha_interp_face_xy ( Array4< Real const > const &  crse,
Array4< Real const > const &  sigx,
Array4< Real const > const &  sigy,
int  i,
int  j,
int  k,
int  ic,
int  jc,
int  kc 
)
noexcept

◆ ha_interp_face_xz()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::ha_interp_face_xz ( Array4< Real const > const &  crse,
Array4< Real const > const &  sigx,
Array4< Real const > const &  sigz,
int  i,
int  j,
int  k,
int  ic,
int  jc,
int  kc 
)
noexcept

◆ ha_interp_face_yz()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::ha_interp_face_yz ( Array4< Real const > const &  crse,
Array4< Real const > const &  sigy,
Array4< Real const > const &  sigz,
int  i,
int  j,
int  k,
int  ic,
int  jc,
int  kc 
)
noexcept

◆ habec_cols()

template<typename Int >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::habec_cols ( GpuArray< Int, 2 *AMREX_SPACEDIM+1 > &  sten,
int  i,
int  j,
int  k,
Array4< Int const > const &  cell_id 
)

◆ habec_ijmat()

template<typename Int >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::habec_ijmat ( GpuArray< Real, 2 *AMREX_SPACEDIM+1 > &  sten,
Array4< Int > const &  ncols,
Array4< Real > const &  diaginv,
int  i,
int  j,
int  k,
Array4< Int const > const &  cell_id,
Real  sa,
Array4< Real const > const &  a,
Real  sb,
GpuArray< Real, AMREX_SPACEDIM > const &  dx,
GpuArray< Array4< Real const >, AMREX_SPACEDIM > const &  b,
GpuArray< int, AMREX_SPACEDIM *2 > const &  bctype,
GpuArray< Real, AMREX_SPACEDIM *2 > const &  bcl,
int  bho,
Array4< int const > const &  osm 
)

◆ habec_mat()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::habec_mat ( GpuArray< Real, 2 *AMREX_SPACEDIM+1 > &  sten,
int  i,
int  j,
int  k,
Dim3 const &  boxlo,
Dim3 const &  boxhi,
Real  sa,
Array4< Real const > const &  a,
Real  sb,
GpuArray< Real, AMREX_SPACEDIM > const &  dx,
GpuArray< Array4< Real const >, AMREX_SPACEDIM > const &  b,
GpuArray< int, AMREX_SPACEDIM *2 > const &  bctype,
GpuArray< Real, AMREX_SPACEDIM *2 > const &  bcl,
int  bho,
GpuArray< Array4< int const >, AMREX_SPACEDIM *2 > const &  msk,
Array4< Real > const &  diaginv 
)

◆ hash_combine()

template<typename T >
void amrex::hash_combine ( uint64_t &  seed,
const T &  val 
)
noexcept

◆ hash_vector()

template<typename T >
uint64_t amrex::hash_vector ( const Vector< T > &  vec,
uint64_t  seed = 0xDEADBEEFDEADBEEF 
)
noexcept

◆ HostDeviceFor() [1/28]

template<typename L , int dim>
void amrex::HostDeviceFor ( BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ HostDeviceFor() [2/28]

template<int MT, typename L , int dim>
void amrex::HostDeviceFor ( BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ HostDeviceFor() [3/28]

template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceFor ( BoxND< dim > const &  box,
T  ncomp,
L &&  f 
)
noexcept

◆ HostDeviceFor() [4/28]

template<int MT, typename T , int dim, typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceFor ( BoxND< dim > const &  box,
T  ncomp,
L &&  f 
)
noexcept

◆ HostDeviceFor() [5/28]

template<typename L1 , typename L2 , typename L3 , int dim>
void amrex::HostDeviceFor ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ HostDeviceFor() [6/28]

template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void amrex::HostDeviceFor ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ HostDeviceFor() [7/28]

template<typename L1 , typename L2 , int dim>
void amrex::HostDeviceFor ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ HostDeviceFor() [8/28]

template<int MT, typename L1 , typename L2 , int dim>
void amrex::HostDeviceFor ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ HostDeviceFor() [9/28]

template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::HostDeviceFor ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ HostDeviceFor() [10/28]

template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::HostDeviceFor ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ HostDeviceFor() [11/28]

template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::HostDeviceFor ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ HostDeviceFor() [12/28]

template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::HostDeviceFor ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ HostDeviceFor() [13/28]

template<typename L , int dim>
void amrex::HostDeviceFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ HostDeviceFor() [14/28]

template<int MT, typename L , int dim>
void amrex::HostDeviceFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ HostDeviceFor() [15/28]

template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box,
T  ncomp,
L &&  f 
)
noexcept

◆ HostDeviceFor() [16/28]

template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box,
T  ncomp,
L &&  f 
)
noexcept

◆ HostDeviceFor() [17/28]

template<typename L1 , typename L2 , typename L3 , int dim>
void amrex::HostDeviceFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ HostDeviceFor() [18/28]

template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void amrex::HostDeviceFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ HostDeviceFor() [19/28]

template<typename L1 , typename L2 , int dim>
void amrex::HostDeviceFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ HostDeviceFor() [20/28]

template<int MT, typename L1 , typename L2 , int dim>
void amrex::HostDeviceFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ HostDeviceFor() [21/28]

template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::HostDeviceFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ HostDeviceFor() [22/28]

template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::HostDeviceFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ HostDeviceFor() [23/28]

template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::HostDeviceFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ HostDeviceFor() [24/28]

template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::HostDeviceFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ HostDeviceFor() [25/28]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceFor ( Gpu::KernelInfo const &  info,
T  n,
L &&  f 
)
noexcept

◆ HostDeviceFor() [26/28]

template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceFor ( Gpu::KernelInfo const &  info,
T  n,
L &&  f 
)
noexcept

◆ HostDeviceFor() [27/28]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceFor ( T  n,
L &&  f 
)
noexcept

◆ HostDeviceFor() [28/28]

template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceFor ( T  n,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [1/43]

template<typename L , int dim>
void amrex::HostDeviceParallelFor ( BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [2/43]

template<int MT, typename L , int dim>
void amrex::HostDeviceParallelFor ( BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [3/43]

template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceParallelFor ( BoxND< dim > const &  box,
T  ncomp,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [4/43]

template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceParallelFor ( BoxND< dim > const &  box,
T  ncomp,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [5/43]

template<typename L1 , typename L2 , typename L3 , int dim>
void amrex::HostDeviceParallelFor ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ HostDeviceParallelFor() [6/43]

template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void amrex::HostDeviceParallelFor ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ HostDeviceParallelFor() [7/43]

template<typename L1 , typename L2 , int dim>
void amrex::HostDeviceParallelFor ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ HostDeviceParallelFor() [8/43]

template<int MT, typename L1 , typename L2 , int dim>
void amrex::HostDeviceParallelFor ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ HostDeviceParallelFor() [9/43]

template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::HostDeviceParallelFor ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ HostDeviceParallelFor() [10/43]

template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::HostDeviceParallelFor ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ HostDeviceParallelFor() [11/43]

template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::HostDeviceParallelFor ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ HostDeviceParallelFor() [12/43]

template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::HostDeviceParallelFor ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ HostDeviceParallelFor() [13/43]

template<typename L , int dim>
void amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [14/43]

template<int MT, typename L , int dim>
void amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [15/43]

template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box,
T  ncomp,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [16/43]

template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box,
T  ncomp,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [17/43]

template<typename L1 , typename L2 , typename L3 , int dim>
void amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ HostDeviceParallelFor() [18/43]

template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ HostDeviceParallelFor() [19/43]

template<typename L1 , typename L2 , int dim>
void amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ HostDeviceParallelFor() [20/43]

template<int MT, typename L1 , typename L2 , int dim>
void amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ HostDeviceParallelFor() [21/43]

template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ HostDeviceParallelFor() [22/43]

template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ HostDeviceParallelFor() [23/43]

template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ HostDeviceParallelFor() [24/43]

template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ HostDeviceParallelFor() [25/43]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  ,
T  n,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [26/43]

template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  ,
T  n,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [27/43]

template<typename L , int dim>
std::enable_if_t<MaybeHostDeviceRunnable<L>::value> amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [28/43]

template<int MT, typename L , int dim>
std::enable_if_t<MaybeHostDeviceRunnable<L>::value> amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [29/43]

template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t<MaybeHostDeviceRunnable<L>::value> amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box,
T  ncomp,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [30/43]

template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t<MaybeHostDeviceRunnable<L>::value> amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box,
T  ncomp,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [31/43]

template<int MT, typename L1 , typename L2 , typename L3 , int dim>
std::enable_if_t<MaybeHostDeviceRunnable<L1>::value && MaybeHostDeviceRunnable<L2>::value && MaybeHostDeviceRunnable<L3>::value> amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ HostDeviceParallelFor() [32/43]

template<typename L1 , typename L2 , int dim>
std::enable_if_t<MaybeHostDeviceRunnable<L1>::value && MaybeHostDeviceRunnable<L2>::value> amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ HostDeviceParallelFor() [33/43]

template<int MT, typename L1 , typename L2 , int dim>
std::enable_if_t<MaybeHostDeviceRunnable<L1>::value && MaybeHostDeviceRunnable<L2>::value> amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ HostDeviceParallelFor() [34/43]

template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>>
std::enable_if_t<MaybeHostDeviceRunnable<L1>::value && MaybeHostDeviceRunnable<L2>::value> amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ HostDeviceParallelFor() [35/43]

template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>>
std::enable_if_t<MaybeHostDeviceRunnable<L1>::value && MaybeHostDeviceRunnable<L2>::value> amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ HostDeviceParallelFor() [36/43]

template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>, typename M3 = std::enable_if_t<std::is_integral<T3>::value>>
std::enable_if_t<MaybeHostDeviceRunnable<L1>::value && MaybeHostDeviceRunnable<L2>::value && MaybeHostDeviceRunnable<L3>::value> amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ HostDeviceParallelFor() [37/43]

template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>, typename M3 = std::enable_if_t<std::is_integral<T3>::value>>
std::enable_if_t<MaybeHostDeviceRunnable<L1>::value && MaybeHostDeviceRunnable<L2>::value && MaybeHostDeviceRunnable<L3>::value> amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ HostDeviceParallelFor() [38/43]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t<MaybeHostDeviceRunnable<L>::value> amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  info,
T  n,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [39/43]

template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t<MaybeHostDeviceRunnable<L>::value> amrex::HostDeviceParallelFor ( Gpu::KernelInfo const &  info,
T  n,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [40/43]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceParallelFor ( T  n,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [41/43]

template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::HostDeviceParallelFor ( T  n,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [42/43]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t<MaybeHostDeviceRunnable<L>::value> amrex::HostDeviceParallelFor ( T  n,
L &&  f 
)
noexcept

◆ HostDeviceParallelFor() [43/43]

template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t<MaybeHostDeviceRunnable<L>::value> amrex::HostDeviceParallelFor ( T  n,
L &&  f 
)
noexcept

◆ htod_memcpy() [1/2]

template<class FAB , class foo = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::htod_memcpy ( FabArray< FAB > &  dst,
FabArray< FAB > const &  src 
)

◆ htod_memcpy() [2/2]

template<class FAB , class foo = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::htod_memcpy ( FabArray< FAB > &  dst,
FabArray< FAB > const &  src,
int  scomp,
int  dcomp,
int  ncomp 
)

◆ hypmlabeclap_c2f() [1/2]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::hypmlabeclap_c2f ( int  i,
int  j,
int  k,
Array4< GpuArray< Real, 2 *AMREX_SPACEDIM+1 >> const &  stencil,
GpuArray< HYPRE_Int, AMREX_SPACEDIM > *  civ,
HYPRE_Int *  nentries,
int entry_offset,
Real *  entry_values,
Array4< int const > const &  offset_from,
Array4< int const > const &  nentries_to,
Array4< int const > const &  offset_to,
GpuArray< Real, AMREX_SPACEDIM > const &  dx,
Real  sb,
Array4< int const > const &  offset_bx,
Array4< int const > const &  offset_by,
Array4< int const > const &  offset_bz,
Real const *  bx,
Real const *  by,
Real const *  bz,
Array4< int const > const &  fine_mask,
IntVect const &  rr,
Array4< int const > const &  crse_mask 
)

◆ hypmlabeclap_c2f() [2/2]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::hypmlabeclap_c2f ( int  i,
int  j,
int  k,
Array4< GpuArray< Real, 2 *AMREX_SPACEDIM+1 >> const &  stencil,
GpuArray< HYPRE_Int, AMREX_SPACEDIM > *  civ,
HYPRE_Int *  nentries,
int entry_offset,
Real *  entry_values,
Array4< int const > const &  offset_from,
Array4< int const > const &  nentries_to,
Array4< int const > const &  offset_to,
GpuArray< Real, AMREX_SPACEDIM > const &  dx,
Real  sb,
Array4< int const > const &  offset_bx,
Array4< int const > const &  offset_by,
Real const *  bx,
Real const *  by,
Array4< int const > const &  fine_mask,
IntVect const &  rr,
Array4< int const > const &  crse_mask 
)

◆ hypmlabeclap_f2c_set_values()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::hypmlabeclap_f2c_set_values ( IntVect const &  cell,
Real *  values,
GpuArray< Real, AMREX_SPACEDIM > const &  dx,
Real  sb,
GpuArray< Array4< Real const >, AMREX_SPACEDIM > const &  b,
GpuArray< Array4< int const >, AMREX_SPACEDIM *2 > const &  bmask,
IntVect const &  refratio,
int  not_covered 
)

◆ hypmlabeclap_mat()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::hypmlabeclap_mat ( GpuArray< Real, 2 *AMREX_SPACEDIM+1 > &  sten,
int  i,
int  j,
int  k,
Dim3 const &  boxlo,
Dim3 const &  boxhi,
Real  sa,
Array4< Real const > const &  a,
Real  sb,
GpuArray< Real, AMREX_SPACEDIM > const &  dx,
GpuArray< Array4< Real const >, AMREX_SPACEDIM > const &  b,
GpuArray< int, AMREX_SPACEDIM *2 > const &  bctype,
GpuArray< Real, AMREX_SPACEDIM *2 > const &  bcl,
GpuArray< Array4< int const >, AMREX_SPACEDIM *2 > const &  bcmsk,
GpuArray< Array4< Real const >, AMREX_SPACEDIM *2 > const &  bcval,
GpuArray< Array4< Real >, AMREX_SPACEDIM *2 > const &  bcrhs,
int  level,
IntVect const &  fixed_pt 
)

◆ hypmlabeclap_rhs()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::hypmlabeclap_rhs ( int  i,
int  j,
int  k,
Dim3 const &  boxlo,
Dim3 const &  boxhi,
Array4< Real > const &  rhs1,
Array4< Real const > const &  rhs0,
GpuArray< Array4< int const >, AMREX_SPACEDIM *2 > const &  bcmsk,
GpuArray< Array4< Real const >, AMREX_SPACEDIM *2 > const &  bcrhs 
)

◆ IdentityTuple() [1/2]

template<typename... Ts, typename... Ps>
constexpr AMREX_GPU_HOST_DEVICE GpuTuple<Ts...> amrex::IdentityTuple ( GpuTuple< Ts... >  ,
ReduceOps< Ps... >   
)
constexpr noexcept

Return a GpuTuple containing the identity element for each operation in ReduceOps. For example 0, +inf and -inf for ReduceOpSum, ReduceOpMin and ReduceOpMax respectively.

◆ IdentityTuple() [2/2]

template<typename... Ts, typename... Ps>
constexpr AMREX_GPU_HOST_DEVICE GpuTuple<Ts...> amrex::IdentityTuple ( GpuTuple< Ts... >  ,
TypeList< Ps... >   
)
constexpr noexcept

Return a GpuTuple containing the identity element for each ReduceOp in TypeList. For example 0, +inf and -inf for ReduceOpSum, ReduceOpMin and ReduceOpMax respectively.

◆ ignore_unused()

template<class... Ts>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::ignore_unused ( const Ts &  ...)

This shuts up the compiler about unused variables.
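
For example:

    void advance (int step, int verbose) {
        amrex::ignore_unused(verbose); // verbose is only read in some build configurations
        // ... use step ...
    }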

◆ indexFromValue()

template<class FAB , class foo = std::enable_if_t<IsBaseFab<FAB>::value>>
IntVect amrex::indexFromValue ( FabArray< FAB > const &  mf,
int  comp,
IntVect const &  nghost,
typename FAB::value_type  value 
)

◆ IndexTypeCat()

template<int d, int... dims>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IndexTypeND<detail::get_sum<d, dims...>()> amrex::IndexTypeCat ( const IndexTypeND< d > &  v,
const IndexTypeND< dims > &...  vects 
)
constexpr noexcept

Returns an IndexTypeND obtained by concatenating the input IndexTypeNDs. The dimension of the return value equals the sum of the dimensions of the input IndexTypeNDs.

◆ IndexTypeExpand()

template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IndexTypeND<new_dim> amrex::IndexTypeExpand ( const IndexTypeND< old_dim > &  v,
IndexType::CellIndex  fill_extra = IndexType::CellIndex::CELL 
)
constexpr noexcept

Returns a new IndexTypeND of size new_dim, assigning all values of v to it and fill_extra to the remaining elements.

◆ IndexTypeND() [1/2]

template<int dim>
AMREX_GPU_HOST_DEVICE amrex::IndexTypeND ( const IntVectND< dim > &  ) -> IndexTypeND< dim >

◆ IndexTypeND() [2/2]

template<class... Args, std::enable_if_t< IsConvertible_v< IndexType::CellIndex, Args... >, int > = 0>
AMREX_GPU_HOST_DEVICE amrex::IndexTypeND ( IndexType::CellIndex  ,
Args...   
) -> IndexTypeND< sizeof...(Args)+1 >

◆ IndexTypeResize()

template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IndexTypeND<new_dim> amrex::IndexTypeResize ( const IndexTypeND< old_dim > &  v,
IndexType::CellIndex  fill_extra = IndexType::CellIndex::CELL 
)
constexpr noexcept

Returns a new IndexTypeND of size new_dim by either shrinking or expanding v.

◆ IndexTypeShrink()

template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IndexTypeND<new_dim> amrex::IndexTypeShrink ( const IndexTypeND< old_dim > &  v)
constexpr noexcept

Returns a new IndexTypeND of size new_dim and assigns the first new_dim values of v to it.

◆ IndexTypeSplit()

template<int d, int... dims>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE GpuTuple<IndexTypeND<d>, IndexTypeND<dims>...> amrex::IndexTypeSplit ( const IndexTypeND< detail::get_sum< d, dims... >()> &  v)
constexpr noexcept

Returns a tuple of IndexTypeND obtained by splitting the input IndexTypeND according to the dimensions specified by the template arguments.
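
A sketch combining concatenation and splitting (element counts follow from the deduction guides above):

    using CI = amrex::IndexType::CellIndex;
    amrex::IndexTypeND a{CI::NODE, CI::CELL};  // IndexTypeND<2>
    amrex::IndexTypeND b{CI::NODE};            // IndexTypeND<1>
    auto c = amrex::IndexTypeCat(a, b);        // IndexTypeND<3>
    auto t = amrex::IndexTypeSplit<2,1>(c);    // GpuTuple<IndexTypeND<2>, IndexTypeND<1>>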

◆ Initialize() [1/2]

amrex::AMReX * amrex::Initialize ( int &  argc,
char **&  argv,
bool  build_parm_parse = true,
MPI_Comm  mpi_comm = MPI_COMM_WORLD,
const std::function< void()> &  func_parm_parse = {},
std::ostream &  a_osout = std::cout,
std::ostream &  a_oserr = std::cerr,
ErrorHandler  a_errhandler = nullptr 
)

◆ Initialize() [2/2]

amrex::AMReX * amrex::Initialize ( MPI_Comm  mpi_comm,
std::ostream &  a_osout = std::cout,
std::ostream &  a_oserr = std::cerr,
ErrorHandler  a_errhandler = nullptr 
)

◆ Initialized()

bool amrex::Initialized ( )

Returns true if there are any currently-active and initialized AMReX instances (i.e. one for which amrex::Initialize has been called, and amrex::Finalize has not). Otherwise false.

◆ InitRandom()

void amrex::InitRandom ( ULong  cpu_seed,
int  nprocs = ParallelDescriptor::NProcs(),
ULong  gpu_seed = detail::DefaultGpuSeed() 
)

Set the seed of the random number generator.

There is also an entry point callable from Fortran:

    INTEGER seed
    call blutilinitrand(seed)

or

    INTEGER seed
    call blinitrand(seed)
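
A typical C++ call gives each MPI rank its own seed (the base seed 42 is arbitrary):

    amrex::InitRandom(42 + amrex::ParallelDescriptor::MyProc());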

◆ InitSNaN()

bool amrex::InitSNaN ( )
noexcept

◆ interp_face_reg() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::interp_face_reg ( int  i,
int  j,
int  k,
IntVect const &  rr,
Array4< Real > const &  fine,
int  scomp,
Array4< Real const > const &  crse,
Array4< Real > const &  slope,
int  ncomp,
Box const &  domface,
int  idim 
)

◆ interp_face_reg() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::interp_face_reg ( int  i,
int  j,
IntVect const &  rr,
Array4< Real > const &  fine,
int  scomp,
Array4< Real const > const &  crse,
Array4< Real > const &  slope,
int  ncomp,
Box const &  domface,
int  idim 
)

◆ InterpAddBox()

void amrex::InterpAddBox ( MultiFabCopyDescriptor fabCopyDesc,
BoxList returnUnfilledBoxes,
Vector< FillBoxId > &  returnedFillBoxIds,
const Box subbox,
MultiFabId  faid1,
MultiFabId  faid2,
Real  t1,
Real  t2,
Real  t,
int  src_comp,
int  dest_comp,
int  num_comp,
bool  extrap 
)

◆ interpbndrydata_o1()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::interpbndrydata_o1 ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  bdry,
int  nb,
Array4< T const > const &  crse,
int  nc,
Dim3 const &  r 
)
noexcept

◆ interpbndrydata_x_o3()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::interpbndrydata_x_o3 ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  bdry,
int  nb,
Array4< T const > const &  crse,
int  nc,
Dim3 const &  r,
Array4< int const > const &  mask,
int  not_covered,
int  max_width 
)
noexcept

◆ interpbndrydata_y_o3()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::interpbndrydata_y_o3 ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  bdry,
int  nb,
Array4< T const > const &  crse,
int  nc,
Dim3 const &  r,
Array4< int const > const &  mask,
int  not_covered,
int  max_width 
)
noexcept

◆ interpbndrydata_z_o3()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::interpbndrydata_z_o3 ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  bdry,
int  nb,
Array4< T const > const &  crse,
int  nc,
Dim3 const &  r,
Array4< int const > const &  mask,
int  not_covered,
int   
)
noexcept

◆ InterpCrseFineBndryEMfield() [1/2]

void amrex::InterpCrseFineBndryEMfield ( InterpEM_t  interp_type,
const Array< MultiFab const *, AMREX_SPACEDIM > &  crse,
const Array< MultiFab *, AMREX_SPACEDIM > &  fine,
const Geometry cgeom,
const Geometry fgeom,
int  ref_ratio 
)

◆ InterpCrseFineBndryEMfield() [2/2]

void amrex::InterpCrseFineBndryEMfield ( InterpEM_t  interp_type,
const Array< MultiFab, AMREX_SPACEDIM > &  crse,
Array< MultiFab, AMREX_SPACEDIM > &  fine,
const Geometry cgeom,
const Geometry fgeom,
int  ref_ratio 
)

◆ InterpFace() [1/2]

template<typename MF , typename iMF , typename Interp >
std::enable_if_t<IsFabArray<MF>::value && !std::is_same_v<Interp,MFInterpolater> > amrex::InterpFace ( Interp *  interp,
MF const &  mf_crse_patch,
int  crse_comp,
MF &  mf_refined_patch,
int  fine_comp,
int  ncomp,
const IntVect ratio,
const iMF &  solve_mask,
const Geometry crse_geom,
const Geometry fine_geom,
int  bcscomp,
RunOn  gpu_or_cpu,
const Vector< BCRec > &  bcs 
)

◆ InterpFace() [2/2]

template<typename MF , typename iMF >
std::enable_if_t<IsFabArray<MF>::value> amrex::InterpFace ( InterpBase interp,
MF const &  mf_crse_patch,
int  crse_comp,
MF &  mf_refined_patch,
int  fine_comp,
int  ncomp,
const IntVect ratio,
const iMF &  solve_mask,
const Geometry crse_geom,
const Geometry fine_geom,
int  bccomp,
RunOn  gpu_or_cpu,
const Vector< BCRec > &  bcs 
)

◆ InterpFillFab()

void amrex::InterpFillFab ( MultiFabCopyDescriptor fabCopyDesc,
const Vector< FillBoxId > &  fillBoxIds,
MultiFabId  faid1,
MultiFabId  faid2,
FArrayBox dest,
Real  t1,
Real  t2,
Real  t,
int  src_comp,
int  dest_comp,
int  num_comp,
bool  extrap 
)

◆ InterpFromCoarseLevel() [1/5]

template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > amrex::InterpFromCoarseLevel ( Array< MF *, AMREX_SPACEDIM > const &  mf,
IntVect const &  nghost,
Real  time,
const Array< MF *, AMREX_SPACEDIM > &  cmf,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry cgeom,
const Geometry fgeom,
Array< BC, AMREX_SPACEDIM > &  cbc,
int  cbccomp,
Array< BC, AMREX_SPACEDIM > &  fbc,
int  fbccomp,
const IntVect ratio,
Interp *  mapper,
const Array< Vector< BCRec >, AMREX_SPACEDIM > &  bcs,
int  bcscomp,
const PreInterpHook &  pre_interp = {},
const PostInterpHook &  post_interp = {} 
)

Fill face variables with data from the coarse level. Sometimes we need to fillpatch all AMREX_SPACEDIM face MultiFabs together to satisfy a constraint such as divergence preservation.

Template Parameters
    MF              the MultiFab/FabArray type
    BC              functor for filling physical boundaries
    Interp          spatial interpolater
    PreInterpHook   pre-interpolation hook
    PostInterpHook  post-interpolation hook
Parameters
    mf           destination MFs on the fine level
    nghost       number of ghost cells of mf needed to be filled
    time         time associated with mf
    cmf          source MFs on the coarse level
    scomp        starting component of the source MFs
    dcomp        starting component of the destination MFs
    ncomp        number of components
    cgeom        Geometry for the coarse level
    fgeom        Geometry for the fine level
    cbc          functor for physical boundaries on the coarse level
    cbccomp      starting component for cbc
    fbc          functor for physical boundaries on the fine level
    fbccomp      starting component for fbc
    ratio        refinement ratio
    mapper       spatial interpolater
    bcs          boundary types for each component. We need this because some interpolaters need it.
    bcscomp      starting component for bcs
    pre_interp   pre-interpolation hook
    post_interp  post-interpolation hook

◆ InterpFromCoarseLevel() [2/5]

template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > amrex::InterpFromCoarseLevel ( Array< MF *, AMREX_SPACEDIM > const &  mf,
Real  time,
const Array< MF *, AMREX_SPACEDIM > &  cmf,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry cgeom,
const Geometry fgeom,
Array< BC, AMREX_SPACEDIM > &  cbc,
int  cbccomp,
Array< BC, AMREX_SPACEDIM > &  fbc,
int  fbccomp,
const IntVect ratio,
Interp *  mapper,
const Array< Vector< BCRec >, AMREX_SPACEDIM > &  bcs,
int  bcscomp,
const PreInterpHook &  pre_interp = {},
const PostInterpHook &  post_interp = {} 
)

Fill face variables with data from the coarse level. Sometimes we need to fillpatch all AMREX_SPACEDIM face MultiFabs together to satisfy a constraint such as divergence preservation.

Template Parameters
    MF              the MultiFab/FabArray type
    BC              functor for filling physical boundaries
    Interp          spatial interpolater
    PreInterpHook   pre-interpolation hook
    PostInterpHook  post-interpolation hook
Parameters
    mf           destination MFs on the fine level
    time         time associated with mf
    cmf          source MFs on the coarse level
    scomp        starting component of the source MFs
    dcomp        starting component of the destination MFs
    ncomp        number of components
    cgeom        Geometry for the coarse level
    fgeom        Geometry for the fine level
    cbc          functor for physical boundaries on the coarse level
    cbccomp      starting component for cbc
    fbc          functor for physical boundaries on the fine level
    fbccomp      starting component for fbc
    ratio        refinement ratio
    mapper       spatial interpolater
    bcs          boundary types for each component. We need this because some interpolaters need it.
    bcscomp      starting component for bcs
    pre_interp   pre-interpolation hook
    post_interp  post-interpolation hook

◆ InterpFromCoarseLevel() [3/5]

template<typename MF , typename Interp >
std::enable_if_t< IsFabArray< MF >::value > amrex::InterpFromCoarseLevel ( MF &  mf,
IntVect const &  nghost,
IntVect const &  nghost_outside_domain,
const MF &  cmf,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry cgeom,
const Geometry fgeom,
const IntVect ratio,
Interp *  mapper,
const Vector< BCRec > &  bcs,
int  bcscomp 
)

Fill with interpolation of coarse level data.

It's the CALLER's responsibility to make sure all ghost cells of the coarse MF needed for interpolation are filled already before calling this function. It's assumed that the fine level MultiFab mf's BoxArray is coarsenable by the refinement ratio. There is no support for EB.

Template Parameters
    MF      the MultiFab/FabArray type
    Interp  spatial interpolater
Parameters
    mf                      destination MF on the fine level
    nghost                  number of ghost cells of mf inside the domain needed to be filled
    nghost_outside_domain   number of ghost cells of mf outside the domain needed to be filled
    cmf                     source MF on the coarse level
    scomp                   starting component of the source MF
    dcomp                   starting component of the destination MF
    ncomp                   number of components
    cgeom                   Geometry for the coarse level
    fgeom                   Geometry for the fine level
    ratio                   refinement ratio
    mapper                  spatial interpolater
    bcs                     boundary types for each component
    bcscomp                 starting component for bcs

◆ InterpFromCoarseLevel() [4/5]

template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > amrex::InterpFromCoarseLevel ( MF &  mf,
IntVect const &  nghost,
Real  time,
const MF &  cmf,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry cgeom,
const Geometry fgeom,
BC &  cbc,
int  cbccomp,
BC &  fbc,
int  fbccomp,
const IntVect ratio,
Interp *  mapper,
const Vector< BCRec > &  bcs,
int  bcscomp,
const PreInterpHook &  pre_interp = {},
const PostInterpHook &  post_interp = {} 
)

Fill with interpolation of coarse level data.

Template Parameters
    MF              the MultiFab/FabArray type
    BC              functor for filling physical boundaries
    Interp          spatial interpolater
    PreInterpHook   pre-interpolation hook
    PostInterpHook  post-interpolation hook
Parameters
    mf           destination MF on the fine level
    nghost       number of ghost cells of mf needed to be filled
    time         time associated with mf
    cmf          source MF on the coarse level
    scomp        starting component of the source MF
    dcomp        starting component of the destination MF
    ncomp        number of components
    cgeom        Geometry for the coarse level
    fgeom        Geometry for the fine level
    cbc          functor for physical boundaries on the coarse level
    cbccomp      starting component for cbc
    fbc          functor for physical boundaries on the fine level
    fbccomp      starting component for fbc
    ratio        refinement ratio
    mapper       spatial interpolater
    bcs          boundary types for each component. We need this because some interpolaters need it.
    bcscomp      starting component for bcs
    pre_interp   pre-interpolation hook
    post_interp  post-interpolation hook

◆ InterpFromCoarseLevel() [5/5]

template<typename MF , typename BC , typename Interp , typename PreInterpHook = NullInterpHook<typename MF::FABType::value_type>, typename PostInterpHook = NullInterpHook<typename MF::FABType::value_type>>
std::enable_if_t< IsFabArray< MF >::value > amrex::InterpFromCoarseLevel ( MF &  mf,
Real  time,
const MF &  cmf,
int  scomp,
int  dcomp,
int  ncomp,
const Geometry cgeom,
const Geometry fgeom,
BC &  cbc,
int  cbccomp,
BC &  fbc,
int  fbccomp,
const IntVect ratio,
Interp *  mapper,
const Vector< BCRec > &  bcs,
int  bcscomp,
const PreInterpHook &  pre_interp = {},
const PostInterpHook &  post_interp = {} 
)

Fill with interpolation of coarse level data.

No ghost cells of the destination MF are filled.

Template Parameters
    MF              the MultiFab/FabArray type
    BC              functor for filling physical boundaries
    Interp          spatial interpolater
    PreInterpHook   pre-interpolation hook
    PostInterpHook  post-interpolation hook
Parameters
    mf           destination MF on the fine level
    time         time associated with mf
    cmf          source MF on the coarse level
    scomp        starting component of the source MF
    dcomp        starting component of the destination MF
    ncomp        number of components
    cgeom        Geometry for the coarse level
    fgeom        Geometry for the fine level
    cbc          functor for physical boundaries on the coarse level
    cbccomp      starting component for cbc
    fbc          functor for physical boundaries on the fine level
    fbccomp      starting component for fbc
    ratio        refinement ratio
    mapper       spatial interpolater
    bcs          boundary types for each component. We need this because some interpolaters need it.
    bcscomp      starting component for bcs
    pre_interp   pre-interpolation hook
    post_interp  post-interpolation hook
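
A hedged sketch of this overload, assuming fmf/cmf are fine/coarse MultiFabs, cgeom/fgeom their Geometries, bcs a Vector<BCRec>, and using the predefined PhysBCFunctNoOp boundary functor and the built-in cell_cons_interp interpolater for cell-centered data:

    amrex::PhysBCFunctNoOp cbc, fbc;
    amrex::InterpFromCoarseLevel(fmf, time, cmf, 0, 0, fmf.nComp(),
                                 cgeom, fgeom, cbc, 0, fbc, 0,
                                 amrex::IntVect(2), &amrex::cell_cons_interp,
                                 bcs, 0);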

◆ intersect() [1/6]

void amrex::intersect ( BoxDomain dest,
const BoxDomain fin,
const Box b 
)

Compute the intersection of BoxDomain fin with Box b and place the result into BoxDomain dest.

◆ intersect() [2/6]

BoxArray amrex::intersect ( const BoxArray ba,
const Box b,
const IntVect ng 
)

◆ intersect() [3/6]

BoxArray amrex::intersect ( const BoxArray ba,
const Box b,
int  ng = 0 
)

Make a BoxArray from the intersection of Box b and BoxArray ba, with each box in ba grown by ng ghost cells.

◆ intersect() [4/6]

BoxList amrex::intersect ( const BoxArray ba,
const BoxList bl 
)

Make a BoxList from the intersection of BoxArray and BoxList.

◆ intersect() [5/6]

BoxArray amrex::intersect ( const BoxArray lhs,
const BoxArray rhs 
)

Make a BoxArray from the intersection of two BoxArrays.

◆ intersect() [6/6]

BoxList amrex::intersect ( const BoxList bl,
const Box b 
)

Returns a BoxList defining the intersection of bl with b.
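
For example (a sketch assuming AMREX_SPACEDIM == 3):

    amrex::Box a({0,0,0}, {15,15,15});
    amrex::Box b({8,8,8}, {31,31,31});
    amrex::BoxList bl(a);
    amrex::BoxList in = amrex::intersect(bl, b);  // one box: ({8,8,8},{15,15,15})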

◆ IntVectCat()

template<int d, int... dims>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IntVectND<detail::get_sum<d, dims...>()> amrex::IntVectCat ( const IntVectND< d > &  v,
const IntVectND< dims > &...  vects 
)
constexpr noexcept

Returns an IntVectND obtained by concatenating the input IntVectNDs. The dimension of the return value equals the sum of the dimensions of the input IntVectNDs.

◆ IntVectExpand()

template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IntVectND<new_dim> amrex::IntVectExpand ( const IntVectND< old_dim > &  iv,
int  fill_extra = 0 
)
constexpr noexcept

Returns a new IntVectND of size new_dim and assigns all values of iv to it and fill_extra to the remaining elements.

◆ IntVectND() [1/2]

template<std::size_t dim>
AMREX_GPU_HOST_DEVICE amrex::IntVectND ( const Array< int, dim > &  ) -> IntVectND< dim >

◆ IntVectND() [2/2]

template<class... Args, std::enable_if_t< IsConvertible_v< int, Args... >, int > = 0>
AMREX_GPU_HOST_DEVICE amrex::IntVectND ( int  ,
int  ,
Args...   
) -> IntVectND< sizeof...(Args)+2 >

◆ IntVectResize()

template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IntVectND<new_dim> amrex::IntVectResize ( const IntVectND< old_dim > &  iv,
int  fill_extra = 0 
)
constexpr noexcept

Returns a new IntVectND of size new_dim by either shrinking or expanding iv.

◆ IntVectShrink()

template<int new_dim, int old_dim>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE IntVectND<new_dim> amrex::IntVectShrink ( const IntVectND< old_dim > &  iv)
constexpr noexcept

Returns a new IntVectND of size new_dim and assigns the first new_dim values of iv to it.

◆ IntVectSplit()

template<int d, int... dims>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE GpuTuple<IntVectND<d>, IntVectND<dims>...> amrex::IntVectSplit ( const IntVectND< detail::get_sum< d, dims... >()> &  v)
constexpr noexcept

Returns a tuple of IntVectND obtained by splitting the input IntVectND according to the dimensions specified by the template arguments.
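
A small sketch of the concatenate/split round trip:

    amrex::IntVectND<2> a(1,2);
    amrex::IntVectND<3> b(3,4,5);
    auto cat   = amrex::IntVectCat(a, b);        // IntVectND<5>: (1,2,3,4,5)
    auto parts = amrex::IntVectSplit<2,3>(cat);  // GpuTuple<IntVectND<2>, IntVectND<3>>
    auto a2    = amrex::get<0>(parts);           // (1,2) again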

◆ InvNormDist()

double amrex::InvNormDist ( double  p)

This function returns an approximation of the inverse cumulative standard normal distribution function. I.e., given P, it returns an approximation to the X satisfying P = Pr{Z <= X} where Z is a random variable from the standard normal distribution.

The algorithm uses a minimax approximation by rational functions and the result has a relative error whose absolute value is less than 1.15e-9.

Author
Peter J. Acklam Time-stamp: 2002-06-09 18:45:44 +0200 E-mail: jacklam@math.uio.no WWW URL: http://www.math.uio.no/~jacklam

"p" MUST be in the open interval (0,1).

◆ InvNormDistBest()

double amrex::InvNormDistBest ( double  p)

This function returns an approximation of the inverse cumulative standard normal distribution function. I.e., given P, it returns an approximation to the X satisfying P = Pr{Z <= X} where Z is a random variable from the standard normal distribution.

Original FORTRAN77 version by Michael Wichura.

Michael Wichura, The Percentage Points of the Normal Distribution, Algorithm AS 241, Applied Statistics, Volume 37, Number 3, pages 477-484, 1988.

Our version is based on the C++ version by John Burkardt.

The algorithm uses a minimax approximation by rational functions and the result is good to roughly machine precision. This routine is roughly 30% more costly than InvNormDist() above.

"p" MUST be in the open interval (0,1).

◆ iparser_ast_depth()

int amrex::iparser_ast_depth ( struct iparser_node node)

◆ iparser_ast_dup()

struct iparser_node * amrex::iparser_ast_dup ( struct amrex_iparser my_iparser,
struct iparser_node node,
int  move 
)

◆ iparser_ast_get_symbols()

void amrex::iparser_ast_get_symbols ( struct iparser_node node,
std::set< std::string > &  symbols,
std::set< std::string > &  local_symbols 
)

◆ iparser_ast_optimize()

void amrex::iparser_ast_optimize ( struct iparser_node node)

◆ iparser_ast_print()

void amrex::iparser_ast_print ( struct iparser_node node,
std::string const &  space,
AllPrint printer 
)

◆ iparser_ast_regvar()

void amrex::iparser_ast_regvar ( struct iparser_node node,
char const *  name,
int  i 
)

◆ iparser_ast_setconst()

void amrex::iparser_ast_setconst ( struct iparser_node node,
char const *  name,
long long  c 
)

◆ iparser_ast_size()

std::size_t amrex::iparser_ast_size ( struct iparser_node node)

◆ iparser_atoll()

long long amrex::iparser_atoll ( const char *  str)

◆ iparser_call_f1()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE long long amrex::iparser_call_f1 ( enum  iparser_f1_t,
long long  a 
)

There is only one type for now.

◆ iparser_call_f2()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE long long amrex::iparser_call_f2 ( enum iparser_f2_t  type,
long long  a,
long long  b 
)

◆ iparser_call_f3()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE long long amrex::iparser_call_f3 ( enum  iparser_f3_t,
long long  a,
long long  b,
long long  c 
)

◆ iparser_compile()

void amrex::iparser_compile ( struct amrex_iparser parser,
char *  p 
)
inline

◆ iparser_compile_exe_size()

void amrex::iparser_compile_exe_size ( struct iparser_node node,
char *&  p,
std::size_t &  exe_size,
int max_stack_size,
int stack_size,
Vector< char * > &  local_variables 
)

◆ iparser_defexpr()

void amrex::iparser_defexpr ( struct iparser_node body)

◆ iparser_depth()

int amrex::iparser_depth ( struct amrex_iparser iparser)

◆ iparser_dup()

struct amrex_iparser * amrex::iparser_dup ( struct amrex_iparser source)

◆ iparser_exe_eval()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE long long amrex::iparser_exe_eval ( const char *  p,
long long const *  x 
)

◆ iparser_exe_size()

std::size_t amrex::iparser_exe_size ( struct amrex_iparser parser,
int max_stack_size,
int stack_size 
)
inline

◆ iparser_get_symbols()

std::set< std::string > amrex::iparser_get_symbols ( struct amrex_iparser iparser)

◆ iparser_makesymbol()

struct iparser_symbol * amrex::iparser_makesymbol ( char *  name)

◆ iparser_newassign()

struct iparser_node * amrex::iparser_newassign ( struct iparser_symbol sym,
struct iparser_node v 
)

◆ iparser_newf1()

struct iparser_node * amrex::iparser_newf1 ( enum iparser_f1_t  ftype,
struct iparser_node l 
)

◆ iparser_newf2()

struct iparser_node * amrex::iparser_newf2 ( enum iparser_f2_t  ftype,
struct iparser_node l,
struct iparser_node r 
)

◆ iparser_newf3()

struct iparser_node * amrex::iparser_newf3 ( enum iparser_f3_t  ftype,
struct iparser_node n1,
struct iparser_node n2,
struct iparser_node n3 
)

◆ iparser_newlist()

struct iparser_node * amrex::iparser_newlist ( struct iparser_node nl,
struct iparser_node nr 
)

◆ iparser_newnode()

struct iparser_node * amrex::iparser_newnode ( enum iparser_node_t  type,
struct iparser_node l,
struct iparser_node r 
)

◆ iparser_newnumber()

struct iparser_node * amrex::iparser_newnumber ( long long  d)

◆ iparser_newsymbol()

struct iparser_node * amrex::iparser_newsymbol ( struct iparser_symbol symbol)

◆ iparser_print()

void amrex::iparser_print ( struct amrex_iparser iparser)

◆ iparser_regvar()

void amrex::iparser_regvar ( struct amrex_iparser iparser,
char const *  name,
int  i 
)

◆ iparser_setconst()

void amrex::iparser_setconst ( struct amrex_iparser iparser,
char const *  name,
long long  c 
)

◆ is_aligned()

bool amrex::is_aligned ( const void *  p,
std::size_t  alignment 
)
inline noexcept

◆ is_integer()

bool amrex::is_integer ( const char *  str)

Return true if argument is a non-zero length string of digits.

◆ is_it()

template<typename T >
bool amrex::is_it ( std::string const &  s,
T &  v 
)

Return true and store value in v if string s is type T.
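
For example:

    int v = 0;
    bool ok = amrex::is_it("42", v);  // ok == true, v == 42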

◆ isEmpty() [1/2]

template<int dim>
AMREX_FORCE_INLINE bool amrex::isEmpty ( BoxND< dim > const &  b)
noexcept

◆ isEmpty() [2/2]

template<typename T , std::enable_if_t< std::is_integral_v< T >, int > = 0>
bool amrex::isEmpty ( T  n)
noexcept

◆ isMFIterSafe()

bool amrex::isMFIterSafe ( const FabArrayBase x,
const FabArrayBase y 
)
inline

Is it safe to have these two MultiFabs in the same MFIter? True means it is safe; false means it may not be.

◆ IsParticleTileData() [1/2]

template<class T >
constexpr decltype(T::is_particle_tile_data) amrex::IsParticleTileData ( )
constexpr

◆ IsParticleTileData() [2/2]

template<class T , class... Args>
constexpr bool amrex::IsParticleTileData ( Args...  )
constexpr

◆ isSame()

template<typename A , typename B , std::enable_if_t< std::is_same_v< std::remove_cv_t< A >, std::remove_cv_t< B > >, int > = 0>
bool amrex::isSame ( A const *  pa,
B const *  pb 
)

◆ launch() [1/8]

template<int MT, int dim, typename L >
void amrex::launch ( BoxND< dim > const &  box,
L const &  f 
)
noexcept

◆ launch() [2/8]

template<int MT, typename L >
void amrex::launch ( int  nblocks,
gpuStream_t  stream,
L const &  f 
)
noexcept

◆ launch() [3/8]

template<typename L >
void amrex::launch ( int  nblocks,
int  nthreads_per_block,
gpuStream_t  stream,
L &&  f 
)
noexcept

◆ launch() [4/8]

template<typename L >
void amrex::launch ( int  nblocks,
int  nthreads_per_block,
std::size_t  shared_mem_bytes,
gpuStream_t  stream,
L const &  f 
)
noexcept

◆ launch() [5/8]

template<int MT, typename L >
void amrex::launch ( int  nblocks,
std::size_t  shared_mem_bytes,
gpuStream_t  stream,
L const &  f 
)
noexcept

◆ launch() [6/8]

template<typename T , typename L >
void amrex::launch ( T const &  n,
L &&  f 
)
noexcept

◆ launch() [7/8]

template<int MT, typename T , typename L >
void amrex::launch ( T const &  n,
L &&  f 
)
noexcept

◆ launch() [8/8]

template<int MT, typename T , typename L , std::enable_if_t< std::is_integral_v< T >, int > FOO = 0>
void amrex::launch ( T const &  n,
L const &  f 
)
noexcept

◆ launch_global() [1/2]

template<class L >
AMREX_GPU_GLOBAL void amrex::launch_global ( L  f0)

◆ launch_global() [2/2]

template<class L , class... Lambdas>
AMREX_GPU_GLOBAL void amrex::launch_global ( L  f0,
Lambdas...  fs 
)

◆ launch_host() [1/2]

template<class L >
void amrex::launch_host ( L &&  f0)
noexcept

◆ launch_host() [2/2]

template<class L , class... Lambdas>
void amrex::launch_host ( L &&  f0,
Lambdas &&...  fs 
)
noexcept

◆ lbound() [1/2]

template<class T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 amrex::lbound ( Array4< T > const &  a)
noexcept

◆ lbound() [2/2]

template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 amrex::lbound ( BoxND< dim > const &  box)
noexcept

◆ lbound_iv()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::lbound_iv ( BoxND< dim > const &  box)
noexcept

◆ length() [1/2]

template<class T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 amrex::length ( Array4< T > const &  a)
noexcept

◆ length() [2/2]

template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 amrex::length ( BoxND< dim > const &  box)
noexcept

◆ length_iv()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::length_iv ( BoxND< dim > const &  box)
noexcept

◆ LevelFullPath()

std::string amrex::LevelFullPath ( int  level,
const std::string &  plotfilename,
const std::string &  levelPrefix 
)

return the full path of the level directory, e.g., plt00005/Level_5

◆ LevelPath()

std::string amrex::LevelPath ( int  level,
const std::string &  levelPrefix 
)

return the name of the level directory, e.g., Level_5
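
For example:

    std::string dir  = amrex::LevelPath(5, "Level_");                  // "Level_5"
    std::string full = amrex::LevelFullPath(5, "plt00042", "Level_");  // "plt00042/Level_5"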

◆ LinComb() [1/3]

template<typename T >
void amrex::LinComb ( AlgVector< T > &  y,
T  a,
AlgVector< T > const &  xa,
T  b,
AlgVector< T > const &  xb,
bool  async = false 
)

◆ LinComb() [2/3]

template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::LinComb ( Array< MF, N > &  dst,
typename MF::value_type  a,
Array< MF, N > const &  src_a,
int  acomp,
typename MF::value_type  b,
Array< MF, N > const &  src_b,
int  bcomp,
int  dcomp,
int  ncomp,
IntVect const &  nghost 
)

dst = a*src_a + b*src_b

◆ LinComb() [3/3]

template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::LinComb ( MF &  dst,
typename MF::value_type  a,
MF const &  src_a,
int  acomp,
typename MF::value_type  b,
MF const &  src_b,
int  bcomp,
int  dcomp,
int  ncomp,
IntVect const &  nghost 
)

dst = a*src_a + b*src_b
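
A sketch for the MultiFab case, assuming y, x1, x2 share a BoxArray and DistributionMapping:

    // y = 2*x1 - 0.5*x2 on component 0, valid cells only
    amrex::LinComb(y, 2.0, x1, 0, -0.5, x2, 0, 0, 1, amrex::IntVect(0));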

◆ linear_interpolate_to_particle()

template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::linear_interpolate_to_particle ( const P &  p,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  plo,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  dxi,
const Array4< amrex::Real const > *  data_arr,
amrex::ParticleReal *  val,
const IntVect is_nodal,
int  start_comp,
int  ncomp,
int  num_arrays 
)

Linearly interpolates mesh data to the particle position. This general form can handle an arbitrary number of Array4s, each with different staggerings.

◆ linspace()

template<typename ItType , typename ValType , std::enable_if_t< std::is_floating_point_v< typename std::iterator_traits< ItType >::value_type > &&std::is_floating_point_v< ValType >, int > = 0>
AMREX_GPU_HOST_DEVICE void amrex::linspace ( ItType  first,
const ItType &  last,
const ValType &  start,
const ValType &  stop 
)

◆ LocalAdd() [1/2]

template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::LocalAdd ( Array< MF, N > &  dst,
Array< MF, N > const &  src,
int  scomp,
int  dcomp,
int  ncomp,
IntVect const &  nghost 
)

dst += src

◆ LocalAdd() [2/2]

template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::LocalAdd ( MF &  dst,
MF const &  src,
int  scomp,
int  dcomp,
int  ncomp,
IntVect const &  nghost 
)

dst += src

◆ LocalCopy() [1/2]

template<class DMF , class SMF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< DMF > &&IsMultiFabLike_v< SMF >, int > = 0>
void amrex::LocalCopy ( Array< DMF, N > &  dst,
Array< SMF, N > const &  src,
int  scomp,
int  dcomp,
int  ncomp,
IntVect const &  nghost 
)

dst = src

◆ LocalCopy() [2/2]

template<class DMF , class SMF , std::enable_if_t< IsMultiFabLike_v< DMF > &&IsMultiFabLike_v< SMF >, int > = 0>
void amrex::LocalCopy ( DMF &  dst,
SMF const &  src,
int  scomp,
int  dcomp,
int  ncomp,
IntVect const &  nghost 
)

dst = src

◆ log()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::log ( const GpuComplex< T > &  a_z)
noexcept

Complex natural logarithm function.

◆ logspace()

template<typename ItType , typename ValType , std::enable_if_t< std::is_floating_point_v< typename std::iterator_traits< ItType >::value_type > &&std::is_floating_point_v< ValType >, int > = 0>
AMREX_GPU_HOST_DEVICE void amrex::logspace ( ItType  first,
const ItType &  last,
const ValType &  start,
const ValType &  stop,
const ValType &  base 
)

◆ Loop() [1/4]

template<class F , int dim>
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::Loop ( BoxND< dim > const &  bx,
F const &  f 
)
noexcept

◆ Loop() [2/4]

template<class F , int dim>
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::Loop ( BoxND< dim > const &  bx,
int  ncomp,
F const &  f 
)
noexcept

◆ Loop() [3/4]

template<class F >
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::Loop ( Dim3  lo,
Dim3  hi,
F const &  f 
)
noexcept

◆ Loop() [4/4]

template<class F >
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::Loop ( Dim3  lo,
Dim3  hi,
int  ncomp,
F const &  f 
)
noexcept
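
These run f over every cell (and, for the ncomp variants, every component) of the region. A minimal sketch, assuming AMREX_SPACEDIM == 3 so f takes (i,j,k):

    amrex::Box bx(amrex::IntVect(0), amrex::IntVect(7));
    amrex::Loop(bx, [] (int i, int j, int k)
    {
        // ... per-cell work on (i,j,k) ...
    });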

◆ LoopConcurrent() [1/4]

template<class F , int dim>
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::LoopConcurrent ( BoxND< dim > const &  bx,
F const &  f 
)
noexcept

◆ LoopConcurrent() [2/4]

template<class F , int dim>
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::LoopConcurrent ( BoxND< dim > const &  bx,
int  ncomp,
F const &  f 
)
noexcept

◆ LoopConcurrent() [3/4]

template<class F >
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::LoopConcurrent ( Dim3  lo,
Dim3  hi,
F const &  f 
)
noexcept

◆ LoopConcurrent() [4/4]

template<class F >
AMREX_GPU_HOST_DEVICE AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::LoopConcurrent ( Dim3  lo,
Dim3  hi,
int  ncomp,
F const &  f 
)
noexcept

◆ LoopConcurrentOnCpu() [1/4]

template<class F , int dim>
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::LoopConcurrentOnCpu ( BoxND< dim > const &  bx,
F const &  f 
)
noexcept

◆ LoopConcurrentOnCpu() [2/4]

template<class F , int dim>
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::LoopConcurrentOnCpu ( BoxND< dim > const &  bx,
int  ncomp,
F const &  f 
)
noexcept

◆ LoopConcurrentOnCpu() [3/4]

template<class F >
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::LoopConcurrentOnCpu ( Dim3  lo,
Dim3  hi,
F const &  f 
)
noexcept

◆ LoopConcurrentOnCpu() [4/4]

template<class F >
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::LoopConcurrentOnCpu ( Dim3  lo,
Dim3  hi,
int  ncomp,
F const &  f 
)
noexcept

◆ LoopOnCpu() [1/4]

template<class F , int dim>
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::LoopOnCpu ( BoxND< dim > const &  bx,
F const &  f 
)
noexcept

◆ LoopOnCpu() [2/4]

template<class F , int dim>
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::LoopOnCpu ( BoxND< dim > const &  bx,
int  ncomp,
F const &  f 
)
noexcept

◆ LoopOnCpu() [3/4]

template<class F >
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::LoopOnCpu ( Dim3  lo,
Dim3  hi,
F const &  f 
)
noexcept

◆ LoopOnCpu() [4/4]

template<class F >
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::LoopOnCpu ( Dim3  lo,
Dim3  hi,
int  ncomp,
F const &  f 
)
noexcept

◆ lower_bound()

template<typename ItType , typename ValType >
AMREX_GPU_HOST_DEVICE ItType amrex::lower_bound ( ItType  first,
ItType  last,
const ValType &  val 
)

◆ mac_interpolate()

template<typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mac_interpolate ( const P &  p,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  plo,
amrex::GpuArray< amrex::Real, AMREX_SPACEDIM > const &  dxi,
amrex::GpuArray< amrex::Array4< amrex::Real const >, AMREX_SPACEDIM > const &  data_arr,
amrex::ParticleReal *  val 
)

Linearly interpolates the mesh data to the particle position from face-centered data. The nth component of the data_arr array is nodal in the nth direction, and cell-centered in the others.

◆ makeArray4()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Array4<T> amrex::makeArray4 ( T *  p,
Box const &  bx,
int  ncomp 
)
noexcept

◆ makeEBFabFactory() [1/3]

std::unique_ptr< EBFArrayBoxFactory > amrex::makeEBFabFactory ( const EB2::IndexSpace index_space,
const Geometry a_geom,
const BoxArray a_ba,
const DistributionMapping a_dm,
const Vector< int > &  a_ngrow,
EBSupport  a_support 
)

◆ makeEBFabFactory() [2/3]

std::unique_ptr< EBFArrayBoxFactory > amrex::makeEBFabFactory ( const EB2::Level eb_level,
const BoxArray a_ba,
const DistributionMapping a_dm,
const Vector< int > &  a_ngrow,
EBSupport  a_support 
)

◆ makeEBFabFactory() [3/3]

std::unique_ptr< EBFArrayBoxFactory > amrex::makeEBFabFactory ( const Geometry a_geom,
const BoxArray a_ba,
const DistributionMapping a_dm,
const Vector< int > &  a_ngrow,
EBSupport  a_support 
)

◆ makeFineMask() [1/7]

iMultiFab amrex::makeFineMask ( const BoxArray cba,
const DistributionMapping cdm,
const BoxArray fba,
const IntVect ratio,
int  crse_value,
int  fine_value 
)

◆ makeFineMask() [2/7]

MultiFab amrex::makeFineMask ( const BoxArray cba,
const DistributionMapping cdm,
const BoxArray fba,
const IntVect ratio,
Real  crse_value,
Real  fine_value 
)

◆ makeFineMask() [3/7]

iMultiFab amrex::makeFineMask ( const BoxArray cba,
const DistributionMapping cdm,
const IntVect cnghost,
const BoxArray fba,
const IntVect ratio,
Periodicity const &  period,
int  crse_value,
int  fine_value 
)

◆ makeFineMask() [4/7]

template<typename FAB >
iMultiFab amrex::makeFineMask ( const FabArray< FAB > &  cmf,
const BoxArray fba,
const IntVect ratio,
int  crse_value = 0,
int  fine_value = 1 
)

Return an iMultiFab that has the same BoxArray and DistributionMapping as the coarse MultiFab cmf. Cells covered by the coarsened fine grids are set to fine_value, whereas other cells are set to crse_value.
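
A sketch, with cmf a coarse-level MultiFab and fba the fine-level BoxArray:

    amrex::iMultiFab mask = amrex::makeFineMask(cmf, fba, amrex::IntVect(2));
    // mask == 1 under the coarsened fine grids, 0 elsewhere (the default values)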

◆ makeFineMask() [5/7]

template<typename FAB >
iMultiFab amrex::makeFineMask ( const FabArray< FAB > &  cmf,
const BoxArray fba,
const IntVect ratio,
Periodicity const &  period,
int  crse_value,
int  fine_value 
)

◆ makeFineMask() [6/7]

template<typename FAB >
iMultiFab amrex::makeFineMask ( const FabArray< FAB > &  cmf,
const FabArray< FAB > &  fmf,
const IntVect cnghost,
const IntVect ratio,
Periodicity const &  period,
int  crse_value,
int  fine_value 
)

◆ makeFineMask() [7/7]

template<typename FAB >
iMultiFab amrex::makeFineMask ( const FabArray< FAB > &  cmf,
const FabArray< FAB > &  fmf,
const IntVect cnghost,
const IntVect ratio,
Periodicity const &  period,
int  crse_value,
int  fine_value,
LayoutData< int > &  has_cf 
)

◆ makeFineMask_doit()

template<typename FAB >
void amrex::makeFineMask_doit ( FabArray< FAB > &  mask,
const BoxArray fba,
const IntVect ratio,
Periodicity const &  period,
typename FAB::value_type  crse_value,
typename FAB::value_type  fine_value 
)

◆ MakeFuncPctTimesMF()

void amrex::MakeFuncPctTimesMF ( const Vector< Vector< BLProfStats::FuncStat > > &  funcStats,
const Vector< std::string > &  blpFNames,
const std::map< std::string, BLProfiler::ProfStats > &  mProfStats,
Real  runTime,
int  dataNProcs 
)

◆ makeHypre()

std::unique_ptr< Hypre > amrex::makeHypre ( const BoxArray grids,
const DistributionMapping dmap,
const Geometry geom,
MPI_Comm  comm_,
Hypre::Interface  interface,
const iMultiFab overset_mask 
)

◆ MakeITracker()

void amrex::MakeITracker ( amrex::Box const &  bx,
AMREX_D_DECL(amrex::Array4< amrex::Real const > const &apx, amrex::Array4< amrex::Real const > const &apy, amrex::Array4< amrex::Real const > const &apz)  ,
amrex::Array4< amrex::Real const > const &  vfrac,
amrex::Array4< int > const &  itracker,
amrex::Geometry const &  geom,
amrex::Real  target_volfrac 
)

◆ makePetsc()

std::unique_ptr< PETScABecLap > amrex::makePetsc ( const BoxArray grids,
const DistributionMapping dmap,
const Geometry geom,
MPI_Comm  comm_ 
)

◆ makePolymorphic()

template<typename T >
PolymorphicArray4<T> amrex::makePolymorphic ( Array4< T > const &  a)

◆ MakeSimilarDM() [1/2]

DistributionMapping amrex::MakeSimilarDM ( const BoxArray ba,
const BoxArray src_ba,
const DistributionMapping src_dm,
const IntVect ng 
)

Function that creates a DistributionMapping "similar" to that of a MultiFab.

"Similar" means that, if a box in "ba" intersects with any of the boxes in the BoxArray associated with "mf", taking "ngrow" ghost cells into account, then that box will be assigned to the proc owning the one it has the maximum amount of overlap with.

Parameters
    [in]  ba      The BoxArray we want to generate a DistributionMapping for.
    [in]  src_ba  The BoxArray associated with the src DistributionMapping.
    [in]  src_dm  The input DistributionMapping we want the output to be similar to.
    [in]  ng      The number of grow cells to use when computing intersection / overlap.
Returns
    The computed DistributionMapping.

◆ MakeSimilarDM() [2/2]

DistributionMapping amrex::MakeSimilarDM ( const BoxArray ba,
const MultiFab mf,
const IntVect ng 
)

Function that creates a DistributionMapping "similar" to that of a MultiFab.

"Similar" means that, if a box in "ba" intersects with any of the boxes in the BoxArray associated with "mf", taking "ngrow" ghost cells into account, then that box will be assigned to the proc owning the one it has the maximum amount of overlap with.

Parameters
    [in]  ba  The BoxArray we want to generate a DistributionMapping for.
    [in]  mf  The MultiFab we want said DistributionMapping to be similar to.
    [in]  ng  The number of grow cells to use when computing intersection / overlap.
Returns
    The computed DistributionMapping.
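
A sketch, with ba the new BoxArray and mf an existing MultiFab:

    amrex::DistributionMapping dm = amrex::MakeSimilarDM(ba, mf, amrex::IntVect(1));
    amrex::MultiFab newmf(ba, dm, mf.nComp(), 0);  // copies between mf and newmf stay mostly on-rank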

◆ makeSingleCellBox() [1/2]

template<int dim = AMREX_SPACEDIM, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::makeSingleCellBox ( int  i,
int  j,
int  k,
IndexTypeND< dim >  typ = IndexTypeND<dim>::TheCellType() 
)

◆ makeSingleCellBox() [2/2]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::makeSingleCellBox ( IntVectND< dim > const &  vect,
IndexTypeND< dim >  typ = IndexTypeND<dim>::TheCellType() 
)

◆ makeSlab()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::makeSlab ( BoxND< dim > const &  b,
int  direction,
int  slab_index 
)
noexcept

◆ MakeStateRedistUtils() [1/2]

void amrex::MakeStateRedistUtils ( amrex::Box const &  bx,
amrex::Array4< amrex::EBCellFlag const > const &  flag,
amrex::Array4< amrex::Real const > const &  vfrac,
amrex::Array4< amrex::Real const > const &  ccent,
amrex::Array4< int const > const &  itracker,
amrex::Array4< amrex::Real > const &  nrs,
amrex::Array4< amrex::Real > const &  alpha,
amrex::Array4< amrex::Real > const &  nbhd_vol,
amrex::Array4< amrex::Real > const &  cent_hat,
amrex::Geometry const &  geom,
amrex::Real  target_volfrac 
)

◆ MakeStateRedistUtils() [2/2]

void amrex::MakeStateRedistUtils ( Box const &  bx,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vfrac,
Array4< Real const > const &  ccent,
Array4< int const > const &  itracker,
Array4< Real > const &  nrs,
Array4< Real > const &  alpha,
Array4< Real > const &  nbhd_vol,
Array4< Real > const &  cent_hat,
Geometry const &  lev_geom,
Real  target_vol 
)

◆ makeTuple()

template<typename... Ts>
constexpr AMREX_GPU_HOST_DEVICE GpuTuple<detail::tuple_decay_t<Ts>...> amrex::makeTuple ( Ts &&...  args)
constexpr

◆ makeXDim3()

XDim3 amrex::makeXDim3 ( const Array< Real, AMREX_SPACEDIM > &  a)
inline noexcept

◆ MakeZeroTuple()

template<typename... Ts>
constexpr AMREX_GPU_HOST_DEVICE GpuTuple<Ts...> amrex::MakeZeroTuple ( GpuTuple< Ts... >  )
constexpr noexcept

Return a GpuTuple containing all zeros. Note that a default-constructed GpuTuple can have uninitialized values.
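
For example:

    auto z = amrex::MakeZeroTuple(amrex::GpuTuple<int, amrex::Real>{});
    int  n = amrex::get<0>(z);  // 0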

◆ match()

bool amrex::match ( const BoxArray x,
const BoxArray y 
)

Note that two BoxArrays that match are not necessarily equal.

◆ max() [1/4]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::max ( const IntVectND< dim > &  p1,
const IntVectND< dim > &  p2 
)
noexcept

Returns the IntVectND that is the component-wise maximum of two argument IntVectNDs.

◆ max() [2/4]

AMREX_GPU_HOST_DEVICE RealVect amrex::max ( const RealVect p1,
const RealVect p2 
)
inline noexcept

Returns the RealVect that is the component-wise maximum of two argument RealVects.

◆ max() [3/4]

template<class T >
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE const T& amrex::max ( const T &  a,
const T &  b 
)
constexpr noexcept

◆ max() [4/4]

template<class T , class ... Ts>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE const T& amrex::max ( const T &  a,
const T &  b,
const Ts &...  c 
)
constexpr noexcept

◆ max_lbound() [1/2]

template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 amrex::max_lbound ( BoxND< dim > const &  b1,
BoxND< dim > const &  b2 
)
noexcept

◆ max_lbound() [2/2]

template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 amrex::max_lbound ( BoxND< dim > const &  b1,
Dim3 const &  lo 
)
noexcept

◆ max_lbound_iv() [1/2]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::max_lbound_iv ( BoxND< dim > const &  b1,
BoxND< dim > const &  b2 
)
noexcept

◆ max_lbound_iv() [2/2]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::max_lbound_iv ( BoxND< dim > const &  b1,
IntVectND< dim > const &  lo 
)
noexcept

◆ MeshToParticle()

template<class PC , class MF , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
void amrex::MeshToParticle ( PC &  pc,
MF const &  mf,
int  lev,
F const &  f 
)

◆ mf_cell_bilin_interp()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mf_cell_bilin_interp ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  fine,
int  fcomp,
Array4< T const > const &  crse,
int  ccomp,
IntVect const &  ratio 
)
noexcept

◆ mf_cell_cons_lin_interp()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mf_cell_cons_lin_interp ( int  i,
int  j,
int  k,
int  ns,
Array4< Real > const &  fine,
int  fcomp,
Array4< Real const > const &  slope,
Array4< Real const > const &  crse,
int  ccomp,
int  ncomp,
IntVect const &  ratio 
)
noexcept

◆ mf_cell_cons_lin_interp_limit_minmax_llslope()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mf_cell_cons_lin_interp_limit_minmax_llslope ( int  i,
int  j,
int  k,
Array4< Real > const &  slope,
Array4< Real const > const &  u,
int  scomp,
int  ncomp,
Box const &  domain,
IntVect const &  ratio,
BCRec const *  bc 
)
noexcept

◆ mf_cell_cons_lin_interp_llslope()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mf_cell_cons_lin_interp_llslope ( int  i,
int  j,
int  k,
Array4< Real > const &  slope,
Array4< Real const > const &  u,
int  scomp,
int  ncomp,
Box const &  domain,
IntVect const &  ratio,
BCRec const *  bc 
)
noexcept

◆ mf_cell_cons_lin_interp_mcslope()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mf_cell_cons_lin_interp_mcslope ( int  i,
int  j,
int  k,
int  ns,
Array4< Real > const &  slope,
Array4< Real const > const &  u,
int  scomp,
int  ncomp,
Box const &  domain,
IntVect const &  ratio,
BCRec const *  bc 
)
noexcept

◆ mf_cell_cons_lin_interp_mcslope_rz()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mf_cell_cons_lin_interp_mcslope_rz ( int  i,
int  j,
int  ns,
Array4< Real > const &  slope,
Array4< Real const > const &  u,
int  scomp,
int  ncomp,
Box const &  domain,
IntVect const &  ratio,
BCRec const *  bc,
Real  drf,
Real  rlo 
)
noexcept

◆ mf_cell_cons_lin_interp_mcslope_sph()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mf_cell_cons_lin_interp_mcslope_sph ( int  i,
int  ns,
Array4< Real > const &  slope,
Array4< Real const > const &  u,
int  scomp,
int  ,
Box const &  domain,
IntVect const &  ratio,
BCRec const *  bc,
Real  drf,
Real  rlo 
)
noexcept

◆ mf_cell_cons_lin_interp_rz()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mf_cell_cons_lin_interp_rz ( int  i,
int  j,
int  ns,
Array4< Real > const &  fine,
int  fcomp,
Array4< Real const > const &  slope,
Array4< Real const > const &  crse,
int  ccomp,
int  ncomp,
IntVect const &  ratio,
Real  drf,
Real  rlo 
)
noexcept

◆ mf_cell_cons_lin_interp_sph()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mf_cell_cons_lin_interp_sph ( int  i,
int  ns,
Array4< Real > const &  fine,
int  fcomp,
Array4< Real const > const &  slope,
Array4< Real const > const &  crse,
int  ccomp,
int  ,
IntVect const &  ratio,
Real  drf,
Real  rlo 
)
noexcept

◆ mf_cell_quadratic_calcslope()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mf_cell_quadratic_calcslope ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  crse,
int  ccomp,
Array4< Real > const &  slope,
Box const &  domain,
BCRec const *  bc 
)
noexcept

◆ mf_cell_quadratic_compute_slopes_xx()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mf_cell_quadratic_compute_slopes_xx ( int  i,
int  j,
int  k,
Array4< Real const > const &  u,
int  nu,
Box const &  domain,
BCRec const &  bc 
)

◆ mf_cell_quadratic_compute_slopes_xy()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mf_cell_quadratic_compute_slopes_xy ( int  i,
int  j,
int  k,
Array4< Real const > const &  u,
int  nu,
Box const &  domain,
BCRec const &  bc 
)

◆ mf_cell_quadratic_compute_slopes_xz()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mf_cell_quadratic_compute_slopes_xz ( int  i,
int  j,
int  k,
Array4< Real const > const &  u,
int  nu,
Box const &  domain,
BCRec const &  bc 
)

◆ mf_cell_quadratic_compute_slopes_yy()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mf_cell_quadratic_compute_slopes_yy ( int  i,
int  j,
int  k,
Array4< Real const > const &  u,
int  nu,
Box const &  domain,
BCRec const &  bc 
)

◆ mf_cell_quadratic_compute_slopes_yz()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mf_cell_quadratic_compute_slopes_yz ( int  i,
int  j,
int  k,
Array4< Real const > const &  u,
int  nu,
Box const &  domain,
BCRec const &  bc 
)

◆ mf_cell_quadratic_compute_slopes_zz()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mf_cell_quadratic_compute_slopes_zz ( int  i,
int  j,
int  k,
Array4< Real const > const &  u,
int  nu,
Box const &  domain,
BCRec const &  bc 
)

◆ mf_cell_quadratic_interp()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mf_cell_quadratic_interp ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  fine,
int  fcomp,
Array4< Real const > const &  crse,
int  ccomp,
Array4< Real const > const &  slope,
IntVect const &  ratio 
)
noexcept

◆ mf_cell_quadratic_interp_rz()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mf_cell_quadratic_interp_rz ( int  i,
int  j,
int  ,
int  n,
Array4< Real > const &  fine,
int  fcomp,
Array4< Real const > const &  crse,
int  ccomp,
Array4< Real const > const &  slope,
IntVect const &  ratio,
GeometryData const &  cs_geomdata,
GeometryData const &  fn_geomdata 
)
noexcept

◆ mf_compute_slopes_x()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mf_compute_slopes_x ( int  i,
int  j,
int  k,
Array4< Real const > const &  u,
int  nu,
Box const &  domain,
BCRec const &  bc 
)

◆ mf_compute_slopes_y()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mf_compute_slopes_y ( int  i,
int  j,
int  k,
Array4< Real const > const &  u,
int  nu,
Box const &  domain,
BCRec const &  bc 
)

◆ mf_compute_slopes_z()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mf_compute_slopes_z ( int  i,
int  j,
int  k,
Array4< Real const > const &  u,
int  nu,
Box const &  domain,
BCRec const &  bc 
)

◆ mf_nodebilin_interp()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mf_nodebilin_interp ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  fine,
int  fcomp,
Array4< Real const > const &  crse,
int  ccomp,
IntVect const &  ratio 
)
noexcept

◆ min() [1/4]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::min ( const IntVectND< dim > &  p1,
const IntVectND< dim > &  p2 
)
noexcept

Returns the IntVectND that is the component-wise minimum of two argument IntVectNDs.

◆ min() [2/4]

AMREX_GPU_HOST_DEVICE RealVect amrex::min ( const RealVect p1,
const RealVect p2 
)
inline noexcept

Returns the RealVect that is the component-wise minimum of two argument RealVects.

◆ min() [3/4]

template<class T >
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE const T& amrex::min ( const T &  a,
const T &  b 
)
constexpr noexcept

◆ min() [4/4]

template<class T , class ... Ts>
AMREX_GPU_HOST_DEVICE constexpr AMREX_FORCE_INLINE const T& amrex::min ( const T &  a,
const T &  b,
const Ts &...  c 
)
constexpr noexcept

◆ min_ubound() [1/2]

template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 amrex::min_ubound ( BoxND< dim > const &  b1,
BoxND< dim > const &  b2 
)
noexcept

◆ min_ubound() [2/2]

template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 amrex::min_ubound ( BoxND< dim > const &  b1,
Dim3 const &  hi 
)
noexcept

◆ min_ubound_iv() [1/2]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::min_ubound_iv ( BoxND< dim > const &  b1,
BoxND< dim > const &  b2 
)
noexcept

◆ min_ubound_iv() [2/2]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::min_ubound_iv ( BoxND< dim > const &  b1,
IntVectND< dim > const &  hi 
)
noexcept

◆ minBox()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::minBox ( const BoxND< dim > &  b1,
const BoxND< dim > &  b2 
)
noexcept

Returns the minimum BoxND containing both argument BoxNDs. Both BoxNDs must have identical type.
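
For example (3D):

    amrex::Box a({0,0,0}, {3,3,3});
    amrex::Box b({2,2,2}, {7,7,7});
    amrex::Box m = amrex::minBox(a, b);  // ({0,0,0}, {7,7,7})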

◆ mlabeclap_adotx() [1/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_adotx ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  y,
Array4< T const > const &  x,
Array4< T const > const &  a,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
Array4< T const > const &  bZ,
GpuArray< T, AMREX_SPACEDIM > const &  dxinv,
T  alpha,
T  beta
)
noexcept

◆ mlabeclap_adotx() [2/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_adotx ( int  i,
int  j,
int  ,
int  n,
Array4< T > const &  y,
Array4< T const > const &  x,
Array4< T const > const &  a,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
GpuArray< T, AMREX_SPACEDIM > const &  dxinv,
T  alpha,
T  beta
)
noexcept

◆ mlabeclap_adotx() [3/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_adotx ( int  i,
int  ,
int  ,
int  n,
Array4< T > const &  y,
Array4< T const > const &  x,
Array4< T const > const &  a,
Array4< T const > const &  bX,
GpuArray< T, AMREX_SPACEDIM > const &  dxinv,
T  alpha,
T  beta
)
noexcept

◆ mlabeclap_adotx_os() [1/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_adotx_os ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  y,
Array4< T const > const &  x,
Array4< T const > const &  a,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
Array4< T const > const &  bZ,
Array4< int const > const &  osm,
GpuArray< T, AMREX_SPACEDIM > const &  dxinv,
T  alpha,
T  beta
)
noexcept

◆ mlabeclap_adotx_os() [2/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_adotx_os ( int  i,
int  j,
int  ,
int  n,
Array4< T > const &  y,
Array4< T const > const &  x,
Array4< T const > const &  a,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
Array4< int const > const &  osm,
GpuArray< T, AMREX_SPACEDIM > const &  dxinv,
T  alpha,
T  beta
)
noexcept

◆ mlabeclap_adotx_os() [3/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_adotx_os ( int  i,
int  ,
int  ,
int  n,
Array4< T > const &  y,
Array4< T const > const &  x,
Array4< T const > const &  a,
Array4< T const > const &  bX,
Array4< int const > const &  osm,
GpuArray< T, AMREX_SPACEDIM > const &  dxinv,
T  alpha,
T  beta
)
noexcept

◆ mlabeclap_flux_x()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_flux_x ( Box const &  box,
Array4< T > const &  fx,
Array4< T const > const &  sol,
Array4< T const > const &  bx,
T  fac,
int  ncomp 
)
noexcept

◆ mlabeclap_flux_xface()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_flux_xface ( Box const &  box,
Array4< T > const &  fx,
Array4< T const > const &  sol,
Array4< T const > const &  bx,
T  fac,
int  xlen,
int  ncomp 
)
noexcept

◆ mlabeclap_flux_y()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_flux_y ( Box const &  box,
Array4< T > const &  fy,
Array4< T const > const &  sol,
Array4< T const > const &  by,
T  fac,
int  ncomp 
)
noexcept

◆ mlabeclap_flux_yface()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_flux_yface ( Box const &  box,
Array4< T > const &  fy,
Array4< T const > const &  sol,
Array4< T const > const &  by,
T  fac,
int  ylen,
int  ncomp 
)
noexcept

◆ mlabeclap_flux_z()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_flux_z ( Box const &  box,
Array4< T > const &  fz,
Array4< T const > const &  sol,
Array4< T const > const &  bz,
T  fac,
int  ncomp 
)
noexcept

◆ mlabeclap_flux_zface()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_flux_zface ( Box const &  box,
Array4< T > const &  fz,
Array4< T const > const &  sol,
Array4< T const > const &  bz,
T  fac,
int  zlen,
int  ncomp 
)
noexcept

◆ mlabeclap_normalize() [1/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_normalize ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  x,
Array4< T const > const &  a,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
Array4< T const > const &  bZ,
GpuArray< T, AMREX_SPACEDIM > const &  dxinv,
T  alpha,
T  beta
)
noexcept

◆ mlabeclap_normalize() [2/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_normalize ( int  i,
int  j,
int  ,
int  n,
Array4< T > const &  x,
Array4< T const > const &  a,
Array4< T const > const &  bX,
Array4< T const > const &  bY,
GpuArray< T, AMREX_SPACEDIM > const &  dxinv,
T  alpha,
T  beta
)
noexcept

◆ mlabeclap_normalize() [3/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlabeclap_normalize ( int  i,
int  ,
int  ,
int  n,
Array4< T > const &  x,
Array4< T const > const &  a,
Array4< T const > const &  bX,
GpuArray< T, AMREX_SPACEDIM > const &  dxinv,
T  alpha,
T  beta
)
noexcept

◆ mlalap_adotx()

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_adotx ( Box const &  box,
Array4< RT > const &  y,
Array4< RT const > const &  x,
Array4< RT const > const &  a,
GpuArray< RT, AMREX_SPACEDIM > const &  dxinv,
RT  alpha,
RT  beta,
int  ncomp 
)
noexcept

◆ mlalap_adotx_m()

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_adotx_m ( Box const &  box,
Array4< RT > const &  y,
Array4< RT const > const &  x,
Array4< RT const > const &  a,
GpuArray< RT, AMREX_SPACEDIM > const &  dxinv,
RT  alpha,
RT  beta,
RT  dx,
RT  probxlo,
int  ncomp 
)
noexcept

◆ mlalap_flux_x()

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_flux_x ( Box const &  box,
Array4< RT > const &  fx,
Array4< RT const > const &  sol,
RT  fac,
int  ncomp 
)
noexcept

◆ mlalap_flux_x_m()

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_flux_x_m ( Box const &  box,
Array4< RT > const &  fx,
Array4< RT const > const &  sol,
RT  fac,
RT  dx,
RT  probxlo,
int  ncomp 
)
noexcept

◆ mlalap_flux_xface()

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_flux_xface ( Box const &  box,
Array4< RT > const &  fx,
Array4< RT const > const &  sol,
RT  fac,
int  xlen,
int  ncomp 
)
noexcept

◆ mlalap_flux_xface_m()

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_flux_xface_m ( Box const &  box,
Array4< RT > const &  fx,
Array4< RT const > const &  sol,
RT  fac,
int  xlen,
RT  dx,
RT  probxlo,
int  ncomp 
)
noexcept

◆ mlalap_flux_y()

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_flux_y ( Box const &  box,
Array4< RT > const &  fy,
Array4< RT const > const &  sol,
RT  fac,
int  ncomp 
)
noexcept

◆ mlalap_flux_yface()

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_flux_yface ( Box const &  box,
Array4< RT > const &  fy,
Array4< RT const > const &  sol,
RT  fac,
int  ylen,
int  ncomp 
)
noexcept

◆ mlalap_flux_z()

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_flux_z ( Box const &  box,
Array4< RT > const &  fz,
Array4< RT const > const &  sol,
RT  fac,
int  ncomp 
)
noexcept

◆ mlalap_flux_zface()

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_flux_zface ( Box const &  box,
Array4< RT > const &  fz,
Array4< RT const > const &  sol,
RT  fac,
int  zlen,
int  ncomp 
)
noexcept

◆ mlalap_gsrb() [1/2]

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_gsrb ( Box const &  box,
Array4< RT > const &  phi,
Array4< RT const > const &  rhs,
RT  alpha,
RT  dhx,
Array4< RT const > const &  a,
Array4< RT const > const &  f0,
Array4< int const > const &  m0,
Array4< RT const > const &  f1,
Array4< int const > const &  m1,
Box const &  vbox,
int  redblack,
int  ncomp 
)
noexcept

◆ mlalap_gsrb() [2/2]

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_gsrb ( Box const &  box,
Array4< RT > const &  phi,
Array4< RT const > const &  rhs,
RT  alpha,
RT  dhx,
RT  dhy,
RT  dhz,
Array4< RT const > const &  a,
Array4< RT const > const &  f0,
Array4< int const > const &  m0,
Array4< RT const > const &  f1,
Array4< int const > const &  m1,
Array4< RT const > const &  f2,
Array4< int const > const &  m2,
Array4< RT const > const &  f3,
Array4< int const > const &  m3,
Array4< RT const > const &  f4,
Array4< int const > const &  m4,
Array4< RT const > const &  f5,
Array4< int const > const &  m5,
Box const &  vbox,
int  redblack,
int  ncomp 
)
noexcept

◆ mlalap_gsrb_m()

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_gsrb_m ( Box const &  box,
Array4< RT > const &  phi,
Array4< RT const > const &  rhs,
RT  alpha,
RT  dhx,
Array4< RT const > const &  a,
Array4< RT const > const &  f0,
Array4< int const > const &  m0,
Array4< RT const > const &  f1,
Array4< int const > const &  m1,
Box const &  vbox,
int  redblack,
RT  dx,
RT  probxlo,
int  ncomp 
)

◆ mlalap_normalize()

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_normalize ( Box const &  box,
Array4< RT > const &  x,
Array4< RT const > const &  a,
GpuArray< RT, AMREX_SPACEDIM > const &  dxinv,
RT  alpha,
RT  beta,
int  ncomp 
)
noexcept

◆ mlalap_normalize_m()

template<typename RT >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlalap_normalize_m ( Box const &  box,
Array4< RT > const &  x,
Array4< RT const > const &  a,
GpuArray< RT, AMREX_SPACEDIM > const &  dxinv,
RT  alpha,
RT  beta,
RT  dx,
RT  probxlo,
int  ncomp 
)
noexcept

◆ mlcurlcurl_1D() [1/2]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlcurlcurl_1D ( int  i,
int  j,
int  k,
Array4< Real > const &  ex,
Array4< Real > const &  ey,
Array4< Real > const &  ez,
Array4< Real const > const &  rhsx,
Array4< Real const > const &  rhsy,
Array4< Real const > const &  rhsz,
Array4< Real const > const &  betax,
Array4< Real const > const &  betay,
Array4< Real const > const &  betaz,
GpuArray< Real, AMREX_SPACEDIM > const &  adxinv,
int  color,
CurlCurlDirichletInfo const &  dinfo 
)

◆ mlcurlcurl_1D() [2/2]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlcurlcurl_1D ( int  i,
int  j,
int  k,
Array4< Real > const &  ex,
Array4< Real > const &  ey,
Array4< Real > const &  ez,
Array4< Real const > const &  rhsx,
Array4< Real const > const &  rhsy,
Array4< Real const > const &  rhsz,
Real  beta,
GpuArray< Real, AMREX_SPACEDIM > const &  adxinv,
int  color,
CurlCurlDirichletInfo const &  dinfo 
)

◆ mlcurlcurl_adotx_x()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlcurlcurl_adotx_x ( int  i,
int  j,
int  k,
Array4< Real > const &  Ax,
Array4< Real const > const &  ex,
Array4< Real const > const &  ey,
Array4< Real const > const &  ez,
Real  beta,
GpuArray< Real, AMREX_SPACEDIM > const &  adxinv 
)

◆ mlcurlcurl_adotx_y()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlcurlcurl_adotx_y ( int  i,
int  j,
int  k,
Array4< Real > const &  Ay,
Array4< Real const > const &  ex,
Array4< Real const > const &  ey,
Array4< Real const > const &  ez,
Real  beta,
GpuArray< Real, AMREX_SPACEDIM > const &  adxinv 
)

◆ mlcurlcurl_adotx_z()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlcurlcurl_adotx_z ( int  i,
int  j,
int  k,
Array4< Real > const &  Az,
Array4< Real const > const &  ex,
Array4< Real const > const &  ey,
Array4< Real const > const &  ez,
Real  beta,
GpuArray< Real, AMREX_SPACEDIM > const &  adxinv 
)

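These per-point kernels evaluate one component of the curl-curl operator at a single edge. A minimal sketch of launching the x-component over a box of x-edges with ParallelFor follows; the wrapper name and the assumption that adxinv carries the alpha coefficient folded into the inverse cell sizes are illustrative.

#include <AMReX.H>
#include <AMReX_MultiFab.H>
#include <AMReX_MLCurlCurl_K.H> // assumed location of the mlcurlcurl_* kernels

using namespace amrex;

// Sketch: Ax = beta*Ex + (curl curl E)_x on every x-edge in xedge_bx.
void curlcurl_adotx_x (Box const& xedge_bx,
                       Array4<Real> const& Ax,
                       Array4<Real const> const& ex,
                       Array4<Real const> const& ey,
                       Array4<Real const> const& ez,
                       Real beta,
                       GpuArray<Real,AMREX_SPACEDIM> const& adxinv)
{
    ParallelFor(xedge_bx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
    {
        mlcurlcurl_adotx_x(i, j, k, Ax, ex, ey, ez, beta, adxinv);
    });
}
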
◆ mlcurlcurl_bc_symmetry()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlcurlcurl_bc_symmetry ( int  i,
int  j,
int  k,
Orientation  face,
IndexType  it,
Array4< Real > const &  a 
)

◆ mlcurlcurl_interpadd()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlcurlcurl_interpadd ( int  dir,
int  i,
int  j,
int  k,
Array4< Real > const &  fine,
Array4< Real const > const &  crse 
)

◆ mlcurlcurl_restriction()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlcurlcurl_restriction ( int  dir,
int  i,
int  j,
int  k,
Array4< Real > const &  crse,
Array4< Real const > const &  fine,
CurlCurlDirichletInfo const &  dinfo 
)

◆ mlebabeclap_adotx() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_adotx ( Box const &  box,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  a,
Array4< Real const > const &  bX,
Array4< Real const > const &  bY,
Array4< const int > const &  ccm,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vfrc,
Array4< Real const > const &  apx,
Array4< Real const > const &  apy,
Array4< Real const > const &  fcx,
Array4< Real const > const &  fcy,
Array4< Real const > const &  ba,
Array4< Real const > const &  bc,
Array4< Real const > const &  beb,
bool  is_dirichlet,
Array4< Real const > const &  phieb,
bool  is_inhomog,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Real  alpha,
Real  beta,
int  ncomp,
bool  beta_on_centroid,
bool  phi_on_centroid 
)
noexcept

◆ mlebabeclap_adotx() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_adotx ( Box const &  box,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  a,
Array4< Real const > const &  bX,
Array4< Real const > const &  bY,
Array4< Real const > const &  bZ,
Array4< const int > const &  ccm,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vfrc,
Array4< Real const > const &  apx,
Array4< Real const > const &  apy,
Array4< Real const > const &  apz,
Array4< Real const > const &  fcx,
Array4< Real const > const &  fcy,
Array4< Real const > const &  fcz,
Array4< Real const > const &  ba,
Array4< Real const > const &  bc,
Array4< Real const > const &  beb,
bool  is_dirichlet,
Array4< Real const > const &  phieb,
bool  is_inhomog,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Real  alpha,
Real  beta,
int  ncomp,
bool  beta_on_centroid,
bool  phi_on_centroid 
)
noexcept

◆ mlebabeclap_adotx_centroid() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_adotx_centroid ( Box const &  box,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  a,
Array4< Real const > const &  bX,
Array4< Real const > const &  bY,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vfrc,
Array4< Real const > const &  apx,
Array4< Real const > const &  apy,
Array4< Real const > const &  fcx,
Array4< Real const > const &  fcy,
Array4< Real const > const &  ccent,
Array4< Real const > const &  ba,
Array4< Real const > const &  bcent,
Array4< Real const > const &  beb,
Array4< Real const > const &  phieb,
const int domlo_x,
const int domlo_y,
const int domhi_x,
const int domhi_y,
const bool &  on_x_face,
const bool &  on_y_face,
bool  is_eb_dirichlet,
bool  is_eb_inhomog,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Real  alpha,
Real  beta,
int  ncomp 
)
noexcept

◆ mlebabeclap_adotx_centroid() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_adotx_centroid ( Box const &  box,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  a,
Array4< Real const > const &  bX,
Array4< Real const > const &  bY,
Array4< Real const > const &  bZ,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vfrc,
Array4< Real const > const &  apx,
Array4< Real const > const &  apy,
Array4< Real const > const &  apz,
Array4< Real const > const &  fcx,
Array4< Real const > const &  fcy,
Array4< Real const > const &  fcz,
Array4< Real const > const &  ccent,
Array4< Real const > const &  ba,
Array4< Real const > const &  bcent,
Array4< Real const > const &  beb,
Array4< Real const > const &  phieb,
const int domlo_x,
const int domlo_y,
const int domlo_z,
const int domhi_x,
const int domhi_y,
const int domhi_z,
const bool &  on_x_face,
const bool &  on_y_face,
const bool &  on_z_face,
bool  is_eb_dirichlet,
bool  is_eb_inhomog,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Real  alpha,
Real  beta,
int  ncomp 
)
noexcept

◆ mlebabeclap_apply_bc_x()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_apply_bc_x ( int  side,
Box const &  box,
int  blen,
Array4< Real > const &  phi,
Array4< int const > const &  mask,
Array4< Real const > const &  area,
BoundCond  bct,
Real  bcl,
Array4< Real const > const &  bcval,
int  maxorder,
Real  dxinv,
int  inhomog,
int  icomp 
)
noexcept

◆ mlebabeclap_apply_bc_y()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_apply_bc_y ( int  side,
Box const &  box,
int  blen,
Array4< Real > const &  phi,
Array4< int const > const &  mask,
Array4< Real const > const &  area,
BoundCond  bct,
Real  bcl,
Array4< Real const > const &  bcval,
int  maxorder,
Real  dyinv,
int  inhomog,
int  icomp 
)
noexcept

◆ mlebabeclap_apply_bc_z()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_apply_bc_z ( int  side,
Box const &  box,
int  blen,
Array4< Real > const &  phi,
Array4< int const > const &  mask,
Array4< Real const > const &  area,
BoundCond  bct,
Real  bcl,
Array4< Real const > const &  bcval,
int  maxorder,
Real  dzinv,
int  inhomog,
int  icomp 
)
noexcept

◆ mlebabeclap_ebflux() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_ebflux ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  feb,
Array4< Real const > const &  x,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vfrc,
Array4< Real const > const &  apx,
Array4< Real const > const &  apy,
Array4< Real const > const &  apz,
Array4< Real const > const &  bc,
Array4< Real const > const &  beb,
Array4< Real const > const &  phieb,
bool  is_inhomog,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlebabeclap_ebflux() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_ebflux ( int  i,
int  j,
int  k,
int  n,
Array4< Real > const &  feb,
Array4< Real const > const &  x,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vfrc,
Array4< Real const > const &  apx,
Array4< Real const > const &  apy,
Array4< Real const > const &  bc,
Array4< Real const > const &  beb,
Array4< Real const > const &  phieb,
bool  is_inhomog,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

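A sketch of accumulating the embedded-boundary flux with the 2D overload, launching the per-point kernel over all components of a box; the wrapper and its argument names mirror the signature above and are otherwise illustrative.

#include <AMReX.H>
#include <AMReX_EBCellFlag.H>
#include <AMReX_MultiFab.H>
#include <AMReX_MLEBABecLap_K.H> // assumed location of the mlebabeclap_* kernels

using namespace amrex;

// Sketch: EB flux feb for every cell and component of bx (2D overload).
void eb_flux (Box const& bx, int ncomp,
              Array4<Real> const& feb,
              Array4<Real const> const& x,
              Array4<EBCellFlag const> const& flag,
              Array4<Real const> const& vfrc,
              Array4<Real const> const& apx,
              Array4<Real const> const& apy,
              Array4<Real const> const& bc,
              Array4<Real const> const& beb,
              Array4<Real const> const& phieb,
              bool is_inhomog,
              GpuArray<Real,AMREX_SPACEDIM> const& dxinv)
{
    ParallelFor(bx, ncomp, [=] AMREX_GPU_DEVICE (int i, int j, int k, int n)
    {
        mlebabeclap_ebflux(i, j, k, n, feb, x, flag, vfrc, apx, apy,
                           bc, beb, phieb, is_inhomog, dxinv);
    });
}
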
◆ mlebabeclap_flux_x()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_flux_x ( Box const &  box,
Array4< Real > const &  fx,
Array4< Real const > const &  apx,
Array4< Real const > const &  fcx,
Array4< Real const > const &  sol,
Array4< Real const > const &  bX,
Array4< int const > const &  ccm,
Real  dhx,
int  face_only,
int  ncomp,
Box const &  xbox,
bool  beta_on_centroid,
bool  phi_on_centroid 
)
noexcept

◆ mlebabeclap_flux_x_0()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_flux_x_0 ( Box const &  box,
Array4< Real > const &  fx,
Array4< Real const > const &  apx,
Array4< Real const > const &  sol,
Array4< Real const > const &  bX,
Real  dhx,
int  face_only,
int  ncomp,
Box const &  xbox 
)
noexcept

◆ mlebabeclap_flux_y()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_flux_y ( Box const &  box,
Array4< Real > const &  fy,
Array4< Real const > const &  apy,
Array4< Real const > const &  fcy,
Array4< Real const > const &  sol,
Array4< Real const > const &  bY,
Array4< int const > const &  ccm,
Real  dhy,
int  face_only,
int  ncomp,
Box const &  ybox,
bool  beta_on_centroid,
bool  phi_on_centroid 
)
noexcept

◆ mlebabeclap_flux_y_0()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_flux_y_0 ( Box const &  box,
Array4< Real > const &  fy,
Array4< Real const > const &  apy,
Array4< Real const > const &  sol,
Array4< Real const > const &  bY,
Real  dhy,
int  face_only,
int  ncomp,
Box const &  ybox 
)
noexcept

◆ mlebabeclap_flux_z()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_flux_z ( Box const &  box,
Array4< Real > const &  fz,
Array4< Real const > const &  apz,
Array4< Real const > const &  fcz,
Array4< Real const > const &  sol,
Array4< Real const > const &  bZ,
Array4< int const > const &  ccm,
Real  dhz,
int  face_only,
int  ncomp,
Box const &  zbox,
bool  beta_on_centroid,
bool  phi_on_centroid 
)
noexcept

◆ mlebabeclap_flux_z_0()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_flux_z_0 ( Box const &  box,
Array4< Real > const &  fz,
Array4< Real const > const &  apz,
Array4< Real const > const &  sol,
Array4< Real const > const &  bZ,
Real  dhz,
int  face_only,
int  ncomp,
Box const &  zbox 
)
noexcept

◆ mlebabeclap_grad_x()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_grad_x ( Box const &  box,
Array4< Real > const &  gx,
Array4< Real const > const &  sol,
Array4< Real const > const &  apx,
Array4< Real const > const &  fcx,
Array4< int const > const &  ccm,
Real  dxi,
int  ncomp,
bool  phi_on_centroid 
)
noexcept

◆ mlebabeclap_grad_x_0()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_grad_x_0 ( Box const &  box,
Array4< Real > const &  gx,
Array4< Real const > const &  sol,
Array4< Real const > const &  apx,
Real  dxi,
int  ncomp 
)
noexcept

◆ mlebabeclap_grad_y()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_grad_y ( Box const &  box,
Array4< Real > const &  gy,
Array4< Real const > const &  sol,
Array4< Real const > const &  apy,
Array4< Real const > const &  fcy,
Array4< int const > const &  ccm,
Real  dyi,
int  ncomp,
bool  phi_on_centroid 
)
noexcept

◆ mlebabeclap_grad_y_0()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_grad_y_0 ( Box const &  box,
Array4< Real > const &  gy,
Array4< Real const > const &  sol,
Array4< Real const > const &  apy,
Real  dyi,
int  ncomp 
)
noexcept

◆ mlebabeclap_grad_z()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_grad_z ( Box const &  box,
Array4< Real > const &  gz,
Array4< Real const > const &  sol,
Array4< Real const > const &  apz,
Array4< Real const > const &  fcz,
Array4< int const > const &  ccm,
Real  dzi,
int  ncomp,
bool  phi_on_centroid 
)
noexcept

◆ mlebabeclap_grad_z_0()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_grad_z_0 ( Box const &  box,
Array4< Real > const &  gz,
Array4< Real const > const &  sol,
Array4< Real const > const &  apz,
Real  dzi,
int  ncomp 
)
noexcept

◆ mlebabeclap_gsrb() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_gsrb ( Box const &  box,
Array4< Real > const &  phi,
Array4< Real const > const &  rhs,
Real  alpha,
Array4< Real const > const &  a,
Real  dhx,
Real  dhy,
Real  dh,
GpuArray< Real, AMREX_SPACEDIM > const &  dx,
Array4< Real const > const &  bX,
Array4< Real const > const &  bY,
Array4< int const > const &  m0,
Array4< int const > const &  m2,
Array4< int const > const &  m1,
Array4< int const > const &  m3,
Array4< Real const > const &  f0,
Array4< Real const > const &  f2,
Array4< Real const > const &  f1,
Array4< Real const > const &  f3,
Array4< const int > const &  ccm,
Array4< Real const > const &  beb,
EBData const &  ebdata,
bool  is_dirichlet,
bool  beta_on_centroid,
bool  phi_on_centroid,
Box const &  vbox,
int  redblack,
int  ncomp 
)
noexcept

◆ mlebabeclap_gsrb() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_gsrb ( Box const &  box,
Array4< Real > const &  phi,
Array4< Real const > const &  rhs,
Real  alpha,
Array4< Real const > const &  a,
Real  dhx,
Real  dhy,
Real  dhz,
Array4< Real const > const &  bX,
Array4< Real const > const &  bY,
Array4< Real const > const &  bZ,
Array4< int const > const &  m0,
Array4< int const > const &  m2,
Array4< int const > const &  m4,
Array4< int const > const &  m1,
Array4< int const > const &  m3,
Array4< int const > const &  m5,
Array4< Real const > const &  f0,
Array4< Real const > const &  f2,
Array4< Real const > const &  f4,
Array4< Real const > const &  f1,
Array4< Real const > const &  f3,
Array4< Real const > const &  f5,
Array4< const int > const &  ccm,
Array4< Real const > const &  beb,
EBData const &  ebdata,
bool  is_dirichlet,
bool  beta_on_centroid,
bool  phi_on_centroid,
Box const &  vbox,
int  redblack,
int  ncomp 
)
noexcept

◆ mlebabeclap_normalize() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_normalize ( Box const &  box,
Array4< Real > const &  phi,
Real  alpha,
Array4< Real const > const &  a,
Real  dhx,
Real  dhy,
Real  dh,
const amrex::GpuArray< Real, AMREX_SPACEDIM > &  dx,
Array4< Real const > const &  bX,
Array4< Real const > const &  bY,
Array4< const int > const &  ccm,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vfrc,
Array4< Real const > const &  apx,
Array4< Real const > const &  apy,
Array4< Real const > const &  fcx,
Array4< Real const > const &  fcy,
Array4< Real const > const &  ba,
Array4< Real const > const &  bc,
Array4< Real const > const &  beb,
bool  is_dirichlet,
bool  beta_on_centroid,
int  ncomp 
)
noexcept

◆ mlebabeclap_normalize() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebabeclap_normalize ( Box const &  box,
Array4< Real > const &  phi,
Real  alpha,
Array4< Real const > const &  a,
Real  dhx,
Real  dhy,
Real  dhz,
Array4< Real const > const &  bX,
Array4< Real const > const &  bY,
Array4< Real const > const &  bZ,
Array4< const int > const &  ccm,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vfrc,
Array4< Real const > const &  apx,
Array4< Real const > const &  apy,
Array4< Real const > const &  apz,
Array4< Real const > const &  fcx,
Array4< Real const > const &  fcy,
Array4< Real const > const &  fcz,
Array4< Real const > const &  ba,
Array4< Real const > const &  bc,
Array4< Real const > const &  beb,
bool  is_dirichlet,
bool  beta_on_centroid,
int  ncomp 
)
noexcept

◆ mlebndfdlap_adotx() [1/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_adotx ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< int const > const &  dmsk,
Real  bx,
Real  by 
)
noexcept

◆ mlebndfdlap_adotx() [2/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_adotx ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< int const > const &  dmsk,
Real  bx,
Real  by,
Real  bz 
)
noexcept

◆ mlebndfdlap_adotx() [3/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_adotx ( int  ,
int  ,
int  ,
Array4< Real > const &  ,
Array4< Real const > const &  ,
Array4< int const > const &  ,
Real   
)
noexcept

◆ mlebndfdlap_adotx_eb() [1/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_adotx_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  ecz,
Array4< Real const > const &  xeb,
Real  bx,
Real  by,
Real  bz 
)
noexcept

◆ mlebndfdlap_adotx_eb() [2/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_adotx_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  ecz,
Real  xeb,
Real  bx,
Real  by,
Real  bz 
)
noexcept

◆ mlebndfdlap_adotx_eb() [3/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_adotx_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  xeb,
Real  bx,
Real  by 
)
noexcept

◆ mlebndfdlap_adotx_eb() [4/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_adotx_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Real  xeb,
Real  bx,
Real  by 
)
noexcept

◆ mlebndfdlap_adotx_eb_doit() [1/2]

template<typename F >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_adotx_eb_doit ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  ecz,
F const &  xeb,
Real  bx,
Real  by,
Real  bz 
)
noexcept

◆ mlebndfdlap_adotx_eb_doit() [2/2]

template<typename F >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_adotx_eb_doit ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
F const &  xeb,
Real  bx,
Real  by 
)
noexcept

◆ mlebndfdlap_adotx_rz()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_adotx_rz ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< int const > const &  dmsk,
Real  sigr,
Real  dr,
Real  dz,
Real  rlo,
Real  alpha 
)
noexcept

◆ mlebndfdlap_adotx_rz_eb() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_adotx_rz_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  xeb,
Real  sigr,
Real  dr,
Real  dz,
Real  rlo,
Real  alpha 
)
noexcept

◆ mlebndfdlap_adotx_rz_eb() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_adotx_rz_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Real  xeb,
Real  sigr,
Real  dr,
Real  dz,
Real  rlo,
Real  alpha 
)
noexcept

◆ mlebndfdlap_adotx_rz_eb_doit()

template<typename F >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_adotx_rz_eb_doit ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
F const &  xeb,
Real  sigr,
Real  dr,
Real  dz,
Real  rlo,
Real  alpha 
)
noexcept

◆ mlebndfdlap_grad_x() [1/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_grad_x ( Box const &  b,
Array4< Real > const &  px,
Array4< Real const > const &  p,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  phieb,
Real  dxi 
)

◆ mlebndfdlap_grad_x() [2/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_grad_x ( Box const &  b,
Array4< Real > const &  px,
Array4< Real const > const &  p,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Real  phieb,
Real  dxi 
)

◆ mlebndfdlap_grad_x() [3/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_grad_x ( Box const &  b,
Array4< Real > const &  px,
Array4< Real const > const &  p,
Real  dxi 
)

◆ mlebndfdlap_grad_x_doit()

template<typename F >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_grad_x_doit ( int  i,
int  j,
int  k,
Array4< Real > const &  px,
Array4< Real const > const &  p,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
F const &  phieb,
Real  dxi 
)

◆ mlebndfdlap_grad_y() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_grad_y ( Box const &  b,
Array4< Real > const &  py,
Array4< Real const > const &  p,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecy,
Array4< Real const > const &  phieb,
Real  dyi 
)

◆ mlebndfdlap_grad_y() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_grad_y ( Box const &  b,
Array4< Real > const &  py,
Array4< Real const > const &  p,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecy,
Real  phieb,
Real  dyi 
)

◆ mlebndfdlap_grad_y_doit()

template<typename F >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_grad_y_doit ( int  i,
int  j,
int  k,
Array4< Real > const &  py,
Array4< Real const > const &  p,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecy,
F const &  phieb,
Real  dyi 
)

◆ mlebndfdlap_gsrb() [1/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_gsrb ( int  i,
int  j,
int  k,
Array4< Real > const &  x,
Array4< Real const > const &  rhs,
Array4< int const > const &  dmsk,
Real  bx,
Real  by,
int  redblack 
)
noexcept

◆ mlebndfdlap_gsrb() [2/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_gsrb ( int  i,
int  j,
int  k,
Array4< Real > const &  x,
Array4< Real const > const &  rhs,
Array4< int const > const &  dmsk,
Real  bx,
Real  by,
Real  bz,
int  redblack 
)
noexcept

◆ mlebndfdlap_gsrb() [3/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_gsrb ( int  ,
int  ,
int  ,
Array4< Real > const &  ,
Array4< Real const > const &  ,
Array4< int const > const &  ,
Real  ,
int   
)
noexcept

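The gsrb kernels update a single node, with redblack selecting which color of the checkerboard is touched in a given pass. A minimal two-color sweep with the 2D overload might look like the following sketch; the wrapper, the 0/1 color loop, and the explicit stream synchronization between colors are illustrative assumptions.

#include <AMReX.H>
#include <AMReX_Gpu.H>
#include <AMReX_MultiFab.H>
#include <AMReX_MLEBNodeFDLap_K.H> // assumed location of the mlebndfdlap_* kernels

using namespace amrex;

// Sketch: one full red-black Gauss-Seidel relaxation over ndbx (2D overload).
void gsrb_sweep (Box const& ndbx,
                 Array4<Real> const& x,
                 Array4<Real const> const& rhs,
                 Array4<int const> const& dmsk,
                 Real bx_coef, Real by_coef)
{
    for (int redblack = 0; redblack < 2; ++redblack) {
        ParallelFor(ndbx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
        {
            mlebndfdlap_gsrb(i, j, k, x, rhs, dmsk, bx_coef, by_coef, redblack);
        });
        Gpu::streamSynchronize(); // finish one color before starting the other
    }
}
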
◆ mlebndfdlap_gsrb_eb() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_gsrb_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  x,
Array4< Real const > const &  rhs,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  ecz,
Real  bx,
Real  by,
Real  bz,
int  redblack 
)
noexcept

◆ mlebndfdlap_gsrb_eb() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_gsrb_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  x,
Array4< Real const > const &  rhs,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Real  bx,
Real  by,
int  redblack 
)
noexcept

◆ mlebndfdlap_gsrb_rz()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_gsrb_rz ( int  i,
int  j,
int  k,
Array4< Real > const &  x,
Array4< Real const > const &  rhs,
Array4< int const > const &  dmsk,
Real  sigr,
Real  dr,
Real  dz,
Real  rlo,
int  redblack,
Real  alpha 
)
noexcept

◆ mlebndfdlap_gsrb_rz_eb()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_gsrb_rz_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  x,
Array4< Real const > const &  rhs,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Real  sigr,
Real  dr,
Real  dz,
Real  rlo,
int  redblack,
Real  alpha 
)
noexcept

◆ mlebndfdlap_scale_rhs() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_scale_rhs ( int  i,
int  j,
int  k,
Array4< Real > const &  rhs,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  ecz 
)
noexcept

◆ mlebndfdlap_scale_rhs() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_scale_rhs ( int  i,
int  j,
int  ,
Array4< Real > const &  rhs,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy 
)
noexcept

◆ mlebndfdlap_sig_adotx() [1/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_sig_adotx ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< int const > const &  dmsk,
Array4< Real const > const &  sig,
Real  bx,
Real  by 
)
noexcept

◆ mlebndfdlap_sig_adotx() [2/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_sig_adotx ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< int const > const &  dmsk,
Array4< Real const > const &  sig,
Real  bx,
Real  by,
Real  bz 
)
noexcept

◆ mlebndfdlap_sig_adotx() [3/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_sig_adotx ( int  ,
int  ,
int  ,
Array4< Real > const &  ,
Array4< Real const > const &  ,
Array4< int const > const &  ,
Array4< Real const > const &  ,
Real   
)
noexcept

◆ mlebndfdlap_sig_adotx_eb() [1/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_sig_adotx_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  ecz,
Array4< Real const > const &  sig,
Array4< Real const > const &  vfrc,
Array4< Real const > const &  xeb,
Real  bx,
Real  by,
Real  bz 
)
noexcept

◆ mlebndfdlap_sig_adotx_eb() [2/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_sig_adotx_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  ecz,
Array4< Real const > const &  sig,
Array4< Real const > const &  vfrc,
Real  xeb,
Real  bx,
Real  by,
Real  bz 
)
noexcept

◆ mlebndfdlap_sig_adotx_eb() [3/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_sig_adotx_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  sig,
Array4< Real const > const &  vfrc,
Array4< Real const > const &  xeb,
Real  bx,
Real  by 
)
noexcept

◆ mlebndfdlap_sig_adotx_eb() [4/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_sig_adotx_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  sig,
Array4< Real const > const &  vfrc,
Real  xeb,
Real  bx,
Real  by 
)
noexcept

◆ mlebndfdlap_sig_adotx_eb_doit() [1/2]

template<typename F >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_sig_adotx_eb_doit ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  ecz,
Array4< Real const > const &  sig,
Array4< Real const > const &  vfrc,
F const &  xeb,
Real  bx,
Real  by,
Real  bz 
)
noexcept

◆ mlebndfdlap_sig_adotx_eb_doit() [2/2]

template<typename F >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_sig_adotx_eb_doit ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  sig,
Array4< Real const > const &  vfrc,
F const &  xeb,
Real  bx,
Real  by 
)
noexcept

◆ mlebndfdlap_sig_gsrb() [1/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_sig_gsrb ( int  i,
int  j,
int  k,
Array4< Real > const &  x,
Array4< Real const > const &  rhs,
Array4< int const > const &  dmsk,
Array4< Real const > const &  sig,
Real  bx,
Real  by,
int  redblack 
)
noexcept

◆ mlebndfdlap_sig_gsrb() [2/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_sig_gsrb ( int  i,
int  j,
int  k,
Array4< Real > const &  x,
Array4< Real const > const &  rhs,
Array4< int const > const &  dmsk,
Array4< Real const > const &  sig,
Real  bx,
Real  by,
Real  bz,
int  redblack 
)
noexcept

◆ mlebndfdlap_sig_gsrb() [3/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_sig_gsrb ( int  ,
int  ,
int  ,
Array4< Real > const &  ,
Array4< Real const > const &  ,
Array4< int const > const &  ,
Array4< Real const > const &  ,
Real  ,
int   
)
noexcept

◆ mlebndfdlap_sig_gsrb_eb() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_sig_gsrb_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  x,
Array4< Real const > const &  rhs,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  ecz,
Array4< Real const > const &  sig,
Array4< Real const > const &  vfrc,
Real  bx,
Real  by,
Real  bz,
int  redblack 
)
noexcept

◆ mlebndfdlap_sig_gsrb_eb() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebndfdlap_sig_gsrb_eb ( int  i,
int  j,
int  k,
Array4< Real > const &  x,
Array4< Real const > const &  rhs,
Array4< Real const > const &  levset,
Array4< int const > const &  dmsk,
Array4< Real const > const &  ecx,
Array4< Real const > const &  ecy,
Array4< Real const > const &  sig,
Array4< Real const > const &  vfrc,
Real  bx,
Real  by,
int  redblack 
)
noexcept

◆ mlebtensor_cross_terms() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_cross_terms ( Box const &  box,
Array4< Real > const &  Ax,
Array4< Real const > const &  fx,
Array4< Real const > const &  fy,
Array4< Real const > const &  fz,
Array4< Real const > const &  vel,
Array4< Real const > const &  velb,
Array4< Real const > const &  etab,
Array4< Real const > const &  kapb,
Array4< int const > const &  ccm,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vol,
Array4< Real const > const &  apx,
Array4< Real const > const &  apy,
Array4< Real const > const &  apz,
Array4< Real const > const &  fcx,
Array4< Real const > const &  fcy,
Array4< Real const > const &  fcz,
Array4< Real const > const &  bc,
bool  is_dirichlet,
bool  is_inhomog,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Real  bscalar 
)
noexcept

◆ mlebtensor_cross_terms() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_cross_terms ( Box const &  box,
Array4< Real > const &  Ax,
Array4< Real const > const &  fx,
Array4< Real const > const &  fy,
Array4< Real const > const &  vel,
Array4< Real const > const &  velb,
Array4< Real const > const &  etab,
Array4< Real const > const &  kapb,
Array4< int const > const &  ccm,
Array4< EBCellFlag const > const &  flag,
Array4< Real const > const &  vol,
Array4< Real const > const &  apx,
Array4< Real const > const &  apy,
Array4< Real const > const &  fcx,
Array4< Real const > const &  fcy,
Array4< Real const > const &  bc,
bool  is_dirichlet,
bool  is_inhomog,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Real  bscalar 
)
noexcept

◆ mlebtensor_cross_terms_fx() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_cross_terms_fx ( Box const &  box,
Array4< Real > const &  fx,
Array4< Real const > const &  vel,
Array4< Real const > const &  etax,
Array4< Real const > const &  kapx,
Array4< Real const > const &  apx,
Array4< EBCellFlag const > const &  flag,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlebtensor_cross_terms_fx() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_cross_terms_fx ( Box const &  box,
Array4< Real > const &  fx,
Array4< Real const > const &  vel,
Array4< Real const > const &  etax,
Array4< Real const > const &  kapx,
Array4< Real const > const &  apx,
Array4< EBCellFlag const > const &  flag,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvxlo,
Array4< Real const > const &  bvxhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mlebtensor_cross_terms_fy() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_cross_terms_fy ( Box const &  box,
Array4< Real > const &  fy,
Array4< Real const > const &  vel,
Array4< Real const > const &  etay,
Array4< Real const > const &  kapy,
Array4< Real const > const &  apy,
Array4< EBCellFlag const > const &  flag,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlebtensor_cross_terms_fy() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_cross_terms_fy ( Box const &  box,
Array4< Real > const &  fy,
Array4< Real const > const &  vel,
Array4< Real const > const &  etay,
Array4< Real const > const &  kapy,
Array4< Real const > const &  apy,
Array4< EBCellFlag const > const &  flag,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvylo,
Array4< Real const > const &  bvyhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mlebtensor_cross_terms_fz() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_cross_terms_fz ( Box const &  box,
Array4< Real > const &  fz,
Array4< Real const > const &  vel,
Array4< Real const > const &  etaz,
Array4< Real const > const &  kapz,
Array4< Real const > const &  apz,
Array4< EBCellFlag const > const &  flag,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlebtensor_cross_terms_fz() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_cross_terms_fz ( Box const &  box,
Array4< Real > const &  fz,
Array4< Real const > const &  vel,
Array4< Real const > const &  etaz,
Array4< Real const > const &  kapz,
Array4< Real const > const &  apz,
Array4< EBCellFlag const > const &  flag,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvzlo,
Array4< Real const > const &  bvzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mlebtensor_dx_on_yface() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlebtensor_dx_on_yface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dxi,
Array4< Real const > const &  bvylo,
Array4< Real const > const &  bvyhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi,
Real  whi,
Real  wlo,
int  ihip,
int  ihim,
int  ilop,
int  ilom 
)
noexcept

◆ mlebtensor_dx_on_yface() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlebtensor_dx_on_yface ( int  ,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dxi,
Real  whi,
Real  wlo,
int  ihip,
int  ihim,
int  ilop,
int  ilom 
)
noexcept

◆ mlebtensor_dx_on_zface() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlebtensor_dx_on_zface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dxi,
Array4< Real const > const &  bvzlo,
Array4< Real const > const &  bvzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi,
Real  whi,
Real  wlo,
int  ihip,
int  ihim,
int  ilop,
int  ilom 
)
noexcept

◆ mlebtensor_dx_on_zface() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlebtensor_dx_on_zface ( int  ,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dxi,
Real  whi,
Real  wlo,
int  ihip,
int  ihim,
int  ilop,
int  ilom 
)
noexcept

◆ mlebtensor_dy_on_xface() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlebtensor_dy_on_xface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dyi,
Array4< Real const > const &  bvxlo,
Array4< Real const > const &  bvxhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi,
Real  whi,
Real  wlo,
int  jhip,
int  jhim,
int  jlop,
int  jlom 
)
noexcept

◆ mlebtensor_dy_on_xface() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlebtensor_dy_on_xface ( int  i,
int  ,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dyi,
Real  whi,
Real  wlo,
int  jhip,
int  jhim,
int  jlop,
int  jlom 
)
noexcept

◆ mlebtensor_dy_on_zface() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlebtensor_dy_on_zface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dyi,
Array4< Real const > const &  bvzlo,
Array4< Real const > const &  bvzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi,
Real  whi,
Real  wlo,
int  jhip,
int  jhim,
int  jlop,
int  jlom 
)
noexcept

◆ mlebtensor_dy_on_zface() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlebtensor_dy_on_zface ( int  i,
int  ,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dyi,
Real  whi,
Real  wlo,
int  jhip,
int  jhim,
int  jlop,
int  jlom 
)
noexcept

◆ mlebtensor_dz_on_xface() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlebtensor_dz_on_xface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dzi,
Array4< Real const > const &  bvxlo,
Array4< Real const > const &  bvxhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi,
Real  whi,
Real  wlo,
int  khip,
int  khim,
int  klop,
int  klom 
)
noexcept

◆ mlebtensor_dz_on_xface() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlebtensor_dz_on_xface ( int  i,
int  j,
int  ,
int  n,
Array4< Real const > const &  vel,
Real  dzi,
Real  whi,
Real  wlo,
int  khip,
int  khim,
int  klop,
int  klom 
)
noexcept

◆ mlebtensor_dz_on_yface() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlebtensor_dz_on_yface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dzi,
Array4< Real const > const &  bvylo,
Array4< Real const > const &  bvyhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi,
Real  whi,
Real  wlo,
int  khip,
int  khim,
int  klop,
int  klom 
)
noexcept

◆ mlebtensor_dz_on_yface() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlebtensor_dz_on_yface ( int  i,
int  j,
int  ,
int  n,
Array4< Real const > const &  vel,
Real  dzi,
Real  whi,
Real  wlo,
int  khip,
int  khim,
int  klop,
int  klom 
)
noexcept

◆ mlebtensor_flux_0()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_flux_0 ( Box const &  box,
Array4< Real > const &  Ax,
Array4< Real const > const &  fx,
Array4< Real const > const &  ap,
Real  bscalar 
)
noexcept

◆ mlebtensor_flux_x()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_flux_x ( Box const &  box,
Array4< Real > const &  Ax,
Array4< Real const > const &  fx,
Array4< Real const > const &  apx,
Array4< Real const > const &  fcx,
Real const  bscalar,
Array4< int const > const &  ccm,
int  face_only,
Box const &  xbox 
)
noexcept

◆ mlebtensor_flux_y()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_flux_y ( Box const &  box,
Array4< Real > const &  Ay,
Array4< Real const > const &  fy,
Array4< Real const > const &  apy,
Array4< Real const > const &  fcy,
Real const  bscalar,
Array4< int const > const &  ccm,
int  face_only,
Box const &  ybox 
)
noexcept

◆ mlebtensor_flux_z()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_flux_z ( Box const &  box,
Array4< Real > const &  Az,
Array4< Real const > const &  fz,
Array4< Real const > const &  apz,
Array4< Real const > const &  fcz,
Real const  bscalar,
Array4< int const > const &  ccm,
int  face_only,
Box const &  zbox 
)
noexcept

◆ mlebtensor_vel_grads_fx() [1/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_vel_grads_fx ( Box const &  box,
Array4< Real > const &  fx,
Array4< Real const > const &  vel,
Array4< Real const > const &  apx,
Array4< EBCellFlag const > const &  flag,
Array4< int const > const &  ccm,
Array4< Real const > const &  fcx,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlebtensor_vel_grads_fx() [2/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_vel_grads_fx ( Box const &  box,
Array4< Real > const &  fx,
Array4< Real const > const &  vel,
Array4< Real const > const &  apx,
Array4< EBCellFlag const > const &  flag,
Array4< int const > const &  ccm,
Array4< Real const > const &  fcx,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvxlo,
Array4< Real const > const &  bvxhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mlebtensor_vel_grads_fx() [3/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_vel_grads_fx ( Box const &  box,
Array4< Real > const &  fx,
Array4< Real const > const &  vel,
Array4< Real const > const &  apx,
Array4< EBCellFlag const > const &  flag,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlebtensor_vel_grads_fx() [4/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_vel_grads_fx ( Box const &  box,
Array4< Real > const &  fx,
Array4< Real const > const &  vel,
Array4< Real const > const &  apx,
Array4< EBCellFlag const > const &  flag,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvxlo,
Array4< Real const > const &  bvxhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mlebtensor_vel_grads_fy() [1/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_vel_grads_fy ( Box const &  box,
Array4< Real > const &  fy,
Array4< Real const > const &  vel,
Array4< Real const > const &  apy,
Array4< EBCellFlag const > const &  flag,
Array4< int const > const &  ccm,
Array4< Real const > const &  fcy,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlebtensor_vel_grads_fy() [2/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_vel_grads_fy ( Box const &  box,
Array4< Real > const &  fy,
Array4< Real const > const &  vel,
Array4< Real const > const &  apy,
Array4< EBCellFlag const > const &  flag,
Array4< int const > const &  ccm,
Array4< Real const > const &  fcy,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvylo,
Array4< Real const > const &  bvyhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mlebtensor_vel_grads_fy() [3/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_vel_grads_fy ( Box const &  box,
Array4< Real > const &  fy,
Array4< Real const > const &  vel,
Array4< Real const > const &  apy,
Array4< EBCellFlag const > const &  flag,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlebtensor_vel_grads_fy() [4/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_vel_grads_fy ( Box const &  box,
Array4< Real > const &  fy,
Array4< Real const > const &  vel,
Array4< Real const > const &  apy,
Array4< EBCellFlag const > const &  flag,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvylo,
Array4< Real const > const &  bvyhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mlebtensor_vel_grads_fz() [1/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_vel_grads_fz ( Box const &  box,
Array4< Real > const &  fz,
Array4< Real const > const &  vel,
Array4< Real const > const &  apz,
Array4< EBCellFlag const > const &  flag,
Array4< int const > const &  ccm,
Array4< Real const > const &  fcz,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlebtensor_vel_grads_fz() [2/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_vel_grads_fz ( Box const &  box,
Array4< Real > const &  fz,
Array4< Real const > const &  vel,
Array4< Real const > const &  apz,
Array4< EBCellFlag const > const &  flag,
Array4< int const > const &  ccm,
Array4< Real const > const &  fcz,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvzlo,
Array4< Real const > const &  bvzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mlebtensor_vel_grads_fz() [3/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_vel_grads_fz ( Box const &  box,
Array4< Real > const &  fz,
Array4< Real const > const &  vel,
Array4< Real const > const &  apz,
Array4< EBCellFlag const > const &  flag,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlebtensor_vel_grads_fz() [4/4]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlebtensor_vel_grads_fz ( Box const &  box,
Array4< Real > const &  fz,
Array4< Real const > const &  vel,
Array4< Real const > const &  apz,
Array4< EBCellFlag const > const &  flag,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvzlo,
Array4< Real const > const &  bvzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mlebtensor_weight()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlebtensor_weight ( int  d)

◆ mllinop_apply_bc_x() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_apply_bc_x ( int  side,
Box const &  box,
int  blen,
Array4< T > const &  phi,
Array4< int const > const &  mask,
BoundCond  bct,
T  bcl,
Array4< T const > const &  bcval,
int  maxorder,
T  dxinv,
int  inhomog,
int  icomp 
)
noexcept

◆ mllinop_apply_bc_x() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_apply_bc_x ( int  side,
int  i,
int  j,
int  k,
int  blen,
Array4< T > const &  phi,
Array4< int const > const &  mask,
BoundCond  bct,
T  bcl,
Array4< T const > const &  bcval,
int  maxorder,
T  dxinv,
int  inhomog,
int  icomp 
)
noexcept

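A sketch of filling the ghost cells on one x-face of a box so that later stencils see boundary-consistent values; instantiating the template with Real and the convention that side 0 means the low face are assumptions made for illustration.

#include <AMReX.H>
#include <AMReX_BoundCond.H>
#include <AMReX_MultiFab.H>
#include <AMReX_MLLinOp_K.H> // assumed location of the mllinop_* kernels

using namespace amrex;

// Sketch: impose the x-boundary condition on the low side of bndry_bx.
void apply_bc_xlo (Box const& bndry_bx, int blen,
                   Array4<Real> const& phi,
                   Array4<int const> const& mask,
                   BoundCond bct, Real bcl,
                   Array4<Real const> const& bcval,
                   int maxorder, Real dxinv, int icomp)
{
    int const side = 0; // assumed convention: 0 = low face
    mllinop_apply_bc_x(side, bndry_bx, blen, phi, mask, bct, bcl,
                       bcval, maxorder, dxinv, /*inhomog=*/1, icomp);
}
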
◆ mllinop_apply_bc_y() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_apply_bc_y ( int  side,
Box const &  box,
int  blen,
Array4< T > const &  phi,
Array4< int const > const &  mask,
BoundCond  bct,
T  bcl,
Array4< T const > const &  bcval,
int  maxorder,
T  dyinv,
int  inhomog,
int  icomp 
)
noexcept

◆ mllinop_apply_bc_y() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_apply_bc_y ( int  side,
int  i,
int  j,
int  k,
int  blen,
Array4< T > const &  phi,
Array4< int const > const &  mask,
BoundCond  bct,
T  bcl,
Array4< T const > const &  bcval,
int  maxorder,
T  dyinv,
int  inhomog,
int  icomp 
)
noexcept

◆ mllinop_apply_bc_z() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_apply_bc_z ( int  side,
Box const &  box,
int  blen,
Array4< T > const &  phi,
Array4< int const > const &  mask,
BoundCond  bct,
T  bcl,
Array4< T const > const &  bcval,
int  maxorder,
T  dzinv,
int  inhomog,
int  icomp 
)
noexcept

◆ mllinop_apply_bc_z() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_apply_bc_z ( int  side,
int  i,
int  j,
int  k,
int  blen,
Array4< T > const &  phi,
Array4< int const > const &  mask,
BoundCond  bct,
T  bcl,
Array4< T const > const &  bcval,
int  maxorder,
T  dzinv,
int  inhomog,
int  icomp 
)
noexcept

◆ mllinop_apply_innu_xhi()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_apply_innu_xhi ( int  i,
int  j,
int  k,
Array4< T > const &  rhs,
Array4< int const > const &  mask,
Array4< T const > const &  bcoef,
BoundCond  bct,
T  ,
Array4< T const > const &  bcval,
T  fac,
bool  has_bcoef,
int  icomp 
)
noexcept

◆ mllinop_apply_innu_xlo()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_apply_innu_xlo ( int  i,
int  j,
int  k,
Array4< T > const &  rhs,
Array4< int const > const &  mask,
Array4< T const > const &  bcoef,
BoundCond  bct,
T  ,
Array4< T const > const &  bcval,
T  fac,
bool  has_bcoef,
int  icomp 
)
noexcept

◆ mllinop_apply_innu_yhi()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_apply_innu_yhi ( int  i,
int  j,
int  k,
Array4< T > const &  rhs,
Array4< int const > const &  mask,
Array4< T const > const &  bcoef,
BoundCond  bct,
T  ,
Array4< T const > const &  bcval,
T  fac,
bool  has_bcoef,
int  icomp 
)
noexcept

◆ mllinop_apply_innu_yhi_m()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_apply_innu_yhi_m ( int  i,
int  j,
int  k,
Array4< T > const &  rhs,
Array4< int const > const &  mask,
BoundCond  bct,
T  ,
Array4< T const > const &  bcval,
T  fac,
T  xlo,
T  dx,
int  icomp 
)
noexcept

◆ mllinop_apply_innu_ylo()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_apply_innu_ylo ( int  i,
int  j,
int  k,
Array4< T > const &  rhs,
Array4< int const > const &  mask,
Array4< T const > const &  bcoef,
BoundCond  bct,
T  ,
Array4< T const > const &  bcval,
T  fac,
bool  has_bcoef,
int  icomp 
)
noexcept

◆ mllinop_apply_innu_ylo_m()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_apply_innu_ylo_m ( int  i,
int  j,
int  k,
Array4< T > const &  rhs,
Array4< int const > const &  mask,
BoundCond  bct,
T  ,
Array4< T const > const &  bcval,
T  fac,
T  xlo,
T  dx,
int  icomp 
)
noexcept

◆ mllinop_apply_innu_zhi()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_apply_innu_zhi ( int  i,
int  j,
int  k,
Array4< T > const &  rhs,
Array4< int const > const &  mask,
Array4< T const > const &  bcoef,
BoundCond  bct,
T  ,
Array4< T const > const &  bcval,
T  fac,
bool  has_bcoef,
int  icomp 
)
noexcept

◆ mllinop_apply_innu_zlo()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_apply_innu_zlo ( int  i,
int  j,
int  k,
Array4< T > const &  rhs,
Array4< int const > const &  mask,
Array4< T const > const &  bcoef,
BoundCond  bct,
T  ,
Array4< T const > const &  bcval,
T  fac,
bool  has_bcoef,
int  icomp 
)
noexcept

◆ mllinop_comp_interp_coef0_x() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_comp_interp_coef0_x ( int  side,
Box const &  box,
int  blen,
Array4< T > const &  f,
Array4< int const > const &  mask,
BoundCond  bct,
T  bcl,
int  maxorder,
T  dxinv,
int  icomp 
)
noexcept

◆ mllinop_comp_interp_coef0_x() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_comp_interp_coef0_x ( int  side,
int  i,
int  j,
int  k,
int  blen,
Array4< T > const &  f,
Array4< int const > const &  mask,
BoundCond  bct,
T  bcl,
int  maxorder,
T  dxinv,
int  icomp 
)
noexcept

◆ mllinop_comp_interp_coef0_y() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_comp_interp_coef0_y ( int  side,
Box const &  box,
int  blen,
Array4< T > const &  f,
Array4< int const > const &  mask,
BoundCond  bct,
T  bcl,
int  maxorder,
T  dyinv,
int  icomp 
)
noexcept

◆ mllinop_comp_interp_coef0_y() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_comp_interp_coef0_y ( int  side,
int  i,
int  j,
int  k,
int  blen,
Array4< T > const &  f,
Array4< int const > const &  mask,
BoundCond  bct,
T  bcl,
int  maxorder,
T  dyinv,
int  icomp 
)
noexcept

◆ mllinop_comp_interp_coef0_z() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_comp_interp_coef0_z ( int  side,
Box const &  box,
int  blen,
Array4< T > const &  f,
Array4< int const > const &  mask,
BoundCond  bct,
T  bcl,
int  maxorder,
T  dzinv,
int  icomp 
)
noexcept

◆ mllinop_comp_interp_coef0_z() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mllinop_comp_interp_coef0_z ( int  side,
int  i,
int  j,
int  k,
int  blen,
Array4< T > const &  f,
Array4< int const > const &  mask,
BoundCond  bct,
T  bcl,
int  maxorder,
T  dzinv,
int  icomp 
)
noexcept

◆ mlmg_lin_cc_interp_r2()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlmg_lin_cc_interp_r2 ( Box const &  bx,
Array4< T > const &  ff,
Array4< T const > const &  cc,
int  nc 
)
noexcept

◆ mlmg_lin_cc_interp_r4()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlmg_lin_cc_interp_r4 ( Box const &  bx,
Array4< T > const &  ff,
Array4< T const > const &  cc,
int  nc 
)
noexcept

◆ mlmg_lin_nd_interp_r2()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlmg_lin_nd_interp_r2 ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  fine,
Array4< T const > const &  crse 
)
noexcept

◆ mlmg_lin_nd_interp_r4()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlmg_lin_nd_interp_r4 ( int  i,
int  j,
int  k,
int  n,
Array4< T > const &  fine,
Array4< T const > const &  crse 
)
noexcept

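Both flavors of ratio-2 prolongation can be sketched side by side: the cell-centered kernel is box-based, while the nodal one is per-point. The wrapper names and the assumption that fbx lives in the fine index space are illustrative.

#include <AMReX.H>
#include <AMReX_MultiFab.H>
#include <AMReX_MLMG_K.H> // assumed location of the mlmg_* interpolation kernels

using namespace amrex;

// Sketch: interpolate a coarse correction onto a fine box (ratio 2).
void prolong_r2_cc (Box const& fbx, Array4<Real> const& fine,
                    Array4<Real const> const& crse, int ncomp)
{
    mlmg_lin_cc_interp_r2(fbx, fine, crse, ncomp); // box-based, host call
}

void prolong_r2_nd (Box const& fbx, Array4<Real> const& fine,
                    Array4<Real const> const& crse, int ncomp)
{
    ParallelFor(fbx, ncomp, [=] AMREX_GPU_DEVICE (int i, int j, int k, int n)
    {
        mlmg_lin_nd_interp_r2(i, j, k, n, fine, crse); // per-point, nodal
    });
}
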
◆ mlndabeclap_gauss_seidel_aa()

void amrex::mlndabeclap_gauss_seidel_aa ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Real  alpha,
Real  beta,
Array4< Real const > const &  acf,
Array4< Real const > const &  bcf,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ mlndabeclap_jacobi_aa()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlndabeclap_jacobi_aa ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Real  lap,
Array4< Real const > const &  rhs,
Real  alpha,
Real  beta,
Array4< Real const > const &  acf,
Array4< Real const > const &  bcf,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_adotx_aa() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlndlap_adotx_aa ( int  i,
int  j,
int  k,
Array4< Real const > const &  x,
Array4< Real const > const &  sig,
Array4< int const > const &  msk,
bool  is_rz,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_adotx_aa() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlndlap_adotx_aa ( int  i,
int  j,
int  k,
Array4< Real const > const &  x,
Array4< Real const > const &  sx,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_adotx_c() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlndlap_adotx_c ( int  i,
int  j,
int  k,
Array4< Real const > const &  x,
Real  sigma,
Array4< int const > const &  msk,
bool  is_rz,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_adotx_c() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlndlap_adotx_c ( int  i,
int  j,
int  k,
Array4< Real const > const &  x,
Real  sigma,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_adotx_ha() [1/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlndlap_adotx_ha ( int  i,
int  j,
int  k,
Array4< Real const > const &  x,
Array4< Real const > const &  sx,
Array4< Real const > const &  sy,
Array4< int const > const &  msk,
bool  is_rz,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_adotx_ha() [2/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlndlap_adotx_ha ( int  i,
int  j,
int  k,
Array4< Real const > const &  x,
Array4< Real const > const &  sx,
Array4< Real const > const &  sy,
Array4< Real const > const &  sz,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_adotx_ha() [3/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlndlap_adotx_ha ( int  i,
int  ,
int  ,
Array4< Real const > const &  x,
Array4< Real const > const &  sx,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_adotx_sten()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlndlap_adotx_sten ( int  i,
int  j,
int  k,
Array4< Real const > const &  x,
Array4< Real const > const &  sten,
Array4< int const > const &  msk 
)
noexcept
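
Because mlndlap_adotx_sten returns the stencil-form operator applied at a single node, residual assembly is a one-line kernel. A minimal sketch (res, rhs, x, sten, and msk are illustrative Array4 views):

    // Sketch: nodal residual r = b - A x using the stencil form.
    amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
    {
        res(i,j,k) = rhs(i,j,k) - mlndlap_adotx_sten(i, j, k, x, sten, msk);
    });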

◆ mlndlap_adotx_sten_doit()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlndlap_adotx_sten_doit ( int  i,
int  j,
int  k,
Array4< Real const > const &  x,
Array4< Real const > const &  sten 
)
noexcept

◆ mlndlap_any_fine_sync_cells()

AMREX_FORCE_INLINE bool amrex::mlndlap_any_fine_sync_cells ( Box const &  bx,
Array4< int const > const &  msk,
int  fine_flag 
)
noexcept

◆ mlndlap_applybc()

template<typename T >
void amrex::mlndlap_applybc ( Box const &  vbx,
Array4< T > const &  phi,
Box const &  domain,
GpuArray< LinOpBCType, AMREX_SPACEDIM >  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM >  bchi 
)
noexcept

◆ mlndlap_avgdown_coeff_x()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_avgdown_coeff_x ( int  i,
int  j,
int  k,
Array4< Real > const &  crse,
Array4< Real const > const &  fine 
)
noexcept

◆ mlndlap_avgdown_coeff_y()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_avgdown_coeff_y ( int  i,
int  j,
int  k,
Array4< Real > const &  crse,
Array4< Real const > const &  fine 
)
noexcept

◆ mlndlap_avgdown_coeff_z()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_avgdown_coeff_z ( int  i,
int  j,
int  k,
Array4< Real > const &  crse,
Array4< Real const > const &  fine 
)
noexcept

◆ mlndlap_Ax_fine_contrib() [1/2]

template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_Ax_fine_contrib ( int  i,
int  j,
int  k,
Box const &  ndbx,
Box const &  ccbx,
Array4< Real > const &  f,
Array4< Real const > const &  res,
Array4< Real const > const &  rhs,
Array4< Real const > const &  phi,
Array4< Real const > const &  sig,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_Ax_fine_contrib() [2/2]

template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_Ax_fine_contrib ( int  i,
int  j,
int  ,
Box const &  ndbx,
Box const &  ccbx,
Array4< Real > const &  f,
Array4< Real const > const &  res,
Array4< Real const > const &  rhs,
Array4< Real const > const &  phi,
Array4< Real const > const &  sig,
Array4< int const > const &  msk,
bool  is_rz,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_Ax_fine_contrib_cs() [1/2]

template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_Ax_fine_contrib_cs ( int  i,
int  j,
int  k,
Box const &  ndbx,
Box const &  ccbx,
Array4< Real > const &  f,
Array4< Real const > const &  res,
Array4< Real const > const &  rhs,
Array4< Real const > const &  phi,
Real const  sig,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_Ax_fine_contrib_cs() [2/2]

template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_Ax_fine_contrib_cs ( int  i,
int  j,
int  ,
Box const &  ndbx,
Box const &  ccbx,
Array4< Real > const &  f,
Array4< Real const > const &  res,
Array4< Real const > const &  rhs,
Array4< Real const > const &  phi,
Real const  sig,
Array4< int const > const &  msk,
bool  is_rz,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_Ax_fine_contrib_doit() [1/2]

template<int rr, typename S >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_Ax_fine_contrib_doit ( S const &  sig,
int  i,
int  j,
Box const &  ndbx,
Box const &  ccbx,
Array4< Real > const &  f,
Array4< Real const > const &  res,
Array4< Real const > const &  rhs,
Array4< Real const > const &  phi,
Array4< int const > const &  msk,
bool  is_rz,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_Ax_fine_contrib_doit() [2/2]

template<int rr, typename S >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_Ax_fine_contrib_doit ( S const &  sig,
int  i,
int  j,
int  k,
Box const &  ndbx,
Box const &  ccbx,
Array4< Real > const &  f,
Array4< Real const > const &  res,
Array4< Real const > const &  rhs,
Array4< Real const > const &  phi,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_bc_doit()

template<typename T >
void amrex::mlndlap_bc_doit ( Box const &  vbx,
Array4< T > const &  a,
Box const &  domain,
GpuArray< bool, AMREX_SPACEDIM > const &  bflo,
GpuArray< bool, AMREX_SPACEDIM > const &  bfhi 
)
inline noexcept

◆ mlndlap_color()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE int amrex::mlndlap_color ( int  i,
int  j,
int  k 
)

◆ mlndlap_crse_resid()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_crse_resid ( int  i,
int  j,
int  k,
Array4< Real > const &  resid,
Array4< Real const > const &  rhs,
Array4< int const > const &  msk,
Box const &  nddom,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bchi,
bool  neumann_doubling 
)
noexcept

◆ mlndlap_divu() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_divu ( int  i,
int  j,
int  k,
Array4< Real > const &  rhs,
Array4< Real const > const &  vel,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Box const &  nodal_domain,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bchi,
bool  is_rz 
)
noexcept

◆ mlndlap_divu() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_divu ( int  i,
int  j,
int  k,
Array4< Real > const &  rhs,
Array4< Real const > const &  vel,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Box const &  nodal_domain,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bchi 
)
noexcept

◆ mlndlap_divu_cf_contrib() [1/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_divu_cf_contrib ( int  i,
int  j,
int  k,
Array4< Real > const &  rhs,
Array4< Real const > const &  vel,
Array4< Real const > const &  fc,
Array4< Real const > const &  rhcc,
Array4< int const > const &  dmsk,
Array4< int const > const &  ndmsk,
Array4< int const > const &  ccmsk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Box const &  ccdom_p,
Box const &  veldom,
Box const &  nddom,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bchi 
)
noexcept

◆ mlndlap_divu_cf_contrib() [2/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_divu_cf_contrib ( int  i,
int  j,
int  ,
Array4< Real > const &  rhs,
Array4< Real const > const &  vel,
Array4< Real const > const &  fc,
Array4< Real const > const &  rhcc,
Array4< int const > const &  dmsk,
Array4< int const > const &  ndmsk,
Array4< int const > const &  ccmsk,
bool  is_rz,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Box const &  ccdom_p,
Box const &  veldom,
Box const &  nddom,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bchi 
)
noexcept

◆ mlndlap_divu_cf_contrib() [3/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_divu_cf_contrib ( int  ,
int  ,
int  ,
Array4< Real > const &  ,
Array4< Real const > const &  ,
Array4< Real const > const &  ,
Array4< Real const > const &  ,
Array4< int const > const &  ,
Array4< int const > const &  ,
Array4< int const > const &  ,
GpuArray< Real, AMREX_SPACEDIM > const &  ,
Box const &  ,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  ,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  ,
bool   
)
noexcept

◆ mlndlap_divu_fine_contrib() [1/2]

template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_divu_fine_contrib ( int  i,
int  j,
int  k,
Box const &  fvbx,
Box const &  velbx,
Array4< Real > const &  rhs,
Array4< Real const > const &  vel,
Array4< Real const > const &  frhs,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_divu_fine_contrib() [2/2]

template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_divu_fine_contrib ( int  i,
int  j,
int  ,
Box const &  fvbx,
Box const &  velbx,
Array4< Real > const &  rhs,
Array4< Real const > const &  vel,
Array4< Real const > const &  frhs,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  is_rz 
)
noexcept

◆ mlndlap_fillbc_cc()

template<typename T >
void amrex::mlndlap_fillbc_cc ( Box const &  vbx,
Array4< T > const &  sigma,
Box const &  domain,
GpuArray< LinOpBCType, AMREX_SPACEDIM >  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM >  bchi 
)
noexcept

◆ mlndlap_gauss_seidel_aa() [1/2]

void amrex::mlndlap_gauss_seidel_aa ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sig,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  is_rz 
)
inline noexcept

◆ mlndlap_gauss_seidel_aa() [2/2]

void amrex::mlndlap_gauss_seidel_aa ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sx,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ mlndlap_gauss_seidel_c() [1/2]

void amrex::mlndlap_gauss_seidel_c ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Real  sig,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ mlndlap_gauss_seidel_c() [2/2]

void amrex::mlndlap_gauss_seidel_c ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Real  sig,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  is_rz 
)
inline noexcept

◆ mlndlap_gauss_seidel_ha() [1/3]

void amrex::mlndlap_gauss_seidel_ha ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sx,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ mlndlap_gauss_seidel_ha() [2/3]

void amrex::mlndlap_gauss_seidel_ha ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sx,
Array4< Real const > const &  sy,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  is_rz 
)
inline noexcept

◆ mlndlap_gauss_seidel_ha() [3/3]

void amrex::mlndlap_gauss_seidel_ha ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sx,
Array4< Real const > const &  sy,
Array4< Real const > const &  sz,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ mlndlap_gauss_seidel_sten() [1/2]

void amrex::mlndlap_gauss_seidel_sten ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sten,
Array4< int const > const &  msk 
)
inline noexcept

◆ mlndlap_gauss_seidel_sten() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_gauss_seidel_sten ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sten,
Array4< int const > const &  msk 
)
noexcept

◆ mlndlap_gauss_seidel_with_line_solve_aa() [1/2]

void amrex::mlndlap_gauss_seidel_with_line_solve_aa ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sig,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ mlndlap_gauss_seidel_with_line_solve_aa() [2/2]

void amrex::mlndlap_gauss_seidel_with_line_solve_aa ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sig,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  is_rz 
)
inline noexcept

◆ mlndlap_gscolor_aa() [1/2]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_gscolor_aa ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sig,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
int  color,
bool  is_rz 
)
noexcept

◆ mlndlap_gscolor_aa() [2/2]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_gscolor_aa ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sx,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
int  color 
)
noexcept

◆ mlndlap_gscolor_c() [1/2]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_gscolor_c ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Real  sig,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
int  color 
)
noexcept

◆ mlndlap_gscolor_c() [2/2]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_gscolor_c ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Real  sig,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
int  color,
bool  is_rz 
)
noexcept

◆ mlndlap_gscolor_ha() [1/3]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_gscolor_ha ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sx,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
int  color 
)
noexcept

◆ mlndlap_gscolor_ha() [2/3]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_gscolor_ha ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sx,
Array4< Real const > const &  sy,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
int  color,
bool  is_rz 
)
noexcept

◆ mlndlap_gscolor_ha() [3/3]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_gscolor_ha ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sx,
Array4< Real const > const &  sy,
Array4< Real const > const &  sz,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
int  color 
)
noexcept

◆ mlndlap_gscolor_sten()

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_gscolor_sten ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sten,
Array4< int const > const &  msk,
int  color 
)
noexcept
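
Each gscolor kernel relaxes only the nodes of one color, so a full smoothing pass loops over colors with a synchronization between launches. A hedged sketch for the stencil variant; the color count and the placement of the synchronization are assumptions for illustration:

    // Sketch: one multi-colored Gauss-Seidel pass; one launch per color.
    for (int color = 0; color < ncolors; ++color) {
        amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
        {
            mlndlap_gscolor_sten(i, j, k, sol, rhs, sten, msk, color);
        });
        amrex::Gpu::streamSynchronize(); // finish this color before the next
    }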

◆ mlndlap_impose_neumann_bc()

void amrex::mlndlap_impose_neumann_bc ( Box const &  bx,
Array4< Real > const &  rhs,
Box const &  nddom,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  lobc,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  hibc 
)
inline noexcept

◆ mlndlap_interpadd_aa()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_interpadd_aa ( int  i,
int  j,
int  k,
Array4< Real > const &  fine,
Array4< Real const > const &  crse,
Array4< Real const > const &  sig,
Array4< int const > const &  msk 
)
noexcept

◆ mlndlap_interpadd_c()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_interpadd_c ( int  i,
int  j,
int  k,
Array4< Real > const &  fine,
Array4< Real const > const &  crse,
Array4< int const > const &  msk 
)
noexcept

◆ mlndlap_interpadd_ha() [1/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_interpadd_ha ( int  i,
int  j,
int  k,
Array4< Real > const &  fine,
Array4< Real const > const &  crse,
Array4< Real const > const &  sig,
Array4< int const > const &  msk 
)
noexcept

◆ mlndlap_interpadd_ha() [2/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_interpadd_ha ( int  i,
int  j,
int  k,
Array4< Real > const &  fine,
Array4< Real const > const &  crse,
Array4< Real const > const &  sigx,
Array4< Real const > const &  sigy,
Array4< Real const > const &  sigz,
Array4< int const > const &  msk 
)
noexcept

◆ mlndlap_interpadd_ha() [3/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_interpadd_ha ( int  i,
int  j,
int  ,
Array4< Real > const &  fine,
Array4< Real const > const &  crse,
Array4< Real const > const &  sigx,
Array4< Real const > const &  sigy,
Array4< int const > const &  msk 
)
noexcept

◆ mlndlap_interpadd_rap()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_interpadd_rap ( int  i,
int  j,
int  k,
Array4< Real > const &  fine,
Array4< Real const > const &  crse,
Array4< Real const > const &  sten,
Array4< int const > const &  msk 
)
noexcept

◆ mlndlap_jacobi_aa() [1/2]

void amrex::mlndlap_jacobi_aa ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  Ax,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sig,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ mlndlap_jacobi_aa() [2/2]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_jacobi_aa ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Real  Ax,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sig,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept
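
The point-based Jacobi overload takes the precomputed value of A*x at the node, so a typical call site applies the operator first and then relaxes. A minimal sketch in which Ax is an Array4 holding the operator result (all names are illustrative):

    // Sketch: Jacobi relaxation at each node from a precomputed Ax fab.
    amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
    {
        mlndlap_jacobi_aa(i, j, k, sol, Ax(i,j,k), rhs, sig, msk, dxinv);
    });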

◆ mlndlap_jacobi_c() [1/2]

void amrex::mlndlap_jacobi_c ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  Ax,
Array4< Real const > const &  rhs,
Real  sig,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ mlndlap_jacobi_c() [2/2]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_jacobi_c ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Real  Ax,
Array4< Real const > const &  rhs,
Real  sig,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_jacobi_ha() [1/6]

void amrex::mlndlap_jacobi_ha ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  Ax,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sx,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ mlndlap_jacobi_ha() [2/6]

void amrex::mlndlap_jacobi_ha ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  Ax,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sx,
Array4< Real const > const &  sy,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ mlndlap_jacobi_ha() [3/6]

void amrex::mlndlap_jacobi_ha ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  Ax,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sx,
Array4< Real const > const &  sy,
Array4< Real const > const &  sz,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
inline noexcept

◆ mlndlap_jacobi_ha() [4/6]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_jacobi_ha ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Real  Ax,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sx,
Array4< Real const > const &  sy,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_jacobi_ha() [5/6]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_jacobi_ha ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Real  Ax,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sx,
Array4< Real const > const &  sy,
Array4< Real const > const &  sz,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_jacobi_ha() [6/6]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_jacobi_ha ( int  i,
int  ,
int  ,
Array4< Real > const &  sol,
Real  Ax,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sx,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_jacobi_sten() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_jacobi_sten ( Box const &  bx,
Array4< Real > const &  sol,
Array4< Real const > const &  Ax,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sten,
Array4< int const > const &  msk 
)
noexcept

◆ mlndlap_jacobi_sten() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_jacobi_sten ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Real  Ax,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sten,
Array4< int const > const &  msk 
)
noexcept

◆ mlndlap_mknewu() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_mknewu ( int  i,
int  j,
int  k,
Array4< Real > const &  u,
Array4< Real const > const &  p,
Array4< Real const > const &  sig,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  is_rz 
)
noexcept

◆ mlndlap_mknewu() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_mknewu ( int  i,
int  j,
int  k,
Array4< Real > const &  u,
Array4< Real const > const &  p,
Array4< Real const > const &  sig,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_mknewu_c() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_mknewu_c ( int  i,
int  j,
int  k,
Array4< Real > const &  u,
Array4< Real const > const &  p,
Real  sig,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  is_rz 
)
noexcept

◆ mlndlap_mknewu_c() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_mknewu_c ( int  i,
int  j,
int  k,
Array4< Real > const &  u,
Array4< Real const > const &  p,
Real  sig,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_normalize_aa()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_normalize_aa ( int  i,
int  j,
int  k,
Array4< Real > const &  x,
Array4< Real const > const &  sx,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_normalize_ha() [1/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_normalize_ha ( int  i,
int  j,
int  k,
Array4< Real > const &  x,
Array4< Real const > const &  sx,
Array4< Real const > const &  sy,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_normalize_ha() [2/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_normalize_ha ( int  i,
int  j,
int  k,
Array4< Real > const &  x,
Array4< Real const > const &  sx,
Array4< Real const > const &  sy,
Array4< Real const > const &  sz,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_normalize_ha() [3/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_normalize_ha ( int  i,
int  ,
int  ,
Array4< Real > const &  x,
Array4< Real const > const &  sx,
Array4< int const > const &  msk,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_normalize_sten()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_normalize_sten ( int  i,
int  j,
int  k,
Array4< Real > const &  x,
Array4< Real const > const &  sten,
Array4< int const > const &  msk,
Real  s0_norm0 
)
noexcept

◆ mlndlap_res_cf_contrib() [1/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_res_cf_contrib ( int  i,
int  j,
int  k,
Array4< Real > const &  res,
Array4< Real const > const &  phi,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sig,
Array4< int const > const &  dmsk,
Array4< int const > const &  ndmsk,
Array4< int const > const &  ccmsk,
Array4< Real const > const &  fc,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Box const &  ccdom_p,
Box const &  nddom,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bchi,
bool  neumann_doubling 
)
noexcept

◆ mlndlap_res_cf_contrib() [2/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_res_cf_contrib ( int  i,
int  j,
int  ,
Array4< Real > const &  res,
Array4< Real const > const &  phi,
Array4< Real const > const &  rhs,
Array4< Real const > const &  sig,
Array4< int const > const &  dmsk,
Array4< int const > const &  ndmsk,
Array4< int const > const &  ccmsk,
Array4< Real const > const &  fc,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Box const &  ccdom_p,
Box const &  nddom,
bool  is_rz,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bchi,
bool  neumann_doubling 
)
noexcept

◆ mlndlap_res_cf_contrib() [3/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_res_cf_contrib ( int  ,
int  ,
int  ,
Array4< Real > const &  ,
Array4< Real const > const &  ,
Array4< Real const > const &  ,
Array4< Real const > const &  ,
Array4< int const > const &  ,
Array4< int const > const &  ,
Array4< int const > const &  ,
Array4< Real const > const &  ,
GpuArray< Real, AMREX_SPACEDIM > const &  ,
Box const &  ,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  ,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  ,
bool   
)
noexcept

◆ mlndlap_res_cf_contrib_cs() [1/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_res_cf_contrib_cs ( int  i,
int  j,
int  k,
Array4< Real > const &  res,
Array4< Real const > const &  phi,
Array4< Real const > const &  rhs,
Real const  sig,
Array4< int const > const &  dmsk,
Array4< int const > const &  ndmsk,
Array4< int const > const &  ccmsk,
Array4< Real const > const &  fc,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Box const &  ccdom_p,
Box const &  nddom,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bchi,
bool  neumann_doubling 
)
noexcept

◆ mlndlap_res_cf_contrib_cs() [2/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_res_cf_contrib_cs ( int  i,
int  j,
int  ,
Array4< Real > const &  res,
Array4< Real const > const &  phi,
Array4< Real const > const &  rhs,
Real const  sig,
Array4< int const > const &  dmsk,
Array4< int const > const &  ndmsk,
Array4< int const > const &  ccmsk,
Array4< Real const > const &  fc,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Box const &  ccdom_p,
Box const &  nddom,
bool  is_rz,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bchi,
bool  neumann_doubling 
)
noexcept

◆ mlndlap_res_cf_contrib_cs() [3/3]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_res_cf_contrib_cs ( int  ,
int  ,
int  ,
Array4< Real > const &  ,
Array4< Real const > const &  ,
Array4< Real const > const &  ,
Real  ,
Array4< int const > const &  ,
Array4< int const > const &  ,
Array4< int const > const &  ,
Array4< Real const > const &  ,
GpuArray< Real, AMREX_SPACEDIM > const &  ,
Box const &  ,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  ,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  ,
bool   
)
noexcept

◆ mlndlap_restriction() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_restriction ( int  i,
int  j,
int  k,
Array4< Real > const &  crse,
Array4< Real const > const &  fine,
Array4< int const > const &  msk 
)
noexcept

◆ mlndlap_restriction() [2/2]

template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_restriction ( int  i,
int  j,
int  k,
Array4< Real > const &  crse,
Array4< Real const > const &  fine,
Array4< int const > const &  msk,
Box const &  fdom,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bchi 
)
noexcept
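
The templated overload restricts fine nodal data at the compile-time refinement ratio rr, with boundary-aware weighting near the fine domain fdom. A minimal sketch for rr = 2 over the coarse box (cbx, crse, fine, msk, fdom, bclo, and bchi are illustrative names):

    // Sketch: restrict the fine residual to coarse nodes at ratio 2.
    amrex::ParallelFor(cbx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
    {
        mlndlap_restriction<2>(i, j, k, crse, fine, msk, fdom, bclo, bchi);
    });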

◆ mlndlap_restriction_rap()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_restriction_rap ( int  i,
int  j,
int  k,
Array4< Real > const &  crse,
Array4< Real const > const &  fine,
Array4< Real const > const &  sten,
Array4< int const > const &  msk 
)
noexcept

◆ mlndlap_rhcc()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlndlap_rhcc ( int  i,
int  j,
int  k,
Array4< Real const > const &  rhcc,
Array4< int const > const &  msk 
)
noexcept

◆ mlndlap_rhcc_fine_contrib()

template<int rr>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_rhcc_fine_contrib ( int  i,
int  j,
int  k,
Box const &  ccbx,
Array4< Real > const &  rhs,
Array4< Real const > const &  cc,
Array4< int const > const &  msk 
)
noexcept

◆ mlndlap_scale_neumann_bc()

void amrex::mlndlap_scale_neumann_bc ( Real  s,
Box const &  bx,
Array4< Real > const &  rhs,
Box const &  nddom,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  lobc,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  hibc 
)
inline noexcept

◆ mlndlap_semi_avgdown_coeff()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_semi_avgdown_coeff ( int  i,
int  j,
int  k,
Array4< Real > const &  crse,
Array4< Real const > const &  fine,
int  idir 
)
noexcept

◆ mlndlap_semi_interpadd_aa()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_semi_interpadd_aa ( int  i,
int  j,
int  k,
Array4< Real > const &  fine,
Array4< Real const > const &  crse,
Array4< Real const > const &  sig,
Array4< int const > const &  msk,
int  idir 
)
noexcept

◆ mlndlap_semi_restriction()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_semi_restriction ( int  i,
int  j,
int  k,
Array4< Real > const &  crse,
Array4< Real const > const &  fine,
Array4< int const > const &  msk,
int  idir 
)
noexcept

◆ mlndlap_set_dirichlet_mask()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_set_dirichlet_mask ( Box const &  bx,
Array4< int > const &  dmsk,
Array4< int const > const &  omsk,
Box const &  dom,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bchi 
)
noexcept

◆ mlndlap_set_dot_mask()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_set_dot_mask ( Box const &  bx,
Array4< Real > const &  dmsk,
Array4< int const > const &  omsk,
Box const &  dom,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bchi 
)
noexcept

◆ mlndlap_set_nodal_mask()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_set_nodal_mask ( int  i,
int  j,
int  k,
Array4< int > const &  nmsk,
Array4< int const > const &  cmsk 
)
noexcept

◆ mlndlap_set_stencil()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_set_stencil ( Box const &  bx,
Array4< Real > const &  sten,
Array4< Real const > const &  sigma,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mlndlap_set_stencil_s0()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_set_stencil_s0 ( int  i,
int  j,
int  k,
Array4< Real > const &  sten 
)
noexcept

◆ mlndlap_stencil_rap()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_stencil_rap ( int  i,
int  j,
int  k,
Array4< Real > const &  csten,
Array4< Real const > const &  fsten 
)
noexcept

◆ mlndlap_sum_Ax() [1/2]

template<typename P , typename S >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlndlap_sum_Ax ( P const &  pred,
S const &  sig,
int  i,
int  j,
int  k,
Real  facx,
Real  facy,
Real  facz,
Array4< Real const > const &  phi 
)
noexcept

◆ mlndlap_sum_Ax() [2/2]

template<typename P , typename S >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlndlap_sum_Ax ( P const &  pred,
S const &  sig,
int  i,
int  j,
Real  facx,
Real  facy,
Array4< Real const > const &  phi,
bool  is_rz 
)
noexcept

◆ mlndlap_sum_Df() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlndlap_sum_Df ( int  ii,
int  jj,
int  kk,
Real  facx,
Real  facy,
Real  facz,
Array4< Real const > const &  vel,
Box const &  velbx 
)
noexcept

◆ mlndlap_sum_Df() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mlndlap_sum_Df ( int  ii,
int  jj,
Real  facx,
Real  facy,
Array4< Real const > const &  vel,
Box const &  velbx,
bool  is_rz 
)
noexcept

◆ mlndlap_unimpose_neumann_bc()

void amrex::mlndlap_unimpose_neumann_bc ( Box const &  bx,
Array4< Real > const &  rhs,
Box const &  nddom,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  lobc,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  hibc 
)
inline noexcept

◆ mlndlap_zero_fine()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndlap_zero_fine ( int  i,
int  j,
int  k,
Array4< Real > const &  phi,
Array4< int const > const &  msk,
int  fine_flag 
)
noexcept

◆ mlndtslap_adotx() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndtslap_adotx ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< int const > const &  msk,
GpuArray< Real, 3 > const &  s 
)
noexcept

◆ mlndtslap_adotx() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndtslap_adotx ( int  i,
int  j,
int  k,
Array4< Real > const &  y,
Array4< Real const > const &  x,
Array4< int const > const &  msk,
GpuArray< Real, 6 > const &  s 
)
noexcept

◆ mlndtslap_gauss_seidel() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndtslap_gauss_seidel ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< int const > const &  msk,
GpuArray< Real, 3 > const &  s 
)
noexcept

◆ mlndtslap_gauss_seidel() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndtslap_gauss_seidel ( int  i,
int  j,
int  k,
Array4< Real > const &  sol,
Array4< Real const > const &  rhs,
Array4< int const > const &  msk,
GpuArray< Real, 6 > const &  s 
)
noexcept

◆ mlndtslap_interpadd()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndtslap_interpadd ( int  i,
int  j,
int  k,
Array4< Real > const &  fine,
Array4< Real const > const &  crse,
Array4< int const > const &  msk 
)
noexcept

◆ mlndtslap_semi_interpadd()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlndtslap_semi_interpadd ( int  i,
int  j,
int  k,
Array4< Real > const &  fine,
Array4< Real const > const &  crse,
Array4< int const > const &  msk,
int  semi_dir 
)
noexcept

◆ mlpoisson_adotx() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_adotx ( int  i,
Array4< T > const &  y,
Array4< T const > const &  x,
T  dhx 
)
noexcept

◆ mlpoisson_adotx() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_adotx ( int  i,
int  j,
int  k,
Array4< T > const &  y,
Array4< T const > const &  x,
T  dhx,
T  dhy,
T  dhz 
)
noexcept
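
mlpoisson_adotx applies the constant-coefficient Poisson stencil at one cell; the dh* arguments are pre-squared inverse cell sizes. A minimal sketch assuming a 3D build and dhx = dxinv[0]^2 etc., which matches the usual second-order discretization but is stated here as an assumption:

    // Sketch: y = A x over box bx (3D overload).
    GpuArray<Real,AMREX_SPACEDIM> dxinv = geom.InvCellSizeArray();
    Real dhx = dxinv[0]*dxinv[0];
    Real dhy = dxinv[1]*dxinv[1];
    Real dhz = dxinv[2]*dxinv[2];
    amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
    {
        mlpoisson_adotx(i, j, k, y, x, dhx, dhy, dhz);
    });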

◆ mlpoisson_adotx_m()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_adotx_m ( int  i,
Array4< T > const &  y,
Array4< T const > const &  x,
T  dhx,
T  dx,
T  probxlo 
)
noexcept

◆ mlpoisson_adotx_os() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_adotx_os ( int  i,
Array4< T > const &  y,
Array4< T const > const &  x,
Array4< int const > const &  osm,
T  dhx 
)
noexcept

◆ mlpoisson_adotx_os() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_adotx_os ( int  i,
int  j,
int  k,
Array4< T > const &  y,
Array4< T const > const &  x,
Array4< int const > const &  osm,
T  dhx,
T  dhy,
T  dhz 
)
noexcept

◆ mlpoisson_flux_x()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_flux_x ( Box const &  box,
Array4< T > const &  fx,
Array4< T const > const &  sol,
T  dxinv 
)
noexcept

◆ mlpoisson_flux_x_m()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_flux_x_m ( Box const &  box,
Array4< T > const &  fx,
Array4< T const > const &  sol,
T  dxinv,
T  dx,
T  probxlo 
)
noexcept

◆ mlpoisson_flux_xface()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_flux_xface ( Box const &  box,
Array4< T > const &  fx,
Array4< T const > const &  sol,
T  dxinv,
int  xlen 
)
noexcept

◆ mlpoisson_flux_xface_m()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_flux_xface_m ( Box const &  box,
Array4< T > const &  fx,
Array4< T const > const &  sol,
T  dxinv,
int  xlen,
T  dx,
T  probxlo 
)
noexcept

◆ mlpoisson_flux_y()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_flux_y ( Box const &  box,
Array4< T > const &  fy,
Array4< T const > const &  sol,
T  dyinv 
)
noexcept

◆ mlpoisson_flux_yface()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_flux_yface ( Box const &  box,
Array4< T > const &  fy,
Array4< T const > const &  sol,
T  dyinv,
int  ylen 
)
noexcept

◆ mlpoisson_flux_z()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_flux_z ( Box const &  box,
Array4< T > const &  fz,
Array4< T const > const &  sol,
T  dzinv 
)
noexcept

◆ mlpoisson_flux_zface()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_flux_zface ( Box const &  box,
Array4< T > const &  fz,
Array4< T const > const &  sol,
T  dzinv,
int  zlen 
)
noexcept
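
The flux kernels are box-based and iterate internally, so they are launched once per face-centered box rather than per point. A hedged sketch for the x-direction; the launch macro usage and the box construction are illustrative:

    // Sketch: fill x-face fluxes of sol over the valid box bx.
    Box xbx = amrex::surroundingNodes(bx, 0); // x-face-centered box
    AMREX_LAUNCH_HOST_DEVICE_LAMBDA(xbx, txbx,
    {
        mlpoisson_flux_x(txbx, fx, sol, dxinv);
    });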

◆ mlpoisson_gsrb() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_gsrb ( int  i,
int  j,
int  k,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
T  dhx,
T  dhy,
T  dhz,
Array4< T const > const &  f0,
Array4< int const > const &  m0,
Array4< T const > const &  f1,
Array4< int const > const &  m1,
Array4< T const > const &  f2,
Array4< int const > const &  m2,
Array4< T const > const &  f3,
Array4< int const > const &  m3,
Array4< T const > const &  f4,
Array4< int const > const &  m4,
Array4< T const > const &  f5,
Array4< int const > const &  m5,
Box const &  vbox,
int  redblack 
)
noexcept

◆ mlpoisson_gsrb() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_gsrb ( int  i,
int  ,
int  ,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
T  dhx,
Array4< T const > const &  f0,
Array4< int const > const &  m0,
Array4< T const > const &  f1,
Array4< int const > const &  m1,
Box const &  vbox,
int  redblack 
)
noexcept
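
mlpoisson_gsrb updates only the cells of one parity per call, so a full smoothing step performs two sweeps. A hedged sketch of the 3D overload; the (i+j+k) parity convention behind redblack is assumed, and f0..f5 / m0..m5 are the boundary fabs and masks listed in the signature above:

    // Sketch: one complete red-black Gauss-Seidel iteration (both colors).
    for (int redblack = 0; redblack < 2; ++redblack) {
        amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
        {
            mlpoisson_gsrb(i, j, k, phi, rhs, dhx, dhy, dhz,
                           f0, m0, f1, m1, f2, m2, f3, m3, f4, m4, f5, m5,
                           vbox, redblack);
        });
    }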

◆ mlpoisson_gsrb_m()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_gsrb_m ( int  i,
int  ,
int  ,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
T  dhx,
Array4< T const > const &  f0,
Array4< int const > const &  m0,
Array4< T const > const &  f1,
Array4< int const > const &  m1,
Box const &  vbox,
int  redblack,
T  dx,
T  probxlo 
)
noexcept

◆ mlpoisson_gsrb_os() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_gsrb_os ( int  i,
int  j,
int  k,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
Array4< int const > const &  osm,
T  dhx,
T  dhy,
T  dhz,
Array4< T const > const &  f0,
Array4< int const > const &  m0,
Array4< T const > const &  f1,
Array4< int const > const &  m1,
Array4< T const > const &  f2,
Array4< int const > const &  m2,
Array4< T const > const &  f3,
Array4< int const > const &  m3,
Array4< T const > const &  f4,
Array4< int const > const &  m4,
Array4< T const > const &  f5,
Array4< int const > const &  m5,
Box const &  vbox,
int  redblack 
)
noexcept

◆ mlpoisson_gsrb_os() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_gsrb_os ( int  i,
int  ,
int  ,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
Array4< int const > const &  osm,
T  dhx,
Array4< T const > const &  f0,
Array4< int const > const &  m0,
Array4< T const > const &  f1,
Array4< int const > const &  m1,
Box const &  vbox,
int  redblack 
)
noexcept

◆ mlpoisson_jacobi() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_jacobi ( int  i,
int  j,
int  k,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
Array4< T const > const &  Ax,
T  dhx,
T  dhy,
T  dhz,
Array4< T const > const &  f0,
Array4< int const > const &  m0,
Array4< T const > const &  f1,
Array4< int const > const &  m1,
Array4< T const > const &  f2,
Array4< int const > const &  m2,
Array4< T const > const &  f3,
Array4< int const > const &  m3,
Array4< T const > const &  f4,
Array4< int const > const &  m4,
Array4< T const > const &  f5,
Array4< int const > const &  m5,
Box const &  vbox 
)
noexcept

◆ mlpoisson_jacobi() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_jacobi ( int  i,
int  ,
int  ,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
Array4< T const > const &  Ax,
T  dhx,
Array4< T const > const &  f0,
Array4< int const > const &  m0,
Array4< T const > const &  f1,
Array4< int const > const &  m1,
Box const &  vbox 
)
noexcept

◆ mlpoisson_jacobi_m()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_jacobi_m ( int  i,
int  ,
int  ,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
Array4< T const > const &  Ax,
T  dhx,
Array4< T const > const &  f0,
Array4< int const > const &  m0,
Array4< T const > const &  f1,
Array4< int const > const &  m1,
Box const &  vbox,
T  dx,
T  probxlo 
)
noexcept

◆ mlpoisson_jacobi_os() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_jacobi_os ( int  i,
int  j,
int  k,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
Array4< T const > const &  Ax,
Array4< int const > const &  osm,
T  dhx,
T  dhy,
T  dhz,
Array4< T const > const &  f0,
Array4< int const > const &  m0,
Array4< T const > const &  f1,
Array4< int const > const &  m1,
Array4< T const > const &  f2,
Array4< int const > const &  m2,
Array4< T const > const &  f3,
Array4< int const > const &  m3,
Array4< T const > const &  f4,
Array4< int const > const &  m4,
Array4< T const > const &  f5,
Array4< int const > const &  m5,
Box const &  vbox 
)
noexcept

◆ mlpoisson_jacobi_os() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_jacobi_os ( int  i,
int  ,
int  ,
Array4< T > const &  phi,
Array4< T const > const &  rhs,
Array4< T const > const &  Ax,
Array4< int const > const &  osm,
T  dhx,
Array4< T const > const &  f0,
Array4< int const > const &  m0,
Array4< T const > const &  f1,
Array4< int const > const &  m1,
Box const &  vbox 
)
noexcept

◆ mlpoisson_normalize()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mlpoisson_normalize ( int  i,
int  ,
int  ,
Array4< T > const &  x,
T  dhx,
T  dx,
T  probxlo 
)
noexcept

◆ MLStateRedistribute()

void amrex::MLStateRedistribute ( amrex::Box const &  bx,
int  ncomp,
amrex::Array4< amrex::Real > const &  U_out,
amrex::Array4< amrex::Real > const &  U_in,
amrex::Array4< amrex::EBCellFlag const > const &  flag,
amrex::Array4< amrex::Real const > const &  vfrac,
AMREX_D_DECL(amrex::Array4< amrex::Real const > const &fcx, amrex::Array4< amrex::Real const > const &fcy, amrex::Array4< amrex::Real const > const &fcz)  ,
amrex::Array4< amrex::Real const > const &  ccent,
amrex::BCRec const *  d_bcrec_ptr,
amrex::Array4< int const > const &  itracker,
amrex::Array4< amrex::Real const > const &  nrs,
amrex::Array4< amrex::Real const > const &  alpha,
amrex::Array4< amrex::Real const > const &  nbhd_vol,
amrex::Array4< amrex::Real const > const &  cent_hat,
amrex::Geometry const &  geom,
int  as_crse,
Array4< Real > const &  drho_as_crse,
Array4< int const > const &  flag_as_crse,
int  as_fine,
Array4< Real > const &  dm_as_fine,
Array4< int const > const &  levmsk,
int  is_ghost_cell,
amrex::Real  fac_for_deltaR,
int  max_order = 2 
)

◆ mltensor_cross_terms() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_cross_terms ( Box const &  box,
Array4< Real > const &  Ax,
Array4< Real const > const &  fx,
Array4< Real const > const &  fy,
Array4< Real const > const &  fz,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Real  bscalar 
)
noexcept

◆ mltensor_cross_terms() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_cross_terms ( Box const &  box,
Array4< Real > const &  Ax,
Array4< Real const > const &  fx,
Array4< Real const > const &  fy,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Real  bscalar 
)
noexcept
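
The cross-term kernels accumulate the transverse viscous contributions into Ax from pre-filled face arrays; the fx/fy/fz inputs come from the mltensor_cross_terms_f{x,y,z} kernels listed next. A hedged box-based launch sketch (all names are illustrative):

    // Sketch: add cross (transverse) viscous terms to Ax over box bx.
    AMREX_LAUNCH_HOST_DEVICE_LAMBDA(bx, tbx,
    {
        mltensor_cross_terms(tbx, Ax, fx, fy, fz, dxinv, bscalar);
    });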

◆ mltensor_cross_terms_fx() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_cross_terms_fx ( Box const &  box,
Array4< Real > const &  fx,
Array4< Real const > const &  vel,
Array4< Real const > const &  etax,
Array4< Real const > const &  kapx,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mltensor_cross_terms_fx() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_cross_terms_fx ( Box const &  box,
Array4< Real > const &  fx,
Array4< Real const > const &  vel,
Array4< Real const > const &  etax,
Array4< Real const > const &  kapx,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvxlo,
Array4< Real const > const &  bvxhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mltensor_cross_terms_fy() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_cross_terms_fy ( Box const &  box,
Array4< Real > const &  fy,
Array4< Real const > const &  vel,
Array4< Real const > const &  etay,
Array4< Real const > const &  kapy,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mltensor_cross_terms_fy() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_cross_terms_fy ( Box const &  box,
Array4< Real > const &  fy,
Array4< Real const > const &  vel,
Array4< Real const > const &  etay,
Array4< Real const > const &  kapy,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvylo,
Array4< Real const > const &  bvyhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mltensor_cross_terms_fz() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_cross_terms_fz ( Box const &  box,
Array4< Real > const &  fz,
Array4< Real const > const &  vel,
Array4< Real const > const &  etaz,
Array4< Real const > const &  kapz,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mltensor_cross_terms_fz() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_cross_terms_fz ( Box const &  box,
Array4< Real > const &  fz,
Array4< Real const > const &  vel,
Array4< Real const > const &  etaz,
Array4< Real const > const &  kapz,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvzlo,
Array4< Real const > const &  bvzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mltensor_cross_terms_os() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_cross_terms_os ( Box const &  box,
Array4< Real > const &  Ax,
Array4< Real const > const &  fx,
Array4< Real const > const &  fy,
Array4< int const > const &  osm,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Real  bscalar 
)
noexcept

◆ mltensor_cross_terms_os() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_cross_terms_os ( Box const &  box,
Array4< Real > const &  Ax,
Array4< Real const > const &  fx,
Array4< Real const > const &  fy,
Array4< Real const > const &  fz,
Array4< int const > const &  osm,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Real  bscalar 
)
noexcept

◆ mltensor_dx_on_yface() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mltensor_dx_on_yface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dxi 
)
noexcept

◆ mltensor_dx_on_yface() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mltensor_dx_on_yface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dxi,
Array4< Real const > const &  bvylo,
Array4< Real const > const &  bvyhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mltensor_dx_on_zface() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mltensor_dx_on_zface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dxi 
)
noexcept

◆ mltensor_dx_on_zface() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mltensor_dx_on_zface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dxi,
Array4< Real const > const &  bvzlo,
Array4< Real const > const &  bvzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mltensor_dy_on_xface() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mltensor_dy_on_xface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dyi 
)
noexcept

◆ mltensor_dy_on_xface() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mltensor_dy_on_xface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dyi,
Array4< Real const > const &  bvxlo,
Array4< Real const > const &  bvxhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mltensor_dy_on_zface() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mltensor_dy_on_zface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dyi 
)
noexcept

◆ mltensor_dy_on_zface() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mltensor_dy_on_zface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dyi,
Array4< Real const > const &  bvzlo,
Array4< Real const > const &  bvzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mltensor_dz_on_xface() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mltensor_dz_on_xface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dzi 
)
noexcept

◆ mltensor_dz_on_xface() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mltensor_dz_on_xface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dzi,
Array4< Real const > const &  bvxlo,
Array4< Real const > const &  bvxhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mltensor_dz_on_yface() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mltensor_dz_on_yface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dzi 
)
noexcept

◆ mltensor_dz_on_yface() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::mltensor_dz_on_yface ( int  i,
int  j,
int  k,
int  n,
Array4< Real const > const &  vel,
Real  dzi,
Array4< Real const > const &  bvylo,
Array4< Real const > const &  bvyhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mltensor_fill_corners()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_fill_corners ( int  icorner,
Box const &  vbox,
Array4< Real > const &  vel,
Array4< int const > const &  mxlo,
Array4< int const > const &  mylo,
Array4< int const > const &  mxhi,
Array4< int const > const &  myhi,
Array4< Real const > const &  bcvalxlo,
Array4< Real const > const &  bcvalylo,
Array4< Real const > const &  bcvalxhi,
Array4< Real const > const &  bcvalyhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mltensor_fill_edges() [1/2]

void amrex::mltensor_fill_edges ( Box const &  vbox,
Array4< Real > const &  vel,
Array4< int const > const &  mxlo,
Array4< int const > const &  mylo,
Array4< int const > const &  mzlo,
Array4< int const > const &  mxhi,
Array4< int const > const &  myhi,
Array4< int const > const &  mzhi,
Array4< Real const > const &  bcvalxlo,
Array4< Real const > const &  bcvalylo,
Array4< Real const > const &  bcvalzlo,
Array4< Real const > const &  bcvalxhi,
Array4< Real const > const &  bcvalyhi,
Array4< Real const > const &  bcvalzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
inline noexcept

◆ mltensor_fill_edges() [2/2]

AMREX_GPU_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_fill_edges ( int const  bid,
int const  tid,
int const  bdim,
Box const &  vbox,
Array4< Real > const &  vel,
Array4< int const > const &  mxlo,
Array4< int const > const &  mylo,
Array4< int const > const &  mzlo,
Array4< int const > const &  mxhi,
Array4< int const > const &  myhi,
Array4< int const > const &  mzhi,
Array4< Real const > const &  bcvalxlo,
Array4< Real const > const &  bcvalylo,
Array4< Real const > const &  bcvalzlo,
Array4< Real const > const &  bcvalxhi,
Array4< Real const > const &  bcvalyhi,
Array4< Real const > const &  bcvalzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mltensor_fill_edges_xhi_yhi()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_fill_edges_xhi_yhi ( int const  i,
int const  j,
int const  k,
Dim3 const &  blen,
Array4< Real > const &  vel,
Array4< int const > const &  mxhi,
Array4< int const > const &  myhi,
Array4< Real const > const &  bcvalxhi,
Array4< Real const > const &  bcvalyhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  xhi_domain,
bool  yhi_domain 
)
noexcept

◆ mltensor_fill_edges_xhi_ylo()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_fill_edges_xhi_ylo ( int const  i,
int const  j,
int const  k,
Dim3 const &  blen,
Array4< Real > const &  vel,
Array4< int const > const &  mxhi,
Array4< int const > const &  mylo,
Array4< Real const > const &  bcvalxhi,
Array4< Real const > const &  bcvalylo,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  xhi_domain,
bool  ylo_domain 
)
noexcept

◆ mltensor_fill_edges_xhi_zhi()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_fill_edges_xhi_zhi ( int const  i,
int const  j,
int const  k,
Dim3 const &  blen,
Array4< Real > const &  vel,
Array4< int const > const &  mxhi,
Array4< int const > const &  mzhi,
Array4< Real const > const &  bcvalxhi,
Array4< Real const > const &  bcvalzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  xhi_domain,
bool  zhi_domain 
)
noexcept

◆ mltensor_fill_edges_xhi_zlo()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_fill_edges_xhi_zlo ( int const  i,
int const  j,
int const  k,
Dim3 const &  blen,
Array4< Real > const &  vel,
Array4< int const > const &  mxhi,
Array4< int const > const &  mzlo,
Array4< Real const > const &  bcvalxhi,
Array4< Real const > const &  bcvalzlo,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  xhi_domain,
bool  zlo_domain 
)
noexcept

◆ mltensor_fill_edges_xlo_yhi()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_fill_edges_xlo_yhi ( int const  i,
int const  j,
int const  k,
Dim3 const &  blen,
Array4< Real > const &  vel,
Array4< int const > const &  mxlo,
Array4< int const > const &  myhi,
Array4< Real const > const &  bcvalxlo,
Array4< Real const > const &  bcvalyhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  xlo_domain,
bool  yhi_domain 
)
noexcept

◆ mltensor_fill_edges_xlo_ylo()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_fill_edges_xlo_ylo ( int const  i,
int const  j,
int const  k,
Dim3 const &  blen,
Array4< Real > const &  vel,
Array4< int const > const &  mxlo,
Array4< int const > const &  mylo,
Array4< Real const > const &  bcvalxlo,
Array4< Real const > const &  bcvalylo,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  xlo_domain,
bool  ylo_domain 
)
noexcept

◆ mltensor_fill_edges_xlo_zhi()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_fill_edges_xlo_zhi ( int const  i,
int const  j,
int const  k,
Dim3 const &  blen,
Array4< Real > const &  vel,
Array4< int const > const &  mxlo,
Array4< int const > const &  mzhi,
Array4< Real const > const &  bcvalxlo,
Array4< Real const > const &  bcvalzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  xlo_domain,
bool  zhi_domain 
)
noexcept

◆ mltensor_fill_edges_xlo_zlo()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_fill_edges_xlo_zlo ( int const  i,
int const  j,
int const  k,
Dim3 const &  blen,
Array4< Real > const &  vel,
Array4< int const > const &  mxlo,
Array4< int const > const &  mzlo,
Array4< Real const > const &  bcvalxlo,
Array4< Real const > const &  bcvalzlo,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  xlo_domain,
bool  zlo_domain 
)
noexcept

◆ mltensor_fill_edges_yhi_zhi()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_fill_edges_yhi_zhi ( int const  i,
int const  j,
int const  k,
Dim3 const &  blen,
Array4< Real > const &  vel,
Array4< int const > const &  myhi,
Array4< int const > const &  mzhi,
Array4< Real const > const &  bcvalyhi,
Array4< Real const > const &  bcvalzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  yhi_domain,
bool  zhi_domain 
)
noexcept

◆ mltensor_fill_edges_yhi_zlo()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_fill_edges_yhi_zlo ( int const  i,
int const  j,
int const  k,
Dim3 const &  blen,
Array4< Real > const &  vel,
Array4< int const > const &  myhi,
Array4< int const > const &  mzlo,
Array4< Real const > const &  bcvalyhi,
Array4< Real const > const &  bcvalzlo,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  yhi_domain,
bool  zlo_domain 
)
noexcept

◆ mltensor_fill_edges_ylo_zhi()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_fill_edges_ylo_zhi ( int const  i,
int const  j,
int const  k,
Dim3 const &  blen,
Array4< Real > const &  vel,
Array4< int const > const &  mylo,
Array4< int const > const &  mzhi,
Array4< Real const > const &  bcvalylo,
Array4< Real const > const &  bcvalzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  ylo_domain,
bool  zhi_domain 
)
noexcept

◆ mltensor_fill_edges_ylo_zlo()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_fill_edges_ylo_zlo ( int const  i,
int const  j,
int const  k,
Dim3 const &  blen,
Array4< Real > const &  vel,
Array4< int const > const &  mylo,
Array4< int const > const &  mzlo,
Array4< Real const > const &  bcvalylo,
Array4< Real const > const &  bcvalzlo,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Array2D< Real, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bcl,
int  inhomog,
int  maxorder,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
bool  ylo_domain,
bool  zlo_domain 
)
noexcept

◆ mltensor_vel_grads_fx() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_vel_grads_fx ( Box const &  box,
Array4< Real > const &  fx,
Array4< Real const > const &  vel,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mltensor_vel_grads_fx() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_vel_grads_fx ( Box const &  box,
Array4< Real > const &  fx,
Array4< Real const > const &  vel,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvxlo,
Array4< Real const > const &  bvxhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mltensor_vel_grads_fy() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_vel_grads_fy ( Box const &  box,
Array4< Real > const &  fy,
Array4< Real const > const &  vel,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mltensor_vel_grads_fy() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_vel_grads_fy ( Box const &  box,
Array4< Real > const &  fy,
Array4< Real const > const &  vel,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvylo,
Array4< Real const > const &  bvyhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ mltensor_vel_grads_fz() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_vel_grads_fz ( Box const &  box,
Array4< Real > const &  fz,
Array4< Real const > const &  vel,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv 
)
noexcept

◆ mltensor_vel_grads_fz() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::mltensor_vel_grads_fz ( Box const &  box,
Array4< Real > const &  fz,
Array4< Real const > const &  vel,
GpuArray< Real, AMREX_SPACEDIM > const &  dxinv,
Array4< Real const > const &  bvzlo,
Array4< Real const > const &  bvzhi,
Array2D< BoundCond, 0, 2 *AMREX_SPACEDIM, 0, AMREX_SPACEDIM > const &  bct,
Dim3 const &  dlo,
Dim3 const &  dhi 
)
noexcept

◆ MultiFabFileFullPrefix()

std::string amrex::MultiFabFileFullPrefix ( int  level,
const std::string &  plotfilename,
const std::string &  levelPrefix,
const std::string &  mfPrefix 
)

Return the full path MultiFab prefix, e.g., plt00005/Level_5/Cell.
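
A minimal usage sketch (the plotfile name and level here are hypothetical; "Level_" and "Cell" are the conventional prefixes):

    // Produces "plt00005/Level_5/Cell".
    std::string full = amrex::MultiFabFileFullPrefix(5, "plt00005", "Level_", "Cell");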

◆ MultiFabHeaderPath()

std::string amrex::MultiFabHeaderPath ( int  level,
const std::string &  levelPrefix,
const std::string &  mfPrefix 
)

Return the path of the MultiFab to write to the header, e.g., Level_5/Cell.

◆ MultiLevelToBlueprint() [1/2]

void amrex::MultiLevelToBlueprint ( int  n_levels,
const Vector< const MultiFab * > &  mfs,
const Vector< std::string > &  varnames,
const Vector< Geometry > &  geoms,
Real  time_value,
const Vector< int > &  level_steps,
const Vector< IntVect > &  ref_ratio,
conduit::Node &  bp_mesh 
)

◆ MultiLevelToBlueprint() [2/2]

void amrex::MultiLevelToBlueprint ( int  n_levels,
const Vector< const MultiFab * > &  mfs,
const Vector< std::string > &  varnames,
const Vector< Geometry > &  geoms,
Real  time_value,
const Vector< int > &  level_steps,
const Vector< IntVect > &  ref_ratio,
Node &  res 
)
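
A hedged sketch of invoking the conversion (mf_vec, varnames, geoms, time, level_steps, and ref_ratio are assumed to be built elsewhere, and AMReX must be configured with Conduit support):

    conduit::Node bp_mesh;
    amrex::MultiLevelToBlueprint(nlevels,
                                 amrex::GetVecOfConstPtrs(mf_vec), // Vector<const MultiFab*>
                                 varnames, geoms, time, level_steps, ref_ratio, bp_mesh);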

◆ Multiply() [1/2]

template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::Multiply ( FabArray< FAB > &  dst,
FabArray< FAB > const &  src,
int  srccomp,
int  dstcomp,
int  numcomp,
const IntVect &  nghost 
)

◆ Multiply() [2/2]

template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::Multiply ( FabArray< FAB > &  dst,
FabArray< FAB > const &  src,
int  srccomp,
int  dstcomp,
int  numcomp,
int  nghost 
)
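
A minimal sketch, assuming dst and src are MultiFabs defined on the same BoxArray and DistributionMapping:

    // dst(i,j,k,0) *= src(i,j,k,0) over the valid region (no ghost cells).
    amrex::Multiply(dst, src, 0, 0, 1, 0);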

◆ nBytesOwned() [1/2]

template<typename T >
Long amrex::nBytesOwned ( BaseFab< T > const &  fab)
noexcept

◆ nBytesOwned() [2/2]

template<typename T , std::enable_if_t<!IsBaseFab< T >::value, int > = 0>
Long amrex::nBytesOwned ( T const &  )
noexcept

◆ nComp() [1/2]

template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF > &&(N > 0), int > = 0>
int amrex::nComp ( Array< MF, N > const &  mf)

◆ nComp() [2/2]

int amrex::nComp ( FabArrayBase const &  fa)

◆ Nestsets()

bool amrex::Nestsets ( const int  level,
const int  n_levels,
const FArrayBox &  fab,
const Vector< const BoxArray * >  box_arrays,
const Vector< IntVect > &  ref_ratio,
const Vector< int > &  domain_offsets,
conduit::Node &  nestset 
)

◆ neumann_scale() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::neumann_scale ( int  i,
int  j,
Box const &  nddom,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bchi 
)
noexcept

◆ neumann_scale() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::neumann_scale ( int  i,
int  j,
int  k,
Box const &  nddom,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bclo,
GpuArray< LinOpBCType, AMREX_SPACEDIM > const &  bchi 
)
noexcept

◆ nGrowVect() [1/2]

template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF > &&(N > 0), int > = 0>
IntVect amrex::nGrowVect ( Array< MF, N > const &  mf)

◆ nGrowVect() [2/2]

IntVect amrex::nGrowVect ( FabArrayBase const &  fa)

◆ NHops()

int amrex::NHops ( const Box &  tbox,
const IntVect &  ivfrom,
const IntVect &  ivto 
)

◆ nodebilin_interp()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::nodebilin_interp ( Box const &  bx,
Array4< T > const &  fine,
const int  fcomp,
const int  ncomp,
Array4< T const > const &  slope,
Array4< T const > const &  crse,
const int  ccomp,
IntVect const &  ratio 
)
noexcept

◆ nodebilin_slopes()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::nodebilin_slopes ( Box const &  bx,
Array4< T > const &  slope,
Array4< T const > const &  u,
const int  icomp,
const int  ncomp,
IntVect const &  ratio 
)
noexcept

◆ norm()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE T amrex::norm ( const GpuComplex< T > &  a_z)
noexcept

Return the norm (magnitude squared) of a complex number.
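
For example:

    amrex::GpuComplex<amrex::Real> z(3.0, 4.0);
    auto n2 = amrex::norm(z); // 25.0: the squared magnitude, not the modulus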

◆ NormHelper() [1/2]

template<typename MMF , typename Pred , typename F >
Real amrex::NormHelper ( const MMF &  mask,
const MultiFab &  x,
int  xcomp,
const MultiFab &  y,
int  ycomp,
Pred const &  pf,
F const &  f,
int  numcomp,
IntVect  nghost,
bool  local 
)

Returns part of a norm based on three MultiFabs.

The MultiFabs MUST have the same underlying BoxArray. The predicate pf is used to test the mask. The function f is applied elementwise as f(x(i,j,k,n), y(i,j,k,n)) inside the summation, subject to a valid mask entry pf(mask(i,j,k,n)).
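
A hedged sketch, assuming mask is an iMultiFab on the same BoxArray as x and y; this accumulates a dot product over cells whose mask entry is 1:

    Real r = amrex::NormHelper(mask, x, 0, y, 0,
                               [=] AMREX_GPU_DEVICE (int m) -> bool { return m == 1; },
                               [=] AMREX_GPU_DEVICE (Real a, Real b) -> Real { return a*b; },
                               1, IntVect(0), true);
    ParallelDescriptor::ReduceRealSum(r); // needed because local = true here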

◆ NormHelper() [2/2]

template<typename F >
Real amrex::NormHelper ( const MultiFab &  x,
int  xcomp,
const MultiFab &  y,
int  ycomp,
F const &  f,
int  numcomp,
IntVect  nghost,
bool  local 
)

Returns part of a norm based on two MultiFabs.

The MultiFabs MUST have the same underlying BoxArray. The function f is applied elementwise as f(x(i,j,k,n), y(i,j,k,n)) inside the summation.

◆ norminf() [1/2]

template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
MF::value_type amrex::norminf ( Array< MF, N > const &  mf,
int  scomp,
int  ncomp,
IntVect const &  nghost,
bool  local = false 
)

◆ norminf() [2/2]

template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
MF::value_type amrex::norminf ( MF const &  mf,
int  scomp,
int  ncomp,
IntVect const &  nghost,
bool  local = false 
)

◆ numParticlesOutOfRange() [1/6]

template<class Iterator , std::enable_if_t< IsParticleIterator< Iterator >::value, int > foo = 0>
int amrex::numParticlesOutOfRange ( Iterator const &  pti,
int  nGrow 
)

Returns the number of particles that are more than nGrow cells from the box corresponding to the input iterator.

Template Parameters
Iterator  an AMReX ParticleIterator
Parameters
pti  the iterator pointing to the current grid/tile to test
nGrow  the number of grow cells allowed.

◆ numParticlesOutOfRange() [2/6]

template<class Iterator , std::enable_if_t< IsParticleIterator< Iterator >::value &&!Iterator::ContainerType::ParticleType::is_soa_particle, int > foo = 0>
int amrex::numParticlesOutOfRange ( Iterator const &  pti,
IntVect  nGrow 
)

Returns the number of particles that are more than nGrow cells from the box corresponding to the input iterator.

Template Parameters
Iterator  an AMReX ParticleIterator
Parameters
pti  the iterator pointing to the current grid/tile to test
nGrow  the number of grow cells allowed.

◆ numParticlesOutOfRange() [3/6]

template<class PC , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
int amrex::numParticlesOutOfRange ( PC const &  pc,
int  lev_min,
int  lev_max,
int  nGrow 
)

Returns the number of particles that are more than nGrow cells from their assigned box.

This version goes over only the specified levels

Template Parameters
PC  a type of AMReX particle container.
Parameters
pc  the particle container to test
lev_min  the minimum level to test
lev_max  the maximum level to test
nGrow  the number of grow cells allowed.

◆ numParticlesOutOfRange() [4/6]

template<class PC , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
int amrex::numParticlesOutOfRange ( PC const &  pc,
int  lev_min,
int  lev_max,
IntVect  nGrow 
)

Returns the number of particles that are more than nGrow cells from their assigned box.

This version goes over only the specified levels

Template Parameters
PC  a type of AMReX particle container.
Parameters
pc  the particle container to test
lev_min  the minimum level to test
lev_max  the maximum level to test
nGrow  the number of grow cells allowed.

◆ numParticlesOutOfRange() [5/6]

template<class PC , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
int amrex::numParticlesOutOfRange ( PC const &  pc,
int  nGrow 
)

Returns the number of particles that are more than nGrow cells from their assigned box.

This version tests over all levels.

Template Parameters
PC  a type of AMReX particle container.
Parameters
pc  the particle container to test
nGrow  the number of grow cells allowed.
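
A typical use is as a sanity check (pc assumed to be a ParticleContainer):

    // After Redistribute(), every particle should be in its assigned box.
    AMREX_ALWAYS_ASSERT(amrex::numParticlesOutOfRange(pc, 0) == 0);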

◆ numParticlesOutOfRange() [6/6]

template<class PC , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
int amrex::numParticlesOutOfRange ( PC const &  pc,
IntVect  nGrow 
)

Returns the number of particles that are more than nGrow cells from their assigned box.

This version tests over all levels.

Template Parameters
PC  a type of AMReX particle container.
Parameters
pc  the particle container to test
nGrow  the number of grow cells allowed.

◆ numTilesInBox()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int amrex::numTilesInBox ( const Box &  box,
const bool  a_do_tiling,
const IntVect &  a_tile_size 
)

◆ numUniquePhysicalCores()

int amrex::numUniquePhysicalCores ( )

...

◆ ONES_COMP_NEG()

void amrex::ONES_COMP_NEG ( Long &  n,
int  nb,
Long  incr 
)
inline

◆ operator!=()

template<typename A1 , typename A2 , std::enable_if_t< IsArenaAllocator< A1 >::value &&IsArenaAllocator< A2 >::value, int > = 0>
bool amrex::operator!= ( A1 const &  a1,
A2 const &  a2 
)

◆ operator&()

FPExcept amrex::operator& ( FPExcept  a,
FPExcept  b 
)
inline

◆ operator*() [1/8]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::operator* ( const GpuComplex< T > &  a_x,
const GpuComplex< T > &  a_y 
)
noexcept

Multiply two complex numbers.

◆ operator*() [2/8]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::operator* ( const GpuComplex< T > &  a_x,
const T &  a_y 
)
noexcept

Multiply a complex number by a real one.

◆ operator*() [3/8]

AMREX_GPU_HOST_DEVICE RealVect amrex::operator* ( const RealVect &  s,
const RealVect &  p 
)
inlinenoexcept

Returns component-wise product of s and p.

◆ operator*() [4/8]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::operator* ( const T &  a_x,
const GpuComplex< T > &  a_y 
)
noexcept

Multiply a real number by a complex one.

◆ operator*() [5/8]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::operator* ( int  s,
const IntVectND< dim > &  p 
)
noexcept

Returns p * s.

◆ operator*() [6/8]

template<typename LLs , typename... As>
constexpr auto amrex::operator* ( LLs  ,
TypeList< As... >   
)
constexpr

◆ operator*() [7/8]

AMREX_GPU_HOST_DEVICE RealVect amrex::operator* ( Real  s,
const RealVect &  p 
)
inlinenoexcept

Returns a RealVect that is a RealVect p with each component multiplied by a scalar s.

◆ operator*() [8/8]

template<class U , int N1, int N2, int N3, Order Ord, int SI>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE SmallMatrix<U,N1,N3,Ord,SI> amrex::operator* ( SmallMatrix< U, N1, N2, Ord, SI > const &  lhs,
SmallMatrix< U, N2, N3, Ord, SI > const &  rhs 
)

◆ operator+() [1/8]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::operator+ ( const GpuComplex< T > &  a_x)

Identity operation on a complex number.

◆ operator+() [2/8]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::operator+ ( const GpuComplex< T > &  a_x,
const GpuComplex< T > &  a_y 
)
noexcept

Add two complex numbers.

◆ operator+() [3/8]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::operator+ ( const GpuComplex< T > &  a_x,
const T &  a_y 
)
noexcept

Add a real number to a complex one.

◆ operator+() [4/8]

AMREX_GPU_HOST_DEVICE RealVect amrex::operator+ ( const RealVect &  s,
const RealVect &  p 
)
inlinenoexcept

Returns component-wise sum of RealVects s and p.

◆ operator+() [5/8]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::operator+ ( const T &  a_x,
const GpuComplex< T > &  a_y 
)
noexcept

Add a complex number to a real one.

◆ operator+() [6/8]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::operator+ ( int  s,
const IntVectND< dim > &  p 
)
noexcept

Returns p + s.

◆ operator+() [7/8]

AMREX_GPU_HOST_DEVICE RealVect amrex::operator+ ( Real  s,
const RealVect &  p 
)
inlinenoexcept

Returns a RealVect that is a RealVect p with a scalar s added to each component.

◆ operator+() [8/8]

template<typename... As, typename... Bs>
constexpr auto amrex::operator+ ( TypeList< As... >  ,
TypeList< Bs... >   
)
constexpr

Concatenate two TypeLists.

◆ operator-() [1/7]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::operator- ( const GpuComplex< T > &  a_x)

Negate a complex number.

◆ operator-() [2/7]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::operator- ( const GpuComplex< T > &  a_x,
const GpuComplex< T > &  a_y 
)
noexcept

Subtract two complex numbers.

◆ operator-() [3/7]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::operator- ( const GpuComplex< T > &  a_x,
const T &  a_y 
)
noexcept

Subtract a real number from a complex one.

◆ operator-() [4/7]

AMREX_GPU_HOST_DEVICE RealVect amrex::operator- ( const RealVect &  s,
const RealVect &  p 
)
inlinenoexcept

Returns s - p.

◆ operator-() [5/7]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::operator- ( const T &  a_x,
const GpuComplex< T > &  a_y 
)
noexcept

Subtract a complex number from a real one.

◆ operator-() [6/7]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::operator- ( int  s,
const IntVectND< dim > &  p 
)
noexcept

Returns -p + s.

◆ operator-() [7/7]

AMREX_GPU_HOST_DEVICE RealVect amrex::operator- ( Real  s,
const RealVect &  p 
)
inlinenoexcept

Returns s - p.

◆ operator/() [1/5]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::operator/ ( const GpuComplex< T > &  a_x,
const GpuComplex< T > &  a_y 
)
noexcept

Divide a complex number by another one.

◆ operator/() [2/5]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::operator/ ( const GpuComplex< T > &  a_x,
const T &  a_y 
)
noexcept

Divide a complex number by a real.

◆ operator/() [3/5]

AMREX_GPU_HOST_DEVICE RealVect amrex::operator/ ( const RealVect &  s,
const RealVect &  p 
)
inlinenoexcept

Returns component-wise quotient p / s.

◆ operator/() [4/5]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::operator/ ( const T &  a_x,
const GpuComplex< T > &  a_y 
)
noexcept

Divide a real number by a complex one.

◆ operator/() [5/5]

AMREX_GPU_HOST_DEVICE RealVect amrex::operator/ ( Real  s,
const RealVect &  p 
)
inlinenoexcept

Returns a RealVect that is a RealVect p with each component divided by a scalar s.

◆ operator<<() [1/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
const Geometry &  g 
)

Nice ASCII output.

◆ operator<<() [2/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
const RealBox &  b 
)

Nice ASCII output.

◆ operator<<() [3/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
AmrMesh const &  amr_mesh 
)

◆ operator<<() [4/42]

template<typename T >
std::ostream& amrex::operator<< ( std::ostream &  os,
Array< T, AMREX_SPACEDIM > const &  a 
)

◆ operator<<() [5/42]

template<typename T >
std::ostream& amrex::operator<< ( std::ostream &  os,
const Array4< T > &  a 
)

◆ operator<<() [6/42]

std::ostream& amrex::operator<< ( std::ostream &  os,
const BCRec &  b 
)

◆ operator<<() [7/42]

std::ostream& amrex::operator<< ( std::ostream &  os,
const BoxArray &  ba 
)

Write a BoxArray to an ostream in ASCII format.

◆ operator<<() [8/42]

std::ostream& amrex::operator<< ( std::ostream &  os,
const BoxArray::RefID &  id 
)

◆ operator<<() [9/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
const BoxDomain &  bd 
)

Output a BoxDomain to an ostream in ASCII format.

◆ operator<<() [10/42]

std::ostream& amrex::operator<< ( std::ostream &  os,
const BoxList &  blist 
)

Output a BoxList to an ostream in ASCII format.

◆ operator<<() [11/42]

template<int dim>
std::ostream& amrex::operator<< ( std::ostream &  os,
const BoxND< dim > &  bx 
)

Write an ASCII representation to the ostream.

◆ operator<<() [12/42]

std::ostream& amrex::operator<< ( std::ostream &  os,
const CArena &  arena 
)

◆ operator<<() [13/42]

std::ostream& amrex::operator<< ( std::ostream &  os,
const CoordSys &  c 
)

◆ operator<<() [14/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
const dim3 &  d 
)

◆ operator<<() [15/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
const DistributionMapping &  pmap 
)

Our output operator.

◆ operator<<() [16/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
const DistributionMapping::RefID &  id 
)

◆ operator<<() [17/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
const EBCellFlag &  flag 
)

◆ operator<<() [18/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
const ErrorList &  elst 
)

◆ operator<<() [19/42]

std::ostream& amrex::operator<< ( std::ostream &  os,
const FabArrayBase::BDKey &  id 
)

◆ operator<<() [20/42]

std::ostream& amrex::operator<< ( std::ostream &  os,
const FArrayBox &  f 
)

◆ operator<<() [21/42]

template<int dim>
std::ostream& amrex::operator<< ( std::ostream &  os,
const IndexTypeND< dim > &  it 
)

Write an IndexTypeND to an ostream in ASCII.

◆ operator<<() [22/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
const IntDescriptor &  id 
)

Write out an IntDescriptor to an ostream in ASCII.

◆ operator<<() [23/42]

template<int dim>
std::ostream& amrex::operator<< ( std::ostream &  os,
const IntVectND< dim > &  iv 
)

◆ operator<<() [24/42]

std::ostream& amrex::operator<< ( std::ostream &  os,
const LinOpBCType &  t 
)

◆ operator<<() [25/42]

std::ostream& amrex::operator<< ( std::ostream &  os,
const Mask &  m 
)

◆ operator<<() [26/42]

std::ostream& amrex::operator<< ( std::ostream &  os,
const MemProfiler::Builds &  builds 
)

◆ operator<<() [27/42]

std::ostream& amrex::operator<< ( std::ostream &  os,
const MemProfiler::Bytes &  bytes 
)

◆ operator<<() [28/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
const Orientation &  o 
)

Write to an ostream in ASCII format.

◆ operator<<() [29/42]

template<int NReal = 0, int NInt = 0>
std::ostream& amrex::operator<< ( std::ostream &  os,
const Particle< 0, 0 > &  p 
)

◆ operator<<() [30/42]

template<int NInt>
std::ostream& amrex::operator<< ( std::ostream &  os,
const Particle< 0, NInt > &  p 
)

◆ operator<<() [31/42]

template<int NReal>
std::ostream& amrex::operator<< ( std::ostream &  os,
const Particle< NReal, 0 > &  p 
)

◆ operator<<() [32/42]

template<int NReal, int NInt>
std::ostream& amrex::operator<< ( std::ostream &  os,
const Particle< NReal, NInt > &  p 
)

◆ operator<<() [33/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
const amrex::RealDescriptor &  rd 
)

Write out an RealDescriptor to an ostream in ASCII.

◆ operator<<() [34/42]

template<typename T , typename S >
std::ostream& amrex::operator<< ( std::ostream &  os,
const std::pair< T, S > &  v 
)

◆ operator<<() [35/42]

template<typename T , std::enable_if_t< std::is_same_v< T, Dim3 >||std::is_same_v< T, XDim3 >> * = nullptr>
std::ostream& amrex::operator<< ( std::ostream &  os,
const T &  d 
)

◆ operator<<() [36/42]

static std::ostream& amrex::operator<< ( std::ostream &  os,
const Vector< Vector< Real > > &  ar 
)
static

◆ operator<<() [37/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
const Vector< VisMF::FabOnDisk > &  fa 
)

Write an Vector<FabOnDisk> to an ostream in ASCII.

◆ operator<<() [38/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
const VisMF::FabOnDisk &  fod 
)

Write a FabOnDisk to an ostream in ASCII.

◆ operator<<() [39/42]

std::ostream & amrex::operator<< ( std::ostream &  os,
const VisMF::Header &  hd 
)

Write a VisMF::Header to an ostream in ASCII.

◆ operator<<() [40/42]

template<class T , int NRows, int NCols, Order ORDER, int SI>
std::ostream& amrex::operator<< ( std::ostream &  os,
SmallMatrix< T, NRows, NCols, ORDER, SI > const &  mat 
)

◆ operator<<() [41/42]

std::ostream& amrex::operator<< ( std::ostream &  ostr,
const RealVect &  p 
)

Print to the given output stream in ASCII.

◆ operator<<() [42/42]

template<typename U >
std::ostream& amrex::operator<< ( std::ostream &  out,
const GpuComplex< U > &  c 
)

◆ operator==()

template<typename A1 , typename A2 , std::enable_if_t< IsArenaAllocator< A1 >::value &&IsArenaAllocator< A2 >::value, int > = 0>
bool amrex::operator== ( A1 const &  a1,
A2 const &  a2 
)

◆ operator>>() [1/16]

std::istream & amrex::operator>> ( std::istream &  is,
const expect &  exp 
)

◆ operator>>() [2/16]

std::istream & amrex::operator>> ( std::istream &  is,
Geometry &  g 
)

Nice ASCII input.

◆ operator>>() [3/16]

std::istream & amrex::operator>> ( std::istream &  is,
RealBox &  b 
)

Nice ASCII input.

◆ operator>>() [4/16]

template<int dim>
std::istream& amrex::operator>> ( std::istream &  is,
BoxND< dim > &  bx 
)

Read from istream.

◆ operator>>() [5/16]

std::istream& amrex::operator>> ( std::istream &  is,
CoordSys &  c 
)

◆ operator>>() [6/16]

std::istream& amrex::operator>> ( std::istream &  is,
FArrayBox &  f 
)

◆ operator>>() [7/16]

template<int dim>
std::istream& amrex::operator>> ( std::istream &  is,
IndexTypeND< dim > &  it 
)

Read an IndexTypeND from an istream.

◆ operator>>() [8/16]

std::istream & amrex::operator>> ( std::istream &  is,
IntDescriptor &  id 
)

Read in an IntDescriptor from an istream.

◆ operator>>() [9/16]

template<int dim>
std::istream& amrex::operator>> ( std::istream &  is,
IntVectND< dim > &  iv 
)

◆ operator>>() [10/16]

std::istream& amrex::operator>> ( std::istream &  is,
Mask &  m 
)

◆ operator>>() [11/16]

std::istream& amrex::operator>> ( std::istream &  is,
Orientation &  o 
)

◆ operator>>() [12/16]

std::istream & amrex::operator>> ( std::istream &  is,
amrex::RealDescriptor &  rd 
)

Read in a RealDescriptor from an istream.

◆ operator>>() [13/16]

std::istream& amrex::operator>> ( std::istream &  is,
RealVect &  iv 
)

◆ operator>>() [14/16]

std::istream & amrex::operator>> ( std::istream &  is,
Vector< VisMF::FabOnDisk > &  fa 
)

Read an Vector<FabOnDisk> from an istream.

◆ operator>>() [15/16]

std::istream & amrex::operator>> ( std::istream &  is,
VisMF::FabOnDisk &  fod 
)

Read a FabOnDisk from an istream.

◆ operator>>() [16/16]

std::istream & amrex::operator>> ( std::istream &  is,
VisMF::Header &  hd 
)

Read a VisMF::Header from an istream.

◆ operator|()

FPExcept amrex::operator| ( FPExcept  a,
FPExcept  b 
)
inline

◆ OutOfMemory()

void amrex::OutOfMemory ( )

Aborts after printing a message indicating out-of-memory; i.e., operator new has failed. This is the "supported" set_new_handler() function for AMReX applications.

◆ OutStream()

std::ostream & amrex::OutStream ( )

◆ OverrideSync()

template<class FAB , class IFAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value && IsBaseFab<IFAB>::value>>
void amrex::OverrideSync ( FabArray< FAB > &  fa,
FabArray< IFAB > const &  msk,
const Periodicity &  period 
)
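
A hedged sketch of the intended pairing with OwnerMask() (fa assumed to be a nodal MultiFab and period its Periodicity); points shared by multiple boxes are overwritten with the owning box's value so the data become consistent:

    auto owner = amrex::OwnerMask(fa, period, amrex::IntVect(0));
    amrex::OverrideSync(fa, *owner, period);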

◆ OverrideSync_finish()

template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::OverrideSync_finish ( FabArray< FAB > &  fa)

◆ OverrideSync_nowait()

template<class FAB , class IFAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value && IsBaseFab<IFAB>::value>>
void amrex::OverrideSync_nowait ( FabArray< FAB > &  fa,
FabArray< IFAB > const &  msk,
const Periodicity &  period 
)

◆ overset_rescale_bcoef_x()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::overset_rescale_bcoef_x ( Box const &  box,
Array4< T > const &  bX,
Array4< int const > const &  osm,
int  ncomp,
osfac 
)
noexcept

◆ overset_rescale_bcoef_y()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::overset_rescale_bcoef_y ( Box const &  box,
Array4< T > const &  bY,
Array4< int const > const &  osm,
int  ncomp,
osfac 
)
noexcept

◆ overset_rescale_bcoef_z()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::overset_rescale_bcoef_z ( Box const &  box,
Array4< T > const &  bZ,
Array4< int const > const &  osm,
int  ncomp,
osfac 
)
noexcept

◆ OwnerMask()

std::unique_ptr< iMultiFab > amrex::OwnerMask ( FabArrayBase const &  mf,
const Periodicity &  period,
const IntVect &  ngrow 
)

◆ packBuffer()

template<class PC , class Buffer , std::enable_if_t< IsParticleContainer< PC >::value &&std::is_base_of_v< PolymorphicArenaAllocator< typename Buffer::value_type >, Buffer >, int > foo = 0>
void amrex::packBuffer ( const PC &  pc,
const ParticleCopyOp &  op,
const ParticleCopyPlan &  plan,
Buffer &  snd_buffer 
)

◆ ParallelCopy() [1/2]

template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::ParallelCopy ( Array< MF, N > &  dst,
Array< MF, N > const &  src,
int  scomp,
int  dcomp,
int  ncomp,
IntVect const &  ng_src = IntVect(0),
IntVect const &  ng_dst = IntVect(0),
Periodicity const &  period = Periodicity::NonPeriodic() 
)

dst = src w/ MPI communication

◆ ParallelCopy() [2/2]

template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::ParallelCopy ( MF &  dst,
MF const &  src,
int  scomp,
int  dcomp,
int  ncomp,
IntVect const &  ng_src = IntVect(0),
IntVect const &  ng_dst = IntVect(0),
Periodicity const &  period = Periodicity::NonPeriodic() 
)

dst = src w/ MPI communication
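
A minimal sketch, assuming dst and src are MultiFabs (possibly with different BoxArrays) and geom is the level's Geometry:

    // Copy component 0 of src into component 0 of dst, reading one ghost
    // cell from src and honoring periodic boundaries.
    amrex::ParallelCopy(dst, src, 0, 0, 1, IntVect(1), IntVect(0), geom.periodicity());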

◆ ParallelFor() [1/54]

template<int MT, typename L , int dim>
void amrex::ParallelFor ( BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ ParallelFor() [2/54]

template<typename L , int dim>
void amrex::ParallelFor ( BoxND< dim > const &  box,
L &&  f 
)
noexcept
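
A minimal sketch of the common 3D form (arr assumed to be an Array4<Real> obtained from a MultiFab inside an MFIter loop):

    amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) noexcept
    {
        arr(i,j,k) += 1.0;
    });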

◆ ParallelFor() [3/54]

template<typename L , int dim>
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::ParallelFor ( BoxND< dim > const &  box,
L const &  f 
)
noexcept

◆ ParallelFor() [4/54]

template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::ParallelFor ( BoxND< dim > const &  box,
ncomp,
L &&  f 
)
noexcept

◆ ParallelFor() [5/54]

template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral<T>::value>>
void amrex::ParallelFor ( BoxND< dim > const &  box,
ncomp,
L &&  f 
)
noexcept

◆ ParallelFor() [6/54]

template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::ParallelFor ( BoxND< dim > const &  box,
ncomp,
L const &  f 
)
noexcept
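
The ncomp overloads add an inner iteration over components; a minimal sketch (same assumptions as above):

    amrex::ParallelFor(box, ncomp,
    [=] AMREX_GPU_DEVICE (int i, int j, int k, int n) noexcept
    {
        arr(i,j,k,n) *= 2.0;
    });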

◆ ParallelFor() [7/54]

template<typename L1 , typename L2 , typename L3 , int dim>
void amrex::ParallelFor ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ ParallelFor() [8/54]

template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void amrex::ParallelFor ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ ParallelFor() [9/54]

template<typename L1 , typename L2 , int dim>
void amrex::ParallelFor ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ ParallelFor() [10/54]

template<int MT, typename L1 , typename L2 , int dim>
void amrex::ParallelFor ( BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ ParallelFor() [11/54]

template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::ParallelFor ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ ParallelFor() [12/54]

template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::ParallelFor ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ ParallelFor() [13/54]

template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::ParallelFor ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ ParallelFor() [14/54]

template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::ParallelFor ( BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ ParallelFor() [15/54]

template<typename L , int dim>
void amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ ParallelFor() [16/54]

template<int MT, typename L , int dim>
void amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ ParallelFor() [17/54]

template<int MT, typename L , int dim>
std::enable_if_t<MaybeDeviceRunnable<L>::value> amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box,
L const &  f 
)
noexcept

◆ ParallelFor() [18/54]

template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box,
ncomp,
L &&  f 
)
noexcept

◆ ParallelFor() [19/54]

template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box,
ncomp,
L &&  f 
)
noexcept

◆ ParallelFor() [20/54]

template<int MT, typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t<MaybeDeviceRunnable<L>::value> amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box,
ncomp,
L const &  f 
)
noexcept

◆ ParallelFor() [21/54]

template<typename L1 , typename L2 , typename L3 , int dim>
void amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ ParallelFor() [22/54]

template<int MT, typename L1 , typename L2 , typename L3 , int dim>
void amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ ParallelFor() [23/54]

template<int MT, typename L1 , typename L2 , typename L3 , int dim>
std::enable_if_t<MaybeDeviceRunnable<L1>::value && MaybeDeviceRunnable<L2>::value && MaybeDeviceRunnable<L3>::value> amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ ParallelFor() [24/54]

template<typename L1 , typename L2 , int dim>
void amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ ParallelFor() [25/54]

template<int MT, typename L1 , typename L2 , int dim>
void amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ ParallelFor() [26/54]

template<int MT, typename L1 , typename L2 , int dim>
std::enable_if_t<MaybeDeviceRunnable<L1>::value && MaybeDeviceRunnable<L2>::value> amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ ParallelFor() [27/54]

template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ ParallelFor() [28/54]

template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>>
void amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ ParallelFor() [29/54]

template<int MT, typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>>
std::enable_if_t<MaybeDeviceRunnable<L1>::value && MaybeDeviceRunnable<L2>::value> amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ ParallelFor() [30/54]

template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ ParallelFor() [31/54]

template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral_v<T1>>, typename M2 = std::enable_if_t<std::is_integral_v<T2>>, typename M3 = std::enable_if_t<std::is_integral_v<T3>>>
void amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ ParallelFor() [32/54]

template<int MT, typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>, typename M3 = std::enable_if_t<std::is_integral<T3>::value>>
std::enable_if_t<MaybeDeviceRunnable<L1>::value && MaybeDeviceRunnable<L2>::value && MaybeDeviceRunnable<L3>::value> amrex::ParallelFor ( Gpu::KernelInfo const &  ,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ ParallelFor() [33/54]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::ParallelFor ( Gpu::KernelInfo const &  ,
n,
L &&  f 
)
noexcept

◆ ParallelFor() [34/54]

template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::ParallelFor ( Gpu::KernelInfo const &  ,
n,
L &&  f 
)
noexcept

◆ ParallelFor() [35/54]

template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t<MaybeDeviceRunnable<L>::value> amrex::ParallelFor ( Gpu::KernelInfo const &  ,
n,
L const &  f 
)
noexcept

◆ ParallelFor() [36/54]

template<typename L , int dim>
std::enable_if_t<MaybeDeviceRunnable<L>::value> amrex::ParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box,
L &&  f 
)
noexcept

◆ ParallelFor() [37/54]

template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t<MaybeDeviceRunnable<L>::value> amrex::ParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box,
ncomp,
L &&  f 
)
noexcept

◆ ParallelFor() [38/54]

template<typename L1 , typename L2 , typename L3 , int dim>
std::enable_if_t<MaybeDeviceRunnable<L1>::value && MaybeDeviceRunnable<L2>::value && MaybeDeviceRunnable<L3>::value> amrex::ParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
BoxND< dim > const &  box3,
L1 &&  f1,
L2 &&  f2,
L3 &&  f3 
)
noexcept

◆ ParallelFor() [39/54]

template<typename L1 , typename L2 , int dim>
std::enable_if_t<MaybeDeviceRunnable<L1>::value && MaybeDeviceRunnable<L2>::value> amrex::ParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
BoxND< dim > const &  box2,
L1 &&  f1,
L2 &&  f2 
)
noexcept

◆ ParallelFor() [40/54]

template<typename T1 , typename T2 , typename L1 , typename L2 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>>
std::enable_if_t<MaybeDeviceRunnable<L1>::value && MaybeDeviceRunnable<L2>::value> amrex::ParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2 
)
noexcept

◆ ParallelFor() [41/54]

template<typename T1 , typename T2 , typename T3 , typename L1 , typename L2 , typename L3 , int dim, typename M1 = std::enable_if_t<std::is_integral<T1>::value>, typename M2 = std::enable_if_t<std::is_integral<T2>::value>, typename M3 = std::enable_if_t<std::is_integral<T3>::value>>
std::enable_if_t<MaybeDeviceRunnable<L1>::value && MaybeDeviceRunnable<L2>::value && MaybeDeviceRunnable<L3>::value> amrex::ParallelFor ( Gpu::KernelInfo const &  info,
BoxND< dim > const &  box1,
T1  ncomp1,
L1 &&  f1,
BoxND< dim > const &  box2,
T2  ncomp2,
L2 &&  f2,
BoxND< dim > const &  box3,
T3  ncomp3,
L3 &&  f3 
)
noexcept

◆ ParallelFor() [42/54]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t<MaybeDeviceRunnable<L>::value> amrex::ParallelFor ( Gpu::KernelInfo const &  info,
n,
L &&  f 
)
noexcept

◆ ParallelFor() [43/54]

template<int MT, typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
void amrex::ParallelFor ( n,
L &&  f 
)
noexcept

◆ ParallelFor() [44/54]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
void amrex::ParallelFor ( n,
L &&  f 
)
noexcept

◆ ParallelFor() [45/54]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::ParallelFor ( n,
L const &  f 
)
noexcept

◆ ParallelFor() [46/54]

template<class F , int dim, typename... CTOs>
void amrex::ParallelFor ( TypeList< CTOs... >  ctos,
std::array< int, sizeof...(CTOs)> const &  option,
BoxND< dim > const &  box,
F &&  f 
)

ParallelFor with compile time optimization of kernels with run time options.

It uses fold expression to generate kernel launches for all combinations of the run time options. The kernel function can use constexpr if to discard unused code blocks for better run time performance. In the example below, the code will be expanded into 4*2=8 normal ParallelFors for all combinations of the run time parameters.

    int A_runtime_option = ...;
    int B_runtime_option = ...;
    enum A_options : int { A0, A1, A2, A3};
    enum B_options : int { B0, B1 };
    ParallelFor(TypeList<CompileTimeOptions<A0,A1,A2,A3>,
                         CompileTimeOptions<B0,B1>>{},
                {A_runtime_option, B_runtime_option},
                box, [=] AMREX_GPU_DEVICE (int i, int j, int k,
                                           auto A_control, auto B_control)
    {
        ...
        if constexpr (A_control.value == A0) {
            ...
        } else if constexpr (A_control.value == A1) {
            ...
        } else if constexpr (A_control.value == A2) {
            ...
        } else {
            ...
        }
        if constexpr (A_control.value != A3 && B_control.value == B1) {
            ...
        }
        ...
    });

Note that due to a limitation of CUDA's extended device lambda, the constexpr if block cannot be the one that captures a variable first. If nvcc complains about it, you will have to manually capture it outside constexpr if. The data type for the parameters is int.

Parameters
ctos  list of all possible values of the parameters.
option  the run time parameters.
box  a Box specifying the 3D for loop's range.
f  a callable object taking three integers and working on the given cell.

◆ ParallelFor() [47/54]

template<typename T , class F , int dim, typename... CTOs>
std::enable_if_t<std::is_integral_v<T> > amrex::ParallelFor ( TypeList< CTOs... >  ctos,
std::array< int, sizeof...(CTOs)> const &  option,
BoxND< dim > const &  box,
ncomp,
F &&  f 
)

ParallelFor with compile time optimization of kernels with run time options.

It uses fold expression to generate kernel launches for all combinations of the run time options. The kernel function can use constexpr if to discard unused code blocks for better run time performance. In the example below, the code will be expanded into 4*2=8 normal ParallelFors for all combinations of the run time parameters.

    int A_runtime_option = ...;
    int B_runtime_option = ...;
    enum A_options : int { A0, A1, A2, A3};
    enum B_options : int { B0, B1 };
    ParallelFor(TypeList<CompileTimeOptions<A0,A1,A2,A3>,
                         CompileTimeOptions<B0,B1>>{},
                {A_runtime_option, B_runtime_option},
                box, ncomp, [=] AMREX_GPU_DEVICE (int i, int j, int k, int n,
                                                  auto A_control, auto B_control)
    {
        ...
        if constexpr (A_control.value == A0) {
            ...
        } else if constexpr (A_control.value == A1) {
            ...
        } else if constexpr (A_control.value == A2) {
            ...
        } else {
            ...
        }
        if constexpr (A_control.value != A3 && B_control.value == B1) {
            ...
        }
        ...
    });

Note that due to a limitation of CUDA's extended device lambda, the constexpr if block cannot be the one that captures a variable first. If nvcc complains about it, you will have to manually capture it outside constexpr if. The data type for the parameters is int.

Parameters
ctos  list of all possible values of the parameters.
option  the run time parameters.
box  a Box specifying the iteration in 3D space.
ncomp  an integer specifying the range for iteration over components.
f  a callable object taking four integers (i, j, k, n) and working on the given cell and component.

◆ ParallelFor() [48/54]

template<typename T , class F , typename... CTOs>
std::enable_if_t<std::is_integral_v<T> > amrex::ParallelFor ( TypeList< CTOs... >  ctos,
std::array< int, sizeof...(CTOs)> const &  option,
N,
F &&  f 
)

ParallelFor with compile time optimization of kernels with run time options.

It uses fold expression to generate kernel launches for all combinations of the run time options. The kernel function can use constexpr if to discard unused code blocks for better run time performance. In the example below, the code will be expanded into 4*2=8 normal ParallelFors for all combinations of the run time parameters.

    int A_runtime_option = ...;
    int B_runtime_option = ...;
    enum A_options : int { A0, A1, A2, A3};
    enum B_options : int { B0, B1 };
    ParallelFor(TypeList<CompileTimeOptions<A0,A1,A2,A3>,
                         CompileTimeOptions<B0,B1>>{},
                {A_runtime_option, B_runtime_option},
                N, [=] AMREX_GPU_DEVICE (int i, auto A_control, auto B_control)
    {
        ...
        if constexpr (A_control.value == A0) {
            ...
        } else if constexpr (A_control.value == A1) {
            ...
        } else if constexpr (A_control.value == A2) {
            ...
        } else {
            ...
        }
        if constexpr (A_control.value != A3 && B_control.value == B1) {
            ...
        }
        ...
    });

Note that due to a limitation of CUDA's extended device lambda, the constexpr if block cannot be the one that captures a variable first. If nvcc complains about it, you will have to manually capture it outside constexpr if. The data type for the parameters is int.

Parameters
ctos  list of all possible values of the parameters.
option  the run time parameters.
N  an integer specifying the 1D for loop's range.
f  a callable object taking an integer and working on that iteration.

◆ ParallelFor() [49/54]

template<int MT, class F , int dim, typename... CTOs>
void amrex::ParallelFor ( TypeList< CTOs... >  ctos,
std::array< int, sizeof...(CTOs)> const &  runtime_options,
BoxND< dim > const &  box,
F &&  f 
)

◆ ParallelFor() [50/54]

template<int MT, typename T , class F , int dim, typename... CTOs>
std::enable_if_t<std::is_integral_v<T> > amrex::ParallelFor ( TypeList< CTOs... >  ctos,
std::array< int, sizeof...(CTOs)> const &  runtime_options,
BoxND< dim > const &  box,
ncomp,
F &&  f 
)

◆ ParallelFor() [51/54]

template<int MT, typename T , class F , typename... CTOs>
std::enable_if_t<std::is_integral_v<T> > amrex::ParallelFor ( TypeList< CTOs... >  ctos,
std::array< int, sizeof...(CTOs)> const &  runtime_options,
N,
F &&  f 
)

◆ ParallelFor() [52/54]

template<class TagType , class F >
std::enable_if_t<std::is_same<std::decay_t<decltype(std::declval<TagType>().box())>, Box>::value> amrex::ParallelFor ( Vector< TagType > const &  tags,
F &&  f 
)

◆ ParallelFor() [53/54]

template<class TagType , class F >
std::enable_if_t<std::is_integral<std::decay_t<decltype(std::declval<TagType>().size())> >::value> amrex::ParallelFor ( Vector< TagType > const &  tags,
F &&  f 
)

◆ ParallelFor() [54/54]

template<class TagType , class F >
std::enable_if_t<std::is_same<std::decay_t<decltype(std::declval<TagType>().box())>, Box>::value> amrex::ParallelFor ( Vector< TagType > const &  tags,
int  ncomp,
F &&  f 
)

◆ ParallelForRNG() [1/6]

template<typename L , int dim>
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::ParallelForRNG ( BoxND< dim > const &  box,
L const &  f 
)
noexcept

◆ ParallelForRNG() [2/6]

template<typename L , int dim>
std::enable_if_t<MaybeDeviceRunnable<L>::value> amrex::ParallelForRNG ( BoxND< dim > const &  box,
L const &  f 
)
noexcept

◆ ParallelForRNG() [3/6]

template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral_v<T>>>
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::ParallelForRNG ( BoxND< dim > const &  box,
ncomp,
L const &  f 
)
noexcept

◆ ParallelForRNG() [4/6]

template<typename T , typename L , int dim, typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t<MaybeDeviceRunnable<L>::value> amrex::ParallelForRNG ( BoxND< dim > const &  box,
ncomp,
L const &  f 
)
noexcept

◆ ParallelForRNG() [5/6]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral_v<T>>>
AMREX_ATTRIBUTE_FLATTEN_FOR void amrex::ParallelForRNG ( n,
L const &  f 
)
noexcept

◆ ParallelForRNG() [6/6]

template<typename T , typename L , typename M = std::enable_if_t<std::is_integral<T>::value>>
std::enable_if_t<MaybeDeviceRunnable<L>::value> amrex::ParallelForRNG ( n,
L const &  f 
)
noexcept
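
The RNG variants pass a per-thread random engine as the last lambda argument; a minimal sketch (arr assumed to be an Array4<Real>):

    amrex::ParallelForRNG(box,
    [=] AMREX_GPU_DEVICE (int i, int j, int k, amrex::RandomEngine const& engine) noexcept
    {
        arr(i,j,k) = amrex::Random(engine); // uniform in [0,1)
    });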

◆ ParReduce() [1/6]

template<typename Op , typename T , typename FAB , typename F , typename foo = std::enable_if_t<IsBaseFab<FAB>::value>>
T amrex::ParReduce ( TypeList< Op >  operation_list,
TypeList< T >  type_list,
FabArray< FAB > const &  fa,
F &&  f 
)

Parallel reduce for MultiFab/FabArray.

This performs reduction over a MultiFab's valid region. For example, the code below computes the sum of the processed data in a MultiFab.

    auto const& ma = mf.const_arrays();
    Real ektot = ParReduce(TypeList<ReduceOpSum>{}, TypeList<Real>{}, mf,
    [=] AMREX_GPU_DEVICE (int box_no, int i, int j, int k) noexcept
        -> GpuTuple<Real>
    {
        auto rho = ma[box_no](i,j,k,0);
        auto mx = ma[box_no](i,j,k,1);
        auto my = ma[box_no](i,j,k,2);
        auto mz = ma[box_no](i,j,k,3);
        auto ek = (mx*mx+my*my+mz*mz)/(2.*rho);
        return { ek };
    });
Template Parameters
Op  Reduce operator (e.g., ReduceOpSum, ReduceOpMin, ReduceOpMax, ReduceOpLogicalAnd, and ReduceOpLogicalOr)
T  data type (e.g., Real, int, etc.)
FAB  MultiFab/FabArray type
F  callable type like a lambda function
Parameters
operation_list  a reduce operator stored in TypeList
type_list  a data type stored in TypeList
fa  a MultiFab/FabArray object used to specify the iteration space
f  a callable object returning GpuTuple<T>. It takes four ints, where the first int is the local box index and the others are spatial indices for x, y, and z-directions.
Returns
reduction result (T)

◆ ParReduce() [2/6]

template<typename Op , typename T , typename FAB , typename F , typename foo = std::enable_if_t<IsBaseFab<FAB>::value>>
T amrex::ParReduce ( TypeList< Op >  operation_list,
TypeList< T >  type_list,
FabArray< FAB > const &  fa,
IntVect const &  nghost,
F &&  f 
)

Parallel reduce for MultiFab/FabArray.

This performs reduction over a MultiFab's valid and specified ghost regions. For example, the code below computes the sum of the processed data in a MultiFab.

    auto const& ma = mf.const_arrays();
    Real ektot = ParReduce(TypeList<ReduceOpSum>{}, TypeList<Real>{},
                           mf, IntVect(0),
    [=] AMREX_GPU_DEVICE (int box_no, int i, int j, int k) noexcept
        -> GpuTuple<Real>
    {
        auto rho = ma[box_no](i,j,k,0);
        auto mx = ma[box_no](i,j,k,1);
        auto my = ma[box_no](i,j,k,2);
        auto mz = ma[box_no](i,j,k,3);
        auto ek = (mx*mx+my*my+mz*mz)/(2.*rho);
        return { ek };
    });
Template Parameters
Op: reduce operator (e.g., ReduceOpSum, ReduceOpMin, ReduceOpMax, ReduceOpLogicalAnd, and ReduceOpLogicalOr)
T: data type (e.g., Real, int, etc.)
FAB: MultiFab/FabArray type
F: callable type, such as a lambda function
Parameters
operation_list: a reduce operator stored in TypeList
type_list: a data type stored in TypeList
fa: a MultiFab/FabArray object used to specify the iteration space
nghost: the number of ghost cells included in the iteration space
f: a callable object returning GpuTuple<T>. It takes four ints, where the first int is the local box index and the others are spatial indices for the x, y, and z-directions.
Returns
reduction result (T)

◆ ParReduce() [3/6]

template<typename Op , typename T , typename FAB , typename F , typename foo = std::enable_if_t<IsBaseFab<FAB>::value>>
T amrex::ParReduce ( TypeList< Op >  operation_list,
TypeList< T >  type_list,
FabArray< FAB > const &  fa,
IntVect const &  nghost,
int  ncomp,
F &&  f 
)

Parallel reduce for MultiFab/FabArray.

This performs reduction over a MultiFab's valid and specified ghost regions and components. For example, the code below computes the sum of the data in a MultiFab.

    auto const& ma = mf.const_arrays();
    Real ektot = ParReduce(TypeList<ReduceOpSum>{}, TypeList<Real>{},
                           mf, mf.nGrowVect(), mf.nComp(),
    [=] AMREX_GPU_DEVICE (int box_no, int i, int j, int k, int n) noexcept
        -> GpuTuple<Real>
    {
        return { ma[box_no](i,j,k,n) };
    });
Template Parameters
Op: reduce operator (e.g., ReduceOpSum, ReduceOpMin, ReduceOpMax, ReduceOpLogicalAnd, and ReduceOpLogicalOr)
T: data type (e.g., Real, int, etc.)
FAB: MultiFab/FabArray type
F: callable type, such as a lambda function
Parameters
operation_list: a reduce operator stored in TypeList
type_list: a data type stored in TypeList
fa: a MultiFab/FabArray object used to specify the iteration space
nghost: the number of ghost cells included in the iteration space
ncomp: the number of components in the iteration space
f: a callable object returning GpuTuple<T>. It takes five ints, where the first int is the local box index, the next three are spatial indices for the x, y, and z-directions, and the last is for the component.
Returns
reduction result (T)

◆ ParReduce() [4/6]

template<typename... Ops, typename... Ts, typename FAB , typename F , typename foo = std::enable_if_t<IsBaseFab<FAB>::value>>
ReduceData<Ts...>::Type amrex::ParReduce ( TypeList< Ops... >  operation_list,
TypeList< Ts... >  type_list,
FabArray< FAB > const &  fa,
F &&  f 
)

Parallel reduce for MultiFab/FabArray.

This performs reduction over a MultiFab's valid region. For example, the code below computes the minimum of the first MultiFab and the maximum of the second MultiFab.

    auto const& ma1 = mf1.const_arrays();
    auto const& ma2 = mf2.const_arrays();
    GpuTuple<Real,Real> mm = ParReduce(TypeList<ReduceOpMin,ReduceOpMax>{},
                                       TypeList<Real,Real>{}, mf1,
    [=] AMREX_GPU_DEVICE (int box_no, int i, int j, int k) noexcept
        -> GpuTuple<Real,Real>
    {
        return { ma1[box_no](i,j,k), ma2[box_no](i,j,k) };
    });
Template Parameters
Ops...: reduce operators (e.g., ReduceOpSum, ReduceOpMin, ReduceOpMax, ReduceOpLogicalAnd, and ReduceOpLogicalOr)
Ts...: data types (e.g., Real, int, etc.)
FAB: MultiFab/FabArray type
F: callable type, such as a lambda function
Parameters
operation_list: list of reduce operators
type_list: list of data types
fa: a MultiFab/FabArray object used to specify the iteration space
f: a callable object returning GpuTuple<Ts...>. It takes four ints, where the first int is the local box index and the others are spatial indices for the x, y, and z-directions.
Returns
reduction result (GpuTuple<Ts...>)

◆ ParReduce() [5/6]

template<typename... Ops, typename... Ts, typename FAB , typename F , typename foo = std::enable_if_t<IsBaseFab<FAB>::value>>
ReduceData<Ts...>::Type amrex::ParReduce ( TypeList< Ops... >  operation_list,
TypeList< Ts... >  type_list,
FabArray< FAB > const &  fa,
IntVect const &  nghost,
F &&  f 
)

Parallel reduce for MultiFab/FabArray.

This performs reduction over a MultiFab's valid and specified ghost regions. For example, the code below computes the minimum of the first MultiFab and the maximum of the second MultiFab.

    auto const& ma1 = mf1.const_arrays();
    auto const& ma2 = mf2.const_arrays();
    GpuTuple<Real,Real> mm = ParReduce(TypeList<ReduceOpMin,ReduceOpMax>{},
                                       TypeList<Real,Real>{},
                                       mf1, mf1.nGrowVect(),
    [=] AMREX_GPU_DEVICE (int box_no, int i, int j, int k) noexcept
        -> GpuTuple<Real,Real>
    {
        return { ma1[box_no](i,j,k), ma2[box_no](i,j,k) };
    });
Template Parameters
Ops...: reduce operators (e.g., ReduceOpSum, ReduceOpMin, ReduceOpMax, ReduceOpLogicalAnd, and ReduceOpLogicalOr)
Ts...: data types (e.g., Real, int, etc.)
FAB: MultiFab/FabArray type
F: callable type, such as a lambda function
Parameters
operation_list: list of reduce operators
type_list: list of data types
fa: a MultiFab/FabArray object used to specify the iteration space
nghost: the number of ghost cells included in the iteration space
f: a callable object returning GpuTuple<Ts...>. It takes four ints, where the first int is the local box index and the others are spatial indices for the x, y, and z-directions.
Returns
reduction result (GpuTuple<Ts...>)

◆ ParReduce() [6/6]

template<typename... Ops, typename... Ts, typename FAB , typename F , typename foo = std::enable_if_t<IsBaseFab<FAB>::value>>
ReduceData<Ts...>::Type amrex::ParReduce ( TypeList< Ops... >  operation_list,
TypeList< Ts... >  type_list,
FabArray< FAB > const &  fa,
IntVect const &  nghost,
int  ncomp,
F &&  f 
)

Parallel reduce for MultiFab/FabArray.

This performs reduction over a MultiFab's valid and specified ghost regions and components. For example, the code below computes the minimum of the first MultiFab and the maximum of the second MultiFab.

    auto const& ma1 = mf1.const_arrays();
    auto const& ma2 = mf2.const_arrays();
    GpuTuple<Real,Real> mm = ParReduce(TypeList<ReduceOpMin,ReduceOpMax>{},
                                       TypeList<Real,Real>{},
                                       mf1, mf1.nGrowVect(), mf1.nComp(),
    [=] AMREX_GPU_DEVICE (int box_no, int i, int j, int k, int n) noexcept
        -> GpuTuple<Real,Real>
    {
        return { ma1[box_no](i,j,k,n), ma2[box_no](i,j,k,n) };
    });
Template Parameters
Ops...: reduce operators (e.g., ReduceOpSum, ReduceOpMin, ReduceOpMax, ReduceOpLogicalAnd, and ReduceOpLogicalOr)
Ts...: data types (e.g., Real, int, etc.)
FAB: MultiFab/FabArray type
F: callable type, such as a lambda function
Parameters
operation_list: list of reduce operators
type_list: list of data types
fa: a MultiFab/FabArray object used to specify the iteration space
nghost: the number of ghost cells included in the iteration space
ncomp: the number of components in the iteration space
f: a callable object returning GpuTuple<Ts...>. It takes five ints, where the first int is the local box index, the next three are spatial indices for the x, y, and z-directions, and the last is for the component.
Returns
reduction result (GpuTuple<Ts...>)

◆ parser_ast_depth()

int amrex::parser_ast_depth ( struct parser_node *  node)

◆ parser_ast_dup()

struct parser_node * amrex::parser_ast_dup ( struct amrex_parser *  my_parser,
struct parser_node *  node,
int  move 
)

◆ parser_ast_get_symbols()

void amrex::parser_ast_get_symbols ( struct parser_node *  node,
std::set< std::string > &  symbols,
std::set< std::string > &  local_symbols 
)

◆ parser_ast_optimize()

void amrex::parser_ast_optimize ( struct parser_node *  node)

◆ parser_ast_print()

void amrex::parser_ast_print ( struct parser_node *  node,
std::string const &  space,
std::ostream &  printer 
)

◆ parser_ast_regvar()

void amrex::parser_ast_regvar ( struct parser_node *  node,
char const *  name,
int  i 
)

◆ parser_ast_setconst()

void amrex::parser_ast_setconst ( struct parser_node *  node,
char const *  name,
double  c 
)

◆ parser_ast_size()

std::size_t amrex::parser_ast_size ( struct parser_node *  node)

◆ parser_ast_sort()

void amrex::parser_ast_sort ( struct parser_node *  node)

◆ parser_call_f1()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE double amrex::parser_call_f1 ( enum parser_f1_t  type,
double  a 
)

◆ parser_call_f2()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE double amrex::parser_call_f2 ( enum parser_f2_t  type,
double  a,
double  b 
)

◆ parser_call_f3()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE double amrex::parser_call_f3 ( enum  parser_f3_t,
double  a,
double  b,
double  c 
)

◆ parser_compile()

Vector<char const*> amrex::parser_compile ( struct amrex_parser *  parser,
char *  p 
)
inline

◆ parser_compile_exe_size()

void amrex::parser_compile_exe_size ( struct parser_node *  node,
char *&  p,
std::size_t &  exe_size,
int &  max_stack_size,
int &  stack_size,
Vector< char const * > &  local_variables 
)

◆ parser_defexpr()

void amrex::parser_defexpr ( struct parser_node *  body)

◆ parser_depth()

int amrex::parser_depth ( struct amrex_parser *  parser)

◆ parser_dup()

struct amrex_parser * amrex::parser_dup ( struct amrex_parser *  source)

◆ parser_exe_eval()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE double amrex::parser_exe_eval ( const char *  p,
double const *  x 
)

◆ parser_exe_print()

void amrex::parser_exe_print ( char const *  p,
Vector< std::string > const &  vars,
Vector< char const * > const &  locals 
)

◆ parser_exe_size()

std::size_t amrex::parser_exe_size ( struct amrex_parser *  parser,
int &  max_stack_size,
int &  stack_size 
)
inline

◆ parser_get_number()

double amrex::parser_get_number ( struct parser_node *  node)

◆ parser_get_symbols()

std::set< std::string > amrex::parser_get_symbols ( struct amrex_parser *  parser)

◆ parser_makesymbol()

struct parser_symbol * amrex::parser_makesymbol ( char *  name)

◆ parser_math_acos()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_acos ( T  a)

◆ parser_math_acosh()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_acosh ( T  a)

◆ parser_math_asin()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_asin ( T  a)

◆ parser_math_asinh()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_asinh ( T  a)

◆ parser_math_atan()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_atan ( T  a)

◆ parser_math_atan2()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_atan2 ( T  a,
T  b 
)

◆ parser_math_atanh()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_atanh ( T  a)

◆ parser_math_comp_ellint_1()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE T amrex::parser_math_comp_ellint_1 ( T  k)

◆ parser_math_comp_ellint_2()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE T amrex::parser_math_comp_ellint_2 ( T  k)

◆ parser_math_cos()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_cos ( T  a)

◆ parser_math_cosh()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_cosh ( T  a)

◆ parser_math_erf()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_erf ( T  a)

◆ parser_math_exp()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_exp ( T  a)

◆ parser_math_jn()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_jn ( int  a,
T  b 
)

◆ parser_math_log()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_log ( T  a)

◆ parser_math_log10()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_log10 ( T  a)

◆ parser_math_pow()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_pow ( T  a,
T  b 
)

◆ parser_math_sin()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_sin ( T  a)

◆ parser_math_sinh()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_sinh ( T  a)

◆ parser_math_tan()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_tan ( T  a)

◆ parser_math_tanh()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_tanh ( T  a)

◆ parser_math_yn()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_NO_INLINE T amrex::parser_math_yn ( int  a,
T  b 
)

◆ parser_newassign()

struct parser_node * amrex::parser_newassign ( struct parser_symbol *  sym,
struct parser_node *  v 
)

◆ parser_newf1()

struct parser_node * amrex::parser_newf1 ( enum parser_f1_t  ftype,
struct parser_node *  l 
)

◆ parser_newf2()

struct parser_node * amrex::parser_newf2 ( enum parser_f2_t  ftype,
struct parser_node *  l,
struct parser_node *  r 
)

◆ parser_newf3()

struct parser_node * amrex::parser_newf3 ( enum parser_f3_t  ftype,
struct parser_node *  n1,
struct parser_node *  n2,
struct parser_node *  n3 
)

◆ parser_newlist()

struct parser_node * amrex::parser_newlist ( struct parser_node *  nl,
struct parser_node *  nr 
)

◆ parser_newneg()

struct parser_node * amrex::parser_newneg ( struct parser_node *  n)

◆ parser_newnode()

struct parser_node * amrex::parser_newnode ( enum parser_node_t  type,
struct parser_node *  l,
struct parser_node *  r 
)

◆ parser_newnumber()

struct parser_node * amrex::parser_newnumber ( double  d)

◆ parser_newsymbol()

struct parser_node * amrex::parser_newsymbol ( struct parser_symbol *  symbol)

◆ parser_node_equal()

bool amrex::parser_node_equal ( struct parser_node *  a,
struct parser_node *  b 
)

◆ parser_print()

void amrex::parser_print ( struct amrex_parser *  parser)

◆ parser_regvar()

void amrex::parser_regvar ( struct amrex_parser *  parser,
char const *  name,
int  i 
)

◆ parser_set_number()

void amrex::parser_set_number ( struct parser_node *  node,
double  v 
)

◆ parser_setconst()

void amrex::parser_setconst ( struct amrex_parser *  parser,
char const *  name,
double  c 
)

◆ ParticleContainerToBlueprint()

template<typename ParticleType , int NArrayReal, int NArrayInt>
void amrex::ParticleContainerToBlueprint ( const ParticleContainer_impl< ParticleType, NArrayReal, NArrayInt > &  pc,
const Vector< std::string > &  real_comp_names,
const Vector< std::string > &  int_comp_names,
conduit::Node &  res,
const std::string &  topology_name 
)

◆ ParticleReduce() [1/3]

template<class RD , class PC , class F , class ReduceOps , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
RD::Type amrex::ParticleReduce ( PC const &  pc,
F &&  f,
ReduceOps reduce_ops 
)

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates over all particles on all levels.

This version can operate on a GpuTuple worth of data at once. It also takes an arbitrary tuple of reduction operators.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Unlike the other reduction functions in this file, this version does not respect the Gpu::launchRegion flag. If AMReX is built with GPU support, this reduction will always be done on the device.

Template Parameters
RD: an amrex::ReduceData type
PC: the ParticleContainer type
F: a function object
ReduceOps: a ReduceOps type
Parameters
pc: the ParticleContainer to operate on
f: a callable that operates on a single particle; see below for example forms
reduce_ops: specifies the reduction operations for each tuple element

Example usage:

    using PType = typename PC::ParticleType;
    amrex::ReduceOps<ReduceOpSum, ReduceOpMin, ReduceOpMax> reduce_ops;
    auto r = amrex::ParticleReduce<ReduceData<amrex::Real, amrex::Real, int>>(
        pc,
        [=] AMREX_GPU_DEVICE (const PType& p) noexcept
            -> amrex::GpuTuple<amrex::Real, amrex::Real, int>
        {
            const amrex::Real a = p.rdata(1);
            const amrex::Real b = p.rdata(2);
            const int c = p.idata(1);
            return {a, b, c};
        }, reduce_ops);

    using SPType = typename PC::SuperParticleType;
    amrex::ReduceOps<ReduceOpSum, ReduceOpMin, ReduceOpMax> reduce_ops;
    auto r = amrex::ParticleReduce<ReduceData<amrex::Real, amrex::Real, int>>(
        pc,
        [=] AMREX_GPU_DEVICE (const SPType& p) noexcept
            -> amrex::GpuTuple<amrex::Real, amrex::Real, int>
        {
            const amrex::Real a = p.rdata(1);
            const amrex::Real b = p.rdata(2);
            const int c = p.idata(1);
            return {a, b, c};
        }, reduce_ops);

    using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
    amrex::ReduceOps<ReduceOpSum, ReduceOpMin, ReduceOpMax> reduce_ops;
    auto r = amrex::ParticleReduce<ReduceData<amrex::Real, amrex::Real, int>>(
        pc,
        [=] AMREX_GPU_DEVICE (const PTDType& ptd, const int i) noexcept
            -> amrex::GpuTuple<amrex::Real, amrex::Real, int>
        {
            const amrex::Real a = ptd.m_aos[i].rdata(1);
            const amrex::Real b = ptd.m_aos[i].rdata(2);
            const int c = ptd.m_aos[i].idata(1);
            return {a, b, c};
        }, reduce_ops);

◆ ParticleReduce() [2/3]

template<class RD , class PC , class F , class ReduceOps , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
RD::Type amrex::ParticleReduce ( PC const &  pc,
int  lev,
F &&  f,
ReduceOps reduce_ops 
)

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates only on the specified level.

This version can operate on a GpuTuple worth of data at once. It also takes an arbitrary tuple of reduction operators.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Unlike the other reduction functions in this file, this version does not respect the Gpu::launchRegion flag. If AMReX is built with GPU support, this reduction will always be done on the device.

Template Parameters
RD: an amrex::ReduceData type
PC: the ParticleContainer type
F: a function object
ReduceOps: a ReduceOps type
Parameters
pc: the ParticleContainer to operate on
lev: the level to operate on
f: a callable that operates on a single particle; see below for example forms
reduce_ops: specifies the reduction operations for each tuple element

Example usage:

    using PType = typename PC::ParticleType;
    amrex::ReduceOps<ReduceOpSum, ReduceOpMin, ReduceOpMax> reduce_ops;
    auto r = amrex::ParticleReduce<ReduceData<amrex::Real, amrex::Real, int>>(
        pc,
        [=] AMREX_GPU_DEVICE (const PType& p) noexcept
            -> amrex::GpuTuple<amrex::Real, amrex::Real, int>
        {
            const amrex::Real a = p.rdata(1);
            const amrex::Real b = p.rdata(2);
            const int c = p.idata(1);
            return {a, b, c};
        }, reduce_ops);

    using SPType = typename PC::SuperParticleType;
    amrex::ReduceOps<ReduceOpSum, ReduceOpMin, ReduceOpMax> reduce_ops;
    auto r = amrex::ParticleReduce<ReduceData<amrex::Real, amrex::Real, int>>(
        pc,
        [=] AMREX_GPU_DEVICE (const SPType& p) noexcept
            -> amrex::GpuTuple<amrex::Real, amrex::Real, int>
        {
            const amrex::Real a = p.rdata(1);
            const amrex::Real b = p.rdata(2);
            const int c = p.idata(1);
            return {a, b, c};
        }, reduce_ops);

    using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
    amrex::ReduceOps<ReduceOpSum, ReduceOpMin, ReduceOpMax> reduce_ops;
    auto r = amrex::ParticleReduce<ReduceData<amrex::Real, amrex::Real, int>>(
        pc,
        [=] AMREX_GPU_DEVICE (const PTDType& ptd, const int i) noexcept
            -> amrex::GpuTuple<amrex::Real, amrex::Real, int>
        {
            const amrex::Real a = ptd.m_aos[i].rdata(1);
            const amrex::Real b = ptd.m_aos[i].rdata(2);
            const int c = ptd.m_aos[i].idata(1);
            return {a, b, c};
        }, reduce_ops);

◆ ParticleReduce() [3/3]

template<class RD , class PC , class F , class ReduceOps , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
RD::Type amrex::ParticleReduce ( PC const &  pc,
int  lev_min,
int  lev_max,
F const &  f,
ReduceOps reduce_ops 
)

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates from the specified lev_min to lev_max.

This version can operate on a GpuTuple worth of data at once. It also takes an arbitrary tuple of reduction operators.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Unlike the other reduction functions in this file, this version does not respect the Gpu::launchRegion flag. If AMReX is built with GPU support, this reduction will always be done on the device.

Template Parameters
RD: an amrex::ReduceData type
PC: the ParticleContainer type
F: a function object
ReduceOps: a ReduceOps type
Parameters
pc: the ParticleContainer to operate on
lev_min: the minimum level to include
lev_max: the maximum level to include
f: a callable that operates on a single particle; see below for example forms
reduce_ops: specifies the reduction operations for each tuple element

Example usage:

    using PType = typename PC::ParticleType;
    amrex::ReduceOps<ReduceOpSum, ReduceOpMin, ReduceOpMax> reduce_ops;
    auto r = amrex::ParticleReduce<ReduceData<amrex::Real, amrex::Real, int>>(
        pc,
        [=] AMREX_GPU_DEVICE (const PType& p) noexcept
            -> amrex::GpuTuple<amrex::Real, amrex::Real, int>
        {
            const amrex::Real a = p.rdata(1);
            const amrex::Real b = p.rdata(2);
            const int c = p.idata(1);
            return {a, b, c};
        }, reduce_ops);

    using SPType = typename PC::SuperParticleType;
    amrex::ReduceOps<ReduceOpSum, ReduceOpMin, ReduceOpMax> reduce_ops;
    auto r = amrex::ParticleReduce<ReduceData<amrex::Real, amrex::Real, int>>(
        pc,
        [=] AMREX_GPU_DEVICE (const SPType& p) noexcept
            -> amrex::GpuTuple<amrex::Real, amrex::Real, int>
        {
            const amrex::Real a = p.rdata(1);
            const amrex::Real b = p.rdata(2);
            const int c = p.idata(1);
            return {a, b, c};
        }, reduce_ops);

    using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
    amrex::ReduceOps<ReduceOpSum, ReduceOpMin, ReduceOpMax> reduce_ops;
    auto r = amrex::ParticleReduce<ReduceData<amrex::Real, amrex::Real, int>>(
        pc,
        [=] AMREX_GPU_DEVICE (const PTDType& ptd, const int i) noexcept
            -> amrex::GpuTuple<amrex::Real, amrex::Real, int>
        {
            const amrex::Real a = ptd.m_aos[i].rdata(1);
            const amrex::Real b = ptd.m_aos[i].rdata(2);
            const int c = ptd.m_aos[i].idata(1);
            return {a, b, c};
        }, reduce_ops);

◆ ParticleTileToBlueprint()

template<typename ParticleType , int NArrayReal, int NArrayInt>
void amrex::ParticleTileToBlueprint ( const ParticleTile< ParticleType, NArrayReal, NArrayInt > &  ptile,
const Vector< std::string > &  real_comp_names,
const Vector< std::string > &  int_comp_names,
conduit::Node &  res,
const std::string &  topology_name 
)

◆ ParticleToMesh() [1/2]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
void amrex::ParticleToMesh ( PC const &  pc,
const Vector< MultiFab * > &  mf,
int  lev_min,
int  lev_max,
F &&  f,
bool  zero_out_input = true,
bool  vol_weight = true 
)

◆ ParticleToMesh() [2/2]

template<class PC , class MF , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
void amrex::ParticleToMesh ( PC const &  pc,
MF &  mf,
int  lev,
F const &  f,
bool  zero_out_input = true 
)

◆ Partition() [1/3]

template<typename T , typename F >
int amrex::Partition ( Gpu::DeviceVector< T > &  v,
F &&  f 
)

A GPU-capable partition function for contiguous data.

After calling this, all the items for which the predicate is true will be before the items for which the predicate is false in the input array.

This version is not stable, if you want that behavior use amrex::StablePartition instead.

Template Parameters
T: type of the data to be partitioned
F: type of the predicate function
Parameters
v: a Gpu::DeviceVector with the data to be partitioned
f: a predicate function that returns 1 or 0 for each input

Returns the index of the first element for which f is 0.
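
For instance, the sketch below (assuming an already-populated Gpu::DeviceVector<int> named v) moves the even entries to the front:

    int n_true = amrex::Partition(v,
        [=] AMREX_GPU_DEVICE (int x) noexcept { return x % 2 == 0; });
    // v[0 .. n_true-1] now satisfy the predicate; the rest do not,
    // though the order within each group is unspecified.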

◆ Partition() [2/3]

template<typename T , typename F >
int amrex::Partition ( T *  data,
int  beg,
int  end,
F &&  f 
)

A GPU-capable partition function for contiguous data.

After calling this, all the items for which the predicate is true will be before the items for which the predicate is false in the input array.

This version is not stable, if you want that behavior use amrex::StablePartition instead.

Template Parameters
T: type of the data to be partitioned
F: type of the predicate function
Parameters
data: pointer to the data to be partitioned
beg: index at which to start
end: index at which to stop (exclusive)
f: a predicate function that returns 1 or 0 for each input

Returns the index of the first element for which f is 0.

◆ Partition() [3/3]

template<typename T , typename F >
int amrex::Partition ( T *  data,
int  n,
F &&  f 
)

A GPU-capable partition function for contiguous data.

After calling this, all the items for which the predicate is true will be before the items for which the predicate is false in the input array.

This version is not stable, if you want that behavior use amrex::StablePartition instead.

Template Parameters
T: type of the data to be partitioned
F: type of the predicate function
Parameters
data: pointer to the data to be partitioned
n: the number of elements in the array
f: a predicate function that returns 1 or 0 for each input

Returns the index of the first element for which f is 0.

◆ partitionParticles()

template<typename PTile , typename ParFunc >
int amrex::partitionParticles ( PTile &  ptile,
ParFunc const &  is_left 
)

Reorders the ParticleTile into two partitions left [0, num_left-1] and right [num_left, ptile.numParticles()-1] and returns the number of particles in the left partition.

The functor is_left [(ParticleTileData ptd, int index) -> bool] maps each particle to either the left [return true] or the right [return false] partition. It must return the same result if evaluated multiple times for the same particle.

Parameters
ptile: the ParticleTile to partition
is_left: functor used to map particles to a partition
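
A minimal sketch, assuming a legacy AoS particle tile (the m_aos accessor matches the example forms used elsewhere on this page): keep the particles with positive ids in the left partition.

    int num_left = amrex::partitionParticles(ptile,
        [=] AMREX_GPU_DEVICE (auto const& ptd, int i) noexcept
        {
            return ptd.m_aos[i].id() > 0;
        });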

◆ partitionParticlesByDest()

template<typename PTile , typename PLocator , typename CellAssignor >
int amrex::partitionParticlesByDest ( PTile &  ptile,
const PLocator &  ploc,
CellAssignor const &  assignor,
const ParticleBufferMap &  pmap,
const GpuArray< Real, AMREX_SPACEDIM > &  plo,
const GpuArray< Real, AMREX_SPACEDIM > &  phi,
const GpuArray< ParticleReal, AMREX_SPACEDIM > &  rlo,
const GpuArray< ParticleReal, AMREX_SPACEDIM > &  rhi,
const GpuArray< int, AMREX_SPACEDIM > &  is_per,
int  lev,
int  gid,
int  ,
int  lev_min,
int  lev_max,
int  nGrow,
bool  remove_negative 
)

◆ pcg_solve()

template<int N, typename T , typename M , typename P >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE int amrex::pcg_solve ( T *AMREX_RESTRICT  x,
T *AMREX_RESTRICT  r,
M const &  mat,
P const &  precond,
int  maxiter,
T  rel_tol 
)

Preconditioned conjugate gradient solver.

Parameters
x: initial guess
r: initial residual
mat: matrix
precond: preconditioner
maxiter: max number of iterations
rel_tol: relative tolerance

◆ pcinterp_interp()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::pcinterp_interp ( Box const &  bx,
Array4< Real > const &  fine,
const int  fcomp,
const int  ncomp,
Array4< Real const > const &  crse,
const int  ccomp,
IntVect const &  ratio 
)
noexcept

◆ periodicShift()

MultiFab amrex::periodicShift ( MultiFab const &  mf,
IntVect const &  offset,
Periodicity const &  period 
)

Periodic shift MultiFab.
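
A usage sketch, assuming mf and geom are an existing MultiFab and Geometry: copy mf into a new MultiFab whose data are shifted by one cell in the x-direction, wrapping around periodic boundaries.

    MultiFab shifted = amrex::periodicShift(mf, IntVect(AMREX_D_DECL(1,0,0)),
                                            geom.periodicity());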

◆ PermutationForDeposition() [1/2]

template<class index_type , class PTile >
void amrex::PermutationForDeposition ( Gpu::DeviceVector< index_type > &  perm,
index_type  nitems,
const PTile &  ptile,
Box  bx,
Geometry  geom,
const IntVect  idx_type 
)

◆ PermutationForDeposition() [2/2]

template<class index_type , typename F >
void amrex::PermutationForDeposition ( Gpu::DeviceVector< index_type > &  perm,
index_type  nitems,
index_type  nbins,
F const &  f 
)

◆ placementDelete() [1/2]

template<typename T >
std::enable_if_t<!std::is_trivially_destructible_v<T> > amrex::placementDelete ( T *const  ptr,
Long  n 
)

◆ placementDelete() [2/2]

template<typename T >
std::enable_if_t<std::is_trivially_destructible_v<T> > amrex::placementDelete ( T * const  ,
Long   
)

◆ placementNew() [1/3]

template<typename T >
std::enable_if_t<std::is_trivially_default_constructible_v<T> && !std::is_arithmetic_v<T> > amrex::placementNew ( T *const  ptr,
Long  n 
)

◆ placementNew() [2/3]

template<typename T >
std::enable_if_t<!std::is_trivially_default_constructible_v<T> > amrex::placementNew ( T *const  ptr,
Long  n 
)

◆ placementNew() [3/3]

template<typename T >
std::enable_if_t<std::is_arithmetic_v<T> > amrex::placementNew ( T * const  ,
Long   
)

◆ polar()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::polar ( const T &  a_r,
const T &  a_theta 
)
noexcept

Return a complex number given its polar representation.

◆ poly_interp_coeff() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::poly_interp_coeff ( T  xInt,
T const *AMREX_RESTRICT  x,
int  N,
T *AMREX_RESTRICT  c 
)
noexcept

◆ poly_interp_coeff() [2/2]

template<int N, typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::poly_interp_coeff ( T  xInt,
T const *AMREX_RESTRICT  x,
T *AMREX_RESTRICT  c 
)
noexcept
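
A usage sketch, assuming the computed weights satisfy p(xInt) = sum_i c[i]*y[i] for the polynomial p passing through samples y[i] at the nodes x[i]:

    double xs[3] = {0.0, 1.0, 2.0};  // interpolation nodes
    double c[3];                     // weights to be filled
    amrex::poly_interp_coeff(0.5, xs, 3, c);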

◆ pout()

std::ostream & amrex::pout ( )

the stream that all output except error messages should use

Use this in place of std::cout for program output.

In serial this is the standard output; in parallel it is a different file on each proc (see setPoutBaseName()).

Can be used to replace std::cout. In serial this just returns std::cout. In parallel, this creates a separate file for each proc called <basename>.n, where n is the procID and <basename> defaults to "pout" but can be set by calling setPoutBaseName(). Output is then directed to these files, which keeps the output from different processors from getting jumbled together. If you want fewer files, you can set the ParmParse parameter amrex.pout_int=nproc, and pout.n files will only be written for every nproc-th processor (those with n % nproc == 0).
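
A typical use, where istep is a hypothetical loop counter:

    amrex::pout() << "rank-local diagnostics at step " << istep << "\n";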

◆ poutFileName()

const std::string & amrex::poutFileName ( )

return the current filename as used by pout()

Accesses the filename for the local pout() file.

In serial, this just returns the string "cout"; in parallel, it aborts if MPI has not been initialized.

Returns the name used for the local pout() file. In parallel this is "<pout_basename>.<procID>", where <pout_basename> defaults to "pout" and can be modified by calling setPoutBaseName(), and <procID> is the local proc number. In serial, this always returns the string "cout". It is an error (exit code 111) to call this in parallel before MPI is initialized.

◆ pow() [1/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::pow ( const GpuComplex< T > &  a_z,
const T &  a_y 
)
noexcept

Raise a complex number to a (real) power.

◆ pow() [2/2]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::pow ( const GpuComplex< T > &  a_z,
int  a_n 
)
noexcept

Raise a complex number to an integer power.

◆ PreBuildDirectorHierarchy()

void amrex::PreBuildDirectorHierarchy ( const std::string &  dirName,
const std::string &  subDirPrefix,
int  nSubDirs,
bool  callBarrier 
)

Prebuild a hierarchy of directories. dirName is built first; if dirName already exists, it is renamed. Then dirName/subDirPrefix_0 through dirName/subDirPrefix_<nSubDirs-1> are built. If callBarrier is true, ParallelDescriptor::Barrier() is called after all directories are built. ParallelDescriptor::IOProcessor() creates the directories.

Parameters
dirName
subDirPrefix
nSubDirs
callBarrier

◆ prefetchToDevice()

template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::prefetchToDevice ( FabArray< FAB > const &  fa,
const bool  synchronous = true 
)

◆ prefetchToHost()

template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::prefetchToHost ( FabArray< FAB > const &  fa,
const bool  synchronous = true 
)

◆ print_state()

void amrex::print_state ( const MultiFab &  mf,
const IntVect &  cell,
const int  n,
const IntVect &  ng 
)

Output state data for a single zone.

◆ printCell()

template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::printCell ( FabArray< FAB > const &  mf,
const IntVect &  cell,
int  comp = -1,
const IntVect &  ng = IntVect::TheZeroVector() 
)
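
A usage sketch, assuming mf is an existing MultiFab (the cell index here is illustrative):

    amrex::printCell(mf, IntVect(AMREX_D_DECL(8,8,8)), 0); // print component 0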

◆ PrintTimeRangeList()

void amrex::PrintTimeRangeList ( const std::list< RegionsProfStats::TimeRange > &  trList)

◆ ProperlyNested()

template<typename Interp >
bool amrex::ProperlyNested ( const IntVect &  ratio,
const IntVect &  blocking_factor,
int  ngrow,
const IndexType &  boxType,
Interp *  mapper 
)

Test if AMR grids are properly nested.

If grids are not properly nested, FillPatch functions may fail.

Template Parameters
Interp: Interpolater type
Parameters
ratio: refinement ratio
blocking_factor: blocking factor on the fine level
ngrow: number of ghost cells of the fine MultiFab
boxType: index type
mapper: an interpolater object
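
A usage sketch with illustrative values: test whether cell-centered data with one ghost cell, refined by a factor of 2 with a blocking factor of 8, can be filled by linear conservative interpolation.

    bool ok = amrex::ProperlyNested(IntVect(2), IntVect(8), 1,
                                    IndexType::TheCellType(),
                                    &amrex::cell_cons_interp);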

◆ Random() [1/2]

Real amrex::Random ( )

Generate a pseudo-random double from a uniform distribution.

Generates one pseudorandom real number (double) from a uniform distribution between 0.0 and 1.0 (0.0 included, 1.0 excluded)
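
For example, on the host (after amrex::Initialize):

    Real u = amrex::Random();               // uniform in [0.0, 1.0)
    Real g = amrex::RandomNormal(0.0, 1.0); // mean 0, stddev 1 (see below)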

◆ Random() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::Random ( RandomEngine const &  random_engine)

◆ Random_int() [1/2]

unsigned int amrex::Random_int ( unsigned int  n)

Generates one pseudorandom unsigned integer uniformly distributed on the interval [0, n-1] for each call.

The CPU version of this function uses C++11's mt19937. The GPU version uses cuRAND's XORWOW generator.

◆ Random_int() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE unsigned int amrex::Random_int ( unsigned int  n,
RandomEngine const &  random_engine 
)

◆ Random_long()

ULong amrex::Random_long ( ULong  n)

Generates one pseudorandom unsigned long uniformly distributed on the interval [0, n-1] for each call.

The CPU version of this function uses C++11's mt19937. There is no GPU version.

◆ RandomGamma() [1/2]

Real amrex::RandomGamma ( Real  alpha,
Real  beta 
)

Generate a pseudo-random floating point number from the Gamma distribution.

Generates one real number (single or double precision) drawn from a Gamma distribution with the given Real parameters alpha and beta; both must be > 0. The CPU version of this function relies on the C++ standard library. The GPU version is implemented in terms of Random and RandomNormal.

◆ RandomGamma() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::RandomGamma ( Real  alpha,
Real  beta,
RandomEngine const &  random_engine 
)

◆ RandomNormal() [1/2]

Real amrex::RandomNormal ( Real  mean,
Real  stddev 
)

Generate a pseudo-random double from a normal distribution.

Generates one pseudorandom real number (double) from a normal distribution with mean 'mean' and standard deviation 'stddev'.

◆ RandomNormal() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Real amrex::RandomNormal ( Real  mean,
Real  stddev,
RandomEngine const &  random_engine 
)

◆ RandomPoisson() [1/2]

unsigned int amrex::RandomPoisson ( Real  lambda)

Generate a pseudo-random integer from a Poisson distribution.

Generates one pseudorandom non-negative integer drawn from a Poisson distribution with the given Real parameter lambda. The CPU version of this function relies on the C++ standard library; the GPU version relies on the cuRAND library.

◆ RandomPoisson() [2/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE unsigned int amrex::RandomPoisson ( Real  lambda,
RandomEngine const &  random_engine 
)

◆ Read()

template<typename FAB >
std::enable_if_t<std::is_same_v<FAB,IArrayBox> > amrex::Read ( FabArray< FAB > &  fa,
const std::string &  name 
)

Read iMultiFab/FabArray<IArrayBox>

This reads an iMultiFab/FabArray<IArrayBox> from disk. If it has been fully defined, the BoxArray on the disk must match the BoxArray in the given iMultiFab/FabArray<IArrayBox> object. If it is only constructed with the default constructor, the BoxArray on the disk will be used and a new DistributionMapping will be made. When this function is used to restart a calculation from checkpoint files, one should use a fully defined iMultiFab/FabArray<IArrayBox> except for the first one in a series of iMultiFab/MultiFab objects that share the same BoxArray/DistributionMapping. This will ensure that they share the same BoxArray/DistributionMapping after restart.

Parameters
fa: the iMultiFab.
name: the base name for the files.
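
A minimal sketch; the checkpoint path is illustrative:

    iMultiFab mask;                      // default constructed
    amrex::Read(mask, "chk00100/mask");  // BoxArray and data come from disk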

◆ readBoxArray()

void amrex::readBoxArray ( BoxArray &  ba,
std::istream &  s,
bool  b = false 
)

Read a BoxArray from a stream. If b is true, read in a special way.

◆ readData() [1/4]

void amrex::readData ( double *  data,
std::size_t  size,
std::istream &  is 
)
inline

◆ readData() [2/4]

void amrex::readData ( float *  data,
std::size_t  size,
std::istream &  is 
)
inline

◆ readData() [3/4]

void amrex::readData ( int *  data,
std::size_t  size,
std::istream &  is 
)
inline

◆ readData() [4/4]

void amrex::readData ( Long *  data,
std::size_t  size,
std::istream &  is 
)
inline

◆ readDoubleData()

void amrex::readDoubleData ( double *  data,
std::size_t  size,
std::istream &  is,
const RealDescriptor &  rd 
)

Read double data from the istream. The arguments are a pointer to the data buffer to read into, the size of that buffer, the istream, and a RealDescriptor that describes the format of the data on disk. The buffer is assumed to be large enough to store 'size' doubles, and it is the user's responsibility to allocate it.

◆ readFloatData()

void amrex::readFloatData ( float *  data,
std::size_t  size,
std::istream &  is,
const RealDescriptor &  rd 
)

Read float data from the istream. The arguments are a pointer to the data buffer to read into, the size of that buffer, the istream, and a RealDescriptor that describes the format of the data on disk. The buffer is assumed to be large enough to store 'size' floats, and it is the user's responsibility to allocate it.

◆ readIntData() [1/2]

void amrex::readIntData ( int *  data,
std::size_t  size,
std::istream &  is,
const IntDescriptor &  id 
)

Read int data from the istream. The arguments are a pointer to the data buffer to read into, the size of that buffer, the istream, and an IntDescriptor that describes the format of the data on disk. The buffer is assumed to be large enough to store 'size' integers, and it is the user's responsibility to allocate it.

◆ readIntData() [2/2]

template<typename To , typename From >
void amrex::readIntData ( To *  data,
std::size_t  size,
std::istream &  is,
const amrex::IntDescriptor &  id 
)

◆ readLongData()

void amrex::readLongData ( Long *  data,
std::size_t  size,
std::istream &  is,
const IntDescriptor &  id 
)

Read Long data from the istream. The arguments are a pointer to the data buffer to read into, the size of that buffer, the istream, and an IntDescriptor that describes the format of the data on disk. The buffer is assumed to be large enough to store 'size' Longs, and it is the user's responsibility to allocate it.

◆ readRealData()

void amrex::readRealData ( Real *  data,
std::size_t  size,
std::istream &  is,
const RealDescriptor &  rd 
)

Read Real data from the istream. The arguments are a pointer to the data buffer to read into, the size of that buffer, the istream, and a RealDescriptor that describes the format of the data on disk. The buffer is assumed to be large enough to store 'size' Reals, and it is the user's responsibility to allocate it.

◆ RedistFiles()

void amrex::RedistFiles ( )

◆ ReduceLogicalAnd() [1/7]

template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
bool amrex::ReduceLogicalAnd ( FabArray< FAB > const &  fa,
int  nghost,
F &&  f 
)

◆ ReduceLogicalAnd() [2/7]

template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
bool amrex::ReduceLogicalAnd ( FabArray< FAB > const &  fa,
IntVect const &  nghost,
F &&  f 
)
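
These FabArray overloads call f once per (grown) box with the box and the const array for that box, and combine the returned bools with a logical and. A minimal sketch, assuming that f(Box, Array4) calling convention and an existing MultiFab mf: check that mf is everywhere non-negative.

    bool nonneg = amrex::ReduceLogicalAnd(mf, 0,
        [=] AMREX_GPU_HOST_DEVICE (Box const& bx, Array4<Real const> const& a) -> bool
        {
            bool r = true;
            AMREX_LOOP_3D(bx, i, j, k,
            {
                r = r && (a(i,j,k) >= 0.0);
            });
            return r;
        });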

◆ ReduceLogicalAnd() [3/7]

template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
bool amrex::ReduceLogicalAnd ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
int  nghost,
F &&  f 
)

◆ ReduceLogicalAnd() [4/7]

template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
bool amrex::ReduceLogicalAnd ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
IntVect const &  nghost,
F &&  f 
)

◆ ReduceLogicalAnd() [5/7]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
bool amrex::ReduceLogicalAnd ( PC const &  pc,
F &&  f 
)

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates over all particles on all levels.

This version uses "LogicalAnd" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto rv = amrex::ReduceLogicalAnd(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> bool
             {
                 return p.id() > 0;
             });

   using SPType  = typename PC::SuperParticleType;
   auto rv = amrex::ReduceLogicalAnd(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> bool
             {
                 return p.id() > 0;
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto rv = amrex::ReduceLogicalAnd(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> bool
             {
                 return ptd.m_aos[i].id() > 0;
             });

◆ ReduceLogicalAnd() [6/7]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
bool amrex::ReduceLogicalAnd ( PC const &  pc,
int  lev,
F &&  f 
)

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates only on the specified level.

This version uses "LogicalAnd" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
lev: the level to operate on
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto rv = amrex::ReduceLogicalAnd(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> bool
             {
                 return p.id() > 0;
             });

   using SPType  = typename PC::SuperParticleType;
   auto rv = amrex::ReduceLogicalAnd(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> bool
             {
                 return p.id() > 0;
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto rv = amrex::ReduceLogicalAnd(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> bool
             {
                 return ptd.m_aos[i].id() > 0;
             });

◆ ReduceLogicalAnd() [7/7]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
bool amrex::ReduceLogicalAnd ( PC const &  pc,
int  lev_min,
int  lev_max,
F const &  f 
)

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates from the specified lev_min to lev_max.

This version uses "LogicalAnd" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
lev_min: the minimum level to include
lev_max: the maximum level to include
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto rv = amrex::ReduceLogicalAnd(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> bool
             {
                 return p.id() > 0;
             });

   using SPType  = typename PC::SuperParticleType;
   auto rv = amrex::ReduceLogicalAnd(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> bool
             {
                 return p.id() > 0;
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto rv = amrex::ReduceLogicalAnd(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> bool
             {
                 return ptd.m_aos[i].id() > 0;
             });

◆ ReduceLogicalOr() [1/7]

template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
bool amrex::ReduceLogicalOr ( FabArray< FAB > const &  fa,
int  nghost,
F &&  f 
)

◆ ReduceLogicalOr() [2/7]

template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
bool amrex::ReduceLogicalOr ( FabArray< FAB > const &  fa,
IntVect const &  nghost,
F &&  f 
)

◆ ReduceLogicalOr() [3/7]

template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
bool amrex::ReduceLogicalOr ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
int  nghost,
F &&  f 
)

◆ ReduceLogicalOr() [4/7]

template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
bool amrex::ReduceLogicalOr ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
IntVect const &  nghost,
F &&  f 
)

◆ ReduceLogicalOr() [5/7]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
bool amrex::ReduceLogicalOr ( PC const &  pc,
F &&  f 
)

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates over all particles on all levels.

This version uses "LogicalOr" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto rv = amrex::ReduceLogicalOr(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> bool
             {
                 return p.id() < 1;
             });

   using SPType  = typename PC::SuperParticleType;
   auto rv = amrex::ReduceLogicalOr(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> bool
             {
                 return p.id() < 1;
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto rv = amrex::ReduceLogicalOr(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> bool
             {
                 return ptd.m_aos[i].id() < 1;
             });

◆ ReduceLogicalOr() [6/7]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
bool amrex::ReduceLogicalOr ( PC const &  pc,
int  lev,
F &&  f 
)

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates only on the specified level.

This version uses "LogicalOr" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
lev: the level to operate on
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto rv = amrex::ReduceLogicalOr(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> bool
             {
                 return p.id() < 1;
             });

   using SPType  = typename PC::SuperParticleType;
   auto rv = amrex::ReduceLogicalOr(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> bool
             {
                 return p.id() < 1;
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto rv = amrex::ReduceLogicalOr(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> bool
             {
                 return ptd.m_aos[i].id() < 1;
             });

◆ ReduceLogicalOr() [7/7]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
bool amrex::ReduceLogicalOr ( PC const &  pc,
int  lev_min,
int  lev_max,
F const &  f 
)

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates from the specified lev_min to lev_max.

This version uses "LogicalOr" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
lev_min: the minimum level to include
lev_max: the maximum level to include
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto rv = amrex::ReduceLogicalOr(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> bool
             {
                 return p.id() < 1;
             });

   using SPType  = typename PC::SuperParticleType;
   auto rv = amrex::ReduceLogicalOr(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> bool
             {
                 return p.id() < 1;
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto rv = amrex::ReduceLogicalOr(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> bool
             {
                 return ptd.m_aos[i].id() < 1;
             });

◆ ReduceMax() [1/9]

template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
FAB::value_type amrex::ReduceMax ( FabArray< FAB > const &  fa,
int  nghost,
F &&  f 
)

◆ ReduceMax() [2/9]

template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
FAB::value_type amrex::ReduceMax ( FabArray< FAB > const &  fa,
IntVect const &  nghost,
F &&  f 
)
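
A sketch in the same style as the other FabArray reductions, assuming the f(Box, Array4) calling convention: compute the maximum absolute value of mf over its valid region.

    Real maxabs = amrex::ReduceMax(mf, 0,
        [=] AMREX_GPU_HOST_DEVICE (Box const& bx, Array4<Real const> const& a) -> Real
        {
            Real r = std::numeric_limits<Real>::lowest();
            AMREX_LOOP_3D(bx, i, j, k,
            {
                r = amrex::max(r, std::abs(a(i,j,k)));
            });
            return r;
        });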

◆ ReduceMax() [3/9]

template<class FAB1 , class FAB2 , class FAB3 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type amrex::ReduceMax ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
FabArray< FAB3 > const &  fa3,
int  nghost,
F &&  f 
)

◆ ReduceMax() [4/9]

template<class FAB1 , class FAB2 , class FAB3 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type amrex::ReduceMax ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
FabArray< FAB3 > const &  fa3,
IntVect const &  nghost,
F &&  f 
)

◆ ReduceMax() [5/9]

template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type amrex::ReduceMax ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
int  nghost,
F &&  f 
)

◆ ReduceMax() [6/9]

template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type amrex::ReduceMax ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
IntVect const &  nghost,
F &&  f 
)

◆ ReduceMax() [7/9]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto amrex::ReduceMax ( PC const &  pc,
F &&  f 
) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates over all particles on all levels.

This version uses "Max" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto mx = amrex::ReduceMax(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> ParticleReal
             {
                 return p.rdata(0);
             });

   using SPType  = typename PC::SuperParticleType;
   auto mx = amrex::ReduceMax(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> int
             {
                 return p.idata(0);
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto mx = amrex::ReduceMax(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> ParticleReal
             {
                 return ptd.m_aos[i].rdata(0);
             });

◆ ReduceMax() [8/9]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto amrex::ReduceMax ( PC const &  pc,
int  lev,
F &&  f 
) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates only on the specified level.

This version uses "Max" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
lev: the level to operate on
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto mx = amrex::ReduceMax(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> ParticleReal
             {
                 return p.rdata(0);
             });

   using SPType  = typename PC::SuperParticleType;
   auto mx = amrex::ReduceMax(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> int
             {
                 return p.idata(0);
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto mx = amrex::ReduceMax(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> ParticleReal
             {
                 return ptd.m_aos[i].rdata(0);
             });

◆ ReduceMax() [9/9]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto amrex::ReduceMax ( PC const &  pc,
int  lev_min,
int  lev_max,
F const &  f 
) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates from the specified lev_min to lev_max.

This version uses "Max" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
lev_min: the minimum level to include
lev_max: the maximum level to include
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto mx = amrex::ReduceMax(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> ParticleReal
             {
                 return p.rdata(0);
             });

   using SPType  = typename PC::SuperParticleType;
   auto mx = amrex::ReduceMax(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> int
             {
                 return p.idata(0);
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto mx = amrex::ReduceMax(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> ParticleReal
             {
                 return ptd.m_aos[i].rdata(0);
             });

◆ ReduceMin() [1/9]

template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
FAB::value_type amrex::ReduceMin ( FabArray< FAB > const &  fa,
int  nghost,
F &&  f 
)

◆ ReduceMin() [2/9]

template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
FAB::value_type amrex::ReduceMin ( FabArray< FAB > const &  fa,
IntVect const &  nghost,
F &&  f 
)

◆ ReduceMin() [3/9]

template<class FAB1 , class FAB2 , class FAB3 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type amrex::ReduceMin ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
FabArray< FAB3 > const &  fa3,
int  nghost,
F &&  f 
)

◆ ReduceMin() [4/9]

template<class FAB1 , class FAB2 , class FAB3 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type amrex::ReduceMin ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
FabArray< FAB3 > const &  fa3,
IntVect const &  nghost,
F &&  f 
)

◆ ReduceMin() [5/9]

template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type amrex::ReduceMin ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
int  nghost,
F &&  f 
)

◆ ReduceMin() [6/9]

template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type amrex::ReduceMin ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
IntVect const &  nghost,
F &&  f 
)

◆ ReduceMin() [7/9]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto amrex::ReduceMin ( PC const &  pc,
F &&  f 
) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates over all particles on all levels.

This version uses "Min" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto mn = amrex::ReduceMin(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> ParticleReal
             {
                 return p.rdata(0);
             });

   using SPType  = typename PC::SuperParticleType;
   auto mn = amrex::ReduceMin(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> int
             {
                 return p.idata(0);
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto mn = amrex::ReduceMin(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> ParticleReal
             {
                 return ptd.m_aos[i].rdata(0);
             });
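
For a global result across MPI ranks, the local minimum can be reduced afterwards. A minimal sketch, assuming ParticleReal is the same type as Real (the default build configuration):

   using PType = typename PC::ParticleType;
   ParticleReal mn = amrex::ReduceMin(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> ParticleReal
             { return p.rdata(0); });
   amrex::ParallelDescriptor::ReduceRealMin(mn);
   // mn now holds the global minimum on every rank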

◆ ReduceMin() [8/9]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto amrex::ReduceMin ( PC const &  pc,
int  lev,
F &&  f 
) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates only on the specified level.

This version uses "Min" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
lev: the level to operate on
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto mn = amrex::ReduceMin(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> ParticleReal
             {
                 return p.rdata(0);
             });

   using SPType  = typename PC::SuperParticleType;
   auto mn = amrex::ReduceMin(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> int
             {
                 return p.idata(0);
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto mn = amrex::ReduceMin(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> ParticleReal
             {
                 return ptd.m_aos[i].rdata(0);
             });

◆ ReduceMin() [9/9]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto amrex::ReduceMin ( PC const &  pc,
int  lev_min,
int  lev_max,
F const &  f 
) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates from the specified lev_min to lev_max.

This version uses "Min" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
lev_min: the minimum level to include
lev_max: the maximum level to include
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto mn = amrex::ReduceMin(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> ParticleReal
             {
                 return p.rdata(0);
             });

   using SPType  = typename PC::SuperParticleType;
   auto mn = amrex::ReduceMin(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> int
             {
                 return p.idata(0);
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto mn = amrex::ReduceMin(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> ParticleReal
             {
                 return ptd.m_aos[i].rdata(0);
             });

◆ ReduceSum() [1/9]

template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
FAB::value_type amrex::ReduceSum ( FabArray< FAB > const &  fa,
int  nghost,
F &&  f 
)

◆ ReduceSum() [2/9]

template<class FAB , class F , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
FAB::value_type amrex::ReduceSum ( FabArray< FAB > const &  fa,
IntVect const &  nghost,
F &&  f 
)

◆ ReduceSum() [3/9]

template<class FAB1 , class FAB2 , class FAB3 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type amrex::ReduceSum ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
FabArray< FAB3 > const &  fa3,
int  nghost,
F &&  f 
)

◆ ReduceSum() [4/9]

template<class FAB1 , class FAB2 , class FAB3 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type amrex::ReduceSum ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
FabArray< FAB3 > const &  fa3,
IntVect const &  nghost,
F &&  f 
)

◆ ReduceSum() [5/9]

template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type amrex::ReduceSum ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
int  nghost,
F &&  f 
)

◆ ReduceSum() [6/9]

template<class FAB1 , class FAB2 , class F , class bar = std::enable_if_t<IsBaseFab<FAB1>::value>>
FAB1::value_type amrex::ReduceSum ( FabArray< FAB1 > const &  fa1,
FabArray< FAB2 > const &  fa2,
IntVect const &  nghost,
F &&  f 
)

◆ ReduceSum() [7/9]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto amrex::ReduceSum ( PC const &  pc,
F &&  f 
) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates over all particles on all levels.

This version uses "Sum" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto sm = amrex::ReduceSum(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> ParticleReal
             {
                 return p.rdata(0);
             });

   using SPType  = typename PC::SuperParticleType;
   auto sm = amrex::ReduceSum(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> int
             {
                 return p.idata(0);
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto sm = amrex::ReduceSum(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> ParticleReal
             {
                 return ptd.m_aos[i].rdata(0);
             });
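
For a global result across MPI ranks, the local sum can be reduced afterwards. A minimal sketch, assuming ParticleReal is the same type as Real (the default build configuration):

   using PType = typename PC::ParticleType;
   ParticleReal sm = amrex::ReduceSum(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> ParticleReal
             { return p.rdata(0); });
   amrex::ParallelDescriptor::ReduceRealSum(sm);
   // sm now holds the global sum on every rank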

◆ ReduceSum() [8/9]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto amrex::ReduceSum ( PC const &  pc,
int  lev,
F &&  f 
) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates only on the specified level.

This version uses "Sum" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
lev: the level to operate on
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto sm = amrex::ReduceSum(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> ParticleReal
             {
                 return p.rdata(0);
             });

   using SPType  = typename PC::SuperParticleType;
   auto sm = amrex::ReduceSum(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> int
             {
                 return p.idata(0);
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto sm = amrex::ReduceSum(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> ParticleReal
             {
                 return ptd.m_aos[i].rdata(0);
             });

◆ ReduceSum() [9/9]

template<class PC , class F , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
auto amrex::ReduceSum ( PC const &  pc,
int  lev_min,
int  lev_max,
F const &  f 
) -> decltype(particle_detail::call_f(f, typename PC::ParticleTileType::ConstParticleTileDataType(), int()))

A general reduction method for the particles in a ParticleContainer that can run on either CPUs or GPUs. This version operates from the specified lev_min to lev_max.

This version uses "Sum" as the reduction operation. The quantity reduced over is an arbitrary function of a "superparticle", which contains all the data in the particle type, whether it is stored in AoS or SoA form.

Note that there is no MPI reduction performed at the end of this operation. Users should manually call the MPI reduction operations described in ParallelDescriptor if they want that behavior.

Template Parameters
PC: the ParticleContainer type
F: a function object
Parameters
pc: the ParticleContainer to operate on
lev_min: the minimum level to include
lev_max: the maximum level to include
f: a callable that operates on a single particle. Example forms:
   using PType = typename PC::ParticleType;
   auto sm = amrex::ReduceSum(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PType& p) -> ParticleReal
             {
                 return p.rdata(0);
             });

   using SPType  = typename PC::SuperParticleType;
   auto sm = amrex::ReduceSum(pc,
             [=] AMREX_GPU_HOST_DEVICE (const SPType& p) -> int
             {
                 return p.idata(0);
             });

   using PTDType = typename PC::ParticleTileType::ConstParticleTileDataType;
   auto sm = amrex::ReduceSum(pc,
             [=] AMREX_GPU_HOST_DEVICE (const PTDType& ptd, const int i) -> ParticleReal
             {
                 return ptd.m_aos[i].rdata(0);
             });

◆ ReduceToPlane()

template<typename Op , typename T , typename FAB , typename F , std::enable_if_t< IsBaseFab< FAB >::value, int > FOO = 0>
BaseFab< T > amrex::ReduceToPlane ( int  direction,
Box const &  domain,
FabArray< FAB > const &  mf,
F const &  f 
)

Reduce FabArray/MultiFab data to a plane.

This function takes a FabArray/MultiFab and reduces its data to a plane. The result is stored in a BaseFab with only one cell in the normal direction of the plane. The index range of the BaseFab in the other directions is the same as that of the provided domain Box. If no data exist along a given line, the value is set to the lowest possible value for a max reduction, the highest possible value for a min reduction, and zero for a sum. The reduction is local; the user may need to perform MPI communication afterwards if a global result is required.

In the example code below, the sum along each line at (i,j) in the z-direction is computed and stored at (i,j,0) of the returned BaseFab.

    int dir = 2; // z-direction
    auto const& domain_box = geom.Domain();
    auto const& ma = mf.const_arrays();
    auto rr = ReduceToPlane<ReduceOpSum,Real>(dir, domain_box, mf,
        [=] AMREX_GPU_DEVICE (int box_no, int i, int j, int k) -> Real
        {
            return ma[box_no](i,j,k); // data at (i,j,k) of Box box_no
        });

Below is another example. This finds the maximum value in the x-direction and stores the maximum value and the i-index. An MPI reduce is then called to further reduce the data to the root process 0.

    int dir = 0; // x-direction
    auto const& domain_box = geom.Domain().surroundingNodes(); // nodal data
    auto const& ma = mf.const_arrays();
    auto rr = ReduceToPlane<ReduceOpMax,KeyValuePair<Real,int>>
        (dir, domain_box, mf,
         [=] AMREX_GPU_DEVICE (int box_no, int i, int j, int k)
             -> KeyValuePair<Real,int>
        {
            return {ma[box_no](i,j,k), i};
        });
    ParallelReduce::Max(rr.dataPtr(), rr.size(), root,
                        ParallelDescriptor::Communicator());
    // Process root now has the final results.
Template Parameters
Op: reduce operator (e.g., ReduceOpSum, ReduceOpMin and ReduceOpMax)
T: data type of reduction result
FAB: FabArray/MultiFab type
F: callable type like a lambda function
Parameters
direction: normal direction of the plane (e.g., 0, 1 and 2)
domain: domain Box
mf: a FabArray/MultiFab object specifying the iteration space
f: a callable object returning T. It takes four ints, where the first int is the local box index and the others are spatial indices for x, y, and z-directions.
Returns
reduction result (BaseFab<T>)

◆ refine() [1/9]

void amrex::refine ( BoxDomain dest,
const BoxDomain fin,
int  ratio 
)

Refine all Boxes in the domain by the refinement ratio and return the result in dest.

◆ refine() [2/9]

BoxArray amrex::refine ( const BoxArray ba,
const IntVect ratio 
)

◆ refine() [3/9]

BoxArray amrex::refine ( const BoxArray ba,
int  ratio 
)

◆ refine() [4/9]

BoxList amrex::refine ( const BoxList bl,
int  ratio 
)

Returns a new BoxList in which each Box is refined by the given ratio.

◆ refine() [5/9]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::refine ( const BoxND< dim > &  b,
const IntVectND< dim > &  ref_ratio 
)
noexcept

Refine BoxND by given (positive) refinement ratio. NOTE: if type(dir) = CELL centered: lo <- lo*ratio and hi <- (hi+1)*ratio - 1. NOTE: if type(dir) = NODE centered: lo <- lo*ratio and hi <- hi*ratio.

◆ refine() [6/9]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::refine ( const BoxND< dim > &  b,
int  ref_ratio 
)
noexcept

Refine BoxND by given (positive) refinement ratio. NOTE: if type(dir) = CELL centered: lo <- lo*ratio and hi <- (hi+1)*ratio - 1. NOTE: if type(dir) = NODE centered: lo <- lo*ratio and hi <- hi*ratio.
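
A minimal sketch of the cell-centered case (the box and ratio here are illustrative):

    // a cell-centered box [0,7]^3 refined by 2 becomes [0,15]^3:
    // lo = 0*2 = 0 and hi = (7+1)*2 - 1 = 15 in each direction
    amrex::Box cc(amrex::IntVect(0), amrex::IntVect(7));
    amrex::Box fine = amrex::refine(cc, 2);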

◆ refine() [7/9]

template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 amrex::refine ( Dim3 const &  coarse,
IntVectND< dim > const &  ratio 
)
noexcept

◆ refine() [8/9]

Geometry amrex::refine ( Geometry const &  crse,
int  rr 
)
inline

◆ refine() [9/9]

Geometry amrex::refine ( Geometry const &  crse,
IntVect const &  rr 
)
inline

◆ reflect()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::reflect ( const IntVectND< dim > &  a,
int  ref_ix,
int  idir 
)
noexcept

Returns an IntVectND that is the reflection of the input in the plane that passes through ref_ix and is normal to coordinate direction idir.

◆ RemoveDuplicates() [1/2]

template<class T >
void amrex::RemoveDuplicates ( Vector< T > &  vec)

◆ RemoveDuplicates() [2/2]

template<class T , class H >
void amrex::RemoveDuplicates ( Vector< T > &  vec)

◆ removeInvalidParticles()

template<typename PTile >
void amrex::removeInvalidParticles ( PTile &  ptile)

◆ removeOverlap()

BoxList amrex::removeOverlap ( const BoxList bl)

Return BoxList which covers the same area but has no overlapping boxes.

◆ ResetRandomSeed()

void amrex::ResetRandomSeed ( ULong  cpu_seed,
ULong  gpu_seed 
)

◆ ResetTotalBytesAllocatedInFabsHWM()

void amrex::ResetTotalBytesAllocatedInFabsHWM ( )
noexcept

◆ RestoreRandomState()

void amrex::RestoreRandomState ( std::istream &  is,
int  nthreads_old,
int  nstep_old 
)

◆ SameIteratorsOK()

template<class PC1 , class PC2 >
bool amrex::SameIteratorsOK ( const PC1 &  pc1,
const PC2 &  pc2 
)

◆ SanitizeName()

std::string amrex::SanitizeName ( const std::string &  sname)

◆ SaveRandomState()

void amrex::SaveRandomState ( std::ostream &  os)

Save and restore random state.

◆ Saxpy() [1/2]

template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::Saxpy ( Array< MF, N > &  dst,
typename MF::value_type  a,
Array< MF, N > const &  src,
int  scomp,
int  dcomp,
int  ncomp,
IntVect const &  nghost 
)

dst += a * src

◆ Saxpy() [2/2]

template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::Saxpy ( MF &  dst,
typename MF::value_type  a,
MF const &  src,
int  scomp,
int  dcomp,
int  ncomp,
IntVect const &  nghost 
)

dst += a * src
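
For example, to accumulate a scaled source into a destination over ncomp components with no ghost cells (a minimal sketch; dst and src are assumed to be existing, compatible MultiFabs):

    amrex::Saxpy(dst, amrex::Real(0.5), src, 0, 0, ncomp, amrex::IntVect(0));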

◆ Scale() [1/2]

template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::Scale ( Array< MF, N > &  dst,
typename MF::value_type  val,
int  scomp,
int  ncomp,
int  nghost 
)

dst *= val

◆ scale() [1/2]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::scale ( const IntVectND< dim > &  p,
int  s 
)
noexcept

Returns an IntVectND obtained by multiplying each of the components of the given IntVectND by s.

◆ scale() [2/2]

AMREX_GPU_HOST_DEVICE RealVect amrex::scale ( const RealVect p,
Real  s 
)
inlinenoexcept

Returns a RealVect obtained by multiplying each of the components of the given RealVect by a scalar.

◆ Scale() [2/2]

template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::Scale ( MF &  dst,
typename MF::value_type  val,
int  scomp,
int  ncomp,
int  nghost 
)

dst *= val

◆ scatterParticles()

template<typename PTile , typename N , typename Index , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
void amrex::scatterParticles ( PTile &  dst,
const PTile &  src,
np,
const Index *  inds 
)

scatterParticles copies particles from contiguous order into an arbitrary order. Specifically, the particle at index i in src is copied to index inds[i] in dst.

Template Parameters
PTile: the particle tile type
N: the size type, e.g. Long
Index: the index type, e.g. unsigned int
Parameters
dst: the destination tile
src: the source tile
np: the number of particles
inds: pointer to the permutation array

◆ second()

double amrex::second ( )
noexcept

◆ senseiNewMacro() [1/2]

amrex::senseiNewMacro ( AmrDataAdaptor  )

◆ senseiNewMacro() [2/2]

amrex::senseiNewMacro ( AmrMeshDataAdaptor  )

◆ SerializeStringArray()

amrex::Vector< char > amrex::SerializeStringArray ( const Vector< std::string > &  stringArray)

◆ setBC() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::setBC ( const Box bx,
const Box domain,
const BCRec bc_dom,
BCRec bcr 
)
noexcept

Function for setting a BC.

◆ setBC() [2/2]

void amrex::setBC ( const Box bx,
const Box domain,
int  src_comp,
int  dest_comp,
int  ncomp,
const Vector< BCRec > &  bc_dom,
Vector< BCRec > &  bcr 
)
noexcept

Function for setting array of BCs.

◆ setBndry() [1/2]

template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::setBndry ( Array< MF, N > &  dst,
typename MF::value_type  val,
int  scomp,
int  ncomp 
)

dst = val in ghost cells.

◆ setBndry() [2/2]

template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::setBndry ( MF &  dst,
typename MF::value_type  val,
int  scomp,
int  ncomp 
)

dst = val in ghost cells.

◆ SetErrorHandler()

void amrex::SetErrorHandler ( amrex::ErrorHandler  f)

◆ setFPExcept()

FPExcept amrex::setFPExcept ( FPExcept  excepts)

Set FP exception traps (Linux only). This enables the flags that are set and disables those that are unset. The returned previous setting can be passed back later to restore it.
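
A sketch of the save-and-restore pattern (assuming the FPExcept::none enumerator):

    auto prev = amrex::setFPExcept(amrex::FPExcept::none); // trap nothing
    // ... code that may raise benign FP exceptions ...
    amrex::setFPExcept(prev); // restore the previous traps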

◆ SetHDF5fapl()

static void amrex::SetHDF5fapl ( hid_t  fapl,
MPI_Comm  comm 
)
static

◆ SetInitSNaN()

void amrex::SetInitSNaN ( bool  v)
noexcept

◆ SetParticleIDandCPU()

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE std::uint64_t amrex::SetParticleIDandCPU ( Long  id,
int  cpu 
)
noexcept

Set the idcpu value at once, based on a particle id and cpu id.

This can be used in initialization and assignments, to avoid writing twice into the same memory bank.

◆ setPoutBaseName()

void amrex::setPoutBaseName ( const std::string &  a_Name)

Set the base name for the parallel output files used by pout().

When in parallel, changes the base name of the pout() files. If pout() has already been called, it closes the current output file and opens a new one (unless the name is the same, in which case it does nothing). In serial, ignores the argument and does nothing.

◆ setVal() [1/2]

template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::setVal ( Array< MF, N > &  dst,
typename MF::value_type  val 
)

dst = val

◆ setVal() [2/2]

template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::setVal ( MF &  dst,
typename MF::value_type  val 
)

dst = val

◆ SetVerbose()

void amrex::SetVerbose ( int  v)
noexcept

◆ shift() [1/2]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::shift ( const BoxND< dim > &  b,
const IntVectND< dim > &  nzones 
)
noexcept

◆ shift() [2/2]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::shift ( const BoxND< dim > &  b,
int  dir,
int  nzones 
)
noexcept

Return a BoxND with indices shifted by nzones in dir direction.

◆ SimpleRemoveOverlap()

void amrex::SimpleRemoveOverlap ( BoxArray ba)

◆ single_level_redistribute()

void amrex::single_level_redistribute ( amrex::MultiFab div_tmp_in,
amrex::MultiFab div_out,
int  div_comp,
int  ncomp,
const amrex::Geometry geom 
)

◆ single_level_weighted_redistribute()

void amrex::single_level_weighted_redistribute ( amrex::MultiFab div_tmp_in,
amrex::MultiFab div_out,
const amrex::MultiFab weights,
int  div_comp,
int  ncomp,
const amrex::Geometry geom,
bool  use_wts_in_divnc 
)

◆ single_product()

template<typename... Ls, typename A >
constexpr auto amrex::single_product ( TypeList< Ls... >  ,
 
)
constexpr

◆ single_task() [1/2]

template<typename L >
void amrex::single_task ( gpuStream_t  stream,
L const &  f 
)
noexcept

◆ single_task() [2/2]

template<typename L >
void amrex::single_task ( L &&  f)
noexcept

◆ SingleLevelToBlueprint() [1/2]

void amrex::SingleLevelToBlueprint ( const MultiFab mf,
const Vector< std::string > &  varnames,
const Geometry geom,
Real  time_value,
int  level_step,
conduit::Node &  bp_mesh 
)

◆ SingleLevelToBlueprint() [2/2]

void amrex::SingleLevelToBlueprint ( const MultiFab mf,
const Vector< std::string > &  varnames,
const Geometry geom,
Real  time_value,
int  level_step,
Node &  res 
)

◆ Sleep()

void amrex::Sleep ( double  sleepsec)

◆ split()

std::vector< std::string > amrex::split ( std::string const &  s,
std::string const &  sep 
)

Split a string using given tokens in sep.

◆ SpMV()

template<typename T >
void amrex::SpMV ( AlgVector< T > &  y,
SpMatrix< T > const &  A,
AlgVector< T > const &  x 
)

◆ sqrt()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE GpuComplex<T> amrex::sqrt ( const GpuComplex< T > &  a_z)
noexcept

Return the square root of a complex number.

◆ StablePartition() [1/3]

template<typename T , typename F >
int amrex::StablePartition ( Gpu::DeviceVector< T > &  v,
F &&  f 
)

A GPU-capable partition function for contiguous data.

After calling this, all the items for which the predicate is true will be before the items for which the predicate is false in the input array.

This version is stable, meaning that, within each side of the resulting array, order is maintained - if element i was before element j in the input, then it will also be before j in the output. If you don't care about this property, use amrex::Partition instead.

Template Parameters
T: type of the data to be partitioned.
F: type of the predicate function.
Parameters
v: a Gpu::DeviceVector with the data to be partitioned.
f: predicate function that returns 1 or 0 for each input

Returns the index of the first element for which f is 0.
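
A minimal usage sketch (the vector size and predicate are illustrative, and v is assumed to already hold data):

    amrex::Gpu::DeviceVector<int> v(128); // assume filled with data
    int n_true = amrex::StablePartition(v,
        [=] AMREX_GPU_DEVICE (int x) -> int { return x > 0 ? 1 : 0; });
    // elements [0, n_true) satisfy the predicate, in their original relative order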

◆ StablePartition() [2/3]

template<typename T , typename F >
int amrex::StablePartition ( T *  data,
int  beg,
int  end,
F &&  f 
)

A GPU-capable partition function for contiguous data.

After calling this, all the items for which the predicate is true will be before the items for which the predicate is false in the input array.

This version is stable, meaning that, within each side of the resulting array, order is maintained - if element i was before element j in the input, then it will also be before j in the output. If you don't care about this property, use amrex::Partition instead.

Template Parameters
T: type of the data to be partitioned.
F: type of the predicate function.
Parameters
data: pointer to the data to be partitioned
beg: index at which to start
end: index at which to stop (exclusive)
f: predicate function that returns 1 or 0 for each input

Returns the index of the first element for which f is 0.

◆ StablePartition() [3/3]

template<typename T , typename F >
int amrex::StablePartition ( T *  data,
int  n,
F &&  f 
)

A GPU-capable partition function for contiguous data.

After calling this, all the items for which the predicate is true will be before the items for which the predicate is false in the input array.

This version is stable, meaning that, within each side of the resulting array, order is maintained - if element i was before element j in the input, then it will also be before j in the output. If you don't care about this property, use amrex::Partition instead.

Template Parameters
T: type of the data to be partitioned.
F: type of the predicate function.
Parameters
data: pointer to the data to be partitioned
n: the number of elements in the array
f: predicate function that returns 1 or 0 for each input

Returns the index of the first element for which f is 0.

◆ StateRedistribute()

void amrex::StateRedistribute ( amrex::Box const &  bx,
int  ncomp,
amrex::Array4< amrex::Real > const &  U_out,
amrex::Array4< amrex::Real > const &  U_in,
amrex::Array4< amrex::EBCellFlag const > const &  flag,
amrex::Array4< amrex::Real const > const &  vfrac,
AMREX_D_DECL(amrex::Array4< amrex::Real const > const &fcx, amrex::Array4< amrex::Real const > const &fcy, amrex::Array4< amrex::Real const > const &fcz)  ,
amrex::Array4< amrex::Real const > const &  ccent,
amrex::BCRec const *  d_bcrec_ptr,
amrex::Array4< int const > const &  itracker,
amrex::Array4< amrex::Real const > const &  nrs,
amrex::Array4< amrex::Real const > const &  alpha,
amrex::Array4< amrex::Real const > const &  nbhd_vol,
amrex::Array4< amrex::Real const > const &  cent_hat,
amrex::Geometry const &  geom,
int  max_order = 2 
)

◆ Subtract() [1/2]

template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::Subtract ( FabArray< FAB > &  dst,
FabArray< FAB > const &  src,
int  srccomp,
int  dstcomp,
int  numcomp,
const IntVect nghost 
)

◆ Subtract() [2/2]

template<class FAB , class bar = std::enable_if_t<IsBaseFab<FAB>::value>>
void amrex::Subtract ( FabArray< FAB > &  dst,
FabArray< FAB > const &  src,
int  srccomp,
int  dstcomp,
int  numcomp,
int  nghost 
)

◆ sum_fine_to_coarse()

void amrex::sum_fine_to_coarse ( const MultiFab S_Fine,
MultiFab S_crse,
int  scomp,
int  ncomp,
const IntVect ratio,
const Geometry cgeom,
const Geometry fgeom 
)

Add a coarsened version of the data contained in the S_fine MultiFab to S_crse, including ghost cells.

◆ sumToLine()

Gpu::HostVector< Real > amrex::sumToLine ( MultiFab const &  mf,
int  icomp,
int  ncomp,
Box const &  domain,
int  direction,
bool  local = false 
)

Sum MultiFab data to line.

Return a HostVector that contains the sum of the given MultiFab data in the plane with the given normal direction. The size of the vector is domain.length(direction) x ncomp. The vector is actually a 2D array, where the element for component icomp at spatial index k is at [icomp+ncomp*k].

Parameters
mf: MultiFab data for summing
icomp: starting component
ncomp: number of components
domain: the domain
direction: the direction of the line
local: If false, reduce across MPI processes.
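
A minimal usage sketch, assuming mf and geom are an existing MultiFab and its Geometry:

    // sum component 0 over each plane normal to the z-direction
    auto v = amrex::sumToLine(mf, 0, 1, geom.Domain(), 2);
    // v[k] holds the sum over the plane at index k (MPI-reduced, since local=false)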

◆ surroundingNodes() [1/3]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::surroundingNodes ( const BoxND< dim > &  b)
noexcept

Returns a BoxND with NODE based coordinates in all directions that encloses BoxND b.

◆ surroundingNodes() [2/3]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::surroundingNodes ( const BoxND< dim > &  b,
Direction  d 
)
noexcept

◆ surroundingNodes() [3/3]

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE BoxND<dim> amrex::surroundingNodes ( const BoxND< dim > &  b,
int  dir 
)
noexcept

Returns a BoxND with NODE based coordinates in direction dir that encloses BoxND b. NOTE: equivalent to b.convert(dir,NODE) NOTE: error if b.type(dir) == NODE.

◆ Swap()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::Swap ( T &  t1,
T &  t2 
)
noexcept

◆ swapBytes() [1/6]

std::int16_t amrex::swapBytes ( std::int16_t  val)

◆ swapBytes() [2/6]

std::int32_t amrex::swapBytes ( std::int32_t  val)

◆ swapBytes() [3/6]

std::int64_t amrex::swapBytes ( std::int64_t  val)

◆ swapBytes() [4/6]

std::uint16_t amrex::swapBytes ( std::uint16_t  val)

◆ swapBytes() [5/6]

std::uint32_t amrex::swapBytes ( std::uint32_t  val)

◆ swapBytes() [6/6]

std::uint64_t amrex::swapBytes ( std::uint64_t  val)

◆ swapParticle()

template<typename T_ParticleType , int NAR, int NAI>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::swapParticle ( const ParticleTileData< T_ParticleType, NAR, NAI > &  dst,
const ParticleTileData< T_ParticleType, NAR, NAI > &  src,
int  src_i,
int  dst_i 
)
noexcept

A general single particle swapping routine that can run on the GPU.

Template Parameters
T_ParticleType: the particle type
NAR: number of reals in the struct-of-arrays
NAI: number of ints in the struct-of-arrays
Parameters
dst: the destination tile
src: the source tile
src_i: the index in the source to read from
dst_i: the index in the destination to write to

◆ SyncStrings()

void amrex::SyncStrings ( const Vector< std::string > &  localStrings,
Vector< std::string > &  syncedStrings,
bool &  alreadySynced 
)

◆ TagCutCells()

void amrex::TagCutCells ( TagBoxArray tags,
const MultiFab state 
)

◆ TagVolfrac()

void amrex::TagVolfrac ( TagBoxArray tags,
const MultiFab volfrac,
Real  tol 
)

◆ The_Arena()

Arena * amrex::The_Arena ( )

◆ The_Async_Arena()

Arena * amrex::The_Async_Arena ( )

◆ The_Comms_Arena()

Arena * amrex::The_Comms_Arena ( )

◆ The_Cpu_Arena()

Arena * amrex::The_Cpu_Arena ( )

◆ The_Device_Arena()

Arena * amrex::The_Device_Arena ( )

◆ The_Managed_Arena()

Arena * amrex::The_Managed_Arena ( )

◆ The_Pinned_Arena()

Arena * amrex::The_Pinned_Arena ( )

◆ thePlotFileType()

static std::string amrex::thePlotFileType ( )
static

◆ Tie()

template<typename... Args>
constexpr AMREX_GPU_HOST_DEVICE GpuTuple<Args&...> amrex::Tie ( Args &...  args)
constexprnoexcept

◆ TilingIfNotGPU()

bool amrex::TilingIfNotGPU ( )
inlinenoexcept

◆ ToArray4()

template<class Tto , class Tfrom >
AMREX_GPU_HOST_DEVICE Array4<Tto> amrex::ToArray4 ( Array4< Tfrom > const &  a_in)
noexcept

◆ Tokenize()

const std::vector< std::string > & amrex::Tokenize ( const std::string &  instr,
const std::string &  separators 
)

Splits "instr" into separate pieces based on "separators".

◆ ToLongMultiFab()

FabArray< BaseFab< Long > > amrex::ToLongMultiFab ( const iMultiFab imf)

Convert an iMultiFab to a FabArray of Long data.

◆ toLower()

std::string amrex::toLower ( std::string  s)

Converts all characters of the string into lower case based on std::locale.

◆ ToMultiFab()

MultiFab amrex::ToMultiFab ( const iMultiFab imf)

Convert iMultiFab to MultiFab.

◆ TotalBytesAllocatedInFabs()

Long amrex::TotalBytesAllocatedInFabs ( )
noexcept

◆ TotalBytesAllocatedInFabsHWM()

Long amrex::TotalBytesAllocatedInFabsHWM ( )
noexcept

◆ TotalCellsAllocatedInFabs()

Long amrex::TotalCellsAllocatedInFabs ( )
noexcept

◆ TotalCellsAllocatedInFabsHWM()

Long amrex::TotalCellsAllocatedInFabsHWM ( )
noexcept

◆ toUpper()

std::string amrex::toUpper ( std::string  s)

Converts all characters of the string into uppercase based on std::locale.

◆ transformParticles() [1/4]

template<typename DstTile , typename SrcTile , typename F >
void amrex::transformParticles ( DstTile &  dst,
const SrcTile &  src,
F &&  f 
)
noexcept

Apply the function f to all the particles in src, writing the result to dst. This version operates on all the particles in src.

Template Parameters
DstTile: the dst particle tile type
SrcTile: the src particle tile type
F: a function object
Parameters
dst: the destination tile
src: the source tile
f: the function that will be applied to each particle

◆ transformParticles() [2/4]

template<typename DstTile , typename SrcTile , typename Index , typename N , typename F , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
void amrex::transformParticles ( DstTile &  dst,
const SrcTile &  src,
Index  src_start,
Index  dst_start,
n,
F const &  f 
)
noexcept

Apply the function f to particles in src, writing the result to dst. This version applies the function to n particles starting at index src_start, writing the result starting at dst_start.

Template Parameters
DstTile: the dst particle tile type
SrcTile: the src particle tile type
Index: the index type, e.g. unsigned int
N: the size type, e.g. Long
F: a function object
Parameters
dst: the destination tile
src: the source tile
src_start: the offset at which to start reading particles from src
dst_start: the offset at which to start writing particles to dst
f: the function that will be applied to each particle

◆ transformParticles() [3/4]

template<typename DstTile1 , typename DstTile2 , typename SrcTile , typename F >
void amrex::transformParticles ( DstTile1 &  dst1,
DstTile2 &  dst2,
const SrcTile &  src,
F &&  f 
)
noexcept

Apply the function f to all the particles in src, writing the results to dst1 and dst2. This version operates on all the particles in src.

Template Parameters
DstTile1: the dst1 particle tile type
DstTile2: the dst2 particle tile type
SrcTile: the src particle tile type
F: a function object
Parameters
dst1: the first destination tile
dst2: the second destination tile
src: the source tile
f: the function that will be applied to each particle

◆ transformParticles() [4/4]

template<typename DstTile1 , typename DstTile2 , typename SrcTile , typename Index , typename N , typename F , std::enable_if_t< std::is_integral_v< Index >, int > foo = 0>
void amrex::transformParticles ( DstTile1 &  dst1,
DstTile2 &  dst2,
const SrcTile &  src,
Index  src_start,
Index  dst1_start,
Index  dst2_start,
n,
F const &  f 
)
noexcept

Apply the function f to particles in src, writing the results to dst1 and dst2. This version applies the function to n particles starting at index src_start, writing the result starting at dst1_start and dst2_start.

Template Parameters
DstTile1: the dst1 particle tile type
DstTile2: the dst2 particle tile type
SrcTile: the src particle tile type
Index: the index type, e.g. unsigned int
N: the size type, e.g. Long
F: a function object
Parameters
dst1: the first destination tile
dst2: the second destination tile
src: the source tile
src_start: the offset at which to start reading particles from src
dst1_start: the offset at which to start writing particles to dst1
dst2_start: the offset at which to start writing particles to dst2
f: the function that will be applied to each particle

◆ tridiagonal_solve() [1/2]

AMREX_FORCE_INLINE void amrex::tridiagonal_solve ( Array1D< Real, 0, 31 > &  a_ls,
Array1D< Real, 0, 31 > &  b_ls,
Array1D< Real, 0, 31 > &  c_ls,
Array1D< Real, 0, 31 > &  r_ls,
Array1D< Real, 0, 31 > &  u_ls,
Array1D< Real, 0, 31 > &  gam,
int  ilen 
)
noexcept

◆ tridiagonal_solve() [2/2]

template<typename T >
AMREX_FORCE_INLINE void amrex::tridiagonal_solve ( Array1D< T, 0, 31 > &  a_ls,
Array1D< T, 0, 31 > &  b_ls,
Array1D< T, 0, 31 > &  c_ls,
Array1D< T, 0, 31 > &  r_ls,
Array1D< T, 0, 31 > &  u_ls,
Array1D< T, 0, 31 > &  gam,
int  ilen 
)
noexcept

◆ trim()

std::string amrex::trim ( std::string  s,
std::string const &  space = " \t" 
)

Trim leading and trailing characters that appear in the optional space argument (by default, spaces and tabs).

◆ TupleCat() [1/3]

template<typename TP >
constexpr AMREX_GPU_HOST_DEVICE auto amrex::TupleCat ( TP &&  a) -> typename detail::tuple_cat_result<detail::tuple_decay_t<TP> >::type
constexpr

◆ TupleCat() [2/3]

template<typename TP1 , typename TP2 >
constexpr AMREX_GPU_HOST_DEVICE auto amrex::TupleCat ( TP1 &&  a,
TP2 &&  b 
) -> typename detail::tuple_cat_result<detail::tuple_decay_t<TP1>, detail::tuple_decay_t<TP2> >::type
constexpr

◆ TupleCat() [3/3]

template<typename TP1 , typename TP2 , typename... TPs>
constexpr AMREX_GPU_HOST_DEVICE auto amrex::TupleCat ( TP1 &&  a,
TP2 &&  b,
TPs &&...  args 
) -> typename detail::tuple_cat_result<detail::tuple_decay_t<TP1>, detail::tuple_decay_t<TP2>, detail::tuple_decay_t<TPs>...>::type
constexpr

◆ TupleSplit()

template<std::size_t... Is, typename... Args>
constexpr AMREX_GPU_HOST_DEVICE auto amrex::TupleSplit ( const GpuTuple< Args... > &  tup)
constexprnoexcept

Returns a GpuTuple of GpuTuples obtained by splitting the input GpuTuple according to the sizes specified by the template arguments.
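
A minimal sketch, assuming amrex::makeTuple and amrex::get are available for GpuTuple:

    auto tup   = amrex::makeTuple(1, 2.0, 3, 4);
    auto parts = amrex::TupleSplit<2,2>(tup);
    // amrex::get<0>(parts) is a GpuTuple holding (1, 2.0)
    // amrex::get<1>(parts) is a GpuTuple holding (3, 4)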

◆ tupleToArray() [1/2]

template<typename T >
constexpr AMREX_GPU_HOST_DEVICE auto amrex::tupleToArray ( GpuTuple< T > const &  tup)
constexpr

◆ tupleToArray() [2/2]

template<typename T , typename T2 , typename... Ts, std::enable_if_t< Same< T, T2, Ts... >::value, int > = 0>
constexpr AMREX_GPU_HOST_DEVICE auto amrex::tupleToArray ( GpuTuple< T, T2, Ts... > const &  tup)
constexpr

Convert GpuTuple<T,T2,Ts...> to GpuArray.

◆ ubound() [1/2]

template<class T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 amrex::ubound ( Array4< T > const &  a)
noexcept

◆ ubound() [2/2]

template<int dim, std::enable_if_t<(1<=dim &&dim<=3), int > = 0>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE Dim3 amrex::ubound ( BoxND< dim > const &  box)
noexcept

◆ ubound_iv()

template<int dim>
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE IntVectND<dim> amrex::ubound_iv ( BoxND< dim > const &  box)
noexcept

◆ UniqueRandomSubset()

void amrex::UniqueRandomSubset ( Vector< int > &  uSet,
int  setSize,
int  poolSize,
bool  printSet 
)

Create a unique subset of random numbers from a pool of integers in the range [0, poolSize - 1]. The set is stored in the order the numbers are found. setSize must be <= poolSize, and uSet will be resized to setSize. If you want all processors to have the same set, call this on one processor and broadcast the array.
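
A sketch of the broadcast pattern mentioned above (the setSize and poolSize values are illustrative):

    amrex::Vector<int> uSet;
    if (amrex::ParallelDescriptor::IOProcessor()) {
        amrex::UniqueRandomSubset(uSet, 16, 1000, false);
    }
    uSet.resize(16); // non-root ranks need space before the broadcast
    amrex::ParallelDescriptor::Bcast(uSet.data(), uSet.size(),
                                     amrex::ParallelDescriptor::IOProcessorNumber());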

◆ UniqueString()

std::string amrex::UniqueString ( )

Create a (probably) unique string.

◆ unpackBuffer()

template<class PC , class Buffer , class UnpackPolicy , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
void amrex::unpackBuffer ( PC &  pc,
const ParticleCopyPlan plan,
const Buffer &  snd_buffer,
UnpackPolicy const &  policy 
)

◆ unpackRemotes()

template<class PC , class Buffer , class UnpackPolicy , std::enable_if_t< IsParticleContainer< PC >::value, int > foo = 0>
void amrex::unpackRemotes ( PC &  pc,
const ParticleCopyPlan plan,
Buffer &  rcv_buffer,
UnpackPolicy const &  policy 
)

◆ UnSerializeStringArray()

amrex::Vector< std::string > amrex::UnSerializeStringArray ( const Vector< char > &  charArray)

◆ update_fab_stats() [1/2]

void amrex::update_fab_stats ( Long  n,
Long  s,
size_t  szt 
)
noexcept

◆ update_fab_stats() [2/2]

void amrex::update_fab_stats ( Long  n,
Long  s,
std::size_t  szt 
)
noexcept

◆ upper_bound()

template<typename ItType , typename ValType >
AMREX_GPU_HOST_DEVICE ItType amrex::upper_bound ( ItType  first,
ItType  last,
const ValType &  val 
)

◆ UtilCreateCleanDirectory()

void amrex::UtilCreateCleanDirectory ( const std::string &  path,
bool  callbarrier = true 
)

Create a new directory, renaming the old one if it exists.

◆ UtilCreateDirectory()

bool amrex::UtilCreateDirectory ( const std::string &  path,
mode_t  mode,
bool  verbose = false 
)

Creates the specified directories. path may be either a full pathname or a relative pathname. It will create all the directories in the pathname, if they don't already exist, so that on successful return the pathname refers to an existing directory. Returns true or false depending upon whether or not it was successful. Also returns true if path is NULL or "/". mode is the mode passed to mkdir() for any directories that must be created (for example: 0755). verbose will print out the directory creation steps.

For example, if it is passed the string "/a/b/c/d/e/f/g", it will return successfully when all the directories in the pathname exist; i.e. when the full pathname is a valid directory.

In a Windows environment, the path separator is a '\', so if using the example given above you must pass the string "\\a\\b\\c\\d\\e\\f\\g". (Note that you must escape the backslash in a character string.)

Only the last mkdir return value is checked for success as errno may not be set to EEXIST if a directory exists but mkdir has other reasons to fail such as part of the path being a read-only filesystem (EROFS). If this function fails, it will print out an error stack.
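
A minimal usage sketch (the path and mode are illustrative; amrex::CreateDirectoryFailed aborts with an error message):

    if (! amrex::UtilCreateDirectory("plotfiles/run1", 0755)) {
        amrex::CreateDirectoryFailed("plotfiles/run1");
    }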

◆ UtilCreateDirectoryDestructive()

void amrex::UtilCreateDirectoryDestructive ( const std::string &  path,
bool  callbarrier = true 
)

Create a new directory, removing old one if it exists. This will only work on unix systems, as it has a system call.

◆ UtilRenameDirectoryToOld()

void amrex::UtilRenameDirectoryToOld ( const std::string &  path,
bool  callbarrier = true 
)

Rename a current directory if it exists.

◆ Verbose()

int amrex::Verbose ( )
noexcept

◆ Version()

std::string amrex::Version ( )

The AMReX "git describe" version.

◆ VisMFBaseName()

std::string amrex::VisMFBaseName ( const std::string &  filename)

◆ VisMFWrite()

VisMF::FabOnDisk amrex::VisMFWrite ( const FArrayBox fabIn,
const std::string &  filename,
std::ostream &  os,
long &  bytes,
int  whichPlane 
)

◆ VisMFWriteHeader()

long amrex::VisMFWriteHeader ( const std::string &  mf_name,
VisMF::Header hdr,
int  whichPlane 
)

◆ volumeWeightedSum()

Real amrex::volumeWeightedSum ( Vector< MultiFab const * > const &  mf,
int  icomp,
Vector< Geometry > const &  geom,
Vector< IntVect > const &  ratio,
bool  local = false 
)

Volume weighted sum for a vector of MultiFabs.

Return a volume weighted sum of MultiFabs of AMR data. The sum is performed on a single component of the data. If the MultiFabs are built with EB Factories, the cut cell volume fraction will be included in the weight.

◆ Warning() [1/2]

AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::Warning ( const char *  msg)

◆ Warning() [2/2]

void amrex::Warning ( const std::string &  msg)

Print out a warning message to cerr.

◆ Warning_host()

void amrex::Warning_host ( const char *  msg)

◆ Write()

template<typename FAB >
std::enable_if_t<std::is_same_v<FAB,IArrayBox> > amrex::Write ( const FabArray< FAB > &  fa,
const std::string &  name 
)

Write iMultiFab/FabArray<IArrayBox>

This writes an iMultiFab/FabArray<IArrayBox> to files on disk, including a clear text file NAME_H and binary files NAME_D_00000 etc.

Parameters
fa: the iMultiFab to be written
name: the base name for the files

◆ Write2DBoxFrom3D()

void amrex::Write2DBoxFrom3D ( const Box box,
std::ostream &  os,
int  whichPlane 
)

◆ Write2DFab()

void amrex::Write2DFab ( const string &  filenameprefix,
const int  xdim,
const int  ydim,
const double *  data 
)

◆ Write2DText()

void amrex::Write2DText ( const string &  filenameprefix,
const int  xdim,
const int  ydim,
const double *  data 
)

◆ Write3DFab()

void amrex::Write3DFab ( const string &  filenameprefix,
const int  xdim,
const int  ydim,
const int  zdim,
const double *  data 
)

◆ write_to_stderr_without_buffering()

void amrex::write_to_stderr_without_buffering ( const char *  str)

This is used by amrex::Error(), amrex::Abort(), and amrex::Assert() to ensure that no additional heap-based memory is allocated when writing the message to stderr.

◆ WriteBlueprintFiles()

void amrex::WriteBlueprintFiles ( const conduit::Node &  bp_mesh,
const std::string &  fname_base,
int  step,
const std::string &  protocol 
)

◆ writeData() [1/4]

void amrex::writeData ( double const *  data,
std::size_t  size,
std::ostream &  os 
)
inline

◆ writeData() [2/4]

void amrex::writeData ( float const *  data,
std::size_t  size,
std::ostream &  os 
)
inline

◆ writeData() [3/4]

void amrex::writeData ( int const *  data,
std::size_t  size,
std::ostream &  os 
)
inline

◆ writeData() [4/4]

void amrex::writeData ( Long const *  data,
std::size_t  size,
std::ostream &  os 
)
inline

◆ writeDoubleData()

void amrex::writeDoubleData ( const double *  data,
std::size_t  size,
std::ostream &  os,
const RealDescriptor rd = FPC::Native64RealDescriptor() 
)

Write double data to the ostream. The arguments are a pointer to data to write, the size of the data buffer, the ostream, and an optional RealDescriptor that describes the data format to use for writing. If no RealDescriptor is provided, the data will be written using the native format for your machine.

◆ WriteEBSurface()

void amrex::WriteEBSurface ( const BoxArray ba,
const DistributionMapping dmap,
const Geometry geom,
const EBFArrayBoxFactory ebf 
)

◆ WriteFab()

void amrex::WriteFab ( const string &  filenameprefix,
const int  xdim,
const int  ydim,
const double *  data 
)

◆ writeFabs() [1/2]

void amrex::writeFabs ( const MultiFab mf,
const std::string &  name 
)

Write each fab individually.

◆ writeFabs() [2/2]

void amrex::writeFabs ( const MultiFab mf,
int  comp,
int  ncomp,
const std::string &  name 
)

◆ writeFloatData()

void amrex::writeFloatData ( const float *  data,
std::size_t  size,
std::ostream &  os,
const RealDescriptor rd = FPC::Native32RealDescriptor() 
)

Write float data to the ostream. The arguments are a pointer to data to write, the size of the data buffer, the ostream, and an optional RealDescriptor that describes the data format to use for writing. If no RealDescriptor is provided, the data will be written using the native format for your machine.

◆ WriteGenericPlotfileHeader()

void amrex::WriteGenericPlotfileHeader ( std::ostream &  HeaderFile,
int  nlevels,
const Vector< BoxArray > &  bArray,
const Vector< std::string > &  varnames,
const Vector< Geometry > &  geom,
Real  time,
const Vector< int > &  level_steps,
const Vector< IntVect > &  ref_ratio,
const std::string &  versionName = "HyperCLaw-V1.1",
const std::string &  levelPrefix = "Level_",
const std::string &  mfPrefix = "Cell" 
)

Write a generic plotfile header to the file plotfilename/Header. The plotfilename directory must already exist.

◆ WriteGenericPlotfileHeaderHDF5()

static void amrex::WriteGenericPlotfileHeaderHDF5 ( hid_t  fid,
int  nlevels,
const Vector< const MultiFab * > &  mf,
const Vector< BoxArray > &  bArray,
const Vector< std::string > &  varnames,
const Vector< Geometry > &  geom,
Real  time,
const Vector< int > &  level_steps,
const Vector< IntVect > &  ref_ratio,
const std::string &  versionName,
const std::string &  levelPrefix,
const std::string &  mfPrefix,
const Vector< std::string > &  extra_dirs 
)
static

◆ writeIntData() [1/2]

template<typename To , typename From >
void amrex::writeIntData ( const From *  data,
std::size_t  size,
std::ostream &  os,
const amrex::IntDescriptor id 
)

◆ writeIntData() [2/2]

void amrex::writeIntData ( const int *  data,
std::size_t  size,
std::ostream &  os,
const IntDescriptor id = FPC::NativeIntDescriptor() 
)

Functions for writing integer data to disk in a portable, self-describing manner.

Write int data to the ostream. The arguments are a pointer to data to write, the size of the data buffer, the ostream, and an optional IntDescriptor that describes the data format to use for writing. If no IntDescriptor is provided, the data will be written using the native format for your machine.

◆ writeLongData()

void amrex::writeLongData ( const Long *  data,
std::size_t  size,
std::ostream &  os,
const IntDescriptor id = FPC::NativeLongDescriptor() 
)

Write long data to the ostream. The arguments are a pointer to data to write, the size of the data buffer, the ostream, and an optional IntDescriptor that describes the data format to use for writing. If no IntDescriptor is provided, the data will be written using the native format for your machine.

◆ WriteMLMF()

void amrex::WriteMLMF ( const std::string &  plotfilename,
const Vector< const MultiFab * > &  mf,
const Vector< Geometry > &  geom 
)

Write a plotfile to disk given a plotfile name, a vector of MultiFabs, and a vector of Geometrys. Variable names are written as "Var0", "Var1", etc. The refinement ratio is computed from the Geometry vector; "time" and "level_steps" are set to zero.

Parameters
plotfilename: the plotfile name
mf: the vector of MultiFabs
geom: the vector of Geometrys
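
A minimal usage sketch for two levels (mf0, mf1, geom0, and geom1 are assumed to exist):

    amrex::Vector<const amrex::MultiFab*> mfs   = {&mf0, &mf1};
    amrex::Vector<amrex::Geometry>        geoms = {geom0, geom1};
    amrex::WriteMLMF("plt00000", mfs, geoms);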

◆ WriteMultiLevelPlotfile()

void amrex::WriteMultiLevelPlotfile ( const std::string &  plotfilename,
int  nlevels,
const Vector< const MultiFab * > &  mf,
const Vector< std::string > &  varnames,
const Vector< Geometry > &  geom,
Real  time,
const Vector< int > &  level_steps,
const Vector< IntVect > &  ref_ratio,
const std::string &  versionName,
const std::string &  levelPrefix,
const std::string &  mfPrefix,
const Vector< std::string > &  extra_dirs 
)

◆ WriteMultiLevelPlotfileHDF5()

void amrex::WriteMultiLevelPlotfileHDF5 ( const std::string &  plotfilename,
int  nlevels,
const Vector< const MultiFab * > &  mf,
const Vector< std::string > &  varnames,
const Vector< Geometry > &  geom,
Real  time,
const Vector< int > &  level_steps,
const Vector< IntVect > &  ref_ratio,
const std::string &  compression,
const std::string &  versionName,
const std::string &  levelPrefix,
const std::string &  mfPrefix,
const Vector< std::string > &  extra_dirs 
)

◆ WriteMultiLevelPlotfileHDF5MultiDset()

void amrex::WriteMultiLevelPlotfileHDF5MultiDset ( const std::string &  plotfilename,
int  nlevels,
const Vector< const MultiFab * > &  mf,
const Vector< std::string > &  varnames,
const Vector< Geometry > &  geom,
Real  time,
const Vector< int > &  level_steps,
const Vector< IntVect > &  ref_ratio,
const std::string &  compression,
const std::string &  versionName,
const std::string &  levelPrefix,
const std::string &  mfPrefix,
const Vector< std::string > &  extra_dirs 
)

◆ WriteMultiLevelPlotfileHDF5SingleDset()

void amrex::WriteMultiLevelPlotfileHDF5SingleDset ( const std::string &  plotfilename,
int  nlevels,
const Vector< const MultiFab * > &  mf,
const Vector< std::string > &  varnames,
const Vector< Geometry > &  geom,
Real  time,
const Vector< int > &  level_steps,
const Vector< IntVect > &  ref_ratio,
const std::string &  compression,
const std::string &  versionName,
const std::string &  levelPrefix,
const std::string &  mfPrefix,
const Vector< std::string > &  extra_dirs 
)

◆ WriteMultiLevelPlotfileHeaders()

void amrex::WriteMultiLevelPlotfileHeaders ( const std::string &  plotfilename,
int  nlevels,
const Vector< const MultiFab * > &  mf,
const Vector< std::string > &  varnames,
const Vector< Geometry > &  geom,
Real  time,
const Vector< int > &  level_steps,
const Vector< IntVect > &  ref_ratio,
const std::string &  versionName,
const std::string &  levelPrefix,
const std::string &  mfPrefix,
const Vector< std::string > &  extra_dirs 
)

◆ writePlotFile() [1/3]

void amrex::writePlotFile ( const char *  name,
const amrex::MultiFab mf,
const amrex::Geometry geom,
const amrex::IntVect refRatio,
amrex::Real  bgVal,
const amrex::Vector< std::string > &  names 
)

◆ writePlotFile() [2/3]

void amrex::writePlotFile ( const char *  name,
const MultiFab &  mf,
const Geometry &  geom,
const IntVect &  refRatio,
Real  bgVal,
const Vector< std::string > &  names 
)

◆ writePlotFile() [3/3]

void amrex::writePlotFile ( const std::string &  dir,
std::ostream &  os,
int  level,
const MultiFab &  mf,
const Geometry &  geom,
const IntVect &  refRatio,
Real  bgVal,
const Vector< std::string > &  names 
)

◆ WritePlotfile()

void amrex::WritePlotfile ( const std::string &  pfversion,
const Vector< MultiFab > &  data,
const Real  time,
const Vector< Real > &  probLo,
const Vector< Real > &  probHi,
const Vector< int > &  refRatio,
const Vector< Box > &  probDomain,
const Vector< Vector< Real > > &  dxLevel,
const int  coordSys,
const std::string &  oFile,
const Vector< std::string > &  names,
const bool  verbose,
const bool  isCartGrid,
const Real *  vfeps,
const int *  levelSteps 
)

◆ WritePlotFile() [1/2]

void amrex::WritePlotFile ( const Vector< MultiFab * > &  mfa,
AmrData &  amrdToMimic,
const std::string &  oFile,
bool  verbose,
const Vector< std::string > &  varNames 
)

◆ WritePlotFile() [2/2]

void amrex::WritePlotFile ( const Vector< MultiFab * > &  mfa,
const Vector< Box > &  probDomain,
AmrData &  amrdToMimic,
const std::string &  oFile,
bool  verbose,
const Vector< std::string > &  varNames 
)

◆ WritePlotfile2DFrom3D()

void amrex::WritePlotfile2DFrom3D ( const std::string &  pfversion,
const Vector< MultiFab > &  data,
const Real  time,
const Vector< Real > &  probLo,
const Vector< Real > &  probHi,
const Vector< int > &  refRatio,
const Vector< Box > &  probDomain,
const Vector< Vector< Real > > &  dxLevel,
const int  coordSys,
const std::string &  oFile,
const Vector< std::string > &  names,
const bool  verbose,
const bool  isCartGrid,
const Real *  vfeps,
const int *  levelSteps 
)

◆ writeRealData()

void amrex::writeRealData ( const Real *  data,
std::size_t  size,
std::ostream &  os,
const RealDescriptor &  rd = FPC::NativeRealDescriptor() 
)

Write Real data to the ostream. The arguments are a pointer to data to write, the size of the data buffer, the ostream, and an optional RealDescriptor that describes the data format to use for writing. If no RealDescriptor is provided, the data will be written using the native format for your machine.
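A minimal usage sketch (the header name AMReX_VectorIO.H and the output filename are assumptions):

    #include <AMReX_VectorIO.H>   // assumed location of the declaration
    #include <AMReX_Vector.H>
    #include <fstream>

    void write_reals_example ()
    {
        amrex::Vector<amrex::Real> data(8, amrex::Real(1.5));
        std::ofstream ofs("reals.bin", std::ios::binary);
        // Omitting the RealDescriptor argument selects FPC::NativeRealDescriptor().
        amrex::writeRealData(data.data(), data.size(), ofs);
    }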

◆ WriteSingleLevelPlotfile()

void amrex::WriteSingleLevelPlotfile ( const std::string &  plotfilename,
const MultiFab &  mf,
const Vector< std::string > &  varnames,
const Geometry &  geom,
Real  time,
int  level_step,
const std::string &  versionName,
const std::string &  levelPrefix,
const std::string &  mfPrefix,
const Vector< std::string > &  extra_dirs 
)
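A minimal usage sketch (the variable names are placeholders; the trailing string/vector arguments carry defaults in current AMReX headers and are omitted):

    #include <AMReX_PlotFileUtil.H>

    void write_sl_example (const amrex::MultiFab& mf, const amrex::Geometry& geom,
                           amrex::Real time, int step)
    {
        // One name per component of mf.
        amrex::WriteSingleLevelPlotfile("plt00000", mf, {"density", "pressure"},
                                        geom, time, step);
    }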

◆ WriteSingleLevelPlotfileHDF5()

void amrex::WriteSingleLevelPlotfileHDF5 ( const std::string &  plotfilename,
const MultiFab &  mf,
const Vector< std::string > &  varnames,
const Geometry &  geom,
Real  time,
int  level_step,
const std::string &  compression,
const std::string &  versionName,
const std::string &  levelPrefix,
const std::string &  mfPrefix,
const Vector< std::string > &  extra_dirs 
)

◆ WriteSingleLevelPlotfileHDF5MultiDset()

void amrex::WriteSingleLevelPlotfileHDF5MultiDset ( const std::string &  plotfilename,
const MultiFab &  mf,
const Vector< std::string > &  varnames,
const Geometry &  geom,
Real  time,
int  level_step,
const std::string &  compression,
const std::string &  versionName,
const std::string &  levelPrefix,
const std::string &  mfPrefix,
const Vector< std::string > &  extra_dirs 
)

◆ WriteSingleLevelPlotfileHDF5SingleDset()

void amrex::WriteSingleLevelPlotfileHDF5SingleDset ( const std::string &  plotfilename,
const MultiFab &  mf,
const Vector< std::string > &  varnames,
const Geometry &  geom,
Real  time,
int  level_step,
const std::string &  compression,
const std::string &  versionName,
const std::string &  levelPrefix,
const std::string &  mfPrefix,
const Vector< std::string > &  extra_dirs 
)

◆ Xpay() [1/2]

template<class MF , std::size_t N, std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::Xpay ( Array< MF, N > &  dst,
typename MF::value_type  a,
Array< MF, N > const &  src,
int  scomp,
int  dcomp,
int  ncomp,
IntVect const &  nghost 
)

dst = src + a * dst

◆ Xpay() [2/2]

template<class MF , std::enable_if_t< IsMultiFabLike_v< MF >, int > = 0>
void amrex::Xpay ( MF &  dst,
typename MF::value_type  a,
MF const &  src,
int  scomp,
int  dcomp,
int  ncomp,
IntVect const &  nghost 
)

dst = src + a * dst
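A minimal sketch of the single-MF overload (the header name AMReX_FabArrayUtility.H is an assumption; dst and src are assumed to be compatible MultiFabs):

    #include <AMReX_MultiFab.H>
    #include <AMReX_FabArrayUtility.H>   // assumed header for amrex::Xpay

    void xpay_example (amrex::MultiFab& dst, const amrex::MultiFab& src)
    {
        amrex::Real const a = 0.5;
        // dst = src + a * dst over all components; ghost cells untouched.
        amrex::Xpay(dst, a, src, 0, 0, dst.nComp(), amrex::IntVect(0));
    }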

◆ yafluxreg_crseadd() [1/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::yafluxreg_crseadd ( Box const &  bx,
Array4< T > const &  d,
Array4< int const > const &  flag,
Array4< T const > const &  fx,
Array4< T const > const &  fy,
Array4< T const > const &  fz,
T  dtdx,
T  dtdy,
T  dtdz,
int  nc 
)
noexcept

◆ yafluxreg_crseadd() [2/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::yafluxreg_crseadd ( Box const &  bx,
Array4< T > const &  d,
Array4< int const > const &  flag,
Array4< T const > const &  fx,
Array4< T const > const &  fy,
T  dtdx,
T  dtdy,
int  nc 
)
noexcept

◆ yafluxreg_crseadd() [3/3]

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::yafluxreg_crseadd ( Box const &  bx,
Array4< T > const &  d,
Array4< int const > const &  flag,
Array4< T const > const &  fx,
T  dtdx,
int  nc 
)
noexcept

◆ yafluxreg_fineadd()

template<typename T >
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void amrex::yafluxreg_fineadd ( Box const &  bx,
Array4< T > const &  d,
Array4< T const > const &  f,
T  dtdx,
int  nc,
int  dirside,
Dim3 const &  rr 
)
noexcept

Variable Documentation

◆ atomic_total_bytes_allocated_in_fabs

std::atomic< Long > amrex::atomic_total_bytes_allocated_in_fabs {0L}

◆ atomic_total_bytes_allocated_in_fabs_hwm

std::atomic< Long > amrex::atomic_total_bytes_allocated_in_fabs_hwm {0L}

◆ atomic_total_cells_allocated_in_fabs

std::atomic< Long > amrex::atomic_total_cells_allocated_in_fabs {0L}

◆ atomic_total_cells_allocated_in_fabs_hwm

std::atomic< Long > amrex::atomic_total_cells_allocated_in_fabs_hwm {0L}
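These atomic counters back the FAB memory accounting; a minimal sketch using the public accessor functions amrex::TotalBytesAllocatedInFabs() and amrex::TotalBytesAllocatedInFabsHWM(), which (to my understanding) report these totals:

    #include <AMReX_BaseFab.H>
    #include <AMReX_Print.H>

    void report_fab_memory ()
    {
        amrex::Print() << "FAB bytes now: "
                       << amrex::TotalBytesAllocatedInFabs()
                       << ", high-water mark: "
                       << amrex::TotalBytesAllocatedInFabsHWM() << "\n";
    }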

◆ cell_bilinear_interp

AMREX_EXPORT CellBilinear amrex::cell_bilinear_interp

◆ cell_cons_interp

AMREX_EXPORT CellConservativeLinear amrex::cell_cons_interp ( false  )

◆ cell_quartic_interp

AMREX_EXPORT CellQuartic amrex::cell_quartic_interp

◆ eb_cell_cons_interp

AMREX_EXPORT EBCellConservativeLinear amrex::eb_cell_cons_interp ( false  )

◆ eb_covered_val

constexpr amrex::Real amrex::eb_covered_val = amrex::Real(1.e40)
static constexpr

◆ eb_lincc_interp

AMREX_EXPORT EBCellConservativeLinear amrex::eb_lincc_interp

◆ eb_mf_cell_cons_interp

AMREX_EXPORT EBMFCellConsLinInterp amrex::eb_mf_cell_cons_interp ( false  )

◆ eb_mf_lincc_interp

AMREX_EXPORT EBMFCellConsLinInterp amrex::eb_mf_lincc_interp ( true  )

◆ face_cons_linear_interp

AMREX_EXPORT FaceConservativeLinear amrex::face_cons_linear_interp

◆ face_divfree_interp

AMREX_EXPORT FaceDivFree amrex::face_divfree_interp

◆ face_linear_interp

AMREX_EXPORT FaceLinear amrex::face_linear_interp

◆ gcc_map_node_extra_bytes

const Long amrex::gcc_map_node_extra_bytes = 32L
static

◆ gpu_rand_state

randState_t * amrex::gpu_rand_state = nullptr

◆ gpuSuccess

constexpr gpuError_t amrex::gpuSuccess = cudaSuccess
constexpr

◆ initialized

bool amrex::initialized = false

◆ INVALID_TIME

constexpr Real amrex::INVALID_TIME = -1.0e200_rt
static constexpr

◆ IsBaseFab_v

template<class A >
constexpr bool amrex::IsBaseFab_v = IsBaseFab<A>::value
inline constexpr

◆ IsConvertible_v

template<typename T , typename... Args>
constexpr bool amrex::IsConvertible_v = IsConvertible<T, Args...>::value
inline constexpr

◆ IsFabArray_v

template<class A >
constexpr bool amrex::IsFabArray_v = IsFabArray<A>::value
inline constexpr

◆ IsMultiFabLike_v

template<class M >
constexpr bool amrex::IsMultiFabLike_v = IsMultiFabLike<M>::value
inline constexpr

◆ K1D

constexpr auto amrex::K1D = int(AMREX_SPACEDIM>=1)
static constexpr

◆ K2D

constexpr auto amrex::K2D = int(AMREX_SPACEDIM>=2)
static constexpr

◆ K3D

constexpr auto amrex::K3D = int(AMREX_SPACEDIM>=3)
static constexpr
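K1D, K2D, and K3D are 1 when the corresponding dimension exists and 0 otherwise, which lets one stencil serve every value of AMREX_SPACEDIM. A hedged sketch (the function name and stencil are illustrative; local masks mirror the documented definitions, since the amrex constants live in internal kernel headers):

    #include <AMReX_Array4.H>
    #include <AMReX_GpuQualifiers.H>
    #include <AMReX_SPACE.H>

    // Local masks mirroring the documented definitions of K1D/K2D/K3D.
    static constexpr int k1 = int(AMREX_SPACEDIM >= 1);
    static constexpr int k2 = int(AMREX_SPACEDIM >= 2);
    static constexpr int k3 = int(AMREX_SPACEDIM >= 3);

    // 7-point Laplacian times h^2: offsets in absent dimensions collapse
    // to zero, so the same code compiles unchanged in 1D, 2D, and 3D.
    template <typename T>
    AMREX_GPU_HOST_DEVICE T laplacian_h2 (amrex::Array4<T const> const& a,
                                          int i, int j, int k)
    {
        return a(i-k1,j,k) + a(i+k1,j,k)
             + a(i,j-k2,k) + a(i,j+k2,k)
             + a(i,j,k-k3) + a(i,j,k+k3)
             - T(2*(k1+k2+k3)) * a(i,j,k);
    }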

◆ lincc_interp

AMREX_EXPORT CellConservativeLinear amrex::lincc_interp

◆ max_efficiency

Real amrex::max_efficiency

◆ mf_cell_bilinear_interp

AMREX_EXPORT MFCellBilinear amrex::mf_cell_bilinear_interp

◆ mf_cell_cons_interp

AMREX_EXPORT MFCellConsLinInterp amrex::mf_cell_cons_interp ( false  )

◆ mf_lincc_interp

AMREX_EXPORT MFCellConsLinInterp amrex::mf_lincc_interp ( true  )

◆ mf_linear_slope_minmax_interp

AMREX_EXPORT MFCellConsLinMinmaxLimitInterp amrex::mf_linear_slope_minmax_interp

◆ mf_node_bilinear_interp

AMREX_EXPORT MFNodeBilinear amrex::mf_node_bilinear_interp

◆ mf_pc_interp

AMREX_EXPORT MFPCInterp amrex::mf_pc_interp

◆ MFNEWDATA

constexpr int amrex::MFNEWDATA = 0
static constexpr

◆ MFOLDDATA

constexpr int amrex::MFOLDDATA = 1
static constexpr

◆ node_bilinear_interp

AMREX_EXPORT NodeBilinear amrex::node_bilinear_interp

◆ node_size

int amrex::node_size

◆ parser_f1_s

constexpr std::string_view amrex::parser_f1_s[]
static constexpr
Initial value:
=
{
"sqrt",
"exp",
"log",
"log10",
"sin",
"cos",
"tan",
"asin",
"acos",
"atan",
"sinh",
"cosh",
"tanh",
"asinh",
"acosh",
"atanh",
"abs",
"floor",
"ceil",
"comp_ellint_1",
"comp_ellint_2",
"erf"
}
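This table names the one-argument functions the expression parser recognizes (parser_f2_s and parser_f3_s below list the two- and three-argument forms). A minimal sketch exercising a few of them through amrex::Parser (the expression and variable names are placeholders):

    #include <AMReX_Parser.H>
    #include <AMReX_Print.H>

    void parser_example ()
    {
        // "sin" is from this table; "pow" is from parser_f2_s below.
        amrex::Parser parser("sin(x) + pow(y,2)");
        parser.registerVariables({"x", "y"});
        auto f = parser.compile<2>();   // executor callable as f(x, y)
        amrex::Print() << "sin(1) + 2^2 = " << f(1.0, 2.0) << "\n";
    }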

◆ parser_f2_s

constexpr std::string_view amrex::parser_f2_s[]
static constexpr
Initial value:
=
{
"pow",
"atan2",
"gt",
"lt",
"geq",
"leq",
"eq",
"neq",
"and",
"or",
"heaviside",
"jn",
"yn",
"min",
"max",
"fmod"
}

◆ parser_f3_s

constexpr std::string_view amrex::parser_f3_s[]
static constexpr
Initial value:
=
{
"if"
}

◆ parser_node_s

constexpr std::string_view amrex::parser_node_s[]
static constexpr
Initial value:
=
{
"number",
"symbol",
"add",
"sub",
"mul",
"div",
"f1",
"f2",
"f3",
"assign",
"list"
}

◆ pc_interp

AMREX_EXPORT PCInterp amrex::pc_interp

A global object of each interpolater version is constructed.
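These global instances serve as ready-made mappers; a minimal sketch (the fill call is only indicated in the comment, since its full argument list depends on the application):

    #include <AMReX_Interpolater.H>

    // Select an interpolation scheme by taking the address of a global
    // instance; the pointer is then passed wherever an Interpolater* mapper
    // is expected (e.g. the FillPatch / InterpFromCoarseLevel family).
    amrex::Interpolater* mapper = &amrex::cell_cons_interp;   // linear conservative
    // amrex::Interpolater* mapper = &amrex::pc_interp;       // piecewise constant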

◆ private_total_bytes_allocated_in_fabs

Long amrex::private_total_bytes_allocated_in_fabs = 0L

Total bytes allocated in FABs at any given time.

◆ private_total_bytes_allocated_in_fabs_hwm

Long amrex::private_total_bytes_allocated_in_fabs_hwm = 0L

High-water mark of bytes allocated in FABs over a given interval.

◆ private_total_cells_allocated_in_fabs

Long amrex::private_total_cells_allocated_in_fabs = 0L

Total cells allocated in FABs at any given time.

◆ private_total_cells_allocated_in_fabs_hwm

Long amrex::private_total_cells_allocated_in_fabs_hwm = 0L

High-water mark of cells allocated in FABs over a given interval.

◆ protected_interp

AMREX_EXPORT CellConservativeProtected amrex::protected_interp

◆ quadratic_interp

AMREX_EXPORT CellQuadratic amrex::quadratic_interp

◆ quartic_interp

AMREX_EXPORT CellConservativeQuartic amrex::quartic_interp

◆ ResetDisplay

constexpr char amrex::ResetDisplay[] = "\033[0m"
constexpr
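A minimal sketch (the red foreground code "\033[31m" is an illustrative raw ANSI sequence, not an amrex constant):

    #include <AMReX_Print.H>

    void colored_warning ()
    {
        // Emit ResetDisplay after any color codes to restore terminal defaults.
        amrex::Print() << "\033[31m" << "Warning: negative density"
                       << amrex::ResetDisplay << "\n";
    }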

◆ sfc_threshold

int amrex::sfc_threshold

◆ sys_name

const char amrex::sys_name[] = "IEEE"
static

◆ verbose

int amrex::verbose