Block-Structured AMR Software Framework
 
Linear Containers

One-dimensional and small fixed-size containers used throughout AMReX. More...

Classes

class  amrex::GpuArray< T, N >
 Fixed-size array that can be used on GPU. More...
 
class  amrex::PODVector< T, Allocator >
 Dynamically allocated vector for trivially copyable data. More...
 
class  amrex::GpuTuple< Ts >
 GPU-compatible tuple. More...
 

Typedefs

template<class T , std::size_t N>
using amrex::Array = std::array< T, N >
 
template<class T >
using amrex::Gpu::DeviceVector = PODVector< T, ArenaAllocator< T > >
 A PODVector that uses the standard memory Arena. Note that the memory might or might not be managed depending on the amrex.the_arena_is_managed ParmParse parameter.
 
template<class T >
using amrex::Gpu::NonManagedDeviceVector = PODVector< T, DeviceArenaAllocator< T > >
 A PODVector that uses the non-managed device memory arena.
 
template<class T >
using amrex::Gpu::ManagedVector = PODVector< T, ManagedArenaAllocator< T > >
 A PODVector that uses the managed memory arena.
 
template<class T >
using amrex::Gpu::PinnedVector = PODVector< T, PinnedArenaAllocator< T > >
 A PODVector that uses the pinned memory arena.
 
template<class T >
using amrex::Gpu::AsyncVector = PODVector< T, AsyncArenaAllocator< T > >
 A PODVector that uses the async memory arena. May be useful for temporary vectors inside MFIters that are accessed on the device.
 
template<class T >
using amrex::Gpu::HostVector = PinnedVector< T >
 A PODVector that uses pinned host memory. Same as PinnedVector. For a vector class that uses std::allocator by default, see amrex::Vector.
 
template<class T >
using amrex::Gpu::ManagedDeviceVector = PODVector< T, ManagedArenaAllocator< T > >
 This is identical to ManagedVector<T>. The ManagedDeviceVector form is deprecated and will be removed in a future release.
 

Detailed Description

One-dimensional and small fixed-size containers used throughout AMReX.

Central types include amrex::GpuArray (a fixed-size array usable in device code), amrex::PODVector (a dynamically allocated vector for trivially copyable data), and amrex::GpuTuple (a GPU-compatible tuple), along with the PODVector aliases in the amrex::Gpu namespace (DeviceVector, ManagedVector, PinnedVector, AsyncVector, HostVector).
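
The sketch below illustrates how these containers are typically combined with a kernel launch. It is an illustration only; all sizes, values, and variable names are made up rather than taken from this page.

#include <AMReX.H>
#include <AMReX_Gpu.H>

int main (int argc, char* argv[])
{
    amrex::Initialize(argc, argv);
    {
        // GpuArray is trivially copyable, so it can be captured by value
        // into a device lambda (the values here are arbitrary).
        amrex::GpuArray<amrex::Real,3> dx{amrex::Real(0.1), amrex::Real(0.2), amrex::Real(0.3)};

        // GpuTuple behaves like std::tuple; here it is only used on the host.
        amrex::GpuTuple<int, amrex::Real> t(1, amrex::Real(0.5));
        amrex::Real offset = amrex::get<1>(t);

        // DeviceVector is a PODVector allocated from the default Arena.
        const int n = 100;
        amrex::Gpu::DeviceVector<amrex::Real> v(n);
        amrex::Real* p = v.data();

        amrex::ParallelFor(n, [=] AMREX_GPU_DEVICE (int i)
        {
            p[i] = i * dx[0] + offset;
        });
        amrex::Gpu::streamSynchronize();
    }
    amrex::Finalize();
    return 0;
}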

Typedef Documentation

◆ Array

template<class T , std::size_t N>
using amrex::Array = std::array<T,N>

◆ AsyncVector

template<class T >
using amrex::Gpu::AsyncVector = PODVector<T, AsyncArenaAllocator<T> >

A PODVector that uses the async memory arena. May be useful for temporary vectors inside MFIters that are accessed on the device.
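
A sketch of the MFIter pattern mentioned above. The MultiFab and the computation are invented for illustration; only the AsyncVector usage is the point.

#include <AMReX_MultiFab.H>
#include <AMReX_Gpu.H>

// Hypothetical helper: allocates per-box scratch space on the device inside
// an MFIter loop. Because the async arena releases memory asynchronously,
// the loop does not need to synchronize after every box.
void fill_scratch (amrex::MultiFab const& mf)
{
    for (amrex::MFIter mfi(mf); mfi.isValid(); ++mfi)
    {
        const amrex::Box& bx = mfi.validbox();
        auto const& a = mf.const_array(mfi);

        // Temporary device buffer, one value per cell of this box.
        amrex::Gpu::AsyncVector<amrex::Real> tmp(bx.numPts());
        amrex::Real* p = tmp.data();

        const amrex::Dim3 lo  = amrex::lbound(bx);
        const amrex::Dim3 len = amrex::length(bx);
        amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
        {
            // Flatten (i,j,k) into a linear index within this box.
            amrex::Long offset = (i-lo.x)
                + (amrex::Long(j-lo.y) + amrex::Long(k-lo.z)*len.y) * len.x;
            p[offset] = 2.0 * a(i,j,k);
        });
        // tmp goes out of scope here; its memory is returned to the async arena.
    }
}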

◆ DeviceVector

template<class T >
using amrex::Gpu::DeviceVector = PODVector<T, ArenaAllocator<T> >

A PODVector that uses the standard memory Arena. Note that the memory might or might not be managed depending on the amrex.the_arena_is_managed ParmParse parameter.
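
A minimal sketch of the usual fill-on-host / copy / launch pattern; the names and sizes are illustrative.

#include <AMReX.H>
#include <AMReX_Gpu.H>

void device_vector_example ()
{
    const int n = 256;

    // Fill pinned host memory ...
    amrex::Gpu::HostVector<amrex::Real> hv(n);
    for (int i = 0; i < n; ++i) { hv[i] = amrex::Real(i); }

    // ... copy it into device memory ...
    amrex::Gpu::DeviceVector<amrex::Real> dv(n);
    amrex::Gpu::copyAsync(amrex::Gpu::hostToDevice, hv.begin(), hv.end(), dv.begin());

    // ... and use the raw pointer in a kernel. The copy and the kernel run
    // on the same stream, so no synchronization is needed in between.
    amrex::Real* p = dv.data();
    amrex::ParallelFor(n, [=] AMREX_GPU_DEVICE (int i) { p[i] *= 2.0; });
    amrex::Gpu::streamSynchronize();
}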

◆ HostVector

template<class T >
using amrex::Gpu::HostVector = PinnedVector<T>

A PODVector that uses pinned host memory. Same as PinnedVector. For a vector class that uses std::allocator by default, see amrex::Vector.
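
A short sketch of bringing device data back into pinned host memory; the function name is illustrative.

#include <AMReX.H>
#include <AMReX_Gpu.H>

amrex::Real sum_on_host (amrex::Gpu::DeviceVector<amrex::Real> const& dv)
{
    // Pinned (page-locked) host memory allows an asynchronous device-to-host copy.
    amrex::Gpu::HostVector<amrex::Real> hv(dv.size());
    amrex::Gpu::copyAsync(amrex::Gpu::deviceToHost, dv.begin(), dv.end(), hv.begin());
    amrex::Gpu::streamSynchronize();  // wait before touching hv on the host

    amrex::Real sum = 0.0;
    for (auto x : hv) { sum += x; }
    return sum;
}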

◆ ManagedDeviceVector

template<class T >
using amrex::Gpu::ManagedDeviceVector = PODVector<T, ManagedArenaAllocator<T> >

This is identical to ManagedVector<T>. The ManagedDeviceVector form is deprecated and will be removed in a future release.

◆ ManagedVector

template<class T >
using amrex::Gpu::ManagedVector = PODVector<T, ManagedArenaAllocator<T> >

A PODVector that uses the managed memory arena.
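
A minimal sketch of mixed host and device access through managed memory; illustrative only, and performance characteristics depend on the platform's unified memory support.

#include <AMReX.H>
#include <AMReX_Gpu.H>

void managed_example ()
{
    const int n = 64;

    // Managed (unified) memory: the same allocation is usable on host and device.
    amrex::Gpu::ManagedVector<amrex::Real> mv(n, 1.0);
    amrex::Real* p = mv.data();

    amrex::ParallelFor(n, [=] AMREX_GPU_DEVICE (int i) { p[i] += i; });
    amrex::Gpu::streamSynchronize();  // make the device writes visible to the host

    mv[0] += 10.0;  // direct host access, no explicit copy needed
}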

◆ NonManagedDeviceVector

template<class T >
using amrex::Gpu::NonManagedDeviceVector = PODVector<T, DeviceArenaAllocator<T> >

A PODVector that uses the non-managed device memory arena.

◆ PinnedVector

template<class T >
using amrex::Gpu::PinnedVector = PODVector<T, PinnedArenaAllocator<T> >

A PODVector that uses the pinned memory arena.