
Operator splitting

This recipe shows two implementation issues that arise when an operator splitting approach is used: communicating data from one system of PDEs to the other, and calculating operator splitting errors.

Accessing data of another system from inside the LocalOperator

We need a reference to the data vector and its GridFunctionSpace (GFS). The GFS tells us which parts of the vector to extract. We store shared pointers to the data vector (type ZF below) and to the GFS, together with a LocalFunctionSpace (LFS) and its index cache:

using LFSF = Dune::PDELab::LocalFunctionSpace<GFSF>; // local function space on the other system's GFS
using LFSFCache = Dune::PDELab::LFSIndexCache<LFSF>;
std::shared_ptr<ZF> dataf;         // the other system's coefficient vector
std::shared_ptr<const GFSF> pgfsf; // its grid function space
mutable LFSF lfsf;
mutable LFSFCache lfsf_cache;

The mutable keyword is necessary because the assembly methods of the local operator are const.

These members are initialized in the constructor:

LOP_spatial_contaminant (Param& param_, const GFSF& gfsf_, ZF& zf_, const GFSC& gfsc_, ZC& zc_)
: param(param_)
, dataf(stackobject_to_shared_ptr(zf_)) // keep a pointer to the other system's data vector
, pgfsf(stackobject_to_shared_ptr(gfsf_))
, lfsf(pgfsf)
, lfsf_cache(lfsf)
Here Dune::stackobject_to_shared_ptr (from dune-common's shared_ptr.hh) creates a shared_ptr for a stack-allocated object; the resulting pointer does not take ownership.
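
On the user side the local operator can then be constructed with the other system's space and coefficient vector, for instance (assuming objects param, gfsf, zf, gfsc and zc from the full example):

LOP_spatial_contaminant lopc(param, gfsf, zf, gfsc, zc);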

We load the data with respect to the local entity. If we are assembling skeleton terms, we need to repeat the process to get data from both the inside and the outside entity.

auto cell_inside = ig.inside(); // entity object
typename ZF::template LocalView<LFSFCache> p_view(*dataf); // local view into the other system's vector
lfsf.bind(cell_inside); // bind the local function space to the entity
lfsf_cache.update();    // update the index cache
std::vector<RF> pw(lfsf.size());
p_view.bind(lfsf_cache);
p_view.read(pw);        // copy the local coefficients into pw
p_view.unbind();
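
For a skeleton term the same steps are repeated for the outside entity; a minimal sketch, reusing the objects from above (the name pw_n is chosen here for illustration):

auto cell_outside = ig.outside(); // neighbouring entity
lfsf.bind(cell_outside);
lfsf_cache.update();
std::vector<RF> pw_n(lfsf.size());
p_view.bind(lfsf_cache);
p_view.read(pw_n); // local coefficients from the outside entity
p_view.unbind();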

We now have a vector with the local data from the other system. If the vector is a collection of data from multiple variables, values of the same variable are placed together: the data for the first variable are at indices 0 to lfs.size()-1, the second variable occupies indices lfs.size() to 2*lfs.size()-1, and so on.
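
For example, with two variables the blocks could be traversed as follows (a purely illustrative sketch of the layout described above):

const std::size_t n = pw.size()/2; // DOFs per variable, assuming two variables
for (std::size_t i = 0; i < n; ++i) {
  RF first_i  = pw[i];     // i-th DOF of the first variable
  RF second_i = pw[n + i]; // i-th DOF of the second variable
  // ... use first_i and second_i ...
}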

Accumulate splitting errors

This example calculates the correction produced by one operator splitting iteration. First, we need variables storing the data of the previous and the current iteration. To extract a single component from a coupled system, use GridFunctionSubSpace; the path to the data is defined through a TreePath. Create two DiscreteGridFunctions which are fed to an adapter (an analytic function can be used in an adapter too), and choose the adapter; we chose DifferenceSquaredAdapter.

GFSC_Sub0 gfsc_sub0(gfsc);       // subspace holding the first component
SubC0 subC0(gfsc_sub0,zcstep);   // grid function of the previous iterate
SubC0 subC0new(gfsc_sub0,zcnew); // grid function of the new iterate
ErrC0 errC0(subC0,subC0new);     // pointwise ||subC0(x)-subC0new(x)||^2
Here GridFunctionSubSpace selects a single component of the coupled function space, DiscreteGridFunction converts a grid function space and a coefficient vector into a grid function, and DifferenceSquaredAdapter returns ||f1(x)-f2(x)||^2 for the two given grid functions.
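
The type aliases used above are defined in the full example. A possible set of definitions, assuming the component of interest sits at child 0 of the coupled space (the exact TreePath spelling depends on the dune-typetree version):

using GFSC_Sub0 = Dune::PDELab::GridFunctionSubSpace<GFSC,Dune::TypeTree::TreePath<0>>;
using SubC0 = Dune::PDELab::DiscreteGridFunction<GFSC_Sub0,ZC>;
using ErrC0 = Dune::PDELab::DifferenceSquaredAdapter<SubC0,SubC0>;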

Then we use integrateGridFunction to get a vector with the (process-local) results from the adapter (the last argument is the quadrature order),

typename ErrC0::Traits::RangeType spliterrorcont0;
Dune::PDELab::integrateGridFunction(errC0,spliterrorcont0,1);

and sum the corrections. If the program runs in parallel, communicate the result:

RF sperrC0 = spliterrorcont0.one_norm(); // sum of the local contributions
sperrC0 = gv.comm().sum(sperrC0);        // global sum over all ranks

Note that the communication passes a (const) reference and returns a value. Some collective communication is necessary here; otherwise we risk one rank continuing the operator splitting iteration while another proceeds to the next step, producing a deadlock.
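
Since every rank now holds the same summed error, a convergence test based on it takes the same branch everywhere. A minimal sketch, assuming a tolerance opsplit_tol inside the splitting loop (the name is a placeholder):

if (sperrC0 < opsplit_tol) // identical value on every rank after the sum
  break;                   // all ranks leave the operator splitting loop together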

For a short summary of the communication, see the recipe Communication in parallel programs.

Full example code: recipe-operator-splitting.cc
