
The Driver

Ben Prather edited this page Sep 4, 2024 · 7 revisions

The Driver object/package is responsible for coordinating the rest of the code by constructing the list of tasks to be performed each (sub-)step. This means that any operation executed every step must be called from the driver in some way: either it is explicitly added to a TaskCollection built by the driver, or it is a "callback" registered by a package, e.g. pkg->BlockUtoP, which will be called whenever KHARMA needs the primitive variables as a function of the conserved variables (that is, whenever the task list calls Packages::UtoP). The driver class contains a public function Execute that is called from main. KHARMA currently has a single KHARMADriver class, which implements three different algorithms: a default KHARMA step, an ImEx step, and a Simple step for testing. One type of step is chosen for any given simulation, based on the parameter driver/type provided at runtime. Colloquially, the different steps are still referred to as "drivers" -- this is a holdover from when they were each different Driver subclasses.

The KHARMADriver class inherits from Parthenon's MultiStageDriver class, with each sub-step creating a different TaskCollection of everything that must be done in that stage (that is, in second-order schemes like vl2 or rk2, two task lists are built and then run). We now discuss the algorithms available in KHARMA:

KHARMA Step

This is the default mode, used for most ideal GRMHD simulations. Like most hydro schemes, our GRMHD scheme must keep two forms of the variables: the conserved variables, and a set of "primitive" variables representing more tangible things like the density and velocity. Unlike most Newtonian hydro schemes, recovering the primitive variables from the conserved variables requires a numerical solve. The presence of this solve (and the fact that it can fail to converge) introduces considerable complexity to the scheme. To evolve the fluid, the code must:

  1. Calculate the fluxes of conserved variables through each face of each cell, based on the values of primitive variables at each zone's center (KHARMADriver::AddFluxCalculations)
  2. Apply any fixes to fluxes (e.g., for the magnetic field) (Packages::FixFlux, calling pkg->FixFlux)
    1. Send/receive any flux corrections for SMR/AMR grids, calculate and synchronize EMF values if using Face-CT
  3. Update conserved variables using their prior values and the divergence of the conserved fluxes (KHARMADriver::FluxDivergence)
    1. Apply any source terms (e.g., the geometric term in GRMHD) (Packages::AddSource, calling pkg->AddSource)
  4. Exchange ghost zones: currently all conserved variables are exchanged, as well as the previous step's values of the fluid primitive variables ($\rho$, $u$, $u^i$). The latter are exchanged to provide a consistent starting value when solving for the new primitive variables in the next step. (A similar effect could be achieved by synchronizing just the 1D solution variable, but this is not yet implemented -- in limited testing, MPI bandwidth is not a primary speed concern). (KHARMADriver::AddBoundarySync)
  5. Recover primitive variables (Packages::MeshUtoP)
    1. Apply any stability limits (floors) (Packages::MeshApplyFloors)
    2. Fix any errors in recovering the primitives, re-apply floors to fixed zones (Inverter::MeshFixUtoP)
  6. Apply the domain boundaries, e.g. outflow boundaries at radial boundaries, reflecting or transmitting boundaries at polar boundaries. (parthenon::ApplyBoundaryConditionsOnCoarseOrFineMD)
  7. Apply any source terms (KEL), or calculate outputs (jcon) which use the primitive variables (Packages::MeshApplyPrimSource)

The final TaskRegion is an additional boundary sync that is executed only if two_sync is enabled in the GRMHD package. This is a safeguard to ensure the ghost zone primitives are identical to their physical counterparts, a strategy similar to the one employed in iharm3d. When running with Flux-CT (cell-centered) magnetic fields, this strict consistency is necessary to keep the magnetic field divergence small. However, with Face-CT the divergence is preserved regardless of what happens in the ghost zones, so this second sync is only necessary to keep binary-similar results between different meshblock breakdowns (results are always binary-similar between runs on the same mesh, and this is a regression test in KHARMA).

The prevalence of Mesh in naming the latter half of the step is also historical -- this is in contrast to Block forms of functions, which were run on a single block at a time. Now that KHARMA uses MeshData objects throughout, it calls exclusively Mesh versions of functions. A few MeshX functions just iterate through blocks calling BlockX versions; this is gradually being cleaned up, as running kernels over all blocks in a MeshData object is generally faster than running block-by-block.

The comments in the code itself are also worth reading to understand KHARMA's algorithm and top-level organization. They walk through both the reasoning and purpose of each call, and the mechanics of creating and adding to a TaskCollection.

ImEx Step

Much like the KHARMA step, the ImEx step starts by counting the loaded packages and allocating the required number of fluid states each sub-step. There are two key differences:

  • It needs two additional fluid states - one for the implicit solver, and one for the linesearch operation performed in the solver.
  • The ImEx step handles primitives as fundamental variables. This means it is the primitives that are synced during boundary exchanges, which is specified by declaring the primitives with the Metadata::FillGhost flag.

It is worth noting that while the driver can handle ImEx schemes, it can also perform a fully explicit update. The advantage of the ImEx step over the KHARMA step for a fully explicit update is that this driver treats primitives as fundamental variables, which is favorable for problems where the primitives must be defined manually at physical boundaries, e.g., the conducting atmosphere test.

This is followed by a large synchronous TaskRegion in which several operations are performed over the entire mesh:

  1. Calculating the fluxes through all faces and the local wavespeeds.
  2. Performing Flux-CT to ensure divergence-free magnetic fields at zone corners.
  3. Computing the flux divergence.
  4. Evaluating all the explicit sources based on the packages (physics) invoked.
  5. Updating all explicit variables and computing the primitives so that they are in lockstep with the conserved variables.
  6. Copying an initial guess for the implicit solver to the md_solver fluid state, only for the variables tagged isImplicit, and performing an implicit update.
  7. Following the implicit update, md_solver contains the updated primitives. These are copied into the final state of the sub-step.
  8. If fluid variables are evolved explicitly, copying a guess for the UtoP solver.

The next TaskRegion is asynchronous, performed over all meshblocks concurrently. It executes the PreFillDerived, FillDerived, and PostFillDerived calls for all packages by calling Parthenon's Update::FillDerived function. This call quietly carries out a number of vital operations: it applies floors (and instability limits for EMHD problems), performs UtoP for fluid variables, and syncs pflag, among others.

The driver then performs a boundary sync. This needs to be a synchronous region.

This is followed by an asynchronous region that:

  1. Fixes zones where UtoP failed (this is only done if we updated fluid variables explicitly) over the entire domain.
  2. Applies physical boundary conditions.
  3. Performs any operations that need to be done post-sub-step, e.g., heating electrons.
  4. Has a global PtoU call to ensure conserved vars and primitives are in sync.
  5. Computes time step.

Like the KHARMA step, the ImEx step ends with a second synchronization, which is likely unnecessary under the same circumstances described above.

Simple Step

The simple step is mostly for testing and experimentation. Like the ImEx step, it treats primitive variables as fundamental. However, it rips out all pieces not essential to doing basic GRMHD: it supports only cell-centered magnetic fields, no SMR/AMR, and no extra physics or source terms. It's probably the closest algorithm to iharm3D/iharm2D, for direct testing.
