Add support for probes in the model #313

Open
plietar opened this issue Jun 20, 2024 · 0 comments

plietar commented Jun 20, 2024

This is a rough sketch for a feature I brought up in the malariasimulation meeting this week. I'm curious what people's thoughts are and whether they would find it useful.

The way the model is built today, users interact with it by setting a number of parameters, running the simulation, and receiving at the end a dataframe of data measured over time. This data is always an aggregate over the entire population (or, in a few cases, over a defined age group). There is a balance in how much we report: some information is useful only to a handful of users and can be expensive to compute, so it is not included. When this happens, users have to carry patches that add their custom bit of reporting (see for example #302).

An idea I've been thinking about is a way to attach "probes" to the model, which are functions that get called at specific points in time. The user of the library can write arbitrary code to run when a specific condition happens.

Such probes could for example correspond to the birth or death of an individual, an infection, or the administration of an intervention (vaccine, drug, bednets, ...). In practice the functions would be called at most once per timestep, with a bitset of affected individuals (and possibly additional information, where relevant).

The API would look something like:

```r
run_simulation(timesteps, probes = list(
  mda_treatment = function(drug, target) { ... },
  death = function(target) { ... }
))
```
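To make the idea concrete, here is a minimal sketch of how the model's internal loop might dispatch user-supplied probes. All names here (`make_probe_dispatcher`, `dispatch`) are hypothetical, not part of malariasimulation's API, and plain integer vectors stand in for bitsets:

```r
# Hypothetical sketch: the model builds a dispatcher from the user's
# probe list, then calls it at each instrumented point in the timestep.
make_probe_dispatcher <- function(probes) {
  function(name, ...) {
    fn <- probes[[name]]
    # Probes are optional: a point with no probe attached is a no-op,
    # so the overhead for users who don't use the feature stays minimal.
    if (!is.null(fn)) fn(...)
  }
}

# Example usage: count deaths over the whole run.
deaths <- 0
dispatch <- make_probe_dispatcher(list(
  death = function(target) deaths <<- deaths + length(target)
))

# Inside the (hypothetical) timestep loop, the model would call e.g.:
dispatch("death", target = c(3L, 17L, 42L))  # probe runs, deaths becomes 3
dispatch("infection", target = c(5L))        # no probe attached: no-op
```

The lookup-and-skip pattern is what keeps the cost of *defining* probe points low: an unattached probe is a single `NULL` check per timestep.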

Defining new probe points in the model should add minimal performance overhead. Actually attaching and invoking probes might add some more, but at least that cost is only paid by the users who need it.

This is of course a more advanced API that breaks the abstraction of the model a little. Most users should be fine with the existing interface and should not have to resort to probes. The list of probes and their signatures would also likely need to change in breaking ways more often than the current API does.

There are some open questions about how powerful and expressive probes should be:

  • Should they have access to the Render object? (I think yes)
  • Should they have access to the model state and be able to read variables? (I am on the fence)
  • Should they be allowed to modify their arguments, for example modifying the individuals that receive an intervention? (I'm leaning towards no). If not, should bitsets be defensively copied before being passed to the probes?
  • Should they be allowed to modify the state of the simulation, including modifying variables and scheduling events? (I think no)
  • Do we want a more general extension mechanism, where an entire intervention might for example be defined outside of the library and be plugged into the simulation? In addition to probes, the plugins could define their own variables, processes and events.
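On the defensive-copy question above: a copy at dispatch time would keep a probe's mutations from leaking back into the model. A self-contained sketch, using an R environment to mimic the reference semantics of a mutable bitset (`make_bitset`, `copy_bitset`, and `call_probe` are all hypothetical):

```r
# Hypothetical: environments have reference semantics in R, like a
# mutable Bitset; copying before dispatch isolates the probe.
make_bitset <- function(members) {
  b <- new.env()
  b$members <- members
  b
}
copy_bitset <- function(b) make_bitset(b$members)

call_probe <- function(probe, target) {
  # Hand the probe a defensive copy, so mutation stays local to it.
  probe(copy_bitset(target))
}

target <- make_bitset(c(1L, 2L, 3L))
call_probe(function(t) t$members <- integer(0), target)
length(target$members)  # still 3: the probe emptied only its copy
```

The trade-off is an extra copy per invocation, which is only paid when a probe is actually attached.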

I would suggest starting with a more conservative version, and adding more features if and when the need arises.

(The inspiration and the "probe" terminology come from the dynamic tracing world, e.g. DTrace and USDT probes, which work similarly but in an entirely different context. I'm open to other names. I'm trying hard to avoid the word "event", since it means something else in the individual package. "callback" seems too vague to me.)
