Memoize model methods #7816


Draft: wants to merge 1 commit into main
Conversation

ricardoV94 (Member) commented on Jun 11, 2025

This PR caches the expensive model methods (logp extraction, and function compilation).

It introduces a decorator for methods that can invalidate a model's cache (such as registering a new variable or changing the initval strategy). When any of those methods is called, the old cache is cleared.
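The invalidation mechanism can be sketched with a minimal stand-in. This is an illustrative sketch, not PyMC's actual implementation: the decorator name `modifies_model`, the `_cache` attribute, and the `Model` class here are all hypothetical; only the idea (mutating methods clear the memoization cache) comes from the PR.

```python
import functools

def modifies_model(method):
    """Hypothetical decorator marking methods that mutate the model.

    Any call to a decorated method clears the model's cache, so stale
    memoized results are never reused after the graph changes.
    """
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        self._cache.clear()  # invalidate everything derived from the old state
        return method(self, *args, **kwargs)
    return wrapper

class Model:
    def __init__(self):
        self._cache = {}
        self.variables = []

    @modifies_model
    def register_variable(self, name):
        # mutating the model goes through the decorator, clearing the cache
        self.variables.append(name)

    def logp(self):
        # stand-in for an expensive memoized method
        if "logp" not in self._cache:
            self._cache["logp"] = object()
        return self._cache["logp"]
```

Repeated calls return the cached object until a mutating method resets the cache.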

Closes #7815

One prerequisite for caching compiled functions is handling the random_seed behavior of pytensorf.compile. In hindsight, that was a poor mixing of concerns: compiling the function and setting the RNG variables should not be mixed. That behavior is now disabled by passing random_seed=False, and the new seed_compiled_function helper is used instead. This allows us to cache the compiled functions while still respecting any random_seed passed to them.
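The split between compiling and seeding can be illustrated with a plain-Python sketch. The stand-in bodies below are hypothetical (real PyMC functions compile PyTensor graphs and reseed shared RNG variables); only the shape of the API split, with seeding as a separate step applied to an already-compiled function, reflects the PR.

```python
import random

def compile_function(graph):
    """Stand-in for the expensive compile step (cacheable; no seeding here)."""
    rng = random.Random()  # RNG state carried by the compiled function

    def fn():
        return rng.random()

    fn._rng = rng  # expose the RNG so it can be reseeded later
    return fn

def seed_compiled_function(fn, seed):
    """Reseed an already-compiled function without recompiling it.

    Because seeding no longer happens inside compilation, the compiled
    function can be cached and reused across different seeds.
    """
    fn._rng.seed(seed)
```

The compiled function is built once; each caller then applies its own seed before drawing.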

I decided to put the cache at the Model level so it's easier for the user to control. That means some parts of the codebase will contort themselves to use model.compile_fn in order to benefit from the cache.

It's also important that they avoid creating new expressions (such as `model.logp().sum()`), as each call produces a brand-new variable and defeats the cache.
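Why a fresh expression misses the cache can be shown with a toy identity-keyed cache. This is a hypothetical sketch (the `Expr` class and cache layout are invented, not PyMC's); it only demonstrates that reusing the memoized `model.logp()` hits the cache, while `.sum()` builds a new object every time.

```python
class Expr:
    """Toy symbolic expression; each operation builds a new object."""
    def __init__(self, name):
        self.name = name

    def sum(self):
        return Expr(self.name + ".sum()")  # a brand-new expression each call

class Model:
    def __init__(self):
        self._cache = {}

    def logp(self):
        # memoized: every call returns the same expression object
        if "logp" not in self._cache:
            self._cache["logp"] = Expr("logp")
        return self._cache["logp"]

    def compile_fn(self, expr):
        # compiled functions are cached keyed on the expression's identity
        key = id(expr)
        if key not in self._cache:
            self._cache[key] = lambda: f"compiled({expr.name})"
        return self._cache[key]
```

Compiling `model.logp()` twice reuses the cached function; compiling `model.logp().sum()` twice compiles twice, because each `.sum()` call yields a distinct expression.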


📚 Documentation preview 📚: https://pymc--7816.org.readthedocs.build/en/7816/

Successfully merging this pull request may close these issues.

Cache model functions for iterative workflows