Source for "Exploiting Symmetry in High-Dimensional Dynamic Programming"
Warning: See the Hyperparameter Tuning section for more details on robustness checks, tuning, and examples using Weights and Biases. Hyperparameter optimization is an essential part of the machine learning workflow, and it rarely makes sense to check for robustness without considering how and when a new HPO process is required.
Since manual tweaking of hyperparameters is slow and error-prone, a variety of ML tools exist to automate the process and visualize the results. The primary investment_euler.py and related code are provided as a demonstration of this tooling.
Within a Python environment, clone this repository with git and execute pip install -r requirements.txt.
See more complete instructions below in the detailed installation section.
You can load the Jupyter notebook baseline_example.ipynb directly in VS Code, or on the command line with jupyter lab run in the local directory. This notebook loads investment_euler.py and provides utilities to examine the output without using the command line.
There is a command-line interface to solve for the equilibrium given various model and neural network parameters. This is especially convenient for deploying on the cloud or when running in parallel.
The default values of all parameters are given in investment_euler_default.yaml. You can override these by passing in a different YAML file, or by passing in parameters on the command line.
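As a sketch of the YAML-override approach (the keys mirror the structure of investment_euler_default.yaml, but the filename, values, and the --config flag here are illustrative assumptions, not recommendations from the authors):

```yaml
# my_overrides.yaml: illustrative overrides only
trainer:
  max_epochs: 5
model:
  nu: 1.1
  ml_model:
    L: 8
```

which would then be passed with something like python investment_euler.py --config my_overrides.yaml, following the usual Lightning CLI convention.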
To use this, in a console at the root of this project, you can run commands such as the following.
python investment_euler.py --trainer.max_epochs=5
Or, to change the neural network architecture, you could try increasing the L parameter of the model
python investment_euler.py --trainer.max_epochs=2 --model.ml_model.L=8
Or changing the number of layers
python investment_euler.py --trainer.max_epochs=5 --model.ml_model.phi_layers=1
You can swap out the entire neural network by passing in a different ml_model class. For example, to use a DeepSetMoments model, you could run
python investment_euler.py --model.ml_model.class_path=econ_layers.layers.DeepSetMoments --model.ml_model.L=4 --model.ml_model.n_in=1 --model.ml_model.n_out=1 --model.ml_model.rho_layers=3 --model.ml_model.rho_hidden_dim=256 --model.ml_model.rho_hidden_bias=false --model.ml_model.rho_last_bias=true
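The DeepSetMoments name suggests a permutation-invariant (deep set) architecture. As a hedged illustration of the general idea only (not the actual econ_layers implementation), a deep set maps each agent's state through a shared phi network, pools with a symmetric operation such as the mean, and then applies a rho network, so the output is invariant to reordering the agents:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deep set: y = rho(mean_i(phi(x_i))). Random linear maps stand in
# for the learned phi/rho networks; this is a sketch, not the real model.
W_phi = rng.normal(size=(1, 4))   # phi: R^1 -> R^4, applied to each agent
W_rho = rng.normal(size=(4, 1))   # rho: pooled features -> scalar output

def deep_set(x):
    phi = np.tanh(x[:, None] * W_phi)  # (n_agents, 4) per-agent features
    pooled = phi.mean(axis=0)          # symmetric pooling over agents
    return float(pooled @ W_rho)

x = rng.normal(size=16)                  # states of 16 agents
out = deep_set(x)
out_permuted = deep_set(x[::-1].copy())  # any permutation of the agents

# Permutation invariance: the output is unchanged.
print(np.isclose(out, out_permuted))
```

This symmetry is what lets the same network handle many agents without the parameter count growing with the number of agents.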
To change the economic variables, such as the nonlinearity in prices, you could try things such as
python investment_euler.py --trainer.max_epochs=5 --model.nu=1.1
Note that for nu != 1 there is no closed-form solution to check against.
If you have a GPU available and you installed the appropriate version of PyTorch, then you can pass in the accelerator option,
python investment_euler.py --trainer.accelerator=gpu
Note, however, that the GPU will be slower than the CPU for fewer than about 1024 agents.
Central to deep learning is the need to tune hyperparameters. A variety of ML and deep-learning tooling exists to help, mostly under the category of "ML DevOps". This includes tools for hyperparameter optimization, model versioning, managing results, model deployment, and running on clusters/clouds. Here we will show only one of these tools, which provides simple HPO and outstanding visualization.
One tool for managing parameters and hyperparameter optimization is Weights and Biases. This is a free service for academic use. It provides a dashboard to track experiments, and a way to run hyperparameter optimization sweeps.
To use it, first create an account with Weights and Biases. Then, assuming you have installed the packages above, ensure you have logged in,
wandb login
The train_time_sweep.yaml file contains a list of parameters defining the sweep of interest. See the W&B docs for more details, including how to handle distributions. For our example sweep, in a terminal run
wandb sweep train_time_sweep.yaml
This will create a new sweep on the server. It will give you a URL to the sweep, which you can open in a browser. You can also see the sweep in your W&B dashboard. You will need the returned ID as well.
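For reference, a minimal W&B sweep file has the following shape. The parameter names and values below are illustrative, not the contents of train_time_sweep.yaml, and the metric name is an assumption:

```yaml
# illustrative sweep config; see train_time_sweep.yaml for the actual one
program: investment_euler.py
method: random
metric:
  name: val_loss
  goal: minimize
parameters:
  trainer.max_epochs:
    values: [2, 5, 10]
  model.ml_model.L:
    values: [4, 8]
```

The method and parameters sections control how W&B samples configurations for each agent it dispatches.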
This doesn't create any "agents". To do that, take the <sweep_id> that was returned and run
wandb agent <sweep_id>
Or, to execute only a fixed number of experiments on that agent, give it a count (e.g. wandb agent --count 10 <sweep_id>).
You can then log in to another server and run that same line, with the provided sweep_id, to execute the same experiments on a different machine.
See the W&B Training Time Sweep Results for an example. A few useful features of this tool include:
This provides a standard visualization to evaluate many different hyperparameters, listed along the top and each with its own y-axis. The color encodes the objective of the HPO sweep, whose value is shown on the rightmost axis.
Another visualization shows the correlation between each hyperparameter and the objective, as shown above, which summarizes their relative importance.
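To make the importance panel concrete, here is a minimal sketch of the kind of Pearson correlation it reports between a hyperparameter and the objective. The run data below is made up for illustration, not actual sweep results:

```python
import numpy as np

# Hypothetical sweep results: one entry per run.
max_epochs = np.array([2, 5, 5, 10, 10, 20])           # hyperparameter values
objective = np.array([0.9, 0.5, 0.6, 0.3, 0.35, 0.2])  # e.g. validation loss

# Pearson correlation between the hyperparameter and the objective;
# a strongly negative value suggests that more epochs lower the loss.
corr = np.corrcoef(max_epochs, objective)[0, 1]
print(round(corr, 3))
```

W&B's importance panel combines this kind of correlation with a feature-importance score, but the correlation column is exactly this quantity computed across runs.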
For users with less experience using Python, conda, and VS Code, the following provides more details.
- Ensure you have installed Python. For example, using Anaconda
- Recommended but not required: Install VS Code along with its Python Extension
- Clone this repository
  - Recommended: With VS Code, press <Shift-Control-P> to open the command bar, then choose Git Clone, and use the URL https://github.com/HighDimensionalEconLab/symmetry_dynamic_programming.git. That will give you a full environment to work with.
  - Alternatively, you can clone it with git installed
    git clone https://github.com/HighDimensionalEconLab/symmetry_dynamic_programming.git
- (Optional) Create a conda virtual environment
  conda create -n symmetry_dp python=3.9
  conda activate symmetry_dp
- Python 3.10 is also broadly supported, but PyTorch doesn't fully support Python 3.11 yet. See Troubleshooting below if Python 3.10 has issues.
- (Optional) In VS Code, you can then press <Shift-Control-P> to open the command bar, then choose > Python: Select Interpreter, and choose the one in the symmetry_dp environment. Future > Python: Terminal commands then automatically activate it.
  - If you are in VS Code, opening a Python terminal with <Shift-Control-P> then > Python: Terminal (and other terminals) should automatically activate the environment and start in the correct location.
- Install dependencies. With a terminal in that cloned folder (after, optionally, activating an environment as discussed above)
  pip install -r requirements.txt
- (Optional) Installation of PyTorch with GPU support.
  - If the above process installs only the CPU version and you have a GPU available, see https://pytorch.org/get-started/locally/ for more details, with the activated environment.
  - For example
    conda install pytorch pytorch-cuda=11.8 -c pytorch -c nvidia
  - Then, if you pass python investment_euler.py --trainer.accelerator=gpu etc., it will use the available hardware.
  - Note that GPUs are not required for these experiments, and are often slower.
Troubleshooting:
- If you are having trouble installing packages on Windows with Python 3.10, then either downgrade to 3.9 or see here. To summarize those steps:
- Download https://visualstudio.microsoft.com/visual-cpp-build-tools/
- Local to that folder in a terminal, run
vs_buildtools.exe --norestart --passive --downloadThenInstall --includeRecommended --add Microsoft.VisualStudio.Workload.NativeDesktop --add Microsoft.VisualStudio.Workload.VCTools --add Microsoft.VisualStudio.Workload.MSBuildTools
- If PyTorch is not working after the initial installation, consider installing it manually with
  conda install pytorch cpuonly -c pytorch
  or something similar, and then retry the dependencies installation. GPUs are not required for these experiments. If you get compatibility clashes between packages with pip install -r requirements.txt, then we recommend using a virtual environment with conda, as described above.