This branch contains the code used to perform the analyses comprising the Annex 37 sub-task A report. It consists of implementations of the prediction/forecasting methods investigated (found in the `models` directory), and scripts to assess and compare their performance (`assess_forecasts.py` & `evaluate.py`). For information on the setup of the task, see the README on the `main` branch.
The repository is organised as follows:

- `data`, datasets (from the Cambridge University Estates building electricity usage archive) used to perform simulations
- `models`, implementations of the prediction/forecasting methods investigated. Forecasting models implemented:
  - `dms`, simple Direct Multi-Step neural models:
    - `Linear`, linear multi-layer perceptron (MLP) model
    - `Conv`, convolutional neural network (CNN) model
    - `ResMLP`, residual MLP model with skip-connections
  - `TFModels`, complex neural models implemented using the PyTorch Forecasting framework:
    - `TFT`, Temporal Fusion Transformer
    - `NHiTS`, Neural Hierarchical Interpolation for Time Series forecasting
    - `DeepAR`, DeepAR
    - `RNN`, LSTM and GRU models
  - `noise`, explicit noising of perfect forecasts (sketched below)
  - `example`, example model implementation to use as a template
- `experiments`, run scripts for performing the experiments comprising the analyses
- `results`, output files containing the results of the tests performed for the analyses
- `plots`, notebooks for plotting analysis results
- `utils`, supporting scripts
- `assess_forecasts.py`, script to test the quality/accuracy of forecasts provided by models
- `evaluate.py`, script to test the control performance of prediction models when used in the MPC
- `ground-truth.py`, script to test the control performance of the MPC if perfect forecasts were available
- `leaderboard.py`, script to update the leaderboard with the results of the above scripts
- `linmodel.py`, implementation of the linear optimisation model used in the MPC
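For intuition, "explicit noising of perfect forecasts" can be sketched as follows. This is an illustrative example only, assuming a Gaussian noise model and placeholder parameter names, not the repository's actual implementation:

```python
import numpy as np

def noised_forecast(perfect_forecast, sigma=0.1, rng=None):
    """Corrupt a perfect forecast with zero-mean Gaussian noise.

    Illustrative sketch only: `sigma` scales the noise with the
    forecast magnitude; the name and scaling scheme are assumptions.
    """
    perfect_forecast = np.asarray(perfect_forecast, dtype=float)
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma * np.abs(perfect_forecast))
    return perfect_forecast + noise
```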
Once you have a complete model implementation, you can add it to the library of methods in the `models` directory by doing the following:

- create a new sub-directory for your model, `models/<your model name>`
- wrap the implementation of your model in a class in its own script, e.g. `models/<your model name>/model.py` containing the class `MyModel`
- this class must have a method `compute_forecast`, which takes in the array of current observations and returns arrays for the forecasted variables - see the documentation and example docstrings for the required formatting (a minimal sketch is given below)
- you should provide sensible default parameters (e.g. kwargs) so that a 'good' forecasting model is set up when an object of your class is constructed/initialised without arguments, e.g. load an up-to-date pre-trained model by default
- you should provide additional scripts as necessary to allow others to work with your model, e.g. for training, testing, interrogating, etc.
- put any required data files, e.g. pre-trained model specifications, in a sub-directory called `resources`
- update `models/__init__.py` to import your model class
- provide a `README.md` in your model directory detailing: your model, the files you've provided, your preliminary results, and any other important info
An example model implementation directory is given at `models/example`.
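As a rough guide, a conforming model class might look like the minimal sketch below. Apart from the `compute_forecast` method name, everything here (the class name, the `tau` argument, and the naive persistence forecast) is a placeholder for illustration; `models/example` remains the authoritative template.

```python
# models/<your model name>/model.py - illustrative sketch only
import numpy as np

class MyModel:
    """Minimal example of the expected model interface."""

    def __init__(self, tau=48):
        # Sensible defaults: construction without arguments should
        # set up a 'good' model, e.g. by loading an up-to-date
        # pre-trained model from the `resources` sub-directory.
        # `tau` (forecast horizon) is a placeholder name.
        self.tau = tau

    def compute_forecast(self, observations):
        # Takes the array of current observations and returns
        # arrays for the forecasted variables - see the example
        # docstrings for the required formatting. Here a naive
        # persistence forecast repeats the most recent value.
        observations = np.asarray(observations, dtype=float)
        latest = observations[..., -1:]
        return np.repeat(latest, self.tau, axis=-1)
```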
We provide the following files for comparing model performance:

- `assess_forecasts.py`
- `evaluate.py`
- `leaderboard.py`

Setting `save = True` in `assess_forecasts.py` and `evaluate.py` will log the performances in the `outputs` directory. Running `leaderboard.py` will then load the results from the `outputs` directory and update the leaderboard correspondingly.
`assess_forecasts.py` is used to assess the quality of the forecasts only. `evaluate.py` uses the model's forecasts for model predictive control, based on linear programming, and so evaluates how good the forecasts are for battery control.
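To make that concrete, the sketch below shows the general shape of forecast-driven, receding-horizon battery control via a linear program. It is illustrative only: the variable names, bounds, and cost-minimisation objective are assumptions, and the repository's actual optimisation model is the one in `linmodel.py`.

```python
import numpy as np
from scipy.optimize import linprog

def battery_mpc_step(price, load_forecast, soc, capacity=6.4, p_max=3.0):
    """Plan battery powers over the forecast horizon with an LP and
    return only the first action (receding horizon).

    Illustrative sketch: all names and numbers are assumptions, and
    efficiency losses are ignored for brevity.
    """
    tau = len(load_forecast)
    # Decision variables: battery power b_t (+ charge, - discharge).
    # Grid import is load_t + b_t, so minimising sum(price_t * b_t)
    # minimises the forecast electricity cost (constant term dropped).
    c = np.asarray(price, dtype=float)
    # State of charge soc + cumsum(b) must stay within [0, capacity].
    L = np.tril(np.ones((tau, tau)))  # cumulative-sum matrix
    A_ub = np.vstack([L, -L])
    b_ub = np.concatenate([np.full(tau, capacity - soc),
                           np.full(tau, soc)])
    # Power limits, plus non-negative grid import (b_t >= -load_t).
    bounds = [(max(-p_max, -load), p_max) for load in load_forecast]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]
```

Re-solving such a problem at every timestep with refreshed forecasts is what translates forecast errors into control cost, which is the quantity `evaluate.py` measures.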
If you wish to test your model in this framework, you can run `assess_forecasts.py` and `evaluate.py`, editing the runtime sections of the scripts appropriately. However, it is recommended that you do this testing in your development branch before merging your model implementation.
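The shape of such an edit is roughly as follows; the import path and variable names are paraphrased assumptions rather than the scripts' exact contents:

```python
# In the runtime section of assess_forecasts.py / evaluate.py
# (paraphrased; the scripts' actual variable names may differ).
from models import MyModel  # your class, exported via models/__init__.py

save = True        # log performance results for leaderboard.py to pick up
model = MyModel()  # no-argument construction should give a 'good' model
```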
Note: all scripts should be executed from the base `EECi` directory, and all file paths are specified relative to it.