Implementation of the paper "TimeRecipe: A Time-Series Forecasting Recipe via Benchmarking Module Level Effectiveness."
Authors: Zhiyuan Zhao, Juntong Ni, Haoxin Liu, Shangqing Xu, Wei Jin, B.Aditya Prakash
Paper + Appendix: [OpenReview], [Arxiv: TBD]
Please follow the training scripts provided in TimeRecipeResults.
To train a single setup:
python -u run.py --seed 2021 --task_name long_term_forecast --use_norm "True" --use_decomp "True" --fusion "temporal" --emb_type "token" --ff_type "mlp" ${OTHER_ARGS}
To train a batch of setups:
bash scripts/ecl_96_m/2021.sh
or run a customized batch of experiments across datasets:
bash run_2021.sh
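For reference, the batch scripts simply sweep the module-level choices shown in the single-setup command above. Below is a minimal, unofficial Python sketch of that idea: the flag names are taken from the command above, but the candidate values and any dataset-specific arguments are assumptions and should be replaced with the options used in the provided scripts.

```python
# Minimal sketch (not an official script): enumerate module-level choices and
# launch run.py once per combination, mirroring what the batch scripts do.
import itertools
import subprocess

grid = {
    "use_norm":   ["True", "False"],        # assumed candidates
    "use_decomp": ["True", "False"],        # assumed candidates
    "fusion":     ["temporal", "channel"],  # assumed candidates
    "emb_type":   ["token", "patch"],       # assumed candidates
    "ff_type":    ["mlp", "attention"],     # assumed candidates
}

for values in itertools.product(*grid.values()):
    args = ["python", "-u", "run.py", "--seed", "2021",
            "--task_name", "long_term_forecast"]
    for flag, val in zip(grid.keys(), values):
        args += [f"--{flag}", val]
    # Append dataset-specific arguments here (e.g., data path, horizons),
    # matching the scripts under scripts/.
    subprocess.run(args, check=True)
```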
All raw and processed results can be found in TimeRecipeResults.
./notebook/error_rank.ipynb: Convert the raw forecasting results over different random seeds to ranked results with averaged error and std.
./notebook/read_res_m.ipynb: Filter and combine the top 30 ranked results (top_k=30) from different datasets into a single CSV file.
./notebook/cor_ana_m.ipynb: Perform statistical testing for the correlation analysis using the combined CSV file (Paper Table 3).
./notebook/lightgbm_m.ipynb: Perform the training-free model selection using a LightGBM model and pre-trained results (Paper Table 2).
./notebook/count_surpass.ipynb: Count the number of setups in which TimeRecipe outperforms SOTA (Paper Section 4.1.1).
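As a rough illustration of the training-free model selection step, the sketch below fits a LightGBM regressor on the combined CSV and picks the setup with the best predicted rank for a held-out dataset. The file name, column names, and the leave-one-dataset-out loop are assumptions made for illustration only; ./notebook/lightgbm_m.ipynb contains the actual features and evaluation used for Paper Table 2.

```python
# Minimal sketch of training-free model selection, assuming a combined CSV with
# a "dataset" column, dataset-property / one-hot module-choice feature columns,
# and a "rank" target. All column and file names here are placeholders.
import pandas as pd
from lightgbm import LGBMRegressor

df = pd.read_csv("combined_top30.csv")  # hypothetical file name
feature_cols = [c for c in df.columns if c not in ("dataset", "rank")]

# One plausible evaluation loop: train on all other datasets, predict on the held-out one.
for held_out in df["dataset"].unique():
    train, test = df[df["dataset"] != held_out], df[df["dataset"] == held_out]
    model = LGBMRegressor(n_estimators=200, learning_rate=0.05)
    model.fit(train[feature_cols], train["rank"])
    preds = model.predict(test[feature_cols])
    best = test.iloc[preds.argmin()]  # setup with the best predicted rank
    print(held_out, best.get("setup", best.name))
```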
To compute the data properties, please follow: [Code], [Setup(en)], [Setup(cn)].
If you have any questions about the code, please contact Zhiyuan Zhao at leozhao1997[at]gatech[dot]edu.
If you find our work useful, please cite our work:
[TBD]
This work also builds on the following works; please consider citing them properly.
Time Series Library (TSLib). [Code]
TFB: Towards Comprehensive and Fair Benchmarking of Time Series Forecasting Methods. [Paper][Code]