Project Details
Description
This paper proposes a training framework based on rolling k-fold cross-validation to compare the forecasting performance of several quantitative methods, mainly standard time series models and our pre-selected machine learning methods. Using the US unemployment rate, we find that: first, individual machine learning constituents may not perform as well as standard time series models; second, on a constituent basis, SVM (support vector machine) performs the best, while deep learning (RNN-LSTM) unexpectedly performs the worst; third, the forecast-averaging evidence shows that automatic machine learning (autoML, h2o.ai) performs worse than our pre-selected machine learning methods, and the average of the standard time series models is better than autoML. We conclude that forecast averaging is a good way to combine diversified forecasts and that a suitable combination of methods depends on the data.
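The project page does not spell out the exact windowing scheme, so the following is only a minimal sketch of rolling (time-ordered) k-fold cross-validation for forecast evaluation. The function names `rolling_kfold_splits` and `evaluate`, the expanding training window, the fixed 12-step horizon, and the use of mean absolute error are illustrative assumptions, not details taken from the project.

```python
# Minimal sketch of rolling k-fold cross-validation for time-series forecasting.
# Assumptions (not from the project page): an expanding training window, a fixed
# forecast horizon, and mean absolute error (MAE) as the comparison metric.

import numpy as np


def rolling_kfold_splits(n_obs, k=5, horizon=12):
    """Yield (train_idx, test_idx) pairs with time order preserved.

    Each successive fold extends the training window and tests on the
    next `horizon` observations, mimicking real-time forecasting.
    """
    first_test_start = n_obs - k * horizon
    if first_test_start <= 0:
        raise ValueError("series too short for the requested k and horizon")
    for i in range(k):
        test_start = first_test_start + i * horizon
        yield np.arange(0, test_start), np.arange(test_start, test_start + horizon)


def evaluate(forecaster, y, k=5, horizon=12):
    """Average out-of-sample MAE of `forecaster` over the rolling folds.

    `forecaster(train_series, horizon)` must return `horizon` predictions.
    """
    errors = []
    for train_idx, test_idx in rolling_kfold_splits(len(y), k, horizon):
        preds = forecaster(y[train_idx], horizon)
        errors.append(np.mean(np.abs(y[test_idx] - preds)))
    return float(np.mean(errors))


if __name__ == "__main__":
    # Toy stand-in for the unemployment-rate series, with a naive "last value" forecaster.
    rng = np.random.default_rng(0)
    y = 5.0 + np.cumsum(rng.normal(0, 0.1, 240))  # 20 years of monthly data
    naive = lambda train, h: np.repeat(train[-1], h)
    print("naive forecaster MAE:", evaluate(naive, y, k=5, horizon=12))
```

Any of the compared methods (standard time series models, SVM, RNN-LSTM, autoML) could be wrapped in the `forecaster` interface above, and forecast averaging would then amount to taking a (possibly weighted) mean of several forecasters' predictions at each test point before computing the error.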
| Status | Finished |
|---|---|
| Effective start/end date | 2020/08/01 → 2021/07/31 |
Keywords
- Forecasting time series
- forecast averaging
- machine learning
- training by rolling k-fold cross validation