We compare the predictive performance of a series of machine learning and traditional methods for monthly CDS spreads, using firms’ accounting-based, market-based, and macroeconomic variables over the period 2006 to 2016. We find that ensemble machine learning methods (Bagging, Gradient Boosting, and Random Forest) strongly outperform other estimators, with Bagging standing out in particular in terms of accuracy. Traditional credit risk models using OLS techniques have the lowest out-of-sample prediction accuracy. The results suggest that non-linear machine learning methods, especially the ensemble methods, add considerable value to existing credit risk prediction and enable shadow pricing of CDS for firms that lack such traded securities.
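The horse race the abstract describes can be sketched with scikit-learn. The data below are synthetic, with a deliberately nonlinear target standing in for CDS spreads; the feature set and data-generating process are illustrative assumptions, not the paper's firm-level panel:

```python
# Sketch: OLS vs. ensemble regressors on a synthetic, nonlinear target.
# Features are placeholders for accounting, market, and macro variables.
import numpy as np
from sklearn.ensemble import (BaggingRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
# Nonlinear "spread": an exponential term plus a sign-switching interaction
# that a linear model cannot capture.
y = (np.exp(0.5 * X[:, 0])
     + np.where(X[:, 1] > 0, 2.0, -1.0) * X[:, 2]
     + 0.1 * rng.normal(size=n))

X_tr, X_te = X[:1500], X[1500:]
y_tr, y_te = y[:1500], y[1500:]

models = {
    "OLS": LinearRegression(),
    "Bagging": BaggingRegressor(n_estimators=100, random_state=0),
    "Random Forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
}
rmse = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse[name] = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
```

On data like these, the tree ensembles recover the threshold interaction and deliver much lower out-of-sample RMSE than OLS, mirroring the ranking the abstract reports.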
Inflation has been rising during the pandemic amid supply chain disruptions and a multi-year boom in global owner-occupied house prices. We present some stylized facts pointing to house prices as a leading indicator of headline inflation in the U.S. and eight other major economies with fast-rising house prices. We then apply machine learning methods to forecast inflation in two housing components (rent and owner-occupied housing cost) of headline inflation and draw tentative inferences about their inflationary impact. Our results suggest that for most of these countries, the housing components could make a relatively large and sustained contribution to headline inflation, as inflation is only just starting to reflect the higher house prices. Methodologically, for the vast majority of the countries we analyze, machine learning models outperform the VAR model, suggesting some potential value in incorporating such models into inflation forecasting.
methods (MLs). In terms of credit risk, most studies using machine learning methods focus on bankruptcy and credit rating. Empirical evidence from these discrete measures suggests that recent classifiers such as gradient boosting and random forest clearly excel compared to traditional LDA or probit/logit models (Jones et al., 2015; Flavio et al., 2017). But there has not been equal scrutiny of the continuous measure of CDS spreads. What enables machine learning methods to outperform traditional approaches has not been investigated sufficiently. In this study, we “horserace
Mr. Jean-Francois Dauphin, Mr. Kamil Dybczak, Morgan Maneely, Marzie Taheri Sanjani, Mrs. Nujin Suphaphiphat, Yifei Wang, and Hanqi Zhang
period or over the very near future. More recently, and not surprisingly, machine learning methods have gained popularity among economists and have also been deployed in nowcasting.
This paper describes an effort to strengthen nowcasting capacity at the IMF’s European Department in early 2020. It adds to the growing literature on nowcasting in several ways. First, it motivates and compiles datasets of standard variables and new ones, such as Google search and air quality, including country-specific databases of selected European economies and their transformations
We develop a framework to nowcast (and forecast) economic variables with machine learning techniques. We explain how machine learning methods can address common shortcomings of traditional OLS-based models and use several machine learning models to predict real output growth with lower forecast errors than traditional models. By combining multiple machine learning models into ensembles, we lower forecast errors even further. We also identify measures of variable importance to help improve the transparency of machine learning-based forecasts. Applying the framework to Turkey reduces forecast errors by at least 30 percent relative to traditional models. The framework also better predicts economic volatility, suggesting that machine learning techniques could be an important part of the macro forecasting toolkit of many countries.
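The model-combination step described above can be illustrated as a simple equal-weight average of several models' predictions, together with a tree-based variable-importance readout for transparency. The indicators and target here are random placeholders, not the Turkey dataset used in the paper:

```python
# Sketch: equal-weight ensemble of heterogeneous regressors for a
# growth-nowcasting-style target, plus forest-based variable importance.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
n, p = 300, 8
X = rng.normal(size=(n, p))          # stand-ins for surveys, IP, trade, financial data
y = 1.5 * X[:, 0] + X[:, 1] * (X[:, 2] > 0) + 0.2 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = X[:240], X[240:], y[:240], y[240:]

models = [
    ElasticNet(alpha=0.1),
    RandomForestRegressor(n_estimators=200, random_state=1),
    GradientBoostingRegressor(random_state=1),
]
preds = [m.fit(X_tr, y_tr).predict(X_te) for m in models]
ensemble = np.mean(preds, axis=0)    # equal-weight model averaging

def rmse(pred):
    return float(np.sqrt(np.mean((y_te - pred) ** 2)))

errors = [rmse(p_) for p_ in preds] + [rmse(ensemble)]

# Variable importances from the forest help explain what drives the forecast.
importance = models[1].feature_importances_
```

By convexity of squared error, the equal-weight ensemble's MSE can never exceed the average of its members' MSEs, which is one mechanical reason combining models tends to lower forecast errors further.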
the DGP follows a linear factor structure, which may not necessarily be the case.
B. The Advantages of Machine Learning Methods
Unlike traditional forecasting techniques, ML methods are specifically designed to optimize the bias-variance tradeoff. In particular, ML models can address the above issues with which traditional forecasts have struggled because they select predictors to optimize out-of-sample (rather than in-sample) performance and are better able to handle nonlinear interactions among a large number of predictors (Annex III). In this study we
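A minimal illustration of the bias-variance point: an L2-penalized regression whose penalty is tuned out of sample accepts some bias in exchange for lower variance, and beats unregularized OLS when predictors are many and collinear. The data are a synthetic sketch, not any dataset from the paper:

```python
# Sketch: ridge with a cross-validated penalty vs. OLS when there are
# many correlated predictors relative to observations.
import numpy as np
from sklearn.linear_model import LinearRegression, RidgeCV
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
n, p = 80, 50                        # few observations, many predictors
Z = rng.normal(size=(n, 5))          # low-dimensional common factors
X = Z @ rng.normal(size=(5, p)) + 0.3 * rng.normal(size=(n, p))  # collinear block
beta = np.zeros(p)
beta[:5] = 1.0                       # only a few predictors truly matter
y = X @ beta + rng.normal(size=n)

X_tr, X_te, y_tr, y_te = X[:60], X[60:], y[:60], y[60:]

ols = LinearRegression().fit(X_tr, y_tr)
# RidgeCV picks the penalty by (leave-one-out) cross-validation,
# i.e. by out-of-sample rather than in-sample performance.
ridge = RidgeCV(alphas=np.logspace(-2, 3, 30)).fit(X_tr, y_tr)

ols_rmse = mean_squared_error(y_te, ols.predict(X_te)) ** 0.5
ridge_rmse = mean_squared_error(y_te, ridge.predict(X_te)) ** 0.5
```

OLS fits the training sample almost perfectly here but its coefficient estimates are highly unstable across the collinear predictors, so its test error is large; the penalized model trades a little bias for much lower variance.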