Search Results

You are looking at 1–5 of 5 items for:

  • "essence of machine learning" x
Karim Barhoumi, Seung Mo Choi, Tara Iyer, Jiakun Li, Franck Ouattara, Mr. Andrew J Tiffin, and Jiaxiong Yao

Essence of Machine Learning: Overfitting vs. Underfitting Key Concepts and Algorithms

Karim Barhoumi, Seung Mo Choi, Tara Iyer, Jiakun Li, Franck Ouattara, Mr. Andrew J Tiffin, and Jiaxiong Yao
The COVID-19 crisis has had a tremendous economic impact on all countries. Yet assessing the full impact of the crisis has frequently been hampered by the delayed publication of official GDP statistics in several emerging market and developing economies. This paper outlines a machine-learning framework that helps track economic activity in real time for these economies. As illustrative examples, the framework is applied to selected sub-Saharan African economies. The framework is able to provide timely information on economic activity, well ahead of official statistics.
Mr. Andrew J Tiffin
Machine learning tools are well known for their success in prediction. But prediction is not causation, and causal discovery is at the core of most questions concerning economic policy. Recently, however, the literature has focused more on issues of causality. This paper gently introduces some leading work in this area, using a concrete example: assessing the impact of a hypothetical banking crisis on a country’s growth. By enabling consideration of a rich set of potential nonlinearities, and by allowing individually tailored policy assessments, machine learning can provide an invaluable complement to the skill set of economists within the Fund and beyond.
Karim Barhoumi, Seung Mo Choi, Tara Iyer, Jiakun Li, Franck Ouattara, Mr. Andrew J Tiffin, and Jiaxiong Yao

… to sift efficiently through a broad range of potential variables, identifying the relationships, thresholds, and interactions that are most reliably and robustly informative.

The Essence of Machine Learning: Overfitting vs. Underfitting

But the use of complex, flexible models often comes at a cost: they can work too well. Fitting is easy; prediction is hard. And a key danger of using a complex model is that it will almost always fit the existing sample well. Indeed, a sufficiently complex model should be able to fit the data perfectly. But that is no guarantee …
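
The danger this excerpt describes is easy to demonstrate in code: a sufficiently flexible model can drive its in-sample error toward zero while its error on data it has never seen grows. The sketch below is purely illustrative and not taken from the paper; the toy sine-curve data, the polynomial degrees, and the scikit-learn tooling are all assumptions.

# Illustrative sketch (not from the paper): fit polynomials of increasing
# flexibility to a small noisy sample, then compare in-sample error with
# error on fresh data drawn from the same process.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=30)
y = np.sin(x) + rng.normal(scale=0.3, size=30)          # true signal + noise
x_new = rng.uniform(-3, 3, size=200)                    # unseen data
y_new = np.sin(x_new) + rng.normal(scale=0.3, size=200)

for degree in (1, 3, 15):                               # underfit / moderate / very flexible
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x.reshape(-1, 1), y)
    mse_in = mean_squared_error(y, model.predict(x.reshape(-1, 1)))
    mse_out = mean_squared_error(y_new, model.predict(x_new.reshape(-1, 1)))
    print(f"degree {degree:2d}: in-sample MSE {mse_in:.3f}, new-data MSE {mse_out:.3f}")

The in-sample column necessarily improves as the model becomes more flexible; whether the new-data column improves as well is exactly what the excerpt says cannot be taken for granted.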

Mr. Andrew J Tiffin

… As a general guide, the field has its origins in computational statistics and is chiefly concerned with the use of algorithms to identify patterns within a dataset (Kuhn and Johnson, 2016). The actual algorithms can range from the simplest OLS regression to the most complex “deep learning” neural network, but ML is distinguished by its often single-minded focus on predictive performance; indeed, the essence of machine learning is the design of experiments to assess how well a model trained on one dataset will predict new data (Box 2). In this regard, the …
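
Concretely, that "design of experiments" usually amounts to a held-out test set or k-fold cross-validation: the model is fit on one part of the data and judged only on the part it never saw. A minimal sketch, assuming scikit-learn and a synthetic dataset; the random-forest choice and all parameters are illustrative and are not drawn from the paper or its Box 2.

# Illustrative sketch of out-of-sample evaluation: a single held-out split
# and 5-fold cross-validation. Synthetic data and the random-forest choice
# are assumptions made for illustration only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, cross_val_score

X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

# 1) Single held-out test set: train on 80% of the data, score on the untouched 20%.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("train R^2:", round(model.score(X_train, y_train), 3))
print("test  R^2:", round(model.score(X_test, y_test), 3))

# 2) 5-fold cross-validation: repeat the same experiment on five different splits.
cv_scores = cross_val_score(RandomForestRegressor(n_estimators=200, random_state=0),
                            X, y, cv=5, scoring="r2")
print("5-fold R^2:", np.round(cv_scores, 3), "mean:", round(cv_scores.mean(), 3))

Cross-validation simply repeats the train-then-test experiment on several different splits, giving a less noisy estimate of how the model is likely to perform on genuinely new data.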