Updated on Oct 05, 2020
Overfitting is a modeling error that occurs when a function is fit too closely to a limited set of data points. It typically takes the form of building an excessively complex model to explain idiosyncrasies in the data under study.
In reality, the data being studied often carries some degree of error or random noise. Forcing a model to conform too closely to slightly inaccurate data therefore infects the model with those errors and reduces its predictive power.
A common pitfall is using computer algorithms to search vast databases of historical market data for trends. Given enough searching, detailed theorems can sometimes be developed that appear to predict outcomes such as stock market returns with near-perfect accuracy.
When applied to data outside the sample, however, such theorems are likely to prove to be merely the overfitting of a model to what are in fact chance occurrences. In every case, it is essential to test a model against data outside the sample used to develop it.
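The contrast between in-sample fit and out-of-sample performance can be illustrated with a minimal sketch. All the data points below are made up for illustration: five noisy observations of a simple true relationship (y = 2x). A degree-4 polynomial threaded exactly through the five training points achieves zero in-sample error, yet a plain least-squares line predicts unseen points far better.

```python
# Hypothetical noisy observations of the true relationship y = 2x.
train = [(0, 0.5), (1, 1.7), (2, 4.8), (3, 5.4), (4, 8.2)]   # signal + noise
test  = [(x, 2.0 * x) for x in (0.5, 1.5, 2.5, 3.5)]         # unseen points

def lagrange(points, x):
    """Degree-(n-1) polynomial passing exactly through all n points (the overfit model)."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def linear_fit(points):
    """Closed-form ordinary least-squares line y = a*x + b (the simple model)."""
    n = len(points)
    sx  = sum(x for x, _ in points)
    sy  = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def mse(model, points):
    """Mean squared prediction error of a model over a set of (x, y) points."""
    return sum((model(x) - y) ** 2 for x, y in points) / len(points)

overfit = lambda x: lagrange(train, x)
line = linear_fit(train)

print(f"in-sample MSE:     overfit={mse(overfit, train):.3f}  linear={mse(line, train):.3f}")
print(f"out-of-sample MSE: overfit={mse(overfit, test):.3f}  linear={mse(line, test):.3f}")
```

Running this, the overfit polynomial scores a perfect 0 in-sample error versus roughly 0.245 for the line, yet out of sample its error (roughly 0.55) is more than twenty times worse than the line's (roughly 0.025). The complex model has memorized the noise; the simple model captured the signal.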
Financial professionals must also be aware of the risks of overfitting a model based on limited data.