24F Final Project: Overfitting
Revision as of 07:27, 22 October 2022
By Thomas Zhang
Overfitting is a phenomenon in machine learning that occurs when a learning algorithm fits its training data too closely (or even exactly), producing a model that cannot make accurate predictions on new data.[1] More generally, it means the model has learned the training data too well, including its noise and random fluctuations, so performance degrades when the model is presented with unseen data. This is a major problem: the ability of machine learning models to classify data and make predictions or decisions underpins many real-world applications, and overfitting undermines a model's ability to generalize to new data, directly limiting its usefulness for the classification and prediction tasks it was built for.[1]
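The gap described above, near-perfect performance on the training data but poor performance on new data, can be illustrated with a small sketch. The synthetic dataset, the choice of NumPy, and the polynomial degrees below are illustrative assumptions, not part of the source: a degree-9 polynomial has enough parameters to pass through all 10 noisy training points exactly, memorizing the noise, while a degree-2 polynomial matches the true underlying function.

```python
# Minimal overfitting sketch (assumed setup): fit polynomials of two
# different degrees to 10 noisy samples of y = x^2, then compare error
# on the training points versus error against the true function.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the true function x^2 plus Gaussian noise.
x_train = np.linspace(-1, 1, 10)
y_train = x_train**2 + rng.normal(0, 0.1, size=x_train.shape)

# Held-out evaluation grid with noise-free ground truth.
x_test = np.linspace(-1, 1, 100)
y_test = x_test**2

def mse(coeffs, x, y):
    """Mean squared error of a polynomial (highest-degree-first coeffs)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# Degree 9: ten coefficients for ten points, so it interpolates the
# training set exactly, noise included (overfits).
overfit = np.polyfit(x_train, y_train, deg=9)
# Degree 2: matches the complexity of the true function.
simple = np.polyfit(x_train, y_train, deg=2)

print("degree-9 train MSE:", mse(overfit, x_train, y_train))  # essentially zero
print("degree-9 test  MSE:", mse(overfit, x_test, y_test))    # typically much larger
print("degree-2 test  MSE:", mse(simple, x_test, y_test))     # typically small
```

The telltale signature is the gap between training and test error for the degree-9 model: it has memorized the noise in the 10 samples rather than the underlying pattern, which is exactly the failure to generalize described above.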
Background/History
The term “overfitting” originated in the field of statistics, where the subject was studied extensively in the context of regression analysis and pattern recognition. With the rise of artificial intelligence and machine learning, however, the phenomenon has received increased attention because of its important implications for the performance of AI models.[2] The concept has evolved significantly since those early days, with researchers continuously working to develop methods that mitigate overfitting’s adverse effects on model accuracy and generalization.[2]
References
- [1] IBM. Overfitting. https://www.ibm.com/topics/overfitting
- [2] Lark Editorial Team. (2023, Dec 26). Overfitting. Lark Suite. https://www.larksuite.com/en_us/topics/ai-glossary/overfitting