Underfitting

Underfitting definition

Underfitting occurs when a machine-learning model is too simple to capture the patterns in its training data. Because it lacks sufficient parameters, complexity, or training time, it misses important relationships and yields poor predictions. As a result, the model performs badly even on the data it was trained on. It suffers from high bias — making broad, oversimplified assumptions — and low variance, so it cannot flexibly respond to the true complexity of the data.
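The definition above can be illustrated with a minimal sketch: fitting a straight line to clearly non-linear (quadratic) data. The synthetic dataset and the NumPy polynomial fit are illustrative choices, not part of the original text; the point is that the too-simple model has a large error even on its own training data.

```python
import numpy as np

# Synthetic non-linear data: y = x^2 plus a little noise
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 100)
y = x**2 + rng.normal(0, 0.1, size=x.shape)

def train_mse(degree):
    """Fit a polynomial of the given degree and return its mean
    squared error on the *training* data itself."""
    coeffs = np.polyfit(x, y, degree)
    pred = np.polyval(coeffs, x)
    return np.mean((y - pred) ** 2)

mse_linear = train_mse(1)  # too simple for quadratic data: underfits
mse_quad = train_mse(2)    # matches the data's true shape

print(f"linear model train MSE:    {mse_linear:.3f}")
print(f"quadratic model train MSE: {mse_quad:.3f}")
```

The linear model's training error stays large no matter how it is fit, which is the signature of underfitting: high bias that no amount of training on the same data can remove.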

See also: overfitting, supervised machine learning, unsupervised machine learning, adversarial machine learning

How underfitting happens

Underfitting occurs when:

  • The model architecture is too basic, such as using a linear model to fit non-linear data.
  • The training process ends too early, before the model captures key features.
  • Input features are limited or poorly selected, reducing the amount of information available to learn from.

For example, a cybersecurity model that tries to detect fraud using only one or two transaction features (such as amount or time) will likely miss complex fraud patterns involving location, device type, and user behavior. In this case, the system simply doesn’t have enough capacity to model reality accurately.
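A toy sketch of this capacity limit, with entirely synthetic data: suppose (hypothetically) that fraud only occurs when a transaction has both a large amount and an unusual time of day. A model that sees only the amount can never separate the classes well, however it sets its threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
amount = rng.uniform(0, 100, n)    # transaction amount
odd_hour = rng.integers(0, 2, n)   # 1 = unusual time of day
# Hypothetical ground truth: fraud requires BOTH signals at once
fraud = ((amount > 70) & (odd_hour == 1)).astype(int)

def best_threshold_acc(feature, labels):
    """Best accuracy achievable by thresholding a single feature
    (trying the threshold in both directions)."""
    accs = [max(np.mean((feature > t) == labels),
                np.mean((feature <= t) == labels))
            for t in np.unique(feature)]
    return max(accs)

acc_amount_only = best_threshold_acc(amount, fraud)
acc_both = np.mean(((amount > 70) & (odd_hour == 1)) == fraud)

print(f"amount-only model accuracy: {acc_amount_only:.2f}")
print(f"two-feature rule accuracy:  {acc_both:.2f}")
```

The single-feature model tops out well below the two-feature rule: its input simply does not carry enough information, which is underfitting by feature starvation rather than by model architecture.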

To prevent underfitting, developers can increase model complexity, gather more representative data, and tune hyperparameters to improve learning without overcomplicating the system.
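One common way to tune complexity "without overcomplicating the system" is to choose model capacity on a held-out validation split. The sketch below, with assumed synthetic data, sweeps polynomial degree and keeps the degree with the lowest validation error, rejecting both the underfit low degrees and the needlessly complex high ones.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + rng.normal(0, 0.2, size=x.shape)

# Hold out a validation split so capacity is judged on unseen data
idx = rng.permutation(len(x))
train, val = idx[:150], idx[150:]

def val_mse(degree):
    """Fit on the training split, score on the validation split."""
    coeffs = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coeffs, x[val])
    return np.mean((y[val] - pred) ** 2)

errors = {d: val_mse(d) for d in range(1, 10)}
best = min(errors, key=errors.get)
print(f"chosen degree: {best}")
```

A degree-1 model underfits the sine curve badly, so validation error drops sharply once the model gains enough capacity, then flattens; selecting the minimum navigates between the two failure modes the article describes.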

Why underfitting matters

Underfitted models are unreliable because they fail to make accurate predictions even in familiar situations. In security applications, such models may ignore early warning signs of intrusions or fail to detect unusual activity. The goal in machine learning is to balance performance — avoiding both overfitting and underfitting — to ensure consistent, real-world accuracy.