Overview

Regularization adds a penalty or constraint during training that discourages the model from becoming overly complex or relying too heavily on individual features, which helps it generalize better to unseen data.
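The idea above can be sketched as a penalized objective: the total loss is the ordinary data loss plus a penalty term scaled by a strength parameter. This is a minimal illustration; the names (total_loss, data_loss, penalty, lam) are placeholders, not any library's API.

```python
import numpy as np

# Sketch of a penalized objective: total loss = data loss + lam * penalty(w).
# All names here are illustrative, not from a specific library.
def total_loss(w, data_loss, penalty, lam=0.1):
    return data_loss(w) + lam * penalty(w)

# Example with an L2 penalty on a small weight vector.
w = np.array([1.0, -2.0, 0.5])
l2 = lambda w: float(np.sum(w ** 2))      # sum of squared weights
mse = lambda w: 0.0                        # stand-in data loss for the sketch
total = total_loss(w, mse, l2, lam=0.1)    # 0.1 * (1 + 4 + 0.25) = 0.525
```

Larger values of lam penalize big weights more heavily, trading some training-set fit for simpler models.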

Common Techniques

  • L1 Regularization (Lasso): Adds a penalty equal to the sum of the absolute values of the weights, which often drives some weights exactly to zero and produces sparse models.
  • L2 Regularization (Ridge): Adds a penalty equal to the sum of the squared weights, shrinking all weights toward zero without typically making them exactly zero.
  • Dropout: Randomly deactivating a fraction of neurons at each training step so the network cannot rely too heavily on any one unit.
  • Early Stopping: Halting training when performance on a validation set stops improving.
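The difference between L1 and L2 can be seen in a small experiment. The sketch below, using only NumPy, fits a linear model to data with a sparse ground truth by gradient descent, adding either penalty's (sub)gradient to the loss gradient; the setup (fit, lam, lr) is illustrative, not a library API.

```python
import numpy as np

# Toy data: y = X @ w_true + noise, where w_true is sparse.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([2.0, 0.0, 0.0, -3.0, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

def fit(penalty, lam=1.0, lr=0.01, steps=2000):
    """(Sub)gradient descent on MSE plus an L1 or L2 penalty."""
    w = np.zeros(5)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the MSE term
        if penalty == "l1":
            grad += lam * np.sign(w)            # subgradient of lam * sum|w_i|
        else:
            grad += 2 * lam * w                 # gradient of lam * sum w_i^2
        w -= lr * grad
    return w

w_l1 = fit("l1")
w_l2 = fit("l2")
# w_l1 keeps the two relevant weights and pushes the irrelevant ones to
# (near) zero; w_l2 shrinks every weight but leaves none exactly zero.
```

Running both fits on the same data shows the characteristic behaviors: the L1 solution is sparse, while the L2 solution is uniformly shrunk, which is why Lasso is often used for feature selection.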

Related Terms