Basic Math Concepts – L1 Regularization in Neural Networks

Loss Function without Regularization:

    L = Loss(y, ŷ)

L1 Regularized Loss:

    L_L1 = Loss(y, ŷ) + λ ∑j |wj|

Where:

  • λ: Regularization strength (how much penalty you apply)
  • |wj|: Absolute value of weight wj
  • ∑j |wj|: Sum of the absolute values of all weights (the total penalty)

This penalty shrinks weights toward zero, and some become exactly zero, making the model simpler and more interpretable.
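As a concrete sketch of the formula above, the snippet below computes an L1-regularized loss in NumPy. The base loss is taken to be mean squared error purely for illustration (the note does not fix a particular Loss(y, ŷ)), and all names here are hypothetical:

```python
import numpy as np

def l1_regularized_loss(y_true, y_pred, weights, lam):
    """Illustrative L1-regularized loss: Loss(y, ŷ) + λ * Σ|wj|."""
    base_loss = np.mean((y_true - y_pred) ** 2)  # Loss(y, ŷ), here MSE as an example
    l1_penalty = lam * np.sum(np.abs(weights))   # λ * Σj |wj|
    return base_loss + l1_penalty

# Toy values, chosen only to show the two terms combining
y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.8])
w = np.array([0.5, -1.5, 0.0, 2.0])
print(l1_regularized_loss(y_true, y_pred, w, lam=0.01))
```

Here the MSE term is 0.03 and the penalty term is 0.01 × (0.5 + 1.5 + 0.0 + 2.0) = 0.04, so the regularized loss is 0.07. Raising λ makes large weights more expensive relative to fitting the data.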

Concept        Description
Goal           Avoid overfitting by penalizing large weights
What it does   Adds the absolute values of the weights to the loss
Result         Many weights become exactly zero (sparse model)
Benefit        Feature selection, model simplicity
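The "exactly zero" behavior in the table can be demonstrated with a small sketch. One standard way to optimize an L1-penalized least-squares loss is proximal gradient descent (ISTA), whose soft-thresholding step snaps small weights to exactly 0; the data and hyperparameters below are made up for illustration:

```python
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of the L1 norm: shrinks by t, zeroing anything smaller."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_w = np.array([2.0, 0.0, 0.0, -3.0, 0.0])  # only 2 informative features
y = X @ true_w

w = np.zeros(5)
lam, lr = 0.5, 0.01
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)            # gradient of the MSE term
    w = soft_threshold(w - lr * grad, lr * lam)  # L1 proximal step

print(np.round(w, 3))
```

After training, the weights on the uninformative features are exactly 0.0 (not merely small), which is the sparsity / feature-selection effect the table describes; an L2 penalty would instead leave them small but nonzero.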

Next – L2 Regularization in Neural Networks