Regularization
- Techniques used to avoid the overfitting problem during training
- Makes the fitted function smoother
Most widely used techniques (a short code sketch follows this list)
- L1 regularization - Penalize weights in proportion to the sum of the absolute values of the weights
- L2 regularization - Penalize weights in proportion to the sum of the squares of the weights
- Dropout - Randomly set a fraction of neurons to 0 at each training iteration
- Weight decay - Shrink the weights toward zero at each update; equivalent to L2 regularization for plain SGD
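
A minimal NumPy sketch of how the L1/L2 penalties and dropout might look in code. The weight matrix `W`, the stand-in `data_loss`, the strength `lmbda`, and the `dropout` helper are illustrative assumptions, not code from this repo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weight matrix and a stand-in for the unregularized training loss.
W = rng.normal(size=(4, 3))
data_loss = 1.0

lmbda = 1e-3  # regularization strength (hyperparameter, assumed value)

# L1 regularization: penalty proportional to the sum of absolute weight values.
l1_penalty = lmbda * np.sum(np.abs(W))
total_loss_l1 = data_loss + l1_penalty

# L2 regularization: penalty proportional to the sum of squared weight values.
l2_penalty = lmbda * np.sum(W ** 2)
total_loss_l2 = data_loss + l2_penalty

# Dropout (inverted-dropout form): during training, zero out a random fraction
# of activations and rescale the survivors so the expected activation is
# unchanged, which lets inference skip dropout entirely.
def dropout(activations, drop_prob=0.5, training=True):
    if not training or drop_prob == 0.0:
        return activations
    mask = rng.random(activations.shape) >= drop_prob
    return activations * mask / (1.0 - drop_prob)

h = rng.normal(size=(2, 4))                 # hypothetical hidden-layer activations
h_train = dropout(h, drop_prob=0.5)         # training: ~half of the units zeroed
h_eval = dropout(h, training=False)         # evaluation: activations pass through
```

In practice these penalties are added to the loss (or, for weight decay, applied directly in the optimizer update) rather than computed standalone as above; the sketch only shows the quantities each technique introduces.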