gradient descent - AshokBhat/ml GitHub Wiki
- An optimization algorithm that minimizes a loss function by iteratively moving in the direction of steepest descent
- The descent direction is the negative of the gradient of the loss
- Used to update the parameters of the model
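The update rule above can be sketched in a few lines. This is a minimal, hypothetical example (the quadratic loss, learning rate, and step count are assumptions, not from the wiki): minimize f(w) = (w - 3)² by repeatedly stepping against its gradient.

```python
# Minimal sketch of gradient descent on f(w) = (w - 3)^2.
# The gradient is f'(w) = 2 * (w - 3); stepping against it moves w toward 3.
def gradient_descent(grad, w0, lr=0.1, steps=100):
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)  # move opposite the gradient (steepest descent)
    return w

w_min = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
```

With a small enough learning rate, `w` converges toward the minimizer at 3.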
| Type | Examples per pass |
|---|---|
| Batch gradient descent | All training examples |
| Stochastic gradient descent | 1 random example |
| Mini-batch gradient descent | N random examples, with N << total number of training examples |
- What is the algorithm?
- How is it used in machine learning?
- Why are there different ways of doing gradient descent?