Gradient Descent - AshokBhat/ml GitHub Wiki
## Description
- An optimization algorithm used to minimize a loss function by iteratively moving in the direction of steepest descent
- The descent direction is given by the negative of the gradient
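The update rule described above can be sketched in a few lines. This is a minimal illustration, not from the wiki itself: the loss f(w) = (w - 3)², its gradient, the learning rate, and the iteration count are all assumed for the example.

```python
def grad(w):
    # Analytic gradient of the illustrative loss f(w) = (w - 3)**2
    return 2.0 * (w - 3.0)

w = 0.0    # initial parameter (assumed starting point)
lr = 0.1   # learning rate (step size), an illustrative choice

for _ in range(100):
    # Move in the direction of steepest descent:
    # the negative of the gradient, scaled by the learning rate
    w -= lr * grad(w)
```

After enough iterations `w` approaches the minimizer of the loss (here, 3).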
## Usage
- To update the parameters of the model
## Types
| Type | Examples per pass |
|---|---|
| [[Batch gradient descent]] | During each pass, process all [[example]]s |
| [[Stochastic gradient descent]] | During each pass, process 1 random [[example]] |
| [[Mini-batch gradient descent]] | During each pass, process N random [[example]]s, with N << all the training examples |
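The three variants differ only in how many examples feed each parameter update. A rough sketch on a tiny 1-D linear regression (the data `y = 2x`, the learning rate, and the epoch count are illustrative assumptions, not from the wiki):

```python
import random

# Eight (x, y) training examples from the assumed relation y = 2x
data = [(x, 2.0 * x) for x in range(1, 9)]

def grad_w(w, batch):
    # Gradient of mean squared error 0.5*(w*x - y)**2 over a batch
    return sum((w * x - y) * x for x, y in batch) / len(batch)

def train(batch_size, epochs=200, lr=0.01, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        examples = data[:]
        rng.shuffle(examples)
        # batch_size == len(data)     -> batch GD (one update per pass)
        # batch_size == 1             -> stochastic GD
        # 1 < batch_size < len(data)  -> mini-batch GD
        for i in range(0, len(examples), batch_size):
            batch = examples[i:i + batch_size]
            w -= lr * grad_w(w, batch)
    return w

for size in (len(data), 1, 4):   # batch, stochastic, mini-batch
    print(size, round(train(size), 3))
```

All three converge toward the true slope (2.0) here; in practice they trade off update cost per pass against the noise in each gradient estimate.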
## FAQ
- What is the algorithm?
- How is it used in machine learning?
- Why are there different ways of doing gradient descent?