NN training
The training process of a neural network (NN) can be summarized as:
backpropagation + gradient descent (or another optimizer)
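For intuition, here is a minimal from-scratch sketch of that loop: a single-weight linear model fit with squared error, where the gradient is derived by hand via the chain rule and applied with plain gradient descent. The data and learning rate are illustrative, not from the original.

```python
import numpy as np

# Toy data following y = 3x, so the ideal weight is 3.0 (illustrative values).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 6.0, 9.0, 12.0])

w = 0.0    # single trainable weight
lr = 0.01  # learning rate

for step in range(200):
    y_pred = w * x                        # forward pass
    loss = np.mean((y_pred - y) ** 2)     # mean squared error
    grad = np.mean(2 * (y_pred - y) * x)  # "backprop": dLoss/dw via the chain rule
    w -= lr * grad                        # gradient descent update

print(w)  # approaches 3.0
```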
Variations:
- Instead of plain gradient descent, modern deep learning often uses optimizers such as (a sketch of swapping between them follows this list):
  - SGD (Stochastic Gradient Descent, updating from mini-batches)
  - Momentum (adds inertia to the updates)
  - Adam (combines momentum with adaptive per-weight learning rates)
  - RMSprop (adaptive per-weight learning rates)
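In a framework (PyTorch is used here purely as an example; the source does not name one), switching between these variants usually means changing only the optimizer's constructor, while the rest of the training loop stays the same. The model and hyperparameter values below are illustrative defaults, not recommendations.

```python
import torch

model = torch.nn.Linear(10, 1)  # any model; a single linear layer for illustration

# Only this line changes when you pick a different optimizer:
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)                  # plain SGD
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # SGD + momentum
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)               # Adam
# optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)            # RMSprop
```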
But the core idea remains:
Backpropagation calculates the gradients, and the optimizer uses them to update the weights.
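As a concrete sketch of that division of labor (again in PyTorch, with an illustrative model, random toy data, and arbitrary hyperparameters): `loss.backward()` runs backpropagation and fills each parameter's gradient, and `optimizer.step()` consumes those gradients to update the weights.

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

# Random toy batch, for illustration only.
inputs = torch.randn(32, 10)
targets = torch.randn(32, 1)

for epoch in range(100):
    optimizer.zero_grad()             # clear gradients from the previous step
    outputs = model(inputs)           # forward pass
    loss = loss_fn(outputs, targets)  # compute the loss
    loss.backward()                   # backpropagation: computes the gradients
    optimizer.step()                  # optimizer uses them to update the weights
```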