🧠 Neural Network Training Flow - Vanishing Gradient Problem

Epoch: 0

Step 1: Random Initialization
Step 2: Feed Forward
Step 3: Calculate Loss Function
Step 4: Calculate Derivative
Step 5: Backpropagate
Step 6: Update Weights
Step 7: Iterate Until Convergence

(The updated weights feed back into the model, closing the training loop.)
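The seven steps above can be sketched as a minimal training loop. This is an illustrative sketch, not the demo's actual code: it assumes a one-weight linear model and a squared-error loss, and the function name `train` and its defaults are invented here.

```python
import random

def train(x=1.0, y_true=2.0, lr=0.01, epochs=500):
    """One-weight linear model trained with squared-error loss (assumed)."""
    w = random.uniform(-1.0, 1.0)            # Step 1: random initialization
    loss = float("inf")
    for _ in range(epochs):                  # Step 7: iterate until convergence
        y_pred = w * x                       # Step 2: feed forward
        loss = (y_true - y_pred) ** 2        # Step 3: calculate loss
        grad = -2.0 * (y_true - y_pred) * x  # Step 4: derivative dL/dw
        # Step 5: backpropagation is trivial here (one weight, no hidden layers)
        w -= lr * grad                       # Step 6: update weights
    return w, loss
```

With x = 1 and expected y = 2, the weight converges toward 2, since that makes the prediction w * x equal the target.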

Current Values

Input X: 1
Expected Y: 2
Predicted Y: 0.5
Loss: 2.25
Weight: 0.5
Gradient: 0.0
Learning Rate: 0.01
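Assuming a squared-error loss and a one-weight linear model, so that Predicted Y = Weight × Input X (assumptions; the demo does not state its model), the panel values are mutually consistent, and the near-zero gradient means the update rule leaves the weight unchanged:

```python
# Values taken from the "Current Values" panel above.
x, y_true, w, lr, grad = 1.0, 2.0, 0.5, 0.01, 0.0

y_pred = w * x                 # 0.5, matches "Predicted Y"
loss = (y_true - y_pred) ** 2  # (2 - 0.5)^2 = 2.25, matches "Loss"

w_new = w - lr * grad          # 0.5 - 0.01 * 0.0 = 0.5: the weight does not move
```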

Problem Demonstration: Because the gradient is effectively 0, each update (Weight − Learning Rate × Gradient) changes the weight by almost nothing. Training stalls even though the loss is still 2.25: this is the vanishing gradient problem.
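A gradient this close to zero typically comes from saturating activations. As a minimal sketch (assuming sigmoid activations and unit weights, which the demo does not specify), each layer multiplies the backpropagated gradient by sigma'(z) <= 0.25, so the gradient shrinks exponentially with depth:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)  # at most 0.25 (reached at z = 0)

def chain_gradient(x, n_layers):
    """Gradient of n stacked sigmoid layers w.r.t. the input (unit weights assumed)."""
    # Forward pass: record each layer's pre-activation.
    zs = []
    a = x
    for _ in range(n_layers):
        zs.append(a)       # weight fixed at 1, so z equals the previous activation
        a = sigmoid(a)
    # Backward pass: multiply the local derivatives layer by layer.
    grad = 1.0
    for z in reversed(zs):
        grad *= sigmoid_grad(z)
    return grad

for depth in (1, 5, 10, 20):
    print(depth, chain_gradient(0.5, depth))
```

At 20 layers the product of per-layer factors is already far below any useful magnitude, which is why the panel shows a gradient of 0.0 to within display precision.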