Basic Math Concepts – Relevance of Categorical Cross-Entropy in Neural Networks
1. Minimum Math Concepts:
| Concept | Why It's Needed |
|---|---|
| Probability | Understanding class probabilities (e.g., [0.7, 0.2, 0.1]) |
| Logarithm (log base e) | The loss formula uses the natural log (ln) |
| One-hot encoding | Target values are represented as vectors like [1, 0, 0] |
| Summation / Looping | To compute the total loss over multiple samples |
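A minimal NumPy sketch of these four ingredients (the vectors are illustrative values, not output from a real model):

```python
import numpy as np

# Probability: a softmax-style output, one probability per class (illustrative)
probs = np.array([0.7, 0.2, 0.1])
print(probs.sum())                      # probabilities sum to 1.0

# One-hot encoding: class index 0 becomes the vector [1, 0, 0]
target = np.eye(3)[0]
print(target)                           # [1. 0. 0.]

# Natural logarithm: np.log is log base e
print(np.log(probs))                    # [-0.357 -1.609 -2.303]

# Summation: np.sum adds the per-class terms in one call
print(np.sum(target * np.log(probs)))   # ≈ -0.357, used by the loss in the next section
```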
2. Categorical Cross-Entropy Formula:
For a single sample with N classes:
$$\text{Loss} = -\sum_{i=1}^{N} y_i \cdot \log(\hat{y}_i)$$

Where:
- $y_i$ is the actual (one-hot) label for class $i$.
- $\hat{y}_i$ is the predicted probability for class $i$.
Because the target is one-hot, only one $y_i$ equals 1 and the rest are 0, so the formula simplifies to:

$$\text{Loss} = -\log(\hat{y}_c), \quad \text{where } c \text{ is the index of the correct class}$$
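A short sketch, assuming NumPy, that computes the loss with the full formula and with the one-hot shortcut; the epsilon clip that guards against log(0) is a common practical addition, not part of the formula itself:

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Loss = -sum_i y_i * log(y_hat_i) for a single sample."""
    y_pred = np.clip(y_pred, eps, 1.0)   # avoid log(0)
    return -np.sum(y_true * np.log(y_pred))

y_true = np.array([1, 0, 0])             # one-hot target
y_pred = np.array([0.7, 0.2, 0.1])       # softmax output (illustrative)

full = categorical_cross_entropy(y_true, y_pred)
simplified = -np.log(y_pred[np.argmax(y_true)])  # only the correct class contributes

print(full, simplified)                  # both ≈ 0.3567
```

Both forms give the same value, since every term with $y_i = 0$ drops out of the sum.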
Relevance of Categorical Cross-Entropy in Neural Networks – Visual Roadmap