Basic Math Concepts – Binary Cross Entropy Relevance in Neural Networks

1. Logarithms (Log)

  • We should know that:
    • log(1) = 0
    • log(x) is negative if x is between 0 and 1
  • Used in:

    Binary Cross Entropy = – [ y * log(p) + (1 – y) * log(1 – p) ]

Why is it needed? To penalize confident wrong predictions sharply and to reward predictions that are close to the true label.
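A minimal numeric sketch (NumPy only, with made-up probability values) of how these log facts drive the loss: log(1) contributes nothing, while the log of a small probability becomes a large penalty once negated.

```python
import numpy as np

print(np.log(1.0))   # 0.0 -> a perfect prediction adds no loss
print(np.log(0.9))   # about -0.105 -> small penalty once negated
print(np.log(0.1))   # about -2.303 -> large penalty once negated

# For a true label y = 1, the Binary Cross Entropy term is -log(p):
for p in (0.99, 0.9, 0.5, 0.1, 0.01):
    print(f"p = {p:5.2f} -> loss = {-np.log(p):.3f}")
# The loss grows sharply as the prediction moves away from the true label.
```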

2. Probability Basics

  • Know what values between 0 and 1 mean:
    • p = 0.9 → 90% chance
    • p = 0.1 → 10% chance
  • Sum of probabilities in multi-class classification = 1

Why is it needed? Because neural networks output probabilities, and Cross Entropy compares them to the actual (true) labels.
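To make this concrete, here is a small sketch (NumPy only, with example logits chosen purely for illustration) of how raw network outputs become probabilities: a sigmoid for binary classification, and a softmax whose outputs sum to 1 for multi-class classification.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract max for numerical stability
    return e / e.sum()

print(sigmoid(2.0))             # ~0.88 -> "88% chance of the positive class"
probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())       # three class probabilities that sum to 1.0
```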

3. Binary Numbers (0 or 1)

  • In binary classification, our actual labels are:
    • 1 (true class)
    • 0 (false class)

Why is it needed? Because the formula uses the actual label y as either 0 or 1, and this decides which log term is active.
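The switching behaviour can be checked directly in code; the sketch below (NumPy, with an example prediction p = 0.8) evaluates the formula for both possible labels.

```python
import numpy as np

# loss = -[ y*log(p) + (1-y)*log(1-p) ]; p = 0.8 is just an example value.
p = 0.8
for y in (1, 0):
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    active = "-log(p)" if y == 1 else "-log(1-p)"
    print(f"y = {y}: active term is {active}, loss = {loss:.3f}")
# y = 1 -> only -log(p) matters; y = 0 -> only -log(1-p) matters.
```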

4. Multiplication & Addition

  • Nothing fancy, just:
    • Multiply the actual label by log(predicted probability)
    • Add both parts of the loss (a worked example follows this list)
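A worked example of that multiply-then-add step, using invented values for y and p:

```python
import numpy as np

y, p = 1, 0.7                                # example label and prediction
part_true  = y * np.log(p)                   # y * log(p)
part_false = (1 - y) * np.log(1 - p)         # (1 - y) * log(1 - p)
loss = -(part_true + part_false)             # add both parts, then negate
print(part_true, part_false, loss)           # about -0.357, 0.0, 0.357
```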

5. Negative Numbers

  • Understand how negative values behave, especially:
    • log(p) is negative for 0 < p < 1, so negating it yields a positive loss
    • This is why the Cross Entropy formula starts with a – sign
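A quick numerical check (NumPy, example probabilities only) that the leading minus sign keeps the loss non-negative:

```python
import numpy as np

# log(p) <= 0 for 0 < p <= 1, so -log(p) >= 0.
for p in (0.99, 0.5, 0.01):
    print(f"log({p}) = {np.log(p):7.3f}   -log({p}) = {-np.log(p):6.3f}")
# The loss is 0 only when the prediction matches the label exactly (p = 1 for y = 1).
```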

Summary Table

| Concept             | Why It's Needed                             |
| ------------------- | ------------------------------------------- |
| Logarithms          | To penalize or reward predictions           |
| Probabilities       | Model outputs and target labels             |
| Binary values (0/1) | Label-based switching in the formula        |
| Multiplication      | To compute weighted loss components         |
| Negative numbers    | Ensures the loss is positive (non-negative) |
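Pulling all of these concepts together, here is a NumPy-only sketch of Binary Cross Entropy averaged over a small batch. The labels and predictions are invented, and the eps clipping is a common numerical-stability trick to avoid log(0), not part of the formula above.

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    y_pred = np.clip(y_pred, eps, 1 - eps)           # keep log() finite
    return -np.mean(y_true * np.log(y_pred) +
                    (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 1])                      # actual labels (0 or 1)
y_pred = np.array([0.9, 0.2, 0.8, 0.4])              # model probabilities
print(binary_cross_entropy(y_true, y_pred))          # about 0.37
```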

Binary Cross Entropy Relevance in Neural Networks – Visual Roadmap