Backpropagation with Multiple Neurons

1. What is Backpropagation?

It’s the process a neural network uses to learn from its mistakes: after measuring the error in its predictions, it works backwards and adjusts the weights to reduce that error.

When There Are Multiple Neurons…

Imagine we have:

  • 2 input neurons (for features)
  • 2 hidden neurons (in one hidden layer)
  • 1 output neuron
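
For concreteness, here is a minimal sketch of the parameters this 2-2-1 network needs; the weight and bias names match the formulas in the steps below, and the starting values are just placeholders:

  # 2 inputs -> 2 hidden neurons -> 1 output: nine trainable parameters
  w1, w2 = 0.1, 0.2           # input x1, x2 -> hidden neuron h1
  w3, w4 = 0.3, 0.4           # input x1, x2 -> hidden neuron h2
  w5, w6 = 0.5, 0.6           # hidden h1, h2 -> output neuron
  b1, b2, b3 = 0.1, 0.1, 0.1  # one bias per neuron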

2. Step-by-Step Flow (Backpropagation):

  1. Forward Pass
    • Compute the hidden activations (weighted sum, then activation function):

      h1 = activation(w1*x1 + w2*x2 + b1)
      h2 = activation(w3*x1 + w4*x2 + b2)

    • Compute the output:

      y_pred = activation(w5*h1 + w6*h2 + b3)

  2. Calculate Error
    • Error = how far the predicted output (y_pred) is from the actual output (y_actual)
      Example (squared error):

      error = (y_pred - y_actual)^2

  3. Backward Pass

    • Calculate the gradient of the error w.r.t. the output weights (w5, w6)
    • Then go one layer back and calculate the gradients w.r.t. the hidden weights (w1 to w4)
    • Use the chain rule to pass the error backwards through each neuron (a worked sketch follows this list)
  4. Update Weights

    • Adjust each weight:

      weight = weight - learning_rate * gradient
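
To make the chain rule in step 3 concrete, here is a minimal Python sketch that traces the gradient for one output weight (w5) and one hidden weight (w1). It assumes a sigmoid activation and the squared error from step 2; the inputs, target, starting weights, and learning rate are made-up placeholders:

  import math

  def sigmoid(z):
      return 1.0 / (1.0 + math.exp(-z))

  # Made-up example values (not from the text)
  x1, x2, y_actual = 0.5, 0.8, 1.0
  w1, w2, w3, w4, w5, w6 = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6
  b1, b2, b3 = 0.1, 0.1, 0.1

  # Step 1: forward pass
  h1 = sigmoid(w1 * x1 + w2 * x2 + b1)
  h2 = sigmoid(w3 * x1 + w4 * x2 + b2)
  y_pred = sigmoid(w5 * h1 + w6 * h2 + b3)

  # Step 3: chain rule for the output weight w5, link by link
  d_error_d_ypred = 2 * (y_pred - y_actual)  # from error = (y_pred - y_actual)^2
  d_ypred_d_zout = y_pred * (1 - y_pred)     # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
  d_zout_d_w5 = h1                           # z_out = w5*h1 + w6*h2 + b3
  grad_w5 = d_error_d_ypred * d_ypred_d_zout * d_zout_d_w5

  # One layer further back: the same error signal flows through w5 into h1
  d_zout_d_h1 = w5
  d_h1_d_z1 = h1 * (1 - h1)                  # sigmoid' at the hidden neuron
  d_z1_d_w1 = x1                             # z1 = w1*x1 + w2*x2 + b1
  grad_w1 = (d_error_d_ypred * d_ypred_d_zout
             * d_zout_d_h1 * d_h1_d_z1 * d_z1_d_w1)

  # Step 4: update the two weights we traced
  learning_rate = 0.5
  w5 = w5 - learning_rate * grad_w5
  w1 = w1 - learning_rate * grad_w1
  print(f"grad_w5 = {grad_w5:.4f}, grad_w1 = {grad_w1:.4f}")

Notice that grad_w1 reuses the first two chain-rule factors already computed for grad_w5; reusing these intermediate error signals while moving backwards layer by layer is what makes backpropagation efficient.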

Why Do We Do This?

  • So that the next time the same input is seen, the network makes a better prediction, because the updated weights produce a smaller error.

Visual Chart (Text Version)

  x1 ──w1──→ h1          h1 ──w5──→ y_pred
  x1 ──w3──→ h2          h2 ──w6──→ y_pred
  x2 ──w2──→ h1
  x2 ──w4──→ h2

  (b1 and b2 are added inside h1 and h2; b3 is added at the output; each neuron then applies its activation)

Analogy:

Think of it like baking a cake and someone says it’s too sweet.

  • We figure out it’s because of too much sugar (w5).
  • Then we realize that sugar came from both frosting (h1) and batter (h2).
  • Next time, we adjust sugar in both parts to fix the problem → This is backpropagation!

Backpropagation with Multiple Neurons – Example in Simple Python
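
Below is a minimal end-to-end sketch of the four steps for the 2-2-1 network above. It assumes a sigmoid activation, the squared error from step 2, and made-up input values, target, starting weights, and learning rate:

  import math

  def sigmoid(z):
      return 1.0 / (1.0 + math.exp(-z))

  def sigmoid_derivative(a):
      # Expects a = sigmoid(z), i.e. the neuron's activation
      return a * (1.0 - a)

  # Made-up training example and target (placeholders)
  x1, x2 = 0.5, 0.8
  y_actual = 1.0

  # Weights and biases, named as in the walkthrough (placeholder start values)
  w1, w2, w3, w4 = 0.1, 0.2, 0.3, 0.4
  w5, w6 = 0.5, 0.6
  b1, b2, b3 = 0.1, 0.1, 0.1
  learning_rate = 0.5

  for step in range(1001):
      # 1. Forward pass
      h1 = sigmoid(w1 * x1 + w2 * x2 + b1)
      h2 = sigmoid(w3 * x1 + w4 * x2 + b2)
      y_pred = sigmoid(w5 * h1 + w6 * h2 + b3)

      # 2. Calculate error
      error = (y_pred - y_actual) ** 2

      # 3. Backward pass: error signal at the output neuron
      delta_out = 2 * (y_pred - y_actual) * sigmoid_derivative(y_pred)

      # Gradients for the output-layer parameters
      grad_w5, grad_w6, grad_b3 = delta_out * h1, delta_out * h2, delta_out

      # Error signals pushed back through w5 and w6 into the hidden neurons
      delta_h1 = delta_out * w5 * sigmoid_derivative(h1)
      delta_h2 = delta_out * w6 * sigmoid_derivative(h2)

      # Gradients for the hidden-layer parameters
      grad_w1, grad_w2, grad_b1 = delta_h1 * x1, delta_h1 * x2, delta_h1
      grad_w3, grad_w4, grad_b2 = delta_h2 * x1, delta_h2 * x2, delta_h2

      # 4. Update weights: weight = weight - learning_rate * gradient
      w1 -= learning_rate * grad_w1
      w2 -= learning_rate * grad_w2
      w3 -= learning_rate * grad_w3
      w4 -= learning_rate * grad_w4
      w5 -= learning_rate * grad_w5
      w6 -= learning_rate * grad_w6
      b1 -= learning_rate * grad_b1
      b2 -= learning_rate * grad_b2
      b3 -= learning_rate * grad_b3

      if step % 200 == 0:
          print(f"step {step:4d}: y_pred = {y_pred:.4f}, error = {error:.6f}")

Running this sketch, each iteration performs one forward pass, one backward pass, and one weight update, and the printed error shrinks as y_pred moves toward y_actual.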