Feed-Forward Mechanism with Multiple Neurons

1. Imagine This First: A Team of Weighing Machines

Let’s say at a factory we have:

  • A conveyor belt with some fruits = Input values
  • Several smart weighing machines = Neurons
  • Each weighing machine multiplies each fruit’s weight by its own preference for it = Weights
  • Then each machine makes a decision: “Is this combo heavy enough?” using a rule = Activation Function

Now, let’s go through the steps.

2. Step-by-Step Feed-Forward (Multiple Neurons)

1. Inputs

Let’s say our input layer has 3 values:

Inputs (X): [x₁, x₂, x₃] = [0.5, 0.3, 0.2]

These represent the features of our data, like:

  • x₁: size
  • x₂: sweetness
  • x₃: ripeness
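
As a quick sketch in plain Python, the inputs are just a list of numbers (the variable name x is an illustrative choice):

```python
# Input features for one fruit: size, sweetness, ripeness
x = [0.5, 0.3, 0.2]
```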

2. Weights (per Neuron)

Assume we have 2 neurons in the next layer, each with its own weights:

Neuron 1 Weights: [w₁₁, w₁₂, w₁₃] = [0.2, 0.4, 0.6]
Neuron 2 Weights: [w₂₁, w₂₂, w₂₃] = [0.5, 0.1, 0.9]

Each weight shows how much a neuron cares about each input.
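
A natural way to hold these in Python is one weight list per neuron, giving a 2×3 layout (the name W is an illustrative choice):

```python
# W[i][j] = how much neuron i cares about input j
W = [
    [0.2, 0.4, 0.6],  # Neuron 1 weights
    [0.5, 0.1, 0.9],  # Neuron 2 weights
]
```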

3. Weighted Sum (Dot Product)

Each neuron calculates:

z₁ = x₁*w₁₁ + x₂*w₁₂ + x₃*w₁₃
  = 0.5×0.2 + 0.3×0.4 + 0.2×0.6
  = 0.1 + 0.12 + 0.12 = 0.34

z₂ = x₁*w₂₁ + x₂*w₂₂ + x₃*w₂₃
  = 0.5×0.5 + 0.3×0.1 + 0.2×0.9
  = 0.25 + 0.03 + 0.18 = 0.46
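
Here is the same arithmetic as a minimal Python sketch, written with an explicit loop so every multiplication stays visible (no libraries needed at this size):

```python
x = [0.5, 0.3, 0.2]                     # inputs: size, sweetness, ripeness
W = [[0.2, 0.4, 0.6], [0.5, 0.1, 0.9]]  # one weight list per neuron

def weighted_sum(inputs, weights):
    """Dot product of the input vector with one neuron's weights."""
    return sum(x_i * w_i for x_i, w_i in zip(inputs, weights))

z = [weighted_sum(x, row) for row in W]
print(z)  # ≈ [0.34, 0.46], up to tiny floating-point rounding
```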

4. Activation Function

Let’s apply the ReLU (Rectified Linear Unit) activation function:

ReLU(x) = max(0, x)

a₁ = ReLU(z₁) = max(0, 0.34) = 0.34
a₂ = ReLU(z₂) = max(0, 0.46) = 0.46
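
In code, ReLU is a one-liner; this minimal sketch applies it to the two weighted sums computed above:

```python
def relu(value):
    """Rectified Linear Unit: pass positives through, clip negatives to 0."""
    return max(0.0, value)

z = [0.34, 0.46]             # weighted sums from the previous step
a = [relu(z_i) for z_i in z]
print(a)                     # [0.34, 0.46] -- both sums are positive, so unchanged
```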

Final Output from These 2 Neurons:

[0.34, 0.46]

What Just Happened?

| Step | What it Means |
| --- | --- |
| Inputs | Information fed to the layer |
| Weights | Each neuron’s “opinion” on each input |
| Dot Product | Combining inputs & weights |
| Activation | Makes the output non-linear & meaningful |
| Output | Passed to the next layer or used as the final prediction |

3. Feed-Forward Mechanism (Multiple Neurons): Example with Simple Python
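
Below is one way to wire the whole forward pass together in plain Python. The names (feed_forward, relu) are illustrative choices, and the bias term that real layers usually add (z = w·x + b) is left out to match the worked example above:

```python
def relu(value):
    """Rectified Linear Unit: max(0, x)."""
    return max(0.0, value)

def feed_forward(inputs, weights):
    """Forward pass through one layer: a dot product per neuron, then ReLU.

    inputs:  list of input values, e.g. [x1, x2, x3]
    weights: one list of weights per neuron (one row per neuron)
    Biases are omitted here to match the worked example.
    """
    outputs = []
    for neuron_weights in weights:
        # Weighted sum (dot product) for this neuron
        z = sum(x * w for x, w in zip(inputs, neuron_weights))
        # Activation makes the output non-linear
        outputs.append(relu(z))
    return outputs

# The numbers from the walkthrough above
inputs = [0.5, 0.3, 0.2]   # size, sweetness, ripeness
weights = [
    [0.2, 0.4, 0.6],       # Neuron 1
    [0.5, 0.1, 0.9],       # Neuron 2
]

print(feed_forward(inputs, weights))  # ≈ [0.34, 0.46]
```

Running it prints the same [0.34, 0.46] we derived by hand, confirming that the layer’s output is ready to be passed to the next layer or used as the final prediction.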