Hidden Layer Influence
Story Setup: Predicting Mood from Weather & Sleep
Imagine we’re building a small neural network that predicts “Our Mood” (Happy or Sad) based on:
- Weather (0 = Bad, 1 = Good)
- Hours of Sleep (0 = Low, 1 = Enough)
The Network Structure
We’ll use:
- 2 Inputs: Weather and Sleep
- 1 Hidden Layer: with 2 neurons
- 1 Output Layer: with 1 neuron (gives a value between 0 and 1, representing the probability of being happy)
Input Layer:   (Weather, Sleep)
                      |
Hidden Layer:  [Neuron H1]   [Neuron H2]
                      \         /
                       \       /
                    Output Neuron
Let’s Use Real Numbers
Step 1: Inputs
Let’s say:
- Weather = 1 (Good)
- Sleep = 0 (Low)
So, our input vector is: X = [1, 0]
Step 2: Hidden Layer Processing
Each hidden neuron gets inputs from both weather and sleep. Suppose:
Hidden Neuron H1:
- Weights: W1 = [0.5 (for weather), 0.4 (for sleep)]
- Bias: b1 = -0.3
- Activation: sigmoid
z_H1 = (1 * 0.5) + (0 * 0.4) + (-0.3) = 0.2
a_H1 = sigmoid(0.2) ≈ 0.55
Hidden Neuron H2:
- Weights: W2 = [0.3, 0.7]
- Bias: b2 = -0.1
z_H2 = (1 * 0.3) + (0 * 0.7) + (-0.1) = 0.2
a_H2 = sigmoid(0.2) ≈ 0.55
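The hidden-layer arithmetic above can be sketched in a few lines of Python (using the same weights and biases as the worked example):

```python
import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1)
    return 1 / (1 + math.exp(-z))

# Inputs from the story: Weather = 1 (Good), Sleep = 0 (Low)
x = [1, 0]

# Hidden neuron H1: weights [0.5, 0.4], bias -0.3
z_h1 = x[0] * 0.5 + x[1] * 0.4 + (-0.3)   # = 0.2
a_h1 = sigmoid(z_h1)                      # ≈ 0.55

# Hidden neuron H2: weights [0.3, 0.7], bias -0.1
z_h2 = x[0] * 0.3 + x[1] * 0.7 + (-0.1)   # = 0.2
a_h2 = sigmoid(z_h2)                      # ≈ 0.55

print(round(a_h1, 2), round(a_h2, 2))     # 0.55 0.55
```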
Step 3: Output Layer
The output neuron takes a_H1 and a_H2 as inputs.
- Weights: [0.6, 0.9]
- Bias: -0.2
z_output = (0.55 * 0.6) + (0.55 * 0.9) + (-0.2) = 0.33 + 0.495 - 0.2 = 0.625
a_output = sigmoid(0.625) ≈ 0.65
So, the final output is 0.65, which we interpret as a 65% chance of being Happy.
How Hidden Layers Influence Prediction
Without a hidden layer, the network could only compute:
Mood = sigmoid(W1*Weather + W2*Sleep + b)
That is a single weighted sum passed through a sigmoid. Its decision boundary is a straight line, so it can't capture complex relationships between the inputs.
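A minimal sketch of this no-hidden-layer model (the weights and bias here are hypothetical, chosen only to show the shape of the computation):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Without a hidden layer, the whole network is one weighted sum + sigmoid.
# w1, w2, b are placeholder values, not trained parameters.
def mood_no_hidden(weather, sleep, w1=0.5, w2=0.4, b=-0.3):
    return sigmoid(w1 * weather + w2 * sleep + b)

# The decision boundary w1*weather + w2*sleep + b = 0 is a straight line,
# so no choice of w1, w2, b can carve out "curved" regions of input space.
print(round(mood_no_hidden(1, 0), 2))   # 0.55
```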
With hidden layers:
- Neurons learn intermediate “features”.
- H1 may learn “Is the day comfortable?”
- H2 may learn “Is the person well-rested?”
- These new concepts allow the network to combine inputs non-linearly.
Analogy:
Think of the hidden layer as detectives figuring out clues:
- “Is the weather good but sleep bad?” ➝ medium mood
- “Are both good?” ➝ high mood
- “Are both bad?” ➝ low mood
Hidden Layer Influence Example with Simple Python
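Putting the whole worked example together, here is the complete forward pass as a small script (same weights and biases as above; with trained weights, the outputs for the four input combinations would follow the detective pattern more sharply):

```python
import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1)
    return 1 / (1 + math.exp(-z))

def predict_mood(weather, sleep):
    # Hidden layer (weights and biases from the worked example)
    a_h1 = sigmoid(0.5 * weather + 0.4 * sleep - 0.3)
    a_h2 = sigmoid(0.3 * weather + 0.7 * sleep - 0.1)
    # Output layer combines the two hidden "features"
    return sigmoid(0.6 * a_h1 + 0.9 * a_h2 - 0.2)

# Good weather, low sleep -- the scenario from the story
p = predict_mood(weather=1, sleep=0)
print(f"Probability of Happy: {p:.2f}")   # 0.65

# The same function handles every input combination
for w, s in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(w, s, round(predict_mood(w, s), 2))
```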