Hidden Layers Example in Simple Python
1. Simple Neural Network (2-2-1-1 Architecture)
- 2 inputs: Sweetness & Moistness
- Hidden layer 1 with 2 neurons
- Hidden layer 2 with 1 neuron
- 1 output neuron (final score)
```python
# Input: Cake features
x1 = 7  # Sweetness
x2 = 6  # Moistness

# -------- Hidden Layer 1 (2 neurons) ----------
# Weights and biases
w11, w21, b1 = 0.5, 0.4, 1     # Neuron 1
w12, w22, b2 = -0.3, 0.8, 0.5  # Neuron 2

# ReLU Activation Function
def relu(x):
    return max(0, x)

# Neuron 1 Output
z1 = x1 * w11 + x2 * w21 + b1
a1 = relu(z1)

# Neuron 2 Output
z2 = x1 * w12 + x2 * w22 + b2
a2 = relu(z2)

# -------- Hidden Layer 2 (1 neuron) ----------
w31, w32, b3 = 0.6, 0.9, -1
z3 = a1 * w31 + a2 * w32 + b3
a3 = relu(z3)

# -------- Output Layer ----------
w4, b4 = 1.2, 0.3
final_score = a3 * w4 + b4

# -------- Print Output --------
print("Final Cake Score:", round(final_score, 3))
```
Output:
```
Final Cake Score: 7.524
```
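To verify the score by hand, trace the intermediate values (all derived directly from the inputs and weights above):

- Neuron 1: z1 = 7(0.5) + 6(0.4) + 1 = 6.9, so a1 = relu(6.9) = 6.9
- Neuron 2: z2 = 7(-0.3) + 6(0.8) + 0.5 = 3.2, so a2 = relu(3.2) = 3.2
- Hidden layer 2: z3 = 6.9(0.6) + 3.2(0.9) - 1 = 6.02, so a3 = 6.02
- Output: 6.02(1.2) + 0.3 = 7.524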
What This Simulates:
- Hidden Layer 1: Experts analyzing raw features.
- Hidden Layer 2: Combines those mini-judgments.
- Output Layer: Final score, like a competition result.
Each step shows how layered thinking (an ensemble-like effect) builds up, from raw cake traits to a refined evaluation.
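The same forward pass can also be written compactly with matrix operations, which is how real neural network libraries represent layers. Below is a minimal sketch assuming NumPy is available; the names W1, b1, etc. are illustrative, with each row of a weight matrix holding one neuron's incoming weights:

```python
import numpy as np

def relu(x):
    # Element-wise ReLU over a whole layer at once
    return np.maximum(0, x)

x = np.array([7.0, 6.0])  # [sweetness, moistness]

# Hidden layer 1: each row is one neuron's weights
W1 = np.array([[0.5, 0.4],
               [-0.3, 0.8]])
b1 = np.array([1.0, 0.5])

# Hidden layer 2: a single neuron combining the two activations
W2 = np.array([[0.6, 0.9]])
b2 = np.array([-1.0])

# Output layer: linear, no activation
W3 = np.array([[1.2]])
b3 = np.array([0.3])

a1 = relu(W1 @ x + b1)      # [6.9, 3.2]
a2 = relu(W2 @ a1 + b2)     # [6.02]
final_score = W3 @ a2 + b3  # [7.524]
print("Final Cake Score:", round(float(final_score[0]), 3))
```

This prints the same 7.524 as the scalar version; the only change is that each layer's neurons are computed in one matrix-vector product instead of one line per neuron.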
Relevance of Hidden Layers in Neural Networks – Visual Roadmap