Building Blocks of a Neural Network: A Simple Python Example
1. Problem Statement:
Let’s say we want our neural network to predict whether a person likes mango juice based on:
- sweetness (input 1)
- thickness (input 2)
The output should be:
- 1 = likes mango juice
- 0 = doesn’t like it (a tiny encoding sketch follows this list)
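Each person then becomes one training example: two numbers between 0 and 1 for the inputs, and a 0 or 1 label for the answer. A minimal illustration (the values and the names person and answer are just for illustration):
# One training example: [sweetness, thickness] -> answer
person = [0.9, 0.1]   # very sweet, not very thick
answer = 1            # this person likes mango juice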
Neural Network (1 Neuron)
# Simple neural network with 2 inputs and 1 output
# No libraries used
# -----------------------------
# Step 1: Define helper functions
# -----------------------------
# Sigmoid activation function
def sigmoid(x):
    # 1 / (1 + e^-x), written with a hard-coded approximation of e
    # so that no library (not even math.exp) is needed
    return 1 / (1 + (2.71828 ** -x))
# Derivative of sigmoid for learning step
def sigmoid_derivative(output):
    # If output = sigmoid(z), then the slope of the sigmoid at z is output * (1 - output)
    return output * (1 - output)
# -----------------------------
# Step 2: Initial setup
# -----------------------------
# Training data: [sweetness, thickness]
inputs = [
    [0.9, 0.1],  # likes mango juice
    [0.2, 0.8],  # doesn't like
    [0.8, 0.2],  # likes
    [0.1, 0.9]   # doesn't like
]
# Expected outputs
targets = [1, 0, 1, 0]
# Initial weights and bias (fixed starting values here; in practice these are usually small random numbers)
weight1 = 0.5
weight2 = 0.5
bias = 0.0
learning_rate = 0.1
# -----------------------------
# Step 3: Training loop
# -----------------------------
for epoch in range(20):
    print(f"\nEpoch {epoch+1}")
    for i in range(len(inputs)):
        x1, x2 = inputs[i]
        target = targets[i]
        # Step 1: Weighted sum (z)
        z = (x1 * weight1) + (x2 * weight2) + bias
        # Step 2: Activation
        output = sigmoid(z)
        # Step 3: Error
        error = target - output
        # Step 4: Adjustment using gradient descent
        d_output = error * sigmoid_derivative(output)
        # Update weights and bias
        weight1 += learning_rate * d_output * x1
        weight2 += learning_rate * d_output * x2
        bias += learning_rate * d_output
        print(f"Input: {x1:.2f}, {x2:.2f} | Target: {target} | Predicted: {output:.4f} | Error: {error:.4f}")
# -----------------------------
# Step 4: Final prediction
# -----------------------------
print("\nFinal Weights:")
print(f"Weight1: {weight1:.4f}")
print(f"Weight2: {weight2:.4f}")
print(f"Bias: {bias:.4f}")
What’s Happening in the Code:
- The neuron takes sweetness and thickness as inputs.
- It multiplies them by their weights, adds a bias, and feeds the result into a sigmoid function (a worked numeric example follows this list).
- It compares the result to the correct answer and nudges the weights and bias in the direction that reduces the error.
- Over 20 training cycles (epochs), the predictions move closer and closer to the targets.
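To make the loop concrete, here is the very first update with the starting values above (weights 0.5, bias 0.0, learning rate 0.1), with the numbers rounded:
# First training example: sweetness = 0.9, thickness = 0.1, target = 1
z = 0.9 * 0.5 + 0.1 * 0.5 + 0.0        # = 0.5
output = sigmoid(0.5)                  # ≈ 0.6225
error = 1 - 0.6225                     # ≈ 0.3775
d_output = 0.3775 * (0.6225 * 0.3775)  # ≈ 0.0887
weight1 = 0.5 + 0.1 * 0.0887 * 0.9     # ≈ 0.5080
weight2 = 0.5 + 0.1 * 0.0887 * 0.1     # ≈ 0.5009
bias = 0.0 + 0.1 * 0.0887              # ≈ 0.0089
Notice that weight1 moves more than weight2, because each weight's adjustment is scaled by its own input (0.9 versus 0.1). Over the epochs this is why the sweetness weight tends to pull ahead of the thickness weight for this data.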
2. Extending the Simulation: Adding a Hidden Layer with Multiple Neurons
What We’re Building Now: a 2-layer neural network.
Architecture:
Input Layer → Hidden Layer (2 neurons) → Output Layer (1 neuron)
- Inputs: sweetness, thickness (2 values)
- Hidden Layer: 2 neurons (each with own weights + activation)
- Output Layer: 1 neuron (takes both hidden outputs as input)
- Activation: Sigmoid
- Learning: Manual weight updates via backpropagation (output layer first, then hidden layer)
Step-by-Step Flow
- Forward pass:
  - Inputs → Hidden layer
  - Hidden outputs → Output neuron
- Backpropagation:
  - Calculate error at output
  - Propagate error back to hidden neurons
  - Update all weights & biases (the update rules are written out right after this list)
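In symbols, using the same variable names as the code below, the per-example error signals and updates are:
# Error signal at the output neuron
d_output = (target - output) * output * (1 - output)
# Error signal at hidden neuron h: the output error, scaled by the weight that
# connects neuron h to the output and by neuron h's own sigmoid slope
d_hidden = d_output * w_hidden_output[h] * hidden_out[h] * (1 - hidden_out[h])
# Every weight is then nudged by: learning_rate * (its error signal) * (its input)
In a network this small, these few lines are all that backpropagation amounts to.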
Pure Python Code (with Hidden Layer)
# 2-layer neural network: 2 input → 2 hidden → 1 output
# No external libraries
def sigmoid(x):
    return 1 / (1 + (2.71828 ** -x))
def sigmoid_derivative(output):
    return output * (1 - output)
# Training data
inputs = [
    [0.9, 0.1],  # likes mango juice
    [0.2, 0.8],  # doesn't like
    [0.8, 0.2],  # likes
    [0.1, 0.9]   # doesn't like
]
targets = [1, 0, 1, 0]
# Initialize weights and biases (fixed starting values here; in practice these are usually small random numbers)
w_input_hidden = [  # 2 hidden neurons × 2 inputs
    [0.5, -0.4],  # weights for hidden neuron 1
    [-0.3, 0.8]   # weights for hidden neuron 2
]
bias_hidden = [0.0, 0.0]
w_hidden_output = [0.3, 0.7] # weights from 2 hidden neurons to output
bias_output = 0.0
learning_rate = 0.1
# Training loop
for epoch in range(20):
    print(f"\nEpoch {epoch+1}")
    for i in range(len(inputs)):
        x1, x2 = inputs[i]
        target = targets[i]
        # --- Forward pass ---
        # Hidden neuron outputs
        hidden_raw = []
        hidden_out = []
        for h in range(2):  # 2 hidden neurons
            z = x1 * w_input_hidden[h][0] + x2 * w_input_hidden[h][1] + bias_hidden[h]
            hidden_raw.append(z)
            hidden_out.append(sigmoid(z))
        # Output neuron
        output_raw = (hidden_out[0] * w_hidden_output[0] +
                      hidden_out[1] * w_hidden_output[1] +
                      bias_output)
        output = sigmoid(output_raw)
        # --- Backward pass ---
        error = target - output
        d_output = error * sigmoid_derivative(output)
        # Error signal for each hidden neuron, computed with the current
        # hidden-to-output weights (before those weights are updated)
        d_hidden = [d_output * w_hidden_output[h] * sigmoid_derivative(hidden_out[h])
                    for h in range(2)]
        # Update weights: hidden to output
        for h in range(2):
            w_hidden_output[h] += learning_rate * d_output * hidden_out[h]
        bias_output += learning_rate * d_output
        # Update weights: input to hidden
        for h in range(2):
            w_input_hidden[h][0] += learning_rate * d_hidden[h] * x1
            w_input_hidden[h][1] += learning_rate * d_hidden[h] * x2
            bias_hidden[h] += learning_rate * d_hidden[h]
        print(f"Input: {x1:.2f}, {x2:.2f} | Target: {target} | Predicted: {output:.4f} | Error: {error:.4f}")
# Final weights
print("\nFinal Weights and Biases:")
print("Input to Hidden:")
for h in range(2):
    print(f" Hidden Neuron {h+1}: w1={w_input_hidden[h][0]:.4f}, w2={w_input_hidden[h][1]:.4f}, bias={bias_hidden[h]:.4f}")
print("\nHidden to Output:")
for h in range(2):
    print(f" w_hidden_output[{h}] = {w_hidden_output[h]:.4f}")
print(f" Output Bias = {bias_output:.4f}")
What Changed from Before?
| Concept | Single Neuron | Hidden Layer Version |
|---|---|---|
| Structure | Input → Output | Input → Hidden Layer → Output |
| Weights | 2 inputs → 1 output | 2 inputs → 2 hidden → 1 output |
| Forward Pass | Simple sum + activation | Two-level activation |
| Learning | 1-step update | Backpropagation from output → hidden |
Building Blocks of a Neural Network (Primary Concepts) – Basic Math Concepts
