Parameters vs Hyperparameters with Simple Python

We’ll build the smallest possible network, a single neuron computing y = w*x + b, and train it with gradient descent to make the distinction concrete.

import random

# === Hyperparameters ===
learning_rate = 0.1
epochs = 10

# === Parameters (initial weights and bias) ===
w = random.uniform(-1, 1)  # weight
b = random.uniform(-1, 1)  # bias

# Training Data: y = 2*x + 1
data = [(1, 3), (2, 5), (3, 7), (4, 9)]

# === Training the model ===
for epoch in range(epochs):
    total_loss = 0
    for x, y_true in data:
        # --- Forward Pass ---
        y_pred = w * x + b
        
        # --- Loss (squared error; total_loss sums it over the epoch) ---
        error = y_pred - y_true
        loss = error ** 2
        total_loss += loss

        # --- Backward Pass (Manual Gradient Descent) ---
        dw = 2 * error * x  # dLoss/dw: derivative of (w*x + b - y)^2 w.r.t. w
        db = 2 * error      # dLoss/db: derivative w.r.t. b
        
        # --- Update Parameters ---
        w = w - learning_rate * dw
        b = b - learning_rate * db

    print(f"Epoch {epoch+1}: Loss={total_loss:.4f}, w={w:.4f}, b={b:.4f}")

Observations and Impact

Parameters (w, b):

  • These are updated by a gradient step on every training example, so they change several times within each epoch (the loop above is stochastic gradient descent).
  • They directly determine the model’s predictions, and therefore its accuracy.
  • After training, we freeze these values and use them for inference (see the sketch below).
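
As a minimal sketch of that last point, prediction after training is just the forward pass with the learned values (run this after the training loop above; the inputs 5 and 6 are illustrative, not part of the original data):

# === Inference with the trained parameters ===
for x_new in [5, 6]:                  # new inputs the model never saw
    y_hat = w * x_new + b             # forward pass only: no loss, no gradient, no update
    print(f"x={x_new} -> predicted y={y_hat:.2f}, true y={2 * x_new + 1}")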

Hyperparameters (learning_rate, epochs):

  • They are fixed before training and never touched by the training loop. They decide:
    • How large each gradient step is, i.e. how fast/slow we move towards the best weights (learning_rate)
    • How many passes we make over the data (epochs)
  • Poor choices can lead to (the sketch after this list shows the last case):
    • Underfitting (too few epochs, or a learning rate too small to make progress)
    • Overfitting (too many epochs on limited data, so the model memorizes instead of generalizing)
    • Divergence (a learning rate so high that the loss grows instead of shrinking)
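
To make the divergence case concrete, here is a minimal sketch that reruns the same training loop at three learning rates; the helper name train, the fixed seed, and the rates 0.01/0.1/0.5 are illustrative choices, not part of the original example:

import random

def train(learning_rate, epochs=10, seed=0):
    """Rerun the loop above and return the final epoch's total loss."""
    rng = random.Random(seed)              # fixed seed so the runs are comparable
    w = rng.uniform(-1, 1)
    b = rng.uniform(-1, 1)
    data = [(1, 3), (2, 5), (3, 7), (4, 9)]
    for _ in range(epochs):
        total_loss = 0
        for x, y_true in data:
            error = (w * x + b) - y_true   # forward pass + error
            total_loss += error ** 2
            w -= learning_rate * 2 * error * x
            b -= learning_rate * 2 * error
    return total_loss

for lr in (0.01, 0.1, 0.5):
    print(f"learning_rate={lr}: final total loss = {train(lr):.4f}")

With this data, the two smaller rates should drive the loss down, while 0.5 should make it grow explosively from epoch to epoch: divergence, caused by changing a hyperparameter without touching the model’s code at all.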

Parameters vs Hyperparameters in Neural Networks – Summary