Minimizing an Objective: An Example in Simple Python

Problem:

We want the model to learn the relationship y = 2 * x. It starts from an arbitrary initial weight and, through repeated small updates, adjusts that weight to minimize the squared error.
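
To state the objective precisely: for a single training pair (x, y) the model predicts y_pred = w * x, and the loss is the squared error. Differentiating with the chain rule gives the gradient that Step 3 of the code below uses:

    L(w) = (w * x - y)^2
    dL/dw = 2 * (w * x - y) * x

Gradient descent then moves the weight a small step against this gradient, which is exactly Step 4: w = w - lr * dL/dw.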

Python Simulation: Gradient Descent Minimization

# Simulate: Learning y = 2 * x

# Training data
x_data = [1, 2, 3, 4, 5]
y_true = [2 * x for x in x_data]

# Initial weight (an arbitrary starting guess)
w = 0.5

# Learning rate (small steps)
lr = 0.01

# Training loop: the weight is updated after every sample (stochastic gradient descent)
for epoch in range(50):
    total_loss = 0
    for x, y in zip(x_data, y_true):
        # Step 1: Forward pass (prediction)
        y_pred = w * x

        # Step 2: Compute loss (squared error for this sample)
        loss = (y_pred - y) ** 2
        total_loss += loss

        # Step 3: Compute gradient of the loss w.r.t. the weight
        grad = 2 * (y_pred - y) * x

        # Step 4: Update weight to minimize loss
        w = w - lr * grad

    print(f"Epoch {epoch+1}: Weight = {w:.4f}, Loss = {total_loss:.4f}")

What Happens?

In each pass over the training data (one epoch):

  • The model makes a guess using the current weight.
  • It compares the guess with the true value (via squared error).
  • It calculates how to correct the weight.
  • It updates the weight slightly in the right direction.
  • The loss keeps decreasing: that’s minimization! (The very first update is traced, step by step, just below.)
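
To see these steps with concrete numbers, trace the very first update (epoch 1, x = 1, starting from w = 0.5 and lr = 0.01):

    y_pred = 0.5 * 1 = 0.5
    loss   = (0.5 - 2)^2 = 2.25
    grad   = 2 * (0.5 - 2) * 1 = -3.0
    w      = 0.5 - 0.01 * (-3.0) = 0.53

The gradient is negative, so the update nudges w upward, toward the true value 2.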

Output (sample):

Epoch 1: Weight = 1.6230, Loss = 61.2480
Epoch 2: Weight = 1.9052, Loss = 3.8700
Epoch 3: Weight = 1.9762, Loss = 0.2445
...
Epoch 50: Weight = 2.0000, Loss = 0.0000

See how the weight converges to 2 (the correct multiplier) within just a few epochs? And the loss shrinks toward zero: mission accomplished!
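
One design note: the loop above updates w after every individual sample, which is stochastic gradient descent (SGD). A common alternative is batch gradient descent, which averages the gradient over the whole dataset and makes a single update per epoch. Here is a minimal sketch of that variant (an illustration under the same setup, not part of the original post):

# Batch gradient descent variant: one averaged update per epoch
x_data = [1, 2, 3, 4, 5]
y_true = [2 * x for x in x_data]
w = 0.5
lr = 0.01

for epoch in range(50):
    total_loss = 0
    grad_sum = 0
    for x, y in zip(x_data, y_true):
        y_pred = w * x
        total_loss += (y_pred - y) ** 2
        grad_sum += 2 * (y_pred - y) * x
    # Single update using the mean gradient over all samples
    w = w - lr * grad_sum / len(x_data)
    print(f"Epoch {epoch+1}: Weight = {w:.4f}, Loss = {total_loss:.4f}")

Batch updates are smoother, but with this tiny dataset and learning rate they close the gap to w = 2 more slowly per epoch than the per-sample version above.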

Minimizing the Objective in a Neural Network – Basic Math Concepts