Learning Algorithms: An Example in Simple Python

1. Goal: Learn a rule like score = m × study_hours + c

We’ll try to learn the best values for m (slope) and c (intercept) by minimizing the error.
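
Here, "error" means the mean squared error (MSE) between predicted and actual scores. A minimal sketch of that loss (the helper name mse is our own, not part of the walkthrough below):

def mse(m, c, data):
    """Mean squared error of score = m * x + c over [study_hours, actual_score] pairs."""
    return sum((y - (m * x + c)) ** 2 for x, y in data) / len(data)

print(mse(0, 0, [[2, 35], [3, 50], [4, 60], [5, 70], [6, 80]]))  # 3725.0 for the initial guess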

Python Code – Learning Exam Scores from Study Hours

# Sample data: [study_hours, actual_score]
data = [
    [2, 35],
    [3, 50],
    [4, 60],
    [5, 70],
    [6, 80]
]

# Initialize slope (m) and intercept (c)
m = 0
c = 0

# Learning rate - controls how fast we learn
learning_rate = 0.01

# Number of iterations
epochs = 1000

# Total number of data points
n = len(data)

# Training loop
for epoch in range(epochs):
    total_error_m = 0
    total_error_c = 0
    
    for x, y in data:
        # Predicted score
        y_pred = m * x + c
        
        # Error
        error = y - y_pred
        
        # Partial derivatives (gradient) of the squared error:
        # d/dm (y - y_pred)^2 = -2 * x * (y - y_pred)
        # d/dc (y - y_pred)^2 = -2 * (y - y_pred)
        total_error_m += -2 * x * error
        total_error_c += -2 * error
    
    # Update rule
    m -= learning_rate * (total_error_m / n)
    c -= learning_rate * (total_error_c / n)

# Final learned model
print("Learned formula: score = {:.2f} * hours + {:.2f}".format(m, c))

# Try predicting new score
new_hours = 7
predicted_score = m * new_hours + c
print("If a student studies for {} hours, predicted score is {:.2f}".format(new_hours, predicted_score))

What’s Happening?

We’re guessing values of m and c. Then we check how wrong the guess is (the difference between the actual and predicted score). We adjust m and c slowly to reduce the error. After many loops (epochs), we get a good prediction formula.
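
To see the update rule in action, the very first epoch (with m = c = 0) can be traced by hand; this sketch reproduces one step of the loop above:

data = [[2, 35], [3, 50], [4, 60], [5, 70], [6, 80]]
n = len(data)
# With m = c = 0, every prediction is 0, so error = y for each point
grad_m = sum(-2 * x * y for x, y in data) / n   # -2 * 1290 / 5 = -516.0
grad_c = sum(-2 * y for x, y in data) / n       # -2 * 295 / 5  = -118.0
m = 0 - 0.01 * grad_m   # 5.16, a first step toward the true slope
c = 0 - 0.01 * grad_c   # 1.18, a first step toward the true intercept

Each later epoch makes a smaller correction as the error shrinks.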

Example Output (approximate; exact values depend on the learning rate and number of epochs):

Learned formula: score = 10.12 * hours + 14.76
If a student studies for 7 hours, predicted score is 85.60

This means: roughly 10 marks per hour of study, starting from a base of ~15 marks.
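
As a sanity check (not part of the walkthrough above), the exact least-squares fit for this five-point dataset can also be computed in closed form; gradient descent is gradually approaching these values:

xs = [2, 3, 4, 5, 6]
ys = [35, 50, 60, 70, 80]
x_mean = sum(xs) / len(xs)   # 4.0
y_mean = sum(ys) / len(ys)   # 59.0
m_exact = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / sum((x - x_mean) ** 2 for x in xs)   # 110 / 10 = 11.0
c_exact = y_mean - m_exact * x_mean   # 59 - 11 * 4 = 15.0
print(f"Exact fit: score = {m_exact:.2f} * hours + {c_exact:.2f}")

Running the training loop longer (or with a slightly larger learning rate) moves the learned m and c closer to this exact fit.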

Python Template Skeleton for a Learning Algorithm


# 1. Input Data (Features) and Labels (Target)
X = [...]  # List of input values (e.g., hours studied)
Y = [...]  # Corresponding output (e.g., exam scores)

# 2. Initialize Model Parameters (e.g., slope and intercept for linear model)
m = 0  # Slope
c = 0  # Intercept

# 3. Set Learning Parameters
learning_rate = 0.01
epochs = 1000

# 4. Learning Loop (Optimization via Gradient Descent)
n = len(X)  # Total number of data points

for epoch in range(epochs):
    total_error_m = 0
    total_error_c = 0

    for x, y in zip(X, Y):

        # 5. Predict using the current model
        y_pred = m * x + c

        # 6. Calculate Error (the residual; its square is the loss being minimized)
        error = y - y_pred

        # 7. Compute Gradients (How to adjust m and c)
        total_error_m += -2 * x * error
        total_error_c += -2 * error

    # 8. Update Model Parameters
    m -= learning_rate * (total_error_m / n)
    c -= learning_rate * (total_error_c / n)

# 9. Final Model Output
print(f"Learned model: score = {m:.2f} * hours + {c:.2f}")

Simple Visual Infographic: How a Learning Algorithm Works

[INPUT DATA]
      |
      v
[MODEL: Start with a guess]
      |
      v
[PREDICT OUTPUT using current model]
      |
      v
[COMPARE to real output (loss function)]
      |
      v
[ADJUST MODEL using learning rule]
      |
      v
[REPEAT over epochs]
      |
      v
[FINAL MODEL: Ready to Predict!]

How learning algorithms work – Summary