Summary – How Learning Algorithms Work

1. Core Building Blocks of a Learning Algorithm

1. Input (Features)

What the algorithm is going to learn from.

Example: Study hours, room size, age, income, etc.

Represented as numbers, often in a list or vector.

We give it data.
Think: “What are the things I already know?”
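
As a tiny Python sketch (the feature names and values are invented for illustration), one example's inputs can be stored as a vector of numbers:

```python
# One example, represented as a vector of numbers.
# Fixed order: [study_hours, room_size_m2, age, income]
x = [4.5, 20.0, 23.0, 32000.0]
```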

2. Output (Target or Label)

What the algorithm is trying to predict or understand.

Example: Exam score, house price, yes/no, etc.

We tell it what answer to expect.
Think: “What do I want it to learn to guess?”
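
Continuing the Python sketch (all numbers invented): inputs and targets are stored side by side, one expected answer per example:

```python
X = [1.0, 2.0, 3.0, 4.0]      # inputs: hours studied
y = [52.0, 61.0, 70.0, 79.0]  # targets: the exam scores we want it to learn to guess
```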

3. Model (Rule or Formula)

The “guessing machine” — often a math equation like:

Output = m·x + c

In more advanced models, this could be a tree, a neural network, etc. But it always connects input to output via some rule.
This is what the algorithm learns.
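
A minimal Python sketch of this "guessing machine" for the straight-line rule above (the values 9 and 43 are invented for illustration):

```python
def predict(x, m, c):
    # The rule connecting input to output: a straight line y = m*x + c.
    return m * x + c

# With m = 9 and c = 43 (invented numbers), 2 hours of study predicts 61.
print(predict(2.0, 9.0, 43.0))  # 61.0
```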

4. Loss Function (Error Measurement)

A way to check how wrong the guess is. A common example is Mean Squared Error (MSE):

Error = (y_actual − y_predicted)^2

This helps the algorithm know if it’s doing a good or bad job.
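
As a hedged Python sketch: MSE takes this squared error for every prediction and averages it (the numbers here are invented for illustration):

```python
def mse(y_actual, y_predicted):
    # Mean Squared Error: the average of the squared differences.
    squared_errors = [(a - p) ** 2 for a, p in zip(y_actual, y_predicted)]
    return sum(squared_errors) / len(squared_errors)

# One guess is 6 points off (36 squared), the other is exact: MSE = 18.0.
print(mse([61.0, 70.0], [55.0, 70.0]))
```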

5. Learning Rule (Optimization)

The way to fix mistakes and improve the model. The most basic is Gradient Descent:

Look at the error. Adjust the formula slightly (e.g., change m and c). Repeat.
It keeps minimizing the error step by step (see the sketch below).
This is how the algorithm learns over time.
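
A minimal sketch of one gradient-descent step for the line y = m·x + c under MSE; the learning rate lr (how big each adjustment is) is an assumed knob, not something named above:

```python
def gradient_step(X, y, m, c, lr=0.01):
    # Nudge m and c slightly in the direction that shrinks the MSE.
    n = len(X)
    # Partial derivatives of the MSE with respect to m and c.
    dm = sum(2 * (m * x + c - t) * x for x, t in zip(X, y)) / n
    dc = sum(2 * (m * x + c - t) for x, t in zip(X, y)) / n
    return m - lr * dm, c - lr * dc
```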

6. Iterations / Epochs

The number of learning rounds the model goes through. Each round, it improves the rule.

More rounds = better learning (but not always: too many rounds can overfit the data!)

This is like practice.
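
Putting the pieces together, a training loop over epochs might look like this sketch (it reuses the toy X, y, and gradient_step from above; the epoch count of 5000 is arbitrary):

```python
m, c = 0.0, 0.0                  # start with a bad rule
for epoch in range(5000):        # each pass over the data = one round of practice
    m, c = gradient_step(X, y, m, c)

print(m, c)  # drifts toward roughly m = 9, c = 43 on the toy data
```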

How learning algorithms work – Visual Roadmap