Tensor Examples with Simple Python
1. Step-by-step Python Example (Tensor Intuition)
1. Scalar (0D Tensor) — Just a single number
```python
scalar = 5
print("Scalar:", scalar)
```
2. Vector (1D Tensor) — A list of numbers
```python
vector = [1, 2, 3, 4]
print("Vector:", vector)
```
3. Matrix (2D Tensor) — A table of numbers
```python
matrix = [
    [1, 2, 3],
    [4, 5, 6]
]
print("Matrix:")
for row in matrix:
    print(row)
```
4. 3D Tensor — A list of matrices (like pages in a book)
```python
tensor3D = [
    [  # Page 1
        [1, 2],
        [3, 4]
    ],
    [  # Page 2
        [5, 6],
        [7, 8]
    ]
]
print("3D Tensor:")
for page in tensor3D:
    print("Page:")
    for row in page:
        print(row)
```
Interpretation:
- scalar → just one number (0D)
- vector → line of numbers (1D)
- matrix → grid of numbers (2D)
- tensor3D → stacked grids (3D)
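If NumPy happens to be installed, you can verify these dimensions directly: `ndim` counts the axes and `shape` gives the size along each. This sketch is optional; everything above uses only plain lists.

```python
import numpy as np

print(np.array(5).ndim)                        # 0 -> scalar
print(np.array([1, 2, 3, 4]).ndim)             # 1 -> vector
print(np.array([[1, 2, 3], [4, 5, 6]]).shape)  # (2, 3) -> matrix
print(np.array([[[1, 2], [3, 4]],
                [[5, 6], [7, 8]]]).shape)      # (2, 2, 2) -> 3D tensor
```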
2. How a neural network layer transforms tensors using simple multiplication and addition
Let’s simulate a “Tiny Neural Layer”
Imagine we have:
- Input Vector (1D tensor): values coming from the previous layer or the raw data
- Weights (1D tensor): numbers the neural network learns during training
- Bias (1 number): a small nudge added to the result
Formula Used by a Neuron: Output = (Input × Weights) + Bias, where Input × Weights is the weighted sum input₁ × weight₁ + input₂ × weight₂ + …
We’ll simulate this with just Python lists.
Example: One Neuron, 3 Inputs
```python
# Input data: e.g., [height, weight, age]
inputs = [2, 3, 1]

# Weights learned by the neuron
weights = [0.5, -1.2, 1.0]

# Bias value
bias = 2.0

# Compute the weighted sum plus bias
def neuron_output(inputs, weights, bias):
    output = 0
    for i in range(len(inputs)):
        output += inputs[i] * weights[i]
    output += bias
    return output

# Calculate the result
result = neuron_output(inputs, weights, bias)
print("Neuron Output:", result)
```
What’s Happening:
| Input | Weight | Input × Weight |
|-------|--------|----------------|
| 2     | 0.5    | 1.0            |
| 3     | -1.2   | -3.6           |
| 1     | 1.0    | 1.0            |
| Weighted sum |  | -1.6          |
| Bias  |        | +2.0           |
| Total |        | = 0.4          |
This is what a neural network layer does:
- Takes inputs (as a tensor)
- Multiplies by weights
- Adds bias
- Passes the result to the next layer (optionally through an activation function)
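The same weighted sum can be written more compactly with Python's built-in `zip` and `sum`. This is purely a stylistic variant of `neuron_output` above, not a different computation:

```python
# Compact version of the weighted-sum-plus-bias computation
def neuron_output_compact(inputs, weights, bias):
    return sum(i * w for i, w in zip(inputs, weights)) + bias

print(neuron_output_compact([2, 3, 1], [0.5, -1.2, 1.0], 2.0))  # ~0.4
```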
3. A mini neural network layer with multiple neurons
Part 1: Multiple Neurons in a Layer
Think of this setup:
- Input → [2, 3, 1]
- We have 3 neurons in the layer.
- Each neuron has its own set of weights and bias.
Let’s define the structure:
```python
# Input values
inputs = [2, 3, 1]

# Each sub-list holds the weights for one neuron
weights = [
    [0.5, -1.2, 1.0],   # Neuron 1
    [-1.0, 2.0, -0.5],  # Neuron 2
    [-2.0, 1.2, -0.5]   # Neuron 3
]

# One bias per neuron
biases = [2.0, 0.5, -1.0]

# Output of each neuron in the layer
def layer_output(inputs, weights, biases):
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        output = 0
        for i in range(len(inputs)):
            output += inputs[i] * neuron_weights[i]
        output += bias
        outputs.append(output)
    return outputs

# Compute layer output
raw_outputs = layer_output(inputs, weights, biases)
print("Raw outputs before activation:", raw_outputs)
```
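As an aside, if NumPy is available, the whole layer collapses into one matrix-vector product. A minimal sketch, reusing the `inputs`, `weights`, and `biases` defined above (NumPy is not required for the list version):

```python
import numpy as np

W = np.array(weights)   # shape (3, 3): one row of weights per neuron
x = np.array(inputs)    # shape (3,)
b = np.array(biases)    # shape (3,)

raw = W @ x + b         # matrix-vector product plus bias, shape (3,)
print("Raw outputs (NumPy):", raw)
```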
Part 2: Add Activation Function
Let’s apply a simple one: ReLU (Rectified Linear Unit). Its rule:
- If the output is negative, make it 0
- Otherwise, keep it as-is
```python
# ReLU activation function
def relu(x):
    return x if x > 0 else 0

# Apply ReLU to each neuron's output
activated_outputs = [relu(output) for output in raw_outputs]
print("Activated outputs:", activated_outputs)
```
Final Output:
Now you can see how the raw outputs may contain negative values, but after activation they are all non-negative (values shown rounded; plain Python floats print with a few more digits):
Example Output:
Raw outputs before activation: [0.4, 4.0, -1.9]
Activated outputs: [0.4, 4.0, 0]
Why Are Activation Functions Needed?
Without activation:
The network becomes just a giant linear calculator: no matter how many layers you add, the output is still a linear function of the input.
That means no curves, no twists, no smart decisions. The short sketch below demonstrates this collapse.
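Here is a minimal sketch of that collapse, using made-up 2×2 weights chosen only for illustration: two stacked linear layers compute exactly the same function as a single pre-multiplied linear layer.

```python
def linear(inputs, weights, biases):
    # One linear layer: weighted sum of inputs plus bias, per neuron
    return [sum(i * w for i, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]

x = [1.0, 2.0]
W1, b1 = [[1.0, 2.0], [3.0, 4.0]], [0.5, -0.5]   # made-up layer 1
W2, b2 = [[2.0, 0.0], [1.0, 1.0]], [1.0, 0.0]    # made-up layer 2

# Run the input through both layers with no activation in between
two_layers = linear(linear(x, W1, b1), W2, b2)

# Build the single equivalent layer: W = W2·W1 and b = W2·b1 + b2
W = [[sum(W2[i][k] * W1[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
b = [sum(W2[i][k] * b1[k] for k in range(2)) + b2[i] for i in range(2)]
one_layer = linear(x, W, b)

print("Two layers:", two_layers)  # [12.0, 16.0]
print("One layer: ", one_layer)   # [12.0, 16.0] -> identical function
```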
Real-life Analogy:
Imagine you’re designing a smart light switch:
- Input: sunlight, motion, time of day
- Without activation: the light responds in a strictly linear way, like “add 10% brightness for every 10% drop in sunlight.”
But the world isn’t linear!
- At night, even full motion shouldn’t turn on lights if it’s bedtime
- During bright daylight, motion shouldn’t matter at all
So, we need rules with “if-else” behavior, like: “Only turn on if it’s dark AND there’s motion.” That’s like an activation function deciding what fires and what doesn’t; a tiny code sketch of this rule follows.
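As a toy sketch, with thresholds invented purely for illustration, that rule might look like this:

```python
def light_switch(sunlight, motion):
    is_dark = sunlight < 0.2                   # hypothetical darkness threshold
    return 1 if (is_dark and motion) else 0    # fire only if dark AND motion

print(light_switch(sunlight=0.05, motion=True))  # 1 -> light turns on
print(light_switch(sunlight=0.90, motion=True))  # 0 -> bright day, motion ignored
```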
What is a Tensor (Primary Concepts) – Visual Roadmap