Weights and Biases Relevancy in Neural Networks
1. What Are Weights and Biases?
- Weights (W): Multipliers attached to each input feature. Each connection between neurons has a weight that determines how much importance to give to the corresponding input. Think of it as how much attention you pay to someone’s words in a group conversation.
- Bias (b): A shift added to the weighted sum before the activation function is applied. It lets the network adjust its output independently of the inputs, adding flexibility. Think of it as a personal opinion that tilts your decision even before hearing anyone.
Why Are They Important?
| Feature | Importance |
|---|---|
| Weights | Control the strength of connections between neurons. They are learned during training to minimize the error. |
| Biases | Allow the model to fit the data more accurately by shifting the activation threshold. Without a bias, the network is very limited (like a line in linear regression that must pass through the origin). |
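To see what the bias buys us, here is a minimal sketch in plain Python/NumPy (the numbers are made up for illustration): without a bias, a single linear unit can only draw a line through the origin, while adding b shifts that line to wherever the data needs it.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])  # example inputs (illustrative values)
w = 0.5                              # a single weight

without_bias = w * x                 # y = w*x, forced through the origin
with_bias = w * x + (-1.0)           # bias b = -1.0 shifts the whole line down

print(without_bias)  # [0.  0.5 1.  1.5]
print(with_bias)     # [-1.  -0.5  0.   0.5]
```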
Real-Life Analogy
Let’s say we’re predicting whether someone will buy a product based on:
- Price
- Brand popularity
- Friend recommendation
Our model might start with equal weights (i.e., all factors equally important), but after training it might learn:
- Weight for Price = 0.8 (very important)
- Weight for Brand = 0.3 (somewhat important)
- Weight for Friend = 1.2 (very important)
The bias might be –1.5, meaning you’d need a minimum push before your decision leans toward “Buy”.
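To make the analogy concrete, here is a minimal sketch that plugs these numbers into a single sigmoid neuron. The input values (how favorable each factor looks, scaled to 0–1) are hypothetical and chosen purely for illustration:

```python
import math

# Learned parameters from the analogy above
w_price, w_brand, w_friend = 0.8, 0.3, 1.2
b = -1.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical inputs, scaled 0..1 (higher = more favorable for buying)
price, brand, friend = 0.9, 0.5, 1.0

z = w_price * price + w_brand * brand + w_friend * friend + b
probability_of_buying = sigmoid(z)

print(f"z = {z:.2f}, P(buy) = {probability_of_buying:.2f}")
# z = 0.57, P(buy) = 0.64 -> leans toward "Buy"
```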
In Math Terms
For a neuron:
Output = Activation(W1 × x1 + W2 × x2 + … + Wn × xn + b)
Where:
- W1, W2, …Wn are weights
- x1, x2, …xn are input features
- b is bias
- Activation is a function like sigmoid or ReLU
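In code, this formula is just a dot product plus a scalar. Here is a minimal sketch, assuming ReLU as the activation and made-up weights and inputs:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def neuron_output(x, w, b, activation=relu):
    """Single-neuron forward pass: Activation(W1*x1 + ... + Wn*xn + b)."""
    return activation(np.dot(w, x) + b)

# Illustrative values
x = np.array([1.0, 2.0, 3.0])   # input features x1..x3
w = np.array([0.4, -0.2, 0.1])  # weights W1..W3
b = 0.05                        # bias

print(neuron_output(x, w, b))   # 0.35
```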
Why They’re Crucial in Training
- During training (via backpropagation), the neural network adjusts the weights and biases to reduce the prediction error.
- Without these, the network cannot learn or adapt to different data patterns.
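As a minimal sketch of that idea, the snippet below trains a single linear neuron with plain gradient descent on a toy dataset (the target relationship, learning rate, and step count are all illustrative). It is not full backpropagation through a deep network, but it shows how the error signal nudges both the weight and the bias step by step:

```python
import numpy as np

# Toy data following y = 2*x + 1 (illustrative)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

w, b = 0.0, 0.0          # start with no knowledge
learning_rate = 0.05

for step in range(500):
    y_pred = w * x + b                   # forward pass
    error = y_pred - y                   # prediction error
    grad_w = 2 * np.mean(error * x)      # d(MSE)/dw
    grad_b = 2 * np.mean(error)          # d(MSE)/db
    w -= learning_rate * grad_w          # adjust weight to reduce error
    b -= learning_rate * grad_b          # adjust bias to reduce error

print(round(w, 2), round(b, 2))          # close to 2.0 and 1.0
```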
Weights & Biases Relevancy: Example with Simple Python