Feedforward Neural Network Basic Concepts
1. What is a Feedforward Neural Network?
A Feedforward Neural Network (FNN) is the simplest type of artificial neural network. Information moves in one direction—from the input layer to the output layer—without looping back (no cycles or feedback).
Think of it like a human answering a question: input → process → output.
2. Structure of FNN
Layers:
- Input Layer: Receives raw data (e.g., pixels of an image, features of a product).
- Hidden Layers (1 or more): Internal layers that learn representations via weighted connections and nonlinear activations.
- Output Layer: Produces the final result (e.g., classification, regression value).
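As a sketch of this layered structure, the weight matrices and bias vectors that connect consecutive layers can be set up as follows (the layer sizes here are illustrative assumptions, not from the text):

```python
import numpy as np

# Hypothetical shape: 4 input features, one hidden layer of 5 neurons, 3 outputs
layer_sizes = [4, 5, 3]

rng = np.random.default_rng(0)
# Each layer owns a weight matrix and a bias vector connecting it to the previous layer
weights = [rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

for i, (W, b) in enumerate(zip(weights, biases), start=1):
    print(f"Layer {i}: weights {W.shape}, bias {b.shape}")
```

Note that the number of weight matrices is one less than the number of layers: weights live on the connections between layers, not on the layers themselves.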
3. Forward Propagation
This is how data flows:
- Multiply inputs with weights
- Add a bias
- Apply an activation function (e.g., Sigmoid, ReLU, Tanh)
Mathematically for one neuron:
z = w1x1 + w2x2 + … + wnxn + b
a = activation(z)
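These two equations for a single neuron can be sketched directly in Python (the input, weight, and bias values below are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values: two inputs, two weights, one bias
x = np.array([0.5, -1.0])
w = np.array([0.8, 0.2])
b = 0.1

z = np.dot(w, x) + b   # z = w1*x1 + w2*x2 + b
a = sigmoid(z)         # a = activation(z)
print(z, a)
```

Here `np.dot(w, x)` computes the weighted sum in one call, which scales naturally to any number of inputs.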
4. Activation Functions
They add non-linearity, allowing networks to learn complex patterns:
| Name | Formula | Use case |
|---|---|---|
| Sigmoid | 1 / (1 + e^(-x)) | Probabilities (0 to 1) |
| ReLU | max(0, x) | Fast convergence, deep networks |
| Tanh | tanh(x) | Outputs between -1 and 1 |
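All three activations are one-liners with numpy; the sample inputs below simply illustrate each function's output range:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Passes positives through, clips negatives to 0
    return np.maximum(0.0, x)

def tanh(x):
    # Squashes any real number into (-1, 1)
    return np.tanh(x)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))  # values in (0, 1)
print(relu(x))     # negatives become 0
print(tanh(x))     # values in (-1, 1)
```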
5. Output Layer (with Activation)
Depends on task:
- Classification: Softmax/Sigmoid
- Regression: Linear output
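For multi-class classification, Softmax turns the raw output-layer values into a probability distribution. A minimal sketch (the logit values are illustrative assumptions):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; result sums to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # illustrative raw output-layer values
probs = softmax(logits)
print(probs, probs.sum())
```

The largest logit gets the largest probability, and the probabilities always sum to 1, which is what makes Softmax a natural fit for classification.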
6. Loss Function
Measures how far off predictions are from true labels.
Examples:
- MSE (Mean Squared Error) for regression
- Cross-Entropy for classification
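Both losses are short to implement; the sample targets and predictions below are illustrative assumptions:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error: average squared difference
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(y_true, y_pred, eps=1e-12):
    # y_true is one-hot; clip predictions to avoid log(0)
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.sum(y_true * np.log(y_pred))

print(mse(np.array([1.0, 2.0]), np.array([1.5, 1.5])))                 # regression
print(cross_entropy(np.array([0, 1, 0]), np.array([0.2, 0.7, 0.1])))   # classification
```

With a one-hot target, cross-entropy reduces to the negative log of the probability assigned to the correct class, so confident correct predictions give a loss near 0.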
7. Backpropagation & Learning
Although not part of “feedforward” directly, learning involves:
- Calculating loss
- Computing gradients using backpropagation
- Updating weights using Gradient Descent
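The three steps above can be sketched for a single sigmoid neuron with a squared-error loss (the inputs, weights, target, and learning rate are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values
x = np.array([0.5, -1.0])
w = np.array([0.8, 0.2])
b, y_true, lr = 0.1, 1.0, 0.5

# 1. Forward pass and loss (squared error)
a = sigmoid(np.dot(w, x) + b)
loss = (a - y_true) ** 2

# 2. Gradients via the chain rule (backpropagation)
delta = 2 * (a - y_true) * a * (1 - a)  # dLoss/dz
grad_w = delta * x                      # dLoss/dw
grad_b = delta                          # dLoss/db

# 3. Gradient-descent weight update
w = w - lr * grad_w
b = b - lr * grad_b

# The loss after one update is smaller than before
new_loss = (sigmoid(np.dot(w, x) + b) - y_true) ** 2
print(loss, new_loss)
```

One update step moves the weights a small distance against the gradient, which is exactly why the loss shrinks.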
8. Feedforward Neural Network Example in Simple Python
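Putting all the pieces together, here is a minimal sketch of a complete feedforward network trained with backpropagation on the XOR problem (the dataset, layer sizes, learning rate, epoch count, and random seed are illustrative choices, not from the text):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR dataset: 2 inputs -> 1 output (illustrative choice)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Network shape: 2 inputs, 4 hidden neurons, 1 output
rng = np.random.default_rng(42)
W1, b1 = rng.standard_normal((2, 4)), np.zeros(4)
W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)

def forward(X, W1, b1, W2, b2):
    h = sigmoid(X @ W1 + b1)    # hidden layer
    out = sigmoid(h @ W2 + b2)  # output layer
    return h, out

_, out = forward(X, W1, b1, W2, b2)
initial_loss = np.mean((out - y) ** 2)

lr = 1.0
for epoch in range(5000):
    # Forward pass
    h, out = forward(X, W1, b1, W2, b2)

    # Backpropagation of the squared-error loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

_, out = forward(X, W1, b1, W2, b2)
final_loss = np.mean((out - y) ** 2)
print(np.round(out.ravel(), 2))  # typically close to [0, 1, 1, 0]
```

XOR is a classic test case because no single-layer network can solve it; the hidden layer's nonlinear activations are what make it learnable, which is the whole point of the hidden layers described above.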