Feed Forward Mechanism with Simple Python

We’ll simulate a tiny neural network with the following structure (the forward-pass equations are sketched right after this list):

  • 3 input features (e.g., Neatness, Correctness, Creativity)
  • 1 hidden layer with 2 neurons
  • 1 output neuron
  • Activation function: ReLU (just to keep it simple)
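
In equation form, each hidden neuron takes a weighted sum of the inputs plus a bias and applies ReLU, and the output neuron repeats the same step on the hidden activations. Using generic symbols (w_{ji}, b_j for the hidden layer and v_j, c for the output layer, corresponding to weights_hidden, bias_hidden, weights_output, and bias_output in the code below):

h_j = \mathrm{ReLU}\Big(\sum_{i=1}^{3} w_{ji}\, x_i + b_j\Big), \qquad \hat{y} = \mathrm{ReLU}\Big(\sum_{j=1}^{2} v_j\, h_j + c\Big)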

Feed Forward Neural Network (No Learning)

# Activation function: ReLU
def relu(x):
    return max(0, x)

# Inputs (e.g., student scores)
inputs = [7, 9, 6]  # Neatness, Correctness, Creativity

# Weights for hidden layer (2 neurons)
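# (each inner list holds one neuron's three weights, one per input feature)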
weights_hidden = [
    [0.2, 0.4, -0.5],   # Neuron 1
    [0.3, -0.1, 0.8]    # Neuron 2
]

# Biases for hidden layer
bias_hidden = [1.0, 0.5]

# Calculate hidden layer outputs
hidden_outputs = []
for neuron_weights, bias in zip(weights_hidden, bias_hidden):
    activation = sum(i * w for i, w in zip(inputs, neuron_weights)) + bias
    output = relu(activation)
    hidden_outputs.append(output)
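
# Hand-checked for this example:
#   neuron 1: relu(7*0.2 + 9*0.4 + 6*(-0.5) + 1.0) = relu(3.0) = 3.0
#   neuron 2: relu(7*0.3 + 9*(-0.1) + 6*0.8 + 0.5) = relu(6.5) = 6.5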

# Weights for output layer (1 neuron, takes 2 hidden outputs)
weights_output = [0.6, -0.3]
bias_output = 0.2

# Calculate final output
final_activation = sum(h * w for h, w in zip(hidden_outputs, weights_output)) + bias_output
final_output = relu(final_activation)
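# Hand-checked for this example: relu(3.0*0.6 + 6.5*(-0.3) + 0.2) = relu(0.05) = 0.05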

# Display the process
print("Input:", inputs)
print("Hidden Layer Outputs:", hidden_outputs)
print("Final Output (Predicted Grade):", round(final_output, 2))

What This Code Does

  • Takes inputs (like features from real data).
  • Passes them through a hidden layer using weights, biases, and the ReLU activation.
  • Then passes the hidden outputs to an output layer to get the final result.
  • This is a pure forward pass: no learning and no weight updates happen yet.

Sample Output (When Run):

Input: [7, 9, 6]
Hidden Layer Outputs: [3.0, 6.5]
Final Output (Predicted Grade): 0.05
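
For readers comfortable with matrix notation, the same forward pass can be written as two matrix-vector steps. This is a minimal sketch, assuming NumPy is installed; the names x, W1, b1, w2, and b2 are just illustrative stand-ins for the lists used above.

import numpy as np

x  = np.array([7, 9, 6])                  # inputs (Neatness, Correctness, Creativity)
W1 = np.array([[0.2, 0.4, -0.5],
               [0.3, -0.1, 0.8]])         # hidden-layer weights (2 neurons x 3 inputs)
b1 = np.array([1.0, 0.5])                 # hidden-layer biases
w2 = np.array([0.6, -0.3])                # output-layer weights (2 hidden outputs -> 1 neuron)
b2 = 0.2                                  # output-layer bias

hidden = np.maximum(0, W1 @ x + b1)       # ReLU(W1·x + b1) -> [3.0, 6.5]
output = np.maximum(0, w2 @ hidden + b2)  # ReLU(w2·h + b2) -> about 0.05

print("Hidden Layer Outputs:", hidden)
print("Final Output (Predicted Grade):", round(float(output), 2))

Written this way, it is easier to see that a forward pass is just repeated "multiply by weights, add bias, apply activation" steps, which is essentially what larger frameworks do at scale.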

Feed Forward Mechanism in a Neural Network – Visual Roadmap