Neurons in a Hidden Layer: A Simple Python Example

1. Python Example – One Hidden Layer (No Library)

# Activation function
def relu(x):
    return max(0, x)

# Inputs to the network
inputs = [1.0, 2.0, 3.0]

# Define weights and biases for 3 neurons in the hidden layer
hidden_layer = [
    {'weights': [0.2, -0.5, 1.0], 'bias': 0.1},
    {'weights': [-1.5, 2.0, 0.3], 'bias': -0.3},
    {'weights': [0.7, 0.8, -1.2], 'bias': 0.5}
]

# Compute the output of each neuron
def neuron_output(weights, bias, inputs):
    z = sum(w * i for w, i in zip(weights, inputs)) + bias
    return relu(z)

# Process through hidden layer
hidden_outputs = [neuron_output(n['weights'], n['bias'], inputs) for n in hidden_layer]

print("Outputs from hidden layer neurons:", hidden_outputs)

What Is Happening Internally?

For each neuron:

  • weighted_sum = w₁x₁ + w₂x₂ + w₃x₃ + b
  • activation = relu(weighted_sum)

This mimics how biological neurons fire based on the combined strength of their incoming signals.
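To make this concrete, here is the first hidden neuron worked out by hand, using the weights and bias defined above (values match up to floating-point rounding):

# First hidden neuron, step by step:
# z = (0.2 * 1.0) + (-0.5 * 2.0) + (1.0 * 3.0) + 0.1
#   = 0.2 - 1.0 + 3.0 + 0.1
#   = 2.3
# relu(2.3) = 2.3  (positive, so ReLU passes it through unchanged)
print(neuron_output([0.2, -0.5, 1.0], 0.1, [1.0, 2.0, 3.0]))  # ≈ 2.3

Repeating the same arithmetic for the other two neurons gives hidden outputs of roughly [2.3, 3.1, 0.0]; the third neuron's weighted sum is -0.8, so ReLU clamps it to zero.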

Summary

Component            Biological Equivalent     Math          Role
Inputs               Dendrites                 x₁, x₂, …     Incoming data
Weights              Synapse strength          w₁, w₂, …     Influence of each input
Bias                 Intrinsic excitability    b             Adjusts firing threshold
Weighted sum         Membrane potential        ∑wᵢxᵢ + b     Combines all signals
Activation function  Firing threshold          σ(z)          Output signal
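Because the bias shifts the weighted sum before the activation is applied, it effectively sets how easily the neuron fires. A minimal sketch, reusing the relu-based neuron_output helper defined above (the weights and inputs here are made up purely for illustration):

# Same weights and inputs; only the bias changes.
weights = [0.5, 0.5]
signal = [1.0, 1.0]  # weighted sum before bias: 1.0
print(neuron_output(weights, -2.0, signal))  # relu(1.0 - 2.0) = 0.0, neuron stays silent
print(neuron_output(weights, 0.5, signal))   # relu(1.0 + 0.5) = 1.5, neuron fires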

2. Python Example (Extending the Previous Hidden Layer)

Let’s extend our previous example and compute the final output:

import math

# Activation functions
def relu(x):
    return max(0, x)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Inputs to the network
inputs = [1.0, 2.0, 3.0]

# Hidden layer (3 neurons)
hidden_layer = [
    {'weights': [0.2, -0.5, 1.0], 'bias': 0.1},
    {'weights': [-1.5, 2.0, 0.3], 'bias': -0.3},
    {'weights': [0.7, 0.8, -1.2], 'bias': 0.5}
]

def neuron_output(weights, bias, inputs, activation=relu):
    z = sum(w * i for w, i in zip(weights, inputs)) + bias
    return activation(z)

# Compute hidden layer outputs
hidden_outputs = [neuron_output(n['weights'], n['bias'], inputs) for n in hidden_layer]

# Output neuron weights (connecting hidden to output)
output_weights = [0.4, -1.0, 0.6]
output_bias = -0.2

# Compute final output (sigmoid activation for binary classification)
final_output = neuron_output(output_weights, output_bias, hidden_outputs, activation=sigmoid)

print("Final prediction output:", final_output)

What This Does:

Step                        What Happens
Hidden layer processing     Converts raw input into meaningful intermediate features
Output layer combination    Weights these features to form a final decision
Activation (e.g., sigmoid)  Squashes the output into a usable range (e.g., a probability between 0 and 1)
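Tracing the actual numbers through the network (again up to floating-point rounding):

# Hidden outputs from part 1: [2.3, 3.1, 0.0]
# Output neuron: z = 0.4*2.3 + (-1.0)*3.1 + 0.6*0.0 + (-0.2)
#                  = 0.92 - 3.1 + 0.0 - 0.2
#                  = -2.38
# sigmoid(-2.38) ≈ 0.085

So for these inputs, this particular network assigns roughly an 8.5% probability to class 1.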

Output Choices Based on Task:

Task Type              Output Layer Activation  Output Interpretation
Regression             Identity / None          Continuous number (e.g., price, score)
Binary classification  Sigmoid                  Probability of class 1
Multi-class            Softmax                  Probabilities of each class
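The code above covers sigmoid but not softmax. Here is a minimal softmax sketch in the same plain-Python style (the function name and the example scores are illustrative, not part of the examples above):

import math

def softmax(scores):
    # Subtract the max score for numerical stability before exponentiating
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Raw output scores ("logits") for three classes
print(softmax([2.0, 1.0, 0.1]))  # three probabilities that sum to 1, largest for class 0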