ElasticNet Regression Example with Simple Python
1. Scenario:
We’re predicting a house price based on two features:
- Area of the house
- Number of bedrooms
We’ll run gradient descent to update weights using ElasticNet, combining L1 (Lasso) and L2 (Ridge) penalties.
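For reference, here is the objective the code below minimizes, written in LaTeX and assuming the 1/(2n) mean-squared-error scaling implied by the gradients in the loop (λ₁ and λ₂ correspond to lambda1 and lambda2 in the code):

J(w, b) = \frac{1}{2n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2 + \lambda_1 \sum_j |w_j| + \lambda_2 \sum_j w_j^2

Differentiating with respect to w_j gives (1/n) Σᵢ (ŷᵢ − yᵢ) xᵢⱼ + λ₁ · sign(wⱼ) + 2λ₂ · wⱼ, which is exactly the per-weight update the loop computes.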
Python Code (No Libraries)
# Sample data: [area, bedrooms] and target price
X = [
    [1400, 3],
    [1600, 3],
    [1700, 4],
    [1875, 4],
    [1100, 2]
]
y = [245000, 312000, 279000, 308000, 199000]

# Normalize the features (simple min-max normalization for interpretability).
# Assumes no feature column is constant, so max_val - min_val is never zero.
def normalize(X):
    norm_X = []
    for col in zip(*X):  # iterate over feature columns
        min_val, max_val = min(col), max(col)
        norm_col = [(val - min_val) / (max_val - min_val) for val in col]
        norm_X.append(norm_col)
    return list(map(list, zip(*norm_X)))  # transpose back to rows

X = normalize(X)

# Initialize parameters
w = [0.0 for _ in range(len(X[0]))]  # weights for features
b = 0.0         # bias
alpha = 0.1     # learning rate
lambda1 = 0.01  # L1 penalty (Lasso)
lambda2 = 0.01  # L2 penalty (Ridge)
epochs = 1000

# Gradient Descent
for epoch in range(epochs):
    dw = [0.0 for _ in w]
    db = 0.0
    n = len(X)
    for i in range(n):
        y_pred = sum(w[j] * X[i][j] for j in range(len(w))) + b
        error = y_pred - y[i]
        for j in range(len(w)):
            dw[j] += error * X[i][j]
        db += error
    for j in range(len(w)):
        # Add L1 and L2 regularization to the gradient.
        # The L1 subgradient is sign(w); use 0 at w == 0 so the penalty
        # does not push a weight that is already zero away from zero.
        l1_grad = 1 if w[j] > 0 else (-1 if w[j] < 0 else 0)
        dw[j] = (dw[j] / n) + lambda1 * l1_grad + 2 * lambda2 * w[j]
        w[j] -= alpha * dw[j]
    db /= n
    b -= alpha * db

# Output final weights and bias
print("Weights:", w)
print("Bias:", b)
What We Just Did
- Normalized the features for stability
- Updated the weights and bias via batch gradient descent
- Penalty terms:
- L1 (λ₁) adds or subtracts a fixed amount (λ₁ · sign(w)) from each gradient step → pushes weights toward 0 and encourages sparsity
- L2 (λ₂) adds the squared weight to the loss (gradient 2λ₂ · w) → discourages large weights
- No ML library used
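The loop above used no ML library, but if you want to sanity-check it against one: scikit-learn’s ElasticNet minimizes 1/(2n)·‖y − Xw‖² + α·l1_ratio·‖w‖₁ + ½·α·(1 − l1_ratio)·‖w‖², so our penalties map to α = λ₁ + 2λ₂ and l1_ratio = λ₁ / (λ₁ + 2λ₂). A quick cross-check sketch (assumes scikit-learn is installed; expect close but not identical numbers, since the loop above uses a plain sign subgradient rather than coordinate descent):

# Optional cross-check with scikit-learn (kept separate from the tutorial,
# which deliberately avoids ML libraries).
from sklearn.linear_model import ElasticNet

lambda1, lambda2 = 0.01, 0.01
model = ElasticNet(
    alpha=lambda1 + 2 * lambda2,                 # our (λ1, λ2) mapped to sklearn's α
    l1_ratio=lambda1 / (lambda1 + 2 * lambda2),  # ...and to its L1/L2 mix
)
model.fit(X, y)  # X is the normalized feature matrix from above
print("sklearn weights:", model.coef_)
print("sklearn bias:", model.intercept_)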
ElasticNet Regression – Suitability Checklist
