
Training Loop

Implement a simple SGD training loop for linear regression.

Given an initial weight w and bias b, perform n_steps iterations of gradient descent:

For each step:

  1. Forward: $\hat{y} = x \cdot w + b$
  2. Loss: $L = \frac{1}{N} \sum (y - \hat{y})^2$
  3. Gradients: $\frac{\partial L}{\partial w} = \frac{-2}{N} x^T (y - \hat{y})$, $\frac{\partial L}{\partial b} = \frac{-2}{N} \sum (y - \hat{y})$
  4. Update: $w \leftarrow w - lr \cdot \frac{\partial L}{\partial w}$, $b \leftarrow b - lr \cdot \frac{\partial L}{\partial b}$

Input:

  • x: shape (N, 1), y: shape (N, 1)
  • w_init: shape (1, 1), b_init: shape (1,)
  • lr: learning rate (float), n_steps: number of steps (int)

Output: A dict with the final "w" (shape (1, 1)), "b" (shape (1,)), and "final_loss" (a scalar).
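The four steps above can be sketched in NumPy roughly as follows (the function name and exact signature are my own choices; the exercise's grader may expect something different):

```python
import numpy as np

def train_linear_regression(x, y, w_init, b_init, lr, n_steps):
    """Full-batch gradient descent for 1-D linear regression."""
    w, b = w_init.copy(), b_init.copy()
    N = x.shape[0]
    for _ in range(n_steps):
        y_hat = x @ w + b                            # forward pass, shape (N, 1)
        residual = y - y_hat                         # (y - y_hat), shape (N, 1)
        grad_w = (-2.0 / N) * (x.T @ residual)       # dL/dw, shape (1, 1)
        grad_b = (-2.0 / N) * residual.sum(axis=0)   # dL/db, shape (1,)
        w -= lr * grad_w                             # update step
        b -= lr * grad_b
    final_loss = float(np.mean((y - (x @ w + b)) ** 2))
    return {"w": w, "b": b, "final_loss": final_loss}
```

Note that the negative sign in the gradients cancels with the subtraction in the update, so the parameters move toward reducing the loss; keeping the sign explicit makes the formulas match step 3 term for term.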
