
Autoencoder

Build a simple autoencoder with one hidden layer encoder and decoder.

Encoder: $z = \text{ReLU}(x \cdot W_e + b_e)$

Decoder: $\hat{x} = z \cdot W_d + b_d$

Reconstruction loss (MSE): $L = \frac{1}{N \cdot D} \sum_{i,j} (x_{ij} - \hat{x}_{ij})^2$

where N is batch size and D is input dimension.

Input:

  • x: input of shape (batch, input_dim)
  • W_e: encoder weights (input_dim, latent_dim)
  • b_e: encoder bias (latent_dim,)
  • W_d: decoder weights (latent_dim, input_dim)
  • b_d: decoder bias (input_dim,)

Output: A dict with "latent" (shape (batch, latent_dim)), "reconstruction" (shape (batch, input_dim)), and "loss" (a scalar).
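The forward pass described above can be sketched in NumPy as follows. This is one possible reference implementation, not the platform's official solution; the function name `autoencoder` is an assumption:

```python
import numpy as np

def autoencoder(x, W_e, b_e, W_d, b_d):
    """One-hidden-layer autoencoder forward pass with MSE reconstruction loss.

    Note: the function name and docstring are illustrative assumptions.
    """
    # Encoder: ReLU(x @ W_e + b_e) -> shape (batch, latent_dim)
    z = np.maximum(0.0, x @ W_e + b_e)
    # Decoder: linear map back to input space -> shape (batch, input_dim)
    x_hat = z @ W_d + b_d
    # MSE averaged over all N * D elements (np.mean divides by batch * input_dim)
    loss = np.mean((x - x_hat) ** 2)
    return {"latent": z, "reconstruction": x_hat, "loss": loss}
```

Note that `np.mean` over the full `(batch, input_dim)` array already divides by $N \cdot D$, matching the loss formula above.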
