Posts

    Anomaly Detection and Invertible Reparameterizations

    Density-based anomaly detection methods identify anomalies as points with low probability density under the learned model. However, an invertible reparameterization of the data can arbitrarily change the density assigned to any point - should a point flagged as anomalous in one representation retain that status in every other representation?
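
    For reference, the mechanism behind this observation is the change-of-variables formula: for an invertible map $y = f(x)$,

    $$p_Y(y) = p_X\big(f^{-1}(y)\big)\,\big|\det J_{f^{-1}}(y)\big|,$$

    so the Jacobian factor, which depends only on the choice of $f$, can raise or lower the density at any particular point.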

    Concentration Inequalities I: Tensorization Identities

    Tensorization allows us to break apart complicated functionals of joint distributions of random variables into a sum of simpler terms involving their marginals. Here we cover a range of useful tensorization identities which may be used to derive interesting concentration inequalities.
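
    A representative instance of the pattern is the tensorization of variance (the Efron-Stein inequality): for independent $X_1, \dots, X_n$ and $Z = f(X_1, \dots, X_n)$,

    $$\operatorname{Var}(Z) \le \sum_{i=1}^{n} \mathbb{E}\big[\operatorname{Var}_i(Z)\big],$$

    where $\operatorname{Var}_i$ is the variance with respect to $X_i$ alone, conditional on the remaining coordinates. The $n$-dimensional quantity on the left is controlled by $n$ one-dimensional quantities on the right.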

    Neural Image Compression I

    Modern learnable data compression schemes use neural networks to define the transforms used in transform coding. We illustrate the basic idea behind learnable lossy compression and look at one possible continuous relaxation of quantization, the discretization step required for entropy coding.
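
    A minimal sketch of one such relaxation (additive uniform noise in place of rounding, as popularized by Balle et al.; the function names here are our own):

        import jax
        import jax.numpy as jnp

        def noisy_quantize(y, key):
            # Training-time surrogate: additive uniform noise on (-0.5, 0.5)
            # mimics rounding while keeping gradients well-defined.
            u = jax.random.uniform(key, y.shape, minval=-0.5, maxval=0.5)
            return y + u

        def hard_quantize(y):
            # Inference-time quantizer actually used for entropy coding.
            return jnp.round(y)

        y = jnp.array([0.2, 1.7, -2.4])
        print(hard_quantize(y))                          # [ 0.  2. -2.]
        print(noisy_quantize(y, jax.random.PRNGKey(0)))  # y plus uniform noise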

    Pastimes

    Of cat.

    Monte Carlo Methods and Normalizing Flows

    Importance sampling can be used to reduce the variance of Monte Carlo integration, including the computation of expectations. We investigate the main idea and then examine an interesting application of normalizing flow models to this area.
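
    A minimal sketch of the underlying identity $\mathbb{E}_p[f(X)] = \mathbb{E}_q\!\left[f(X)\,p(X)/q(X)\right]$ in Jax (the target, proposal, and test function are toy choices of ours):

        import jax
        import jax.numpy as jnp
        from jax.scipy.stats import norm

        def importance_estimate(key, n=100_000):
            # Target p = N(0, 1), proposal q = N(1, 1), f(x) = x**2, so E_p[f] = 1.
            x = jax.random.normal(key, (n,)) + 1.0  # samples from q
            log_w = norm.logpdf(x, 0.0, 1.0) - norm.logpdf(x, 1.0, 1.0)
            return jnp.mean(jnp.exp(log_w) * x**2)

        print(importance_estimate(jax.random.PRNGKey(0)))  # approx 1.0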

    Latent Variable Models in Jax

    Recently proposed latent variable models use quantities derived from importance-sampling bounds on the marginal log-likelihood to construct an unbiased estimator of $\log p(x)$. We investigate one such model, using it as a vehicle for a pedagogical introduction to Jax.
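
    One such construction (a Russian-roulette estimator in the style of SUMO; we are assuming this is representative of the family the post covers) telescopes the $K$-sample IWAE bounds $\hat{\mathcal{L}}_K$ and randomly truncates the series:

    $$\widehat{\log p(x)} = \hat{\mathcal{L}}_1 + \sum_{k=1}^{K} \frac{\hat{\mathcal{L}}_{k+1} - \hat{\mathcal{L}}_k}{\mathbb{P}(K \ge k)}, \qquad K \sim p(K),$$

    which is unbiased because the $1/\mathbb{P}(K \ge k)$ reweighting exactly compensates for the random truncation, and $\hat{\mathcal{L}}_K \to \log p(x)$ as $K \to \infty$.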

    Intuitive IWAE Bounds + Implementation in Jax

    Multi-sample estimators of the marginal log-likelihood provide tighter bounds on $\log p(x)$ than the standard evidence lower bound. Here we sketch why this is the case, and walk through an implementation in Jax.
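
    The core computation is short enough to sketch here; a minimal version, assuming the per-sample log-weights $\log\big[p(x, z_k)/q(z_k \mid x)\big]$ have already been computed:

        import jax.numpy as jnp
        from jax.scipy.special import logsumexp

        def iwae_bound(log_w):
            # log_w: shape (K,), log-weights for K samples z_k ~ q(z | x).
            # L_K = log((1/K) * sum_k w_k) = logsumexp(log_w) - log(K),
            # evaluated in log-space for numerical stability.
            K = log_w.shape[0]
            return logsumexp(log_w) - jnp.log(K)

    By Jensen's inequality $\mathbb{E}[L_K] \le \log p(x)$, and the bound tightens monotonically as $K$ grows.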

    Latent Variable Model Primer

    This post serves as an easy-to-update personal encyclopedia about latent variable models. We start with a straightforward derivation of the evidence lower bound and then examine modern techniques used in variational inference.
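
    The derivation referred to is the standard Jensen's-inequality step:

    $$\log p(x) = \log \mathbb{E}_{q(z \mid x)}\!\left[\frac{p(x, z)}{q(z \mid x)}\right] \ge \mathbb{E}_{q(z \mid x)}\!\left[\log \frac{p(x, z)}{q(z \mid x)}\right],$$

    with the gap between the two sides equal to $\mathrm{KL}\big(q(z \mid x)\,\|\,p(z \mid x)\big)$, so the bound is tight exactly when the variational posterior matches the true posterior.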