Posts
2022
An interesting non-constructive proof of the fundamental theorem of Riemannian geometry, written to put off doing actual work.
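For reference, the fundamental theorem of Riemannian geometry asserts that on a Riemannian manifold $(M, g)$ there is exactly one affine connection $\nabla$ that is torsion-free and compatible with the metric:

$$\nabla_X Y - \nabla_Y X = [X, Y], \qquad X\,g(Y, Z) = g(\nabla_X Y, Z) + g(Y, \nabla_X Z).$$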
Renormalization group flow for stringy nonlinear sigma models
01 September 2022
When is it possible to study the dynamics of two-dimensional nonlinear sigma models? The qualitative answer is seductively geometric, but getting quantitative is a different matter.
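For orientation, the standard one-loop result (stated here from the general sigma-model literature, not from the post itself) is that the target-space metric runs by its Ricci curvature,

$$\mu \frac{\partial g_{\mu\nu}}{\partial \mu} = \alpha' R_{\mu\nu} + O(\alpha'^2),$$

so to leading order in $\alpha'$ the renormalization group flow is, up to sign conventions, Ricci flow.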
2020
Anomaly Detection and Invertible Reparameterizations
17 December 2020
Density-based anomaly detection methods identify anomalies in the data as points with low probability density under the learned model. However, an invertible reparameterization of the data can arbitrarily change the density assigned to any point. Should points deemed anomalous in one representation retain that status in every other representation?
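The tension comes from the change-of-variables formula: under an invertible map $y = f(x)$, densities pick up a Jacobian factor,

$$p_Y(y) = p_X\big(f^{-1}(y)\big)\,\left|\det \frac{\partial f^{-1}}{\partial y}\right|,$$

so a suitable choice of $f$ can move any given point to essentially any density level.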
Concentration Inequalities I: Tensorization Identities
04 December 2020
Tensorization allows us to break a complicated joint distribution of random variables into a sum of simpler terms involving its marginals. Here we cover a range of useful tensorization identities that can be used to derive interesting concentration inequalities.
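A representative example (chosen here for illustration) is the tensorization of variance behind the Efron-Stein inequality: for independent $X_1, \dots, X_n$ and $Z = f(X_1, \dots, X_n)$,

$$\operatorname{Var}(Z) \le \sum_{i=1}^{n} \mathbb{E}\big[\operatorname{Var}_i(Z)\big],$$

where $\operatorname{Var}_i$ denotes the variance with respect to $X_i$ alone, holding the remaining coordinates fixed.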
12 September 2020
Modern learnable data compression schemes use neural networks to define the transforms used in transform coding. We illustrate the basic idea behind learnable lossy compression and look at one possible continuous relaxation of quantization, the non-differentiable step required for entropy coding.
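One widely used relaxation (the additive-uniform-noise surrogate; an assumption here about which relaxation the post covers) replaces rounding with uniform noise during training. A minimal sketch in Jax:

```python
import jax
import jax.numpy as jnp

def quantize(y, rng=None):
    # Hypothetical helper for illustration: hard rounding at test time,
    # a differentiable additive-uniform-noise surrogate during training.
    if rng is None:
        return jnp.round(y)  # actual quantizer whose outputs get entropy coded
    u = jax.random.uniform(rng, y.shape, minval=-0.5, maxval=0.5)
    return y + u             # differentiable surrogate for rounding
```

The noise is chosen so that the density of $y + u$ agrees at integer points with the probability mass function of $\operatorname{round}(y)$, which is what the entropy model is fit to.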
10 September 2020
Of cat.
Monte Carlo Methods and Normalizing Flows
04 August 2020
Importance sampling can be used to reduce the variance of Monte Carlo estimators, including those used to compute expectations. We investigate the main idea and then examine an interesting application of normalizing flow models to this area.
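As a toy sketch of the main idea (the target density, proposal, and test function are invented for illustration), estimating $\mathbb{E}_{p}[f(X)]$ with samples from a heavier-tailed proposal $q$:

```python
import jax
import jax.numpy as jnp
from jax.scipy.stats import norm

def importance_estimate(key, f, n=10_000):
    # Toy setup: target p = N(0, 1), heavier-tailed proposal q = N(0, 2).
    x = 2.0 * jax.random.normal(key, (n,))               # draw x ~ q
    log_w = norm.logpdf(x, 0.0, 1.0) - norm.logpdf(x, 0.0, 2.0)
    return jnp.mean(jnp.exp(log_w) * f(x))               # (1/n) * sum_i w_i f(x_i)

# E_p[X^2] = 1 for the standard normal target.
estimate = importance_estimate(jax.random.PRNGKey(0), jnp.square)
```

The variance reduction comes from the choice of proposal: the closer $|f|\,p/q$ is to a constant, the lower the variance of the weighted estimator.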
21 July 2020
Recently proposed latent variable models use quantities derived from importance-sampling bounds on the marginal log-likelihood to construct an unbiased estimator of $\log p(x)$. We investigate one such model and use it as a vehicle for a pedagogical introduction to Jax.
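One way such an estimator can be assembled (an assumption about the particular construction; the post may differ in its details) is to debias a convergent ladder of bounds $L_1 \le L_2 \le \cdots \to \log p(x)$, such as the importance-weighted bounds, with a randomly truncated telescoping sum: drawing $K$ from any distribution on $\{1, 2, \dots\}$ with $P(K \ge k) > 0$ for all $k$,

$$\hat{L} = L_1 + \sum_{k=1}^{K} \frac{L_{k+1} - L_k}{P(K \ge k)}, \qquad \mathbb{E}\big[\hat{L}\big] = \log p(x),$$

provided the telescoping series converges well enough for the expectation and the infinite sum to be interchanged.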
Intuitive IWAE Bounds + Implementation in Jax
20 June 2020
Multi-sample estimators of the marginal log-likelihood provide tighter bounds on $\log p(x)$ than the standard evidence lower bound. Here we sketch why this is, and walk through an implementation in Jax.
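Concretely, the $K$-sample bound averages importance weights inside the logarithm,

$$\mathcal{L}_K = \mathbb{E}_{z_1, \dots, z_K \sim q(z \mid x)} \left[ \log \frac{1}{K} \sum_{k=1}^{K} \frac{p(x, z_k)}{q(z_k \mid x)} \right],$$

and satisfies $\mathcal{L}_1 \le \mathcal{L}_K \le \mathcal{L}_{K+1} \le \log p(x)$, with $\mathcal{L}_1$ the usual evidence lower bound.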
15 June 2020
This post serves as an easy-to-update personal encyclopedia of latent variable models. We start with a straightforward derivation of the evidence lower bound and then examine modern techniques used in variational inference.
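The derivation in question rests on the exact decomposition

$$\log p(x) = \mathbb{E}_{q(z \mid x)}\!\left[\log \frac{p(x, z)}{q(z \mid x)}\right] + \mathrm{KL}\big(q(z \mid x)\,\|\,p(z \mid x)\big),$$

where the first term is the evidence lower bound and the nonnegative KL term is the gap, vanishing exactly when the variational posterior matches the true posterior.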