Hey there! These posts are partly a record of the various topics I work on, and partly something I can point students to when they're interested in them (hoping for the day).
In this post I’m going to go through Gibbs and slice sampling. If you’re a statistician you’ve probably seen them used everywhere, but have you ever looked in detail at why they work? (UNDER CONSTRUCTION)
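As a taste of where the post goes, here’s a minimal Gibbs sampler (a toy example of mine, not code from the post) for a bivariate normal, where both full conditionals are themselves normal so each update is an exact draw:

```python
import numpy as np

# Gibbs sampling for a standard bivariate normal with correlation rho.
# The full conditionals are x | y ~ N(rho * y, 1 - rho^2) and
# symmetrically for y | x, so we alternate exact draws from each.
rng = np.random.default_rng(0)
rho, n_samples = 0.8, 5000
x, y = 0.0, 0.0
samples = np.empty((n_samples, 2))
for i in range(n_samples):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # draw x | y
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # draw y | x
    samples[i] = x, y

print(np.corrcoef(samples[1000:].T))  # empirical correlation ~ rho
```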
In this post I’m going to try and introduce Hamiltonian and Langevin Monte Carlo and their stochastic gradient counterparts (among a couple of other things). This post will involve a little stochastic differential calculus and some results from A Complete Recipe for Stochastic Gradient MCMC - Ma et al. (2015).
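To give a flavour of the simpler end of that family, here’s a minimal sketch (a toy example of mine, not code from the paper) of the unadjusted Langevin algorithm targeting a standard normal, where each step follows the gradient of the log density plus appropriately scaled Gaussian noise:

```python
import numpy as np

# Unadjusted Langevin algorithm targeting N(0, 1), for which
# grad log p(theta) = -theta. Each step is a half-gradient move
# plus sqrt(eps)-scaled noise.
rng = np.random.default_rng(0)
eps, n_steps = 0.1, 10000
theta = 0.0
samples = np.empty(n_steps)
for t in range(n_steps):
    grad_log_p = -theta  # gradient of log N(0, 1)
    theta += 0.5 * eps * grad_log_p + np.sqrt(eps) * rng.normal()
    samples[t] = theta

print(samples.mean(), samples.var())  # should be close to 0 and 1
```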
In this post I’m going to attempt to give an intuitive introduction to Fisher information, (very briefly) Jeffreys priors, and the lower bound on the variance of unbiased estimators, i.e. the Cramér-Rao bound. Hopefully this post will be shorter than my last couple…
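For orientation, the two central objects of the post in their simplest one-parameter form: the Fisher information of \(\theta\) and the Cramér-Rao lower bound it implies for any unbiased estimator \(\hat{\theta}\),

\[
\mathcal{I}(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial \theta} \log p(x \vert \theta)\right)^{2}\right], \qquad \mathrm{Var}\big(\hat{\theta}\big) \geq \frac{1}{\mathcal{I}(\theta)}.
\]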
In this post, I’ll go through Constant Curvature VAEs (traditional, hyperspherical, and hyperbolic) for image data classification and molecular structure reconstruction.
In this post I’m going to go through the kernel trick and how it helps or enables various tools in statistics and machine learning, including support vector machines, Gaussian processes, kernel regression, and kernel PCA. This is going to be a bit of a long one; I’ll probably split it up later, but for now … sorry?
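To anchor the idea before the tour, here’s a minimal sketch (a toy of mine) of kernel ridge regression with an RBF kernel, where fitting and prediction use only pairwise kernel evaluations and never an explicit feature map:

```python
import numpy as np

# Kernel ridge regression: alpha = (K + lam I)^{-1} y, and predictions
# are f(x*) = k(x*, X) @ alpha -- kernel values only, no features.
def rbf(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)

lam = 0.1
alpha = np.linalg.solve(rbf(X, X) + lam * np.eye(len(X)), y)

X_test = np.linspace(-3, 3, 5)[:, None]
y_pred = rbf(X_test, X) @ alpha  # predictions via kernel evaluations only
print(y_pred)
```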
In this post, I’m going to investigate the underlying relationships between various physical and mental health indicators and student stress levels. In the process I will give an introduction to the Uniform Manifold Approximation and Projection (UMAP) dimensionality reduction technique.
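For a quick sense of the API (assuming the umap-learn package, and using sklearn’s digits as stand-in data rather than the stress dataset analysed in the post):

```python
import umap  # the umap-learn package
from sklearn.datasets import load_digits

# Stand-in data: sklearn's digits, not the student stress dataset.
X, _ = load_digits(return_X_y=True)

# n_neighbors trades off local vs. global structure; min_dist controls
# how tightly points are packed in the low-dimensional embedding.
embedding = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(X)
print(embedding.shape)  # (n_samples, 2)
```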
In this post, I’ll attempt to give an introduction to simulation-based inference, specifically delving into the method of neural ratio estimation (NRE), including a rudimentary implementation.
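As a taste of the NRE idea, here’s a minimal sketch (a toy of mine with a trivial simulator \(x \sim \mathcal{N}(\theta, 1)\), not the post’s implementation): a classifier is trained to tell joint pairs from shuffled pairs, and its logit then estimates the likelihood-to-evidence ratio.

```python
import torch
import torch.nn as nn

# NRE: classify joint pairs (theta, x) ~ p(theta, x) against marginal
# pairs (theta, x) ~ p(theta) p(x); the optimal logit approximates
# log p(x | theta) - log p(x), the likelihood-to-evidence ratio.
torch.manual_seed(0)
theta = torch.randn(4096, 1)             # prior draws
x = theta + torch.randn_like(theta)      # simulator draws

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    shuffled = theta[torch.randperm(len(theta))]   # break the pairing
    joint = torch.cat([theta, x], dim=1)           # label 1
    marginal = torch.cat([shuffled, x], dim=1)     # label 0
    logits = net(torch.cat([joint, marginal])).squeeze(-1)
    labels = torch.cat([torch.ones(len(joint)), torch.zeros(len(marginal))])
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# net's logit is now an estimate of the log likelihood-to-evidence ratio.
```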
In this post, I’ll attempt to give an introduction to simulation-based inference, specifically delving into the methods of neural posterior estimation (NPE) and neural likelihood estimation (NLE), including rudimentary implementations.
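To preview the NPE side, here’s a minimal sketch (again a toy of mine, not the post’s implementation) fitting a single-Gaussian \(q(\vec{\theta}\vert\vec{x})\) by maximum likelihood on simulated pairs:

```python
import torch
import torch.nn as nn

# NPE with a single Gaussian head q(theta | x) = N(mu(x), sigma(x)^2),
# trained by maximising log q(theta | x) over simulated (theta, x)
# pairs. Toy simulator: x ~ N(theta, 1) with a N(0, 1) prior.
torch.manual_seed(0)
theta = torch.randn(4096, 1)             # prior draws
x = theta + torch.randn_like(theta)      # simulator draws

net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    mu, log_sigma = net(x).chunk(2, dim=1)
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    loss = -dist.log_prob(theta).mean()  # minimise -log q(theta | x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The true posterior here is N(x / 2, 1 / 2), so at x = 1 we expect
# a mean near 0.5 and a standard deviation near 0.707.
mu, log_sigma = net(torch.tensor([[1.0]])).chunk(2, dim=1)
print(mu.item(), log_sigma.exp().item())
```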
In this post I will attempt to give an introduction to conditional normalising flows (not to be confused with continuous normalising flows), which model both \(\vec{\theta}\) and \(\vec{x}\) in the conditional distribution \(p(\vec{\theta}\vert\vec{x})\). I was pleasantly surprised at how simple they are to implement compared to unconditional normalising flows, so I thought I’d show this in a straightforward way. This post assumes you’ve read my post on Building a normalising flow from scratch using PyTorch.
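As a preview of that simplicity, here’s a minimal sketch of a single conditional affine coupling layer (my own simplified illustration, not the post’s code): the conditioning variable \(\vec{x}\) is simply concatenated into the network that produces the scale and shift.

```python
import torch
import torch.nn as nn

# One conditional affine coupling layer: half of theta is transformed
# with a scale and shift that depend on the other half AND on x.
# Stacking such layers (with permutations) gives a full conditional flow.
class ConditionalCoupling(nn.Module):
    def __init__(self, theta_dim=2, x_dim=1, hidden=64):
        super().__init__()
        self.half = theta_dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (theta_dim - self.half)))

    def log_prob(self, theta, x):
        t1, t2 = theta[:, :self.half], theta[:, self.half:]
        s, t = self.net(torch.cat([t1, x], dim=1)).chunk(2, dim=1)
        z2 = (t2 - t) * torch.exp(-s)  # invert the coupling transform
        z = torch.cat([t1, z2], dim=1)
        base = torch.distributions.Normal(0.0, 1.0)
        return base.log_prob(z).sum(1) - s.sum(1)  # change of variables

# Train by maximising log_prob(theta, x) over simulated (theta, x) pairs.
```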
In this post I will attempt to give an introduction to continuous normalising flows, an evolution of normalising flows that translates the idea of training a discrete set of transformations to approximate a posterior into training an ODE, or vector field, to do the same thing.
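To make that concrete, here’s a minimal sketch (a toy of mine; real CNFs use adaptive ODE solvers and stochastic trace estimators) of evolving samples and their log density together with plain Euler steps, using the instantaneous change-of-variables formula \(\mathrm{d}\log p/\mathrm{d}t = -\mathrm{tr}(\partial f/\partial z)\):

```python
import torch
import torch.nn as nn

# A neural vector field f(z, t) defines the ODE dz/dt = f(z, t); the
# log density evolves alongside as d log p / dt = -trace(df/dz).
class VectorField(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, z, t):
        tt = torch.full((len(z), 1), t)
        return self.net(torch.cat([z, tt], dim=1))

def flow(f, z, n_steps=50):
    dt = 1.0 / n_steps
    delta_logp = torch.zeros(len(z))
    for k in range(n_steps):
        z = z.detach().requires_grad_(True)
        dz = f(z, k * dt)
        # Exact Jacobian trace, one autograd pass per dimension.
        trace = sum(
            torch.autograd.grad(dz[:, i].sum(), z, retain_graph=True)[0][:, i]
            for i in range(z.shape[1]))
        z = z + dt * dz                       # Euler step for the state
        delta_logp = delta_logp - dt * trace  # ... and for the log density
    return z, delta_logp
```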
In this post I will attempt to show you how to construct a simple normalising flow using base elements from PyTorch, heavily inspired by Eric Jang’s 2018 post doing the same thing with TensorFlow and his subsequent 2019 tutorial using JAX.
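The core training loop is surprisingly small. Here’s a minimal sketch (a deliberately trivial flow of mine, just a learnable elementwise affine map) showing the change-of-variables objective that the post stacks richer invertible layers onto:

```python
import torch
import torch.nn as nn

# Fit a flow by maximising the change-of-variables log-likelihood:
# log p(x) = log p_base(z) + log|det dz/dx|, where z is the inverse
# image of x. The "flow" here is x = z * exp(s) + b.
torch.manual_seed(0)
data = 2.0 * torch.randn(2048, 2) + 3.0  # target: N(3, 2^2) per dim

log_scale = nn.Parameter(torch.zeros(2))
shift = nn.Parameter(torch.zeros(2))
opt = torch.optim.Adam([log_scale, shift], lr=1e-2)
base = torch.distributions.Normal(0.0, 1.0)

for step in range(2000):
    z = (data - shift) * torch.exp(-log_scale)    # inverse map x -> z
    log_det = -log_scale.sum()                    # log|det dz/dx|
    log_prob = base.log_prob(z).sum(1) + log_det  # change of variables
    loss = -log_prob.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(shift.detach(), log_scale.exp().detach())  # ~[3, 3] and ~[2, 2]
```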
In this post I will attempt to give an introduction to variational inference, with some examples using the NumPyro Python package. (Partly under construction.)
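As a taste, here’s a minimal NumPyro sketch (a toy Gaussian-mean model of mine, not one of the post’s examples) fitting a mean-field normal guide with SVI:

```python
import jax.numpy as jnp
import jax.random as random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import SVI, Trace_ELBO
from numpyro.infer.autoguide import AutoNormal

# A conjugate toy model: unknown Gaussian mean with a wide normal prior.
def model(data):
    mu = numpyro.sample("mu", dist.Normal(0.0, 10.0))
    numpyro.sample("obs", dist.Normal(mu, 1.0), obs=data)

data = jnp.array([1.2, 0.8, 1.1, 0.9])

# AutoNormal builds a mean-field normal guide; SVI maximises the ELBO.
guide = AutoNormal(model)
svi = SVI(model, guide, numpyro.optim.Adam(0.01), Trace_ELBO())
svi_result = svi.run(random.PRNGKey(0), 2000, data)
print(svi_result.params)  # variational parameters for mu
```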
In this post, I’m going to try to give an intuitive introduction to the Metropolis-Hastings algorithm, showing the utility of the method without getting bogged down in too much of the math.
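To show just how little machinery is needed, here’s a minimal random-walk Metropolis-Hastings sampler (a toy example of mine) for an unnormalised bimodal target:

```python
import numpy as np

# Random-walk Metropolis-Hastings. We only need the target density up
# to a constant -- here an unnormalised mixture of two Gaussians --
# because the normalising constant cancels in the acceptance ratio.
def unnorm_target(x):
    return np.exp(-0.5 * (x - 2) ** 2) + np.exp(-0.5 * (x + 2) ** 2)

rng = np.random.default_rng(0)
x, step = 0.0, 1.0
samples = np.empty(10000)
for i in range(len(samples)):
    proposal = x + step * rng.normal()  # symmetric Gaussian proposal
    if rng.uniform() < unnorm_target(proposal) / unnorm_target(x):
        x = proposal                    # accept the move...
    samples[i] = x                      # ...else keep the current state

print(samples.mean())  # ~0 for this symmetric bimodal target
```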