Efficient Gibbs for Dirichlet-multinomials with missing data?

Another recent paper that I’ve both enjoyed and found a lot of practical benefit from is Nathan Stein and Xiao-Li Meng’s “Practical perfect sampling using composite bounding chains: the Dirichlet-multinomial model” (Biometrika, 2013). In addition to constructing a perfect sampler for Dirichlet-multinomial (DM) distributions, this paper gives two easily constructed Gibbs samplers for DMs. What’s cool about these samplers is that they both use a variable-augmentation strategy that places the DM within an urn-replacement scheme. This lets them construct two different ways of looking at the DM, corresponding to two different parameterizations of the distribution that naturally come up in a lot of situations.

The DM on K categories is usually parameterized in terms of (\alpha_1, \alpha_2, \cdots, \alpha_K), where each \alpha_i acts as a concentration parameter ‘attracting’ counts toward category i as \alpha_i increases. There is also a somewhat more intuitive parameterization, more reminiscent of the multinomial distribution, with parameters (\theta, p_1, p_2, \cdots, p_K). The p_i‘s are the expected frequencies of each category (as in the multinomial) and \theta is an inverse-variance parameter: the larger \theta, the closer the DM sits to a plain multinomial. The relationship between the two is straightforward: \alpha_i = \theta \cdot p_i. What’s cool about Stein and Meng’s work (or at least the start of it; there’s a lot of even cooler stuff in the construction of the composite bounding chain) is that they show that both of these parameterizations can be embedded in the same replacement scheme to realize two complementary Gibbs samplers. This means that folks can build MCMC schemes that are reasonably efficient even for the generally difficult-to-sample DM distribution.
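To make the mapping between the two parameterizations concrete, here’s a minimal numpy sketch (my own illustration, not code from the paper) that draws a DM sample either way:

```python
import numpy as np

def sample_dm(n, alpha, rng):
    """Draw one Dirichlet-multinomial sample with total count n."""
    p = rng.dirichlet(alpha)        # latent category frequencies
    return rng.multinomial(n, p)    # counts given those frequencies

rng = np.random.default_rng(0)

# (alpha_1, ..., alpha_K) parameterization
x1 = sample_dm(100, np.array([5.0, 3.0, 2.0]), rng)

# (theta, p_1, ..., p_K) parameterization: alpha_i = theta * p_i
theta, p = 10.0, np.array([0.5, 0.3, 0.2])
x2 = sample_dm(100, theta * p, rng)  # same alpha as above, by construction
```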

For applied folks such as myself, the upside is that you can use DM distributions in a lot more cases than you could before (large K, large N). I’ve put these samplers to fairly good use in a couple of recent papers (shameless self-promotion: http://arxiv.org/abs/1511.05185 and http://biorxiv.org/content/early/2016/03/24/045468). However, the data sets in those papers made for easy work since there was no missingness: every sample had the potential to observe every one of the categories. Unfortunately, my current data sets (one in ecology, one in genomics, one in political science) all have the same underlying issue: each has samples where some number of categories are not observed for structural reasons. All of which creates a big ole headache, since I can’t seem to re-derive these samplers for the case of missing data. Being able to do so would be fantastic since – while DM-based models are definitely on the rise – not being able to deal thoughtfully with missing data is going to hold back their wide deployment.
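To be clear about what I mean by ‘structural’, here’s a toy illustration (entirely hypothetical data, not from any of the papers above) of the kind of per-sample observability pattern I’m describing:

```python
import numpy as np

# Each row is one sample's category counts; the mask marks categories
# that could not have been observed in that sample at all.
counts = np.array([[12,  3,  0,  5],
                   [ 7,  0,  9,  1]])
observable = np.array([[True, True, False, True],   # category 3 unobservable here
                       [True, False, True, True]])  # category 2 unobservable here

# A zero sitting behind observable == False is structural, not informative:
# it carries no evidence about that category's frequency, which is exactly
# what breaks a naive application of the augmentation schemes above.
```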

Any help, interwebs?


Stick-breaking made easy!

So, one of the papers I’ve gotten the most out of in the last month or so is this really superlative effort by Scott Linderman, Matt Johnson and Ryan Adams on multinomial stick-breaking using Polya-Gamma augmentations (a draft of the paper can be found here). It’s excellent!

But why is it excellent? And how is it going to help me better understand species or pottery or voter preference distributions? Well, a few years ago Nicholas Polson and others showed how you could augment very specific likelihoods with additional variables (called Polya-Gamma augmentations; PG for short) to make them vastly easier to sample by (effectively) transforming them into conditionally Gaussian forms. That doesn’t sound like much until you realize that the logistic is one of those likelihoods (also Bernoullis, binomials, negative binomials). It’s well known in statistics that models with logistic likelihoods often have serious sampling issues, and a lot of smart people (Chris Holmes, Leonhard Held, Siddhartha Chib, David Mimno) have tried their hand at the problem in different ways. I didn’t really appreciate this (even though I’ve both used and taught logistic regression) but I did understand that when I tried to build a logistic-regression framework for the Dirichlet-multinomial model (DMM), convergence was horribly unreliable. Now I see why!
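For the curious, here’s roughly what the PG trick buys you in the simplest case, Bayesian logistic regression. This is my own minimal sketch, not Polson et al.’s actual sampler: in particular, I draw the PG variables from a truncated version of their infinite-sum representation, whereas they use an exact method. Given the PG draws, the regression coefficients have an exact Gaussian conditional.

```python
import numpy as np

def pg_draw(b, c, rng, K=200):
    """Approximate draw from PG(b, c) via its truncated infinite-sum
    representation (Polson, Scott & Windle 2013); K terms kept."""
    k = np.arange(1, K + 1)
    g = rng.gamma(b, 1.0, size=K)
    return np.sum(g / ((k - 0.5) ** 2 + (c / (2 * np.pi)) ** 2)) / (2 * np.pi ** 2)

def gibbs_logistic(X, y, n_iter, rng, tau2=100.0):
    """PG-augmented Gibbs for logistic regression, prior beta ~ N(0, tau2*I)."""
    n, d = X.shape
    beta = np.zeros(d)
    kappa = y - 0.5                     # y in {0, 1}
    draws = np.empty((n_iter, d))
    for t in range(n_iter):
        # 1. omega_i | beta ~ PG(1, x_i' beta)
        omega = np.array([pg_draw(1.0, xi @ beta, rng) for xi in X])
        # 2. beta | omega is exactly Gaussian -- the whole point of the trick
        V = np.linalg.inv(X.T @ (omega[:, None] * X) + np.eye(d) / tau2)
        m = V @ (X.T @ kappa)
        beta = rng.multivariate_normal(m, V)
        draws[t] = beta
    return draws
```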

Basically, Polson’s work shows how to rid statistics of this problem in a lot of practical contexts. What Linderman et al. do is extend this to multinomial distributions: they cleverly use an identity that comes up in a first-semester probability course (you can test yourself: show that a multinomial on K categories can be written as a product of K-1 binomials) to extend the PG idea to multinomials. This has a lot of applications (they show some nice ones in the paper) and, having thought a bit about applications for the DMM, I can come up with a few more! Anyway, since I just spent the day implementing an extension to their Gaussian process framework, I thought I’d do a shout-out so that others can look at this really cool work.
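If you want to see that identity in action (and spoil the exercise above), here’s a quick numpy check, again just my own sketch: draw the first count from a binomial, then condition the rest of the stick on what remains.

```python
import numpy as np

def multinomial_via_binomials(n, p, rng):
    """Draw a multinomial(n, p) sample as a chain of K-1 binomials --
    the stick-breaking identity behind Linderman et al.'s augmentation."""
    p = np.asarray(p, dtype=float)
    counts = np.zeros(len(p), dtype=int)
    remaining_n, remaining_mass = n, 1.0
    for k in range(len(p) - 1):
        # probability of category k among the categories not yet assigned,
        # clipped to guard against floating-point overshoot
        q = min(1.0, p[k] / remaining_mass)
        counts[k] = rng.binomial(remaining_n, q)
        remaining_n -= counts[k]
        remaining_mass -= p[k]
    counts[-1] = remaining_n
    return counts
```

Drawing this way matches a direct multinomial draw in distribution, which is exactly why a PG augmentation for each binomial in the chain handles the whole multinomial.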

(While I do love the paper, be forewarned: there are a couple of typos in inconvenient places, so you can’t just use the formulas as written; you’ll have to re-derive a couple of them. Personally, that was helpful since it meant I had to really understand what they were saying, but others might not find it so.)


A blog, as it should be.

So, I’ve been using this site as more of a professional webpage for the last year or so, but I’m converting it back to what it really should be: a blog. I don’t imagine anyone is going to be reading much of it either way, but I like the blog tone – informal, digressive, and open. I’m in the last 5 months of my sabbatical and I’ve got a lot of research that’s finally out the door (some interesting things, some old collaborations, some student projects), and now I’ve got to make the most of the months that remain. So, I’m going to use this as an opportunity to chat about my daily travails, struggles with code, and (my favorite) dealing with reviewers.