• Gunsilius’ tutorial paper on optimal transport for causal inference: OT is the most intimidatingly French-math field I’ve been put off studying seriously, what with Cédric Villani - maths’ very own Willy Wonka - having written one of its main expository textbooks. FG wrote a nice paper applying OT ideas to synthetic control on CDFs (which in turn has nice connections to Athey and Imbens’ elegant but seldom-used Changes in Changes estimator), and this seems like a nice expository expansion of that work; the one-dimensional case is simple enough to sketch below. I’m partway through and enjoying it. On a similar note - Alfred Galichon’s book on OT also looks good, as does his forthcoming discrete choice book.
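In one dimension, OT is unusually concrete: the optimal map simply matches quantiles, which is why working at the level of CDFs (as the synthetic control / Changes in Changes connection does) is so natural. A toy sketch of that, not drawn from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)   # "control" outcomes
y = rng.normal(1.0, 2.0, size=500)   # "treated" outcomes

# In 1-D the optimal transport map is the quantile-quantile map
# T(x) = F_y^{-1}(F_x(x)): sort both samples and match ranks.
x_sorted, y_sorted = np.sort(x), np.sort(y)

# Wasserstein-1 distance between the two empirical distributions
w1 = np.mean(np.abs(x_sorted - y_sorted))

# Push a new control observation through the map to get its transported value
x_new = 0.3
rank = np.mean(x <= x_new)             # F_x(x_new)
transported = np.quantile(y, rank)     # F_y^{-1}(F_x(x_new))
print(w1, transported)
```

Changes in Changes builds its counterfactual distribution with essentially this kind of rank-preserving map applied across time periods.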

  • Kang et al. connect dynamic discrete choice models [Rust, Hotz-Miller, etc.] in econometrics to MaxEnt inverse reinforcement learning (IRL) and provide a clean ERM-based approach that sidesteps the need to estimate state transition probabilities. These models are a bear to estimate [I am not writing Rust’s nested fixed point trick in Spark], and the paper is a nice step forward in making them more tractable; a toy illustration of the logit/MaxEnt connection is below.
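The connection is easiest to see at the level of choice probabilities: with extreme-value taste shocks, conditional choice probabilities are a softmax over choice-specific value functions, which is exactly the MaxEnt/soft-optimal policy in IRL. A toy illustration of that object (made-up numbers, not the paper’s estimator):

```python
import numpy as np

def choice_probs(v, beta=1.0):
    """Logit / MaxEnt choice probabilities from choice-specific values v."""
    z = beta * (v - v.max())       # subtract the max for numerical stability
    expz = np.exp(z)
    return expz / expz.sum()

# choice-specific values for, say, {replace engine, keep engine}
v = np.array([-2.3, -1.1])
print(choice_probs(v))   # probability a logit/MaxEnt agent picks each action
```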

  • Rambachan and Shephard provide a nice overview of time series estimands from a causal inference point of view. The macroeconometrics literature, built around estimands such as impulse response functions (often estimated via local projections), is extremely hard to interpret from a microeconometric point of view because these objects are defined relative to the DGP rather than any notion of potential outcomes and interventions. This paper builds a bridge between the two literatures and gives affirmative answers for when some of these time series estimands are causally meaningful; a toy local projection is sketched below.
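To make “local projections” concrete: the horizon-h impulse response is estimated by regressing the outcome h periods ahead on the shock at time t, one regression per horizon. A toy simulation (my own, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
shock = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    # true dynamics: y responds to the shock and then decays geometrically
    y[t] = 0.8 * y[t - 1] + 0.5 * shock[t] + 0.2 * rng.normal()

# local projection IRF: regress y_{t+h} on shock_t for each horizon h
irf = []
for h in range(6):
    yh, s = y[h:], shock[: T - h]
    X = np.column_stack([np.ones_like(s), s])
    beta, *_ = np.linalg.lstsq(X, yh, rcond=None)
    irf.append(beta[1])
print(np.round(irf, 3))   # roughly 0.5 * 0.8**h
```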

  • Takatsu and Kuchibhotla provide a nice generalized procedure to perform inference in the M-estimation problem [estimation problems defined by maximizing or minimizing a criterion, which in smooth cases amounts to setting the derivative of a loss - an estimating equation - to zero; this covers almost all the stats/econometrics you’d ever want to do]. I love when sample-splitting is a viable alternative to gratuitous assumptions. A tiny M-estimation example is below.
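To make the bracketed definition concrete: the sample median and the sample mean are both M-estimators, minimizers of absolute and squared loss respectively. A tiny numerical illustration (unrelated to the paper’s actual procedure):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = rng.standard_t(df=3, size=200)   # heavy-tailed data

# An M-estimator is the argmin of an empirical criterion
def abs_loss(theta):
    return np.mean(np.abs(x - theta))     # minimized by the sample median

def sq_loss(theta):
    return np.mean((x - theta) ** 2)      # minimized by the sample mean

print(minimize_scalar(abs_loss).x, np.median(x))   # agree up to tolerance
print(minimize_scalar(sq_loss).x, np.mean(x))
```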

  • Michael Manring’s Tetrahedron is nuts. Manring plays a fretless bass with a unique style that involves many harmonics and percussive playing.

  • Chad Wackerman’s Drumeo appearance is great; Drumeo has drummers listen to songs with the drum track removed and create one on the fly, and this is one of the best ones. Dennis Chambers playing Tool is also great.

shameless self-promotion

  • When can we get away with using the two-way fixed effects regression? is a paper I wrote on a whim over the holidays to answer the basic question of when to worry about contamination and negative-weighting bias in staggered diff-in-diff settings. Over the last five years or so, a huge amount of tex-ink has been spilled over how this (arguably the most popular) regression in economics and adjacent fields was doomed due to its potential biases in the presence of temporal heterogeneity across adoption cohorts [see page 2 of the paper for the overview]. While these papers are all mathematically correct in that the conclusions about bias follow from the premises about arbitrary temporal heterogeneity, the question of when this actually matters isn’t answered because the motivating examples are typically contrived and feature extreme heterogeneity. There has also been a dawning realisation among applied researchers (I moonlight as one) that since modern heterogeneity-robust estimators take finer cuts of the data to shut down the ‘forbidden comparisons’ that generate negative weights, they typically have higher variance, and given the popularity of panel regression in ‘medium-scale’ data regimes (say, 50 states observed over the course of 40 years), doing away with the vanilla TWFE regression entirely shuts down the ability to study many potentially interesting and important questions. So, the paper takes an alternative route by providing a (simple, first-year-econometrics-powered) joint test of heterogeneity across cohorts that can help researchers choose between the low-variance but potentially biased TWFE regression and the high-variance but unbiased saturated regression (and its cousins, such as Callaway and Sant’Anna); a stylized sketch of the test’s logic follows below. Simulations in the paper suggest that it has good power to detect deviations from homogeneity, and it can serve as a simple check before researchers are spooked by reviewers into using the high-variance estimators. The test will be merged into pyfixest (an excellent regression library in Python that I contribute to frequently) soon. The paper is replicable from soup to nuts in this repo, and was the first time I used Typst (a modern Rust-based alternative to LaTeX) for a paper; I’m a fan of the workflow and will likely use it for future papers.
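Very roughly, the idea is to fit a version of the regression that lets every adoption cohort have its own treatment effect and jointly test whether those effects are equal; if the null survives, pooling them in the vanilla TWFE regression is much less worrying. Below is a stylized toy version of that kind of nested F-test on simulated data - my own illustration of the logic, not the paper’s exact test statistic or the pyfixest implementation:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
rows = []
for i in range(50):                               # 50 units observed for 20 years
    cohort = int(rng.choice([2005, 2010, 2015]))  # staggered adoption years
    for t in range(2000, 2020):
        post = int(t >= cohort)
        tau = {2005: 1.0, 2010: 2.0, 2015: 3.0}[cohort]   # cohort-specific effects
        rows.append(dict(unit=i, year=t, cohort=cohort, post=post,
                         y=tau * post + rng.normal()))
df = pd.DataFrame(rows)

# restricted: vanilla TWFE regression with a single pooled treatment effect
restricted = smf.ols("y ~ post + C(unit) + C(year)", data=df).fit()
# unrestricted: each adoption cohort gets its own treatment effect
unrestricted = smf.ols("y ~ post + post:C(cohort) + C(unit) + C(year)",
                       data=df).fit()

# joint F-test of effect homogeneity across cohorts; a small p-value says the
# pooled TWFE coefficient is averaging over genuinely different cohort effects
print(anova_lm(restricted, unrestricted))
```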

  • richcli: Wrote this with Claude Code to make it easier to use terminal applications. ffmpeg can do most things you’d ever want to do with video or audio, but it is a pain to use. Similarly, pdftk and ghostscript can do most things you’d ever want to do with PDFs and images, but they are a pain to use. richcli is a Python library that provides a rich-based interface to build up function calls to these tools. The most experimental part of the library is a magnet mode that works on arbitrary CLIs; it first calls command -h to get the help text, then parses that text to build up a function call interactively. I demo this feature with a pandoc call (another venerable CLI that does most document-processing things but has middling ergonomics); a stripped-down sketch of the magnet idea is below.
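The magnet-mode flow is easy to sketch: shell out for the help text, scrape candidate flags out of it, and prompt the user to fill them in. A stripped-down approximation of the idea (not richcli’s actual code):

```python
import re
import subprocess

def magnet(command: str) -> str:
    """Build a command line by scraping long options out of `command -h`."""
    proc = subprocess.run([command, "-h"], capture_output=True, text=True)
    help_text = proc.stdout or proc.stderr   # some tools print help to stderr
    flags = sorted(set(re.findall(r"--[A-Za-z][\w-]*", help_text)))
    parts = [command]
    for flag in flags:
        value = input(f"value for {flag} (blank to skip): ").strip()
        if value:
            parts.extend([flag, value])
    return " ".join(parts)

# e.g. magnet("pandoc") walks you through pandoc's long options interactively
```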