Causal Seminar: Avi Feller, University of California, Berkeley

Hawes Hall, Classroom 101, Harvard Business School

Weight for it: Equivalent outcome models of weighting estimators in causal inference

Many common outcome models, such as linear smoothers, have equivalent representations as weighting estimators. This talk explores the less-studied converse question: what are the equivalent outcome models of the weighting estimators commonly used in causal inference? For unconstrained weights (as in Riesz regression), we first demonstrate that their equivalent outcome models are penalized linear regressions. We use this result to show that augmented balancing weights, such as automatic debiased machine learning (AutoDML), are often equivalent to a single, undersmoothed outcome model, and we characterize the asymptotic properties when both the weighting and outcome models are kernel ridge regressions. For weights constrained to be non-negative or to lie on the simplex (as in traditional inverse propensity score weighting (IPW), matching, and the synthetic control method), we show that the equivalent outcome models are a form of generalized ridge regression in which the penalization depends on the target distribution. For orthogonal designs, this regularization becomes a form of penalized principal components regression. We provide analogous results for other constrained weighting estimators, including non-negative least squares and entropy balancing. Finally, we derive explicit regularization forms for augmented IPW (including with traditional logistic regression propensity score models) and for variations of the synthetic control method.
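The unconstrained case in the abstract can be made concrete with a minimal numerical sketch (our illustration, not the speaker's code; the simulated data and penalty level are arbitrary). Balancing weights that trade off dispersion against covariate balance coincide exactly with a ridge regression outcome model evaluated at the target covariate profile:

import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))                       # control-group covariates
Y = X @ rng.normal(size=d) + rng.normal(size=n)   # control-group outcomes
x_t = rng.normal(size=d)                          # target covariate profile
lam = 1.0                                         # illustrative penalty level

# Outcome-model view: fit ridge regression on controls, predict at x_t.
beta = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
est_outcome = x_t @ beta

# Weighting view: minimize lam * ||w||^2 + ||X'w - x_t||^2 over weights w
# (penalized balancing weights). The closed form is
# w = (X X' + lam * I_n)^{-1} X x_t, which the push-through identity
# rewrites as w = X (X'X + lam * I_d)^{-1} x_t -- the same ridge solve.
w = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), x_t)
est_weighting = w @ Y

print(np.isclose(est_outcome, est_weighting))     # True: identical estimators

Note the weights depend only on X and x_t, never on Y, which is what makes reading a weighting estimator as an implicit (penalized) outcome model possible.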

Based on a forthcoming JRSSB discussion paper (with David Bruns-Smith, Oliver Dukes, and Betsy Ogburn) and on ongoing work with David Arbour, Anup Rao, and Pratik Patil.

Avi Feller

Associate Professor of Public Policy and Statistics

University of California, Berkeley