Causal Inference Research Showcase

Hawes Hall, Classroom 202, Harvard Business School

Design Sensitivity and Its Implications for Weighted Observational Studies

Sensitivity to unmeasured confounding is not typically a primary consideration in designing treated-control comparisons in observational studies. We introduce a framework for researchers to optimize robustness to omitted variable bias at the design stage using a measure called design sensitivity. Design sensitivity describes the asymptotic power of a sensitivity analysis and transparently assesses the impact of different estimation strategies on sensitivity. We apply this general framework to two commonly used sensitivity models: the marginal sensitivity model and the variance-based sensitivity model. By comparing design sensitivities, we interrogate how key features of weighted designs, including choices about trimming of weights and model augmentation, affect robustness to unmeasured confounding, and how these impacts may differ between the two sensitivity models. We illustrate the proposed framework on a study examining drivers of support for the 2016 Colombian peace agreement. This is joint work with Dan Soriano and Samuel Pimentel.

Pre-print: https://arxiv.org/abs/2307.00093
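
As a rough illustration of what a sensitivity analysis under the marginal sensitivity model computes, the sketch below bounds a Hajek-weighted mean when each estimated weight may be misspecified by a multiplicative factor between 1/Λ and Λ. This is a toy calculation, not the paper's procedure; the function names and the simulated data are assumptions of the sketch.

    import numpy as np

    def hajek_upper(y, w, lam):
        """Largest Hajek-weighted mean attainable when each weight w_i may be
        multiplied by a factor in [1/lam, lam]; the maximum is attained by a
        threshold rule on the sorted outcomes, so scanning thresholds suffices."""
        order = np.argsort(y)
        y, w = y[order], w[order]
        best = -np.inf
        for k in range(len(y) + 1):
            # inflate weights for the k largest outcomes, deflate the rest
            a = np.concatenate([np.full(len(y) - k, 1.0 / lam), np.full(k, lam)])
            best = max(best, float(np.sum(a * w * y) / np.sum(a * w)))
        return best

    def hajek_bounds(y, w, lam):
        """Lower and upper bounds on the weighted mean at sensitivity level lam."""
        return -hajek_upper(-y, w, lam), hajek_upper(y, w, lam)

    # toy usage with simulated (hypothetical) outcomes and weights
    rng = np.random.default_rng(0)
    y, w = rng.normal(size=200), rng.uniform(1.0, 3.0, size=200)
    print(hajek_bounds(y, w, lam=1.5))

A sensitivity analysis asks how large Λ must become before such bounds are too wide to reject the null; design sensitivity, as used in the talk, describes the threshold value of Λ at which the power of that analysis transitions asymptotically.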

Speaker:

  • Melody Huang, 2023 Wojcicki Troper HDSI Postdoctoral Fellow, Institute for Quantitative Social Science, Harvard University

Design of Panel Experiments with Spatial and Temporal Interference

For modern online platforms, many online controlled experiments (or A/B tests) are conducted over a panel: a number of experimental units participate in an experiment that lasts for a duration of time. Such panel experiments have brought significant benefits to new product and service development. One of the main challenges faced by these platforms is interference, the setting where the treatment assigned to one unit affects the outcomes of another unit, possibly in subsequent time periods. Conventional wisdom identifies clustered experiments as the preferred approach for handling interference: experimental units are grouped so that most interference is confined within clusters rather than occurring across them. However, how to choose the proper size of each cluster remains an open question. In this work, we present a new randomized design of panel experiments and answer this question when all experimental units are modeled as vertices on a two-dimensional lattice, which is common in ridesharing and food-delivery applications. Our proposed design has two features: the first is a notion of randomized spatial clustering, which we refer to as random shaking, that partitions units into equal-size clusters; the second is a notion of balanced temporal randomization that extends classical completely randomized designs to the temporal interference setting.

We establish theoretical guarantees for our design, develop its inferential techniques, and verify its superior performance through extensive simulations, including simulations using real data from a ride-hailing platform.
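
As a rough illustration of the two design ingredients above, the sketch below shifts a regular grid partition of an n × n lattice by a random offset (a toy stand-in for random shaking; the construction in the talk may differ) and, for each resulting cluster, treats a randomly chosen half of the time periods (an analogue of balanced temporal randomization). The wrap-around at the lattice boundary and the function names are assumptions of the sketch.

    import numpy as np

    def shaken_clusters(n, c, rng):
        """Assign each vertex of an n x n lattice (treated as a torus) to a
        c x c cluster whose grid is shifted by a uniformly random offset;
        every cluster ends up with exactly c * c vertices."""
        assert n % c == 0
        dx, dy = rng.integers(0, c, size=2)          # random offset of the grid
        row_block = ((np.arange(n) + dx) % n) // c   # block index along rows
        col_block = ((np.arange(n) + dy) % n) // c   # block index along columns
        return row_block[:, None] * (n // c) + col_block[None, :]

    def balanced_temporal_assignment(n_clusters, n_periods, rng):
        """For each cluster, treat exactly half of the periods, chosen at random."""
        z = np.zeros((n_clusters, n_periods), dtype=int)
        for g in range(n_clusters):
            treated = rng.choice(n_periods, size=n_periods // 2, replace=False)
            z[g, treated] = 1
        return z

    rng = np.random.default_rng(1)
    clusters = shaken_clusters(n=12, c=3, rng=rng)   # 16 clusters of 9 units each
    z = balanced_temporal_assignment(clusters.max() + 1, n_periods=8, rng=rng)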

Speaker:

  • Tu Ni, Postdoctoral Research Fellow, Laboratory for Innovation Science at Harvard (LISH)

Anatomy of Two-Way Fixed Effects Models: Hypothetical Experiment, Exact Decomposition, and Robust Estimation

In recent decades, event studies have emerged as a leading methodology in health and social research for evaluating the causal effects of discrete interventions. In this talk, I will provide a novel characterization of the classical dynamic two-way fixed effects (TWFE) regression estimator for event studies. The decomposition is expressed in closed form and reveals, in finite samples and without approximations, the hypothetical experiment that TWFE regression adjustments approximate. This decomposition offers insights into how standard regression estimators use information from various units and time points, generalizing the notion of forbidden comparisons noted in the literature for simpler settings. I will introduce a robust weighting approach for estimation in event studies, which allows investigators to progressively build larger valid weighted contrasts by leveraging, in a sequential manner, increasingly stronger assumptions on the potential outcomes and the assignment mechanism. I will provide visualization tools and illustrate these methods in a case study of the impact of divorce reforms on female suicide.
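
For reference, the dynamic TWFE regression whose exact decomposition the talk characterizes is, in its textbook form, an OLS fit of the outcome on unit fixed effects, period fixed effects, and event-time indicators with one reference period omitted. The sketch below fits that baseline regression; the column names (id, t, rel_t) are hypothetical, and this is not the talk's decomposition or its robust weighting estimator.

    import numpy as np
    import pandas as pd

    def dynamic_twfe(df, outcome="y", unit="id", time="t", event_time="rel_t", ref=-1):
        """OLS fit of y_it = alpha_i + gamma_t + sum_k beta_k 1{rel_t == k} + e_it,
        omitting the reference event time ref. Assumes a long-format panel df with
        the hypothetical columns named above, integer event times, and ref observed."""
        X = pd.concat(
            [
                pd.get_dummies(df[unit], prefix="u", drop_first=True),
                pd.get_dummies(df[time], prefix="p", drop_first=True),
                pd.get_dummies(df[event_time], prefix="k").drop(columns=[f"k_{ref}"]),
            ],
            axis=1,
        ).astype(float)
        X.insert(0, "const", 1.0)
        beta, *_ = np.linalg.lstsq(X.values, df[outcome].values.astype(float), rcond=None)
        coefs = pd.Series(beta, index=X.columns)
        return coefs[coefs.index.str.startswith("k_")]   # dynamic (event-time) effects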

Speaker:

  • Lucy Shen, Ph.D. Candidate, Department of Biostatistics, Harvard T.H. Chan School of Public Health

Individualized Policy Evaluation and Learning under Clustered Network Interference

While there now exists a large literature on policy evaluation and learning, much of the prior work assumes that the treatment assignment of one unit does not affect the outcome of another. Unfortunately, ignoring interference may lead to biased policy evaluation and ineffective learned policies. For example, treating influential individuals who have many friends can generate positive spillover effects, thereby improving the overall performance of an individualized treatment rule (ITR). We consider the problem of evaluating and learning an optimal ITR under clustered network (or partial) interference, where clusters of units are sampled from a population and units may influence one another within each cluster. Under this model, we propose an estimator that can be used to evaluate the empirical performance of an ITR. We show that this estimator is substantially more efficient than the standard inverse probability weighting estimator, which does not impose any assumption about spillover effects. We derive a finite-sample regret bound for the learned ITR, showing that the use of our efficient evaluation estimator leads to improved performance of learned policies. Finally, we conduct simulation and empirical studies to illustrate the advantages of the proposed methodology.
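
As a point of reference for the efficiency comparison above, the sketch below shows the standard inverse probability weighting estimator of an ITR's value under clustered interference: a cluster contributes its mean outcome only when its realized treatment vector coincides with what the policy would assign, reweighted by the design probability of that vector. The per-cluster data layout is a hypothetical convention for illustration; the authors' proposed estimator is more efficient than this baseline.

    import numpy as np

    def cluster_ipw_value(clusters, policy):
        """Standard IPW estimate of an ITR's value under clustered interference.
        Each cluster is a dict with covariates 'X', realized 0/1 treatments 'A',
        outcomes 'Y', and the probability 'prob' of the realized treatment vector."""
        vals = []
        for cl in clusters:
            match = np.array_equal(np.asarray(policy(cl["X"])), np.asarray(cl["A"]))
            vals.append(match * cl["Y"].mean() / cl["prob"])
        return float(np.mean(vals))

    # toy usage: a policy that treats everyone, evaluated on one simulated cluster
    rng = np.random.default_rng(2)
    cluster = {"X": rng.normal(size=(5, 2)), "A": np.ones(5, dtype=int),
               "Y": rng.normal(size=5), "prob": 0.5 ** 5}
    print(cluster_ipw_value([cluster], policy=lambda X: np.ones(len(X), dtype=int)))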

Speaker:

  • Yi Zhang, Ph.D. Candidate, Department of Statistics, Harvard University