Hawes Hall, Classroom 201, Harvard Business School
Being Realistic About Unmeasured Biases in Observational Studies
The talk is intended as an introduction to some recent technical work, but I don’t recommend reading the technical work prior to the talk; so, there is no required pre-reading. Someone who is entirely new to the subject might optionally take a look at Chapter 9 (Sensitivity Analysis) and Chapter 10 (Design Sensitivity) of my book “Observation and Experiment: An Introduction to Causal Inference” (Harvard University Press, 2017); however, the talk will be a gentle introduction to new and somewhat more technical material. It appears that Harvard’s library provides online access to “Observation and Experiment”.
Observational studies of the effects caused by treatments are always subject to the concern that an ostensible treatment effect may reflect a bias in treatment assignment, rather than an effect actually caused by the treatment. The degree of legitimate concern is strongly affected by simple decisions that an investigator makes during the design and analysis of an observational study. Poor choices lead to heightened concern; that is, poor choices make a study sensitive to small unmeasured biases where better choices would correctly report insensitivity to larger biases. Indeed, perhaps surprisingly, unambiguous evidence of the presence of unmeasured bias may increase insensitivity to unmeasured bias. These issues are discussed with the aid of some theory and a simple example of an observational study.
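To make the idea of sensitivity to unmeasured bias concrete, here is a minimal sketch of a standard Rosenbaum-style sensitivity bound for McNemar's test with matched pairs and a binary outcome. This is an illustrative example of the general technique, not the specific new methods the talk will present; the function name and inputs are invented for illustration. Under a bias of magnitude gamma (the factor by which an unmeasured confounder may multiply the odds of treatment within a pair), the chance that the treated unit is the success in a discordant pair is bounded by gamma/(1 + gamma), giving a worst-case binomial p-value.

```python
from math import comb

def mcnemar_sensitivity_pvalue(t_wins, discordant, gamma):
    """Upper bound on the one-sided McNemar p-value when an unmeasured
    confounder can alter the within-pair odds of treatment by at most
    a factor of gamma (gamma = 1 recovers the usual test).

    t_wins: number of discordant pairs in which the treated unit succeeded
    discordant: total number of discordant pairs
    gamma: sensitivity parameter, gamma >= 1
    """
    # Worst-case probability that the treated unit is the success
    # in a discordant pair, under bias of magnitude gamma.
    p = gamma / (1.0 + gamma)
    # Upper tail of Binomial(discordant, p) at t_wins.
    return sum(comb(discordant, k) * p**k * (1 - p)**(discordant - k)
               for k in range(t_wins, discordant + 1))
```

In use, one increases gamma until the bound crosses a conventional level such as 0.05; a study whose conclusion survives only small gamma is sensitive to small unmeasured biases, while a study that withstands large gamma is insensitive, which is the distinction the abstract's "better choices" are meant to improve.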
Reading Group: 3:00 PM – 3:45 PM EST
Pre-readings:
- Sensitivity analyses informed by tests for bias in observational studies
- Bahadur Efficiency of Observational Block Designs
Speaker:
- Paul Rosenbaum, Robert G. Putzel Professor Emeritus of Statistics and Data Science, Wharton School of the University of Pennsylvania