
Wednesday, February 28, 2001 - 3:00pm

Ben Hansen

University of California, Berkeley

Location

The Wharton School

Vance Hall 208

Refreshments will be served after the seminar in 3009 SH-DH.

Medical procedures, educational programs, toxic exposures, and policy interventions can all be thought of as treatments. A treatment can elicit varying responses, depending on the person getting it. So a treatment need have no one effect; but the average effect of treatment may still be discerned by comparing the untreated, as a group, to the treated. That is, unless systematic differences between the groups confound the comparison. In social and medical science studies, it is common to adjust for differences between treatment and control groups using multiple regression or a generalization of it. Users of such techniques often face criticisms of the following sorts:

- "But you should have adjusted for the variable ___, which is not among the covariates included in your regression."

- "I am not convinced that your regression equation is true, in the sense that it accurately depicts the mechanisms deciding how subjects respond to treatment."

- "The statistical model does not address the stronger tendency of more receptive (healthier, more motivated, more able) subjects to find themselves in the treatment group (or in the control group)."

I present two complementary approaches to sensitivity analysis aimed at assessing a study's vulnerability to these complaints. The first is similar in kind to existing sensitivity analyses for regression [Rosenbaum 1986, Dempster 1988, Marcus 1997, Frank 2000], addressing the impact of omitting a variable on the regression result. The second builds on recent developments in causal inference and asymptotics in order to confront problems of model misspecification and selection head-on [Heitjan and Rubin 1991, Bickel et al. 1993, Rosenbaum 1995, Robins 2000, van der Laan et al. 2000]. The techniques are illustrated by application to a controversial College Board assessment of the effect of test-coaching services on performance on the SAT [Powers and Rock 1999].
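
To make the first criticism concrete, the textbook omitted-variable-bias identity is one standard way to formalize it; the notation below is generic and is not taken from the talk itself. Writing the "long" regression as

\[
Y = \tau Z + X\beta + \gamma U + \varepsilon ,
\]

with $Z$ the treatment indicator, $X$ the included covariates, and $U$ the omitted variable, the coefficient on $Z$ in the "short" regression that drops $U$ satisfies, in large samples,

\[
\hat{\tau}_{\text{short}} \approx \tau + \gamma \, \delta ,
\]

where $\delta$ is the coefficient on $Z$ in a regression of $U$ on $Z$ and $X$. A sensitivity analysis of the first kind asks how large the product $\gamma\delta$ would have to be before the reported treatment effect is explained away.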
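
A minimal numerical sketch of that calculation might look as follows. The numbers are hypothetical placeholders (they are not taken from Powers and Rock 1999 or from the talk); the point is only the shape of the computation: scan a grid of assumed confounder strengths and report when the bias-corrected estimate would no longer be statistically distinguishable from zero.

    import numpy as np

    # Hypothetical inputs: estimated coaching effect (SAT points) and its
    # standard error from a regression that omits the suspect variable U.
    # These values are placeholders, not results from any actual study.
    tau_hat = 20.0
    se_hat = 8.0

    # Sensitivity parameters:
    #   gamma = effect of U on the outcome, per unit of U
    #   delta = treated-vs-control imbalance in U (coefficient of Z in a
    #           regression of U on Z and the included covariates)
    # The approximate bias in tau_hat is gamma * delta.
    gammas = np.linspace(0.0, 40.0, 9)
    deltas = np.linspace(0.0, 1.0, 11)

    for g in gammas:
        for d in deltas:
            adjusted = tau_hat - g * d   # bias-corrected point estimate
            # Rough significance check, holding the standard error fixed.
            if abs(adjusted) < 1.96 * se_hat:
                print(f"gamma={g:5.1f}, delta={d:4.2f}: adjusted effect "
                      f"{adjusted:6.1f} no longer clears 1.96 * SE")
                break  # larger imbalances only weaken it further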