An observational study draws inferences about treatment effects when treatments are not randomly assigned, as they would be in a randomized experiment. The naive analysis of an observational study assumes that adjustments for measured covariates suffice to remove bias from nonrandom treatment assignment. A sensitivity analysis in an observational study determines the magnitude of bias from nonrandom treatment assignment that would need to be present to alter the qualitative conclusions of the naive analysis, say, leading to the acceptance of a null hypothesis rejected in the naive analysis. Observational studies vary greatly in their sensitivity to unmeasured biases, but a poor choice of test statistic can lead to an exaggerated report of sensitivity to bias. The Bahadur efficiency of a sensitivity analysis is introduced, calculated, and connected to established concepts, such as the power of a sensitivity analysis and the design sensitivity. The Bahadur slope equals zero when the sensitivity parameter equals the design sensitivity, but the Bahadur slope permits more refined distinctions. Specifically, the Bahadur relative efficiency can also compare two test statistics at a value of the sensitivity parameter below the minimum of their design sensitivities. Adaptive procedures that combine several tests can achieve the best design sensitivity and the best Bahadur slope of their component tests. Ultimately, in sufficiently large samples, design sensitivity is more important than efficiency for the power of a sensitivity analysis, and the exponential rate at which design sensitivity overtakes efficiency is characterized.
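To make the notion of a sensitivity analysis concrete, the sketch below computes a worst-case one-sided p-value for matched treated-control pairs under a sensitivity model in which unmeasured bias may distort the within-pair odds of treatment by at most a factor gamma >= 1; for the simple sign test, the worst-case null distribution of the number of positive pair differences is Binomial(n, gamma/(1+gamma)). This is a minimal sketch, assuming the sign test for exposition; the statistic, significance level, and illustrative counts are assumptions, not the test statistics analyzed in the paper.

# A minimal sketch, assuming the sign test for matched pairs; the counts,
# alpha, and the search bound below are illustrative, not from the paper.
from scipy.stats import binom

def worst_case_pvalue(t_pos, n_pairs, gamma):
    # Under the sensitivity model, the worst-case chance that a pair
    # difference is positive is gamma / (1 + gamma), so the worst-case
    # one-sided p-value is the Binomial upper tail P(T >= t_pos).
    p_plus = gamma / (1.0 + gamma)
    return binom.sf(t_pos - 1, n_pairs, p_plus)

def sensitivity_value(t_pos, n_pairs, alpha=0.05, gamma_hi=20.0, tol=1e-6):
    # Largest gamma at which the worst-case p-value still falls below
    # alpha, found by bisection; gamma = 1 reproduces the naive analysis.
    if worst_case_pvalue(t_pos, n_pairs, 1.0) > alpha:
        return None  # the naive analysis itself fails to reject
    lo, hi = 1.0, gamma_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst_case_pvalue(t_pos, n_pairs, mid) <= alpha:
            lo = mid
        else:
            hi = mid
    return lo

# Illustration: 317 of 400 pair differences favor treatment.
print(sensitivity_value(317, 400))

The returned changepoint is the largest gamma at which rejection survives, so a larger value indicates a study less sensitive to unmeasured bias; in large samples from a favorable situation, this changepoint tends to the design sensitivity that the abstract discusses, which is why the choice of test statistic matters so much.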