02.Causation02.Difference in differences - sporedata/researchdesigneR GitHub Wiki
- Difference in differences (DD) is often used when a healthcare implementation (e.g., a policy or clinical practice guideline) has led to a change in clinical practice, including patient outcomes. These changes can also occur in response to events such as a pandemic.[1] [2]
- Need a cohort that received an implementation (for example, a specific clinical practice guideline) and a control cohort, both with identical or very similar characteristics.
- DD compares data before and after the specific event (e.g., a treatment) for two cohorts under similar conditions. Thus, our sample is divided into four groups, according to the table below:

|  | Before event | After event |
| --- | --- | --- |
| Control group | A | B |
| Treatment group | C | D |
In the example above, A-B and C-D represent the extent to which the control and treatment groups changed over the same event period. Since the event did not impact the control group, its change (A-B) was due to other factors, which must also have influenced the treatment group (C-D). A-C and B-D represent the differences between the control and treatment groups before and after the event, respectively.
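The two-by-two logic above can be sketched numerically. This is a minimal sketch with hypothetical group means (not data from any cited study):

```python
# Hypothetical group means (illustrative values only):
# A = control before, B = control after,
# C = treatment before, D = treatment after
A, B, C, D = 10.0, 12.0, 10.0, 15.0

control_change = B - A      # change due to background factors only
treatment_change = D - C    # background factors + intervention effect

# Subtracting the control change removes the background trend,
# leaving the difference-in-differences estimate of the effect.
dd_estimate = treatment_change - control_change
print(dd_estimate)  # 3.0
```

The subtraction is what makes DD robust to any factor that shifts both groups equally over time.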
It is helpful to keep in mind the DD regression diagram (see Metaphors in 4. Output).
To assess DD, we can use a regression analysis, traditionally an Ordinary Least Squares (OLS) model:

Y = β0 + β1 · Time + β2 · Intervention + β3 · (Time × Intervention) + ε

where Time is a variable for the period (e.g., time 1 vs. time 2), Intervention is a variable for the policy intervention, and Time × Intervention is an interaction between the two. This parameterization isolates the effect of β3, i.e., the situation where we are at time 2 and the intervention is present. An important detail is that the model assumes the two groups are similar except for the intervention, which is what allows DD to account for unmeasured confounding. Since that assumption usually cannot be fully checked, DD models are also frequently accompanied by matching mechanisms that pair the two samples. For more details, see the references below.
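The parameterization above can be sketched with simulated data (hypothetical effect sizes; NumPy's least-squares routine stands in for a full OLS implementation). The interaction coefficient β3 recovers exactly the difference in differences of the four group means:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for illustration: 100 patients per cell.
n = 100
time = np.repeat([0, 1, 0, 1], n)    # 0 = before, 1 = after
treat = np.repeat([0, 0, 1, 1], n)   # 0 = control, 1 = treatment
true_effect = 3.0                    # hypothetical intervention effect

# Outcome: baseline + secular trend + intervention effect + noise
y = 10 + 2 * time + true_effect * time * treat + rng.normal(0, 1, 4 * n)

# Design matrix: intercept, Time, Intervention, Time x Intervention
X = np.column_stack([np.ones_like(y), time, treat, time * treat])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# In this saturated model, beta[3] equals (D - C) - (B - A) of the cell means
dd_from_means = (y[(time == 1) & (treat == 1)].mean()
                 - y[(time == 0) & (treat == 1)].mean()) \
              - (y[(time == 1) & (treat == 0)].mean()
                 - y[(time == 0) & (treat == 0)].mean())
print(round(beta[3], 3), round(dd_from_means, 3))
```

In practice one would fit the same model with `statsmodels` or R's `lm()` to also obtain standard errors and confidence intervals for β3.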
- Causal Bayesian Networks
- Instrumental Variables
- Heterogeneity of effect subgroup analysis, mediation, and moderation
- Propensity Scores
- Regression discontinuity design
- Causal machine learning
- Interrupted time series
- Books
- Articles
- Difference-in-Differences Method in Comparative Effectiveness Research: Utility with Unbalanced Groups [4].
- Methods for Evaluating Changes in Health Care Policy - The Difference-in-Differences Approach [5].
- US Food and Drug Administration Approvals of Drugs and Devices Based on Nonrandomized Clinical Trials [6].
- Common references for causation
- Association of State Access Standards With Accessibility to Specialists for Medicaid Managed Care Enrollees (non-open source, and therefore no tables and plots are displayed here)
- Overall, there was no significant improvement in timely access to specialty services for MMC enrollees in the period following implementation of standard(s) (adjusted difference-in-differences, -1.2 percentage points; 95% CI, -2.7 to 0.1), nor was there any impact of access standards on insurance-based disparities in access (0.6 percentage points; 95% CI, -4.3 to 5.4). There was heterogeneity across states, with 1 state that implemented both time and distance standards demonstrating significant improvements in access and reductions in disparities.
- Association of State Access Standards With Accessibility to Specialists for Medicaid Managed Care Enrollees (non-open source, and therefore no tables and plots are displayed here)
- Table 2 presents five metrics: pre-intervention outcomes, post-intervention outcomes, unadjusted pre-post change, unadjusted DD with 95% CI, and adjusted DD with 95% CI
- Figure 1 presents a time plot comparing the two interventions
- Figure 2 presents a subgroup analysis for different outcome measures
- DD allows for causal inferences that are robust to measured and unmeasured confounders, although the degree to which confounding is adjusted for cannot be directly assessed. This approach contrasts with propensity scores, where the impact on confounding can be measured but unmeasured confounding can only partially be accounted for through high-dimensional propensity score adjustment.[3]
- [Empirical Strategies in Labor Economics](https://www.sciencedirect.com/topics/economics-econometrics-and-finance/difference-in-differences)
[1] Bíró A. Reduced user fees for antibiotics under age 5 in Hungary: Effect on antibiotic use and imbalances in the implementation. PLoS ONE. 2019 Jun 28;14(6):e0219085.
[2] Price RA, Frank RG, Cleary PD, Goldie SJ. Effects of direct-to-consumer advertising and clinical guidelines on appropriate use of human papillomavirus DNA tests. Medical Care. 2011 Feb;49(2):132-8.
[3] Schneeweiss S, Rassen JA, Glynn RJ, Avorn J, Mogun H, Brookhart MA. High-dimensional propensity score adjustment in studies of effects using health care claims data. Epidemiology (Cambridge, Mass.). 2009 Jul;20(4):512.
[4] Zhou H, Taber C, Arcona S, Li Y. Difference-in-differences method in comparative effectiveness research: utility with unbalanced groups. Applied health economics and health policy. 2016 Aug 1;14(4):419-29.
[5] Dimick JB, Ryan AM. Methods for evaluating changes in health care policy: the difference-in-differences approach. Jama. 2014 Dec 10;312(22):2401-2.
[6] Razavi M, Glasziou P, Klocksieben FA, Ioannidis JP, Chalmers I, Djulbegovic B. US Food and Drug Administration Approvals of drugs and devices based on nonrandomized clinical trials: a systematic review and meta-analysis. JAMA Network Open. 2019 Sep 4;2(9):e1911111.
[7] Dimick JB, Ryan AM. Methods for evaluating changes in health care policy: the difference-in-differences approach. Jama. 2014 Dec 10;312(22):2401-2.
[8] Cunningham S. Causal Inference: The Mixtape. Yale University Press; 2021.