09. Machine learning / 10. Clinical decision support systems - sporedata/researchdesigneR GitHub Wiki

1. Use cases: in which situations should I use this method?

  • Clinical Decision Support Systems (CDSSs) are used for information management. They support the use of clinical data science in everyday clinical practice [1].

  • CDSSs assist and improve medical decision-making by proposing recommendations based on a multitude of patient data, often faster than healthcare professionals could synthesize that information on their own [2].

  • Machine learning (ML)-powered artificial intelligence (AI) methods are increasingly applied in the form of CDSSs to assist healthcare professionals (HCPs) in predicting patient outcomes [2].

  • Evaluating the implementation of a decision support system in clinical practice [3].

2. Input: what kind of data does the method require?

  • A decision support system that is available or in development
  • A clinical setting where the implementation can be conducted

3. Algorithm: how does the method work?

Model mechanics

  • Interpretability of ML models ensures that users can understand the reasoning behind a model's decisions or recommendations and can identify biases, which increases trust in those models. In short, interpretability is a decisive element in the broader adoption of ML-based methodologies in healthcare. It is commonly associated with the transparency of a decision system's components (features, algorithms, parameters, generated models, etc.).

  • Interpretable machine learning provides a way to compute the impact of each variable on a final model; through this approach, we can better understand each variable's "weight" in the final model.
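
A minimal sketch of this idea, using base R only: a logistic regression is a directly interpretable model whose coefficients act as the variable "weights". The simulated data set and variable names below are illustrative assumptions, not part of any specific CDSS.

```r
# Minimal sketch: a directly interpretable model exposes each variable's "weight".
# The simulated data set and variable names are illustrative only.
set.seed(42)
n <- 500
toy <- data.frame(
  age        = rnorm(n, 65, 10),
  creatinine = rnorm(n, 1.1, 0.3),
  diabetes   = rbinom(n, 1, 0.3)
)

# Simulate a binary outcome driven by the three covariates
lin <- -8 + 0.08 * toy$age + 1.5 * toy$creatinine + 0.7 * toy$diabetes
toy$readmitted <- rbinom(n, 1, plogis(lin))

# Logistic regression: coefficients (log-odds) are the variable weights
fit <- glm(readmitted ~ age + creatinine + diabetes, data = toy, family = binomial)
summary(fit)$coefficients   # estimate, standard error, z value, p value
exp(coef(fit))              # odds ratios, a clinically familiar scale
```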

  • In contrast, explainable machine learning refers to methods that aim to open up the black box of a model, focusing on what happens between the input and the output. Through explainable ML, we can understand how the model works with the variables to arrive at the observed outputs. These methods are often interpretable algorithmic models that approximate the black-box model; an explanation for a black-box model can then be obtained through an interpretable approximation (e.g., SHAP or LIME).

  • Overall, interpretability concerns being able to follow how a model arrives at its predictions, while explainability concerns the role that individual risk factors play in a given prediction.

  • LIME and SHAP are surrogate models, meaning they still rely on the underlying black-box machine learning model. A data point has to be converted into a format that is easier to work with when building the surrogate model (by sampling data points in the neighborhood of the original data point). This representation is called interpretable because it is understandable to humans (the data are converted into binary indicators).

  • LIME and SHAP exploit the property of local explainability to build surrogate models for black-box machine learning models and thereby provide them with interpretability.
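
As a hedged illustration of the surrogate-model idea, the sketch below fits a LIME-style local linear surrogate around a single observation using the iml package (iml::LocalModel). The random forest and the Boston housing data from MASS are stand-ins for a clinical risk model and patient data; treat this as a sketch under those assumptions (LocalModel also relies on the glmnet and gower packages internally).

```r
# Minimal sketch of a LIME-style local surrogate with the iml package.
# The "black-box" model and data set are illustrative stand-ins.
library(randomForest)
library(iml)

data(Boston, package = "MASS")

# Black-box model: random forest predicting median house value
rf <- randomForest(medv ~ ., data = Boston, ntree = 200)

# Wrap the model and data so iml can query predictions
predictor <- Predictor$new(rf, data = Boston[, -14], y = Boston$medv)

# Local surrogate around a single observation ("one patient at a time")
local_fit <- LocalModel$new(predictor, x.interest = Boston[1, -14], k = 5)
local_fit$results   # weights of the local linear approximation
plot(local_fit)
```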

  • In contrast with classical statistics, where variable coefficients describe the average importance of a variable within a group, LIME and SHAP describe contributions for a specific individual (in our case, a patient). This is a crucial difference since healthcare professionals care for individuals (one patient at a time) rather than groups (usually the concern of a healthcare policymaker).

  • SHAP and LIME differ in their underlying algorithms; for example, LIME fits local linear models while SHAP does not require that assumption. SHAP is also more computationally intensive since it relies on, among other things, Monte Carlo sampling. You can find two great articles explaining these concepts and models here and here.
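
The sketch below illustrates the sampling-based Shapley approach using iml::Shapley, which approximates SHAP-type contributions by Monte Carlo sampling of feature coalitions; the data set, model, and sample.size value are illustrative assumptions, not the canonical SHAP implementation.

```r
# Minimal sketch of sampling-based Shapley values with the iml package.
# Data, model, and sample.size are illustrative assumptions.
library(randomForest)
library(iml)

data(Boston, package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 200)
predictor <- Predictor$new(rf, data = Boston[, -14], y = Boston$medv)

# Per-observation ("per-patient") contributions for a single case,
# estimated by Monte Carlo sampling of feature coalitions
shap <- Shapley$new(predictor, x.interest = Boston[1, -14], sample.size = 100)
shap$results   # one phi value per feature for this individual
plot(shap)
```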

  • LIVE is an alternative implementation of LIME for regression problems that emphasizes the role of model visualization in the understanding of complex models. In comparison with the original LIME, both the method of local exploration and the handling of interpretable inputs are changed. The dataset for local exploration is simulated by perturbing the explained instance one feature at a time.

  • The live package approximates the local structure of the black-box model around a single point in the feature space. The idea behind breakDown is different: the main goal of that package is to decompose model predictions into parts that can be attributed to particular variables, which is straightforward for linear (and, more generally, additive) models. You can find more information about the live and breakDown methods here.
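
A minimal sketch of the breakDown idea, assuming the broken() interface documented in the breakDown package and using a linear model, where the additive decomposition is exact; the data set and formula are illustrative only.

```r
# Minimal sketch of the breakDown approach: decompose one prediction
# into additive variable contributions. Data and formula are illustrative;
# the broken() signature is assumed from the package documentation.
library(breakDown)

data(Boston, package = "MASS")
lm_fit <- lm(medv ~ lstat + rm + age + crim, data = Boston)

# Contribution of each variable to the prediction for one observation
bd <- broken(lm_fit, new_observation = Boston[1, ])
bd        # table of per-variable contributions
plot(bd)  # waterfall-style plot of the decomposition
```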

Reporting guidelines

Data science packages

  • shinymanager provides a simple and secure authentication mechanism for standalone 'Shiny' applications (a minimal sketch follows this list).
  • shiny-gallery comprises code and other documentation for apps in the 'Shiny' gallery.
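
A minimal sketch of wrapping a Shiny app with shinymanager, following the package's documented secure_app()/secure_server() pattern; the hard-coded credentials are purely illustrative, and a real deployment should use shinymanager's encrypted credentials database or another secure store.

```r
# Minimal sketch: add authentication to a Shiny app with shinymanager.
# Hard-coded credentials are for illustration only.
library(shiny)
library(shinymanager)

credentials <- data.frame(
  user     = "clinician",
  password = "change-me",
  stringsAsFactors = FALSE
)

# Wrap the UI so a login screen is shown before the app
ui <- secure_app(
  fluidPage(
    titlePanel("CDSS prototype"),
    verbatimTextOutput("who")
  )
)

server <- function(input, output, session) {
  # Check submitted credentials against the table above
  res_auth <- secure_server(check_credentials = check_credentials(credentials))
  output$who <- renderPrint(reactiveValuesToList(res_auth))
}

shinyApp(ui, server)
```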

Suggested companion methods

  • Shapley Additive Explanation (SHAP)

  • Local Interpretable Model-Agnostic Explanations (LIME)

Learning materials

  1. Books
  2. Articles

Resources

References

[1] Wasylewicz ATM, Scheepers-Hoeks AMJW. Clinical Decision Support Systems. 2018 Dec 22. In: Kubben P, Dumontier M, Dekker A, editors. Fundamentals of Clinical Data Science. Cham (CH): Springer; 2019. Chapter 11.

[2] Amann J, Vetter D, Blomberg SN, Christensen HC, Coffee M, et al. To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems. PLOS Digit Health. 2022;1(2):e0000016.

[3] Marafino BJ, Schuler A, Liu VX, Escobar GJ, Baiocchi M. Predicting preventable hospital readmissions with causal machine learning [published correction appears in Health Serv Res. 2021 Feb;56(1):168]. Health Serv Res. 2020;55(6):993-1002.

[4] Panagiotou OA, Högg LH, Hricak H, Khleif SN, Levy MA, Magnus D, Murphy MJ, Patel B, Winn RA, Nass SJ, Gatsonis C. Clinical Application of Computational Methods in Precision Oncology. JAMA Oncol. 2020 May 14.
