Clinical artificial intelligence (AI)/machine learning (ML) is anticipated to offer new abilities in clinical decision support, diagnostic reasoning, precision medicine, clinical operational support, and clinical research, but careful consideration is needed to ensure these technologies work effectively in the clinic. Here, we detail the clinical ML/AI design process, identifying several key questions and common classes of issues that arise with ML tools, motivated by real-world examples, so that clinicians and researchers can better anticipate and correct for such issues in their own use of ML/AI techniques.
Key points
- Without careful consideration of the machine learning (ML)/artificial intelligence (AI) design process and the problem of interest, effective use of ML/AI in the clinic is challenged by several key problems.
- Model misspecification occurs when the evaluation process for an ML tool does not sufficiently mirror the real world, which can lead to significant problems when tools are deployed (see the first sketch following this list).
- Even when correctly specified, models must be developed responsibly, taking into consideration performance across numerous patient subpopulations and evaluation settings to ensure models are fair and unbiased (see the second sketch following this list).
- Interpretability tools can help catch some of these issues before model deployment, but they pose challenges of their own and must be used carefully (see the third sketch following this list).
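To make the misspecification point concrete, below is a minimal sketch of one common evaluation pitfall: splitting records at random when each patient contributes several correlated records, so that test patients leak into training. The synthetic data and all variable names are illustrative assumptions of ours, not drawn from this article or any specific study; because the labels here carry no generalizable signal, any gap between the two reported scores is pure leakage.

```python
# Minimal sketch of evaluation misspecification via patient-level leakage.
# Synthetic setup (illustrative assumption): each patient contributes several
# near-identical records; labels are assigned at random per patient, so an
# honest, patient-grouped evaluation should score near chance (AUROC ~0.5).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupShuffleSplit, train_test_split

rng = np.random.default_rng(0)
n_patients, records_per_patient = 200, 5
patient_ids = np.repeat(np.arange(n_patients), records_per_patient)

# Each patient has a stable "signature" feature, so records from the same
# patient cluster tightly together in feature space.
signature = rng.normal(size=n_patients)
X = np.column_stack([
    signature[patient_ids] + rng.normal(scale=0.1, size=patient_ids.size),
    rng.normal(size=patient_ids.size),
])
y = rng.integers(0, 2, n_patients)[patient_ids]  # patient-level random labels

# Misspecified evaluation: records split at random, so the same patients
# appear in both training and test sets and the model can memorize them.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
leaky = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("record-level split AUROC:",
      roc_auc_score(y_te, leaky.predict_proba(X_te)[:, 1]))  # inflated

# Deployment-faithful evaluation: hold out whole patients.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))
honest = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
print("patient-level split AUROC:",
      roc_auc_score(y[test_idx], honest.predict_proba(X[test_idx])[:, 1]))  # ~0.5
```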
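The responsibility point is, at its core, a measurement practice: report performance within each patient subpopulation, not only in aggregate. The sketch below computes subgroup-wise AUROC from held-out predictions; the data, the scores, and the grouping column are hypothetical placeholders of our own choosing.

```python
# Minimal sketch of a subgroup performance audit: compute a discrimination
# metric within each patient subpopulation rather than only overall.
# Labels, scores, and the "sex" grouping are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auroc(y_true, y_score, groups):
    """AUROC computed separately within each subgroup."""
    df = pd.DataFrame({"y": y_true, "score": y_score, "group": groups})
    return {
        name: roc_auc_score(sub["y"], sub["score"])
        for name, sub in df.groupby("group")
        if sub["y"].nunique() == 2  # AUROC is undefined for one-class groups
    }

# Hypothetical held-out labels and model scores; in practice these come from
# a trained model evaluated on a test set with demographic annotations.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=1000)
scores = 0.6 * y + 0.7 * rng.random(1000)  # informative but imperfect scores
sex = rng.choice(["female", "male"], size=1000)

print("overall AUROC:", roc_auc_score(y, scores))
print("subgroup AUROC:", subgroup_auroc(y, scores, sex))
```

In practice, small subgroups also warrant uncertainty estimates (for example, bootstrapped confidence intervals) so that apparent performance gaps are not overread.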
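As one concrete instance of the interpretability point, the sketch below applies permutation importance, a model-agnostic tool from scikit-learn, to synthetic data of our own construction in which a care-process "shortcut" feature tracks the label more cleanly than physiology does; surfacing the model's reliance on such a feature is exactly the kind of pre-deployment check the key point describes.

```python
# Minimal sketch (synthetic data; illustrative assumption) showing how
# permutation importance can surface a spurious "shortcut" feature that a
# model leans on, flagging a problem before deployment.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
disease = rng.integers(0, 2, n)

# "portable_scanner" is a care-process artifact that tracks the label more
# cleanly than the noisy physiologic feature does (hypothetical setup).
portable_scanner = (disease + (rng.random(n) < 0.1)) % 2
physiology = disease + rng.normal(scale=1.5, size=n)
X = np.column_stack([physiology, portable_scanner])

X_tr, X_te, y_tr, y_te = train_test_split(X, disease, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and measure the drop in held-out accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, drop in zip(["physiology", "portable_scanner"], result.importances_mean):
    print(f"{name}: accuracy drop when shuffled = {drop:.3f}")
```

Note that permutation importance can itself mislead when features are strongly correlated, which is part of why interpretability tools must be used carefully.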
Introduction
Clinical artificial intelligence (AI)/machine learning (ML) is anticipated to offer new abilities in clinical decision support, diagnostic reasoning, precision medicine, clinical operational support, and clinical research.
However, in practice, it is hard to determine how one can effectively use ML/AI techniques for real-world problems.
This confusion stems from various factors, including that: (1) ML for health is often explored in solely academic settings, without considering the nuances of true clinical deployments; (2) clinicians deploying ML algorithms in practice may come from different backgrounds and have differing levels of familiarity with ML methodology and its assumptions than those in the research community; and (3) many nontechnical barriers prevent the widespread use of ML in health care, limiting practical examples of its usage.
Ultimately, regardless of its cause, this uncertainty results in many real-world problems, such as the development and use of ML algorithms in inappropriate contexts, ML models demonstrating unexpectedly poor performance in deployment scenarios, and substantial disparities in algorithm performance across patient subpopulations.
In this work, we provide a practical overview of clinical ML/AI designed to ameliorate some of these issues. We focus not on providing a technical overview of clinical ML, because such tutorials and resources are widely available elsewhere,
but rather on outlining the distinct aspects of clinical AI that give rise to common fallacies, which can hinder progress in this domain. By understanding these fallacies, clinician scientists can develop stronger intuition regarding the effective use of ML tools in health research and better anticipate key issues in the deployment of ML/AI tools.
To build understanding of clinical ML/AI, we first delve into its design specifications. We focus specifically on core questions that one must answer before beginning any ML project, and on examples of ML applications within health care across medical imaging modalities, tabular electronic health record (EHR) data, and clinical text.
Next, we overview three key areas of common ML fallacies: (1) misspecification, that is, when models are inapplicable; (2) irresponsibility, that is, when models are misleading; and (3) uninterpretability, that is, when models are inexplicable. Within each area, we detail several concrete kinds of problems clinicians are likely to encounter and how to approach addressing them. Finally, we close with concluding thoughts.