Physicians Charter

Further Reading and Supporting Research

Introduction, Values, Mission, and Vision
    1. This is a wonderful piece on Dr. René Théophile Hyacinthe Laënnec, the French physician who invented the stethoscope in the early 1800s.
    2. In a similar vein, we have supporting histories of the PET scan and medical ultrasound.
    3. Stanford’s Institute for Human-Centered Artificial Intelligence has a great, brief overview of common AI terms and definitions.
Human-Centered Design and Engagement
    1. The New England Journal of Medicine provides a thorough summary of Artificial Intelligence and Machine Learning in Clinical Medicine, covering its history, machine learning, and chatbots, along with proposed research standards.
    2. Stanford’s HAI (Institute for Human-Centered Artificial Intelligence) is an outstanding resource for all things AI; its Values section aligns well with this Charter’s vision, and its Humanity section aligns perfectly with our first chapter.
    3. This paper from San Diego comparing ChatGPT responses to Reddit physician responses made headlines and sparked controversy when it suggested that OpenAI’s tool provided more empathetic responses than the physician users on Reddit.
    4. Want to learn more about Human-Computer Interaction? Ben Shneiderman’s book Human-Centered AI is a great place to start.
    5. And for more HCI information, look to HCI International’s book series and conference.
Data Quality and Privacy
    1. This 2023 article from Computers in Biology and Medicine delves into the barriers to AI adoption in healthcare, focusing on privacy and data concerns, and presents an overview of advanced privacy-preserving techniques such as federated learning and hybrid approaches.
    2. A team in China provides an excellent review of federated learning and privacy-preserving algorithms as solutions to data fragmentation and privacy challenges in healthcare AI.
    3. This is a summary of a roundtable discussion by the US Department of Health and Human Services (HHS) on the opportunities, challenges, and strategies for using data to train AI models in healthcare, offering recommendations for HHS and stakeholders to further AI advancements.
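The federated learning idea that runs through these readings can be sketched in a few lines. The toy below is a minimal, illustrative sketch, not the method of any cited paper; all function names and data are invented here. Each "hospital" trains on its own records, and only model weights, never patient data, leave the site to be averaged.

```python
# Minimal sketch of federated averaging on a toy linear model (y = w * x).
# Illustrative only: real systems add secure aggregation, weighting by
# site size, and far richer models.

def local_update(weights, data, lr=0.1):
    """One pass of gradient descent on a single site's local data."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x  # gradient of squared error
        w -= lr * grad
    return w

def federated_average(global_w, site_datasets, rounds=10):
    """Each round: sites train locally, then the server averages the weights."""
    w = global_w
    for _ in range(rounds):
        local_weights = [local_update(w, data) for data in site_datasets]
        w = sum(local_weights) / len(local_weights)  # simple unweighted average
    return w

# Two "hospitals" whose data both follow y ≈ 3x; the raw rows are never pooled.
site_a = [(1.0, 3.0), (2.0, 6.0)]
site_b = [(1.5, 4.5), (3.0, 9.0)]
w = federated_average(0.0, [site_a, site_b])
```

The shared global model converges toward the common underlying relationship even though neither site ever sees the other's data, which is the core privacy appeal described in the readings above.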
Ethics and Bias Mitigation
    1. This NEJM paper discusses the use of race in predictive algorithms and the problems that arise from it, highlighting the importance of knowing what goes into an algorithm.
Trust: Transparency, Explainability, and Accountability
    1. Carnegie Mellon’s Violet Turri has an outstanding piece on “What is Explainable AI?”
    2. This famous paper from Microsoft on explainable models revealed an issue with a neural network that predicted a lower likelihood of death from pneumonia for patients with asthma (when in actuality their risk is higher). The model’s conclusions were technically accurate in its training data, but only because asthmatic patients were more often managed in the ICU, where more aggressive, intensive care lowered their mortality.
    3. This paper from PLOS is an outstanding review of the ethical, theoretical, and practical concerns around AI models and tools, focusing specifically on how emergency dispatch operators did not adopt a tool that predicted which emergency calls involved a cardiac arrest because they did not trust or understand it.
    4. Epic’s Sepsis model is discussed in this paper and is unfortunately a good example of a model failing “in the wild.”
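The asthma/pneumonia pitfall described above can be made concrete with a small numeric illustration. The counts below are invented purely for illustration, not taken from the cited paper: because every asthmatic patient in this toy dataset received ICU-level care, asthma appears "protective" to any model trained on outcomes alone.

```python
# Toy illustration of confounding: asthma looks protective only because
# asthmatic patients were managed in the ICU. All counts are invented.

patients = (
    [{"asthma": True,  "icu": True,  "died": False}] * 90 +
    [{"asthma": True,  "icu": True,  "died": True}]  * 10 +
    [{"asthma": False, "icu": False, "died": False}] * 80 +
    [{"asthma": False, "icu": False, "died": True}]  * 20
)

def mortality(rows):
    """Fraction of patients in `rows` who died."""
    return sum(p["died"] for p in rows) / len(rows)

asthma_rate = mortality([p for p in patients if p["asthma"]])
no_asthma_rate = mortality([p for p in patients if not p["asthma"]])
# Naively, asthma looks protective (10% vs 20% mortality), but the asthma
# group was entirely ICU-managed, and that care drove the difference.
```

An interpretable model surfaces this surprising pattern so humans can catch it; a black box simply learns it, which is the core argument of the readings in this section.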
Continuous Validation, Monitoring, and Improvement
    1. This Lancet paper raises concerns about the generalizability of models in healthcare and explains why models may be less generalizable than we would like to think.
    2. This NEJM correspondence (in particular, its Table 1) provides an overview of approaches to recognizing and addressing dataset shift.
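One simple way to operationalize the monitoring these readings call for is a two-sample check comparing incoming data against a reference window from training. The sketch below is an illustrative assumption on my part, not one of the specific approaches in the NEJM table; real deployments would use richer per-feature tests (e.g., Kolmogorov-Smirnov statistics or population stability indices).

```python
# Illustrative dataset-shift monitor: flag when the mean of an input
# feature in an incoming batch drifts from the training-time reference.
import statistics

def shift_alert(reference, incoming, z_threshold=3.0):
    """Return True when the incoming batch mean drifts from the reference.

    Uses a z-score of the incoming mean under the reference distribution.
    """
    ref_mean = statistics.mean(reference)
    ref_sd = statistics.stdev(reference)
    n = len(incoming)
    z = (statistics.mean(incoming) - ref_mean) / (ref_sd / n ** 0.5)
    return abs(z) > z_threshold

# Reference: lab values seen during model training (invented numbers).
reference = [4.1, 4.3, 3.9, 4.0, 4.2, 4.1, 4.0, 3.8, 4.2, 4.1]
# Incoming batch after, say, a new analyzer is installed upstream.
drifted = [5.0, 5.2, 4.9, 5.1, 5.0, 5.3, 4.8, 5.1]
stable = [4.0, 4.2, 4.1, 3.9, 4.1, 4.0, 4.2, 4.0]
```

Running such a check continuously on each model input is one concrete form of the "continuous validation" this section advocates.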
Collaborative Approach and Workflow Integration
    1. Authors from Ohio State, writing in the Journal of Medical Imaging, provide a roadmap for integrating AI specifically into radiology workflows.
    2. European Radiology reviews the challenges of AI in radiology, and offers solutions to them, in this piece.
    3. This article from the UK discusses advances in Human-Computer Interaction, organizing the discussion into 6 categories: Interfaces, Visualization, Electronic Health Records, Devices, Usability, and Clinical Decision Support Systems.
Regulatory Compliance and Safety
    1. The FDA provides guidance for AI and ML in Software as a Medical Device applications.
    2. The FDA also has a helpful navigator to help developers determine if their software is a medical device.
Education and Support
    1. This paper from Health Education UK argues that the UK healthcare workforce will need education and training, including the creation of an educational framework, to use AI successfully.
    2. This article interviews 45 physician champions and discusses what they felt was critical to the adoption of a new EHR and what challenges they faced.
    3. Health Affairs discusses 7 lessons from EHR implementation, including the value of hands-on training.
    4. Here are 10 more lessons learned from an academic medical center that rolled out an EHR across its 6 hospitals, 2 campuses, and 46 outpatient sites.
Patient-Important Outcomes and Value in Healthcare
    1. This paper reviews the very concept of patient-important outcomes and acknowledges that medicine doesn’t often ask patients which outcomes matter to them.
    2. Even in research today, we don’t focus nearly enough on patient-important outcomes; diabetes and critical care are just two examples.
Understanding the Limits of AI
    1. Thinking, Fast and Slow, by psychologist Daniel Kahneman, describes two systems that humans use when thinking: a fast, instinctive system and a slower, deliberative one.
    2. The complementarity-driven deferral to clinicians (CoDoC) system proposes a model that can help decide when to rely on an AI tool’s prediction and when to defer to clinician judgment.
    3. This paper demonstrates how AI can be helpful to humans — by re-ordering CT scan reading queues — without replacing physician interpretation.
    4. The tragic crash of Air France flight 447 (AF447) is an example of the devastating consequences of automation bias.
    5. Automation bias is hard to overcome, even when humans are educated and warned that it exists.