The Physicians' Charter

AI in medicine must always put patients first.

We are a dedicated group of physicians, excited by artificial intelligence's potential to improve medical care.

AI must focus on our patients' needs, grounded in ethical principles. Its adoption and implementation in healthcare must be safe, accurate, and fair.

We see AI as a valuable co-pilot for physicians, but never a replacement for the human touch of the patient-doctor relationship.

Organized by MDCalc

Our Core Values

Our core values build on the four pillars of medical ethics: autonomy, beneficence, non-maleficence, and justice.

The 10 Rules of the Road for AI Implementation

Rule #1

Human-Centered Design & Engagement

Keep the patient-doctor relationship central, involve patients and doctors early in the development process, and inform them about how AI is used in their care.

Examples in Clinical Practice

In developing an AI tool for assessing depression, feedback from patients and psychiatrists is included in its initial development to ensure the tool is both clinically useful and user-friendly.

During an AI-guided surgery, the surgeon discloses and explains to the patient the role of AI in assisting but not performing the procedure.

An AI diagnostic tool is used as part of a physician's evaluation of the patient, so that the physician can interpret the tool's output using their medical expertise.

Rule #2

Data Quality and Privacy

Prioritize high-quality, diverse, and geographically relevant data for training AI models. Respect patient privacy and foster responsible data interpretation.

Examples in Clinical Practice

An AI tool for predicting disease progression should use diverse datasets representing different geographic and demographic cohorts, ensuring its efficacy across a broad patient population.

EHR data used to train AI models for outcome prediction should be de-identified and encrypted end-to-end to maintain patient privacy.

Rule #3

Ethics, Bias Mitigation, and Their Implications

Monitor and mitigate biases in AI algorithms and consider potential ethical implications in AI deployment.

Examples in Clinical Practice

In developing an AI tool for skin cancer detection, the model is trained on a diverse dataset representing various skin types to minimize bias.

When deploying an AI tool for prioritizing patient referrals, consider its impact on access to care to ensure it does not inadvertently favor or disadvantage certain patient groups.

Rule #4

Trust: Transparency, Explainability, and Accountability

Encourage a "glass box" approach to AI, provide clear information about its workings, and establish a robust framework for trust and accountability.

Examples in Clinical Practice

When using an AI model for predictive analytics, both patients and physicians are provided with clear, understandable explanations of how the model works, what data it uses in its analysis, and how it makes predictions (recognizing that some AI models do not allow this level of explainability).

An accountability framework is implemented so that, in the case of a misdiagnosis by an AI tool, there are mechanisms for addressing the error and preventing recurrence through rapid feedback from clinicians to AI developers.

Rule #5

Continuous Validation, Feedback, and Improvement

Ensure that AI tools are assessed through formal, objective evaluations of their utility as well as through everyday use, and that they iterate on those findings; models require continuous review. Encourage feedback and provide clear paths for users to share their experiences and insights, keeping the tools effective, safe, and up to date.

Examples in Clinical Practice

An AI tool for diagnosing diabetic retinopathy is regularly revalidated on diverse, independent datasets, and its performance is closely monitored over time using standardized benchmarks. Any erroneous suggestions from the tool are reported directly by physicians via a dedicated feedback system, leading to its refinement and improvement.

An AI system built for heart disease diagnosis is not just validated once, but regularly checked against standard performance measures and monitored for drift in accuracy. Doctors using the system can report any inaccuracies they find, helping make the system better over time.

Rule #6

Collaborative Approach and Workflow Integration

Promote a collaborative, compensated, multidisciplinary approach to AI development, focusing on AI tools that integrate seamlessly into healthcare workflows.

Examples in Clinical Practice

In developing an AI tool for radiology, radiologists, data scientists, ethicists, and patients are all involved, with compensation structures in place for the time required to review these models.

An AI tool for analyzing CT scans is designed to integrate directly into a hospital's existing imaging and EHR systems, providing insights within the existing workflow.

Rule #7

Regulatory Compliance and Safety

Adhere to regulatory guidelines for AI development and implementation, and implement robust safeguards to protect patient safety.

Examples in Clinical Practice

AI-based diagnostic tools are developed in accordance with FDA guidelines and regulations, ensuring their safety and efficacy.

In AI-assisted surgery, backup safety measures are considered and ready for use to prevent potential harm from AI-induced errors.

Rule #8

Education and Support

Provide comprehensive education and training to healthcare providers about AI, and support them in their roles as primary interpreters of AI outputs.

Examples in Clinical Practice

In a hospital deploying an AI tool for radiology interpretation, a comprehensive training program is provided, offering radiologists extensive knowledge about the tool, its use cases, and how to interpret and verify its outputs.

A healthcare organization offers an ongoing support program for clinicians, providing regular updates, resources, and direct lines of communication with the AI development team for queries and feedback.

Rule #9

Patient-Centered Outcomes and Value in Healthcare

Develop clinically meaningful AI tools that enhance healthcare value, reduce overdiagnosis and overtreatment, and provide better outcomes at the same or lower cost.

Examples in Clinical Practice

An AI tool for lung cancer screening is trained to accurately differentiate between benign and malignant nodules, thus reducing unnecessary invasive procedures and patient anxiety.

An AI system for detecting pulmonary emboli takes into consideration that some emboli may be clinically insignificant (or even false positives). It incorporates the risks of anticoagulation and PE treatment into its model, recognizing that patient-important outcomes, not just detection of clot, are the overall goal.

Rule #10

Understand the Limits of AI

Recognize that while AI can augment and improve healthcare delivery, it is not a panacea and cannot solve every problem in our complex, fragmented healthcare system. We must understand its limitations, recognize when human intervention is needed, and strike a balance between technological assistance and human action to provide optimal care to patients.

Examples in Clinical Practice

AI models can assist in predicting disease progression based on extensive data sets, but these predictions are purely statistical and do not account for individual patient responses and differences. Clinicians must interpret these predictions in light of their personal understanding of the patient's condition and unique circumstances.

An AI algorithm may be capable of sorting and prioritizing patient referrals based on medical data, but it cannot substitute for the human touch in empathizing with patient fears and anxieties. Clinicians are needed to offer comfort and provide the caring human interaction that patients often require.

About the Charter

Our Physicians’ Charter stems from a growing concern among the physician creators of MDCalc about the rapid pace of AI and how it will be implemented in healthcare. Finding no existing resource written by frontline physicians that offered practical, clear guidance grounded in real-world clinical scenarios, we assembled a diverse group of experts to create one. This document is the collective effort of practicing physicians across numerous medical specialties. We all share enthusiasm for AI’s potential in medicine, and are steadfast in our commitment to its ethical, fair, and patient-focused implementation.

Delivering care to patients every day gives us a unique understanding of the intricacies of healthcare, a perspective we consider essential in guiding AI’s integration into our field. Physicians feel a true urgency to set safe boundaries for, and high expectations of, AI in the clinical environment.

We hope this charter offers a practical, understandable, and accessible framework to guide all stakeholders. As physician leaders in this era of AI evolution, we must always prioritize the values and the welfare of our patients above all. This charter is our pledge to ensure AI in medicine is effective, ethical, and fundamentally patient-centric.

About the Authors

Anthony Cardillo, MD

Anthony is a pathologist and Clinical Informatics fellow at NYU Langone. His primary interests are in medical cybersecurity and the digital transition of pathology. He presently serves on two national committees involving artificial intelligence and ethics in the College of American Pathologists and the American Medical Informatics Association. More recently, Dr. Cardillo was recognized in The Pathologist’s Power List in 2021 and 2022, and in 2023 placed in the US and UK Summit for Democracy competition to develop secure AI models.

William “Will” Collins, MD

Will is a hospital medicine physician and Clinical Assistant Professor at the Stanford School of Medicine. He is also the current president of the Society of Hospital Medicine San Francisco Bay Area Chapter. He has been captivated by both the potential and the risk of AI applications in medicine. From his experience in clinical research, he is interested in designing rigorous trials to assess AI interventions to show meaningful outcomes for patients and medical providers.

Dustin Cotliar, MD MPH

Dustin brings significant care-delivery expertise from over eight years of clinical practice and from studying healthcare policy and management at Columbia University. He has served as a clinical consultant with the Kaiser Family Foundation, where he published health system research that has been cited by The New York Times, Vox, Politico, and others. A recent first-place winner at MIT’s Hacking Medicine, one of the largest clinical hackathons in the country, Dr. Cotliar is passionate about building innovative clinical products, especially those rooted in artificial intelligence and machine learning.

Carly Eckert, MD, MPH

Carly is a physician technologist located in Chapel Hill, NC. She is double-boarded in preventive medicine and clinical informatics and has led clinical and data teams within healthcare startups for nearly a decade. Her areas of focus include AI governance, ethics, and bias. She also enjoys teaching physicians and other healthcare providers about practical, applied AI solutions and how to communicate with technical teams.

Sarah Gebauer, MD

Sarah is an experienced hospital leader and healthcare technology consultant with a background in clinical informatics. She’s passionate about physician engagement with artificial intelligence and founded Machine Learning for MDs, a free online community providing education, training, and networking for physicians in the AI space.

Raouf Hajji, MD, PhD

Raouf is an Assistant Professor of Internal Medicine at the Faculty of Medicine of Sousse, Tunisia. Drawing on his expertise in clinical practice, biomedical research, and academia, he has served as an author, reviewer, and editor for many peer-reviewed medical journals and book chapters. He is co-founder and Medical Lead of the International Medical Community (IMC), an international initiative that operates as a health-technology innovation hub, with the main aim of advancing international cooperation and connecting the healthcare sector worldwide to cutting-edge technologies. You can join him on LinkedIn, where he publishes a weekly medical newsletter, Healthcare Present & Future, with updates on biomedical research, academia, clinical practice, and emerging technologies in healthcare.

S. Morgan Jeffries, MD

Morgan is a neurohospitalist and physician informatician at Geisinger, where his work focuses on quality measures, workflow improvements, and AI strategy. He’s also an assistant professor at the Geisinger Commonwealth School of Medicine and a member of Epic’s Adult Neurology Specialty Steering Board. He’s interested in the similarities and differences between human and AI minds, AI safety and alignment, and AI evaluation. He occasionally writes on LinkedIn and less frequently on X (née Twitter).

Matt Sakumoto, MD

Matt is a virtualist primary care physician in San Francisco and an Adjunct Clinical Professor at UCSF, focusing on virtual care and clinician-efficiency tools for the EHR. With prior industry experience at multiple telehealth startups and as a clinician-advisor to many early-stage companies, he is passionate about exploring and expanding the digital health landscape.

William Small, MD, MBA

Will is a hospital medicine physician and clinical informatics fellow at NYU Langone Health, focused on how communication technologies affect the clinician experience with the EHR and patient outcomes. He is dedicated to understanding how best to evaluate the outputs of generative AI, and is a key member of a team studying how integrating a generative AI chatbot into patient-provider In Basket communications affects provider efficiency and satisfaction.

Graham Walker, MD

Graham organized the Physicians’ Charter and is an emergency physician and clinical informaticist in San Francisco, California, with The Permanente Medical Group (TPMG). He enjoys working at the intersection of technology and medicine and built MDCalc and theNNT, two free online resources that have allowed millions of clinicians around the world to incorporate evidence-based decision-making into their medical practice. You can find him on LinkedIn, writing about technology and referring to himself in the third person.

Endorse the Charter

Agree with our statements about the role and boundaries of AI in medicine? Endorse our charter and we’ll include your name and information on the site.

Add Your Name

Fill out the form below to have your name, role, specialty, and location details added to our list of supporters. (Your email address will never be sold, shared, or spammed.)
