Weekly Update
February 28, 2026
Can we trust AI-driven decision tools to uphold medicine’s “do no harm” principle? In the latest JAMA+ AI Conversations, JAMA Associate Editor Yulin Hswen, ScD, interviews Mass General Brigham’s David Wu, MD, PhD, and Beth Israel Deaconess Medical Center’s Adam Rodman, MD, MPH, about their work evaluating the safety and reliability of AI in health care.
They describe their live leaderboard that tracks errors made by large language models, from frontier models to specialized medical tools. Their findings show that errors of omission are more common than errors of commission, echoing patterns seen in human clinicians.
As more physicians routinely adopt AI for clinical support, understanding how these tools shape clinical decisions becomes only more critical. The episode offers practical lessons, chess analogies, and discussion of medical education and of how physician-AI teams might work.
Listen now on Spotify | Apple Podcasts | YouTube | JAMA.com.
Editor’s Picks in this week’s JAMA+ AI:
- A study of 10 LLMs found that clinical recommendations for simulated gastroenterology cases varied by patient demographics: patients from marginalized groups (transgender, unhoused, low-income, or unemployed) were more often recommended for mental health assessment. The findings highlight the risk of perpetuating bias when AI is deployed in clinical decision-making. (JAMA Network Open)
- Diabetic retinopathy screening rates have historically been low among underserved populations due to barriers in accessing traditional eye care. The Diabetic Retinopathy Screening Point-of-Care Artificial Intelligence trial aims to demonstrate that a multicomponent approach within federally qualified health centers can improve patient adherence to annual retinal screening and diabetes standard of care. (JAMA Network Open)
- A 2025 executive order seeks to preempt most state-level AI regulations, including those that relate to medical AI, by pushing federal agencies to challenge or override state laws. In a commentary, the authors argue that this order exceeds presidential authority and risks undermining the balance of federalism, stifling state-led innovations in health care, and creating regulatory gaps. (JAMA)
Multimedia
JAMA
From AI Bench to AI Bedside
Yulin Hswen, ScD, MPH
Research Letter
JAMA Network Open
Sociodemographic Bias in Large Language Model–Assisted Gastroenterology
Asaf Levartovsky, MD; Mahmud Omar, MD; Girish N. Nadkarni, MD, MPH, CPH; et al
Original Investigation
JAMA Network Open
Diabetic Retinopathy Screening Among Federally Qualified Health Center Patients Using Point-of-Care AI
Edgar A. Diaz, MD; Marva L. Seifert, PhD, MPH; Vida Gruning, JD; et al
Viewpoint
JAMA
Preemption at the Intersection of Health Care and Artificial Intelligence
Carmel Shachar, JD, MPH; David Blumenthal, MD, MPP; I. Glenn Cohen, JD; et al
AUDIO
AI and "Do No Harm"
For Authors
JAMA+ AI highlights the role of artificial intelligence and digital medicine in health care, drawing on original research, editorials, and medical news from across the JAMA Network. Please submit your manuscripts directly to JAMA and the JAMA Network journals. More information and complete instructions are available at the For Authors page.
Thank you for subscribing to JAMA Network email alerts. This message was sent to buiduytam1@gmail.com by updates@email.jamanetwork.com.
To update your contact information, change your email preferences, or unsubscribe, click here.
To ensure you always receive JAMA Network emails, add the email address updates@email.jamanetwork.com to your address book.
To unsubscribe by mail, contact:
JAMA Network
AMA Plaza
330 N Wabash Ave
Chicago, IL 60611
Or call (800) 621-8335.
©2026 American Medical Association. All rights reserved.