Weekly Update
March 28, 2026
|
What safety, evidence, and transparency standards are needed for AI chatbots used in mental health contexts, particularly for young people?
John Torous, MD, MS, Associate Professor of Psychiatry at Beth Israel Deaconess Medical Center in Boston and JAMA Psychiatry Web Editor, joined JAMA Associate Editor Yulin Hswen, ScD, to discuss what innovation and responsible guardrails look like for the future of AI chatbots in psychiatry, with a focus on youth mental health.
Dr Torous emphasized the need for high-quality randomized studies in mental health research. Echoing a common sentiment in the field, he noted that chatbot studies should move beyond simple comparisons to waitlists toward designs that rigorously isolate therapeutic components and assess true clinical impact.
Listen now on Spotify | Apple Podcasts | YouTube | JAMA.com.
Editor’s Picks in this week’s JAMA+ AI:
- A diagnostic study assessed how leading LLM chatbots interpret descriptions of probability when characterizing medical risks. The models often abstained from offering numeric estimates of risk, especially in higher-risk contexts. When they did offer estimates, their responses more closely reflected lay interpretations than regulatory standards, suggesting LLMs may inadvertently amplify misunderstandings about medical risk. (JAMA Network Open)
- In a cohort of over 22,000 children with blunt trauma, the PECARN cervical spine injury (CSI) prediction rule outperformed two older rules, offering the highest sensitivity for detecting injury and the lowest projected CT scan rate. The results support the PECARN rule as the preferred tool for pediatric CSI risk assessment, optimizing early detection while limiting unnecessary radiation exposure. (JAMA Network Open)
- Commercial large language models remain highly vulnerable to prompt-injection attacks: 94% of attempts to manipulate clinical advice were successful, including potentially dangerous recommendations such as contraindicated pregnancy drugs. Even state-of-the-art models with advanced safety features were susceptible, highlighting an urgent need for adversarial testing and stronger safeguards before clinical use. (JAMA Network Open)
Multimedia
JAMA
AI Chatbots and Youth Mental Health
Yulin Hswen, ScD, MPH
Research Letter
JAMA Network Open
Large Language Models and Communication of Medical Probabilities
Nicholas J. Jackson, BS; Katerina Andreadis, MS; Jessica S. Ancker, MPH, PhD
Original Investigation
JAMA Network Open
Comparison of Cervical Spine Injury Clinical Prediction Rules for Children After Blunt Trauma
Lois K. Lee, MD, MPH; Fahd A. Ahmad, MD, MPH; Lorin R. Browne, DO; et al
Original Investigation
JAMA Network Open
Vulnerability of Large Language Models to Prompt Injection When Providing Medical Advice
Ro Woon Lee, MD; Tae Joon Jun, PhD; Jeong-Moo Lee, MD; et al
AUDIO
AI Chatbots and Youth Mental Health
Thank you for subscribing to JAMA Network email alerts. This message was sent to buiduytam1@gmail.com by updates@email.jamanetwork.com.
To update your contact information, change your email preferences, or unsubscribe, click here.
To ensure you always receive JAMA Network emails, add the email address updates@email.jamanetwork.com to your address book.
To unsubscribe by mail, contact:
JAMA Network
AMA Plaza
330 N Wabash Ave
Chicago, IL 60611
Or call (800) 621-8335.
©2026 American Medical Association. All rights reserved.