Symposia
Technology/Digital Health
Johannes Eichstaedt, PhD (he/him/his)
Assistant Professor
Stanford University
Palo Alto, California, United States
Aadesh Salecha, M.S. (he/him/his)
Data Scientist
Stanford University
Stanford, California, United States
Zoe Tait, B.A. (she/her/hers)
Research Assistant
Stanford University
Stanford, California, United States
Philip Held, Ph.D. (he/him/his)
Assistant Professor
Rush University Medical Center
Chicago, Illinois, United States
Betsy Stade, PhD
Postdoctoral Researcher
Stanford University
Stanford, California, United States
Huy Vu, PhD
Data Scientist
Stony Brook University
Stanford, California, United States
Cody Boland, PhD
Clinical Research Scientist
VA Palo Alto Health Care System
Stanford, California, United States
Shannon Wiltsey Stirman, Ph.D.
Professor
National Center for PTSD and Stanford University
Menlo Park, California, United States
The more patients practice therapy skills, the higher the quality of their practice and the better the therapy outcomes across mental disorders. However, rates of skill practice compliance are often low: only about half of skill practice assignments are completed, and completion rates tend to decline over the course of treatment. Historically, cognitive and behavioral skills have been implemented as paper or digital/app worksheets, which passively capture client entries and are assigned for completion at home between sessions. Large Language Models (LLMs) offer a new technological affordance to rethink and redesign these important self-practice components. LLM-based “copilots” can clarify instructions and generally enliven the practice experience. However, psychotherapy is an uncommonly high-stakes domain, and establishing the feasibility, safety, and non-inferiority of these technological innovations is paramount.
In this talk, we present preliminary results for an LLM-based copilot that facilitates the completion of thought record worksheets (N = 20 clinicians and clients). We designed a chatbot interface that programmatically draws on various LLM modules, shaped by the expertise of clients with lived experience, clinicians, and implementation experts. The system interacts with clients via text chat to iteratively fill out a thought record worksheet. It offers immediate feedback on clients’ work, providing quality control and corrective suggestions (“Is this an event or a thought?”), which may increase the quality of clients’ skill practice, and it provides encouragement and guidance when needed. Throughout, we engaged in a user-centered design process involving clinicians and clients, which iteratively improved the usability and effectiveness of the copilot. We present qualitative and quantitative results from the user interviews, along with pilot data on client engagement and symptom reduction.
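To make the modular design concrete, the loop below is a minimal, purely illustrative sketch of how a copilot might step through thought-record fields and attach corrective feedback. All names (`WORKSHEET_FIELDS`, `classify_entry`, `fill_worksheet`) are hypothetical stand-ins, and the rule-based check stands in for what would, in a system like the one described, be a dedicated LLM quality-control module.

```python
# Hypothetical sketch of a modular copilot loop for a CBT thought record.
# Field names and the heuristic in classify_entry are illustrative only;
# in a real system each module would be an LLM call.

WORKSHEET_FIELDS = [
    "situation", "automatic_thought", "emotion",
    "evidence_for", "evidence_against", "balanced_thought",
]

def classify_entry(field, text):
    """Stand-in for an LLM quality-control module: flags a likely
    event/thought confusion when the 'automatic_thought' field reads
    like a bare description of an event rather than an appraisal."""
    markers = ("i ", "i'm", "think", "feel")
    if field == "automatic_thought" and not any(m in text.lower() for m in markers):
        return "Is this an event or a thought? Try phrasing it as 'I ...'."
    return None  # entry looks acceptable

def fill_worksheet(client_replies):
    """Iterate through the worksheet, storing each reply along with any
    corrective feedback the quality-control module produces."""
    record, feedback = {}, {}
    for field, reply in zip(WORKSHEET_FIELDS, client_replies):
        note = classify_entry(field, reply)
        if note:
            feedback[field] = note
        record[field] = reply
    return record, feedback

record, feedback = fill_worksheet([
    "My manager rescheduled our meeting.",
    "My manager rescheduled our meeting.",   # an event pasted where a thought belongs
    "anxious",
    "She seemed rushed.",
    "She rescheduled with everyone this week.",
    "I think the change probably has nothing to do with me.",
])
```

In this toy run, the second entry triggers the corrective prompt while the remaining fields pass, mirroring the kind of immediate, field-level feedback described above; the actual copilot's modules and prompts are not reproduced here.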