Symposia
Technology/Digital Health
Andrew M. Sherrill, Ph.D. (he/him/his)
Assistant Professor
Emory University
Atlanta, Georgia, United States
Saeed Abdullah, Ph.D.
Assistant Professor of Information Sciences and Technology
The Pennsylvania State University
University Park, Pennsylvania, United States
Rosa Arriaga, Ph.D.
Associate Professor
Georgia Institute of Technology
Atlanta, Georgia, United States
Christopher Wiese, Ph.D. (he/him/his)
Assistant Professor
Georgia Institute of Technology
Atlanta, Georgia, United States
Recently, several research groups have begun to develop artificial intelligence (AI) systems that aim to review psychotherapy session recordings and automatically detect protocol fidelity. Future psychotherapists may have opportunities to team with AI systems to learn and maintain clinical skills in evidence-based treatments (ESTs). “AI teaming” can revolutionize the availability and effectiveness of EST training by reducing the reliance on passive didactic experiences (e.g., workshops) and on resource-consuming session review with expert consultants, who are often not accessible. Importantly, perspectives from industrial-organizational psychology suggest that integrating an AI teammate into the workplace has far greater implications than simply using AI as a data processing tool. Psychotherapists will likely evaluate and interact with AI teammates in fundamentally different ways than they do their human teammates. These AI systems may fundamentally change the nature of clinical work, raising questions such as the ethical division of responsibilities between AI and humans (e.g., the AI gathers data while the human makes data-informed decisions). The mental health field needs AI development and integration guidelines that address a range of ethical issues, including transparency (e.g., explainability of outputs), fairness (e.g., prevention of bias), nonmaleficence (e.g., prevention of misuse), and privacy (e.g., protections for data security).
This presentation reports findings from three interdependent studies. First, we present quantitative findings from a cross-sectional survey of a nationally representative sample of psychotherapists that characterizes person- and setting-level barriers and facilitators of AI teaming within mental health workplaces. Second, we present qualitative data characterizing what psychotherapists want from their AI teammates and how to design AI systems that psychotherapists perceive as ethical, useful, and usable. Third, we share insights from the early stages of designing a novel computational system called “TEAMMAIT” (Trustworthy, Explainable, and Adaptive Monitoring Machine for AI Teams), whose first use case is Prolonged Exposure for PTSD. The presentation will focus on how to address ethical considerations early in the development of this technology to ensure that AI teaming in the context of learning and maintaining clinical skills is inclusive and unbiased, and that it fosters a healthcare ecosystem supporting health equity.