Symposia
Military and Veterans Psychology
Sean Lauderdale, Ph.D. (he/him/his)
Assistant Professor
University of Houston – Clear Lake
Houston, Texas, United States
Since the release of ChatGPT, clinicians have explored how artificial intelligence (AI) can supplement mental health care. Research shows that ChatGPT outperforms physicians in providing unbiased, evidence-based treatment recommendations for major depression (Levkovich & Elyoseph, 2023), but its recognition of suicide risk varies (Elyoseph & Levkovich, 2023). AI’s detection of PTSD in veterans has not been assessed. In this investigation, we assessed ChatGPT-4’s recognition of PTSD symptoms, gender bias in symptom recognition, PTSD knowledge, and veteran stigma. Vignettes describing a woman veteran and a man veteran were used to assess ChatGPT-4’s identification of PTSD. For each trial, ChatGPT-4 was opened in a private browsing session and provided with a vignette; it was then directed to rate the veteran’s levels of distress and happiness. ChatGPT-4 also completed the PTSD Knowledge Questionnaire (Harik et al., 2016) and the Endorsed and Anticipated Stigma Inventory (Vogt et al., 2014). After ChatGPT-4 answered all items, the data were copied, the conversation was deleted, and the tab was closed. All of ChatGPT-4’s responses were compared with those of human participants. A total of 21 trials were generated (n = 11 man-veteran, n = 10 woman-veteran). ChatGPT-4’s identification of the vignette character as having PTSD (100%) was superior to that of humans (65.9%; χ²(1) = 10.34, p < .01). ChatGPT-4 rated the veteran as experiencing more distress and less happiness than did human participants (average t(252) = 15.78, p < .01). ChatGPT-4 rated women and men veterans similarly across all variables (e.g., severity of PTSD symptoms; all ts < 1.60, all ps > .05). There were few differences between ChatGPT-4 and humans in identification of trauma-associated events (χ²(1) < 2.85, ps > .05) and PTSD symptoms (χ²(1) < 3.55, ps > .05). Humans identified a greater number of non-trauma events (e.g., divorce) and non-trauma symptoms as associated with PTSD than did ChatGPT-4 (χ²(1) > 3.93, ps < .05). Although ChatGPT-4 identified more evidence-based treatments than humans (χ²(1) > 9.2, ps < .05), it also endorsed non-evidence-based treatments (e.g., marijuana and dogs) at rates similar to humans (χ²(1) < 3.45, ps > .05). ChatGPT-4’s stigma toward veterans with mental health difficulties did not differ from that of humans (ts < 2.10, ps > .05). ChatGPT-4 rated veterans as more likely to seek treatment than did humans (t(676) = 3.13, p < .01). Although ChatGPT-4 recognizes PTSD and provides unbiased, evidence-based treatment recommendations, it also promotes non-evidence-based recommendations and holds public stigma toward veterans. The use of AI to assess veterans’ mental health needs will be discussed.