Symposia
Suicide and Self-Injury
Kelly L. Zuromski, Ph.D.
Research Associate
Harvard University
Cambridge, Massachusetts, United States
Daniel Low, M.A.
Ph.D. Candidate, Speech and Hearing Bioscience
Harvard University
Cambridge, Massachusetts, United States
Noah Jones, M.S.
Ph.D. Candidate, Graduate Research Assistant
Massachusetts Institute of Technology
Cambridge, Massachusetts, United States
Daniel Kessler, M.A.
Graduate Research Assistant
Massachusetts Institute of Technology
Cambridge, Massachusetts, United States
Carlos Madden, B.A.
Chief Community Officer
RallyPoint Networks, Inc.
Middleton, Massachusetts, United States
Satrajit Ghosh, Ph.D.
Principal Research Scientist
Massachusetts Institute of Technology
Cambridge, Massachusetts, United States
Dave Gowel, B.S.
Chief Executive Officer
RallyPoint Networks, Inc.
Middleton, Massachusetts, United States
Matthew K. Nock, Ph.D. (he/him/his)
Research Scientist
Harvard University
Cambridge, Massachusetts, United States
Background: Suicide is a leading cause of death among U.S. military Servicemembers and Veterans. Unfortunately, many military personnel at risk for suicide do not receive mental health treatment due to both attitudinal (e.g., stigma) and structural barriers. One way to improve resource availability for at-risk individuals outside of traditional healthcare settings is to harness social media. Self-disclosures related to mental health and suicide are relatively common on social media, which may provide an opportunity to identify people who would likely benefit from help. In the current study, we developed an algorithm to identify posts with suicide-related content on a military-specific social media platform called RallyPoint.
Methods: Public posts (~5.3 million) shared on RallyPoint between September 2013 and March 2020 were used in this study. Our team reviewed a subset of posts (n = 7,967) and labeled them for the presence/absence of suicidal thoughts and behaviors using a codebook generated for this study. These labeled posts were then used to train several machine learning models.
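As an illustration of this kind of pipeline (this is a minimal sketch, not the study's code), one of the "classic" baselines could be trained roughly as follows; the file name and the "text"/"label" column names below are hypothetical placeholders for the labeled posts.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical file of human-labeled posts (label: 1 = suicidal content, 0 = not)
posts = pd.read_csv("labeled_posts.csv")
X_train, X_test, y_train, y_test = train_test_split(
    posts["text"], posts["label"], test_size=0.2,
    stratify=posts["label"], random_state=0)

# TF-IDF features + logistic regression; class weighting because suicidal
# posts are a small minority of the corpus.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),
    LogisticRegression(max_iter=1000, class_weight="balanced"))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))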
Results: We tested both classic machine learning models (e.g., logistic regression, light gradient boosting machines) and deep learning models. The best-performing model was a deep learning (RoBERTa) model that incorporated post text and metadata (e.g., post type, user-generated tags). This model detected suicidal posts with relatively high sensitivity (.85) and specificity (.96), a precision of .64, an F1 score of .73, and an area under the precision-recall curve of .84. Compared to non-suicidal posts, suicidal posts were more likely to contain explicit mentions of suicide, descriptions of risk factors (e.g., depression, PTSD) and help-seeking, and first-person singular pronouns.
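For reference, performance measures of this kind can be computed from a model's predictions with standard tooling; the short sketch below (not the study's code, using made-up toy labels and probabilities) shows one way to obtain them in scikit-learn.

import numpy as np
from sklearn.metrics import (average_precision_score, confusion_matrix,
                             f1_score, precision_score)

# Toy, made-up data: true labels (1 = suicidal post), predicted labels,
# and predicted probabilities for the positive class.
y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.1, 0.8, 0.3, 0.4, 0.6, 0.05])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # recall on suicidal posts
specificity = tn / (tn + fp)   # recall on non-suicidal posts
precision = precision_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
auprc = average_precision_score(y_true, y_prob)  # area under the PR curve
print(sensitivity, specificity, precision, f1, auprc)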
Conclusions: Our results demonstrate the feasibility and potential promise of using social media posts to identify at-risk Servicemembers and Veterans. We also identified key differences between suicidal and non-suicidal posts, which shed light on how military personnel disclose suicide risk on social media. Future directions and clinical applications of this work will be discussed.