Dissemination & Implementation Science
Improving Accurate Detection of Mental Health Treatment Needs for Youth through a Standardized Feedback Report
Sophia Young
Research Assistant
University of Pennsylvania School of Medicine
Philadelphia, Pennsylvania, United States
Amanda Jensen-Doss, Ph.D. (she/her/hers)
Professor
University of Miami
Coral Gables, Florida, United States
Grace S. Woodard, M.S.
Doctoral Student
University of Miami
Coral Gables, Florida, United States
Megan Brady, B.S.
Project Manager
University of Pennsylvania School of Medicine
Philadelphia, Pennsylvania, United States
Jesslyn Jamison, Ph.D. (she/her/hers)
Postdoctoral Fellow
Penn Center for Mental Health
Philadelphia, Pennsylvania, United States
Emily Becker-Haimes, Ph.D. (she/her/hers)
Assistant Professor
University of Pennsylvania
Philadelphia, Pennsylvania, United States
Background: Despite investment in the implementation of evidence-based practices (EBPs) for youth mental health care, most youth in need of psychiatric intervention are undertreated. One reason for this is that clinicians rarely use evidence-based assessments (EBAs) and struggle to accurately detect youth target problems, especially in community settings. To improve clinicians’ ability to accurately detect treatment needs and deliver appropriate EBPs, we aim to develop a clinical decision-making algorithm based on EBA data, delivered in the format of a feedback report. We first tested whether receiving a simulated feedback report containing standardized assessment results and corresponding treatment recommendations would impact clinicians’ diagnostic and treatment planning decisions.
Methods: We presented 102 clinicians (M age = 36.85; 85% female; 92% held a master’s degree) with one of two randomly assigned vignettes describing a prototypical youth client in a community mental health setting. Each vignette described symptoms of one of two potential target problems (anxiety or depression). After reading the vignette, clinicians rated the perceived likelihood of a range of potential treatment targets, including anxiety and depression. Clinicians were then randomly assigned to receive a simulated feedback report that either confirmed or disconfirmed the initially presented target problem, after which they re-rated the perceived treatment targets. Clinicians also reported their perceptions of the feedback report and rated their attitudes toward standardized assessment tools.
Results: On average, clinicians who received confirmatory feedback reports strengthened their likelihood ratings for target problems (p = .008, Cohen’s d = -.38) and decreased those for other potential targets. Those receiving disconfirmatory reports showed decreases in likelihood ratings for the initial target problems (p < .001, Cohen’s d = .60) and increases in ratings for the alternative problem presented as salient in the feedback report (p < .001, Cohen’s d = -1.14). Disconfirmatory reports showed greater effects on clinicians’ ratings than confirmatory reports (p < .001). Overall, clinicians rated the perceived utility of the report highly (M = 3.74 out of 5) and indicated it would be likely to influence their practice (M = 4.01 out of 5). Clinicians with more positive attitudes toward standardized assessments rated the feedback report as more useful (r = .36, p < .001) and more likely to influence their practice (r = .35, p < .001).
Conclusions: Results of this analog study suggest that a diagnostic feedback report can impact clinicians’ diagnostic formulations and that clinicians in this sample perceived the report as useful. Future work to further develop and test this feedback report is indicated.