Couples / Close Relationships
Statistical Features of Computer-Recognized Facial Affect during Social Support Interactions Predict Married Couples’ Relationship Quality
Alyssa J. Miville, B.A., M.S. (she/her/hers)
Graduate Student
Binghamton University
Binghamton, New York, United States
Malak Fora, B.S., M.S.
Graduate Student
Binghamton University
Binghamton, New York, United States
Richard E. Mattson, Ph.D.
Associate Professor
Binghamton University
Binghamton, New York, United States
Congyu Wu, Ph.D.
Assistant Professor
Binghamton University
Binghamton, New York, United States
Communication processes are integral to relationship functioning and form a cornerstone of virtually all etiological models of relationship dysfunction. Prior research has focused on communication processes within specific contexts that are theorized to be particularly relevant to relationship outcomes. How couples provide and receive support is connected to the health of the relationship as a whole and consistently predicts relationship satisfaction and long-term stability. Within these exchanges, the affective components of communication are believed to be particularly important, with expressions of positive affect (e.g., humor) and negative emotion (e.g., anger/contempt) generally associated with better and worse relationship outcomes, respectively. Though important, these studies often examine only frequency counts of specific behaviors, overlooking other characteristics of these rich data that may prove informative about support behavior. To examine this possibility, the current study applied a machine learning approach to observational data (videos) of social support interactions to identify features that could discriminate between happily married and distressed couples. Affect during social support exchanges was estimated using iMotions, facial expression recognition (FER) software trained to automatically quantify facially expressed affect from video recordings. Each of the 18 emotional outputs (e.g., anger, disgust, joy) was treated as a separate channel for predicting relationship quality. Couples (N=27) were divided into “non-distressed” (N=13) and “distressed” (N=14) groups based on their Quality in Marriage Inventory (QMI) scores, with scores below 80 defining the latter group. Statistical features (e.g., mean, variance, kurtosis) were extracted from each emotional channel, yielding 144 variables in total. Using Welch’s t-tests, we found that features of husbands’ expressed affect predominantly differentiated the groups, with the distressed group characterized by heavier tails in the distribution of husbands’ expressions of anger and elevated magnitude of husbands’ expressions of surprise. These variables were then input into a linear support vector machine classifier and evaluated using leave-one-subject-out cross-validation (LOSO-CV), with group membership as the outcome. With 7 retained variables, this model classified distressed and non-distressed couples with 96.3% accuracy. These results not only demonstrate that automatic facial expression analysis using iMotions can reliably discriminate between distressed and non-distressed couples, but also provide grist for the theory mill regarding the role of support communication in marital success versus dysfunction.
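The abstract does not include analysis code, but the pipeline it describes (per-channel statistical features, Welch’s t-test screening, and a linear SVM evaluated with LOSO-CV) can be sketched in Python with scipy and scikit-learn. This is a minimal illustration under stated assumptions, not the authors’ implementation: the synthetic placeholder data, all variable names, and the choice of skewness as a fourth feature are assumptions (the abstract names only mean, variance, and kurtosis as examples, and 144 variables across 18 channels and two spouses implies four features per channel).

import numpy as np
from scipy.stats import skew, kurtosis, ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

N_COUPLES, N_CHANNELS, T = 27, 18, 600  # couples, emotion channels, video frames (T is assumed)

def extract_features(series):
    """Statistical features from one emotion channel's time series."""
    return [series.mean(), series.var(), skew(series), kurtosis(series)]

# Build the feature matrix: 18 channels x 4 features x 2 spouses = 144 columns.
X = np.array([
    [f for spouse in range(2)
       for ch in range(N_CHANNELS)
       for f in extract_features(rng.random(T))]  # placeholder for real iMotions output
    for _ in range(N_COUPLES)
])
y = np.array([0] * 13 + [1] * 14)  # 0 = non-distressed, 1 = distressed (QMI < 80)

# Univariate screening with Welch's t-tests (unequal variances assumed).
_, p = ttest_ind(X[y == 0], X[y == 1], axis=0, equal_var=False)
keep = np.argsort(p)[:7]  # retain the 7 most discriminative features

# Linear SVM evaluated with leave-one-subject-out cross-validation:
# each couple is its own group, so every fold holds out exactly one couple.
logo = LeaveOneGroupOut()
acc = cross_val_score(SVC(kernel="linear"), X[:, keep], y,
                      groups=np.arange(N_COUPLES), cv=logo)
print(f"LOSO-CV accuracy: {acc.mean():.1%}")

One caveat on this sketch: screening features on the full sample before cross-validation, as done here for brevity, can leak information into the held-out folds; a stricter replication would nest the Welch’s-test selection inside each LOSO fold.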