Technology/Digital Health
Filtering Trust: Viewing AI-generated selfies negatively impacts trust in technology
Alexis Lamere, B.A.
Graduate Research Assistant
Clemson University
Clemson, South Carolina, United States
Ellena Wood
Student
Clemson University
Clemson, South Carolina, United States
Brooke L. Bennett, Ph.D.
Assistant Professor
Clemson University
Easley, South Carolina, United States
The production of selfies generated by artificial intelligence (AI) creates unrealistic beauty standards that have the potential to affect individuals' perceptions of themselves and others. The “trust paradox” in AI describes the idea that users’ willingness to use this technology may outweigh their trust in it (Kreps et al., 2023). For example, while technology users may hesitate to trust social media, AI filters used to achieve a certain appearance have been widely adopted. This trust may be further tested by fully AI-generated images displayed on platforms such as TikTok, Instagram, Snapchat, and other mainstream media apps. More research is needed to understand how interacting with AI-generated content affects the trust paradox. Thus, the present study examined the effect of viewing AI-generated selfies of influencers/content creators on social media users’ trust in technology. It was hypothesized that participants exposed to these images would report decreased trust in technology.
One hundred forty-eight female-identifying participants who were regular social media users were recruited (mean age = 37.53 years; mean BMI = 27.20). Participants primarily identified as White (67%), Black or African American (19%), Filipino (4%), Chinese (4%), American Indian or Alaska Native (2%), Japanese (2%), Korean (2%), Vietnamese (2%), Pacific Islander (1%), and other (4%). Participants completed the Propensity to Trust Technology scale before and after viewing ten AI-generated images and rated the desirability, attractiveness, similarity, and representativeness of each AI-generated influencer.
A paired-samples t-test revealed that baseline trust in technology scores (M = 22.05) significantly decreased after viewing the AI-generated selfies (M = 21.43), t(144) = 3.97, p < .001, d = .33.
Findings of the present study supported the hypothesis that trust can be affected by AI tools that alter individuals’ appearances. Specifically, when some users produce images with AI editing tools, the resulting images can diminish other users’ trust in technology. It is proposed that images manipulated with AI filters and features spread a form of misinformation: the proliferation of such images may leave viewers confused and mistrustful about what is real and what is edited. When people are unable to distinguish reality from fantasy, the result can be discomfort and distrust, underscoring the need for clearer guidance on whether and how AI tools can be safely used.