In this study, we empirically evaluate the effectiveness of three soft moderation interventions on X: warning labels, warning bundles (labels combined with community notes), and warning covers, in reducing the perceived accuracy and sharing intentions of inauthentic political content across two contexts: (1) the 2024 U.S. presidential election, and (2) a non-election setting. Using a sample of n1 = 925 X users during the election campaign, we find that both the warning bundle and the warning cover significantly reduce the perceived accuracy of manipulated content related to each presidential candidate. A follow-up evaluation with n2 = 649 X users after the election confirms these findings, reinforcing the role of such interventions as interaction frictions that effectively lower the perceived accuracy of inauthentic content concerning both politically affiliated individuals and global conflict topics. Thematic analysis of participants' explanations suggests that warning labels and covers, especially those incorporating third-party fact-checks, are viewed as less trustworthy than the community notes included in the warning bundles. Across both contexts, no intervention significantly affected participants' willingness to share the content, mainly due to concerns that sharing inauthentic material could harm self-presentation on X.