Personalization and Crowdsourcing for Misinformation Prevention
Problem: Misinformation on social media is rampant, and different people may have different needs for, and attitudes toward, mitigation strategies like warning tags.
Solution: We assess the viability of crowdsourcing as a means of identifying misinformation online, and we identify factors that may affect a person’s vulnerability to misinformation and their attitudes toward prevention efforts.
Study 1: Crowdsourcing to Detect Online Misinformation
Can we use crowdsourcing to detect online misinformation? How do a person’s cognitive, information-assessment, and personality traits affect their detection ability?
Crowdsourcing can be a means to detect and impede the spread of misinformation online. However, past studies have not deeply examined the individual characteristics—such as cognitive factors and biases—that predict crowdworker accuracy at identifying misinformation.
In our study (n = 265), Amazon Mechanical Turk (MTurk) workers and university students assessed the truthfulness and sentiment of COVID-19-related tweets and completed several surveys on personal characteristics.
Results support the viability of crowdsourcing for assessing misinformation and content stance (i.e., sentiment) on ongoing, politically charged topics like the COVID-19 pandemic; however, alignment with experts depends on cognitive, informational, dispositional, and personality traits.
This study offers insight into how crowdsourcing can be used for misinformation detection in practice, with crowd composition serving as a basis for worker recruitment and filtering.
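To make the crowd-composition idea concrete, here is a minimal sketch in Python: crowd ratings are aggregated per tweet and compared against expert labels, once for the full crowd and once for a crowd filtered by a trait score. The data, the CRT-style trait, and the 0.5 threshold are all invented for illustration; this is not the analysis pipeline from the paper.

```python
# Illustrative sketch only; all data, trait names, and thresholds are hypothetical.
from statistics import median

# Hypothetical workers: each rates tweet truthfulness (1-7 scale) and has a
# trait score, here a CRT-like (cognitive reflection) measure in [0, 1].
workers = [
    {"crt": 0.9, "ratings": {"tweet_a": 2, "tweet_b": 6}},
    {"crt": 0.2, "ratings": {"tweet_a": 5, "tweet_b": 6}},
    {"crt": 0.7, "ratings": {"tweet_a": 1, "tweet_b": 7}},
]

# Hypothetical expert labels on the same scale.
expert = {"tweet_a": 2, "tweet_b": 7}

def crowd_estimate(workers, min_crt=0.0):
    """Aggregate per-tweet ratings with a median, optionally filtering the
    crowd by a trait threshold (the 'crowd composition' idea above)."""
    pool = [w for w in workers if w["crt"] >= min_crt]
    tweets = {t for w in pool for t in w["ratings"]}
    return {t: median(w["ratings"][t] for w in pool if t in w["ratings"])
            for t in tweets}

def mean_abs_error(estimates, expert_labels):
    """Average distance between crowd estimates and expert labels."""
    return sum(abs(estimates[t] - expert_labels[t])
               for t in expert_labels) / len(expert_labels)

# Compare the unfiltered crowd against a trait-filtered one.
print(mean_abs_error(crowd_estimate(workers), expert))               # full crowd
print(mean_abs_error(crowd_estimate(workers, min_crt=0.5), expert))  # filtered crowd
```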
Study 2: Towards Misinformation Intervention Strategies Based On Individual Differences
How can we shape misinformation prevention strategies to fit diverse groups with different attitudes?
Personalizing warning tags to the individual characteristics of their diverse users may enhance mitigation effectiveness. To reach this goal, we need to understand how people differ and how those differences predict a person’s attitudes and behaviors toward tags and tagged content.
In this study, we surveyed Amazon Mechanical Turk workers (n = 132) and undergraduate students (n = 112) to provide this foundational understanding.
Results show that attitudes and behaviors toward warning tags are influenced by factors such as personality, information processing, trust and credibility dispositions, and cognitive abilities.
We synthesize our results into design insights that can inform the creation of effective, personalized misinformation warning tags and, more generally, of misinformation mitigation strategies.
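As a thought experiment on what trait-based personalization could look like downstream, the sketch below maps a user trait profile to a warning-tag presentation style. Every trait name, weight, and tag style here is invented for illustration; a real system would fit this mapping from survey data rather than hand-write the rules.

```python
# Illustrative sketch only; traits, thresholds, and tag styles are hypothetical.
from dataclasses import dataclass

@dataclass
class UserProfile:
    need_for_cognition: float   # 0-1, appetite for effortful thinking
    institutional_trust: float  # 0-1, baseline trust in official fact-checkers
    reactance: float            # 0-1, tendency to push back on directives

def choose_tag_style(user: UserProfile) -> str:
    """Pick a warning-tag presentation from a trait profile.
    The hand-written branches stand in for a model fit on survey data."""
    if user.reactance > 0.7:
        # Highly reactant users may dismiss commanding language, so a
        # softer, source-transparent tag might land better.
        return "neutral note linking to the fact-check source"
    if user.need_for_cognition > 0.6:
        # Users who enjoy deliberation may engage with explanations.
        return "detailed tag explaining why the claim is disputed"
    if user.institutional_trust < 0.3:
        return "peer/community-sourced flag rather than an official label"
    return "standard platform warning tag"

print(choose_tag_style(UserProfile(0.8, 0.5, 0.2)))
# -> "detailed tag explaining why the claim is disputed"
```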
Publications that have come out of this research agenda:
Kaufman, R., Haupt, M., & Dow, S. (2022). Who’s In the Crowd Matters: Cognitive Factors and Beliefs Predict Misinformation Assessment Accuracy. Proceedings of the 2022 ACM Conference on Computer-Supported Cooperative Work (CSCW). PDF
Kaufman, R. A., Broukhim, A., & Haupt, M. (2024, in review). WARNING This Contains Misinformation: The Effect of Cognitive Factors, Beliefs, and Personality on Misinformation Warning Tag Attitudes. PDF