
Figuring out why so many people are willing to share misinformation online is a major focus for behavioral scientists. It's tempting to think that partisanship drives it all: people will simply share things that make their side look good or their opponents look bad. But the reality is a bit more complicated. Studies have indicated that many people don't seem to carefully evaluate links for accuracy, and that partisanship may be secondary to the rush of getting a lot of likes on social media. Given that, it has been unclear what might motivate users to stop sharing things that a bit of scrutiny would show to be untrue.
So a team of researchers tested the obvious: paying people to be accurate when they evaluate the stories they read. The work shows that small payments, and even modest nonfinancial rewards, boost the accuracy of people's evaluations of stories. Nearly all of that effect comes from people correctly identifying stories that don't favor their political stance as accurate. And while the money boosted conservatives' accuracy more than liberals', conservatives started out so far behind in judging accuracy that a large gap remained.
Money for accuracy
The basic framework of the new experiments is pretty simple: Take a group of people, ask them about their political leanings, and then show them a bunch of headlines as they would appear on a social media site like Facebook. The headlines were sorted based on their accuracy (i.e., whether they were true or misinformation) and on whether they would be more favorable to liberals or conservatives.
Consistent with previous experiments, participants were more likely to rate headlines that favored their political beliefs as true. As a result, most of the misinformation rated as true got that rating because people liked how it aligned with their political leanings. While this held for both sides of the political spectrum, conservatives were more likely to rate misinformation as true, an effect that has shown up before; the researchers cite seven different papers that found it.
On its own, this sort of replication is useful but not very interesting. The interesting things came when the researchers started varying the procedure. The simplest variation paid participants a dollar for each story they correctly identified as true.
In news that should surprise no one, people got better at figuring out which stories were accurate when there was money on the line. In raw numbers, participants got an average of 10.4 of the 16 accuracy ratings right in the control condition but more than 11 of 16 right when a reward was on offer (roughly 65 versus 69 percent). The effect also showed up when, instead of a payment, participants were simply told that the researchers would give them an accuracy score once the experiment was over.
The most striking thing about this experiment is that almost all of the improvement came when people were asked to rate the accuracy of statements that favor their political opponents. In other words, the reward made people better at recognizing the truth in statements that, for political reasons, they would prefer to think were false.
A smaller gap, but still a gap
The opposite happened when the incentive was flipped and people were rewarded for identifying stories that would appeal to their political allies. Here, accuracy dropped. This suggests that participants' frame of mind plays a major role: prompting them to focus on politics lowers their focus on accuracy. Notably, the resulting drop was almost as large as the boost from a financial reward.
The researchers also created a condition in which participants weren't told the source of a headline, so they couldn't judge whether it came from partisan-friendly media. This made no significant difference to the results.
As noted above, conservatives generally fared worse here than liberals, with the average conservative getting 9.3 of 16 ratings right and the average liberal getting 10.9. Both groups saw their accuracy increase when incentives were present, but the effect was larger for conservatives, who rose to an average of 10.1 of 16. While that's better than they managed without the incentive, it's still not as good as liberals did without the incentive.
So, even if some of conservatives' propensity to share misinformation looks like a simple lack of motivation to get things right, that motivation only explains part of the effect.
The research team suggests that, while a payment system would likely be impractical at scale, the fact that a promised accuracy score had nearly the same effect points to a way for social networks to cut down on the false information spread by their users. That may be a bit naive, though.
Fact-checkers were initially promoted as a means of reducing misinformation. But, consistent with these results, they ended up flagging material shared by conservatives as misinformation more often, and they were labeled as biased as a result. Similarly, attempts to limit the spread of misinformation on social networks have seen the leaders of those networks accused of censorship by conservatives in congressional hearings. So, even if it works in these experiments, any attempt to roll out a similar system in the real world will likely be very unpopular in some quarters.
Nature Human Behaviour, 2023. DOI: 10.1038/s41562-023-01540-w (About DOIs).