NYC Gazette

Thursday, November 21, 2024

New study links belief in online misinformation with ideological extremism

Nouriel Roubini, Professor of Economics and International Business at New York University's Stern School of Business | New York University's Stern School of Business

Political observers have been troubled by the rise of online misinformation—a concern that has grown as we approach Election Day. However, while the spread of fake news may pose threats, a new study finds that its influence is not universal. Rather, users with extreme political views are more likely than others to both encounter and believe false news.

“Misinformation is a serious issue on social media, but its impact is not uniform,” says Christopher K. Tokita, the lead author of the study conducted by New York University’s Center for Social Media and Politics (CSMaP).

The findings, which appear in the journal PNAS Nexus, also indicate that current methods to combat the spread of misinformation are likely not viable—and that the most effective way to address it is to implement interventions quickly and target them toward users most likely to be vulnerable to these falsehoods.

“Because these extreme users also tend to see misinformation early on, current social media interventions often struggle to curb its impact—they are typically too slow to prevent exposure among those most receptive to it,” adds Zeve Sanderson, executive director of CSMaP.

Existing methods used to assess exposure to and impact of online misinformation rely on measuring views or shares. However, these fail to fully capture the true impact of misinformation, which depends not just on spread but also on whether users actually believe the false information.

To address this shortcoming, Tokita, Sanderson, and their colleagues developed a novel approach using Twitter (now “X”) data to estimate not just how many users were exposed to a specific news story but also how many were likely to believe it.

“What is particularly innovative about our approach in this research is that the method combines social media data tracking the spread of both true news and misinformation on Twitter with surveys that assessed whether Americans believed the content of these articles,” explains Joshua A. Tucker, a co-director of CSMaP and an NYU professor of politics, one of the paper’s authors. “This allows us to track both susceptibility to believing false information and the spread of that information across the same articles in the same study.”

The researchers captured 139 news articles (November 2019-February 2020)—102 rated as true and 37 rated as false or misleading by professional fact-checkers—and calculated their spread across Twitter from initial publication.

This sample was drawn from five types of news streams: mainstream left-leaning publications, mainstream right-leaning publications, low-quality left-leaning publications, low-quality right-leaning publications, and low-quality publications without an apparent ideological lean. To establish veracity, each article was sent within 48 hours of publication to professional fact-checkers who rated them as “true” or “false/misleading.”

To estimate exposure to and belief in these articles, the researchers combined two types of data: Twitter data, which identified potentially exposed users and placed them ideologically based on the prominent accounts they follow, and real-time surveys, which asked habitual internet users whether they believed an article was true or false and collected demographic information, including ideology. From the survey data, the authors calculated the proportion of respondents within each ideological category who believed each article was true.
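The per-category belief calculation described above can be sketched in a few lines. Everything here is illustrative: the function name, field layout, and sample responses are assumptions for demonstration, not the study's actual data or pipeline.

```python
# Hypothetical sketch of the belief-rate calculation; the data and
# field names are invented for illustration, not drawn from the study.
from collections import defaultdict

# Each survey response: (article_id, respondent_ideology, believed_true)
responses = [
    ("article_1", "extreme_left", True),
    ("article_1", "extreme_left", True),
    ("article_1", "moderate", False),
    ("article_1", "moderate", True),
    ("article_1", "extreme_right", True),
]

def belief_rates(responses):
    """Proportion of respondents in each ideological category
    who rated each article as true."""
    counts = defaultdict(lambda: [0, 0])  # (article, ideology) -> [believed, total]
    for article, ideology, believed in responses:
        entry = counts[(article, ideology)]
        entry[0] += int(believed)
        entry[1] += 1
    return {key: believed / total for key, (believed, total) in counts.items()}

rates = belief_rates(responses)
print(rates[("article_1", "moderate")])  # 0.5 — one of two moderates believed it
```

Joining these per-category rates to the Twitter exposure data is what lets the authors estimate not just who saw a story, but who likely believed it.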

Overall, the findings showed that while false news reached users across the political spectrum, those with more extreme ideologies were far more likely both to see and to believe it, and tended to encounter it early in its spread through Twitter.

The research design also allowed the team to simulate the impact of different types of interventions for stopping the spread of misinformation. The simulations showed that interventions applied earlier were more likely to be effective, and that visibility interventions reducing misinformation's reach to susceptible users worked better than interventions aimed at making those users less likely to share it.
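The timing effect described above can be illustrated with a toy simulation. This is a deliberately simplified sketch under invented assumptions (exponential spread, a one-time visibility cut); it is not the authors' model, only a demonstration of why earlier intervention reduces total exposure.

```python
# Toy simulation of how intervention timing affects cumulative exposure.
# Growth rates and parameters are invented for illustration; the study's
# actual simulations are far more detailed.

def exposed_after_intervention(intervention_step, steps=10, growth=2.0, reduction=0.5):
    """Cumulative exposures when a visibility intervention, applied at
    `intervention_step`, cuts the story's per-step reach by `reduction`."""
    reach, total = 1.0, 0.0
    for step in range(steps):
        if step == intervention_step:
            reach *= reduction  # intervention reduces visibility from here on
        total += reach
        reach *= growth  # story keeps spreading to new users
    return total

early = exposed_after_intervention(intervention_step=2)
late = exposed_after_intervention(intervention_step=8)
print(early < late)  # True — earlier intervention yields fewer total exposures
```

Because extreme users see misinformation early in its spread, a late intervention arrives after much of the susceptible audience has already been exposed, which is the dynamic the simulation captures.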

“Our research indicates that understanding who is likely to be receptive to misinformation, not just who is exposed to it, is key to developing better strategies to fight it online,” advises Tokita, now a data scientist in the tech industry.

The study's other authors included Kevin Aslett, a CSMaP postdoctoral researcher and University of Central Florida professor at the time of the study and now a researcher in the tech industry; William P. Godel, an NYU doctoral student at the time of the study and now a researcher in the tech industry; and CSMaP researchers Jonathan Nagler and Richard Bonneau.

The research was supported by a graduate research fellowship from the National Science Foundation (DGE1656466).
