
Social media sites can slow the spread of deadly misinformation with modest interventions

Basic efforts by Twitter can cut misinformation sharing in half, according to a new study.

Social media platforms can slow the spread of misinformation if they want to — and Twitter could have better reduced the spread of bad information in the lead-up to the 2020 election, according to a new research paper released Thursday.

Depending on how fast a fact-checked piece of misinformation is removed, its spread can be reduced by about 55 to 93 percent, the researchers found. Nudges toward more careful reposting behavior cut sharing by 5 percent and produced a 15 percent drop in engagement with misinforming posts. Banning verified accounts with large followings that were known to spread misinformation reduced engagement with false posts by just under 13 percent, the researchers concluded.

The researchers modeled “what-if” scenarios using a dataset of 23 million election-related posts collected between Sept. 1 and Dec. 15, 2020, which were connected to “viral events.” It’s a similar approach to how researchers study infectious disease — for example, modeling how masking and social distancing mandates interact with covid spread.
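To make the analogy concrete, here is a minimal, purely illustrative sketch of that "what-if" logic — not the authors' model, and with made-up parameters (the attention curve, `first_hour_reach`, and `decay` are all assumptions): simulate a post's hour-by-hour reach, then ask how much total reach changes if the post is removed at different points.

```python
# Purely illustrative sketch -- not the study's model or its parameters.
# It only demonstrates the "what-if" intervention logic described above.

def expected_reach(removal_hour=None, hours=48, first_hour_reach=1000.0, decay=0.85):
    """Toy attention curve: engagement starts at `first_hour_reach` and decays
    geometrically each hour (both values are assumed). If `removal_hour` is set,
    the post is taken down at that hour and accumulates no further reach."""
    total = 0.0
    for hour in range(hours):
        if removal_hour is not None and hour >= removal_hour:
            break  # moderation intervention: post removed, cascade stops
        total += first_hour_reach * decay ** hour
    return total

if __name__ == "__main__":
    baseline = expected_reach()  # post is never removed
    for delay in (1, 4, 12, 24):
        reduction = 1 - expected_reach(removal_hour=delay) / baseline
        print(f"removed after {delay:>2}h -> ~{reduction:.0%} less total reach")
```

In the paper, the same kind of counterfactual is run against the 23 million real posts rather than an assumed attention curve, which is where the 55 to 93 percent range reported above comes from.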

This simulation work allows researchers to experiment with how a given content moderation policy may play out before it is implemented, the study’s lead author, Joseph Bak-Coleman, a postdoctoral fellow at the University of Washington’s Center for an Informed Public, told Grid’s Anya van Wagtendonk.

“We can use models and data to try to understand how policies will impact misinformation spread before we apply them. This might be one of many, hopefully, that we wind up using,” he said. “Because the current thing is, we try something and see if it works … so we’re kind of fixing the problem after the fact.”

Bak-Coleman and his team acknowledged that, without insight into Twitter’s algorithm and content moderation practices more broadly, they cannot account for existing practices. But by implementing a combined approach, they argued, platforms can reduce misinformation “without having to catch everything, convince most people to share better or resort to the extreme measure of account removals.”
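As a rough illustration of that combined-approach point, the single-intervention effect sizes reported above compound even under the simple, assumed model that each intervention scales engagement independently and multiplicatively (the paper's actual model need not work this way):

```python
# Back-of-envelope only: assumes the interventions act independently and
# multiplicatively, which is an assumption, not the study's result.
# Effect sizes are the single-intervention figures reported earlier.
nudge_drop   = 0.15  # ~15% less engagement from reposting nudges
ban_drop     = 0.13  # ~13% less engagement from banning repeat-offender verified accounts
removal_drop = 0.55  # ~55% less spread from relatively slow removal of fact-checked posts

remaining = (1 - nudge_drop) * (1 - ban_drop) * (1 - removal_drop)
print(f"engagement remaining under all three: ~{remaining:.0%}")
print(f"combined reduction: ~{1 - remaining:.0%}")
```

Under those assumptions, three modest measures already cut roughly two-thirds of engagement, which is the intuition behind not "having to catch everything."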

This interview has been edited for length and clarity.

The vague words like “free speech,” they sound really nice as slogans, but you have to actually code that up somehow and make decisions about hard cases. When someone makes a threat and it isn’t really a threat, do you remove that or not?

So, on one hand, I think it’s scary he’s making decisions. On the other hand, I don’t think he quite understands what’s ahead of him.

Personally, I think it’d be nice if there’s a democratic process of some sort involved — the same way we make other hard calls. But the “what we should do about it” is, I think, the question we’re able to ask after having models of what could we do, how might it work.

The analogy is climate change, where we have these climate models that tell us, “If we increase carbon [emissions] by however much, then the world will get so much warmer and cause these problems.” And that tells us what will happen under different scenarios, and then we have to kind of choose as a society what we do.

So someone has to make a call about how we triage things going into a moderation framework. That’s way above my pay grade.

I think the big missing thing is that we don’t really have a handle on what’s happening in terms of the algorithmic amplification of Twitter. In our data, all we can know is what we saw and what actually spread. So in lieu of all this moderation, there might be options to just adjust their algorithm to be a little bit less engaging, and that might have the same effect. For example, if the algorithms are picking up on anything that’s engaging, they might pick up on things that are engaging regardless of how true they are and kind of undermine normal human truth-seeking tendencies.

There probably are some really important algorithmic tweaks that we can’t know that they could make, because we don’t have access to their code and we can’t probe their algorithms. I think the dream where this could go next is trying to make sense of what’s happening server-side on their end.

Hopefully, as — or if — Elon Musk takes over and learns that he has to implement his free speech ideals into code, models can provide a framework for how that’s balanced.

To me, the model suggests that reducing misinformation is much more accomplishable than it was during the election. We could do more. And whether or not we should, I think, is a question we should ask and then have a procedure for implementing. But it’s quite clear that things were much more hands-off than they could have been, for better or for worse.

I think some of this is a tension between the core business models of these companies and the problems that those business models cause. So if you want to push the most engaging content, there’s no reason to believe that content would be truthful or beneficial. And if you think about it, the number of things that are false and/or bad for you is nearly infinite, right? But the number of things that are true and good for you is finite.

So if you have an algorithm trying to select from that pool of most engaging things, it’s probably going to pull a lot of garbage out. Above and beyond all the technical challenges and ethical challenges of moderation, probably the elephant in the room is the business challenge of making it as profitable as it currently is.
