
Elon Musk was right: Bots really are everywhere online

It’s getting harder to tell a bot from a person.

The making of a bot

The most basic definition of a bot is a bundle of code that automates the activities of an account on a platform. Bots can do a wide variety of tasks and behave in different ways, all depending on what they are coded to do. They can be built without much technical expertise and bought in bulk for cheap.
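To make that concrete, here is a minimal sketch of such a bundle of code, assuming the tweepy library against Twitter's API; the credentials, schedule and canned phrases are placeholders invented for illustration, not any real bot's behavior.

```python
# A minimal sketch of "a bundle of code that automates an account": authenticate
# once, then post on a timer with no human in the loop. Assumes the tweepy
# library; all credentials and phrases are illustrative placeholders.
import time
import tweepy

client = tweepy.Client(
    consumer_key="YOUR_KEY", consumer_secret="YOUR_SECRET",
    access_token="YOUR_TOKEN", access_token_secret="YOUR_TOKEN_SECRET",
)

PHRASES = ["to the moon!", "gm", "big things coming"]  # canned output

while True:
    # Cycle through the canned phrases, one post per hour.
    client.create_tweet(text=PHRASES[int(time.time() // 3600) % len(PHRASES)])
    time.sleep(3600)
```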

There are even “broker bots,” which Kathleen Carley, a professor at Carnegie Mellon University, said try to bridge differences between two groups or get them to pay attention to each other. Imagine, for example, that an election is drawing near. “If I can build a bot that’s a broker between two groups and make the two groups think that they’re the same and have shared concerns, then I rebuilt the groups and could have affected my political agenda,” she said.

It’s these kinds of behaviors, from amplification to chat and “brokering,” that can make bots as benignly obnoxious as replies boosting cryptocurrency in your mentions along with phrases like “to the moon!” or as potent as potentially influencing an election. And while we often assume bots to be a negative force, it’s not always that straightforward.

“Bots are a little bit tricky because they’re just a tool, right?” said Kai-Cheng Yang, a Ph.D. candidate in informatics at Indiana University. Yang runs Botometer, a tool that lets people input accounts and get a numerical score back that is supposed to indicate how likely it is that the account is a bot. “Everybody should be able to use it to do good things, but of course, you can also use it for bad.”
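Botometer’s scores can also be queried programmatically. Here is a sketch of that, assuming the botometer Python package plus RapidAPI and Twitter credentials; the exact response fields can vary across API versions.

```python
# Querying Botometer for a bot-likelihood score. Assumes the `botometer`
# Python package; the key, credentials and account name are placeholders.
import botometer

twitter_app_auth = {
    "consumer_key": "YOUR_KEY",
    "consumer_secret": "YOUR_SECRET",
}
bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="YOUR_RAPIDAPI_KEY",
    **twitter_app_auth,
)

result = bom.check_account("@some_account")
# display_scores holds the human-readable ratings shown on the website.
print(result["display_scores"]["universal"]["overall"])
```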

Hunting down bots

Identifying bots on any platform begins with understanding how people behave on it. It might seem obvious, but humans use different platforms in very different ways. There is a reason, for example, that your grandparents are on Facebook but not Twitter, and your brother-in-law is on Reddit but you aren’t.

By understanding how the tools used to build bots work, Carley said, researchers can also identify the kinds of things bots would be good at. Most of the bot-identification technologies out there today are based on some form of machine learning.

“They’re trained using these data sets that have been collected over the years that have a bunch of accounts,” said Carley. “The accounts have been marked as being a bot or not being a bot by somebody, typically several somebodies. And then those are used to train the tools.”
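A toy sketch of that supervised setup, with synthetic data standing in for the hand-labeled corpus; the features and labeling rule here are invented purely for illustration.

```python
# Supervised bot detection in miniature: human-labeled accounts become training
# data for a classifier. The data below is a synthetic stand-in, not a real corpus.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# One row per account; columns might be tweets/day, follower ratio, account age.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] > 0.5).astype(int)  # toy rule standing in for human annotators

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The catch is that a model like this only knows the platforms and bot styles it was trained on.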

Now say a whole new social platform comes into existence, and no one has ever seen any data from it.

“The bots on that might look totally different,” said Carley. “And we wouldn’t know because our tools have not been trained for that. So bots on different platforms look different because the platforms are different.”

And bots, or rather their makers, evolve to avoid detection.

Yang said Twitter has gotten more and more aggressive when it comes to taking down accounts, and it’s doing a fairly good job. Even so, he’s seeing more new types of accounts show up, hoping to avoid detection.

“For example, recently, I started to realize there are some bot accounts using fake faces, using neural network to generate the face,” said Yang, whereas before many bot accounts didn’t have human profile photos. “These are human faces that don’t exist, and they’re using them as their profiles.”

Carley said bots began as almost fun, random accounts that would, for example, tweet out the time of day every hour, or just tweet out random words. But these evolved into bots that, in one case, amplified Chinese state media. And even then, the models need upkeep: if a bot that once retweeted everything a state media account posted starts keying only on certain phrases, it takes time to retrain the identification models on the new behavior.

“It’s kind of like the nuclear arms race with bots,” she added.

Twitter, rating bots and the cyborg bot

Even though identifying a bot is hard, Botometer tries. It rates accounts on a scale of 0 to 5 for how “bot-like” they are, with 0 being the least likely and 5 the most.

Yang chalked Musk’s brief 3.5 score up to the billionaire being a special case. To evaluate an account’s behavior accurately, the algorithm Botometer uses has to fetch its 200 most recent tweets.

“But the problem with Elon is that we can only get something like 20 tweets,” said Yang. “It’s a bug in Twitter’s API, and we had confirmation from Twitter that they know this, and they are not going to fix this — we will have to wait for the next iteration of the API.”
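For a sense of what fetching those 200 tweets involves, here is a sketch using tweepy’s v1.1 wrapper, where 200 is that endpoint’s per-request cap; whether Botometer uses exactly this call, and the retweet-fraction feature, are assumptions for illustration.

```python
# Pulling an account's 200 most recent tweets (the v1.1 user_timeline cap per
# request) and computing one simple behavioral feature. Credentials and the
# account name are placeholders.
import tweepy

auth = tweepy.OAuth1UserHandler(
    "YOUR_KEY", "YOUR_SECRET", "YOUR_TOKEN", "YOUR_TOKEN_SECRET"
)
api = tweepy.API(auth)

tweets = api.user_timeline(screen_name="some_account", count=200, include_rts=True)
retweets = sum(hasattr(t, "retweeted_status") for t in tweets)
print(f"fetched {len(tweets)} tweets; {retweets / max(len(tweets), 1):.0%} are retweets")
```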

Beyond Musk, though, cyborg bots are blurring the line between bot and human and making bot identification even harder. These kinds of accounts are controlled by a human at some times and by a bot at others.

Say you generally tweet from your account yourself, but then you go on vacation and don’t want to keep tweeting manually while you’re away. A bot can fix that, performing actions for you while you’re gone. But for machine-learning programs focused on account behavior, this blurs the line between human and bot.
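A sketch of that automated “vacation mode” half of a cyborg account, again assuming tweepy; the queued posts and timings are hypothetical.

```python
# The bot half of a cyborg account: the human steps away and a scheduler posts
# queued tweets on a timer. Assumes tweepy; queue contents are hypothetical.
import time
import tweepy

client = tweepy.Client(
    consumer_key="YOUR_KEY", consumer_secret="YOUR_SECRET",
    access_token="YOUR_TOKEN", access_token_secret="YOUR_TOKEN_SECRET",
)

queue = [
    ("Greetings from the trail!", 0),            # post immediately
    ("Still offline, back Monday.", 3 * 86400),  # three days later
]

start = time.time()
for text, delay in queue:
    time.sleep(max(0.0, start + delay - time.time()))
    client.create_tweet(text=text)
```

To a behavior-based classifier, the same timeline now mixes bursty human activity with clockwork automated posts, which is exactly the muddled signal that makes cyborgs hard to flag.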

“They’re much more difficult [to identify], and they are not that common yet, but they do exist,” said Carley. “And with all the new technologies that are coming out to do computer-assisted things, they should become increasingly common in the future.”

As for Twitter’s estimate of the bots on its platform, the one Musk is disputing? Carley agrees with Musk that it’s likely higher.
