
4 Reasons Twitter Struggles to Shut Down Bots

Jason Sattler

28.10.17 4 min. read


Twitter is admitting that it has a problem.

The site announced that it will no longer sell advertising to the Russian media outlets Russia Today (RT) and Sputnik “effective immediately.” The world’s fourth-largest social media network says an internal investigation of its role in the 2016 United States presidential election determined that the two organizations attempted to “interfere with the election on behalf of the Russian government,” and that banning their advertising will “help protect the integrity of the user experience on Twitter.”

Vanity Fair’s Maya Kosoff called this step “damage control” ahead of Twitter executives’ appearance before the Senate Intelligence Committee and compared it to putting a “Band-Aid on a brain tumor.” Kosoff notes that the ads’ effects were minor compared to the generally unfettered activity of troll accounts and bots, which one study found accounted for 1 out of every 5 election-related tweets.

The use of bots to try to sway voters was not limited to the U.S., of course. Researchers have found that 13,000 suspected bots tried to push UK voters toward voting “Leave” on Brexit.

Ad sales are the part of the site Twitter has the most control over, and banning two purchasers isn’t going to affect its bottom line much. But the site’s ads are nowhere near as pervasive or effective as those on Facebook (another site facing considerable scrutiny over its role in the 2016 election), as evidenced by Facebook’s massive advantage in soaking up ad dollars.

Shutting down individual troll accounts also seems doable, and Twitter has pursued that path more avidly since the election. But I wondered why Twitter can’t do more to take on the bots that seem to be driving so much political conversation.

F-Secure Labs’ Andy Patel began playing with Twitter’s API and monitoring Twitter bots and sockpuppets by taking a look at the activity around @realDonaldTrump. Since then he has done forensics on activity on the site related to recent elections in France, the UK and Germany. He has found that Twitter’s backend logic does “take automatic action against bots,” so I asked him why Twitter can’t just shut all bots down.
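To get a rough sense of what this kind of outside-in monitoring looks like, here is a minimal sketch of the data collection a researcher without Twitter’s internal access might run. It assumes the Python tweepy library in its 3.x-era form and placeholder credentials; the library choice and the recorded fields are assumptions for illustration, not details of Andy’s actual tooling.

```python
# A minimal sketch of outside-in monitoring via Twitter's public search API,
# assuming tweepy 3.x and placeholder developer credentials.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Collect recent replies aimed at one high-traffic account and keep the
# fields a researcher might later inspect for bot-like patterns.
records = []
for status in tweepy.Cursor(api.search, q="to:realDonaldTrump",
                            result_type="recent", count=100).items(1000):
    records.append({
        "screen_name": status.user.screen_name,
        "tweeted_at": status.created_at,
        "source": status.source,  # the client used to post, e.g. "Twitter Web Client"
        "account_age_days": (status.created_at - status.user.created_at).days,
        "followers": status.user.followers_count,
        "text": status.text,
    })
```

Even a simple pull like this only sees the public fields; Twitter itself, as Andy notes below, has far richer data to work with.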

“Twitter themselves have access to all of the raw data in their databases, which includes fields that we don’t even see,” he told me. “They probably also have powerful tools that allow them to look at relationships between accounts, behaviour, and so on. However, even with all of this power at their disposal, I’d venture to say that finding malicious politically-oriented groups of Twitter users is still difficult and time-consuming.”

He then offered four quick reasons it’s so hard to shut bots down:

1. Separating propaganda from advertising is difficult.
“Shutting off the wrong accounts can lead to allegations of censorship, so any methodology they employ to separate propaganda from valid political discourse has to be accurate,” Andy wrote. “One way they’ve employed is to separate ‘bot’ accounts from real people.”

2. Twitter’s constant avalanche of tweets works to the advantage of bots.
“Finding suspicious accounts starts with identifying the correct search terms to monitor, and preferably some accounts known to participate in the behavior you’re looking to monitor. Even with logic in place to catch certain behaviors, owners of malicious accounts and bot nets often change their tactics to avoid being caught by simple analysis techniques.”
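As a toy illustration of why “simple analysis techniques” are easy to evade, the sketch below scores accounts on a handful of obvious signals. The signals and thresholds are guesses invented for this article, not rules Twitter or F-Secure is known to use, and each one is trivial for an operator to change once they know it is being checked.

```python
# A toy suspicion score over a few obvious, easily evaded signals.
# Thresholds are illustrative guesses, not anyone's real detection rules.
def bot_suspicion_score(account):
    score = 0
    if account["tweets_per_day"] > 100:           # implausibly high tweet rate
        score += 2
    if account["account_age_days"] < 30:          # brand-new account
        score += 1
    if account["default_profile_image"]:          # never customised the profile
        score += 1
    if account["followers"] < 10 and account["following"] > 1000:
        score += 2                                # follows many, followed by few
    return score

# Example: a freshly created, hyperactive account scores as suspicious.
example = {"tweets_per_day": 250, "account_age_days": 4,
           "default_profile_image": True, "followers": 3, "following": 2400}
print(bot_suspicion_score(example))  # -> 6
```

The moment an operator slows the tweet rate, ages the accounts or uploads profile photos, a filter like this goes quiet, which is exactly the cat-and-mouse problem Andy describes.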

3. Bots can be automated but detection requires humans.
“Any automation designed to catch malicious behavior on Twitter needs to be set up with the correct parameters (which are derived from manual research) and needs to be constantly reconfigured as the situation changes (which requires people to follow political discourse, user accounts, and tactics used by malicious users). Output from automation ultimately needs to be monitored by human beings in order to verify or spot anomalies. All of these steps are manual and can be prone to error.”
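To make that dependence on manual research concrete, a hypothetical configuration for such an automation job might look like the snippet below. Every value is invented for illustration, and each is the kind of setting that has to be re-derived by hand as topics and tactics shift.

```python
# Hypothetical parameters for an automated monitoring job. None of these
# values come from Twitter or F-Secure; the point is that each one is the
# product of manual research and goes stale as the situation changes.
DETECTION_CONFIG = {
    # Search terms must track whatever is currently being pushed.
    "track_terms": ["#MAGA", "#Brexit", "Bundestagswahl"],
    # Seed accounts identified by hand as sitting inside the activity.
    "seed_accounts": ["hypothetical_known_amplifier"],
    # Behavioural thresholds tuned against manually labelled examples.
    "max_tweets_per_day": 100,
    "min_account_age_days": 30,
    # How often flagged accounts get a human review and thresholds get re-tuned.
    "review_interval_hours": 24,
}
```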

4. Who can keep up?
“Delving through historical data to find malicious patterns is quite different from catching actual malicious activity on-the-fly. While research into historical data can provide researchers with clues on how malicious accounts behave, reproducing the same analysis on a live stream of data requires a slightly different methodology (and is more difficult). The fact that there are multiple political agendas being pushed on social media at any given time (with new ones appearing and old ones being ditched) means that owners of social networks will likely always be hard-pressed to keep up with every situation that develops.”
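The difference between the two modes can be sketched in code as well: tweets collected into a dataset, as in the earlier sketch, can be analysed at leisure, while the live equivalent has to make a call on each tweet as it arrives. The sketch below assumes tweepy’s 3.x streaming interface and placeholder credentials, and its per-tweet checks are deliberately crude to show how little context is available in real time.

```python
# Sketch of the "on-the-fly" side of the problem, assuming tweepy 3.x.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

class FlaggingListener(tweepy.StreamListener):
    def on_status(self, status):
        # Only crude per-tweet checks are possible in real time; analysing
        # relationships between accounts requires stored history.
        account = status.user
        age_days = (status.created_at - account.created_at).days
        if age_days < 30 and account.statuses_count > 3000:
            print("flag for human review:", account.screen_name)

    def on_error(self, status_code):
        # Returning False disconnects, e.g. on a 420 rate-limit response.
        return False

stream = tweepy.Stream(auth=auth, listener=FlaggingListener())
# The tracked terms go stale quickly and must be kept current by hand.
stream.filter(track=["#Brexit", "élection présidentielle"])
```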

Andy notes that “Twitter actively monitors and shuts down jihadist accounts,” so “it is feasible that they could devote resources into creating similar processes to identify and shut down other types of unwanted discourse.” But the ability to do this in real time will likely remain a challenge in the near future.
