They make liberal use of cat photos and emojis, drape themselves in American flag iconography, and go by handles like “Saving America,” “4GodandCountry,” and “ROCK ON OHIO.”
And yesterday, scores of them started attacking me online.
@ChrisBolman Check out this turd burglar must be sniffing the methane off his dildo’s before he types his rants. Libs I don’t agree with you so let me make a list of who I think are bots and report them roflmaosmh
— 🇺🇸 NavySP🚨 (@NavySP3) November 8, 2017
Here’s a short story about it.
First, A Quick Bit Of Context On Russian Twitter Trolls
It’s no secret Twitter has a bot problem. I’m not talking about bots that automate your posts or compose funny song lyrics, I mean hostile propaganda bots (and the people who sit behind them, operate and leverage their automation).
The vast majority of these foreign disinformation trolls are known cyber-operatives within Russian “web brigades” (Веб-бригады), state-sponsored groups that spin up bots, trolls and propaganda distribution accounts across Facebook, Twitter and other social media. And their activities — and impact — have been widely reported by outlets like The New York Times, CNN, BuzzFeed and The Daily Beast.
One of the most famous troll factories, the blandly named “Internet Research Agency” (IRA) in St. Petersburg, even created a specific team, the “Department of Provocations,” dedicated to spreading fake news and creating cultural division through social media. Sources say management at the IRA even required its 1,000+ employees and remote contractors to watch episodes of Netflix’s “House of Cards” to improve their English and learn political terminology.
The agency was reportedly “disbanded” on December 28, 2016, weeks after the conclusion of the U.S. presidential election, though many experts agree its work continues under different names and fronts.
And Russia’s trolling is particularly prolific on Twitter, due to the design of the platform’s newsfeed algorithms, coupled with the company’s own lax oversight.
Twitter’s leadership, for its part, continues to be — at least publicly — asleep at the wheel on the issue. Testifying on Capitol Hill on October 31 in front of the House Intelligence Committee, Twitter said it identified 2,752 accounts controlled by Russian operatives and more than 36,000 “bots” that tweeted 1.4 million times during the election, well higher than its previously reported 201 accounts linked to Russia.
But not only is the 2,752 estimate still wildly conservative, Twitter has taken few if any steps to address its troll scourge.
“I’m concerned that Twitter seems to be vastly underestimating the number of fake accounts and bots pushing disinformation,” noted Senator Mark Warner of Virginia in his opening statement. “Independent researchers have estimated that up to 15 percent of Twitter accounts – or potentially 48 million accounts – are fake or automated.”
And at least 400,000 bots posted political messages during the 2016 U.S. presidential election on Twitter, according to research by Emilio Ferrara, an assistant professor at the University of Southern California.
Based on my own personal analysis, there are well north of 100,000 coordinated bot and troll accounts active on Twitter today, a network I have reason to believe is linked to Russian intelligence and disinfo operations. @SparkleSoup45 alone has over 90,000 followers, the vast majority of whom are fellow bots and propaganda handles. Dozens of linked accounts have similarly questionable followings in the tens of thousands.
The more I peer down the rabbit hole, the deeper it goes, and one of my next steps will be to dust off my programming skills to collect, build and graph a more comprehensive database of this ecosystem.
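A minimal sketch of what “collect, build and graph” could look like, assuming follower lists have already been scraped into an edge list. The account names below are illustrative placeholders, not real handles from my dataset:

```python
# Sketch: build and query a follower graph of suspected troll accounts.
# The edge list below is illustrative placeholder data, not real accounts.
from collections import defaultdict

# "a follows b" stored as (a, b)
edges = [
    ("bot_001", "hub_account"),
    ("bot_002", "hub_account"),
    ("bot_003", "hub_account"),
    ("bot_001", "bot_002"),
    ("pundit_x", "hub_account"),
]

followers = defaultdict(set)   # account -> accounts that follow it
following = defaultdict(set)   # account -> accounts it follows
for src, dst in edges:
    followers[dst].add(src)
    following[src].add(dst)

# Accounts whose audience is mostly other flagged accounts are suspicious,
# which is exactly the @SparkleSoup45 pattern described above.
flagged = {"bot_001", "bot_002", "bot_003"}

def flagged_follower_ratio(account):
    """Share of an account's followers that are already flagged."""
    f = followers[account]
    return len(f & flagged) / len(f) if f else 0.0

print(flagged_follower_ratio("hub_account"))  # 0.75 on this toy data
```

Running the ratio over every node and re-flagging the worst offenders is one way the network “snowballs” outward from a seed list of known propaganda handles.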
There are fake websites, fake church groups, fake media brands, and even a few Twitter-verified pundits with highly suspicious account activity, all operating in these same networks.
The right question to me at this point is not “Is there a Russian bot/troll problem on Twitter?” but more “Why won’t Twitter police or even acknowledge the extent of its Russian bot/troll problem?”
Because it can’t? Or because it knowingly won’t?
“Ban Russian bots” projecting on Twitter HQ right now pic.twitter.com/musk8g085y
— Sean Knox 🌐 (@smk) October 11, 2017
Poking The Russian Propaganda Hornet’s Nest
What’s frustrating to me about Twitter’s Russian propaganda problem is it’s so transparently out in the open. Scroll down the responses to any tweet from Trump, Fox News or other major political or media handle, and you’ll see the trolling and disinformation instantly.
There are, of course, interesting questions about the extent to which “trolling” should be protected as free speech, but in my view, any trolling that intentionally spreads hostile or harmful information (often laced with racist, divisive, or discriminatory subtexts) deserves no protection whatsoever.
After making repeated traditional attempts to get Twitter to take action — including emailing Twitter, submitting online complaint forms, and reporting accounts through the app — I put together a list of several hundred Russian propaganda accounts (a small sample of the tens of thousands of active ones), tweeted it out, and mentioned a few journalists, including The Observer’s John Schindler and The Daily Beast’s Noah Shachtman:
now that I started publicly exposing russian propaganda bots/accounts on twitter the trolls have really come out of the woodworks 😘
— Chris Bolman (@ChrisBolman) November 8, 2017
Needless to say the response from St. Petersburg was swift.
Within minutes, I started getting dozens of responses, many in broken English, ranging from dodgy denials:
28 responses from you and your friends is a lot of not caring 😉
— Chris Bolman (@ChrisBolman) November 8, 2017
to some less subtle retorts:
— 🌺A S H L E E 🌺 (@TrumpWonAlready) November 9, 2017
Mostly though, the Russian trolls seemed to simply enjoy their trolling, as if to say “yeah, you got us, but there’s nothing you or Twitter can do to stop it.” Several of the accounts that joined the conversation also appear to be active meme-creators and participants in the 4chan ecosystem, signaling the murky intersection between Kremlin propaganda and Alt-Right discourse.
24 hours later, the responses are still coming in. My original tweet has racked up 100+ replies from Russian trolls, I’ve been placed on several harassment lists, shady emails have popped up in my inbox, and I’ve observed at least one attempt to hack and spam my website.
Some of this activity is coming from automated bots, but a lot of it is being carried out by actual people on the other side of the screen.
I’m not pointing any of this out for sympathy. I did this expecting consequences and possibly backlash.
What I do want to point out is how serious this operation is. In the grand scheme of internet influence, I’m a relative nobody. Nonetheless, the trolling response was swift and sizable as soon as I started to peel away at the truth. The G.R.U., F.S.B. and S.V.R. do not like when one of them is caught with its hand in the active measures cookie jar.
And if Jack or any other Twitter executive thinks this is trivial or just an inconvenient consequence of using Twitter in 2017 — and I say this respectfully, as a lover of the core product — they’ve really lost touch with the reality of what they’ve brought to life.
What’s interesting as well is just how long this problem has festered under the surface. Just in the half hour I spent researching, I came across propaganda accounts that were registered on Twitter as far back as 2009. This isn’t a short-term bug in the system; many of these accounts are here to stay (until they’re exposed, at least).
Similarly, as existing accounts are uncovered, deleted or banned, new ones are created to replace them. Many of the larger ones even have dedicated backup accounts that are already set up.
Will Twitter Join The Fight Against Propaganda?
Finding the right balance between creating an open platform for expression and battling malicious communications within it no doubt poses challenges. But the mechanisms exist for Twitter to solve — or at least significantly curtail — its troll and bot problems if it’s willing to take a long hard look in the mirror and commit to doing so.
For one, methods of bot detection are growing increasingly sophisticated, drawing on factors like time series analysis, semantic analysis, natural language processing (NLP), account interaction patterns, IP addresses, and other attributes.
It’s also likely that the best overall solution is a person-program pairing, much like centaur or freestyle chess, where a human player is paired with software. Facebook has recently taken steps in this direction, moving to hire 1,000 editors to help it fight ad fraud and fake news. Recurring queries could be run at regular intervals, then flagged accounts could be manually reviewed and researched by human security editors.
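The pairing described above reduces to a simple loop: automated scoring runs on a schedule, and only accounts crossing a threshold land in a human review queue. A minimal sketch, with an assumed scoring function and illustrative names:

```python
# Sketch: pair automated flagging with a human review queue (the
# "centaur" approach). The threshold and scorer are assumptions.
from collections import deque

def run_review_cycle(accounts, score_fn, threshold=2.0):
    """Queue accounts whose automated suspicion score crosses a
    threshold for manual review; everything below passes unflagged."""
    review_queue = deque()
    for account, features in accounts.items():
        if score_fn(features) >= threshold:
            review_queue.append(account)
    return review_queue

# Toy scorer: here 'features' is already a precomputed suspicion score.
queue = run_review_cycle({"acct_a": 0.5, "acct_b": 3.0}, score_fn=lambda s: s)
print(list(queue))  # ['acct_b']
```

The point of the threshold is triage: humans only see the small slice of accounts the machine can’t clear on its own, which is what makes 1,000 editors plausible against millions of accounts.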
Yes, mistakes will happen, but the overall outcome will be far better for the integrity and quality of Twitter’s user experience than the disinformation buffet that exists today.
And of course there may be reasons why Twitter isn’t stepping up to the task, one in particular being Wall Street. Although Twitter’s stock is up 22% YTD, investors have penalized the company for lack of active user growth.
If bots and trolls do in fact make up 9-15% of Twitter’s active user base, as University of Southern California and Indiana University researchers suggest, Twitter’s leadership may have little interest in cleaning house.
If that’s the case, it’s sad and short-sighted.
At the end of the day, active user growth is a vanity metric; Twitter’s lasting measurement will be its reputation and legacy.
I’d love to see Jack and the rest of the Twitter team take responsibility for their role in the Russia problem. If they don’t take it seriously, history may not be kind.