Twitter’s appeal has long been connected in part to its reputation as a user-controlled firehose, as opposed to a place where an algorithm determines what you see and don’t see. But that’s changed recently, at least incrementally.
The company rolled out two abuse filters last spring, to help improve Twitter’s response to harassment on its site: an opt-in, aggressive quality filter for any verified Twitter user; and a second, less comprehensive abuse screener that’s currently being tested on all users of the site, with no option to opt out, according to a person familiar with the matter. (The Guardian first reported on the abuse screener in April, when it was introduced.) These measures have been welcomed by many who advocated for the site to find a better way to address harassment.
But a recent spate of mysteriously disappearing tweets triggered an intense debate last week over how the social networking site handles objectionable content – and to what extent the site has begun to filter content automatically.
Paul Dietrich, an activist who regularly “de-redacts” leaked documents, noticed last week that something weird was happening with one of his tweets about the Drone Papers, a widely discussed report on the Intercept, based on a cache of leaked documents.
Jacob Appelbaum, a computer security researcher who is a member of the Tor Project, had retweeted him. But for some reason, Dietrich’s tweet wasn’t showing up on Appelbaum’s timeline as it normally would – at least when he accessed it from a U.S. IP address. When Dietrich pulled up Appelbaum’s timeline in Tor, the tweet appeared as normal.
Dietrich found more examples after asking for help: a tweet critical of GE that someone said had also briefly disappeared, and another Twitter user who had a similar issue with a tweet criticizing Hillary Clinton. The tweets and retweets weren’t deleted, mind you; they just weren’t showing up where they should be. Both tweets appeared to have disappeared more than a month earlier.
Dietrich wrote an analysis of the whole thing, and said he believed what he was seeing was “deliberate.” The analysis pointed to Twitter’s universal abuse filter, the one that’s currently being tested on all users. In an April announcement, the company said it had “begun to test a product feature to help us identify suspected abusive Tweets and limit their reach,” and that the screenings would use “a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive,” in order to automatically filter some content from view for all users.
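Twitter has published neither its model nor its weights, so any concrete version of this screener is guesswork. As a purely illustrative sketch, the two signals the company did name – account age and similarity to previously flagged tweets – might combine into a score something like this, with invented weights, an invented threshold, and a toy word-overlap measure standing in for whatever text model Twitter actually uses:

```python
from datetime import datetime, timezone, timedelta

# Hypothetical weights and threshold -- Twitter has not published its model.
ACCOUNT_AGE_WEIGHT = 0.4
SIMILARITY_WEIGHT = 0.6
FILTER_THRESHOLD = 0.7

def jaccard_similarity(text_a: str, text_b: str) -> float:
    """Word-level Jaccard overlap: a toy stand-in for Twitter's real text model."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def abuse_score(account_created: datetime, tweet_text: str,
                known_abusive: list[str]) -> float:
    """Combine two signals Twitter named: account age and similarity
    to tweets the safety team previously judged abusive."""
    age_days = (datetime.now(timezone.utc) - account_created).days
    # Newer accounts score higher; the signal fades to zero after ~30 days.
    age_signal = max(0.0, 1.0 - age_days / 30.0)
    sim_signal = max((jaccard_similarity(tweet_text, t) for t in known_abusive),
                     default=0.0)
    return ACCOUNT_AGE_WEIGHT * age_signal + SIMILARITY_WEIGHT * sim_signal

def should_limit_reach(score: float) -> bool:
    # Per the April announcement, flagged tweets have their reach limited,
    # not deleted -- they simply stop appearing in other users' timelines.
    return score >= FILTER_THRESHOLD
```

The key design point, and the source of the controversy, is the last function: the output is not deletion but quiet suppression, which is exactly what makes false positives hard for anyone outside the company to detect.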
It seemed to him, and to others following along, that the disappearing tweets and the mechanisms of the abuse screener likely had some connection.
Motherboard wrote up Dietrich’s findings, eventually prompting Twitter to issue a short statement, which it also sent to The Washington Post last week: “Earlier this week, an issue caused some Tweets to be delivered inconsistently across browsers and geographies. We’ve since resolved the issue though affected Tweets may take additional time to correct.”
Twitter declined to comment further – to The Washington Post and to Motherboard – on what caused the issue in question, or whether it was related to the company’s universal abuse screener tests. It’s not clear, from Twitter’s statement or from the documentation collected so far, how many tweets were affected, or what those tweets had in common.
Dietrich – and many who read his analysis – weren’t convinced by Twitter’s statement. And although Dietrich isn’t yet claiming that he has enough evidence to conclusively prove deliberate censorship, the suspicion that Twitter was “shadowbanning” users blew up among some who have long accused the site of censoring them.
These included Gamergaters and the movement’s diaspora, who have opposed Twitter’s abuse filters more or less since their introduction. The filters arrived several months after Gamergate became notorious for the vicious online harassment that a subset of its supporters directed at the movement’s often female critics, and some Gamergaters interpreted the filters as a direct attack on their entire movement. Their Twitter accounts lit up with the hashtag #ItsOver last week, spreading copies of Dietrich’s analysis.
Over the past several days, we reached out to Dietrich, and to Gilad Lotan, a data scientist who is an expert in social networking algorithms, to try and get a better sense of what’s going on. Both gave us their observations via email.
“To be perfectly honest, sounds like this is happening due to a combination of systems and/or features,” Lotan wrote, after reading and discussing Dietrich’s analysis on a listserv he subscribes to. He doubts that Twitter is allowing brands to censor content they don’t like, “but it may be the case that their automated classifiers which deal with harmful content make mistakes.”
Twitter’s abuse filters work by limiting how far a tweet can spread, Lotan noted, including by limiting when a retweet shows up in a user’s feed. Twitter has also had the ability to withhold tweets and accounts by country, to comply with applicable laws. Twitter labels those tweets and accounts as “withheld” for users in those places.
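The crucial difference between the two mechanisms Lotan describes is transparency. Assuming hypothetical field names for illustration (Twitter's internal representation is not public), the distinction looks like this:

```python
# Hypothetical sketch of the two mechanisms described above: per-country
# withholding (explicitly labeled for the viewer) and abuse-filter reach
# limiting (silent). The field names are illustrative, not Twitter's.

def tweet_display(tweet: dict, viewer_country: str) -> str:
    """Return how a tweet appears to a viewer in a given country."""
    if viewer_country in tweet.get("withheld_in", []):
        return "withheld"   # user sees an explicit "withheld" notice
    if tweet.get("reach_limited"):
        return "hidden"     # tweet is silently absent from timelines
    return "visible"
```

A legally withheld tweet announces its own absence; a reach-limited one does not, which is why observers like Dietrich can only infer the filtering by comparing timelines across locations and accounts.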
The filtering approach is designed to be more moderate than, for instance, automatically deleting potentially abusive tweets from the entire site. But for Dietrich, the limitation approach amounts to a potential for “censorship that doesn’t look like censorship,” and it’s making it very difficult for him to document his suspicions.
Dietrich isn’t convinced that a bug would lead to a tweet disappearing for a full month – as he has observed – and is skeptical that a single glitch could explain the hidden tweets, along with the disappearing favorites and retweets he also documented. There’s also the matter of Twitter’s statement: Why wouldn’t the company just say what the glitch was?
Lotan said he thinks it’s possible Twitter doesn’t know exactly why those posts were disappearing, “which is partly why, I suspect, their response was so short.”
It’s often really difficult for even the companies running these algorithms to track what goes wrong in the event of a glitch. Lotan pointed to an analysis he did of weirdness in Apple’s iTunes top charts rankings, which he also thinks is the result of “algorithmic glitches.”
“I’ve been working on Social Media analysis for years,” Lotan said. “There are so many cases where I remember people blaming the networks for censorship, when the issue was their own expectation of the algorithmic system in question.”
Dietrich is in the process of collecting additional documentation that will help clarify the scope of the issue – along with whether his suspicions about censorship are correct. Both he and Lotan agree that doing so will not be an easy task.
“I wouldn’t have been able to find many of the other cases I’ve noted if my original post hadn’t generated the interest it did. Others pointed them out. My method right now is to stir the pot, find out if other people have had similar experiences, analyze and verify,” he said. Determining the true scale of how many tweets were affected – and whether there’s any rhyme or reason behind it – would be an ambitious project requiring a bot built for that purpose, he added.
“The only way to track these kinds of phenomenons, is in real time, grab as much information as possible, from as many geographies as possible. Due to the nature of what’s being examined, raw data isn’t enough – what’s needed in this case are screenshots,” Lotan said.
He added: “Wish it were easier to track.”
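The monitoring bot Dietrich envisions, combined with Lotan's requirement of simultaneous checks from many geographies, reduces to a simple core: record whether each tweet is visible from each vantage point at roughly the same moment, then flag tweets whose visibility disagrees across vantages. A minimal sketch of that core, with invented vantage names standing in for real proxies or Tor circuits:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical monitoring sketch; the vantage labels are placeholders for
# "as many geographies as possible" (proxies, VPN exits, Tor circuits).

@dataclass
class Observation:
    tweet_id: str
    vantage: str          # e.g. "us-east", "eu-west", "tor"
    visible: bool
    checked_at: datetime

def record(tweet_id: str, vantage: str, visible: bool) -> Observation:
    """Timestamp one visibility check from one vantage point."""
    return Observation(tweet_id, vantage, visible, datetime.now(timezone.utc))

def inconsistent(observations: list[Observation]) -> list[str]:
    """Return IDs of tweets visible from some vantages but not others
    within the same sweep -- the pattern Dietrich reported."""
    by_tweet: dict[str, set[bool]] = {}
    for ob in observations:
        by_tweet.setdefault(ob.tweet_id, set()).add(ob.visible)
    return sorted(tid for tid, seen in by_tweet.items() if seen == {True, False})
```

As Lotan notes, raw data like this would still need corroborating screenshots, since a disputed "visible: False" record is easy to dismiss without visual evidence.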