What happened to the US in 2016 is historic and terrifying, and most of us were unprepared for it. This year saw the rise to power of a vocal strain of right-wing extremism, with white supremacist and fascist leanings, that had not been considered a legitimate part of American politics in over half a century, while the ideological divide between left and right widened so far that it seems to have fractured reality itself.
That’s not hyperbole. In 2016, it became difficult even to achieve consensus about what’s actually real.
These reality distortions took hold deeply in 2016. “Post-truth” was Oxford Dictionaries’ word of the year, reflecting the fact that we now have an incoming White House staffed with people who do not all believe in a consistent version of reality. Trump has appointed many climate change deniers who question basic scientific consensus. His national security adviser is a man whose “flimsy” grasp on factual information got him fired from the Obama administration, and whose son has already been fired from the Trump transition team after spreading fake news that resulted in a real-world armed incident. And a host of Trump’s other staffers have helped perpetuate wild conspiracy theories.
In the middle of all this looms the biggest reality distortion of all: the remaking of white supremacy and fascism into a legitimate modern political platform. White supremacists who have refashioned themselves as the “alt-right” now speak openly about the hope Trump has given them that their dream — the ethnic cleansing of nonwhite Americans from the US and the establishment of a literal Aryan nation — might one day come to pass.
All of this happened, to a large degree, because of the internet — specifically because of social media, and a convergence of elements that played out across social media.
Social media did much of the work of Trump’s campaign. It provided an outlet for Americans to express and spread their deepest fears and darkest opinions. It allowed right-wing extremists to call for an upending of social norms and the reestablishment of a white male-centric society. And although this regressive ideology was built around preexisting white nationalist rhetoric, it found its way into the mainstream disguised as memes, fake news, and populist conspiracies. Social media helped remodel white supremacy into a more palatable ideology centered on fear of the other and a desire for “law and order” that caters to that fear. In other words, social media laid the groundwork for the rise of authoritarianism that carried Trump into office.
These elements all existed online before 2016. But during this year’s election, they all came together in a perfect storm that altered the real world as we know it. It’s necessary to understand how that happened, and how social media was the tool that shifted us toward a post-truth future. Above all, it’s necessary to realize that it wasn’t the inhuman parts of social media — the anonymous fake news suppliers, the robotic algorithms, or the hordes of faceless trolls — that got us here.
All of those elements were powered by human behavior; the trolls were never faceless. In 2016, the way we interact with, understand, and misunderstand social media had real consequences. If we’re to make changes in 2017, we need to start by recognizing that online culture and behavior have real-world consequences.
Biased and morally reprehensible Google search results helped shape an election
Many people don’t think of Google as social media, but Google’s many moving parts, from YouTube to Gmail to Docs, are among the most impactful ways we communicate on the internet. And its essential, algorithm-driven search engine is the main way we receive information on the internet.
But in 2016, if we wanted to understand the main elements at play during this election, we couldn’t always just Google it.
Guardian writer Carole Cadwalladr recently noticed a problem with Google’s autofill search predictions: When she typed the words “are Jews” into the search engine, one of its suggested autocomplete searches was, “are Jews evil?” In response, Google said it removed the suggestion from its search engine, along with similar autocomplete suggestions like “are Muslims bad?” But when I tested Google’s autocomplete suggestions myself, once a week ago and once this week, I was met both times on multiple browsers with queries like “are black people real” and “are black people evil” even when all I typed was “are bl.”
And when I searched “are Muslims evil,” Google helpfully suggested a list of similarly loaded “related” questions to help me expand my thinking.
Google rewards search behavior by algorithmically weighting the links people click on so that they appear more prominently in future searches, which is how we got into this mess: as Wired put it last year, Google search algorithms are racist “because the internet is racist.”
Human behavior — in this case, the racist thought patterns that lead people to type in racist search queries — dictates the results Google returns, which then leads innocent search queries like my “are bl” to return horrifying results. And every time I click on a result that leads to a sketchy or biased source — even though I’m doing so for professional reasons — I’m boosting that page’s ranking and making it that much harder for actually accurate results to reach the next person who searches. The next person who searches for, and receives, harmful distorted information through Google is also helping to boost those inaccurate results, especially if they continue searching for more info. In both cases, the algorithm is learning from our human behavior that these searches are desirable.
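To make that feedback loop concrete, here is a minimal, purely hypothetical sketch of a click-weighted ranker. It is not Google’s actual system; every name and number in it is invented for illustration. It only shows how rewarding clicks can let a frequently clicked result overtake a more accurate one.

```python
from collections import defaultdict

# Hypothetical illustration only: a toy ranker that boosts results based
# on accumulated clicks. Real search ranking uses hundreds of signals;
# the point here is the feedback loop, not the actual algorithm.

click_counts = defaultdict(int)   # result -> clicks seen so far
base_relevance = {                # invented "content quality" scores
    "encyclopedia-entry": 0.9,
    "conspiracy-blog": 0.4,
}

def rank(results):
    """Order results by base relevance plus a small boost per past click."""
    return sorted(
        results,
        key=lambda r: base_relevance[r] + 0.1 * click_counts[r],
        reverse=True,
    )

def record_click(result):
    """Every click, whatever its motive, feeds back into future rankings."""
    click_counts[result] += 1

results = ["encyclopedia-entry", "conspiracy-blog"]
print(rank(results))              # ['encyclopedia-entry', 'conspiracy-blog']

for _ in range(10):               # simulate a wave of biased clicking
    record_click("conspiracy-blog")

print(rank(results))              # ['conspiracy-blog', 'encyclopedia-entry']
```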
Google tries to combat this trend in a variety of ways, from fact-checking digital news to frequent algorithm tweaks to curtail Google bombing and other inaccuracies, but it can’t be everywhere at once. A Google spokesperson told Vox the site constantly works to correct and remove offensive patterns from its algorithms — in fact, it quickly removed “are black people evil” from its autofill predictions following my email to the company — but that it’s an ongoing, nebulous process:
Autocomplete predictions are algorithmically generated based on users’ search activity and interests. Users search for such a wide range of material on the web — 15% of searches we see every day are new. Because of this, terms that appear in Autocomplete may be unexpected or unpleasant. We do our best to prevent offensive terms, like porn and hate speech, from appearing, but we don’t always get it right. Autocomplete isn’t an exact science and we’re always working to improve our algorithms.
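Google’s description of predictions “algorithmically generated based on users’ search activity” can be illustrated with a toy model. The sketch below is an assumption-laden stand-in, not Google’s code: it simply suggests the most frequently logged queries matching a typed prefix, which is enough to show how an ugly query can surface on its own if enough people type it.

```python
from collections import Counter

# Hypothetical toy model of autocomplete: suggest the most frequently
# logged past queries that start with whatever the user has typed.
# The query log below is invented; the offensive entry mirrors the
# suggestion described above, not real Google data.

query_log = Counter({
    "are blueberries good for you": 120,
    "are black holes real": 95,
    "are black people evil": 300,   # frequency, not accuracy, wins
})

def autocomplete(prefix, k=3):
    """Return up to k logged queries matching the prefix, most common first."""
    matches = [(count, query) for query, count in query_log.items()
               if query.startswith(prefix)]
    return [query for count, query in sorted(matches, reverse=True)[:k]]

print(autocomplete("are bl"))
# The most-typed query comes first, regardless of whether it is decent or
# true, unless a blocklist or human review filters it out afterward.
```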
None of this is anyone’s fault — it’s just how the algorithm works. But Google’s algorithmic search results can significantly impact public opinion. Not only does Google’s search ranking influence politics, but it reinforces ideological silos — it helps you agree with yourself. Instead of providing accurate and balanced info, Google allows people to, for example, search for deliberately negative and biased stories against Hillary Clinton.
The worst ramification of the search engine’s learned bias may already have occurred with the 2016 election. In August of 2015, research psychologist Robert Epstein released a widely cited study on how biased search rankings can sway voters. His research suggested that Google search results could shift the vote in November by up to 2.6 million votes. What gives these results their power, according to Epstein, is that people generally assume they are fair and balanced. The public perceives Google search as an arbiter of truth and reality rather than an algorithmic system of learned biases specific to the individual user.
Facebook’s deluge of viral fake news had real-world repercussions
Until August, Facebook’s trending topics were curated by a special team of staffers whose main task was to vet the stories the site’s news algorithm picked up and determine whether they were actually news. But the curation process came under fire in May when Gizmodo reported that “stories covered by conservative outlets (like Breitbart, Washington Examiner, and Newsmax) that were trending enough to be picked up by Facebook’s algorithm were excluded unless mainstream sites like the New York Times, the BBC, and CNN covered the same stories.”
It’s important to note that Facebook’s curation guidelines were clear that trending news had to be about “a real-world event.” Many of the conservative websites in question, despite being popular, have been described by media watchdogs as promoting false or misleading information. (Though it’s worth noting that Breitbart, at least, is on Facebook’s list of over 1,000 media outlets from which curators were told to corroborate stories.) Thus it’s arguable that any human effort to keep these stories out of trending results was less politically driven and more about weeding out inaccurate information that did not correspond to a real-world event — the very problem Facebook would soon have.
Nonetheless, conservatives were swift to accuse Facebook of bias; Facebook issued a response noting its guidelines. But then, in late August — just as the election cycle hit its peak — Facebook fired its news curation team, announcing that it was adopting “a more algorithmically driven process” for managing trending topics.
Chaos erupted as soon as Facebook’s human news curators left the building. Much as racist Google searches led Google to suggest racist searches, recurring keywords in various conversations across Facebook led Facebook’s algorithm to define the topics of those conversations as “newsworthy,” even if the conversations themselves were misleading or their claims blatantly false, such as an entirely false item claiming that Fox anchor Megyn Kelly had been fired for backing Hillary Clinton. And without a human editor to oversee the process, fake news began trending almost immediately.
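A rough, hypothetical sketch (not Facebook’s actual trending system; the posts and function below are invented) shows why removing the human check mattered: a purely frequency-based trender promotes whatever is mentioned most, with no notion of whether the underlying claim is true.

```python
from collections import Counter

# Hypothetical toy "trending" detector: it measures only how often a topic
# is mentioned, so a viral hoax and a real story look exactly the same.

posts = [
    "megyn kelly fired",        # mentions of the entirely false story
    "megyn kelly fired",
    "megyn kelly fired",
    "local election results",   # a real story, mentioned less often
]

def trending(posts, k=1):
    """Return the k most-mentioned topics; popularity is the only signal."""
    return [topic for topic, _ in Counter(posts).most_common(k)]

print(trending(posts))  # ['megyn kelly fired']
# A human curator could check a surging story against a reliable outlet
# before promoting it; a pure frequency count cannot.
```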
It’s difficult to overstate the potential of fake news on Facebook to influence people’s opinions. According to the Pew Research Center, 62 percent of American adults get news on social media, and 44 percent get news on Facebook. And since the election, news outlets have frequently cited this Pew statistic to argue that Facebook’s fake news problem played a serious role in leading Trump to victory.
The alt-right used social media to spread fake news and manufacture reality distortion
On internet forums like 4chan, 8chan, and Reddit, the alt-right has spent years strategically developing methods to mask its sincere ideology, which is nothing less than pure, old-school, KKK levels of white supremacy and racism. These methods include harassment, conspiracy theories, and ironic trolling and meme-ing, as well as the use of male-centric mainstream communities as a recruitment pool.
In 2016 we saw all of these elements surface to influence the election. The Harambe meme became a racist harassment tool. The Pepe the Frog meme became a way for Trump supporters to unite around the ironic belief that his presidency was preordained. The racist alt-right trolls of the Gamergate enclave 8chan drove thousands of hits to Donald Trump’s website — nearly a 600 percent increase over the previous year.
Meanwhile, the conspiracy theories generated and stirred up on social media by the alt-right and other Trump supporters sent people into zealous overdrive. Innumerable false and unsourced claims surrounded Clinton — that she was terminally ill, that she had had FBI agents killed, that she sold weapons to ISIS, that her daughter was stealing money from the Clinton Foundation, that Democratic megadonor George Soros was an evil foreign (Jewish) overlord who was paying protesters and rigging voting machines, that her campaign chairman took part in Satanic rituals — the list is endless.
Normally this is the kind of extremist right-wing propaganda that circulates on the fringes of the internet. But in 2016, it filtered into the mainstream again and again: in the final months of the election, fake news on Facebook outperformed real news, and 17 of the 20 highest-performing fake news stories were pro-Trump or anti-Clinton.
These fake news stories and conspiracy theories helped shift people’s ability to understand or accept what was real. Pew released a study conducted after the election which found that nearly two-thirds of American adults believed fake news caused “a great deal of confusion.” A Stanford study released in late November found that students lacked the critical thinking skills needed to tell real news from fake news. A man shot up a DC pizza joint because he believed it was housing a child sex trafficking ring, the bogus anti-Clinton conspiracy theory known as Pizzagate. Grieving relatives of victims of the Sandy Hook Elementary School shooting were continually harassed by conspiracy theorists convinced the shooting never happened, as were victims of the Pulse nightclub shooting.
This reality distortion has a foundation of hate
All of these distortions of reality tended to favor right-wing extremism and worldviews based on fear and suspicion of social systems. This is because they generally filtered into the mainstream from the alt-right. Even the Macedonian teenagers who spent most of the year monetizing fake news on Facebook took their fake claims directly from fringe alt-right sources. It’s reductive to say that someone shot up a pizza joint because of white supremacy, but ultimately, Pizzagate and the response to it were extensions of the alt-right’s online tactics: ironic, hyperbolic distortions of reality disseminated as memes and alarmist propaganda, but masking real white-supremacist hate and vitriol.
Writing gleefully before the election about how Trump’s campaign had ushered in a new era of hatred for Jewish people, Andrew Anglin, a Nazi whose ultimate goal is the eradication of Jewish people, predicted that the more strident and unapologetic the alt-right’s Nazi rhetoric was, the more it would spread. Memes like Pepe, Anglin writes, “embody the goal of couching idealism within irony” so that it can spread subtly. The “idealism” he’s referring to is a belief system of racialized hatred, divisiveness, and extreme white nationalism.
Through the tools of social media and the conventions of internet culture, the alt-right got mainstream users to enjoy racist memes and listen to extreme right-wing conspiracy theories. In doing so, it worked to create a reality in which fascism and white supremacy were that much more palatable, that much easier to accept as part of American culture.
But perhaps the biggest boost the alt-right got was the direct access to Trump provided by Twitter. In January 2016, marketing research company Little Bird ran an analysis of Trump’s Twitter and found that a majority of the accounts he retweeted in a given week had significant ties to white supremacy. Trump’s ideas fed off the alt-right and the alt-right fed off Trump; he has now brought those ideas, and the reality distortion they signal, with him to the White House.
“The fact is social media did help elect Trump,” the prominent white supremacist Richard Spencer said in a YouTube video after the election. “This is a clear sign that we have power. Even if it’s just in our own little small way — even if it’s just sending a sarcastic tweet or two — we have power, and we’re changing the world.”
The moral is this: What we do on the internet has always mattered
Ironically, all of this may ultimately have happened because we have such a hard time believing that the internet is real. Research has shown that trolling decreases real-world empathy precisely because trolls don’t believe what they do matters offline. Reviewing Karla Mantilla’s book Gendertrolling, critic Richard Seymour writes, “The new inflection that the internet appears to make possible is the trolls’ disavowal of moral commitment, which depends on a strict demarcation between the ‘real’ offline self, and online anonymity. I am not what I do, as long as I do it online.”
The rest of us forget this, too. Many people, including those of us who have spent years reporting on the online phenomena that reached their apotheosis in 2016 and helped usher Trump into office, did not take his campaign seriously until it was too late. Perhaps this is because he was a candidate born from the worst impulses of the internet. After all, we’ve spent a generation teaching ourselves to dismiss those impulses when we meet them online, to write them off as “just trolling,” and to ignore and block manifestations of hate in the form of online harassment. The media, too, dismissed Trump and his noisiest followers again and again.
This is what Nazis like Anglin, who strategized ways to meme their way into the mainstream, were counting on. Anglin notes that one of the key elements of the alt-right’s success in online communities like 4chan was anonymity: members could share their socially unacceptable views under cover of darkness, without being held accountable for their racism, while the other anons around them echoed and emboldened those views. 4chan was the online equivalent of KKK members’ white hoods. And in 2016, the people under the hoods — the trolls — became a movement, and that movement echoed in the mainstream.
Perhaps this was the inevitable result of our failure to understand that what happens on the internet can never stay on the internet; it is and always has been an extension of real life. Fake news can fuel real news cycles. Conspiracy theories can have real-world consequences. A Google search can change real thought patterns. Online trolls are real people. And real people can vote.
But as we move into 2017, social media can also help remind us that none of this is normal. While online communities have brewed morally repugnant extremism, they have also fostered social activism, allowed marginalized voices to speak out against discrimination and hate, and brought together people from all walks of life. We have now seen, and can recognize, the tools extremists used to bring us to this point. And if social media can be deployed to spread disinformation and sanitize hate, it can also be deployed to spread accurate facts and bolster progressive voices for equality and freedom.
In 2016, social media gave rise to some of our worst impulses. In 2017, hopefully it will give rise to more of our best.