Blog Ipsa Loquitur

Published under Getting it All Out There

In the immediate aftermath of the (remarkably close) presidential election, you can ask twenty different people what made the difference and you’ll probably get twenty-one different answers. “Facebook failed to prevent the spread of hoaxes at best and propaganda at worst” is a popular answer, and whether or not it determined the outcome of the election, the story behind it has been fascinating to watch. Let’s start in 2012.

Alex Kantrowitz, writing for Buzzfeed News, notes that a photo shared on election night 2012 by the Obama campaign was shared over 500,000 times on Twitter and fewer than 100,000 times on Facebook. Twitter has a fraction of Facebook’s user base, so the disparity is even worse by the proportional “shares per user” metric that I’m sure Facebook uses to measure this stuff.
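
Back-of-the-envelope, the per-user gap is even starker. In the sketch below, the share counts come from the Buzzfeed piece, but the 2012 user figures are ballpark assumptions of mine, included only to illustrate the proportion.

```python
# Rough arithmetic only: the share counts come from the Buzzfeed report quoted
# above; the 2012 monthly-active-user figures are ballpark assumptions added
# purely for illustration, not reported numbers.
twitter_shares, facebook_shares = 500_000, 100_000
twitter_users, facebook_users = 200_000_000, 1_000_000_000  # assumed

print(f"Twitter:  {1000 * twitter_shares / twitter_users:.2f} shares per 1,000 users")
print(f"Facebook: {1000 * facebook_shares / facebook_users:.2f} shares per 1,000 users")
```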

Here’s where we pick up Kantrowitz’s story of How The 2016 Election Blew Up In Facebook’s Face:

Facebook’s transformation began almost immediately after the 2012 election. On November 14, 2012, a full eight days after the vote, a TechCrunch headline proclaimed: “Facebook Finally Launches ‘Share’ Button For The Mobile Feed, Its Version Of ‘Retweet.’”

The move, seemingly minor at the time, set the table for a behavior shift on Facebook, encouraging people to share quickly and without much thought. That in turn helped all forms of content boom across the network. As the TechCrunch article astutely noted: “When people do use the Share button on the web, they often give their own description of a link. But on mobile where typing is more of a pain, a Share button could encourage people to rapidly re-share link after link.”

Now, remember that Facebook doesn’t show you a chronological list of what your social network has posted. An algorithm filters out everything but the “most interesting” content. YouTube does it, Instagram does it, and Twitter has been experimenting with it in a limited capacity for a while now.
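
To make the mechanism concrete, here is a minimal sketch of that kind of engagement-ranked filtering. Everything in it (the fields, the weights, the idea of keeping only the top-scoring posts) is an assumption for illustration; it is not a description of any platform’s actual ranking system.

```python
# Toy sketch of an engagement-ranked feed. The weights, fields, and the
# "keep the top N" cutoff are all assumed for illustration only.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Shares weighted most heavily: resharing is the cheapest, fastest signal.
    return post.likes + 3 * post.comments + 5 * post.shares

def build_feed(posts: list[Post], keep: int) -> list[Post]:
    # A chronological feed would sort by time; this one drops everything
    # but the "most interesting" posts, however "interesting" is defined.
    return sorted(posts, key=engagement_score, reverse=True)[:keep]

posts = [
    Post("cousin_a", "baby photos", likes=40, shares=1, comments=12),
    Post("local_paper", "city budget report", likes=8, shares=2, comments=3),
    Post("meme_page", "outrage-bait headline", likes=150, shares=400, comments=90),
]
for p in build_feed(posts, keep=2):
    print(p.author, engagement_score(p))
```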

Enter the Memes

So Facebook’s new share function supercharged the echo chamber created by the algorithm. And then came what’s variously described as satire, hoaxes, or political propaganda. Again, from Kantrowitz:

It’s no coincidence that Jestin Coler started National Report, his wildly successful fake news site, only a few months after Facebook added the mobile share button. The California-based satirist watched in a bit of amazement as articles from fringe conservative news sites began booming across Facebook, and decided he wanted in on the action. […]

Coler could have reported the news, or simply blogged. But he noticed that fringe political pages would pick up just about anything that helped them make their point, including fabricated news. So National Report began publishing fake news about gun control, abortion, and President Obama, which Coler suspected would set off the right. It sure did. The sites quickly began aggregating his stories. “We really went for the confirmation bias thing,” Coler said. “What we assumed people wanted to hear, that was really what we were selling.”

National Report had impact. Real impact. Shortly after Montgomery fabricated a story about food stamps being used in Colorado to buy weed, the Colorado legislature introduced a real bill to ban the practice.

Hoaxes, satire so dry it inspires punitive legislation, and propaganda notwithstanding, Facebook moved ahead with its Twitter-ification and implemented a version of Twitter’s “Trending Stories” in 2014. A team of Facebook-employed editors compiled the most popular topics, but only from a list of approved sources, which had the side benefit of keeping out hoaxes and propaganda. By May of 2016, Gizmodo reported that some folks who worked at Facebook felt that the Trending Stories editors were institutionally biased against conservatives because they weren’t surfacing stories about:

former IRS official Lois Lerner, who was accused by Republicans of inappropriately scrutinizing conservative groups; Wisconsin Gov. Scott Walker; popular conservative news aggregator the Drudge Report; Chris Kyle, the former Navy SEAL who was murdered in 2013; and former Fox News contributor Steven Crowder.

In a fun bit of foreshadowing, the Gizmodo report trended on Facebook, but users who clicked the Trending Story were directed to coverage of the report on RedState.com and the Faith and Freedom Coalition instead of to Gizmodo directly.

Rise of the Machines

Despite these problems, in August 2016, Facebook fired the editors and announced that an algorithm would select Trending Stories going forward. This is presumably a separate algorithm from the “you want to see baby photos from this cousin but not that cousin” algorithm. And then, just days later, this happened:

Just days after [firing the human editors and putting the algorithm in charge], Facebook’s algorithm chose a very bad, factually incorrect headline to explain to its news-hungry users why Megyn Kelly was trending. The headline, which was visible to anyone who hovered over Megyn Kelly’s name on the Trending list, refers to the Fox News personality as a “traitor” and claims that the cable channel has “Kick[ed] her out for backing Hillary.” (They have not.)

The trending “news” article about Kelly is an Ending the Fed article that is basically just a block quote of a blog post from National Insider Politics, which itself was actually aggregated from a third conservative site, Conservative 101. All three sites use the same “BREAKING” headline.

The Conservative 101 post is three paragraphs long, and basically reads as anti-Kelly fan fiction accusing her of being a “closet liberal” who is about to be removed from the network by a Trump-supporting O’Reilly. It cites exactly one news source as a basis for its speculation: the Vanity Fair piece.

Look, any idiot could tell these trending stories about the not-really-fired Kelly were unsourced fan fiction. Not even good fan fiction; the sort usually reserved for speculation about President Obama’s original Kenyan birth certificate, or the time George Bush Did 9/11.

Okay, but there have been inflammatory headlines and hoaxes since the day the written word was invented. Facebook didn’t invent confirmation bias or echo chambers or anything like that. But there’s an argument to be made that this is an issue of scale. Sam Biddle made that argument in The Intercept days after the election, in a piece titled Facebook, I’m Begging You, Please Make Yourself Better:

Confirmation bias doesn’t begin to describe what Facebook offers partisans in both directions: a limitless, on-demand narrative fix, occasionally punctuated by articles grounded in actual world events, when those suit their preferences. But it was the Trump camp more than its opponent that encouraged this social media story time, because theirs was a candidate who was willing to stand at a podium and recite things he knew to be false, day after day. Trump rallies were a place to propagate conspiracy theories plucked from Facebook (and Reddit, and Twitter, and 4chan, and …), but also to plant what would become the next social media hoax. Trump warned his fans of ISIS commandos creeping across the Mexican border, Hillary’s failing health, cash payoffs to Iran, Benghazi murders, and a litany of other tales that included proper nouns from real life, but little else.

Whether or not Facebook is directly culpable, this much can’t be overstated: The combination of a media literacy nadir combined with an unstoppable firehose of untrue media gave Donald Trump the ability to say virtually anything during a presidential election, without consequence. There’s no reason to believe this won’t continue to happen in every election hereafter, to say nothing of the rest of the world, where Facebook is desperate to plant roots.

Everything has a -gate now

It’s not just the internet. It’s not just elections. Take Pizzagate: part hoax, part ironic conspiracy theory among trolls, and part actual conspiracy theory. If you’re not familiar with the (nonexistent) child sex trafficking ring at the center of Pizzagate, all you need to know is that a guy named Welch was fed a nonstop diet of these hoaxes by algorithms (Facebook’s and Google’s alike) and failed to understand that none of it was true. Long story short, Welch showed up at a pizzeria with a rifle and started shooting.

David Graham in The Atlantic describes the absurd details of the story, but also notes that it provides a glimpse of a frightening future. It’s not just about fake news, it’s about the process of radicalization:

The allegations against Welch are interesting because they follow the archetypal narrative of Islamist terrorism self-radicalization. A young man begins reading on the Internet; over time, he comes to believe non-mainstream sources that are questionable, misleading, or downright false; eventually, he decides to arm himself and take matters into his own hands on behalf of a political cause.

Just as authorities continue to struggle with how to stop self-radicalized Islamist terrorists, it seems inevitable that there will be more incidents like the one on Sunday. Already, at least one fake-news site is positing that the whole episode was a false-flag operation designed to facilitate a crackdown on purveyors of fake news. Across the country, some number of people are reading the story and nodding in agreement. Some of them might even decide to pick up a gun and do something about it.

Graham compellingly argues that hoaxes and propaganda don’t exist in a vacuum, and that there are real-world consequences for all this stuff.

Gotta hear both sides

Let’s leave the pizza and child trafficking bits for a second and get back to the election. Mark Zuckerberg, the CEO of Facebook, wrote after the election that:

only a very small amount [of posts on Facebook are] fake news and hoaxes. The hoaxes that do exist are not limited to one partisan view, or even to politics. Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other.

Zuckerberg says about 1% of posts are fake. I’m not sure how he arrived at that figure, but the Buzzfeed report from the beginning of this post cites several studies showing that hoaxes and propaganda get an outsize number of shares compared to real (read: boring) news stories.

And then there’s Rick Webb, an economist who studies advertising and who has an endlessly quotable rebuttal to Zuckerberg’s thesis, titled I’m Sorry Mr. Zuckerberg, But You Are Wrong:

For the past three years, I’ve been reading all the economic and academic literature around advertising, propaganda and marketing. Every empirical study — from the behavioral economists like Kahneman to the studies that marketing behemoths like Proctor and Gamble commission, to wonky math nerds sequestered away in ivory towers with coveted datasets. Going back to the 1700’s. Time and time again, we hear the same refrain: this stuff works because it works on all of us just a little bit.

We may not even feel it working — which is why, when asked, the average person may not even notice that their preferences between two products they don’t even care about — say Coke or Pepsi — have shifted; they will tell you their preferences haven’t changed after absorbing a marketing assault. But those preferences have changed — on the order of 1–2%. Now, this doesn’t make a huge difference in the soda consumption habits of any one individual, but it makes a huge difference at scale. It’s not hard to see how, in this election, the same forces are at play.

One or two percent might not sound like a lot, but Donald Trump won four states—worth 75 electoral votes—by less than a 2% margin of victory: in Michigan, it was just 0.2%. Again, close election.
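
To put Webb’s one or two percent next to a 0.2% margin, here is a tiny worked example. The turnout number is a round figure I assumed purely for illustration; only the percentages come from the discussion above.

```python
# Illustrative arithmetic: an assumed, round turnout figure, plus the 0.2%
# margin and the 1-2% preference shift cited above.
turnout = 5_000_000          # assumed round number of votes cast in one state
margin = 0.002 * turnout     # a 0.2% margin of victory
shift_low = 0.01 * turnout   # a 1% preference shift
shift_high = 0.02 * turnout  # a 2% preference shift

print(f"margin of victory: {margin:,.0f} votes")
print(f"1% shift: {shift_low:,.0f} votes ({shift_low / margin:.0f}x the margin)")
print(f"2% shift: {shift_high:,.0f} votes ({shift_high / margin:.0f}x the margin)")
```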

Sidebar: You should really just read Webb’s whole post. It’s one of the best takes on Facebook’s problems that I’ve read, and as you might have gathered by now, I’ve read a lot of these pieces since the election.

Webb moves on to Facebook’s steps to correct these problems, and what he sees is not inspiring:

Thus far, all of your proposed solutions to these very real problems are predicated upon a still-unproven hypothesis: That communities at scale can police themselves (“We have already launched work enabling our community to flag hoaxes and fake news…”). This is not true, and has never been true. Just because Silicon Valley has desperately wanted to believe for twenty years that communities can self-police does not make it true.

Point to one example where this has worked. Twitter is a cesspool — one which you are rapidly joining. Google gave up long ago on simple “wisdom of the crowds” algorithms around its search rankings and heavily skews the algorithm against spammers, cheats and other anomalies (and God knows what else, but that’s a different story). At Tumblr, we employed real, live humans to edit. At scale. It worked. Did fake news still appear? Yes. Did it leverage our “curating” and cause us as a platform to help falsities spread? No.

And this is where I think Zuckerberg really missed the boat. Facebook wanted to catch up to Twitter and provide an experience that was just as engaging. And as Biddle described it above, a “limitless, on-demand narrative fix” turns out to be incredibly engaging. People marinate themselves in a toxic stew of hoaxes and satire and propaganda and vote as if all of it were true, or worse: pick up a gun and act as if it were true.

Mable Madness

Webb ends his post by complaining about the algorithm I started this article with a couple thousand words ago: the one that only shows you the most “interesting” portion of the stuff you subscribe to. It’s one thing to filter out cat photos if I’m a dog person, or vice versa. No harm, no foul. But filtering the news isn’t just bad for readers; it’s bad for journalism:

Following a news organization on a social network is a good idea on paper, sure. But you mucked it up with the algorithm. I can’t actually follow The New York Times or Buzzfeed or Fox News on Facebook. “Follow” is a misleading term. All I can do is click a button that says “hey if one of their stories is super popular, maybe think about showing it to me.” It’s like if I subscribed to a newspaper and, rather than being able to read the whole thing, I let my crazy aunt Mable cut it up, annotate it, and only let me read those parts. […]

In short, you’ve set foot into being a player in the news media, with zero interest in actually helping the news media, or in the social responsibilities that come with it. Now sure. You share ad revenue. But only popular stories garner ad revenue. You’ve aggravated the fundamental problem with internet news: only the most sensationalist stories generate the revenue. Whether the income came from subscriptions or ad revenue, in the old days, revenue to a paper was revenue to a paper. […] You could have helped fix this on the internet, but you didn’t. You made it worse.

Facebook’s outsized role in public life, whether or not it comes at the expense of traditional news providers, is enough to raise eyebrows. Its reliance on the gospel of algorithms should raise red flags as well.