Fake News

2016-11-16 · Ben Thompson · Stratechery

Between 2001 and 2003, Judith Miller wrote a number of pieces in the New York Times asserting that Iraq had the capability and the ambition to produce weapons of mass destruction. It was fake news.

Looking back, it’s impossible to say with certainty what role Miller’s stories played in the U.S.’s ill-fated decision to invade Iraq in 2003; the same sources feeding Miller were well-connected with the George W. Bush administration’s foreign policy team. Still, it meant something to have the New York Times backing them up, particularly for Democrats who may have been inclined to push back against Bush more aggressively. After all, the New York Times was not some fly-by-night operation, it was the preeminent newspaper in the country, and one generally thought to lean towards the left. Miller’s stories had a certain resonance by virtue of where they were published.

It’s tempting to make a connection between the Miller fiasco and the current debate about Facebook’s fake news problem; the cautionary tale that “fake news is bad” writes itself. My takeaway, though, is the exact opposite: it matters less what is fake and more who decides what is news in the first place.

Facebook’s Commoditization of Media

In Aggregation Theory I described the process by which the demise of distribution-based economic power has resulted in the rise of powerful intermediaries that own the customer experience and commoditize their suppliers. In the case of Facebook, the social network was built on the foundation of pre-existing offline networks that were moved online. Given that humans are inherently social, users started prioritizing time on Facebook over time spent reading, say, the newspaper (or any of the effectively infinite set of alternatives for attention).

It followed, then, that it was in the interest of media companies, businesses, and basically anyone else who wanted the attention of users to be on Facebook as well. This was great for Facebook: the more compelling content it could provide to its users, the more time they would spend on Facebook; the more time they spent on Facebook, the more opportunities Facebook would have to place advertisements in front of them. And, critically, the more time users spent on Facebook, the less time they had to read anything else, further increasing the motivation for media companies (and businesses of all types) to be on Facebook themselves. The result was a virtuous cycle in Facebook’s favor: having the users let Facebook capture the suppliers, which deepened its hold on the users, which in turn increased its power over the suppliers.

This process reduced Facebook’s content suppliers — media companies — into pure commodity providers. All that mattered for everyone was the level of engagement: media companies got ad views, Facebook got shares, and users got the psychic reward of having flipped a bit in a database. Of course not all content was engaging to all users; that’s what the algorithm was for: show people only what they want to see, whether it be baby pictures, engagement announcements, cat pictures, quizzes, or, yes, political news. It was, from Facebook’s perspective — and, frankly, from its users’ perspective — all the same. That includes fake news too, by the way: it’s not that there is anything particularly special about news from Macedonia; it’s that, according to the algorithm, there isn’t anything particularly special about any content beyond the level of engagement it drives.
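
To make the point concrete, here is a purely illustrative sketch of what an engagement-only ranking objective looks like; the `Post` fields, the weights, and the function names are invented for this example and are not Facebook's actual system:

```python
from dataclasses import dataclass

@dataclass
class Post:
    source: str              # publisher, friend, or Macedonian content farm; the ranker does not care
    predicted_clicks: float  # hypothetical model outputs, invented for this sketch
    predicted_shares: float
    predicted_comments: float

def engagement_score(post: Post) -> float:
    # An engagement-only objective: every signal measures attention,
    # none measures accuracy or provenance. The weights are made up.
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_shares
            + 2.0 * post.predicted_comments)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Fake news and real news are indistinguishable to this function;
    # only the engagement each post is expected to drive matters.
    return sorted(posts, key=engagement_score, reverse=True)
```

The particular weights are beside the point; what matters is that nothing in the objective distinguishes a fabricated headline from a baby picture, so any fix has to change what the function optimizes for, which is exactly the intervention the rest of this piece is skeptical of.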

The Media and Trump

There has been a lot of discussion — in the media, naturally — about how the media made President-elect Donald Trump. The story is that Trump would have never amounted to anything had the media not given him billions of dollars’ worth of earned media — basically news coverage (as opposed to paid media, which is advertising) — and that the industry needed to take responsibility. It’s a lovely bit of self-reflection that lets the industry deny the far more discomforting reality: that the media couldn’t have done a damn thing about Trump if they had wanted to.

The reason the media covered Trump so extensively is quite simple: that is what users wanted. And, in a world where media is a commodity, to act as if one has the editorial prerogative not to cover a candidate users want to see is to stare that commoditized reality in the face, without even the clicks that make the medicine easier to take.

Indeed, this is the same reason fake news flourishes: because users want it. These sites get traffic because users click on their articles and share them, because those articles confirm what users already think to be true. Confirmation bias is a hell of a drug — and, as TechCrunch reporter Kim-Mai Cutler so aptly put it on Twitter, it’s a hell of a business model.

Why Facebook Should Fix Fake News

So now we arrive at the question of what to do about fake news. Perhaps the most common sentiment was laid out by Zeynep Tufekci in the New York Times: Facebook should eliminate fake news and the filter effect — the tendency to see news you already agree with — while they’re at it. Tufekci writes:

Mark Zuckerberg, Facebook’s chief, believes that it is “a pretty crazy idea” that “fake news on Facebook, which is a very small amount of content, influenced the election in any way.” In holding fast to the claim that his company has little effect on how people make up their minds, Mr. Zuckerberg is doing real damage to American democracy — and to the world…

The problem with Facebook’s influence on political discourse is not limited to the dissemination of fake news. It’s also about echo chambers. The company’s algorithm chooses which updates appear higher up in users’ newsfeeds and which are buried. Humans already tend to cluster among like-minded people and seek news that confirms their biases. Facebook’s research shows that the company’s algorithm encourages this by somewhat prioritizing updates that users find comforting…

Tufekci offers up a number of recommendations for Facebook, including sharing data with outside researchers to better understand how misinformation spreads and the extent of filter bubbles,[1] acting much more aggressively to eliminate fake news like it does spam and other objectionable content, rehiring human editors, and retweaking its algorithm to favor news balance, not just engagement.

Why Facebook Should Not

All seem reasonable on their face, but in fact Tufekci’s recommendations are radical in their own way.

First, there is no incentive for Facebook to do any of this; while the company denies a report in Gizmodo that it shelved a change to the News Feed algorithm that would have eliminated fake news stories because the change disproportionately affected right-wing sites, the fact remains that the company is heavily incentivized to be perceived as neutral by all sides; anything else would drive away users, a particularly problematic outcome for a social network.[2]

Moreover, any move away from a focus on engagement would, by definition, decrease the time spent on Facebook, and here Tufekci is wrong to claim that this is acceptable because there is “no competitor in sight.” In fact, Facebook is in its most challenging position in a long time: Snapchat is stealing attention from its most valuable demographics, even as the News Feed is approaching saturation in terms of ad load, and there is a real danger Snapchat will beat the company to the biggest prize in consumer tech: TV-centric brand advertising dollars.

There are even more fundamental problems, though: how do you decide what is fake and what isn’t? Where is the line? And, perhaps most critically, who decides? To argue that the existence of some number of fake news items amongst an ocean of other content ought to result in active editing of Facebook content is not simply a logistical nightmare but, at least when it comes to the potential of bad outcomes, far more fraught than it appears.

That goes double for the filter bubble problem: there is a very large leap from arguing Facebook impacts its users’ flow of information via the second-order effects of driving engagement, to insisting the platform actively influence what users see for political reasons. It doesn’t matter that the goal is a better society, as opposed to picking partisan sides; after all, partisans think their goal is a better society as well. Indeed, if the entire concern is the outsized role that Facebook plays in its users’ news consumption, then the far greater fear should be the potential of someone actively abusing that role for their own ends.

I get why top-down solutions are tempting: fake news and filter bubbles are in front of our faces, and wouldn’t it be better if Facebook fixed them? The problem is the assumption that whoever wields that top-down power will just so happen to have the same views I do. What, though, if they don’t? Just look at our current political situation: those worried about Trump have to contend with the fact that the power of the executive branch has been dramatically expanded over the decades; we place immense responsibility and capability in the hands of one person, forgetting that said responsibility and capability is not so easily withdrawn if we don’t like the one wielding it.

To that end I would be far more concerned about Facebook were they to begin actively editing the News Feed; as I noted last week, I’m increasingly concerned about Zuckerberg’s utopian-esque view of the world, and it is a frighteningly small step from influencing the world to controlling the world. Just as bad would be government regulation: our most critical liberty when it comes to a check on tyranny is the freedom of speech, and it would be directly counter to that liberty to put a bureaucrat — who reports to the President — in charge of what people see.

The key thing to remember is that the actual impact of fake news is dependent on who delivers it: sure, those Macedonian news stories aren’t great, but their effect, such as it is, comes from confirming what people already believe. Contrast that to Miller’s stories in the New York Times: because the New York Times was a trusted gatekeeper, many people fundamentally changed their opinions, resulting in a disaster the full effects of which are still being felt. In that light, the potential downside of Facebook coming anywhere close to deciding the news can scarcely be imagined.

Liberty and Laziness

There may be some middle ground here: perhaps some sources are so obviously fake that Facebook can easily exclude them, ideally with full transparency about what they are doing and why. And, to the extent Facebook can share data with outside researchers without compromising its competitive position, it should do so. The company should also provide even more options to users to control their feed if they wish to avoid filter bubbles.

In truth, though, you and I know that few users will bother. And that, seemingly, is what bothers many of Facebook’s critics the most. If users won’t seek out the “right” news sources, well, then someone ought to make them see them. It all sounds great — and, without question, a far more convenient solution to winning elections than actually making the changes necessary to do so — until you remember that that someone you just entrusted with such awesome power could disagree with you, and that the very notion of controlling what people read is the hallmark of totalitarianism.

Let me be clear: I am well aware of the problematic aspects of Facebook’s impact; I am particularly worried about the ease with which we sort ourselves into tribes, in part because of the filter bubble effect noted above (that’s one of the reasons Why Twitter Must Be Saved). But the solution is not the reimposition of gatekeepers done in by the Internet; whatever fixes this problem must spring from the power of the Internet, and the fact that each of us, if we choose, has access to more information and sources of truth than ever before, and more ways to reach out and understand and persuade those with whom we disagree. Yes, that is more work than demanding Zuckerberg change what people see, but giving up liberty for laziness never works out well in the end.


For more about how the Internet has fundamentally changed politics, please see this piece from March, The Voters Decide.

  1. Facebook has done a study about the latter, but as Tufekci and others have documented, the study was full of problems.
  2. Indeed, it wasn’t that long ago that I was making this exact argument in response to those who insisted Facebook would alter the News Feed to serve their own political purposes.
