
When Facebook was founded, in 2004, the company had few codified rules about what was allowed on the platform and what was not. Charlotte Willner joined three years later, as one of the company’s first employees to moderate content on the site. At the time, she said, the written guidelines were about a page long; around the office, they were often summarized as, “If something makes you feel bad in your gut, take it down.” Her husband, Dave, was hired the following year, becoming one of twelve full-time content moderators. He later became the company’s head of content policy. The guidelines, he told me, “were just a bunch of examples, with no one articulating the reasoning behind them. ‘We delete nudity.’ ‘People aren’t allowed to say nice things about Hitler.’ It was a list, not a framework.” So he wrote a framework. He called the document the Abuse Standards. A few years later, it was given a more innocuous-sounding title: the Implementation Standards.

These days, the Implementation Standards comprise an ever-changing wiki, roughly twelve thousand words long, with twenty-four headings—“Hate Speech,” “Bullying,” “Harassment,” and so on—each of which contains dozens of subcategories, technical definitions, and links to supplementary materials. The wiki lives on an internal software system that only content moderators and select employees can access. The document available to Facebook’s users, the Community Standards, is a condensed, sanitized version of the guidelines. The rule about graphic content, for example, begins, “We remove content that glorifies violence.” The internal version, by contrast, enumerates several dozen types of graphic images—“charred or burning human beings”; “the detachment of non-regenerating body parts”; “toddlers smoking”—that content moderators are instructed to mark as “disturbing,” but not to remove.

Facebook’s stated mission is to “bring the world closer together.” It considers itself a neutral platform, not a publisher, and so has resisted censoring its users’ speech, even when that speech is ugly or unpopular. In its early years, Facebook weathered periodic waves of bad press, usually occasioned by incidents of bullying or violence on the platform. Yet none of this seemed to cause lasting damage to the company’s reputation, or to its valuation. Facebook’s representatives repeatedly claimed that they took the spread of harmful content seriously, indicating that they could manage the problem if they were only given more time. Rashad Robinson, the president of the racial-justice group Color of Change, told me, “I don’t want to sound naïve, but until recently I was willing to believe that they were committed to making real progress. But then the hate speech and the toxicity keeps multiplying, and at a certain point you go, Oh, maybe, despite what they say, getting rid of this stuff just isn’t a priority for them.”

There are reportedly more than five hundred full-time employees working in Facebook’s P.R. department. These days, their primary job is to insist that Facebook is a fun place to share baby photos and sell old couches, not a vector for hate speech, misinformation, and violent extremist propaganda. In July, Nick Clegg, a former Deputy Prime Minister of the U.K. who is now a top flack at Facebook, published a piece on AdAge.com and on the company’s official blog titled “Facebook Does Not Benefit from Hate,” in which he wrote, “There is no incentive for us to do anything but remove it.” The previous week, Guy Rosen, whose job title is vice-president for integrity, had written, “We don’t allow hate speech on Facebook. While we recognize we have more to do . . . we are moving in the right direction.”

It would be more accurate to say that the company is moving in several contradictory directions at once. In theory, no one is allowed to post hate speech on Facebook. Yet many world leaders—Rodrigo Duterte, of the Philippines; Narendra Modi, of India; Donald Trump; and others—routinely spread hate speech and disinformation, on Facebook and elsewhere. The company could apply the same standards to demagogues as it does to everyone else, banning them from the platform when necessary, but this would be financially risky. (If Facebook were to ban Trump, he would surely try to retaliate with onerous regulations; he might also encourage his supporters to boycott the company.) Instead, again and again, Facebook has erred on the side of allowing politicians to post whatever they want, even when this has led the company to weaken its own rules, to apply them selectively, to creatively reinterpret them, or to ignore them altogether.

Dave Willner conceded that Facebook has “no good options,” and that censoring world leaders might set “a worrisome precedent.” At the same time, Facebook’s stated reason for forbidding hate speech, both in the Community Standards and in public remarks by its executives, is that it can lead to real-world violence. Willner went on, “If that’s their position, that hate speech is inherently dangerous, then how is it not more dangerous to let people use hate speech as long as they’re powerful enough, or famous enough, or in charge of a whole army?”

The Willners left Facebook in 2013. (Charlotte now runs the trust-and-safety department at Pinterest; Dave is the head of community policy at Airbnb.) Although they once considered themselves “true believers in Facebook’s mission,” they have become outspoken critics of the company. “As far as I can tell, the bulk of the document I wrote hasn’t changed all that much, surprisingly,” Dave Willner told me. “But they’ve made some big carve-outs that are just absolute nonsense. There’s no perfect approach to content moderation, but they could at least try to look less transparently craven and incoherent.”

In a statement, Drew Pusateri, a spokesperson for Facebook, wrote, “We’ve invested billions of dollars to keep hate off of our platform.” He continued, “A recent European Commission report found that Facebook assessed 95.7% of hate speech reports in less than 24 hours, faster than YouTube and Twitter. While this is progress, we’re conscious that there’s more work to do.” It is possible that Facebook, which owns Instagram, WhatsApp, and Messenger, and has more than three billion monthly users, is so big that its content can no longer be effectively moderated. Some of Facebook’s detractors argue that, given the public’s widespread and justified skepticism of the company, it should have less power over users’ speech, not more. “That’s a false choice,” Rashad Robinson said. “Facebook already has all the power. They’re just using it poorly.” He pointed out that Facebook consistently removes recruitment propaganda by ISIS and other Islamist groups, but that it has been far less aggressive in cracking down on white-supremacist groups. He added, “The right question isn’t ‘Should Facebook do more or less?’ but ‘How is Facebook enforcing its rules, and who is set up to benefit from that?’ ”

In public, Mark Zuckerberg, Facebook’s founder, chairman, and C.E.O., often invokes the lofty ideals of free speech and pluralistic debate. During a lecture at Georgetown University last October, he said, “Frederick Douglass once called free expression ‘the great moral renovator of society.’ ” But Zuckerberg’s actions make more sense when viewed as an outgrowth of his business model. The company’s incentive is to keep people on the platform—including strongmen and their most avid followers, whose incendiary rhetoric tends to generate a disproportionate amount of engagement. A former Facebook employee told me, “Nobody wants to look in the mirror and go, I make a lot of money by giving objectively dangerous people a huge megaphone.” This is precisely what Facebook’s executives are doing, the former employee continued, “but they try to tell themselves a convoluted story about how it’s not actually what they’re doing.”

“Who sanitizes the sanitizer?” Cartoon by Elisabeth McNair

In retrospect, it seems that the company’s strategy has never been to manage the problem of dangerous content, but rather to manage the public’s perception of the problem. In Clegg’s recent blog post, he wrote that Facebook takes a “zero tolerance approach” to hate speech, but that, “with so much content posted every day, rooting out the hate is like looking for a needle in a haystack.” This metaphor casts Zuckerberg as a hapless victim of fate: day after day, through no fault of his own, his haystack ends up mysteriously full of needles. A more honest metaphor would posit a powerful set of magnets at the center of the haystack—Facebook’s algorithms, which attract and elevate whatever content is most highly charged. If there are needles anywhere nearby—and, on the Internet, there always are—the magnets will pull them in. Remove as many as you want today; more will reappear tomorrow. This is how the system is designed to work.

On December 7, 2015, Donald Trump, then a dark-horse candidate for the Republican Presidential nomination, used his Facebook page to promote a press release. It called for “a total and complete shutdown of Muslims entering the United States” and insinuated that Muslims—all 1.8 billion of them, presumably—“have no sense of reason or respect for human life.” By Facebook’s definition, this was clearly hate speech. The Community Standards prohibited all “content that directly attacks people based on race, ethnicity, national origin, or religion.” According to the Times, Zuckerberg was personally “appalled” by Trump’s post. Still, his top officials held a series of meetings to decide whether, given Trump’s prominence, an exception ought to be made.

The discussions were led by Monika Bickert, Elliot Schrage, and Joel Kaplan, all policy executives with law degrees from Harvard. Most of Facebook’s executives were liberal, or were assumed to be. But Kaplan, an outspoken conservative who had worked as a clerk for Justice Antonin Scalia and as a staffer in the George W. Bush White House, had recently been promoted to the position of vice-president of global public policy, and often acted as a liaison to Republicans in Washington, D.C. His advice to Zuckerberg, the Times later reported, was “Don’t poke the bear”—avoid incurring the wrath of Trump and his supporters. Trump’s post stayed up. The former Facebook employee told me, “Once you set a precedent of caving on something like that, how do you ever stop?”

Making the decision to leave Trump’s post up was one thing; justifying the decision was another. According to the Washington Post, Bickert drafted an internal memo laying out the options that she and her colleagues had. They could make “a one-time exception” for Trump’s post, which would establish a narrow precedent that would allow them to reverse course later. They could add an “exemption for political discourse” to the guidelines, which would let them treat politicians’ future utterances on a case-by-case basis. Or they could amend the rules more expansively—for example, by “weakening the company’s community guidelines for everyone, allowing comments such as ‘No blacks allowed’ and ‘Get the gays out of San Francisco.’ ”

At the time, Facebook had fewer than forty-five hundred content moderators. Now there are some fifteen thousand, most of whom are contract workers in cities around the world (Dublin, Austin, Berlin, Manila). They often work at odd hours, to account for time-zone differences, absorbing whatever pops up on their screens: threats, graphic violence, child pornography, and every other genre of online iniquity. The work can be harrowing. “You’re sleep-deprived, your subconscious is completely open, and you’re pouring in the most psychologically radioactive content you can imagine,” Martin Holzmeister, a Brazilian art director who worked as a moderator in Barcelona, told me. “In Chernobyl, they knew, you can run in for two minutes, grab something, and run back out, and it won’t kill you. With this stuff, nobody knows how much anyone can take.” Moderators are required to sign draconian nondisclosure agreements that forbid them to discuss their work in even the most rudimentary terms. In May, thousands of moderators joined a class-action suit against Facebook alleging that the job causes P.T.S.D. (Facebook settled the suit, paying the moderators fifty-two million dollars. Pusateri, the Facebook spokesperson, said that the company provides its moderators with on-site counselling and a twenty-four-hour mental-health hotline.)

One of Facebook’s main content-moderation hubs outside the U.S. is in Dublin, where, every day, moderators review hundreds of thousands of reports of potential rule violations from Europe, Africa, the Middle East, and Latin America. In December, 2015, several moderators in the Dublin office—including some on what was called the MENA team, for Middle East and North Africa—noticed that Trump’s post was not being taken down. “An American politician saying something shitty about Muslims was probably not the most shocking thing I saw that day,” a former Dublin employee who worked on content policy related to the Middle East told me. “Remember, this is a job that involves looking at beheadings and war crimes.” The MENA team, whose members spoke Arabic, Farsi, and several other languages, was not tasked with moderating American content; still, failing to reprimand Trump struck many of them as a mistake, and they expressed their objections to their supervisors. According to Facebook’s guidelines, moderators were to remove any “calls for exclusion or segregation.” An appeal to close the American border to Muslims clearly qualified.

The following day, members of the team and other concerned employees met in a glass-walled conference room. At least one policy executive joined, via video, from the U.S. “I think it was Joel Kaplan,” the former Dublin employee told me. “I can’t be sure. Frankly, I had trouble telling those white guys apart.” The former Dublin employee got the impression that “the attitude from the higher-ups was You emotional Muslims seem upset; let’s have this conversation where you feel heard, to calm you down. Which is hilarious, because a lot of us weren’t even Muslim. Besides, the objection was never, Hey, we’re from the Middle East and this hurts our feelings.” Rather, their message was “In our expert opinion, this post violates the policies. So what’s the deal?”

Facebook claims that it has never diluted its protections against hate speech, but that it sometimes makes exceptions in the case of newsworthy utterances, such as those by people in public office. But a recently acquired version of the Implementation Standards reveals that, by 2017, Facebook had weakened its rules—not just for politicians but for all users. In an internal document called the Known Questions—a Talmud-like codicil about how the Implementation Standards should be interpreted—the rules against hate speech now included a loophole: “We allow content that excludes a group of people who share a protected characteristic from entering a country or continent.” This was followed by three examples of the kind of speech that was now permissible. The first was “We should ban Syrians from coming into Germany.” The next two examples—“I am calling for a total and complete shutdown of Muslims entering the United States” and “We should build a wall to keep Mexicans out of the country”—had been uttered, more or less word for word, by the President of the United States.


In May, 2017, shortly after Facebook released a report acknowledging that “malicious actors” from around the world had used the platform to meddle in the American Presidential election, Zuckerberg announced that the company would increase its global moderation workforce by two-thirds. Mildka Gray, who was then a contract worker for Facebook in Dublin, was moved into content moderation around this time; her husband, Chris, applied and was offered a job almost immediately. “They were just hiring anybody,” he said. Mildka, Chris, and the other contractors were confined to a relatively drab part of Facebook’s Dublin offices. Some of them were under the impression that, should they pass a Facebook employee in the hall, they were to stay silent.