Wednesday 23 May 2018

Fortune/Michal Lev-Ram: Facebook's Fix-It Team

Facebook's Fix-It Team

The social-media giant is deploying AI and thousands of moderators to fight bad content. But will the clean-up efforts create a new mess over user privacy?
By Michal Lev-Ram
May 22, 2018

For 20 minutes on the morning of May 1, Facebook users saw a curious query at the end of every update on their feeds. “Does this post contain hate speech?” they were asked, in small font next to “yes” and “no” buttons. (If they clicked yes, a pop-up box of follow-up prompts emerged; if no, the question disappeared.) Users of the social network have long been able to report disturbing posts, but this in-your-face approach was unsettling. Even more perplexing: The question was appended to all posts, including photos of fuzzy kittens and foodie breakfast check-ins.

It didn’t take long for word—and snark—to spread around the web. “So glad Facebook has finally given me the ability to report every single pro–New York Mets post as ‘hate speech,’ ” quipped one Twitter user. Adding to the embarrassment, May 1 was opening day for F8, the company’s annual developer conference—and a cheerful “coming soon” status update from CEO Mark Zuckerberg himself was among those festooned with the query. “Even on a post from Zuck, it asked, ‘Is this hateful?’ ” says Guy Rosen, VP of product for the social media giant’s safety and security team, who sat down with Fortune later that same day.

As it turns out, the hate-speech feature was a bug—an “uncooked test,” in Rosen’s words, released prematurely. But though he was, broadly speaking, responsible for the blunder, he wasn’t apologetic about the technology. At some point soon, Rosen explained, feedback from such queries (applied smartly and sparingly) could be added to Facebook’s growing stockpile of weapons in its fight against harassment and other offensive or illicit activity that has proliferated on the platform. Those reports, in turn, would help train artificial intelligence systems to distinguish between innocuous fluff and posts that infringe on Facebook’s code of conduct.
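To picture how that loop works in practice, consider a bare-bones supervised-learning setup: user reports supply the labels, post text supplies the features, and a model learns to separate the two classes. The sketch below uses an off-the-shelf scikit-learn pipeline and a handful of invented example posts; it illustrates the general technique only, not Facebook's actual system, whose models and training data are not public.

    # Minimal sketch: learning a "does this violate policy?" classifier from user reports.
    # The example posts and labels below are invented for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Each (text, label) pair mimics a resolved user report: 1 = violating, 0 = benign.
    reports = [
        ("look at this adorable kitten", 0),
        ("best breakfast burrito in town", 0),
        ("those people are subhuman and should be driven out", 1),
        ("congrats on the new job!", 0),
        ("we will hurt anyone from that group who shows up", 1),
    ]
    texts, labels = zip(*reports)

    # Bag-of-words features plus a linear model: the simplest possible stand-in
    # for the far larger systems a platform would actually train.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)

    # New posts get a score; high scores are routed to human review.
    for post in ["cute puppy alert", "drive them all out of our town"]:
        score = model.predict_proba([post])[0, 1]
        print(f"{score:.2f}  {post}")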

In hindsight, it was ironically appropriate that the Zuckerberg post that was tagged that day read in part, “I’m going to share more about the work we’re doing to keep people safe.”

That Facebook needs cleaning up is something only a free-speech absolutist would dispute these days. The platform, with its 2.2 billion users, has an unmatched global reach. And its spreading swamp of harmful content, from election-manipulating “fake news,” to racist and terrorist propaganda, to the streaming of assaults and suicides via Facebook Live, has prompted an unprecedented outcry, with critics in the U.S. and abroad demanding that Facebook police itself better—or be policed by regulators.

    “The technology needs large amounts of training data,” gleaned from users’ posts, to spot “meaningful patterns.”

    Guy Rosen: VP of Product

Rosen is responsible for developing the tech to help Facebook flag hate speech, illicit photos, and other bad behavior.
Jessica Chou for Fortune

The social media giant recently disclosed the mind-boggling quantity of some of these transgressions. In mid-May, Facebook reported that in the first quarter of 2018 alone, it had discovered 837 million instances of spam, false advertising, fraud, malicious links, or promotion of counterfeit goods, along with 583 million fake accounts (all of which it says it disabled). It also found 21 million examples of “adult nudity and sexual activity violations,” 3.4 million of graphic violence, 2.5 million of hate speech, and 1.9 million of terrorist propaganda related to ISIS, al Qaeda, or their affiliates. Facebook’s mission is to bring the world closer together—but this is not the closeness it had in mind.

Part of the fault lies in Facebook’s business model, explains Sarah Roberts, an assistant professor at UCLA’s Graduate School of Education and Information Studies who researches social media: “The only way to encourage user engagement without going broke is to ask people to contribute content for free. But when you ask unknown parties anywhere in the world to express themselves any way they see fit, you will get the full gamut of human expression.”

Granted, it wasn’t such expression that most recently got Facebook in trouble: It was the Cambridge Analytica scandal, in which it emerged that data on some 87 million Facebook users had been obtained by a third-party developer—and used by Donald Trump and other candidates in 2016 to target voters. The privacy breach earned Zuckerberg two grueling days of grilling from Congress. Some legislators, though, were just as eager to press him on fake news and opioid-sales scams. In front of the nation, Zuckerberg conceded, “I agree we are responsible for the content [on Facebook]”—a remarkable admission from a company that for years insisted it was just providing a platform and was thus absolved of blame for what gets said, done, or sold on its network.
2017 Company Profile: Facebook (Fortune 500 rank: 76)
Revenues: $40.7 billion
Profits: $15.9 billion
Employees: 25,105
Total return to shareholders: 31.4%*
*Total return to shareholders is the 2007–2017 annual rate.

By the end of 2018, Facebook plans to double, to nearly 20,000, the number of moderators and other “safety and security” personnel whose job it is to catch and remove inappropriate content. And because even 20,000 people can’t possibly patrol all of the billions of videos, chats, and other posts on the massive network, Facebook is simultaneously developing artificial intelligence technologies to help do so.

Over several weeks this spring, Fortune spent time at Facebook’s Menlo Park, Calif., headquarters to see what that policing might look like. An irony quickly became apparent: For these people and machines to be more effective at their jobs, they will need to rely on increasingly invasive tactics. More humans will need to pore through more of your photos, comments, and updates. To improve their pattern recognition, A.I. tools will need to do the same. (As for your “private” messages, Facebook A.I. already scans those.) And to put particularly high-risk posts in context, humans and machines alike could dig through even more of a user’s history.

Such surveillance “is a bit of a double-edged sword,” says Roberts. It also doesn’t come cheap. Facebook has said that it expects its total expenses in 2018 to grow 50% to 60%, compared with 2017, partly owing to spending on human and A.I. monitoring. (The company doesn’t separately break out its monitoring expenses.)

It’s an outlay Facebook can certainly afford. The company, No. 76 on this year’s Fortune 500, has so far absorbed the recent controversies without taking a serious financial hit. Its first-quarter revenue jumped an impressive 49% year over year, to $12 billion, and its stock, which lost $134 billion in market value after the Cambridge Analytica news broke, now trades near pre-scandal levels. “[Monitoring] would have to be a pretty intensive investment to materially impact the margins of the business,” says John Blackledge, a senior research analyst with Cowen and a longtime follower of the company.

Still, it’s not an investment on which Facebook can skimp. Facebook can afford to lose some squeamish users, but if it drives away advertisers, who account for 98% of its revenue, it’s in big trouble. For now, Facebook says it hasn’t seen tangible disruption to the business, but some brands were voicing concerns over the presence of fake news and criminal activity well before the Cambridge Analytica exposé. And while beefed-up policing could create a safer user experience, that safety could come at an additional price. At a time when Facebook’s handling of private data and the sheer amount of information it holds have come under scrutiny, will consumers trust it to sift through even more of their posts? “We take user privacy very seriously and build our systems with privacy in mind,” Rosen asserts. But the more users know about Facebook’s cleanup efforts, the bigger the mess that might ensue.

On a spring morning in Menlo Park, more than 30 senior staffers gathered in Facebook’s Building 23 to discuss several meaty topics, including how the network should categorize hateful language. Such conversations happen every two weeks, when the company’s Content Standards Forum convenes to discuss possible updates to its rules on what kind of behavior crosses the line between obnoxious and unacceptable.

Five years ago, Facebook assigned the task of running the forum to Monika Bickert, a former assistant U.S. attorney who first joined the company as counsel for its security team. Bickert works closely with Rosen to ensure that Facebook develops tools to help implement the policies her team sets. The duo, in turn, collaborates with Justin Osofsky, whose duties as VP of global operations include overseeing the company’s growing ranks of content reviewers. If Facebook’s worst posts resemble dumpster fires, these three lead the bucket brigade.

Tall, redheaded, and athletic, Bickert exudes a bluntness that’s rare at the social network. She speaks openly about uncomfortable topics like the presence of sex offenders and beheading videos on the platform. And unlike some of the more idealistic executives, she doesn’t seem stunned by the fact that not everyone uses Facebook for good. “The abusive behaviors that we’re addressing are the same ones you would see off-line—certainly as a prosecutor,” she says.

    “The abusive behaviors that we’re addressing are the same ones you would see off-line.”

    Monika Bickert: Head of Global Policy Management

Bickert served more than a decade as a federal prosecutor before joining the social network. Now she runs its Content Standards Forum.
Jessica Chou for Fortune

Bickert’s team includes subject-matter experts and policy wonks whose credentials are as impressive as they are grim. (Think former counterterrorism specialists, rape crisis counselors, and hate-group researchers.) Their collective job is to develop enforceable policies to target and eradicate the nefarious activities they are all familiar with from the real world, while keeping the platform a bastion of (somewhat) free expression. “If Facebook isn’t a safe place, then people won’t feel comfortable coming to Facebook,” says Bickert.

Just defining impropriety is a tall order, however, especially on a global network. Take hate speech. The intention behind specific words can be tricky to parse: A Portuguese term might be considered a racial slur in Brazil, but not in Portugal. Language is also fluid: People in Russia and Ukraine have long used slang to describe one another, but as conflict between them has escalated in recent years, certain words have taken on more hateful meaning.

In an effort to be more transparent about its rules, Facebook in late April publicly released for the first time its entire, 27-page set of “community standards.” Some of its codes tackle racy content with an ultradry vocabulary. (“Do not post content that depicts or advocates for any form of nonconsensual sexual touching, crushing, necrophilia, or bestiality.”) Others are surprising for what they don’t ban. It’s okay to discuss how to make explosives, for example, if it’s for “scientific or educational purposes.” And while anyone convicted of two or more murders is banned from Facebook, a single homicide won’t get someone exiled from the land of the “Like” button—unless the individual posts an update about it. (The reason: While people may commit a single homicide accidentally or in self-defense, it is easier to establish intent with a multiple murderer; meanwhile, no users are allowed to promote or publicize crime of any kind.)

“They will not be definitions that every person will agree with,” Bickert says of the standards. “But [we want to] at least be clear on what those definitions are.” In the spirit of clarity, Facebook plans to host multiple “interactive” summits in the coming months to get feedback on its rules from the public and the press. Ultimately, though, it is up to the company to decide what it allows and what it bans—even in matters of life and death.

In the spring of 2017, Rosen put a team of engineers on “lockdown,” a Facebook practice in which people drop everything to solve a problem. The problem was dire indeed: People were using Facebook Live, a video-streaming service that had just launched, to announce their intention to kill themselves, and even to stream themselves doing it.

Dressed in a black T-shirt, jeans, and slip-on gray shoes, Rosen looks the part of a Silicon Valley techie-dude. But his casual demeanor belies the urgency with which his cross-disciplinary team of a few dozen took on the tragic issues on Facebook Live. “The purpose is to help accelerate work that’s already happening,” says the exec, seated in the same conference room where last year’s lockdown took place. The work that came out of the two-month-long period serves as a case study for how Facebook hopes to police content—and it hints at how powerful and pervasive those efforts could become.

Facebook doesn’t disclose the frequency of suicide attempts on its platform. But broader data hints at the scope of the problem. In the U.S. alone, about 45,000 people a year kill themselves, while some 1.3 million try to do so. The U.S. population stands at 325 million; Facebook’s user base tops 2.2 billion. “The scale at which Facebook is dealing with this has to be enormous,” says Dan Reidenberg, executive director of SAVE, a nonprofit aimed at raising suicide awareness.

During and after the lockdown, with Reidenberg’s help, Facebook designed policies to help those in need while reducing the amount of traumatic content on the platform—and the likelihood of “contagion,” or copycats. Company policy now states that Facebook removes content that “encourages suicide or self-injury, including real-time depictions of suicide,” but also that it has been advised not to “remove live videos of self-harm while there is an opportunity for loved ones and authorities to provide help or resources.”

That’s obviously a difficult distinction to make, which is one reason Facebook also brought on 3,000 moderators—one of its biggest expansions of that workforce—to sift through videos of at-risk users. To support those reviewers, engineers developed better review tools, including “speed controls” that let reviewers skip back and forth within a Live video; automated transcripts of flagged clips; and a “heat map” showing the point in a video where viewer reactions spike, a sign that the streamer might be about to do harm.
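The reaction “heat map” in particular is simple to picture: bucket the timestamps of viewer reactions into fixed-width bins and highlight the bins where activity jumps. The sketch below, with invented timestamps, bin width, and threshold, shows one plausible version of such a spike detector; it is an assumption about the mechanics, not a description of Facebook’s internal tooling.

    # Sketch: flag the moments in a live video where viewer reactions spike.
    # Reaction timestamps (seconds), bin width, and spike threshold are invented.
    from collections import Counter

    reaction_times = [3, 4, 4, 5, 61, 62, 62, 62, 63, 63, 64, 64, 118, 119, 119]
    BIN_SECONDS = 10

    # Count reactions per 10-second bin of the video.
    bins = Counter(int(t // BIN_SECONDS) for t in reaction_times)

    # Call a bin a "spike" if it holds at least 1.5x the average activity.
    average = sum(bins.values()) / len(bins)
    spikes = [b for b, count in bins.items() if count >= 1.5 * average]

    for b in sorted(spikes):
        start, end = b * BIN_SECONDS, (b + 1) * BIN_SECONDS
        print(f"jump reviewer to {start}-{end}s "
              f"({bins[b]} reactions vs. {average:.1f} average per bin)")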
Zuckerberg testifies before the U.S. Senate on April 10.
Ting Shen — Xinhua News Agency/Getty Images
DUDE, WHERE’S MY DATA?
Deploying user data to help advertisers remains key to Facebook’s business model. But big changes are looming.
A lot has happened since mid-March, when a series of news articles exposed how the now-defunct British data firm Cambridge Analytica used improperly obtained information to build voter profiles for upwards of 87 million Facebook users (and then sold the data to political campaigns). The ensuing public outcry forced CEO Mark Zuckerberg to embark on an apology tour in front of lawmakers, users, and the press.
Facebook has since begun to notify the millions of people affected by the breach. It has also implemented more restrictions on the scope of the data currently available to third-party developers, and it kicked off a comprehensive audit of all apps that had access to large amounts of data on the platform, in order to identify other potential abuses. (An update on the audit, published May 14, stated that thousands of apps have already been investigated, with around 200 of them suspended from the platform.)
By the end of May, Facebook and other Internet companies will have to comply with the General Data Protection Regulation (GDPR), a new privacy regime aimed at protecting European Union citizens. The GDPR mandates sweeping consumer controls, including allowing users to access and delete personal information, and enabling them to file class-action-style complaints and to download their data and port it to competitors. GDPR raises the bar in more painful ways too: Corporate violators can be punished by fines of up to 4% of annual revenue. That means just one infraction could cost Facebook $1.6 billion.
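That $1.6 billion figure is just the 4% cap applied to Facebook’s 2017 revenue of $40.7 billion; a quick back-of-the-envelope check:

    # Back-of-the-envelope check of the maximum GDPR fine cited above.
    annual_revenue = 40.7e9           # Facebook's 2017 revenue, in dollars
    max_fine = 0.04 * annual_revenue  # GDPR cap: 4% of annual revenue
    print(f"${max_fine / 1e9:.2f} billion")  # prints "$1.63 billion", i.e. roughly $1.6 billion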

Facebook says that all of its moderators receive ongoing training. The company gave Fortune a rare glimpse of material used to prep moderators (in this case, on what to do when dealing with content about “regulated goods” like prescription drugs and firearms), and it’s admirably extensive. Still, even armed with training and high-tech tools, reviewers in suicide-risk situations have an emotionally taxing and hugely impactful task—and whether they’re equipped to handle the weight of it is an open question. Like other tech companies that use content reviewers, including Twitter and YouTube, Facebook discloses little about their qualifications, or about how much they’re paid. The company does say that all are offered psychological counseling: “The reality of this work is hard,” admits Osofsky, who spoke with Fortune by phone while on paternity leave.

That makes the role of technology even more crucial. Facebook now deploys A.I. systems that can detect suicidal posts; the software searches for phrases like “Are you OK?,” alerts human reviewers, and directs resources to users it deems at risk. It can even alert a user’s friends and urge them to offer help. At some point soon, chatbots could act more directly, sending messages of concern and even automatically calling first responders.
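At its simplest, that screening step amounts to scoring each post against a set of concern signals and escalating anything above a threshold to a person. The sketch below is a deliberately crude, hypothetical version built on a hand-written phrase list and a stand-in escalation step; Facebook’s production system relies on trained models rather than keyword matching, and its internals are not public.

    # Sketch: route posts with possible suicide-risk signals to human reviewers.
    # The phrase list, scoring, and threshold are invented for illustration.
    CONCERN_PHRASES = ["are you ok", "i can't go on", "goodbye everyone", "end it all"]
    ESCALATION_THRESHOLD = 1

    def risk_score(post_text: str, comments: list[str]) -> int:
        """Count concern phrases appearing in the post or in friends' comments."""
        blob = " ".join([post_text, *comments]).lower()
        return sum(phrase in blob for phrase in CONCERN_PHRASES)

    def screen_post(post_text: str, comments: list[str]) -> str:
        if risk_score(post_text, comments) >= ESCALATION_THRESHOLD:
            # A real system would page a trained reviewer and surface resources
            # (hotlines, prompts for friends) rather than just return a string.
            return "escalate to human reviewer"
        return "no action"

    print(screen_post("feeling like i should just end it all", ["Are you OK? Call me."]))
    print(screen_post("great hike this morning", ["beautiful!"]))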

Rosen says that since last year’s lockdown, Facebook has referred more than 1,000 suicide-risk cases worldwide to first responders. Each life saved is a profound achievement, and the increased reliance on software suggests there’s more progress to come. That’s why Reidenberg is optimistic about A.I. tools. “I believe that technology provides us the best hope of reducing the risk of suicide in the world,” he says. Still, he concedes, “This is uncharted territory.”

Assessing risks and parsing posts, on such a global scale, is indeed unprecedented. To do it effectively, Facebook will likely end up accessing and analyzing ever more of our data.

A.I. is already offering a radical shortcut, because it can sift through so much information in such little time. In cases of sexual exploitation and unlawful nudity, for example, software can already detect the presence of nipples. How do A.I. tools learn to do this? By studying lots and lots of photos—our photos—and looking for patterns. But while technology can ascertain an areola, it can’t distinguish between an acceptable depiction of the body part—breastfeeding pics—and so-called “revenge porn,” a major no-no on the platform.
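In machine-learning terms, “studying lots of photos and looking for patterns” means training an image classifier on labeled examples. The toy sketch below, written with PyTorch and random tensors standing in for real photos, shows the shape of that training loop and why it stops at the pixels: the labels say what is visible, not whether the image is consensual. The architecture, data, and labels here are all invented for illustration.

    # Sketch: a tiny image classifier learning "contains nudity?" from labeled photos.
    # Random tensors stand in for real images; labels and architecture are invented.
    import torch
    from torch import nn

    model = nn.Sequential(               # a deliberately tiny convolutional network
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(8, 1),                 # one output, turned into a score via sigmoid
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    # Stand-in training data: 32 "photos" (3x64x64) with 0/1 labels. Crucially,
    # the labels only describe what is visible, not whether the image is
    # consensual breastfeeding or revenge porn -- context pixels alone can't supply.
    images = torch.randn(32, 3, 64, 64)
    labels = torch.randint(0, 2, (32, 1)).float()

    for _ in range(5):                   # a few training steps on the toy batch
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

    score = torch.sigmoid(model(torch.randn(1, 3, 64, 64)))
    print(f"flag probability: {score.item():.2f}")  # a human still has to judge context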
Osofsky, VP of global operations, oversees Facebook’s fast-growing team of content reviewers—including staff who screen posts for hate speech and suicide risks.
Jessica Chou for Fortune

Where tech fails, human surveillance fills the gaps. Here, too, more information and more context can lead to more informed decision-making. Facebook’s content cops point out that its reviewers don’t have access to data that isn’t pertinent to the issue at hand. “The tools our content reviewers use provide a limited view and context based on the type of content that is being reviewed,” says Rosen. The implication: Facebook doesn’t have to know all your business to help you avoid hate speech or get help.

But where to draw that line—how much context is enough context—is a call Facebook will increasingly be making behind the scenes. Whether we can accept the tradeoff, giving the network more latitude to assess our data in exchange for safety, is a question too complex to answer with yes or no buttons.

This article originally appeared in the June 1, 2018 issue of Fortune.
