The creepiness that is Facebook


Re: The creepiness that is Facebook

Postby Sounder » Sat Feb 16, 2019 6:16 am

https://thefreethoughtproject.com/faceb ... officials/

As the Free Thought Project has previously reported, the phrase “Facebook is a private company” is not accurate, as the company has formed a partnership with an insidious neoconservative “think tank” known as the Atlantic Council, which is directly funded by and made up of groups tied to the pharmaceutical industry, the military-industrial complex, and even government itself. The Atlantic Council dictates to Facebook who is allowed on the platform and who is purged.

Because the Atlantic Council is funded in part by the United States government—and they are making decisions for Facebook—this negates the claim that the company is private.

Since our six million followers and years of hard work were wiped off the platform during the October purge, TFTP has consistently reported on the Atlantic Council and their ties to the social media giant. This week, however, we’ve discovered something just as ominous—the government to Facebook pipeline and revolving door.

It is a telltale sign of a corrupt industry or company when it creates a revolving door between itself and the state. Just as Monsanto has former employees on the Supreme Court and pharmaceutical industry insiders move back and forth between the FDA and their companies, we found that Facebook is doing the same thing.

Below are just a few of the corrupt connections we’ve discovered while digging through the list of current and former employees within Facebook.

Facebook’s Head of Cybersecurity Policy—aka, the man who doles out the ban hammer to anyone he wishes—is Nathaniel Gleicher. Before Gleicher was censoring people at Facebook, he prosecuted cybercrime at the U.S. Department of Justice, and served as Director for Cybersecurity Policy at the National Security Council (NSC) in the Obama White House.

While Facebook may have an interest in seeking out Gleicher’s expertise, this man is an outspoken advocate of tyranny.

After deleting the pages of hundreds of antiwar and pro-peace media and activist outlets in October, Facebook made another giant move to silence dissent last month. This time, the company had no problem noting that it went after pages whose specific missions were “anti-corruption” or “protest” movements. And it was all headed up by Gleicher.

“Some of the Pages frequently posted about topics like anti-NATO sentiment, protest movements, and anti-corruption,” Gleicher wrote in a blog post. “We are constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people.”

Seems totally legit, right?

The list goes on.

In 2017, as the Russian/Trump propaganda ramped up, Facebook hired Joel Benenson, a former top adviser to President Barack Obama and the chief strategist for Hillary Clinton’s failed 2016 presidential campaign, as a consultant.

While filling team Zuck with Obama and Clinton advisers, Facebook hired Aneesh Raman, a former Obama speechwriter who now heads up Facebook’s “economic impact programming.”

Highlighting the revolving-door aspect of Facebook and the US government is Sarah Feinberg, who left the Obama train in 2011 to join Facebook as the director of corporate and strategic communications. She then returned to the Obama administration in 2015 to serve as administrator of the Federal Railroad Administration (FRA).

David Recordon also highlights the revolving door between Facebook and the government. Recordon was the former Director of IT for Obama’s White House. He was also Engineering Director at Facebook prior to his role at the White House, and returned to the position after the 2016 election. He is currently Engineering Director for the Chan-Zuckerberg initiative.

Starting to see a pattern of political influence here? You should. But just in case you don’t, the list goes on.

Meredith Carden—who, you guessed it, came from the Obama administration—joined the Facebook clan last year to be a part of Facebook’s “News Integrity Team.” Now she’s battling fake news on the platform, and as we’ve shown, there is a ridiculous amount of selective enforcement of these so-called “standards.”

In fact, there are dozens of former Obama staffers, advisers, and campaign associates who quite literally fill Facebook’s ranks. It is no wonder the platform has taken such a political shift over the past few years. David Plouffe, Josh W. Higgins, Lauryn Ogbechie, Danielle Cwirko-Godycki, Sarah Pollack, Ben Forer, Bonnie Calvin, and Juliane Sun are just some of the many Facebook execs hailing from the Obama-era White House.

But fret not, right wingers: Facebook likes its neocons too.

Jamie Fly, who was a top adviser to neocon Florida Senator Marco Rubio and who started his career in US political circles as an adviser to the George W. Bush administration, actually took credit for the massive purge of peaceful antiwar pages that took place last October.

“They can invent stories that get repeated and spread through different sites. So we are just starting to push back. Just this last week Facebook began starting to take down sites. So this is just the beginning,” Fly said in December.

Fly backs up his words with the fact that he works with the Atlantic Council, Facebook’s partner in policing the platform, to ensure those dangerous antiwar folks don’t keep pushing their propaganda of peace and community.


And yes, this list goes on.

Joel David Kaplan is Facebook’s vice president of global public policy. Prior to his major role within Facebook, Kaplan took the place of neocon extraordinaire Karl Rove as the White House Deputy Chief of Staff for George W. Bush. Before that, from 2001 to 2003, he was Special Assistant to the President for Policy within the White House Chief of Staff’s office. He then served as Deputy Director of the Office of Management and Budget (OMB).

Myriah Jordan was a special policy assistant in the Bush White House who was hired on as a policy manager for Facebook’s congressional relations team—aka, a lobbyist. Jordan has moved back and forth between the private sector and the US government multiple times, making millions along the way greasing the skids of the state for corrupt employers.

So there you have it. Facebook, who claims to be a private entity, is quite literally made up of and advised by dozens of members of government. We’re ready for a change, are you?

Re: The creepiness that is Facebook

Postby seemslikeadream » Thu Feb 21, 2019 9:11 am

I do NOT have a FB account


Anyone with a Facebook account may wind up deleting it as the filmmakers lay out the crimes at hand, detailing the company’s eerie ability to map voter profiles for thousands of people in a single region using information from seemingly innocuous online surveys.



‘The Great Hack’ Explains How Cambridge Analytica Made Trump President — Sundance Review

This sprawling look at the company behind Brexit and Trump will scare you from ever using Facebook again.

Eric Kohn
As the trauma of the 2016 presidential election gave way to self-reflection, Cambridge Analytica epitomized a unique form of 21st-century villainy. The British technology firm’s covert use of Facebook user data to map voter behavior boosted the Trump campaign and Brexit alike by sowing disinformation, and it only faced comeuppance once a few employees decided to speak out. Co-founded by Steve Bannon, and tied to broader concerns about Facebook’s loose privacy standards, the company’s impact says much about the divisiveness of the last two years.

Cambridge Analytica’s exploitative online behavior became public piecemeal, culminating in the company’s decision to close in early 2018. As a result, the full scope of its impact has been elusive. Netflix production “The Great Hack,” a sprawling 137-minute documentary from the directors of “Startup.com” and “Control Room,” goes to great lengths to resolve that. Billed as a work-in-progress at Sundance, it runs far too long and struggles to find a natural endpoint for its saga, juggling reams of dense information. Yet directors Jehane Noujaim and Karim Amer have assembled an engaging overview that positions the company’s rise and fall at the center of an information technology market changing too fast for anyone to wholly comprehend.

Noujaim and Amer excel at capturing the complex inner workings of companies through a personal lens, and here they find their key subject in Brittany Kaiser, the young and lively former business director for Cambridge Analytica who became a whistleblower last year. Kaiser, who would make a great vehicle for Julia Stiles in the inevitable narrative adaptation, has a remarkable backstory that helps explain just how much the company managed to infiltrate both sides of the political spectrum to achieve its results. A former intern on Obama’s 2008 campaign, she was lured by former Cambridge Analytica CEO Alexander Nix to use her skills for more devious ends with the promise of a better paycheck. In the process, she embraced right-wing politics with the commitment of a method actor.

The movie stays close to Kaiser’s side for much of its running time, following her through an off-the-grid Thailand trip as she recounts her sad story and watching as she deals with the media fallout surrounding her decision to come forward. But it also positions Kaiser as part of a much larger epic, including Nix’s many corrupt schemes to sow disinformation about political candidates to bolster the firm’s results. Captured by Channel 4 cameras boasting about prostitution and bribery for opposition research, Nix became a face of evil as Cambridge Analytica was forced to reckon with its misdeeds in public. But “The Great Hack” positions him as one cog in a massive machine.

The documentary opens with a sweeping dystopian vision, as if setting the stage for a “Black Mirror” episode. In a dense collage of people using their phones in everyday life, the filmmakers show blurry pixels emanating from countless screens, as the impressive CGI visuals explore the emerging industry of predicting behavior through online data. A sea of voices muse on the potential challenges of this breakthrough technology. “When does it turn sour?” one person wonders, as the trillion-dollar data mining industry comes into focus. And the Cambridge Analytica story provides an answer.

Anyone with a Facebook account may wind up deleting it as the filmmakers lay out the crimes at hand, detailing the company’s eerie ability to map voter profiles for thousands of people in a single region using information from seemingly innocuous online surveys. The company got away with its scheme until American professor David Carroll sued Cambridge Analytica to access his data, after he was provided with only a handful of details about his profile despite the company’s claims that it managed some 5,000 data points for each person. As Carroll explains in “The Great Hack,” his personal curiosity led to international outrage; if he hadn’t made the effort, the company’s influence might never have waned in the first place.

But now, with the help of Kaiser and other former employees who have since come forward, “The Great Hack” explains the company’s rapid-fire impact, from its success in carrying Ted Cruz through the Iowa caucus to the way it yielded a new contract with the Trump campaign. At each step of the process, illustrations elucidate the dramatic speed with which Cambridge Analytica solidified its power using the loopholes hiding in plain sight. As former executive Julian Wheatland puts it: “There was always going to be a Cambridge Analytica. It just sucks to me that it was Cambridge Analytica.” It’s hard to tell if he’s regretful or just bummed they got caught.

With its epic length and many overlapping narratives, “The Great Hack” feels like a rough edit in search of an elusive final cut, as its story continues to develop. In its current form, it flies past one logical end point, when Cambridge Analytica declared bankruptcy and shut down in May 2018. Then it consumes another half hour of recent developments, from the Mueller investigation to Nix’s public testimony to further pontifications from Kaiser. This potential epilogue becomes an entire concluding act, and yet the story still dangles on a cliffhanger when the credits finally roll.

No matter what form it takes, “The Great Hack” exists as a giant contradiction sure to evoke strong responses from anyone impacted by its drama, which is basically everyone. As a Netflix production, it has a puzzling identity in the marketplace: Audiences for this revealing movie are poised to discover it through the very same process of hidden algorithms at the center of its alarming narrative. That’s either a bitter irony or exactly right.

Grade: B-

“The Great Hack” premiered in the Documentary Premieres at the 2019 Sundance Film Festival. Netflix releases it later this year.
https://www.indiewire.com/2019/01/the-g ... 202038704/

Re: The creepiness that is Facebook

Postby seemslikeadream » Sat Mar 02, 2019 5:45 pm

Revealed: Facebook’s global lobbying against data privacy laws

Carole Cadwalladr, Sat 2 Mar 2019 09.00 EST
Social network targeted legislators around the world, promising or threatening to withhold investment

Facebook has targeted politicians around the world – including the former UK chancellor, George Osborne – promising investments and incentives while seeking to pressure them into lobbying on Facebook’s behalf against data privacy legislation, an explosive new leak of internal Facebook documents has revealed.

The documents, which have been seen by the Observer and Computer Weekly, reveal a secretive global lobbying operation targeting hundreds of legislators and regulators in an attempt to procure influence across the world, including in the UK, US, Canada, India, Vietnam, Argentina, Brazil, Malaysia and all 28 states of the EU. The documents include details of how Facebook:

• Lobbied politicians across Europe in a strategic operation to head off “overly restrictive” GDPR legislation. They include extraordinary claims that the Irish prime minister said his country could exercise significant influence as president of the EU, promoting Facebook’s interests even though technically it was supposed to remain neutral.

• Used chief operating officer Sheryl Sandberg’s feminist memoir Lean In to “bond” with female European commissioners it viewed as hostile.

• Threatened to withhold investment from countries unless they supported or passed Facebook-friendly laws.

The documents appear to emanate from a court case against Facebook by the app developer Six4Three in California, and reveal that Sandberg considered European data protection legislation a “critical” threat to the company. A memo written after the Davos economic summit in 2013 quotes Sandberg describing the “uphill battle” the company faced in Europe on the “data and privacy front” and its “critical” efforts to head off “overly prescriptive new laws”.

Most revealingly, it includes details of the company’s “great relationship” with Enda Kenny, the Irish prime minister at the time, one of a number of people it describes as “friends of Facebook”. Ireland plays a key role in regulating technology companies in Europe because its data protection commissioner acts for all 28 member states. The memo has inflamed data protection advocates, who have long complained about the company’s “cosy” relationship with the Irish government.

The memo notes Kenny’s “appreciation” for Facebook’s decision to locate its headquarters in Dublin and points out that the new proposed data protection legislation was a “threat to jobs, innovation and economic growth in Europe”. It then goes on to say that Ireland is poised to take on the presidency of the EU and therefore has the “opportunity to influence the European Data Directive decisions”. It makes the extraordinary claim that Kenny offered to use the “significant influence” of the EU presidency as a means of influencing other EU member states “even though technically Ireland is supposed to remain neutral in this role”.

Chief operating officer Sheryl Sandberg’s book Lean In was seen as a possible way of bonding with female legislators, the memos suggest. Photograph: Tom Williams/CQ-Roll Call, Inc.
It goes on: “The prime minister committed to using their EU presidency to achieve a positive outcome on the directive.” Kenny, who resigned from office in 2017, did not respond to the Observer’s request for comment.

John Naughton, a Cambridge academic and Observer writer who studies the democratic implications of digital technology, said the leak was “explosive” in the way it revealed the “vassalage” of the Irish state to the big tech companies. Ireland had welcomed the companies, he noted, but became “caught between a rock and a hard place”. “Its leading politicians apparently saw themselves as covert lobbyists for a data monster.”

A spokesperson for Facebook said the documents were still under seal in a Californian court and it could not respond to them in any detail: “Like the other documents that were cherrypicked and released in violation of a court order last year, these by design tell one side of a story and omit important context.”

The 2013 memo, written by Marne Levine, who is now a Facebook senior executive, was cc-ed to Elliot Schrage, Facebook’s then head of policy and global communications, the role now occupied by Nick Clegg. As well as Kenny, dozens of other politicians, US senators and European commissioners are mentioned by name, including then Indian president Pranab Mukherjee, Michel Barnier, now the EU’s Brexit negotiator, and Osborne.

The then chancellor used the meeting with Sandberg to ask Facebook to invest in the government’s Tech City venture, the memo claims, and Sandberg said she would “review” any proposal. In exchange, she asked him to become “even more active and vocal in the European Data Directive debate and really help shape the proposals”. The memo claims Osborne asked for a detailed briefing and said he would “figure out how to get more involved”. He offered to host a launch for Sandberg’s book in Downing Street, an event that went ahead in spring 2013.

Osborne told the Observer: “I don’t think it’s a surprise that the UK chancellor would meet the chief operating officer of one of the world’s largest companies … Facebook and other US tech firms, in private, as in public, raised concerns about the proposed European Data Directive. To your specific inquiry, I didn’t follow up on those concerns, or lobby the EU, because I didn’t agree with them.”

He noted it was “not a secret” that he had helped launch Sandberg’s book at 11 Downing Street and added: “The book’s message about female empowerment was widely praised, not least in the Guardian and the Observer.”

The documents say Facebook believed it enjoyed a ‘great relationship’ with Ireland’s former prime minister, Enda Kenny. Photograph: Stephen McCarthy/Sportsfile via Getty Images
In fact, the memo reveals that Sandberg’s feminist memoir was perceived as a lobbying tool by the Facebook team and a means of winning support from female legislators for Facebook’s wider agenda.

In a particularly revealing account of a meeting with Viviane Reding, the influential European commissioner for justice, fundamental rights and citizenship, the memo notes her key role as “the architect of the European Data Directive” and describes the company’s “difficult” relationship with her owing to her being, it claims, “not a fan” of American companies.

“She attended Sheryl’s Lean In dinner and we met with her right afterwards,” the memo says, but notes that she felt it was a “very ‘American’ discussion”, a comment the team regarded as a setback since “getting more women into C-level jobs and on boards was supposed to be how they bonded, and it backfired a bit”.

The Davos meetings are just the tip of the iceberg in terms of Facebook’s global efforts to win influence. The documents reveal how in Canada and Malaysia it used the promise of siting a new data centre, and the prospect of job creation, to win legislative guarantees. When the Canadians hesitated over granting the concession Facebook wanted, the memo notes: “Sheryl took a firm approach and outlined that a decision on the data center was imminent. She emphasized that if we could not get comfort from the Canadian government on the jurisdiction issue, we had other options.” The minister supplied the agreement Facebook required by the end of the day, it notes.
https://www.theguardian.com/technology/ ... are_btn_tw

Re: The creepiness that is Facebook

Postby seemslikeadream » Thu Mar 14, 2019 2:15 am


Facebook’s Data Deals Are Under Criminal Investigation

March 13, 2019
Facebook’s offices in Menlo Park, Calif. A federal grand jury is looking at partnerships that gave tech companies broad access to Facebook users’ information. Photograph: Jason Henry for The New York Times
Federal prosecutors are conducting a criminal investigation into data deals Facebook struck with some of the world’s largest technology companies, intensifying scrutiny of the social media giant’s business practices as it seeks to rebound from a year of scandal and setbacks.

A grand jury in New York has subpoenaed records from at least two prominent makers of smartphones and other devices, according to two people who were familiar with the requests and who insisted on anonymity to discuss confidential legal matters. Both companies had entered into partnerships with Facebook, gaining broad access to the personal information of hundreds of millions of its users.

The companies were among more than 150, including Amazon, Apple, Microsoft and Sony, that had cut sharing deals with the world’s dominant social media platform. The agreements, previously reported in The New York Times, let the companies see users’ friends, contact information and other data, sometimes without consent. Facebook has phased out most of the partnerships over the past two years.

“We are cooperating with investigators and take those probes seriously,” a Facebook spokesman said in a statement. “We’ve provided public testimony, answered questions and pledged that we will continue to do so.”


It is not clear when the grand jury inquiry, overseen by prosecutors with the United States attorney’s office for the Eastern District of New York, began or exactly what it is focusing on. Facebook was already facing scrutiny by the Federal Trade Commission and the Securities and Exchange Commission. And the Justice Department’s securities fraud unit began investigating it after reports that Cambridge Analytica, a political consulting firm, had improperly obtained the Facebook data of 87 million people and used it to build tools that helped President Trump’s election campaign.

The Justice Department and the Eastern District declined to comment for this article.

The Cambridge investigation, still active, is being run by prosecutors from the Northern District of California. One former Cambridge employee said investigators questioned him as recently as late February. He and three other witnesses in the case, speaking on the condition of anonymity so they would not anger prosecutors, said a significant line of inquiry involved Facebook’s claims that it was misled by Cambridge.

Facebook’s chief executive, Mark Zuckerberg, testifying before Congress in April. Photograph: Tom Brenner/The New York Times

In public statements, Facebook executives had said that Cambridge told the company it was gathering data only for academic purposes. But the fine print accompanying a quiz app that collected the information said it could also be used commercially. Selling user data would have violated Facebook’s rules at the time, yet the social network does not appear to have regularly checked that apps were complying. Facebook deleted the quiz app in December 2015.

The disclosures about Cambridge last year thrust Facebook into the worst crisis of its history. Then came news reports last June and December that Facebook had given business partners — including makers of smartphones, tablets and other devices — deep access to users’ personal information, letting some companies effectively override users’ privacy settings.

The sharing deals empowered Microsoft’s Bing search engine to map out the friends of virtually all Facebook users without their explicit consent, and allowed Amazon to obtain users’ names and contact information through their friends. Apple was able to hide from Facebook users all indicators that its devices were even asking for data.

Privacy advocates said the partnerships seemed to violate a 2011 consent agreement between Facebook and the F.T.C., stemming from allegations that the company had shared data in ways that deceived consumers. The deals also appeared to contradict statements by Mark Zuckerberg and other executives that Facebook had clamped down several years ago on sharing the data of users’ friends with outside developers.

F.T.C. officials, who spent the past year investigating whether Facebook violated the 2011 agreement, are now weighing the sharing deals as they negotiate a possible multibillion-dollar fine. That would be the largest such penalty ever imposed by the trade regulator.

Facebook has aggressively defended the partnerships, saying they were permitted under a provision in the F.T.C. agreement that covered service providers — companies that acted as extensions of the social network.

The company has taken steps in the past year to tackle data misuse and misinformation. Last week, Mr. Zuckerberg unveiled a plan that would begin to pivot Facebook away from being a platform for public sharing and put more emphasis on private communications.

Nicholas Confessore, Alan Feuer and Rebecca R. Ruiz contributed reporting.

https://www.nytimes.com/2019/03/13/tech ... w0UzLTWihH

Re: The creepiness that is Facebook

Postby seemslikeadream » Fri Mar 22, 2019 8:09 am

Numbers from Facebook on the New Zealand shooting livestream


A Further Update on New Zealand Terrorist Attack


By Guy Rosen, VP, Product Management

We continue to keep the people, families and communities affected by the tragedy in New Zealand in our hearts. Since the attack, we have been working directly with the New Zealand Police to respond to the attack and support their investigation. In addition, people are looking to understand how online platforms such as Facebook were used to circulate horrific videos of the terrorist attack, and we wanted to provide additional information from our review into how our products were used and how we can improve going forward.

Timeline

As we posted earlier this week, we removed the attacker’s video within minutes of the New Zealand Police’s outreach to us, and in the aftermath, we have people working on the ground with authorities. We will continue to support them in every way we can. In light of the active investigation, police have asked us not to share certain details. At present we are able to provide the information below:

• The video was viewed fewer than 200 times during the live broadcast.
• No users reported the video during the live broadcast.
• Including the views during the live broadcast, the video was viewed about 4,000 times in total before being removed from Facebook.
• Before we were alerted to the video, a user on 8chan posted a link to a copy of the video on a file-sharing site.
• The first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended.
• In the first 24 hours, we removed more than 1.2 million videos of the attack at upload, which were therefore prevented from being seen on our services. Approximately 300,000 additional copies were removed after they were posted.

Safety on Facebook Live

We recognize that the immediacy of Facebook Live brings unique challenges, and in the past few years we’ve focused on enabling our review team to get to the most important videos faster. We use artificial intelligence to detect and prioritize videos that are likely to contain suicidal or harmful acts, we improved the context we provide reviewers so that they can make the most informed decisions and we built systems to help us quickly contact first responders to get help on the ground. We continue to focus on the tools, technology and policies to keep people safe on Live.

Artificial Intelligence

Many people have asked why artificial intelligence (AI) didn’t detect the video from last week’s attack automatically. AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove. But it’s not perfect.

AI systems are based on “training data”, which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video. This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems. However, this particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.

AI is an incredibly important part of our fight against terrorist content on our platforms, and while its effectiveness continues to improve, it is never going to be perfect. People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us. That’s why last year we more than doubled the number of people working on safety and security to over 30,000 people, including about 15,000 content reviewers, and why we encourage people to report content that they find disturbing.
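(To make the training-data constraint described above concrete, here is a minimal, hypothetical Python sketch using scikit-learn and synthetic data, not anything Facebook has published: a supervised classifier’s ability to catch a class of content collapses as labeled examples of that class become rare.)

[code]
# Toy sketch, NOT Facebook's system: recall on a rare content class
# depends heavily on how many labeled training examples of it exist.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

for rare_fraction in (0.001, 0.01, 0.1):
    # 100,000 synthetic "posts"; rare_fraction of them belong to the
    # class we want to detect (e.g. a rare kind of violating video).
    X, y = make_classification(
        n_samples=100_000, n_features=20,
        weights=[1 - rare_fraction, rare_fraction],
        flip_y=0.0, random_state=0,
    )
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    recall = recall_score(y_te, model.predict(X_te))
    print(f"{int(y_tr.sum()):>5} rare training examples -> recall {recall:.2f}")
[/code]

In runs like this, recall is respectable when the rare class has thousands of labeled examples and falls sharply when it has only dozens, which is exactly the gap described above for events that are thankfully rare.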

Reporting

During the entire live broadcast, we did not get a single user report. This matters because reports we get while a video is broadcasting live are prioritized for accelerated review. We do this because when a video is still live, if there is real-world harm we have a better chance to alert first responders and try to get help on the ground.

Last year, we expanded this acceleration logic to also cover videos that were very recently live, in the past few hours. Given our focus on suicide prevention, to date we have applied this acceleration only when a recently live video is reported for suicide.

In Friday’s case, the first user report came in 29 minutes after the broadcast began, 12 minutes after the live broadcast ended. In this report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures. As a learning from this, we are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review.
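(A hypothetical sketch of the acceleration logic described above; the field names and priority tiers are invented for illustration, since Facebook has not published its queueing code. Reports on live videos jump the review queue, recently live videos are accelerated only for suicide reports, and everything else waits in line.)

[code]
# Illustrative triage sketch; field names and priority tiers are
# assumptions, not Facebook's published implementation.
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    priority: int
    order: int                        # tie-breaker: earlier reports first
    video_id: str = field(compare=False)
    reason: str = field(compare=False)

def priority_for(is_live, recently_live, reason):
    if is_live:
        return 0                      # harm may still be under way
    if recently_live and reason == "suicide":
        return 1                      # the only recently-live acceleration to date
    return 2                          # standard review queue

ticket = itertools.count()
queue = []

def submit(video_id, reason, is_live=False, recently_live=False):
    rank = priority_for(is_live, recently_live, reason)
    heapq.heappush(queue, Report(rank, next(ticket), video_id, reason))

submit("v1", "graphic_violence", recently_live=True)   # not accelerated
submit("v2", "suicide", recently_live=True)            # accelerated
submit("v3", "hate_speech", is_live=True)              # reviewed first

while queue:
    report = heapq.heappop(queue)
    print(report.video_id, report.reason)              # order: v3, v2, v1
[/code]

Under rules like these, a video reported 12 minutes after its broadcast ended, for a reason other than suicide, lands in the slowest tier; that is the gap the company says it is now re-examining.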

Circulation of the Video

The video itself received fewer than 200 views when it was live, and was viewed about 4,000 times before being removed from Facebook. During this time, one or more users captured the video and began to circulate it. At least one of these was a user on 8chan, who posted a link to a copy of the video on a file-sharing site and we believe that from there it started circulating more broadly. Forensic identifiers on many of the videos later circulated, such as a bookmarks toolbar visible in a screen recording, match the content posted to 8chan.

This isn’t the first time violent, graphic videos, whether live streamed or not, have gone viral on various online platforms. Similar to those previous instances, we believe the broad circulation was a result of a number of different factors:

• There has been coordination by bad actors to distribute copies of the video to as many people as possible through social networks, video sharing sites, file sharing sites and more.
• Multiple media channels, including TV news channels and online websites, broadcast the video. We recognize there is a difficult balance to strike in covering a tragedy like this while not providing bad actors additional amplification for their message of hate.
• Individuals around the world then re-shared copies they got through many different apps and services, for example filming the broadcasts on TV, capturing videos from websites, filming computer screens with their phones, or just re-sharing a clip they received.
People shared this video for a variety of reasons. Some intended to promote the killer’s actions, others were curious, and others actually intended to highlight and denounce the violence. Distribution was further propelled by broad reporting of the existence of a video, which may have prompted people to seek it out and to then share it further with their friends.

Blocking the Video

Immediately after the attack, we designated this as a terror attack, meaning that any praise, support, or representation violates our Community Standards and is not permitted on Facebook. Given the severe nature of the video, we prohibited its distribution even when it was shared to raise awareness, or when only a segment was shared as part of a news report.

In the first 24 hours, we removed more than 1.2 million videos of the attack at upload, which were therefore prevented from being seen on our services. Approximately 300,000 additional copies were removed after they were posted.

We’ve been asked why our image and video matching technology, which has been so effective at preventing the spread of propaganda from terrorist organizations, did not catch those additional copies. What challenged our approach was the proliferation of many different variants of the video, driven by the broad and diverse ways in which people shared it:

First, we saw a core community of bad actors working together to continually re-upload edited versions of this video in ways designed to defeat our detection.

Second, a broader set of people distributed the video and unintentionally made it harder to match copies. Some people may have seen the video on a computer or TV, filmed that with a phone and sent it to a friend. Still others may have watched the video on their computer, recorded their screen and passed that on. Websites and pages, eager to get attention from people seeking out the video, re-cut and re-recorded the video into various formats.

In total, we found and blocked over 800 visually-distinct variants of the video that were circulating. This is different from official terrorist propaganda from organizations such as ISIS, which, while distributed to a hard-core set of followers, is not rebroadcast by mainstream media organizations and is not re-shared widely by individuals.
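(To see why exact matching breaks down against re-encoded copies, here is a small, self-contained Python sketch using a textbook perceptual “difference hash”; it illustrates the general technique, not Facebook’s proprietary matcher.)

[code]
# Exact hashes change completely after a tiny edit; a perceptual
# "dHash" of a frame stays within a small Hamming distance.
import hashlib
from PIL import Image, ImageFilter

def dhash(img, size=8):
    # Downscale to (size+1) x size grayscale, compare adjacent pixels.
    g = img.convert("L").resize((size + 1, size))
    px = list(g.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            bits = (bits << 1) | (px[row * (size + 1) + col] >
                                  px[row * (size + 1) + col + 1])
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

# Synthetic 64x64 gradient stands in for a video frame.
frame = Image.new("L", (64, 64))
frame.putdata([(3 * x + 2 * y) % 256 for y in range(64) for x in range(64)])
variant = frame.filter(ImageFilter.GaussianBlur(1))    # simulated re-encode

print(hashlib.sha256(frame.tobytes()).hexdigest()[:16])    # totally different
print(hashlib.sha256(variant.tobytes()).hexdigest()[:16])  # from each other
print(hamming(dhash(frame), dhash(variant)))               # small, near 0
[/code]

A cryptographic hash flips completely after any edit, while the perceptual hash of a lightly re-encoded frame stays within a few bits; an uploader who crops, mirrors, or re-films a screen can push that distance past any safe threshold, which is one plausible reason 800-plus variants defeated matching that normally works.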

We’re working to better understand techniques that would work for cases like this, with many variants of an original video. For example, as part of our efforts we employed audio matching technology to detect videos which had visually changed beyond our systems’ ability to recognize automatically but which had the same soundtrack.
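(The audio approach can be sketched the same way: fingerprint a soundtrack by its strongest spectrogram peaks, then compare fingerprints as sets. This is a generic, simplified fingerprinting scheme built on our own assumptions, not the audio technology Facebook actually deployed.)

[code]
# Generic audio-fingerprint sketch: loudest FFT bins per time window.
import numpy as np

def fingerprint(signal, win=1024, peaks=2):
    marks = set()
    window = np.hanning(win)
    for i, start in enumerate(range(0, len(signal) - win, win)):
        spectrum = np.abs(np.fft.rfft(signal[start:start + win] * window))
        for b in np.argsort(spectrum)[-peaks:]:        # loudest bins
            marks.add((i, int(b)))                     # (time, freq) landmark
    return marks

def similarity(a, b):
    return len(a & b) / max(1, len(a | b))             # Jaccard overlap

rate = 8000
t = np.arange(rate * 2) / rate
audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
# A quieter, noisier copy (e.g. filmed off a TV) keeps the same landmarks...
rerecorded = 0.6 * audio + 0.05 * np.random.default_rng(0).normal(size=t.size)
# ...while unrelated audio shares almost none.
other = np.sin(2 * np.pi * 200 * t)

print(similarity(fingerprint(audio), fingerprint(rerecorded)))  # near 1.0
print(similarity(fingerprint(audio), fingerprint(other)))       # near 0.0
[/code]

Because the landmarks survive volume changes and moderate noise, a clip filmed off a TV can still match the original soundtrack even when every pixel has changed.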

Next Steps

Our greatest priorities right now are to support the New Zealand Police in every way we can, and to continue to understand how our systems and other online platforms were used as part of these events so that we can identify the most effective policy and technical steps. This includes:

• Most importantly, improving our matching technology so that we can stop the spread of viral videos of this nature, regardless of how they were originally produced. For example, as part of our response last Friday, we applied experimental audio-based technology which we had been building to identify variants of the video.
• Second, reacting faster to this kind of content on a live streamed video. This includes exploring whether and how AI can be used for these cases, and how to get to user reports faster. Some have asked whether we should add a time delay to Facebook Live, similar to the broadcast delay sometimes used by TV stations. There are millions of Live broadcasts daily, which means a delay would not help address the problem due to the sheer number of videos. More importantly, given the importance of user reports, adding a delay would only further slow down videos getting reported, reviewed and first responders being alerted to provide help on the ground.
• Third, continuing to combat hate speech of all kinds on our platform. Our Community Standards prohibit terrorist and hate groups of all kinds. This includes more than 200 white supremacist organizations globally, whose content we are removing through proactive detection technology.
• Fourth, expanding our industry collaboration through the Global Internet Forum to Counter Terrorism (GIFCT). We are experimenting with sharing URLs systematically rather than just content hashes, are working to address the range of terrorists and violent extremists operating online, and intend to refine and improve our ability to collaborate in a crisis.

What happened in New Zealand was horrific. Our hearts are with the victims, families and communities affected by this horrible attack.

https://newsroom.fb.com/news/2019/03/te ... w-zealand/




Facebook acknowledges concerns over Cambridge Analytica emerged earlier than reported

Company confirms suspicions of separate incident following Washington DC attorney general court filing

Julia Carrie Wong
First published on Thu 21 Mar 2019 19.07 EDT
Facebook employees were aware of concerns about “improper data-gathering practices” by Cambridge Analytica months before the Guardian first reported, in December 2015, that the political consultancy had obtained data on millions from an academic. The concerns appeared in a court filing by the attorney general for Washington DC and were subsequently confirmed by Facebook.

The new information “could suggest that Facebook has consistently mislead [sic]” British lawmakers “about what it knew and when about Cambridge Analytica”, tweeted Damian Collins, the chair of the House of Commons digital, culture, media and sport select committee (DCMS), in response to the court filing.

In a statement, a company spokesperson said: “Facebook absolutely did not mislead anyone about this timeline.”

After publication of this article, the spokesperson acknowledged that Facebook employees heard rumors of data scraping by Cambridge Analytica in September 2015. The spokesperson said that this was a “different incident” from Cambridge Analytica’s acquisition of a trove of data about as many as 87m users that has been widely reported on for the past year.

“In September 2015 employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service,” the spokesperson said. “In December 2015, we first learned through media reports that Kogan sold data to Cambridge Analytica, and we took action. Those were two different things.”

The filing raised questions about when Facebook first learned about the misuse of personal data by Cambridge Analytica, the now defunct political consultancy.

This timeline has long been complicated by the different corporate entities involved in Cambridge Analytica’s data misuse. The data of as many as 87m people was extracted from Facebook by GSR, a company formed by the former Cambridge University academic Aleksandr Kogan, then transferred to Cambridge Analytica’s parent company, SCL.

The data extraction, though highly controversial, was not against Facebook’s policies, which at the time allowed GSR to take information not only from users who consented but from all their friends. It was the transfer of the data from GSR to SCL that was against Facebook’s policies, the company has long maintained.

After Cambridge Analytica’s acquisition of the data for political purposes became an international scandal, Mark Zuckerberg stated that Facebook “learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica” in 2015. The article detailing this data sharing was published on 11 December 2015.

The attorney general for Washington DC sued Facebook over its failure to protect user data from Cambridge Analytica in late 2018. Facebook has sought to have the case dismissed and to seal a document – currently redacted in filings – that the DC attorney general cited as evidence in his opposition to the motion to dismiss.

The document is “an email exchange between Facebook employees discussing how Cambridge Analytica (and others) violated Facebook’s policies”, according to a Monday court filing by the DC attorney general.

Those emails include “candid employee assessments that multiple third-party applications accessed and sold consumer data in violation of Facebook’s policies during the 2016 United States Presidential Election”, according to the filing. “It also indicates Facebook knew of Cambridge Analytica’s improper data-gathering practices months before news outlets reported on the issue,” the filing continues.

The filing further asserts that “as early as September 2015, a DC-based Facebook employee warned the company that Cambridge Analytica” was doing something that is currently redacted and “received responses” – also redacted – relating to “Cambridge Analytica’s data-scraping practices”.

A Facebook spokesperson clarified after publication that there may have been two separate instances of data misuse by Cambridge Analytica. The data-scraping referenced in the filing was not the same data harvesting that has become synonymous with Cambridge Analytica’s name over the past year, the company said.

“Facebook was not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015, as we have testified under oath,” the Facebook spokesperson said. “These were two different incidents.”

And while the email exchange appears to have been referenced in a report by parliament’s DCMS committee, the committee seems to have mistakenly assumed that the emails referred to the Kogan/GSR data, and not a separate Cambridge Analytica scraping incident.

The report states: “We were keen to know when and which people working at Facebook first knew about the GSR/Cambridge Analytica breach. The ICO [Information Commissioner’s Office] confirmed, in correspondence with the Committee, that three ‘senior managers’ were involved in email exchanges earlier in 2015 concerning the GSR breach before December 2015, when it was first reported by The Guardian. At the request of the ICO, we have agreed to keep the names confidential, but it would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case.”

Facebook will face off with the District of Columbia in court on Friday, where a judge will hear arguments over the company’s motion to dismiss the lawsuit. The judge may also decide then whether to keep the email exchange sealed.

This article was updated to reflect new information provided by Facebook after publication
https://www.theguardian.com/uk-news/201 ... urt-filing

Re: The creepiness that is Facebook

Postby seemslikeadream » Thu Mar 28, 2019 11:35 am

Facebook fights for dismissal of D.C. privacy protection suit

The Facebook Inc. application is displayed for a photograph on an Apple Inc. iPhone in Washington, D.C., U.S., on Wednesday, March 21, 2018. Facebook is struggling to respond to growing demands from Washington to explain how the personal data of millions of its users could be exploited by a consulting firm that helped Donald Trump win the presidency. Photograph: Bloomberg

Facebook Inc. asked a Washington judge to throw out a lawsuit by the District of Columbia accusing it of failing to protect users’ data, calling the suit a “broadside” against the company and saying the court lacks proper jurisdiction.

The political-consulting firm Cambridge Analytica used Facebook user data to target and influence voters on behalf of Donald Trump’s 2016 presidential election campaign. District of Columbia Attorney General Karl Racine sued Facebook in December, accusing it of allowing a third party to gain access to the personal information of some 70 million Americans -- 340,000 of whom live in the capital -- in violation of D.C. consumer protection laws.

Earlier this month, his office filed documents it said show Facebook was aware of the breach months before it came to light.

The suit seeks a court order barring Facebook from continuing the practice, as well as unspecified monetary damages.

“Facebook believes that this case is not properly before this court,” defense attorney Joshua Lipshutz told D.C. Superior Court Judge Fern Saddler on Friday afternoon, adding that if she reread the complaint, she would “search in vain” for an allegation of company misconduct that would show the case belonged in her court.

At least three states are investigating the Menlo Park, California-based company’s user data-protection practices, as is a federal grand jury in New York. The U.S. Federal Trade Commission is investigating Facebook for its role in the Cambridge Analytica saga. On Thursday, the agency announced a broader probe of tech company data collection practices.

While Facebook has acknowledged the firestorm set off by last year’s Cambridge Analytica revelations, it attacked the Racine suit in court papers filed last month as an unwarranted “broadside” that duplicated litigation elsewhere and was lodged without a legitimate connection between the company and the district.

“This is the wrong case, in the wrong place, at the wrong time, and it should be dismissed,” Facebook’s defense lawyers said then.

D.C. lawyers struck back, asserting that Facebook’s Washington-based employees played a lead role in responding to the uproar over how the company’s user data migrated to Cambridge Analytica through user-installed third-party apps.

“Nearly half of all D.C. residents are Facebook consumers, and the District’s complaint alleges that Facebook made unlawful misrepresentations and omissions to its vast D.C. consumer base in the course of monetizing their data into millions of dollars in advertising revenue,” according to papers filed by Racine’s office.

When the District included in its court filings documents it said support its claim that Facebook has more than enough contact with D.C. to warrant its court’s jurisdiction, the company asked Saddler to keep that information under seal.

The documents indicate that “Facebook knew of Cambridge Analytica’s improper data-gathering practices months before news outlets reported on the issue,” Racine’s lawyers said in a March 18 filing.

Saddler said she plans to rule by the end of April.

On Thursday, Racine said he was unveiling legislation to bolster legal protections for the personal data of district residents.

The hearing comes shortly after Facebook disclosed a flaw that let its employees see the passwords of hundreds of millions of users and said it has now fixed the bug.

Facebook’s privacy problems date to the 2006 introduction of its news feed, which became a cultural fixture around the world. Back then, users were surprised and indignant to see their personal photos and posts suddenly out in the open. Now, as pressure on the company mounts, Chief Executive Officer Mark Zuckerberg has declared that it will focus on private messaging and small-group chats.

The case is District of Columbia v. Facebook Inc., 2018 CA 008715 B, District of Columbia Superior Court (Washington).
https://www.bnnbloomberg.ca/facebook-fi ... -1.1233408


Facebook Is Accused of Knowing Cambridge Mined Its User Data
(Bloomberg) -- Facebook Inc. employees knew the Cambridge Analytica political-consulting firm had mined users’ personal data but didn’t tell those users before the news became public, an attorney for the District of Columbia said in court.

D.C. Assistant Deputy Attorney General Jimmy Rock made the claim while arguing against the social network’s bid to dismiss the district’s consumer protection lawsuit against it. The British consultancy used the data to target and influence voters on behalf of Donald Trump’s 2016 presidential election campaign.

District of Columbia Attorney General Karl Racine sued Facebook in December, accusing it of allowing Cambridge University researcher Aleksandr Kogan to gain access to the personal information of some 70 million Americans, 340,000 of whom live in the capital. The company’s failure to safeguard that information allegedly violated D.C. consumer protection laws. The suit seeks a court order barring Facebook from continuing such practices, as well as unspecified monetary damages.

“Facebook believes that this case is not properly before this court,” defense attorney Joshua Lipshutz told D.C. Superior Court Judge Fern Saddler on Friday afternoon, adding that if she reread the complaint, she would “search in vain” for an allegation of company misconduct that would show the case belonged in her court.

More: Social Media Giants Duck for Cover as Washington Grows ‘Fed Up’

At least three states are investigating the Menlo Park, California-based company’s user data-protection practices, as is a federal grand jury in New York. The U.S. Federal Trade Commission is investigating Facebook for its role in the Cambridge Analytica saga. On Thursday, the agency announced a broader probe of tech company data collection practices. Also on Thursday, Racine said he was unveiling legislation to bolster legal protections for the personal data of district residents.

While Facebook has acknowledged the firestorm set off by last year’s Cambridge Analytica revelations, it attacked the Racine suit in court papers filed last month as an unwarranted “broadside” that duplicated litigation elsewhere and was lodged without a legitimate connection between the company and the district.

D.C. lawyers struck back, asserting that Facebook’s Washington-based employees played a lead role in responding to the uproar over how the company’s user data migrated to Cambridge Analytica through a user-installed third-party app.

“Nearly half of all D.C. residents are Facebook consumers, and the District’s complaint alleges that Facebook made unlawful misrepresentations and omissions to its vast D.C. consumer base in the course of monetizing their data into millions of dollars in advertising revenue,” according to papers filed by Racine’s office.

The company reaped $10 million in ad revenue from the District of Columbia during the last three months of 2018, Rock said in court Friday.

More: Facebook to Block Discriminatory Ads in ‘Historic’ Legal Accord

When the District included in its court filings documents it said support its claim that Facebook has more than enough contact with D.C. to warrant its court’s jurisdiction, the company asked Saddler to keep that information under seal.

The documents indicate that “Facebook knew of Cambridge Analytica’s improper data-gathering practices months before news outlets reported on the issue,” Racine’s lawyers said in a March 18 filing.

Company employees in Washington knew the firm’s activities were “a problem,” Rock said, calling it “material information that should be disclosed to consumers.”

The incursions started in 2013, about 18 months before Facebook employees began discussing them in a string of email messages Racine’s office submitted with court papers, the company’s lawyer said.

“Facebook was not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015, as we have testified under oath,” a Facebook spokesperson said in an email to Bloomberg News on Friday, referring to the creator of the app that mined Facebook user data and shared it with Cambridge. Facebook “first learned through media reports that Kogan sold data to Cambridge Analytica, and we took action,” the spokesperson said.

More: Facebook Says Millions of Passwords Were Visible Internally

During the hourlong hearing, Lipshutz argued that Facebook had disclosed to its users that their information would be shared with third parties, a practice they would be familiar with, he said. Cambridge obtained the user info from the third-party app installed by some users, This Is Your Digital Life, a possibility “clearly and completely disclosed” by Facebook user policy, Lipshutz said.

Rock countered that the agreements themselves were internally contradictory and inherently misleading.

“Facebook failed to enforce its policies against third parties,” he told Saddler. The company told users that while it required those parties to respect user privacy, it wasn’t responsible for what they did with the information they gleaned. “Facebook cannot have it both ways,” he said.

“Today, we made strong arguments before the D.C. Superior Court for why this lawsuit on behalf of harmed District residents should continue to move forward in our local courts,” Racine said in a statement.

Saddler said she plans to rule by the end of April.

The case is District of Columbia v. Facebook Inc., 2018 CA 008715 B, District of Columbia Superior Court (Washington).

https://news.yahoo.com/facebook-fights- ... 14814.html

Re: The creepiness that is Facebook

Postby Karmamatterz » Fri Mar 29, 2019 1:40 pm

Anyone with a Facebook account may wind up deleting it as the filmmakers lay out the crimes at hand, detailing the company’s eerie ability to map voter profiles for thousands of people in a single region using information from seemingly innocuous online surveys.


This can be done without Facebook, though maybe not quite as easy peasy. All of your purchases, age, education, employment history, type of home, number of children, marital status, gender, school district, etc., can be used to create psychographic profiles. This has been going on since well before Facebag existed. Still, Facebag sucks and is a menace to our society.
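(For the curious, the mechanics are mundane. Below is a rough Python sketch of how a data broker might turn records like those into targetable “psychographic” segments; the attributes, segment count, and tooling are illustrative assumptions, not any particular broker’s pipeline.)

[code]
# Invented attributes; illustrative only. Encode consumer records as
# vectors, normalize them, and cluster into targetable segments.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
# columns: age, income, children, years_of_education, purchases_per_month
records = np.column_stack([
    rng.integers(18, 80, n),
    rng.normal(55_000, 20_000, n),
    rng.integers(0, 4, n),
    rng.integers(10, 20, n),
    rng.poisson(12, n),
])

X = StandardScaler().fit_transform(records)
segments = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(segments))      # size of each targetable segment
[/code]

Swap the synthetic rows for a purchased consumer file and the same few lines yield segments an ad buyer can aim at, no Facebook login required.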

Re: The creepiness that is Facebook

Postby seemslikeadream » Tue Apr 16, 2019 10:48 am

15 Months of Fresh Hell Inside Facebook
Scandals. Backstabbing. Resignations. Record profits. Time Bombs. In early 2018, Mark Zuckerberg set out to fix Facebook. Here's how that turned out.


The streets of Davos, Switzerland, were iced over on the night of January 25, 2018, which added a slight element of danger to the prospect of trekking to the Hotel Seehof for George Soros’ annual banquet. The aged financier has a tradition of hosting a dinner at the World Economic Forum, where he regales tycoons, ministers, and journalists with his thoughts about the state of the world. That night he began by warning in his quiet, shaking Hungarian accent about nuclear war and climate change. Then he shifted to his next idea of a global menace: Google and Facebook. “Mining and oil companies exploit the physical environment; social media companies exploit the social environment,” he said. “The owners of the platform giants consider themselves the masters of the universe, but in fact they are slaves to preserving their dominant position ... Davos is a good place to announce that their days are numbered.”

Across town, a group of senior Facebook executives, including COO Sheryl Sandberg and vice president of global communications Elliot Schrage, had set up a temporary headquarters near the base of the mountain where Thomas Mann put his fictional sanatorium. The world’s biggest companies often establish receiving rooms at the world’s biggest elite confab, but this year Facebook’s pavilion wasn’t the usual scene of airy bonhomie. It was more like a bunker—one that saw a succession of tense meetings with the same tycoons, ministers, and journalists who had nodded along to Soros’ broadside.

Over the previous year Facebook’s stock had gone up as usual, but its reputation was rapidly sinking toward junk bond status. The world had learned how Russian intelligence operatives used the platform to manipulate US voters. Genocidal monks in Myanmar and a despot in the Philippines had taken a liking to the platform. Mid-level employees at the company were getting both crankier and more empowered, and critics everywhere were arguing that Facebook’s tools fostered tribalism and outrage. That argument gained credence with every utterance of Donald Trump, who had arrived in Davos that morning, the outrageous tribalist skunk at the globalists’ garden party.

CEO Mark Zuckerberg had recently pledged to spend 2018 trying to fix Facebook. But even the company’s nascent attempts to reform itself were being scrutinized as a possible declaration of war on the institutions of democracy. Earlier that month Facebook had unveiled a major change to its News Feed rankings to favor what the company called “meaningful social interactions.” News Feed is the core of Facebook—the central stream through which flow baby pictures, press reports, New Age koans, and Russian-made memes showing Satan endorsing Hillary Clinton. The changes would favor interactions between friends, which meant, among other things, that they would disfavor stories published by media companies. The company promised, though, that the blow would be softened somewhat for local news and publications that scored high on a user-driven metric of “trustworthiness.”

Davos provided a first chance for many media executives to confront Facebook’s leaders about these changes. And so, one by one, testy publishers and editors trudged down Davos Platz to Facebook’s headquarters throughout the week, ice cleats attached to their boots, seeking clarity. Facebook had become a capricious, godlike force in the lives of news organizations; it fed them about a third of their referral traffic while devouring a greater and greater share of the advertising revenue the media industry relies on. And now this. Why? Why would a company beset by fake news stick a knife into real news? And what would Facebook’s algorithm deem trustworthy? Would the media executives even get to see their own scores?

Facebook didn’t have ready answers to all of these questions; certainly not ones it wanted to give. The last one in particular—about trustworthiness scores—quickly inspired a heated debate among the company’s executives at Davos and their colleagues in Menlo Park. Some leaders, including Schrage, wanted to tell publishers their scores. It was only fair. Also in agreement was Campbell Brown, the company’s chief liaison with news publishers, whose job description includes absorbing some of the impact when Facebook and the news industry crash into one another.

But the engineers and product managers back at home in California said it was folly. Adam Mosseri, then head of News Feed, argued in emails that publishers would game the system if they knew their scores. Plus, they were too unsophisticated to understand the methodology, and the scores would constantly change anyway. To make matters worse, the company didn’t yet have a reliable measure of trustworthiness at hand.

Heated emails flew back and forth between Switzerland and Menlo Park. Solutions were proposed and shot down. It was a classic Facebook dilemma. The company’s algorithms embraid choices so complex and interdependent that it’s hard for any human to get a handle on it all. If you explain some of what is happening, people get confused. They also tend to obsess over tiny factors in huge equations. So in this case, as in so many others over the years, Facebook chose opacity. Nothing would be revealed in Davos, and nothing would be revealed afterward. The media execs would walk away unsatisfied.

After Soros’ speech that Thursday night, those same editors and publishers headed back to their hotels, many to write, edit, or at least read all the news pouring out about the billionaire’s tirade. The words “their days are numbered” appeared in article after article. The next day, Sandberg sent an email to Schrage asking if he knew whether Soros had shorted Facebook’s stock.

Far from Davos, meanwhile, Facebook’s product engineers got down to the precise, algorithmic business of implementing Zuckerberg’s vision. If you want to promote trustworthy news for billions of people, you first have to specify what is trustworthy and what is news. Facebook was having a hard time with both. To define trustworthiness, the company was testing how people responded to surveys about their impressions of different publishers. To define news, the engineers pulled a classification system left over from a previous project—one that pegged the category as stories involving “politics, crime, or tragedy.”

That particular choice, which meant the algorithm would be less kind to all kinds of other news—from health and science to technology and sports—wasn’t something Facebook execs discussed with media leaders in Davos. And though it went through reviews with senior managers, not everyone at the company knew about it either. When one Facebook executive learned about it recently in a briefing with a lower-level engineer, they say they “nearly fell on the fucking floor.”

The confusing rollout of meaningful social interactions—marked by internal dissent, blistering external criticism, genuine efforts at reform, and foolish mistakes—set the stage for Facebook’s 2018. This is the story of that annus horribilis, based on interviews with 65 current and former employees. It’s ultimately a story about the biggest shifts ever to take place inside the world’s biggest social network. But it’s also about a company trapped by its own pathologies and, perversely, by the inexorable logic of its own recipe for success.

Facebook’s powerful network effects have kept advertisers from fleeing, and overall user numbers remain healthy if you include people on Instagram, which Facebook owns. But the company’s original culture and mission kept creating a set of brutal debts that came due with regularity over the past 16 months. The company floundered, dissembled, and apologized. Even when it told the truth, people didn’t believe it. Critics appeared on all sides, demanding changes that ranged from the essential to the contradictory to the impossible. As crises multiplied and diverged, even the company’s own solutions began to cannibalize each other. And the most crucial episode in this story—the crisis that cut the deepest—began not long after Davos, when some reporters from The New York Times, The Guardian, and Britain’s Channel 4 News came calling. They’d learned some troubling things about a shady British company called Cambridge Analytica, and they had some questions.

II.

It was, in some ways, an old story. Back in 2014, a young academic at Cambridge University named Aleksandr Kogan built a personality questionnaire app called thisisyourdigitallife. A few hundred thousand people signed up, giving Kogan access not only to their Facebook data but also—because of Facebook’s loose privacy policies at the time—to that of up to 87 million people in their combined friend networks. Rather than simply use all of that data for research purposes, which he had permission to do, Kogan passed the trove on to Cambridge Analytica, a strategic consulting firm that talked a big game about its ability to model and manipulate human behavior for political clients. In December 2015, The Guardian reported that Cambridge Analytica had used this data to help Ted Cruz’s presidential campaign, at which point Facebook demanded the data be deleted.

This much Facebook knew in the early months of 2018. The company also knew—because everyone knew—that Cambridge Analytica had gone on to work with the Trump campaign after Ted Cruz dropped out of the race. And some people at Facebook worried that the story of their company’s relationship with Cambridge Analytica was not over. One former Facebook communications official remembers being warned by a manager in the summer of 2017 that unresolved elements of the Cambridge Analytica story remained a grave vulnerability. No one at Facebook, however, knew exactly when or where the unexploded ordnance would go off. “The company doesn’t know yet what it doesn’t know yet,” the manager said. (The manager now denies saying so.)

The company first heard in late February that the Times and The Guardian had a story coming, but the department in charge of formulating a response was a house divided. In the fall, Facebook had hired a brilliant but fiery veteran of tech industry PR named Rachel Whetstone. She’d come over from Uber to run communications for Facebook’s WhatsApp, Instagram, and Messenger. Soon she was traveling with Zuckerberg for public events, joining Sandberg’s senior management meetings, and making decisions—like picking which outside public relations firms to cut or retain—that normally would have rested with those officially in charge of Facebook’s 300-person communications shop. The staff quickly sorted into fans and haters.

And so it was that a confused and fractious communications team huddled with management to debate how to respond to the Times and Guardian reporters. The standard approach would have been to correct misinformation or errors and spin the company’s side of the story. Facebook ultimately chose another tack. It would front-run the press: dump a bunch of information out in public on the eve of the stories’ publication, hoping to upstage them. It’s a tactic with a short-term benefit but a long-term cost. Investigative journalists are like pit bulls. Kick them once and they’ll never trust you again.

Facebook’s decision to take that risk, according to multiple people involved, was a close call. But on the night of Friday, March 16, the company announced it was suspending Cambridge Analytica from its platform. This was a fateful choice. “It’s why the Times hates us,” one senior executive says. Another communications official says, “For the last year, I’ve had to talk to reporters worried that we were going to front-run them. It’s the worst. Whatever the calculus, it wasn’t worth it.”

The tactic also didn’t work. The next day the story—focused on a charismatic whistle-blower with pink hair named Christopher Wylie—exploded in Europe and the United States. Wylie, a former Cambridge Analytica employee, was claiming that the company had not deleted the data it had taken from Facebook and that it may have used that data to swing the American presidential election. The first sentence of The Guardian’s reporting blared that this was “one of the tech giant’s biggest ever data breaches” and that Cambridge Analytica had used the data “to build a powerful software program to predict and influence choices at the ballot box.”

The story was a witch’s brew of Russian operatives, privacy violations, confusing data, and Donald Trump. It touched on nearly all the fraught issues of the moment. Politicians called for regulation; users called for boycotts. In a day, Facebook lost $36 billion in its market cap. Because many of its employees were compensated based on the stock’s performance, the drop did not go unnoticed in Menlo Park.

To this emotional story, Facebook had a programmer’s rational response. Nearly every fact in The Guardian’s opening paragraph was misleading, its leaders believed. The company hadn’t been breached—an academic had fairly downloaded data with permission and then unfairly handed it off. And the software that Cambridge Analytica built was not powerful, nor could it predict or influence choices at the ballot box.

But none of that mattered. When a Facebook executive named Alex Stamos tried on Twitter to argue that the word breach was being misused, he was swatted down. He soon deleted his tweets. His position was right, but who cares? If someone points a gun at you and holds up a sign that says hand’s up, you shouldn’t worry about the apostrophe. The story was the first of many to illuminate one of the central ironies of Facebook’s struggles. The company’s algorithms helped sustain a news ecosystem that prioritizes outrage, and that news ecosystem was learning to direct outrage at Facebook.

As the story spread, the company started melting down. Former employees remember scenes of chaos, with exhausted executives slipping in and out of Zuckerberg’s private conference room, known as the Aquarium, and Sandberg’s conference room, whose name, Only Good News, seemed increasingly incongruous. One employee remembers cans and snack wrappers everywhere; the door to the Aquarium would crack open and you could see people with their heads in their hands and feel the warmth from all the body heat. After saying too much before the story ran, the company said too little afterward. Senior managers begged Sandberg and Zuckerberg to publicly confront the issue. Both remained publicly silent.

“We had hundreds of reporters flooding our inboxes, and we had nothing to tell them,” says a member of the communications staff at the time. “I remember walking to one of the cafeterias and overhearing other Facebookers say, ‘Why aren’t we saying anything? Why is nothing happening?’ ”

According to numerous people who were involved, many factors contributed to Facebook’s baffling decision to stay mute for five days. Executives didn’t want a repeat of Zuckerberg’s ignominious performance after the 2016 election when, mostly off the cuff, he had proclaimed it “a pretty crazy idea” to think fake news had affected the result. And they continued to believe people would figure out that Cambridge Analytica’s data had been useless. According to one executive, “You can just buy all this fucking stuff, all this data, from the third-party ad networks that are tracking you all over the planet. You can get way, way, way more privacy-violating data from all these data brokers than you could by stealing it from Facebook.”

“Those five days were very, very long,” says Sandberg, who now acknowledges the delay was a mistake. The company became paralyzed, she says, because it didn’t know all the facts; it thought Cambridge Analytica had deleted the data. And it didn’t have a specific problem to fix. The loose privacy policies that allowed Kogan to collect so much data had been tightened years before. “We didn’t know how to respond in a system of imperfect information,” she says.

Facebook’s other problem was that it didn’t understand the wealth of antipathy that had built up against it over the previous two years. Its prime decisionmakers had run the same playbook successfully for a decade and a half: Do what they thought was best for the platform’s growth (often at the expense of user privacy), apologize if someone complained, and keep pushing forward. Or, as the old slogan went: Move fast and break things. Now the public thought Facebook had broken Western democracy. This privacy violation—unlike the many others before it—wasn’t one that people would simply get over.

Finally, on Wednesday, the company decided Zuckerberg should give a television interview. After snubbing CBS and PBS, the company summoned a CNN reporter whom the communications staff trusted to be reasonably kind. The network’s camera crews were treated like potential spies, and one communications official remembers being required to monitor them even when they went to the bathroom. (Facebook now says this was not company protocol.) In the interview itself, Zuckerberg apologized. But he was also specific: There would be audits and much more restrictive rules for anyone wanting access to Facebook data. Facebook would build a tool to let users know if their data had ended up with Cambridge Analytica. And he pledged that Facebook would make sure this kind of debacle never happened again.

A flurry of other interviews followed. That Wednesday, WIRED was given a quiet heads-up that we’d get to chat with Zuckerberg in the late afternoon. At about 4:45 pm, his communications chief rang to say he would be calling at 5. In that interview, Zuckerberg apologized again. But he brightened when he turned to one of the topics that, according to people close to him, truly engaged his imagination: using AI to keep humans from polluting Facebook. This was less a response to the Cambridge Analytica scandal than to the backlog of accusations, gathering since 2016, that Facebook had become a cesspool of toxic virality, but it was a problem he actually enjoyed figuring out how to solve. He didn’t think that AI could completely eliminate hate speech or nudity or spam, but it could get close. “My understanding with food safety is there’s a certain amount of dust that can get into the chicken as it’s going through the processing, and it’s not a large amount—it needs to be a very small amount,” he told WIRED.

The interviews were just the warmup for Zuckerberg’s next gauntlet: A set of public, televised appearances in April before three congressional committees to answer questions about Cambridge Analytica and months of other scandals. Congresspeople had been calling on him to testify for about a year, and he’d successfully avoided them. Now it was game time, and much of Facebook was terrified about how it would go.

As it turned out, most of the lawmakers proved astonishingly uninformed, and the CEO spent most of the day ably swatting back soft pitches. Back home, some Facebook employees stood in their cubicles and cheered. When a plodding Senator Orrin Hatch asked how, exactly, Facebook made money while offering its services for free, Zuckerberg responded confidently, “Senator, we run ads,” a phrase that was soon emblazoned on T-shirts in Menlo Park.

III.

The Saturday after the Cambridge Analytica scandal broke, Sandberg told Molly Cutler, a top lawyer at Facebook, to create a crisis response team. Make sure we never have a delay responding to big issues like that again, Sandberg said. She put Cutler’s new desk next to hers, to guarantee Cutler would have no problem convincing division heads to work with her. “I started the role that Monday,” Cutler says. “I never made it back to my old desk. After a couple of weeks someone on the legal team messaged me and said, ‘You want us to pack up your things? It seems like you are not coming back.’ ”

Then Sandberg and Zuckerberg began making a huge show of hiring humans to keep watch over the platform. Soon you couldn’t listen to a briefing or meet an executive without being told about the tens of thousands of content moderators who had joined the company. By the end of 2018, about 30,000 people were working on safety and security, which is roughly the number of newsroom employees at all the newspapers in the United States. Of those, about 15,000 are content reviewers, mostly contractors, employed at more than 20 giant review factories around the world.

Facebook was also working hard to create clear rules for enforcing its basic policies, effectively writing a constitution for the 1.5 billion daily users of the platform. The instructions for moderating hate speech alone run to more than 200 pages. Moderators must undergo 80 hours of training before they can start. Among other things, they must be fluent in emoji; they study, for example, a document showing that a crown, roses, and dollar signs might mean a pimp is offering up prostitutes. About 100 people across the company meet every other Tuesday to review the policies. A similar group meets every Friday to review content policy enforcement screwups, like when, as happened in early July, the company flagged the Declaration of Independence as hate speech.

The company hired all of these people in no small part because of pressure from its critics. It was also the company’s fate, however, that the same critics discovered that moderating content on Facebook can be a miserable, soul-scorching job. As Casey Newton reported in an investigation for The Verge, the average content moderator in a Facebook contractor’s outpost in Arizona makes $28,000 per year, and many of them say they have developed PTSD-like symptoms due to their work. Others have spent so much time looking through conspiracy theories that they’ve become believers themselves.

Ultimately, Facebook knows that the job will have to be done primarily by machines—which is the company’s preference anyway. Machines can browse porn all day without flatlining, and they haven’t learned to unionize yet. And so simultaneously the company mounted a huge effort, led by CTO Mike Schroepfer, to create artificial intelligence systems that can, at scale, identify the content that Facebook wants to zap from its platform, including spam, nudes, hate speech, ISIS propaganda, and videos of children being put in washing machines. An even trickier goal was to identify the stuff that Facebook wants to demote but not eliminate—like misleading clickbait crap. Over the past several years, the core AI team at Facebook has doubled in size annually.

Even a basic machine-learning system can pretty reliably identify and block pornography or images of graphic violence. Hate speech is much harder. A sentence can be hateful or prideful depending on who says it. “You not my bitch, then bitch you are done,” could be a death threat, an inspiration, or a lyric from Cardi B. Imagine trying to decode a similarly complex line in Spanish, Mandarin, or Burmese. False news is equally tricky. Facebook doesn’t want lies or bull on the platform. But it knows that truth can be a kaleidoscope. Well-meaning people get things wrong on the internet; malevolent actors sometimes get things right.

Schroepfer’s job was to get Facebook’s AI up to snuff on catching even these devilishly ambiguous forms of content. With each category the tools and the success rate vary. But the basic technique is roughly the same: You need a collection of data that has been categorized, and then you need to train the machines on it. For spam and nudity these databases already exist, created by hand in more innocent days when the threats online were fake Viagra and Goatse memes, not Vladimir Putin and Nazis. In the other categories you need to construct the labeled data sets yourself—ideally without hiring an army of humans to do so.
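
To make that basic technique concrete, here is a minimal sketch (not Facebook’s actual system, which is proprietary and vastly larger): a handful of invented, pre-labeled examples, a feature extractor, and a classifier fit to them.

Code:
# Toy version of "collect labeled data, then train the machines on it."
# The six examples stand in for a human-labeled moderation corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "cheap meds, click here now",           # labeled spam
    "limited offer, wire money today",      # labeled spam
    "you won a free cruise, act fast",      # labeled spam
    "happy birthday, hope it's a great one",
    "great seeing everyone at the reunion",
    "anyone have notes from the lecture?",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["click here for a free cruise"]))  # most likely [1]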

One idea Schroepfer discussed enthusiastically with WIRED involved starting off with just a few examples of content identified by humans as hate speech and then using AI to generate similar content and simultaneously label it. Like a scientist bioengineering both rodents and rat terriers, this approach would use software to both create and identify ever-more-complex slurs, insults, and racist crap. Eventually the terriers, specially trained on superpowered rats, could be set loose across all of Facebook.
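
The article gives no implementation details, so the following is only a guess at the general shape of that idea, shrunk to a toy: mutate known-bad seed phrases the way filter evaders do, let each mutation automatically inherit its seed’s label, and retrain on the enlarged set. The substitution table, phrases, and model choice are all invented here; a production system would presumably use learned generative models rather than a lookup table.

Code:
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

SUBS = {"a": "@", "e": "3", "i": "1", "o": "0"}  # common evasion spellings

def mutate(phrase, rng):
    # Swap some characters for look-alikes, as ban evaders do.
    return "".join(SUBS[c] if c in SUBS and rng.random() < 0.5 else c
                   for c in phrase)

rng = random.Random(0)
bad = ["you are a worthless moron", "get lost you pathetic loser"]
benign = ["you are a wonderful friend", "great job on the race today",
          "see you at dinner tonight", "thanks for the kind words"]

# The "generate and simultaneously label" step: mutations of bad seeds
# enter the training set already marked as bad.
generated = [mutate(s, rng) for s in bad for _ in range(5)]
texts = bad + generated + benign
labels = [1] * (len(bad) + len(generated)) + [0] * len(benign)

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # robust to l33t
    LogisticRegression(),
)
clf.fit(texts, labels)
print(clf.predict(["y0u are a w0rthless m0r0n"]))  # most likely [1]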

Roughly three years ago, the company’s efforts in AI that screens content were nowhere. But Facebook quickly found success in classifying spam and posts supporting terror. Now more than 99 percent of content created in those categories is identified before any human on the platform flags it. Sex, as in the rest of human life, is more complicated. The success rate for identifying nudity is 96 percent. Hate speech is even tougher: Facebook finds just 52 percent before users do.

These are the kinds of problems that Facebook executives love to talk about. They involve math and logic, and the people who work at the company are some of the most logical you’ll ever meet. But Cambridge Analytica was mostly a privacy scandal. Facebook’s most visible response to it was to amp up content moderation aimed at keeping the platform safe and civil. Yet sometimes the two big values involved—privacy and civility—come into opposition. If you give people ways to keep their data completely secret, you also create secret tunnels where rats can scurry around undetected.

In other words, every choice involves a trade-off, and every trade-off means some value has been spurned. And every value that you spurn—particularly when you’re Facebook in 2018—means that a hammer is going to come down on your head.

IV.

Crises offer opportunities. They force you to make some changes, but they also provide cover for the changes you’ve long wanted to make. And four weeks after Zuckerberg’s testimony before Congress, the company initiated the biggest reshuffle in its history. About a dozen executives shifted chairs. Most important, Chris Cox, longtime head of Facebook’s core product—known internally as the Blue App—would now oversee WhatsApp and Instagram too. Cox was perhaps Zuckerberg’s closest and most trusted confidant, and it seemed like succession planning. Adam Mosseri moved over to run product at Instagram.

Instagram, which was founded in 2010 by Kevin Systrom and Mike Krieger, had been acquired by Facebook in 2012 for $1 billion. The price at the time seemed ludicrously high: That much money for a company with 13 employees? Soon the price would seem ludicrously low: A mere billion dollars for the fastest-growing social network in the world? Internally, Facebook at first watched Instagram’s relentless growth with pride. But, according to some, pride turned to suspicion as the pupil’s success matched and then surpassed the professor’s.

Systrom’s glowing press coverage didn’t help. In 2014, according to someone directly involved, Zuckerberg ordered that no other executives should sit for magazine profiles without his or Sandberg’s approval. Some people involved remember this as a move to make it harder for rivals to find employees to poach; others remember it as a direct effort to contain Systrom. Top executives at Facebook also believed that Instagram’s growth was cannibalizing the Blue App. In 2017, Cox’s team showed data to senior executives suggesting that people were sharing less inside the Blue App in part because of Instagram. To some people, this sounded like they were simply presenting a problem to solve. Others were stunned and took it as a sign that management at Facebook cared more about the product they had birthed than one they had adopted.

Most of Instagram—and some of Facebook too—hated the idea that the growth of the photo-sharing app could be seen, in any way, as trouble. Yes, people were using the Blue App less and Instagram more. But that didn’t mean Instagram was poaching users. Maybe people leaving the Blue App would have spent their time on Snapchat or watching Netflix or mowing their lawns. And if Instagram was growing quickly, maybe it was because the product was good? Instagram had its problems—bullying, shaming, FOMO, propaganda, corrupt micro-influencers—but its internal architecture had helped it avoid some of the demons that haunted the industry. Posts are hard to reshare, which slows virality. External links are harder to embed, which keeps the fake-news providers away. Minimalist design also minimized problems. For years, Systrom and Krieger took pride in keeping Instagram free of hamburgers: icons made of three horizontal lines in the corner of a screen that open a menu. Facebook has hamburgers, and other menus, all over the place.

Systrom and Krieger had also seemingly anticipated the techlash ahead of their colleagues up the road in Menlo Park. Even before Trump’s election, Instagram had made fighting toxic comments its top priority, and it had rolled out an AI filtering system in June 2017. By the spring of 2018, the company was working on a product to alert users that “you’re all caught up” when they’d seen all the new posts in their feed. In other words, “put your damn phone down and talk to your friends.” That may be a counterintuitive way to grow, but earning goodwill does help over the long run. And sacrificing growth for other goals wasn’t Facebook’s style at all.

By the time the Cambridge Analytica scandal hit, Systrom and Krieger, according to people familiar with their thinking, were already worried that Zuckerberg was souring on them. They had been allowed to run their company reasonably independently for six years, but now Zuckerberg was exerting more control and making more requests. When conversations about the reorganization began, the Instagram founders pushed to bring in Mosseri. They liked him, and they viewed him as the most trustworthy member of Zuckerberg’s inner circle. He had a design background and a mathematical mind. They were losing autonomy, so they might as well get the most trusted emissary from the mothership. Or as Lyndon Johnson said about J. Edgar Hoover, “It’s probably better to have him inside the tent pissing out than outside the tent pissing in.”

Meanwhile, the founders of WhatsApp, Brian Acton and Jan Koum, had moved outside of Facebook’s tent and commenced fire. Zuckerberg had bought the encrypted messaging platform in 2014 for $19 billion, but the cultures had never entirely meshed. The two sides couldn’t agree on how to make money—WhatsApp’s end-to-end encryption wasn’t originally designed to support targeted ads—and they had other differences as well. WhatsApp insisted on having its own conference rooms, and, in the perfect metaphor for the two companies’ diverging attitudes over privacy, WhatsApp employees had special bathroom stalls designed with doors that went down to the floor, unlike the standard ones used by the rest of Facebook.

Eventually the battles became too much for Acton and Koum, who had also come to believe that Facebook no longer intended to leave them alone. Acton quit and started funding a competing messaging platform called Signal. During the Cambridge Analytica scandal, he tweeted, “It is time. #deletefacebook.” Soon afterward, Koum, who held a seat on Facebook’s board, announced that he too was quitting, to play more Ultimate Frisbee and work on his collection of air-cooled Porsches.

The departure of the WhatsApp founders created a brief spasm of bad press. But now Acton and Koum were gone, Mosseri was in place, and Cox was running all three messaging platforms. And that meant Facebook could truly pursue its most ambitious and important idea of 2018: bringing all those platforms together into something new.

V.

By the late spring, news organizations—even as they jockeyed for scoops about the latest meltdown in Menlo Park—were starting to buckle under the pain caused by Facebook’s algorithmic changes. Back in May of 2017, according to Parse.ly, Facebook drove about 40 percent of all outside traffic to news publishers. A year later it was down to 25 percent. Publishers that weren’t in the category “politics, crime, or tragedy” were hit much harder.

At WIRED, the month after an image of a bruised Zuckerberg appeared on the cover, the numbers were even more stark. One day, traffic from Facebook suddenly dropped by 90 percent, and for four weeks it stayed there. After protestations, emails, and a raised eyebrow or two about the coincidence, Facebook finally got to the bottom of it. An ad run by a liquor advertiser, targeted at WIRED readers, had been mistakenly categorized as engagement bait by the platform. In response, the algorithm had let all the air out of WIRED’s tires. The publication could post whatever it wanted, but few would read it. Once the error was identified, traffic soared back. It was a reminder that journalists are just sharecroppers on Facebook’s giant farm. And sometimes conditions on the farm can change without warning.

Inside Facebook, of course, it was not surprising that traffic to publishers went down after the pivot to “meaningful social interactions.” That outcome was the point. It meant people would be spending more time on posts created by their friends and family, the genuinely unique content that Facebook offers. According to multiple Facebook employees, a handful of executives considered it a small plus, too, that the news industry was feeling a little pain after all its negative coverage. The company denies this—“no one at Facebook is rooting against the news industry,” says Anne Kornblut, the company’s director of news partnerships—but, in any case, by early May the pain seemed to have become perhaps excessive. A number of stories appeared in the press about the damage done by the algorithmic changes. And so Sheryl Sandberg, who colleagues say often responds with agitation to negative news stories, sent an email on May 7 calling a meeting of her top lieutenants.

That kicked off a wide-ranging conversation that ensued over the next two months. The key question was whether the company should introduce new factors into its algorithm to help serious publications. The product team working on news wanted Facebook to increase the amount of public content—things shared by news organizations, businesses, celebrities—allowed in News Feed. They also wanted the company to provide stronger boosts to publishers deemed trustworthy, and they suggested the company hire a large team of human curators to elevate the highest-quality news inside of News Feed. The company discussed setting up a new section on the app entirely for news and directed a team to quietly work on developing it; one of the team’s ambitions was to try to build a competitor to Apple News.

Some of the company’s most senior execs, notably Chris Cox, agreed that Facebook needed to give serious publishers a leg up. Others pushed back, especially Joel Kaplan, a former deputy chief of staff to George W. Bush who was now Facebook’s vice president of global public policy. Supporting high-quality outlets would inevitably make it look like the platform was supporting liberals, which could lead to trouble in Washington, a town run mainly by conservatives. Breitbart and the Daily Caller, Kaplan argued, deserved protections too. At the end of the climactic meeting, on July 9, Zuckerberg sided with Kaplan and announced that he was tabling the decision about adding ways to boost publishers, effectively killing the plan. To one person involved in the meeting, it seemed like a sign of shifting power. Cox had lost and Kaplan had won. Either way, Facebook’s overall traffic to news organizations continued to plummet.

VI.

That same evening, Donald Trump announced that he had a new pick for the Supreme Court: Brett Kavanaugh. As the choice was announced, Joel Kaplan stood in the background at the White House, smiling. Kaplan and Kavanaugh had become friends in the Bush White House, and their families had become intertwined. They had taken part in each other’s weddings; their wives were best friends; their kids rode bikes together. No one at Facebook seemed to really notice or care, and a tweet pointing out Kaplan’s attendance was retweeted a mere 13 times.

Meanwhile, the dynamics inside the communications department had gotten even worse. Elliot Schrage had announced that he was going to leave his post as VP of global communications. So the company had begun looking for his replacement; it focused on interviewing candidates from the political world, including Denis McDonough and Lisa Monaco, former senior officials in the Obama administration. But Rachel Whetstone also declared that she wanted the job. At least two other executives said they would quit if she got it.

The need for leadership in communications only became more apparent on July 11, when John Hegeman, the new head of News Feed, was asked in an interview why the company didn’t ban Alex Jones’ InfoWars from the platform. The honest answer would probably have been to just admit that Facebook gives a rather wide berth to the far right because it’s so worried about being called liberal. Hegeman, though, went with the following: “We created Facebook to be a place where different people can have a voice. And different publishers have very different points of view.”

This, predictably, didn’t go over well with the segments of the news media that actually try to tell the truth and that have never, as Alex Jones has done, reported that the children massacred at Sandy Hook were actors. Public fury ensued. Most of Facebook didn’t want to respond. But Whetstone decided it was worth a try. She took to the @facebook account—which one executive involved in the decision called “a big fucking marshmallow we shouldn’t ever use like this”—and started tweeting at the company’s critics.

“Sorry you feel that way,” she typed to one, and explained that, instead of banning pages that peddle false information, Facebook demotes them. The tweet was very quickly ratioed, a Twitter term of art for a statement that no one likes and that receives more comments than retweets. Whetstone, as @facebook, also declared that just as many pages on the left pump out misinformation as on the right. That tweet got badly ratioed too.

Five days later, Zuckerberg sat down for an interview with Kara Swisher, the influential editor of Recode. Whetstone was in charge of prep. Before Zuckerberg headed to the microphone, Whetstone supplied him with a list of rough talking points, including one that inexplicably violated the first rule of American civic discourse: Don’t invoke the Holocaust while trying to make a nuanced point.

About 20 minutes into the interview, while ambling through his answer to a question about Alex Jones, Zuckerberg declared, “I’m Jewish, and there’s a set of people who deny that the Holocaust happened. I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down, because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong.” Sometimes, Zuckerberg added, he himself makes errors in public statements.

The comment was absurd: People who deny that the Holocaust happened generally aren’t just slipping up in the midst of a good-faith intellectual disagreement. They’re spreading anti-Semitic hate—intentionally. Soon the company announced that it had taken a closer look at Jones’ activity on the platform and had finally chosen to ban him. His past sins, Facebook decided, had crossed into the domain of standards violations.

Eventually another candidate for the top PR job was brought into the headquarters in Menlo Park: Nick Clegg, former deputy prime minister of the UK. Perhaps in an effort to disguise himself—or perhaps because he had decided to go aggressively Silicon Valley casual—he showed up in jeans, sneakers, and an untucked shirt. His interviews must have gone better than his disguise, though, as he was hired over the luminaries from Washington. “What makes him incredibly well qualified,” said Caryn Marooney, the company’s VP of communications, “is that he helped run a country.”

VII.

At the end of July, Facebook was scheduled to report its quarterly earnings in a call to investors. The numbers were not going to be good; Facebook’s user base had grown more slowly than ever, and revenue growth was taking a huge hit from the company’s investments in hardening the platform against abuse. But in advance of the call, the company’s leaders were nursing an additional concern: how to put Instagram in its place. According to someone who saw the relevant communications, Zuckerberg and his closest lieutenants were debating via email whether to say, essentially, that Instagram owed its spectacular growth not primarily to its founders and vision but to its relationship with Facebook.

Zuckerberg wanted to include a line to this effect in his script for the call. Whetstone counseled him not to, or at least to temper it with praise for Instagram’s founding team. In the end, Zuckerberg’s script declared, “We believe Instagram has been able to use Facebook’s infrastructure to grow more than twice as quickly as it would have on its own. A big congratulations to the Instagram team—and to all the teams across our company that have contributed to this success.”

After the call—with its payload of bad news about growth and investment—Facebook’s stock dropped by nearly 20 percent. But Zuckerberg didn’t forget about Instagram. A few days later he asked his head of growth, Javier Olivan, to draw up a list of all the ways Facebook supported Instagram: running ads for it on the Blue App; including link-backs when someone posted a photo on Instagram and then cross-published it in Facebook News Feed; allowing Instagram to access a new user’s Facebook connections in order to recommend people to follow. Once he had the list, Zuckerberg conveyed to Instagram’s leaders that he was pulling away the supports. Facebook had given Instagram servers, health insurance, and the best engineers in the world. Now Instagram was just being asked to give a little back—and to help seal off the vents that were allowing people to leak away from the Blue App.

Systrom soon posted a memo to his entire staff explaining Zuckerberg’s decision to turn off supports for traffic to Instagram. He disagreed with the move, but he was committed to the changes and was telling his staff that they had to go along. The memo “was like a flame going up inside the company,” a former senior manager says. The document also enraged Facebook, which was terrified it would leak. Systrom soon departed on paternity leave.

The tensions didn’t let up. In the middle of August, Facebook prototyped a location-tracking service inside of Instagram, the kind of privacy intrusion that Instagram’s management team had long resisted. In August, a hamburger menu appeared. “It felt very personal,” says a senior Instagram employee who spent the month implementing the changes. It felt particularly wrong, the employee says, because Facebook is a data-driven company, and the data strongly suggested that Instagram’s growth was good for everyone.

Friends of Systrom and Krieger say the strife was wearing on the founders too. According to someone who heard the conversation, Systrom openly wondered whether Zuckerberg was treating him the way Donald Trump was treating Jeff Sessions: making life miserable in hopes that he’d quit without having to be fired. Instagram’s managers also believed that Facebook was being miserly about their budget. In past years they had been able to almost double their number of engineers. In the summer of 2018 they were told that their growth rate would drop to less than half of that.

When it was time for Systrom to return from paternity leave, the two founders decided to make the leave permanent. They made the decision quickly, but it was far from impulsive. According to someone familiar with their thinking, their unhappiness with Facebook stemmed from tensions that had brewed over many years and had boiled over in the past six months.

And so, on a Monday morning, Systrom and Krieger went into Chris Cox’s office and told him the news. Systrom and Krieger then notified their team about the decision. Somehow the information reached Mike Isaac, a reporter at The New York Times, before it reached the communications teams for either Facebook or Instagram. The story appeared online a few hours later, as Instagram’s head of communications was on a flight circling above New York City.

After the announcement, Systrom and Krieger decided to play nice. Soon there was a lovely photograph of the two founders smiling next to Mosseri, the obvious choice to replace them. And then they headed off into the unknown to take time off, decompress, and figure out what comes next. Systrom and Krieger told friends they both wanted to get back into coding after so many years away from it. If you need a new job, it’s good to learn how to code.

VIII.

Just a few days after Systrom and Krieger quit, Joel Kaplan roared into the news. His dear friend Brett Kavanaugh was now not just a conservative appellate judge with Federalist Society views on Roe v. Wade; he had become an alleged sexual assailant, purported gang rapist, and national symbol of toxic masculinity to somewhere between 49 and 51 percent of the country. As the charges multiplied, Kaplan’s wife, Laura Cox Kaplan, became one of the most prominent women defending him: She appeared on Fox News and asked, “What does it mean for men in the future? It’s very serious and very troubling.” She also spoke at an #IStandWithBrett press conference that was livestreamed on Breitbart.

On September 27, Kavanaugh appeared before the Senate Judiciary Committee after four hours of wrenching recollections by his primary accuser, Christine Blasey Ford. Laura Cox Kaplan sat right behind him as the hearing descended into rage and recrimination. Joel Kaplan sat one row back, stoic and thoughtful, directly in view of the cameras broadcasting the scene to the world.

Kaplan isn’t widely known outside of Facebook. But he’s not anonymous, and he wasn’t wearing a fake mustache. As Kavanaugh testified, journalists started tweeting a screenshot of the tableau. At a meeting in Menlo Park, executives passed around a phone showing one of these tweets and stared, mouths agape. None of them knew Kaplan was going to be there. The man who was supposed to smooth over Facebook’s political dramas had inserted the company right into the middle of one.

Kaplan had long been friends with Sandberg; they’d even dated as undergraduates at Harvard. But despite rumors to the contrary, he had told neither her nor Zuckerberg that he would be at the hearing, much less that he would be sitting in the gallery of supporters behind the star witness. “He’s too smart to do that,” one executive who works with him says. “That way, Joel gets to go. Facebook gets to remind people that it employs Republicans. Sheryl gets to be shocked. And Mark gets to denounce it.”

If that was the plan, it worked to perfection. Soon Facebook’s internal message boards were lighting up with employees mortified at what Kaplan had done. Management’s initial response was limp and lame: A communications officer told the staff that Kaplan attended the hearing as part of a planned day off in his personal capacity. That wasn’t a good move. Someone visited the human resources portal and noted that he hadn’t filed to take the day off.

What Facebook Fears
In some ways, the world’s largest social network is stronger than ever, with record revenue of $55.8 billion in 2018. But Facebook has also never been more threatened. Here are some dangers that could knock it down.

US Antitrust Regulation
In March, Democratic presidential candidate Elizabeth Warren proposed severing Instagram and WhatsApp from Facebook, joining the growing chorus of people who want to chop the company down to size. Even US attorney general William Barr has hinted at probing tech’s “huge behemoths.” But for now, antitrust talk remains talk—much of it posted to Facebook.

Federal Privacy Crackdowns
Facebook and the Federal Trade Commission are negotiating a settlement over whether the company’s conduct, including with Cambridge Analytica, violated a 2011 consent decree regarding user privacy. According to The New York Times, federal prosecutors have also begun a criminal investigation into Facebook’s data-sharing deals with other technology companies.

European Regulators
While America debates whether to take aim at Facebook, Europe swings axes. In 2018, the EU’s General Data Protection Regulation forced Facebook to allow users to access and delete more of their data. Then this February, Germany ordered the company to stop harvesting web-browsing data without users’ consent, effectively outlawing much of the company’s ad business.

User Exodus
Although a fifth of the globe uses Facebook every day, the number of adult users in the US has largely stagnated. The decline is even more precipitous among teenagers. (Granted, many of them are switching to Instagram.) But network effects are powerful things: People swarmed to Facebook because everyone else was there; they might also swarm for the exits.

The hearings were on a Thursday. A week and a day later, Facebook called an all-hands to discuss what had happened. The giant cafeteria in Facebook’s headquarters was cleared to create space for a town hall. Hundreds of chairs were arranged with three aisles to accommodate people with questions and comments. Most of them were from women who came forward to recount their own experiences of sexual assault, harassment, and abuse.

Zuckerberg, Sandberg, and other members of management were standing on the right side of the stage, facing the audience and the moderator. Whenever a question was asked of one of them, they would stand up and take the mic. Kaplan appeared via video conference looking, according to one viewer, like a hostage trying to smile while his captors stood just offscreen. Another participant described him as “looking like someone had just shot his dog in the face.” This participant added, “I don’t think there was a single male participant, except for Zuckerberg looking down and sad onstage and Kaplan looking dumbfounded on the screen.”

Employees who watched expressed different emotions. Some felt empowered and moved by the voices of women in a company where top management is overwhelmingly male. Another said, “My eyes rolled to the back of my head” watching people make specific personnel demands of Zuckerberg, including that Kaplan undergo sensitivity training. For much of the staff, it was cathartic. Facebook was finally reckoning, in a way, with the #MeToo movement and the profound bias toward men in Silicon Valley. For others it all seemed ludicrous, narcissistic, and emblematic of the liberal, politically correct bubble that the company occupies. A guy had sat in silence to support his best friend who had been nominated to the Supreme Court; as a consequence, he needed to be publicly flogged?

In the days after the hearings, Facebook organized small group discussions, led by managers, in which 10 or so people got together to discuss the issue. There were tears, grievances, emotions, debate. “It was a really bizarre confluence of a lot of issues that were popped in the zit that was the SCOTUS hearing,” one participant says. Kaplan, though, seemed to have moved on. The day after his appearance on the conference call, he hosted a party to celebrate Kavanaugh’s lifetime appointment. Some colleagues were aghast. According to one who had taken his side during the town hall, this was a step too far. That was “just spiking the football,” they said. Sandberg was more forgiving. “It’s his house,” she told WIRED. “That is a very different decision than sitting at a public hearing.”

In a year during which Facebook made endless errors, Kaplan’s insertion of the company into a political maelstrom seemed like one of the clumsiest. But in retrospect, Facebook executives aren’t sure that Kaplan did lasting harm. His blunder opened up a series of useful conversations in a workplace that had long focused more on coding than inclusion. Also, according to another executive, the episode and the press that followed surely helped appease the company’s would-be regulators. It’s useful to remind the Republicans who run most of Washington that Facebook isn’t staffed entirely by snowflakes and libs.

IX.

That summer and early fall weren’t kind to the team at Facebook charged with managing the company’s relationship with the news industry. At least two product managers on the team quit, telling colleagues they had done so because of the company’s cavalier attitude toward the media. In August, a jet-lagged Campbell Brown gave a presentation to publishers in Australia in which she declared that they could either work together to create new digital business models or not. If they didn’t, well, she’d be unfortunately holding hands with their dying business, like in a hospice. Her off-the-record comments were put on the record by The Australian, a publication owned by Rupert Murdoch, a canny and persistent antagonist of Facebook.

In September, however, the news team managed to convince Zuckerberg to start administering ice water to the parched executives of the news industry. That month, Tom Alison, one of the team’s leaders, circulated a document to most of Facebook’s senior managers; it began by proclaiming that, on news, “we lack clear strategy and alignment.”

Then, at a meeting of the company’s leaders, Alison made a series of recommendations, including that Facebook should expand its definition of news—and its algorithmic boosts—beyond just the category of “politics, crime, or tragedy.” Stories about politics were bound to do well in the Trump era, no matter how Facebook tweaked its algorithm. But the company could tell that the changes it had introduced at the beginning of the year hadn’t had the intended effect of slowing the political venom pulsing through the platform. In fact, by giving a slight tailwind to politics, tragedy, and crime, Facebook had helped build a news ecosystem that resembled the front pages of a tempestuous tabloid. Or, for that matter, the front page of FoxNews.com. That fall, Fox was netting more engagement on Facebook than any other English-language publisher; its list of most-shared stories was a goulash of politics, crime, and tragedy. (The network’s three most-shared posts that month were an article alleging that China was burning bibles, another about a Bill Clinton rape accuser, and a third that featured Laura Cox Kaplan and #IStandWithBrett.)

Politics, Crime, or Tragedy?

In early 2018, Facebook’s algorithm started demoting posts shared by businesses and publishers. But because of an obscure choice by Facebook engineers, stories involving “politics, crime, or tragedy” were shielded somewhat from the blow—which had a big effect on the news ecosystem inside the social network.

Source: Parse.ly

That September meeting was a moment when Facebook decided to start paying indulgences to make up for some of its sins against journalism. It decided to put hundreds of millions of dollars toward supporting local news, the sector of the industry most disrupted by Silicon Valley; Brown would lead the effort, which would involve helping to find sustainable new business models for journalism. Alison proposed that the company move ahead with the plan hatched in June to create an entirely new section on the Facebook app for news. And, crucially, the company committed to developing new classifiers that would expand the definition of news beyond “politics, crime, or tragedy.”

Zuckerberg didn’t sign off on everything all at once. But people left the room feeling like he had subscribed. Facebook had spent much of the year holding the media industry upside down by the feet. Now Facebook was setting it down and handing it a wad of cash.

As Facebook veered from crisis to crisis, something else was starting to happen: The tools the company had built were beginning to work. The three biggest initiatives for the year had been integrating WhatsApp, Instagram, and the Blue App into a more seamless entity; eliminating toxic content; and refocusing News Feed on meaningful social interactions. The company was making progress on all fronts. The apps were becoming a family, partly through divorce and arranged marriage but a family nonetheless. Toxic content was indeed disappearing from the platform. In September, economists at Stanford and New York University revealed research estimating that user interactions with fake news on the platform had declined by 65 percent from their peak in December 2016 to the summer of 2018. On Twitter, meanwhile, the number had climbed.

There wasn’t much time, however, for anyone to absorb the good news. Right after the Kavanaugh hearings, the company announced that, for the first time, it had been badly breached. In an Ocean’s 11–style heist, hackers had figured out an ingenious way to take control of user accounts through a quirk in a feature that makes it easier for people to play Happy Birthday videos for their friends. The breach was both serious and absurd, and it pointed to a deep problem with Facebook. By adding so many features to boost engagement, it had created vectors for intrusion. One virtue of simple products is that they are simpler to defend.

X.

Given the sheer number of people who accused Facebook of breaking democracy in 2016, the company approached the November 2018 US midterm elections with trepidation. It worried that the tools of the platform made it easier for candidates to suppress votes than get them out. And it knew that Russian operatives were studying AI as closely as the engineers on Mike Schroepfer’s team.

So in preparation for Brazil’s October 28 presidential election and the US midterms nine days later, the company created what it called “election war rooms”—a term despised by at least some of the actual combat veterans at the company. The rooms were partly a media prop, but still, three dozen people worked nearly around the clock inside of them to minimize false news and other integrity issues across the platform. Ultimately the elections passed with little incident, perhaps because Facebook did a good job, perhaps because a US Cyber Command operation temporarily knocked Russia’s primary troll farm offline.

Facebook got a boost of good press from the effort, but the company in 2018 was like a football team that follows every hard-fought victory with a butt fumble and a 30-point loss. In mid-November, The New York Times published an impressively reported stem-winder about trouble at the company. The most damning revelation was that Facebook had hired an opposition research firm called Definers to investigate, among other things, whether George Soros was funding groups critical of the company. Definers was also directly connected to a dubious news operation whose stories were often picked up by Breitbart.

After the story broke, Zuckerberg plausibly declared that he knew nothing about Definers. Sandberg, less plausibly, did the same. Numerous people inside the company were convinced that she entirely understood what Definers did, though she strongly maintains that she did not. Meanwhile, Schrage, who had announced his resignation but never actually left, decided to take the fall. He declared that the Definers project was his fault; it was his communications department that had hired the firm, he said. But several Facebook employees who spoke with WIRED believe that Schrage’s assumption of responsibility was just a way to gain favor with Sandberg.

Inside Facebook, people were furious at Sandberg, believing she had asked them to dissemble on her behalf with her Definers denials. Sandberg, like everyone, is human. She’s brilliant, inspirational, and more organized than Marie Kondo. Once, on a cross-country plane ride back from a conference, a former Facebook executive watched her quietly spend five hours sending thank-you notes to everyone she’d met at the event—while everyone else was chatting and drinking. But Sandberg also has a temper, an ego, and a detailed memory for subordinates she thinks have made mistakes. For years, no one had a negative word to say about her. She was a highly successful feminist icon, the best-selling author of Lean In, running operations at one of the most powerful companies in the world. And she had done so under immense personal strain since her husband died in 2015.

But resentment had been building for years, and after the Definers mess the dam collapsed. She was pummeled in the Times, in The Washington Post, on Breitbart, and in WIRED. Former employees who had refrained from criticizing her in interviews conducted with WIRED in 2017 relayed anecdotes about her intimidation tactics and penchant for retribution in 2018. She was slammed after a speech in Munich. She even got dinged by Michelle Obama, who told a sold-out crowd at the Barclays Center in Brooklyn on December 1, “It’s not always enough to lean in, because that shit doesn’t work all the time.”

Everywhere, in fact, it was becoming harder to be a Facebook employee. Attrition increased from 2017, though Facebook says it was still below the industry norm, and people stopped broadcasting their place of employment. The company’s head of cybersecurity policy was swatted in his Palo Alto home. “When I joined Facebook in 2016, my mom was so proud of me, and I could walk around with my Facebook backpack all over the world and people would stop and say, ‘It’s so cool that you worked for Facebook.’ That’s not the case anymore,” a former product manager says. “It made it hard to go home for Thanksgiving.”

XI.

By the holidays in 2018, Facebook was beginning to seem like Monty Python’s Black Knight: hacked down to a torso hopping on one leg but still filled with confidence. The Alex Jones, Holocaust, Kaplan, hack, and Definers scandals had all happened in four months. The heads of WhatsApp and Instagram had quit. The stock price was at its lowest level in nearly two years. In the middle of that, Facebook chose to launch a video chat service called Portal. Reviewers thought it was great, except for the fact that Facebook had designed it, which made them fear it was essentially a spycam for people’s houses. Even internal tests at Facebook had shown that people responded to a description of the product better when they didn’t know who had made it.

Two weeks later, the Black Knight lost his other leg. A British member of parliament named Damian Collins had obtained hundreds of pages of internal Facebook emails from 2012 through 2015. Ironically, his committee had gotten them from a sleazy company that helped people search for photos of Facebook users in bikinis. But one of Facebook’s superpowers in 2018 was the ability to turn any critic, no matter how absurd, into a media hero. And so, without much warning, Collins released them to the world.

The emails, many of them between Zuckerberg and top executives, lent a brutally concrete validation to the idea that Facebook promoted growth at the expense of almost any other value. In one message from 2015, an employee acknowledged that collecting the call logs of Android users is a “pretty high-risk thing to do from a PR perspective.” He said he could imagine the news stories about Facebook invading people’s private lives “in ever more terrifying ways.” But, he added, “it appears that the growth team will charge ahead and do it.” (It did.)

Perhaps the most telling email is a message from a then executive named Sam Lessin to Zuckerberg that epitomizes Facebook’s penchant for self-justification. The company, Lessin wrote, could be ruthless and committed to social good at the same time, because they are essentially the same thing: “Our mission is to make the world more open and connected and the only way we can do that is with the best people and the best infrastructure, which requires that we make a lot of money / be very profitable.”

The message also highlighted another of the company’s original sins: its assertion that if you just give people better tools for sharing, the world will be a better place. That’s just false. Sometimes Facebook makes the world more open and connected; sometimes it makes it more closed and disaffected. Despots and demagogues have proven to be just as adept at using Facebook as democrats and dreamers. Like the communications innovations before it—the printing press, the telephone, the internet itself—Facebook is a revolutionary tool. But human nature has stayed the same.

XII.

Perhaps the oddest single day in Facebook’s recent history came on January 30, 2019. A story had just appeared on TechCrunch reporting yet another apparent sin against privacy: For two years, Facebook had been conducting market research with an app that paid you in return for sucking private data from your phone. Facebook could read your social media posts, your emoji sexts, and your browser history. Your soul, or at least whatever part of it you put into your phone, was worth up to $20 a month.

Other big tech companies do research of this sort as well. But the program sounded creepy, particularly with the revelation that people as young as 13 could join with a parent’s permission. Worse, Facebook seemed to have deployed the app while wearing a ski mask and gloves to hide its fingerprints. Apple had banned such research apps from its main App Store, but Facebook had fashioned a workaround: Apple allows companies to develop their own in-house iPhone apps for use solely by employees—for booking conference rooms, testing beta versions of products, and the like. Facebook used one of these internal apps to disseminate its market research tool to the public.

Apple cares a lot about privacy, and it cares that you know it cares about privacy. It also likes to ensure that people honor its rules. So shortly after the story was published, Apple responded by shutting down all of Facebook’s in-house iPhone apps. By the middle of that Wednesday afternoon, parts of Facebook’s campus stopped functioning. Applications that enabled employees to book meetings, see cafeteria menus, and catch the right shuttle bus flickered out. Employees around the world suddenly couldn’t communicate via messenger with each other on their phones. The mood internally shifted between outraged and amused—with employees joking that they had missed their meetings because of Tim Cook. Facebook’s cavalier approach to privacy had now poltergeisted itself on the company’s own lunch menus.

But then something else happened. A few hours after Facebook’s engineers wandered back from their mystery meals, Facebook held an earnings call. Profits, after a months-long slump, had hit a new record. The number of daily users in Canada and the US, after stagnating for three quarters, had risen slightly. The stock surged, and suddenly all seemed well in the world. Inside a conference room called Relativity, Zuckerberg smiled and told research analysts about all the company’s success. At the same table sat Caryn Marooney, the company’s head of communications. “It felt like the old Mark,” she said. “This sense of ‘We’re going to fix a lot of things and build a lot of things.’ ” Employees couldn’t get their shuttle bus schedules, but within 24 hours the company was worth about $50 billion more than it had been worth the day before.

Less than a week after the boffo earnings call, the company gathered for another all-hands. The heads of security and ads spoke about their work and the pride they take in it. Nick Clegg told everyone that they had to start seeing themselves the way the world sees them, not the way they would like to be perceived. It seemed to observers as though management actually had its act together after a long time of looking like a man in lead boots trying to cross a lightly frozen lake. “It was a combination of realistic and optimistic that we hadn’t gotten right in two years,” one executive says.

Soon it was back to bedlam, though. Shortly after the all-hands, a parliamentary committee in the UK published a report calling the company a bunch of “digital gangsters.” A German regulatory authority cracked down on a significant portion of the company’s ad business. And news broke that the FTC in Washington was negotiating with the company and reportedly considering a multibillion-dollar fine due in part to Cambridge Analytica. Later, Democratic presidential hopeful Elizabeth Warren published a proposal to break Facebook apart. She promoted her idea with ads on Facebook, using a modified version of the company’s logo—an act specifically banned by Facebook’s terms of service. Naturally, the company spotted the violation and took the ads down. Warren quickly denounced the move as censorship, even as Facebook restored the ads.

It was the perfect Facebook moment for a new year. By enforcing its own rules, the company had created an outrage cycle about Facebook—inside of a larger outrage cycle about Facebook.

XIII.

This January, George Soros gave another speech on a freezing night in Davos. This time he described a different menace to the world: China. The most populous country on earth, he said, is building AI systems that could become tools for totalitarian control. “For open societies,” he said, “they pose a mortal threat.” He described the world as in the midst of a cold war. Afterward, one of the authors of this article asked him which side Facebook and Google are on. “Facebook and the others are on the side of their own profits,” the financier answered.

The response epitomized one of the most common critiques of the company now: Everything it does is based on its own interests and enrichment. The massive efforts at reform are cynical and deceptive. Yes, the company’s privacy settings are much clearer now than a year ago, and certain advertisers can no longer target users based on their age, gender, or race, but those changes were made at gunpoint. The company’s AI filters help, sure, but they exist to placate advertisers who don’t want their detergent ads next to jihadist videos. The company says it has abandoned “Move fast and break things” as its motto, but the guest Wi-Fi password at headquarters remains “M0vefast.” Sandberg and Zuckerberg continue to apologize, but the apologies seem practiced and insincere.

At a deeper level, critics note that Facebook continues to pay for its original sin of ignoring privacy and fixating on growth. And then there’s the existential question of whether the company’s business model is even compatible with its stated mission: The idea of Facebook is to bring people together, but the business model only works by slicing and dicing users into small groups for the sake of ad targeting. Is it possible to have those two things work simultaneously?

To its credit, though, Facebook has addressed some of its deepest issues. For years, smart critics have bemoaned the perverse incentives created by Facebook’s annual bonus program, which pays people in large part based on the company hitting growth targets. In February, that policy was changed. Everyone is now given bonuses based on how well the company achieves its goals on a metric of social good.

Another deep critique is that Facebook simply sped up the flow of information to a point where society couldn’t handle it. Now the company has started to slow it down. The company’s fake-news fighters focus on information that’s going viral. WhatsApp has been reengineered to limit the number of people with whom any message can be shared. And internally, according to several employees, people communicate better than they did a year ago. The world might not be getting more open and connected, but at least Facebook’s internal operations are.
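
To make that concrete, here is a minimal sketch, in TypeScript, of the kind of forward cap WhatsApp described. The five-chat limit matches what was publicly reported at the time; every type and name below is invented for illustration.

// Hypothetical sketch of a message-forward cap of the kind WhatsApp
// deployed. The five-chat limit matches public reporting; the types
// and names here are made up.
const MAX_FORWARD_CHATS = 5;

interface ForwardRequest {
  messageId: string;
  targetChatIds: string[];
}

// A single message may be fanned out to at most MAX_FORWARD_CHATS
// conversations in one action, which slows viral spread.
function canForward(request: ForwardRequest): boolean {
  return request.targetChatIds.length <= MAX_FORWARD_CHATS;
}

The point is not the arithmetic but the friction: each additional hop now requires another deliberate action by another user.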

In early March, Zuckerberg announced that Facebook would, from then on, follow an entirely different philosophy. He published a 3,200-word treatise explaining that the company that had spent more than a decade playing fast and loose with privacy would now prioritize it. Messages would be encrypted end to end. Servers would not be located in authoritarian countries. And much of this would happen with a further integration of Facebook, WhatsApp, and Instagram. Rather than WhatsApp becoming more like Facebook, it sounded like Facebook was going to become more like WhatsApp. When asked by WIRED how hard it would be to reorganize the company around the new vision, Zuckerberg said, “You have no idea how hard it is.”

Just how hard it was became clear the next week. As Facebook knows well, every choice involves a trade-off, and every trade-off involves a cost. The decision to prioritize encryption and interoperability meant, in some ways, a decision to deprioritize safety and civility. According to people involved in the decision, Chris Cox, long Zuckerberg’s most trusted lieutenant, disagreed with the direction. The company was finally figuring out how to combat hate speech and false news; it was breaking bread with the media after years of hostility. Now Facebook was setting itself up to both solve and create all kinds of new problems. And so in the middle of March, Cox announced that he was leaving. A few hours after the news broke, a shooter in New Zealand livestreamed on Facebook his murderous attack on a mosque.

Sandberg says that much of her job these days involves harm prevention; she’s also overseeing the various audits and investigations of the company’s missteps. “It’s going to take real time to go backwards,” she told WIRED, “and figure out everything that could have happened.”

Zuckerberg, meanwhile, remains obsessed with moving forward. In a note to his followers to start the year, he said one of his goals was to host a series of conversations about technology: “I’m going to put myself out there more.” The first such event, a conversation with the internet law scholar Jonathan Zittrain, took place at Harvard Law School in late winter. Near the end of their exchange, Zittrain asked Zuckerberg what Facebook might look like 10 or so years from now. The CEO mused about developing a device that would allow humans to type by thinking. It sounded incredibly cool at first. But by the time he was done, it sounded like he was describing a tool that would allow Facebook to read people’s minds. Zittrain cut in dryly: “The Fifth Amendment implications are staggering.” Zuckerberg suddenly appeared to understand that perhaps mind-reading technology is the last thing the CEO of Facebook should be talking about right now. “Presumably this would be something someone would choose to use,” he said, before adding, “I don’t know how we got onto this.”

Nicholas Thompson (@nxthompson) is WIRED’s editor in chief. Fred Vogelstein (@­fvogelstein) is a contributing editor at the magazine.

This article appears in the May issue.
https://www.wired.com/story/facebook-ma ... resh-hell/
Mazars and Deutsche Bank could have ended this nightmare before it started.
They could still get him out of office.
But instead, they want mass death.
Don’t forget that.

Re: The creepiness that is Facebook

Postby seemslikeadream » Thu Apr 18, 2019 5:48 am

Facebook collected 1.5 million users' email contacts without their knowledge
Hong Kong (CNN Business) — Facebook has admitted that it collected up to 1.5 million users' email contacts without their consent, in the latest privacy issue to hit the giant tech firm.

The world's biggest social network said Wednesday night that the email contact lists had been "unintentionally" uploaded to Facebook (FB) following a design change almost two years ago, and the company was now in the process of deleting them.
Facebook said the issue began three years ago when it made changes to the step-by-step verification process users go through when signing up for an account on the platform. Prior to those changes, users were given the option to upload their email contact lists when opening an account to help them find friends already on Facebook.

But in May 2016, Facebook removed language that explained users' contact lists could be uploaded to the company's servers when they signed up for an account. This meant that in some cases people's email contact lists were uploaded to Facebook without their knowledge or consent.

A Facebook spokesperson said Wednesday the firm did not realize this was happening until April of this year, when it stopped offering email password verification as an option for people signing up to Facebook for the first time.

"When we looked into the steps people were going through to verify their accounts, we found that in some cases people's email contacts were also unintentionally uploaded to Facebook when they created their account," the spokesperson added.

The company said the mistakenly uploaded contact lists had not been shared with anyone outside of Facebook. The news was first reported by Business Insider on Wednesday.
Ashkan Soltani, a former chief technology officer for the Federal Trade Commission, tweeted Wednesday evening that he thought this was "one of the most legally actionable behaviors by @facebook to date."
"I'm confident regulators will be taking a look," he said.

The incident is the latest privacy issue to rock Facebook, which has more than two billion users globally. Over the last 18 months these have included the Cambridge Analytica data scandal and the biggest security breach in its history.
CEO Mark Zuckerberg has responded to criticism by promising to introduce more privacy-focused measures on the platform, such as encrypted messaging and better data security.
Facebook was also engulfed in controversy after a shooter in New Zealand livestreamed his March 15 attack on two mosques in Christchurch using the social network's video tools. The shooter killed 50 people.
Its WhatsApp instant messaging application has been accused of enabling the spread of misinformation in India.
https://www.cnn.com/2019/04/18/business ... index.html
Mazars and Deutsche Bank could have ended this nightmare before it started.
They could still get him out of office.
But instead, they want mass death.
Don’t forget that.

Re: The creepiness that is Facebook

Postby seemslikeadream » Fri Apr 19, 2019 10:02 am

Mark Zuckerberg shared private user data with Facebook 'friends', leaked documents reveal
Internal emails suggest companies were incentivised to share information with social media giant

2 hours ago
Facebook CEO Mark Zuckerberg gave access to sensitive user data to dozens of app developer friends, according to thousands of leaked documents.

Zuckerberg reportedly used the data as a reward to third party companies and developers who either had a favourable relationship with company executives or spent considerable amounts on ads.

Companies were also incentivised to share their data with Facebook, while those that did not were shut out and denied access.

These methods helped to consolidate Facebook’s position as the leading social network platform between 2011 and 2015, the leak suggests.

The trove of documents obtained by investigative journalist Duncan Campbell included emails, presentations, meeting summaries and internal webchats from within the company.

Around 4,000 pages shared with NBC News, Computer Weekly and Germany’s Süddeutsche Zeitung newspaper revealed the extent of the data sharing collaborations.

Facebook did not immediately respond to a request for comment from The Independent but a statement provided to NBC News said the documents were authentic but misleading.

“The set of documents, by design, tells only one side of the story and omits important context,” said Paul Grewal, vice president and deputy general counsel at Facebook.

“We will stand by the platform changes we made in 2014/2015 to prevent people from sharing their friend’s information with developers... The facts are clear: we’ve never sold people’s data.”

Facebook did not provide additional documents to support its claim that the leaked material had been cherry-picked.
https://www.independent.co.uk/life-styl ... 72111.html



Facebook challenged to give TED talk
By Jane Wakefield, Technology reporter
Carole Cadwalladr exposed the Cambridge Analytica data scandal
The investigative journalist who revealed the Cambridge Analytica scandal has demanded answers from tech giants about political ads.

In her TED talk, Carole Cadwalladr called on the executives of Facebook and Twitter to come to the conference and discuss their role in influencing elections around the world.

Twitter boss Jack Dorsey is due to speak later this week.

TED curator Chris Anderson also invited Facebook to address the conference.

Alongside staff of the New York Times, Cadwalladr was named as a finalist for the prestigious Pulitzer Prize for journalism for her work on the Cambridge Analytica story.

It involved the discovery that an academic at the University of Cambridge used a personality quiz to harvest up to 87 million Facebook users' details.

Some of this was subsequently shared with the political consultancy Cambridge Analytica, which used it to target political advertising in the US.

Cadwalladr, who writes for the Guardian and Observer, used her TED talk to directly address those she called the "gods of Silicon Valley".

Many of the top executives of technology firms attend the TED conference in Vancouver, Canada.

"We are what happens to a Western democracy when elections are disrupted by technology," said Cadwalladr, referring to how voters in the Brexit referendum may have been influenced by online political campaigns.

"Technology has been amazing but now it is a crime scene," she added.

She said the technology giants had acted as "accessories to spreading lies".

'Wrong side of history'

She challenged Facebook boss Mark Zuckerberg to come to TED and criticised his refusal to address the UK parliamentary committee tasked with investigating Facebook's role in the Brexit referendum.

The Digital, Culture, Media and Sport Committee has suggested that the government makes major changes in electoral law to ensure future online campaigns are more transparent.

Mark Zuckerberg refused to face a UK parliamentary committee
Cadwalladr said that there were still questions for Facebook to answer.

"The whole referendum took place on Facebook and we have no idea who saw what ads, who placed them and what money was spent," she said.

"Facebook is on the wrong side of history in refusing to give answers," she added.

The BBC asked Facebook for its response but it has not replied.

The social network has changed its rules around political ads in the UK, asking anyone placing them to verify their identity and location, and prove who is paying for the advert.

People buying the ads must provide their identity by submitting ID, which will be verified by a third party. They must also demonstrate that they have a UK address.

After Cadwalladr's talk, TED curator Mr Anderson promised to "hold a space" at the conference for Facebook executives, some of whom he said "were watching".

A challenge like this has been met before. In 2014, former National Security Agency (NSA) worker Edward Snowden was a surprise guest at TED, appearing by telerobot from an undisclosed location in Russia.

After his talk, a representative of the NSA also made an unscheduled appearance at the conference, offering to be more transparent about its surveillance work in future.
https://www.bbc.com/news/technology-47942278
Mazars and Deutsche Bank could have ended this nightmare before it started.
They could still get him out of office.
But instead, they want mass death.
Don’t forget that.

Re: The creepiness that is Facebook

Postby Elvis » Thu Jul 25, 2019 1:07 pm

Instead of embracing Facebook’s Libra, we should be rallying for a public option for digital currency.

https://www.thenation.com/article/faceb ... y-digital/

In a telling moment during the House Committee hearing on Wednesday, Representative Ocasio-Cortez asked Marcus whether he considered Libra a “public good,” like roads, parks, schools, and the legal system. He demurred, even though Libra’s white paper explicitly states that Libra “believe[s] that a global currency and financial infrastructure should be designed and governed as a public good.” It was the only moment during the entire hearing when Marcus appeared to veer off-message, likely out of recognition that it would be difficult to make such a bald lie directly to a panel of elected public officials.

The exchange underscored another crucial point made by Representative Ayanna Pressley (D-MA): Libra owes its existence to governments’ failures to develop their own public digital currency platforms that connect communities and people around the world together in ways that encourage global citizenship and coordination rather than price gouging and data mining. As John Kenneth Galbraith once observed: Private affluence thrives amid public squalor.
“The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.” ― Joan Robinson

Re: The creepiness that is Facebook

Postby Elvis » Tue Jul 30, 2019 10:34 am

https://therealnews.com/stories/faceboo ... ns-privacy
Facebook’s Libra Currency Monetizes Identity and Threatens Privacy
July 5, 2019


Bill Black discusses Facebook's new proposed cryptocurrency, called “Libra.” Facebook could use this technology to standardize identity and create a world of ultimate surveillance, and then profit from it, says Black.



https://www.youtube.com/watch?v=wHgpUuu7OUY
“The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.” ― Joan Robinson

Re: The creepiness that is Facebook

Postby 82_28 » Tue Jul 30, 2019 3:46 pm

I'm not saying that I haven't been skimmed, but I have a "fake" account. I use it with a bare minimum of "friends". I think I have already said somewhere upthread what I am about to repeat: I don't want to peer into the lives of people from the past and people I don't know. I use it strictly for messaging and interesting links. Other than that it says I am in Seattle, and that's it. When I say "skimmed" I mean maybe I have been triangulated to a degree. I don't use it on my phone at all, and they are never getting my phone number even though it keeps asking me for it. I feel it can be kind of benign as long as you don't fuck around with it or look at what they want you to look at. Also, I never post photos of myself or others on it. It's not that bad if you use it for its unintended purposes.
There is no me. There is no you. There is all. There is no you. There is no me. And that is all. A profound acceptance of an enormous pageantry. A haunting certainty that the unifying principle of this universe is love. -- Propagandhi

Re: The creepiness that is Facebook

Postby Belligerent Savant » Tue Jul 30, 2019 5:35 pm

.

Facebook uses tracking pixels and cookies to track user activity, ostensibly to help target audiences for branding/marketing purposes.

These tracking tools will have plenty of details about you based on your IP address and profile, along with whatever you like, click or visit while logged in.


Here's how it's presented to marketing entities:

https://blog.hootsuite.com/facebook-pixel/

And here's another view, from 2016 (they've refined their M.O. since then):

https://www.govloop.com/community/blog/ ... d-website/

Snippet:
The “pixel” refers to an HTML code snippet placed on specific pages that lets Facebook track user “conversions” from advertisements we place. Basically, it lets us know if our ads are effective or not.

At issue is the fact that the data Facebook collects from its pixel feature is not anonymous and can be tied to individual users, often by name. This kind of data tracking violates our privacy policy, and is therefore not allowed on King County websites.


And from the comments section of the 2nd link:

In short: If you have the Facebook pixel installed, it will track the movements of any visitors on your website who are simultaneously logged into Facebook. It will record which pages on your site they visit, which pages they don’t visit, and when they visit. Using this data, you can advertise to very targeted groups of people.

The part about tracking people “who are simultaneously logged into Facebook” is the key. Let’s say someone is using Facebook and sees an ad about shoes that they click on. If the shoe company’s website has the Facebook pixel installed, Facebook’s advertising metrics will track whether that user actually bought the shoe—or whatever the goal is for a certain website.

Facebook reports this information to you in its analytics dashboard so you can see that of the 100 people who clicked your ad, only 2 of them bought a pair of shoes. That’s what we get out of it as the business. Makes sense, and it’s very helpful for marketers.

What does Facebook get out of it? A LOT MORE! Back to the part about people “who are simultaneously logged into Facebook.” Facebook is watching very carefully what all of us do on its website so that it can use that information to build advertising profiles on each of us. For example, if you click on lots of stories about dogs and cats you can expect to start seeing advertising in your news feed about pets. That’s how Facebook works.

The pixel allows Facebook to track what each individual user does on OTHER websites. So those people who visited your shoe company’s website are giving Facebook extra data on what they did there and what they clicked on or purchased. Facebook then uses that information to keep building its advertising profile on you, which it sells to others looking to reach certain audiences.

This is great for the shoe company, but we don’t feel it’s appropriate to give Facebook that kind of data about what people are doing on a government website. As a side note, Google Analytics does not track people by name, only behavior. Facebook, on the other hand, needs to know WHO is clicking on what on the shoe company’s website so it can use that information in the future to better target advertising to that one person.
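
To make the mechanics above concrete, here is a minimal sketch, in TypeScript, of how a site that has installed the pixel reports a "conversion." The fbq global and the standard "Purchase" event are part of the pixel's public documentation; the pixel ID and purchase values below are made up.

// Minimal sketch of firing a Facebook pixel conversion event.
// `fbq` is the global function the pixel loader script defines;
// the pixel ID and purchase details here are hypothetical.
declare const fbq: (command: string, ...args: unknown[]) => void;

fbq('init', '1234567890');   // hypothetical pixel ID, set once per page
fbq('track', 'PageView');    // fired on every page load

// After checkout, the site reports a standard Purchase event, which
// Facebook can associate with the visitor's logged-in session.
function reportPurchase(value: number, currency: string): void {
  fbq('track', 'Purchase', { value, currency });
}

reportPurchase(59.99, 'USD');

Because the request carries the visitor's Facebook cookies, the event can be joined to a named account on Facebook's side, which is exactly the behavior the King County post above objects to.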

Re: The creepiness that is Facebook

Postby seemslikeadream » Fri Aug 23, 2019 9:50 am

Inside the World of Cambridge Analytica
viewtopic.php?f=8&t=40900&p=675561&hilit=Cambridge#p675561


Facebook learned about Cambridge Analytica as early as September 2015, new documents show
Lauren Feiner
Newly released documents appear to shed light on when Facebook first learned about potential violations by Cambridge Analytica, the firm that used Facebook data to profile and target voters in the 2016 U.S. election.

The internal communication refers to concerns over Cambridge Analytica as early as September 2015. In the documents, employees discuss Cambridge Analytica and other third parties that Facebook had been warned were using its data in ways that might violate its policies. The employees said they were reaching out to the companies in question to investigate their use of Facebook data.

In April 2018, Facebook CEO Mark Zuckerberg testified in front of the Senate that the company learned in 2015 that “Cambridge Analytica had bought data from an app developer on Facebook that people had shared it with,” and demanded the firm delete and stop using data from Facebook. The correspondence between employees gives more insight into what Facebook knew at the time and when it knew it.

Facebook said in a blog post Friday that the document “has the potential to confuse two different events surrounding our knowledge of Cambridge Analytica.” The company said that at the time, “a Facebook employee shared unsubstantiated rumors from a competitor of Cambridge Analytica, which claimed that the data analytics company was scraping public data.” Facebook said the scraping of public profiles is distinct from the data Cambridge Analytica reportedly used from users’ friends who did not consent to sharing their data. Cambridge Analytica reportedly sold data it obtained from users and their friends once they took a personality test developed by app developer Aleksandr Kogan.

Facebook included a link to the same email correspondence in the post, which it said it agreed to jointly release with the District of Columbia Attorney General. Reached for comment, Facebook referred back to its blog post. The office of the D.C. Attorney General did not immediately return a request for comment.

Still, the documents show that Facebook was aware of potential policy violations by Cambridge Analytica as early as September 2015. In the documents, one employee suggests keeping the issue of third-party data usage and collection “open” and says they will work on “a more business friendly description” of its policies.

Later in the communication, one of the employees called the Cambridge Analytica situation “hi pri” (high priority) after The Guardian ran its December 2015 article claiming the presidential campaign of Sen. Ted Cruz, R-TX, was using data on Facebook users largely without their consent.

“This story just ran in the Guardian and is now prompting other media requests,” the employee wrote. “We need to sort this out ASAP.”

This story is developing.
https://www.cnbc.com/2019/08/23/faceboo ... ments.html


Facebook bans ads from The Epoch Times after huge pro-Trump buy
https://www.cnbc.com/2019/08/23/faceboo ... p-buy.html


Falun Gong think Trump is their savior


The Epoch Times is the largest pro-Trump spender on Facebook.

The Epoch Times is run by people who “believe that Trump was sent by heaven to destroy the Communist Party”

It's run by a religious sect that believes judgment day, which sends those they label “communists” to Hell, is 30 years late—and that Trump is making it happen
Mazars and Deutsche Bank could have ended this nightmare before it started.
They could still get him out of office.
But instead, they want mass death.
Don’t forget that.
