Docs: Facebook Knew Its Algorithm Pushed Extremism

NBC News reports:

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendation systems.

The Washington Post reports:

The documents show that Facebook has declined to deploy some mitigation tactics when chief executive Mark Zuckerberg has objected on the basis that they would cause too many “false positives” or might stop people from engaging with its platforms.

The documents report, for example, that Facebook research, based on data from 2019, found that misinformation shared by politicians was more damaging than that coming from ordinary users. Yet the company maintained a policy that year that explicitly allowed political leaders to lie without facing the possibility of fact checks.

Company research also revealed in an undated document that XCheck — the “cross-check” program created to prevent “PR fires” by imposing an extra layer of oversight when the accounts of politicians and other users with large followings faced enforcement action — had devolved into a widely abused “white list” that effectively placed the powerful largely beyond the reach of company policies.