
Meta suppressed children’s safety research, four whistleblowers claim

Two current and two former Meta employees disclosed documents to Congress alleging that the company may have suppressed research on children’s safety, according to a report from The Washington Post.

According to their claims, Meta changed its policies around researching sensitive topics — like politics, children, gender, race, and harassment — six weeks after whistleblower Frances Haugen leaked internal documents showing that Meta’s own research had found Instagram can damage teen girls’ mental health. Those revelations, made public in 2021, kicked off years of congressional hearings over child safety on the internet, an issue that remains high on the agenda of governments worldwide.

As part of these policy changes, the report says, Meta proposed two ways that researchers could limit the risk of conducting sensitive research. One suggestion was to loop lawyers into their research, protecting their communications from “adverse parties” due to attorney-client privilege. Researchers could also write about their findings more vaguely, avoiding terms like “not compliant” or “illegal.”

Jason Sattizahn, a former Meta researcher specializing in virtual reality, told The Washington Post that his boss made him delete recordings of an interview in which a teen claimed that his ten-year-old brother had been sexually propositioned on Meta’s VR platform, Horizon Worlds.

“Global privacy regulations make clear that if information from minors under 13 years of age is collected without verifiable parental or guardian consent, it has to be deleted,” a Meta spokesperson told TechCrunch.

But the whistleblowers claim that the documents they submitted to Congress show a pattern of employees being discouraged from discussing and researching their concerns around how children under 13 were using Meta’s social virtual reality apps.

“These few examples are being stitched together to fit a predetermined and false narrative; in reality, since the start of 2022, Meta has approved nearly 180 Reality Labs-related studies on social issues, including youth safety and well-being,” Meta told TechCrunch.

TechCrunch event

Join 10k+ tech and VC leaders for growth and connections at Disrupt 2025

Netflix, Box, a16z, ElevenLabs, Wayve, Sequoia Capital, Elad Gil — just some of the 250+ heavy hitters leading 200+ sessions designed to deliver the insights that fuel startup growth and sharpen your edge. Don’t miss the 20th anniversary of TechCrunch, and a chance to learn from the top voices in tech. Grab your ticket before Sept 26 to save up to $668.

San Francisco | October 27-29, 2025

In a lawsuit filed in February, Kelly Stonelake — a former Meta employee of fifteen years — raised concerns similar to those of the four whistleblowers. She told TechCrunch earlier this year that she had led “go-to-market” strategies to bring Horizon Worlds to teenagers, international markets, and mobile users, but felt the app lacked adequate safeguards to keep out users under 13; she also flagged persistent issues with racism on the platform.

“The leadership team was aware that in one test, it took an average of 34 seconds of entering the platform before users with Black avatars were called racial slurs, including the ‘N-word’ and ‘monkey,’” the suit alleges.

Stonelake has separately sued Meta for alleged sexual harassment and gender discrimination.

While these whistleblowers’ allegations center on Meta’s VR products, the company is also facing criticism for how other products, like AI chatbots, affect minors. Reuters reported last month that Meta’s AI rules previously allowed chatbots to have “romantic or sensual” conversations with children.

Original Source: https://techcrunch.com/2025/09/08/meta-suppressed-childrens-safety-research-four-whistleblowers-claim/

Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
