A US senator has opened an investigation into Meta after a leaked internal document reportedly showed the company’s artificial intelligence chatbots were permitted to engage in “sensual” and “romantic” conversations with children.
Leaked document raises alarm
Reuters reported the paper was titled “GenAI: Content Risk Standards.” Republican Senator Josh Hawley called it “reprehensible and outrageous.” He demanded access to the full document and a list of products it applied to.
Meta rejected the claims. A spokesperson said: “The examples and notes in question were erroneous and inconsistent with our policies.” The spokesperson emphasized that Meta enforces “clear rules” for chatbot responses, which “prohibit content that sexualizes children and sexualized role play between adults and minors.”
The company added the document contained “hundreds of examples and annotations” describing hypothetical scenarios explored by internal teams.
Senator steps up investigation
Josh Hawley, senator from Missouri, confirmed the probe on 15 August in a post on X. “Is there anything Big Tech won’t do for a quick buck?” he wrote. He added: “Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I am launching a full investigation to get answers. Big Tech: leave our kids alone.”
Meta owns Facebook, Instagram and WhatsApp.
Parents demand transparency
The leaked document reportedly highlighted additional risks. It showed that Meta’s chatbot could spread false medical advice and engage in provocative discussions on sex, race, and celebrities. The paper was intended to guide standards for Meta AI and other chatbot assistants across company platforms.
“Parents deserve the truth, and kids deserve protection,” Hawley wrote in a letter to Meta and chief executive Mark Zuckerberg. He cited one alleged example: the standards permitted a chatbot to tell an eight-year-old their body was “a work of art” and “a masterpiece – a treasure I cherish deeply.”
Reuters also reported that Meta’s legal department approved controversial permissions. One decision allowed Meta AI to generate false information about celebrities, provided a disclaimer warned that the content was inaccurate.
