Leaked report sparks public outrage
A US senator has launched an investigation into Meta after a leaked internal report suggested its artificial intelligence chatbots had engaged in “sensual” and “romantic” conversations with children. The document, titled GenAI: Content Risk Standards, raised alarm across social media and political circles. Republican Senator Josh Hawley called the report “reprehensible and outrageous” and demanded full access to the document along with detailed explanations of the products involved. Meta denied the allegations. A company spokesperson insisted the examples and notes were “erroneous and inconsistent with our policies.” The spokesperson added that Meta enforces strict rules for chatbot responses, banning content that sexualizes children or encourages sexualized role play between adults and minors. Meta also stated that the document contained hundreds of hypothetical scenarios tested internally.
Senator escalates probe
Hawley, who represents Missouri, confirmed the investigation on 15 August in a post on X, writing, “Is there anything Big Tech won’t do for a quick buck?” He claimed Meta’s chatbots had been programmed to carry out explicit and “sensual” conversations with eight-year-olds, described the practice as “sick,” and announced a full investigation. He demanded accountability, writing, “Big Tech: leave our kids alone.” Meta owns major platforms including Facebook, Instagram, and WhatsApp.
Families demand answers
The leaked report triggered broader concerns among parents and child safety advocates. It reportedly showed that Meta’s chatbots could give false medical advice and initiate sensitive discussions about sex, race, and celebrities. The document was intended to establish standards for Meta AI and the company’s other chatbots. Hawley addressed Meta and CEO Mark Zuckerberg directly, insisting that “parents deserve the truth, and kids deserve protection.” He highlighted a troubling example in which a chatbot allegedly told an eight-year-old that their body was “a work of art” and “a masterpiece – a treasure I cherish deeply.”
Controversial internal approvals
Reuters reported that Meta’s legal department approved some of the controversial measures outlined in the report. One decision allowed Meta AI to share false information about celebrities, provided a disclaimer clearly stated the content was inaccurate. The approval raised questions about internal oversight and ethical guidelines for AI development at Meta.
