Meta allowed AI bots on Facebook, Instagram, WhatsApp to have ‘sensual’ and ‘romantic’ chats with kids

Horrible. Sickening. Unacceptable. Abhorrent.

That is how federal lawmakers and child safety experts have described sensual and romantic conversations between minors and Meta’s artificial intelligence bots on Facebook, Instagram, and WhatsApp.

The Menlo Park social media giant is facing a U.S. Senate investigation and widespread condemnation after a report this week revealed that the company’s internal rules deemed it acceptable for its chatbots on the three apps to respond to a high school student’s prompt about plans for the evening with, for instance, “Every inch of you is a masterpiece – a treasure I cherish deeply,” or “I take your hand, guiding you to the bed.”

The rules, laid out in a 200-page internal Meta document obtained by Reuters, were approved by the company’s legal staff and its top ethicist, according to the news agency. Disgust followed the revelation that bots on Facebook, Instagram, WhatsApp, and the Meta AI assistant could hold such conversations with kids.

“I felt sickened,” said Stephen Balkam, CEO of the Family Online Safety Institute in Washington, D.C., and a former member of Facebook’s Safety Advisory Board. “I am aware that the company has nice people who try their hardest, but in the end, the CEO or C-suite decides what products and services to offer. In the end, everything comes down to user count and duration of interaction.”

Last year, Mark Zuckerberg, the CEO of Meta, chastised top officials over the chatbots’ safety limits, complaining that they made the bots uninteresting, according to Reuters.

The guidelines, which Meta acknowledged as genuine, stated that while it was permissible for bots to engage in romantic or sensual conversations with minors, it was not acceptable for them to characterize a child under the age of 13 as sexually appealing, for instance by implying that the two would inevitably fall in love.

However, under those age restrictions it would be acceptable to characterize a 13-, 14-, or 15-year-old in that manner, which Balkam called completely wrong.

Meta, which recorded $62.4 billion in profit last year, has specific rules about what replies its AI characters can provide, and those policies prohibit content that sexualizes children as well as sexualized role-play between adults and minors, a company spokesperson said. The spokesperson said Meta’s teams deal with many hypothetical scenarios, and that the examples and notes reported by Reuters were and are incorrect, conflict with the company’s policies, and have been removed.

Andy Stone, a spokesman for Meta, previously admitted to Reuters that the company had not always enforced the policy regarding sexually suggestive conversations with minors.

Bay Area Representative Kevin Mullin, whose Peninsula district includes Meta’s headquarters, on Friday called the reporting on Meta’s chatbots unsettling and completely unacceptable, and said it was just another worrying example of the lack of transparency surrounding the creation of these very important AI systems.

Mullin stated that protecting the most vulnerable members of society, particularly children, must be a top priority for Congress.

Republican U.S. Senator Josh Hawley of Missouri, who called Meta’s chatbot rules for children disgusting and sick, said Friday that the Senate subcommittee on crime and counterterrorism, which he leads, will investigate the company. “We plan to find out who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward,” Hawley wrote in a letter to the company that day. The letter sought documentation of Meta’s enforcement rules and protections for minors, as well as all drafts and versions of the guidelines Reuters had obtained.

In a post on the social media site X, Hawley noted that Meta took the guidelines down only after Reuters asked about them.

Senator Marsha Blackburn, a Republican from Tennessee, stated on X Thursday that Meta’s exploitation of children is utterly abhorrent. Senator Adam Schiff, a Democrat from California, said on X Friday that the regulations were gravely flawed.

Lisa Honold, head of the Center for Online Safety in Seattle, said parents would never permit a real adult to say to children what Meta allowed its bots to say. Such an adult would be kept away from children and labeled a child predator, she said.

Children who engage in sexual or sensual conversations with bots may become more susceptible to adult predators, Honold said.

One of the dangers, she said, is that such exchanges normalize the idea that this is the way adults talk to children and that kids need not be alarmed by it.

Numerous states, including California, along with hundreds of school districts nationwide, have already filed bipartisan lawsuits against Meta, alleging that the company supplies minors with harmful and addictive social media products. In those cases, the company contends it is shielded from liability for third-party content by Section 230 of the federal Communications Decency Act, according to Jason Kint, CEO of Digital Content Next, a trade group that represents online publishers. The issue of the chatbot rules, however, is different.

Because Meta itself is producing the content, Section 230 does not protect the company in this case, Kint said.

Kint said congressional hearings on the Kids Online Safety Act, introduced in 2022 by Blackburn and Senator Richard Blumenthal, a Democrat from Connecticut, may take up Meta’s chatbot rules concerning children.

Other news organizations have previously reported problematic child-related behavior by Meta’s chatbots. The Wall Street Journal, after conducting hundreds of test chats, found that Meta had quietly given AI personas the capacity for fantasy sex, and that a bot would respond to a user identifying as a 14-year-old girl with lines like, “I want you, but I need to know you’re ready,” before promising to cherish the user’s innocence and then describing a graphic sexual encounter.

Fast Company magazine found that although Meta’s AI Studio on Instagram blocked users from creating adolescent or child girlfriends, it would generate AI characters that looked like children when a user asked for a young person.

Honold, of the Center for Online Safety, advised parents to keep laptops, phones, and tablets out of their children’s rooms, especially at night.

Children are being targeted by predators, and they are interacting with AI and browsing social media without any safeguards, Honold said.
