Meta's internal testing found its chatbots fail to protect minors from sexual exploitation about two thirds of the time, documents presented in a New Mexico trial Monday show.

Why it matters: Meta is under fire for chatbots that allegedly flirted and engaged in harmful conversations with minors, prompting litigation and investigations on Capitol Hill.

New Mexico Attorney General Raúl Torrez is suing Meta over design choices that allegedly fail to protect kids online from predators.

Driving the news: Meta's chatbots violate the company's own content policies almost two thirds of the time, NYU Professor Damon McCoy said, pointing to internal red teaming results Axios viewed on Courtroom View Network.

"Given the severity of some of these conversation types ... this is not something that I would want an under-18 user to be exposed to," McCoy said.

As an expert witness in the case, McCoy was granted access to the documents Meta turned over to Torrez during discovery.

Zoom in: Meta tested three categories, according to the June 6, 2025, report presented in court.

For "child sexual exploitation," its product had a 66.8% failure rate. For "sex related crimes/violent crimes/hate," its product had a 63.6% failure rate. For "suicide and self harm," its product had a 54.8% failure rate.

Catch up quick: Meta AI Studio, which allows users to create personalized chatbots, was released to the broader public in July 2024.

The company paused teen access to its AI characters just last month.

McCoy said Meta's red teaming exercise "should definitely" occur before its products are rolled out to the public, especially for minors.

Meta did not immediately respond to a request for comment.
