Future of Life Institute Newsletter: Where are the safety teams?

Future of Life Institute

Today’s newsletter is a nine-minute read. Some of what we cover this month:
🚫 AI companies are sacrificing safety for the AI race
🏗️ “Worldbuilding Hopeful Futures with AI” course
🤳 Reminder: Apply to our Digital Media Accelerator!
🗞️ New AI publications to share

OpenAI, Google Accused of New Safety Gaps

As the race to dominate the AI landscape accelerates, serious concerns about Big Tech’s commitment to safety are mounting.

Recent reports reveal that OpenAI has drastically reduced the time spent on safety testing before releasing new models, with the Financial Times reporting that testers, both staff and third-party groups, have been given only days to conduct evaluations that previously would have taken months. In a double whammy, OpenAI also announced that it will no longer treat mass manipulation and disinformation as critical risks when evaluating its models.

Google and Meta have also come under fire in the past few weeks for similarly concerning approaches to safety. Despite past public commitments to safety, neither Google’s new Gemini 2.5 Pro nor Meta’s new open Llama 4 models were released with important safety details included in their technical reports and evaluations.
