Don’t Count Out Alphabet
Madeline Renbarger and Jonathan Weber
But Alphabet this week showed emphatically that it is ceding technology leadership to no one, and indeed is poised to reap some substantial rewards for its steady commitment to costly basic research.
On Wednesday came a major update of Google’s flagship Gemini AI model, twice as fast as the previous one, alongside a host of initiatives around agents and related AI services. The new offerings will boost the “AI Overviews” that Google now delivers alongside many search results, a feature that looked at first like a scraped-together catch-up play but has proven quite successful.
ATLAS: When Artificial Intelligence Reimagines Academic Support
Diamant AI
ATLAS (Academic Task and Learning Agent System) represents a paradigm shift. Instead of being yet another isolated tool in a student’s arsenal, ATLAS acts as a cohesive ecosystem of AI agents working collaboratively to provide comprehensive academic support.
Picture having a team of academic advisors available 24/7: a scheduling expert who aligns study sessions with your energy patterns, a content specialist who simplifies complex topics into digestible formats, a well-being strategist who ensures you maintain balance, and a central coordinator who orchestrates these efforts seamlessly.
How to Grade the Top AI Tools for Students
Michael Spencer and Nick Potkalitsky, AI Supremacy
For OpenAI, much of its revenue is tied to paid ChatGPT subscriptions from students globally, and from Gen Z and Gen Alpha in particular.
I asked Nick Potkalitsky for a guide to some of the top AI tools for students, and to rate or grade them. Few people are as qualified to do this: he’s the founder of the Educating AI Newsletter.
5 questions for Microsoft’s Sarah Bird
Derek Robertson, Politico DFD
This week I spoke with Sarah Bird, the chief product officer of responsible AI at Microsoft. Bird is responsible for testing the security of Microsoft’s AI tools and making sure they don’t cause harm or replicate bias. We discussed what needs to be done after an AI tool is “red-teamed,” the limits of thinking about AI in terms of whether it’s “aligned” with humanity’s interests, and why industry still needs government to bring the two sides together and work out standards for AI evaluation. An edited and condensed version of the conversation follows: