US onAir Curators
The Future of AI Agents
Michael Spencer, AI Supremacy
An AI agent is a software program designed to autonomously perform tasks on behalf of users or other systems, often without human intervention.
You might recall that in OpenAI's much-hyped framework for AGI, agents are just a stage on the path to AGI — supposedly Level 3. But what does this Level 3 even mean, and how useful will automated tasks actually be to us or our productivity? It's a big question mark. All of this demand for AI chips has to lead somewhere, but where?
What is AI superintelligence? Could it destroy humanity? And is it really almost here?
Flora Salim
Let’s suppose we do one day have superintelligent, fully autonomous AI agents. Will we then face the risk they could concentrate power or act against human interests?
Not necessarily. Autonomy and control can go hand in hand: a system can be highly automated yet still afford a high degree of human control.
Like many in the AI research community, I believe safe superintelligence is feasible. However, building it will be a complex and multidisciplinary task, and researchers will have to tread unbeaten paths to get there.
On the EU AI Code of Practice
Dean W. Ball and Miles Brundage
Summary of our reaction to the first draft
The Code of Practice thoughtfully and comprehensively raises many of the most important questions in the fields of AI governance and safety. There are several aspects of the Code of Practice which we broadly support, and we suggest only minor modifications to them. At the same time, there are many places where the draft merely gestures at answers to the questions it thoughtfully raises—or provides none at all. Notably (and admirably), the authors acknowledge that the document is very early and that many open questions remain. We hope that our and others’ comments on this first version prove useful in drafting the next one.