
Hyperdimensional: Decentralized Training and the Fall of Compute Thresholds


Will decentralized training break compute thresholds for good?

I’ve been skeptical of regulation based on “compute thresholds” since the idea first entered mainstream policy discussion. The threshold-based approach taken by the Biden Administration in its Executive Order on AI struck me as a plausible first draft, particularly given that the basic intention behind it was “make sure OpenAI, DeepMind, Anthropic, and Meta talk to the federal government about their AI development and safety plans.” Compute thresholds are fine for that contingent and transitory need. But for anything more robust, and especially for any regulation that imposes actual or potential costs on model developers, I concluded long ago that compute thresholds would probably be ineffective.
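As a rough illustration of how these thresholds get operationalized: training compute is usually estimated with the common rule of thumb of about six floating-point operations per parameter per training token, and the Executive Order's reporting threshold sits at 10^26 operations. The sketch below applies that heuristic; the parameter and token counts are purely illustrative assumptions, not figures for any real model.

```python
# Back-of-the-envelope training-compute estimate versus the Executive Order's
# 10^26-operation reporting threshold. Uses the common approximation
# FLOPs ~= 6 * parameters * training tokens. Model sizes below are illustrative.

EO_THRESHOLD_FLOPS = 1e26  # reporting threshold set by the Executive Order


def training_flops(params: float, tokens: float) -> float:
    """Standard heuristic: ~6 floating-point operations per parameter per token."""
    return 6 * params * tokens


hypothetical_runs = {
    "70B params on 15T tokens": training_flops(70e9, 15e12),
    "1T params on 20T tokens": training_flops(1e12, 20e12),
}

for name, flops in hypothetical_runs.items():
    status = "above" if flops > EO_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.2e} FLOPs ({status} the 1e26 reporting threshold)")
```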

A few weeks ago, I wrote that OpenAI’s new o1 models challenge compute thresholds through the inference-time scaling paradigm, in which capability gains come from spending more compute at inference rather than in training (if that’s unfamiliar, I’ll explain more below). But there’s yet another new paradigm in AI development that could complicate compute thresholds as a governance mechanism: decentralized training. Let’s take a look.
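To see why decentralized training muddies the picture, here is a toy sketch of one common pattern: several independent sites each run many local optimization steps and only occasionally average their parameters, in the spirit of federated averaging or DiLoCo-style local SGD. This is a generic illustration under assumed hyperparameters, not the protocol of any particular project; the point is that no single operator's cluster performs the full training run, so no single site's compute maps cleanly onto a threshold.

```python
# Toy sketch of decentralized training via periodic local SGD with parameter
# averaging (federated-averaging / DiLoCo-style pattern). Illustrative only:
# each "site" trains a local copy on its own data and syncs infrequently,
# so the total compute is split across independent clusters.

import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])   # target weights for a toy linear regression
num_sites = 4                    # independent clusters / participants
local_steps = 20                 # optimization steps each site takes between syncs
rounds = 10                      # number of averaging rounds
lr = 0.05


def local_grad(w, rng):
    """Gradient of squared error on a fresh toy batch (each site sees its own data)."""
    x = rng.normal(size=(32, 2))
    y = x @ true_w + rng.normal(scale=0.1, size=32)
    err = x @ w - y
    return x.T @ err / len(y)


global_w = np.zeros(2)
for _ in range(rounds):
    site_weights = []
    for _ in range(num_sites):
        w = global_w.copy()
        for _ in range(local_steps):        # compute happens locally, no communication
            w -= lr * local_grad(w, rng)
        site_weights.append(w)
    global_w = np.mean(site_weights, axis=0)  # infrequent sync: average parameters

print("recovered weights:", np.round(global_w, 3), "target:", true_w)
```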
