May-June 2025 US News

AI in Education

News

Feature Post: Impact of AI on Education

The featured US onAir Network post this week is on the Impact of AI on Education

To view more posts on this topic and the agencies, departments, and congressional committees and chairs working on addressing this issue, go to this AI & Education category slide show.

  • To view previous news posts, go to the 2025 News category slideshow.
  • Throughout the week, we will be adding to this post articles, images, livestreams, and videos about the latest US issues, politics, and government (select the News tab).
  • You can also participate in discussions in all US onAir posts as well as share your top news items and posts (for onAir members – it’s free to join).
The Case Against Planetary Governance
Combinations, Wolfgang Streeck, June 7, 2025

Wolfgang Streeck believes that the future of democracy lies not in newfangled structures of planetary governance, but in a recuperation of the nation state’s lost capacities. In Taking Back Control?, published by Verso last year, the German sociologist and former director of the Max Planck Institute methodically traces the quiet transfer of authority over economic life from elected parliaments to technocratic institutions beyond democratic reach. Streeck’s project warrants attention as a distinctly non-right-wing flavor of protectionism that cuts through the priors of the American political landscape.

From the inflation crisis of the 1970s onwards, Streeck argues, national governments have ceded ever-larger swaths of policy to an extraterritorial network of treaties, courts, and market watchdogs. The neoliberal turn, in his telling, did not emancipate markets from the state; it re-cast the state—above all the United States—as the enforcer of a single, border-spanning market regime. The promise of friction-free trade rests on an imposed economic uniformity that ultimately strips democracies of their sensitivity to citizens’ “collective particularism.” This enforced uniformity, Streeck shows, generates the discontent that authoritarian movements in turn exploit.

Streeck’s history helps us think about the origins of reactionary disquiet without conceding to its rhetoric. It also helps us think about the necessary conditions for an alternative populism. Drawing on thinkers including Karl Polanyi, John Maynard Keynes, and Herbert Simon, Streeck argues that the complexity of the global economy can only be democratically addressed by the downward delegation of sovereign powers. Against both planetary technocracy and reactionary nationalism, Streeck envisions an international order of small, democratically empowered states capable of shaping economic outcomes in response to the public good.

Combinations’ Matt Prewitt and Guy Mackinnon-Little spoke with Streeck about the sources of state authority, the complexity gap between networks and human societies, and what an international system that bolsters rather than undermines sovereignty might look like.

We Are Not the Nation We Thought We Were
Need to Know by David Rothkopf, June 6, 2025

One of the most important jobs we face in life is distinguishing between what is real and what is not. Discernment between the two ought to be a constant effort. For many of us, of course, it is not.

It is not, however, a simple matter of digging deeply enough into an idea or a representation of fact or a story to determine whether it is based in reality or not. Often the lines are blurred, even in our own minds. As I have written here before, I am a big believer in the theory that much of what we know—perhaps most of what we know—is not true.

Much of what we are taught as history never took place or did not unfold as represented in the books or stories handed down to us. Much of what we know as science is untrue. Pluto is a planet. Salt is bad for you. Or maybe not. When what is true shifts back and forth over the years, it ought to teach us a bit of humility about the reliability of our “knowledge.”

Many among us are taught to believe in certain things unquestioningly. Faith is considered a virtue. But how much of the picture of our past, present, and future that faith paints is actually an accurate depiction of life in this universe of ours?

Even our own memories are suspect. Stories handed down as gospel in families turn out to be fabrications or twisted retellings.

Understanding this produces useful intellectual humility.

The State AI Laws Likeliest To Be Blocked by a Moratorium
TechPolicy.Press, Cristiano Lima-Strong, June 6, 2025

In the coming weeks, the United States Senate is expected to ramp up consideration of a sprawling budget bill passed by the House that, if adopted, could block states from enforcing artificial intelligence regulations for 10 years.

Hundreds of state lawmakers and advocacy groups have opposed the provision, which House Republicans approved last month as an attempt to do away with what they call a cumbersome patchwork of AI rules sprouting up nationwide that could bog down innovation. On Thursday, Senate lawmakers released a version of the bill that would keep the moratorium in place while linking the restrictions to federal broadband subsidies.

Critics have argued that the federal moratorium — despite carving out some state laws — would preempt a wide array of existing regulations, including rules around AI in healthcare, algorithmic discrimination, harmful deepfakes, and online child abuse. Still, legal experts have warned that there is significant uncertainty around which specific laws would be preempted by the bill.

To that end, one non-profit organization that opposes the moratorium is releasing new research on Friday examining which state AI laws would be most at risk if the moratorium is adopted, which the group shared in advance with Tech Policy Press.

The report by Americans for Responsible Innovation — a 501(c)(4) that has received funding from Open Philanthropy and the Omidyar Network, among others — rates the chances of over a dozen state laws being blocked by a moratorium, from “likely” to “possible” to “unlikely.”

Trump. Musk. And the Death of the Public Square.
The Sustainable Media Substack, Steve Rosenbaum, June 6, 2025

Let’s stop pretending this is just drama between two oversized egos. It’s not just about Trump calling Musk a lunatic or Musk firing back with Epstein-coded slime. This week’s meltdown between the president and the world’s richest man is a symptom of something much bigger — and much more dangerous.

These aren’t just two men with platforms. They own them.

Let’s talk scale. X has about 650 million monthly active users worldwide — with around 60% under 35 — and it dominates news and cultural conversation in ways that no newspaper or network ever could. Truth Social is much smaller, hovering around 6 million monthly users, nearly all of them in the U.S., but what it lacks in reach, it makes up for in ideological purity. These aren’t just “apps.” They are fully functioning media ecosystems, operating without editors, without fact-checkers, without rules.

X and Truth Social don’t compete with traditional media — they drown it out. They out-shout Instagram, Reddit, or even YouTube in political influence, especially in election cycles. But they don’t just broadcast content. They algorithmically amplify it — injecting bias, bile, and personal agendas directly into the bloodstream of public discourse. No newsroom. No standards. No accountability. Just the unfiltered whims of two egomaniacs with vendettas and loyal followings.

This isn’t a fight between two guys online. It’s a battle for the infrastructure of truth itself.

Every headline covering their feud danced around the real story. Reporters gamed out how Trump might unleash government agencies to punish Musk. Pundits speculated Musk might push anti-Trump content down your feed. No one seemed shocked.

Few asked the real questions:

Why do two individuals have this kind of power over information in the first place? How did we allow truth itself to become a privately-owned asset?

At the Center for Strategic and International Studies, a Washington, D.C.-based think tank, the Futures Lab is working on projects to use artificial intelligence to transform the practice of diplomacy.

With funding from the Pentagon’s Chief Digital and Artificial Intelligence Office, the lab is experimenting with AIs like ChatGPT and DeepSeek to explore how they might be applied to issues of war and peace.

While in recent years AI tools have moved into foreign ministries around the world to aid with routine diplomatic chores, such as speech-writing, those systems are now increasingly being looked at for their potential to help make decisions in high-stakes situations. Researchers are testing AI’s potential to craft peace agreements, to prevent nuclear war and to monitor ceasefire compliance.

The Defense and State departments are also experimenting with their own AI systems. The U.S. isn’t the only player, either. The U.K. is working on “novel technologies” to overhaul diplomatic practices, including the use of AI to plan negotiation scenarios. Even researchers in Iran are looking into it.

Futures Lab Director Benjamin Jensen says that while the idea of using AI as a tool in foreign policy decision-making has been around for some time, putting it into practice is still in its infancy.

Pope Leo XIV, first-ever American pontiff, appears for the first time
PBS NewsHour, May 8, 2025 – 6:00 am to 6:00 pm (ET)

In this week’s conversation, Yascha Mounk, Elaine Kamarck, and William Galston explore why the Democrats aren’t building long-term coalitions, how the Democrats lost the working class, and how centrists in the party can create a compelling offer for voters.

Mounk: I’d always read about this famous paper, “The Politics of Evasion,” and I’m obviously well acquainted with both of your work, but I must admit that I hadn’t read it until yesterday, and I just fell out of my chair reading the paper, noticing how similar the situation after Democrats lost to George H.W. Bush in 1988 was compared to how you might analyze it today. Take us back to that moment and explain to us what the problems were that you were analyzing in “The Politics of Evasion.”

Kamarck: We’d lost several presidential elections in a row, even though the party was still quite strong at the congressional level and at the local level. So we were living in a sort of a myth that really nothing was wrong. It was just that Ronald Reagan was so charismatic, et cetera. Then we lost to George H. W. Bush, who was anything but charismatic. We really had to have a “come to Jesus” moment, as we say. And we had to look at the party and say something’s really wrong here. Of course, what was wrong was something that we’ve seen since, which is that the Democrats were fundamentally out of step with most of the country on values. And they were turned off by the national Democrats, even though at that point in time, they continued to elect Democrats to the House and to the Senate. So there was this need for the party to take a hard look at itself.

Running Everywhere: Expanding Our Model, Widening Our Mission — Running AND Advocating Everywhere
Pepperspectives, David Pepper and Michele Hornish, May 10, 2025

The Model

While explosions in small-dollar contributions have been working wonders supporting federal candidates in certain swing states in recent years, almost no money flows to most statehouse candidates.

And since it’s statehouses where most of the attacks on democracy and extremism have been doing the most damage, the lack of meaningful support for most statehouse candidates turns out to be a huge problem for democracy. Even worse, that lack of support is leaving huge numbers of these districts (the very districts where the most damage is being done) not contested at all. (Because why run if no one cares enough to support your candidacy?) And that, of course, makes the problem even worse. A downward spiral of extremism and anti-democracy, wholly uninterrupted by the other side or even a modicum of accountability.

In my book Saving Democracy, I equate the situation to a soccer game where one team is always on offense (extreme statehouses are the forwards, shooting at the goal non-stop). And the other team hardly plays defense against them:

The new pope won’t. He’s a sensible liberal who, three weeks ago, retweeted a post slamming Trump’s deportation of Kilmar Abrego Garcia: “Do you not see the suffering? Is your conscience not disturbed? How can you stay quiet?” He also retweeted a post reading: “JD Vance is wrong: Jesus doesn’t ask us to rank our love for others.”

When FDR was later asked for the roots of his political philosophy, he replied: “I’m a Christian and a Democrat.” There’s no question that the new social contract he struck was connected at a deep, instinctive level to the moral and social values articulated by Leo XIII.

Now the magnanimous spirit of the New Deal is under attack as never before. But help is on the way, courtesy of a South Side guy who may end up serving as the conscience of his country and the world.

In a little noticed interview, Meta founder Mark Zuckerberg offers a host of exceptionally creepy comments.

Lots of important monopoly-related things happened last week. Now that Apple’s app store monopoly is broken, developers are cutting prices and building cool stuff. The tariff shock is about to hit in force, but the stock market has recovered all of its losses since April 2nd. Plus a lot more.

But before getting to the full round-up, I want to focus on the social future that Meta CEO Mark Zuckerberg is building for all of us, whether we like it or not, and how reliant it is on the firm’s market power.

Take a recent viral clip about a future of AI friends, therapists, and girlfriends, from an interview he did on the Dwarkesh podcast. Zuckerberg talked about how Americans on average have only three friends, but want fifteen. He then explained that though emotional connections with AI bots are socially disfavored now, eventually society will “find the vocabulary” to understand that people who use AI to fill a hole of loneliness in their lives are “rational.”

The years of my youth must have been such a disappointment for sci-fi fans of my parents’ generation. They were raised on stories of spaceships soaring between the stars, and they grew up to see the space shuttle explode and humankind abandon the moon. They grew up expecting flying cars and robot servants, but as they reached middle age they were still trundling along the ground and doing their own laundry.

Though I’ll still go back and read some stuff from the 80s and 90s, I stopped reading new cyberpunk about a decade ago. Around that time it became clear that the pace of real technological change had overtaken authors’ imaginations; newly written cyberpunk fiction began to feel retrofuturistic, like someone writing about the present and getting it wrong. Meanwhile all I had to do to see fantastic techno-futures unfold around me was to read the news.

There are plenty of other ways in which new technologies might lead to dystopian outcomes. Beyond the obvious ones — rogue AGI and bioterrorism — there’s the possibility that modern technology might make replacement-level fertility impossible, leading to a grim, gray, shrinking world where working people have to toil ever longer and harder to support vast armies of the aged. Smartphones equipped with social media might also be leading to an epidemic of depression, loneliness, and reduced cognitive skills.

Why should those who aren’t scientists care? In the 21st century, science isn’t some esoteric intellectual affair. It’s the foundation of social and economic progress. And no, we can’t expect the private sector to fill the gap left by loss of government support. Basic research is a public good: it generates real benefits, but those benefits can’t be monetized because everyone can make use of the knowledge gained. So government support is the only way to sustain science. And that support is being rapidly ended.

But why do our new rulers want to destroy science in America? Sadly, the answer is obvious: Science has a tendency to tell you things you may not want to hear. Medical research may tell you that vaccines work and don’t cause autism. Energy research may tell you that wind power works and doesn’t massacre birds.

How to Cook Without Burning Down the Kitchen: An Analogy for Work and Life
The Growth Equation Newsletter, Brad Stulberg and Steve Magness, May 1, 2025

How do you know when you have too much going on?

The two clearest indicators: either a decline in objective performance or subjective experience. The numbers go down, the stress increases, or some combination of both. But these are end games you want to avoid. Ideally, you spot the issue in advance. It is easier to prevent overload than to escape or reverse it.

Cooking well—literally or metaphorically—means deciding how many burners you can have going, what should be boiling, and when.

Make this metaphor work for you by reflecting on how many burners you’ve got going and the heat of each. You can check in at the beginning of every week to prioritize which burners need to be actively boiling versus which you can keep on a simmer. You could even put this visualization on a whiteboard in your office. If you start to feel like the entire kitchen is getting out of control, that’s a sign to turn down a burner or two, or perhaps, even eliminate some altogether.

We all want to cook, but none of us want to burn down the kitchen. Hopefully, this helps.

AMD CEO: AI policy must encourage speed and innovation
Johns Hopkins University, May 2, 2025

Lisa Su explores the current state and future of AI, U.S.-based chip manufacturing

As someone with a front-row seat to the AI race, Lisa Su, CEO of Advanced Micro Devices (AMD), a company that designs and develops the chips behind AI advances, knows that in order to keep the U.S. competitive in the sector, her engineers must work on a timeline that has “negative slack.” Put simply: They must work faster than their runway.

“I say it’s negative slack because the industry is moving so fast,” she said, speaking of the AI sector at a live podcast recording of On with Kara Swisher at the Hopkins Bloomberg Center. “I’ve just not seen an industry move this fast.”

The speed of innovation—and how it intersects with issues like tariffs and export controls—is top of mind for Su as her company navigates an industry that’s at the center of national security and tech innovation. Here are four things she’s seeing play out in AI and what she sees as critical to ensuring the U.S. remains ahead.

Why Humanity—and Dignity—Shouldn’t Surrender to Technological Inevitability

The effective accelerationism movement (e/acc) presents itself as an enlightened embrace of technological progress, especially artificial general intelligence. Led by figures like Guillaume Verdon and embraced by venture capitalists like Marc Andreessen, the movement claims humanity faces a binary choice: “accelerate or die.” Those who question this narrative are dismissed as “decels” or “doomers” standing in the way of humanity’s cosmic destiny.

What’s actually at stake in this debate isn’t just the pace of innovation but whether humans meaningfully shape their own future. E/acc’s seductive simplicity—its promise that surrendering to technological inevitability will solve humanity’s problems—can slide quickly into authoritarian governance justified by “inevitable” technological imperatives. We’re already seeing these dynamics at work in real-world contexts, as when the Trump administration uses tariffs as leverage to force countries to accept Elon Musk’s Starlink—a fusion of technological and political power that bypasses democratic accountability.

The center must be held against this technological determinism. Two plus two equals four means we must always insist on seeing reality clearly, not through the distorting lens of inevitability narratives that conveniently serve those already in power. Human dignity and democratic legitimacy aren’t obstacles to technological advancement—they’re its moral foundation. Without them, technology inevitably becomes not a force for liberation, but merely another form of authoritarian control—no matter how brightly it smiles.

Navigating the AI Inflection Point: The Future of Labor and Expertise
The One Percent Rule, Colin W.P. Lewis, May 10, 2025

What happens to a society when intelligence itself becomes a commodity? That is the question posed throughout the National Academy of Sciences 2025 report, Artificial Intelligence and the Future of Work. The work is not prophecy, nor should it be mistaken for one of Silicon Valley’s breathless manifestos. It is, rather, a sober, meticulous reckoning with the ambiguous, disquieting, and often paradoxical forces unleashed by the rise of AI. Strategic, unvarnished, and disturbingly persuasive.

The authors are not alarmists, but their findings demand our attention. The committee, featuring renowned researchers such as Erik Brynjolfsson, David Autor, Tom Mitchell, and others, reminds us that AI, as a general-purpose technology, joins the ranks of electricity and the steam engine, tools that did not merely make us faster but rewrote the coordinates of productivity.

Future of Life Institute Newsletter: Where are the safety teams?
Future of Life Institute, Maggie Munro, May 1, 2025

Today’s newsletter is a nine-minute read. Some of what we cover this month:
🚫 AI companies are sacrificing safety for the AI race
🏗️ “Worldbuilding Hopeful Futures with AI” course
🤳 Reminder: Apply to our Digital Media Accelerator!
🗞️ New AI publications to share

OpenAI, Google Accused of New Safety Gaps

As the race to dominate the AI landscape accelerates, serious concerns about Big Tech’s commitment to safety are mounting.

Recent reports reveal that OpenAI has drastically reduced the time spent on safety testing before releasing new models, with the Financial Times reporting that testers, both staff and third-party groups, have now been given only days to conduct evaluations that previously would’ve taken months. In a double whammy, OpenAI also announced it will no longer evaluate its models for mass manipulation and disinformation as critical risks.

Google and Meta have also come under fire in the past few weeks for similarly concerning approaches to safety. Despite past commitments to public security, neither Google’s new Gemini Pro 2.5 nor Meta’s new Llama 4 open models were released with important safety details included in their technical reports and evaluations.

Asking Questions: The Inquisitive Instinct
The One Percent Rule, Colin W.P. Lewis, May 11, 2025

We are, as a species, compulsive askers. The toddler’s incessant “Why?” is not merely endearing, it is a form of epistemic insubordination against adult complacency. But somewhere between primary school worksheets and committee meetings, the question gets tamed. Neutered. Reduced to a polite gesture of clarification. If we are honest, most of us stop asking altogether.

AI now learns to ask. More precisely, it learns to prompt. Prompt engineering, the art of crafting inputs that elicit optimal outputs from large language models, shares uncanny DNA with complex question-asking. Both require clarity, creativity, context awareness, and the intuition to anticipate response structures. Raz and Kenett hint at this parallel: the better we train humans to ask, the better we will train machines to respond, and, potentially, to ask in turn. But this mutual bootstrapping carries its own paradox. As humans become more adept at crafting precise prompts for AI, an act that reflects the formulation of well-structured questions, they hone their own epistemic strategies.

In turn, AI systems respond with increasingly sophisticated outputs, some of which model, even if imperfectly, the heuristics of inquiry. The more we train these models to ask and answer, the more we are forced to refine what we mean by a ‘good’ question. And yet, the machine’s question does not arise from anxiety or awe. It does not grieve its ignorance. We do. That is the irremediable difference.

Announcing the Golden Gate Institute for AI
Second Thoughts, Steve Newman, May 8, 2025

And why “Am I Stronger Yet?” is now “Second Thoughts”

It’s impossible to make sense of what’s being written about AI. Pick any relevant topic, and you’ll find an equally confusing barrage of contradictory takes. There is an enormous amount of good work going into analysis of AI capabilities, impacts, and policy solutions. But these questions are so complex, evolving so rapidly, and tied into so many subjects of expertise, that it’s impossible to keep up.

This Impacts Everything
AI sits at an unfortunate intersection. It’s moving too quickly for expert consensus to emerge or laypeople to keep up, and it’s simultaneously very high stakes.

The potential applications of AI are so numerous they’re hard to even summarize. It could revolutionize health care, turbocharge the economy, and provide a personalized full-time tutor to every child… if we don’t cripple it with unnecessary restrictions. It could also disrupt labor markets, unleash a wave of bioterrorism, and enable surveillance states the likes of which Orwell could never have imagined… if we don’t find ways to head that off.

We’ll be focusing on four broad topics:

  • Timelines & Capabilities – how rapidly will AI development advance?
  • Economic Impacts – how quickly will AI be adopted, and what impact will this have on the economy? How can we ensure AI creates broad-based economic benefits?
  • Democracy and Governance – how must democratic and other key institutions adapt to the challenges and opportunities that AI brings?
  • Realizing Benefits – what can we do to unlock and facilitate adoption of beneficial uses of AI?


In a world where attention is fragmented and algorithms rule the content landscape, Chris Best and Hamish McKenzie are taking a radically different approach with Substack. Rather than chasing clicks, Substack focuses on a simple yet powerful idea: creators should own their work and make money directly from their audience through paid subscriptions. With over 5 million paid subscriptions and tens of millions of active readers, Substack has turned this model into a transformative force in media.

In this episode, Chris and Hamish unpack how they’re reshaping creator economics, navigating AI’s role in creativity, and enabling a new era for writers, from serialized fiction to short-form video. Their bet? That if content adds real value—whether it educates, entertains, or helps people earn—audiences will pay for it.

In our conversation, we explore:

  • How Substack grew from a simple newsletter tool to a multi-format media platform with 5 million+ paid subscriptions
  • Why the “soul connection” between creators and audiences is becoming more valuable in an AI-dominated world
  • The inside story of Substack’s clash with Elon Musk and how it ultimately strengthened their platform
  • Why the ceiling for great writing and culture might be much higher than we’re currently imagining
  • How Substack’s subscription model creates dramatically better economics for creators than ad-supported platforms
  • Chris’s “grand unified theory” for how AI will influence content creation and consumption
  • Why their short-form content isn’t just a “sticky trick” but a pathway to deeper engagement and discovery
  • The future of traditional prestige media brands

Timestamps

(00:00) Intro

(05:27) An overview of Substack and its current scale

(06:53) The origin story of Substack

(19:20) Finding the first believers

(24:17) Successful fiction on Substack, and why there’s potential for much more

(29:09) The different mediums available on Substack

(32:27) How Substack’s feed differs from social media

(37:33) The clash with Elon Musk and Twitter/X

(47:23) How Substack’s network helps creators succeed

(52:07) TikTok creators moving to Substack after the ban

(56:20) The future of paid media consumption

(58:24) Chris’s grand unified theory of AI and media

(1:07:07) Substack’s AI tools

(1:10:54) Why it’s hard to predict where AI is taking us next

(1:13:42) Advice for traditional media institutions

(1:16:48) Final meditations

Project Liberty May 6 News
Project Liberty, May 6, 2025

Tech regulation: Barrier or catalyst to innovation?
Does tech regulation hold back tech innovation?

At the AI Action Summit in Paris earlier this year, U.S. Vice President J.D. Vance said, “We believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off.”

Regardless of your politics, his comment points to fundamental questions in tech policy: Does regulation always hinder innovation? Or are there certain conditions where regulation can enable innovation?

In this week’s newsletter, we examine the relationship between tech regulation and tech innovation.

Welcome to the US onAir Network

The US onAir Network supports US citizens and democracy by bringing together information, experts, organizations, policy makers, and the public to facilitate greater engagement in federal, state, and local politics and more civil, positive discussions and collaborations on important issues and governance. 

The US onAir Network has a national hub at us.onair.cc and 50 state onAir hubs. To learn more about the US onAir Network, go to this post.

ABOUT US ONAIR NEWS

The first news items on US issues, government, and politics will start being displayed on the US onAir homepage around 9 am. Throughout the day, livestreamed events will appear under the “Latest” tab. The last news items will appear around 9 pm, concluding with PBS NewsHour’s full episode with links to each video clip within the hour show. Go to the Free News Platforms post to learn more about where we draw most of our US onAir news content from and how to find previous daily news posts.

US ONAIR SUBSTACK

US onAir has established a Substack at usonair.substack.com to provide subscribers a way to receive these news posts within a phone app and via email. Comments on news items can be made in the Substack post. OnAir members can comment in this onAir post and/or in specific related onAir posts. Substack posts are delivered by email around 9 pm Monday through Friday.

Discuss

OnAir membership is required. The lead moderator for the discussions is the US onAir Curator. We encourage civil, honest, and safe discourse. For more information on commenting and giving feedback, see our Comment Guidelines.

This is an open discussion on the contents of this post.
