
Regulation and the Case for Permissionless Innovation



By the time most policymakers finish saying “regulation,” innovators have already built the future. That’s the core message from Adam Thierer, senior fellow at the R Street Institute, who joined Andrew Langer on Episode 128 of the Federal Newswire’s Lunch Hour Podcast. In a wide-ranging and engaging conversation, Thierer made the case for why America must stick to a model of permissionless innovation—and reject the risk-averse, bureaucratic mindset that has calcified progress in much of the developed world.


Thierer’s philosophy is clear: “Permissionless innovation is about freedom,” he told Langer. It’s about letting entrepreneurs and inventors build without having to ask permission first. It’s a model that has powered American technological dominance—and one that stands in stark contrast to the European precautionary approach, where bureaucrats act as gatekeepers of invention.


As Thierer explains, most regulatory policy debates boil down to one question: “Do you believe in letting people try new things until proven harmful, or do you require proof of safety before anything new is allowed?” In Europe, the answer often defaults to the latter. In America—at least historically—the answer has been the former. That difference, Thierer argues, is why the U.S. continues to lead in innovation.


“What Could Possibly Go Wrong?”

Langer pushed back with a common critique: what about when innovation creates serious risk? What if someone straps a flamethrower to a robot dog?


Thierer acknowledged that not every form of permissionless innovation is risk-free. In fact, he builds a framework around when precaution makes sense. If a potential harm is “highly probable, physical in nature, immediate, irreversible, and catastrophic,” he concedes, regulators may have a case for intervening early. “But that should be the exception, not the rule,” he said.


In other words, be serious about actual risks—but don’t invent theoretical ones to stop progress.


The real issue, Langer noted, is that we’re terrible at comparative risk assessment—we often fail to ask, “What happens if we don’t innovate?” Take the example of ethylene oxide, a chemical used to sterilize medical equipment. The EPA has pushed to ban it over long-term health risks, but without it, countless medical procedures would become far riskier, if not impossible. That kind of regulatory tunnel vision can cost lives.


The balance, Thierer insists, is simple: “Regulate when there’s real harm, not hypothetical fear.”


The War on Computation

Thierer and Langer spent a good chunk of the conversation digging into the hot topic of artificial intelligence. Thierer has been tracking the explosion of AI-related bills—over 820 were introduced in the U.S. in the first 80 days of 2025 alone. “That’s 12 bills a day,” he warned. “In my 34 years of policy work, I’ve never seen anything like it.”


Those bills fall into three categories:


  • Existential Risk – Attempting to regulate AI models or computing power itself (what Thierer calls a “war on computation”).

  • Conduct and Fairness Regulation – Focusing on supposed bias, fairness, or discrimination in AI algorithms.

  • Sector-Specific Rules – Governing how AI is used in areas like health care, education, elections, or policing.


Each presents its own dangers to innovation, but the first—model-level regulation—is particularly chilling. “The more powerful a system is, the more they want to clamp down,” Thierer noted. But that’s a flawed premise. Power doesn’t equal danger. “If we treat every hypothetical risk as inevitable, then no innovation ever gets off the ground.”


The real-world costs of this approach are staggering. Thierer compared it to the overregulation of nuclear power, where excessive precaution strangled the growth of one of the cleanest energy sources available. AI, if bottled up now, risks meeting the same fate.


Bottling Up Progress

Worse, Thierer warned, much of this regulatory push is being driven by incumbent firms looking to protect themselves from competition. “Old players love regulation—they can afford it, and it locks out new entrants.” It’s regulatory capture dressed up as safety.


That same dynamic played out in the early internet era, where old telecom monopolies tried to use policy to stifle emerging digital platforms. Fortunately, the internet was “born free,” as Thierer puts it. Now, he argues, we must ensure AI gets the same treatment.


And the stakes are high. “AI is not just another app,” he said. “It’s the most important general-purpose technology of our time.” Curbing it would be like banning electricity or the printing press.


Everything New Is Feared… Until It’s Demanded

Part of the regulatory hysteria around AI, Thierer believes, stems from cultural conditioning. From The Terminator to Black Mirror, the public has been taught to fear new technologies. “We’ve trained ourselves to see every innovation as dangerous by default,” he said.


That mindset is a problem. “Yes, some tech can go wrong. But if we regulate based on worst-case hypotheticals, we lock out all the best-case realities,” Thierer warned. He cited medical breakthroughs, traffic safety improvements, and rapid diagnostics as just a few of AI’s upside potentials.


Langer added that this negativity bias even infects how we view the information age itself. In an era where millions have access to near-infinite knowledge at little to no cost, critics still pine for the “good old days” of local news monopolies and gatekeepers. “It’s nostalgia for a world that never really existed,” Thierer said.


The Productivity Paradox—and the Value of Remote Work

The conversation shifted toward workplace regulation, specifically the remote work revolution. COVID gave employers a real-world experiment in remote productivity—and many found it worked just fine. But something was lost, too.


Thierer acknowledged the tradeoffs: less spontaneous collaboration, fewer mentoring moments. “Some of the best ideas I’ve ever worked on came from hallway conversations,” he said. Still, he believes flexible work is here to stay—and for the better.


The key, again, is choice and freedom. Don’t mandate remote work, but don’t ban it either. Let people and businesses decide what works best.


The Fight for Smarter Regulation

Thierer has worked across the free-market intellectual landscape—from Heritage to Cato to Mercatus to R Street. Each has its own style and strengths. At R Street, he focuses on tech policy and the broader philosophy of keeping government out of the way of good ideas.


And his personal story is a classic example of serendipity in the policy world. In the early 1990s, no one at Heritage even knew what the internet was. But when Thierer pitched it as a new policy frontier, they let him run with it. “I was in the right place at the right time,” he said.


Now, he’s working to make sure the next generation of innovators gets the same freedom to run.


Conclusion: A Warning and a Challenge

In the end, Thierer’s message is a simple one. Freedom works. Fear doesn’t. Regulation must be grounded in real harms, not imagined ones. And above all, innovation must remain the default—not the exception.


As AI, biotech, and other transformative technologies emerge, America must resist the temptation to regulate first and think later. Because what’s at stake isn’t just progress—it’s our ability to solve the problems that matter most.


And that, more than anything, is worth defending.
