As artificial intelligence (AI) continues to evolve, a key challenge has emerged: the convergence of two critical constraints, governance and physical limitations. On one hand, policymakers are pushing for regulation of issues like deepfakes, model risks, and accountability. On the other, industries are grappling with physical barriers such as semiconductor supply shortages, rising compute costs, and stress on energy grids. The result is a growing tension between the need for rules and the reality of what infrastructure can actually support.
While much of the public debate around AI focuses on the behavior of AI models, how they are used, and the risks they pose, there is another layer of complexity. The strategic challenge lies in the supply chains that support AI technologies: advanced chips, memory components, packaging, and the geopolitical vulnerabilities that arise from concentrated manufacturing capacity. AI's reliance on these physical elements means that when compute power becomes scarce or expensive, the question of "who gets to build" AI becomes just as critical as "what gets built."
The implications of this new reality are already nudging regulation toward a more pragmatic approach. It is no longer just about banning harmful outcomes or restricting certain uses of AI. Instead, we are seeing a shift toward more nuanced policies built on transparency and accountability: disclosure requirements, audit standards, and provenance tracking for synthetic media. Sectors such as healthcare, finance, and elections will also face rules tailored to the particular challenges AI presents in each area. These evolving regulations reflect the growing complexity of AI governance, moving beyond simple restrictions toward oversight and accountability throughout the AI lifecycle.
This shift is also forcing companies to rethink their product designs. As AI becomes more integrated into every aspect of society, the pressure is on to design systems that are not only powerful but also compute-efficient. Companies are increasingly optimizing their systems to make the best use of limited compute resources, whether by developing new algorithms or creating more efficient hardware. In this environment, those that can maximize efficiency will likely gain an edge in both cost and performance, making efficiency a key driver of innovation and a genuine competitive advantage in the AI market.
The next phase of AI policy is likely to look very different from what we've seen so far. It will move away from broad speeches and general principles toward something more akin to industrial engineering: standards bodies, procurement rules, and infrastructure permitting. As the AI era advances, governance will become more about building the frameworks and infrastructure needed to support AI development and use. This could involve establishing industry-wide standards for hardware and software, creating rules for how AI systems are procured and deployed, and ensuring that the physical infrastructure, such as data centers and energy grids, exists at the scale AI development demands.
In essence, AI governance is increasingly being shaped by the physical realities of the technology. Just as the success of the internet was determined by the global deployment of fiber optic cables and data centers, the future of AI will depend on the availability of advanced chips, energy resources, and secure supply chains. This is a shift in how we think about governance: rather than being a purely regulatory or policy-driven issue, AI governance is becoming a matter of engineering and infrastructure. The question of how to regulate AI is now deeply intertwined with the practicalities of building and maintaining the infrastructure that powers it.
In conclusion, as AI continues to evolve, the challenge of governance is becoming more complex and more physical. The conversation is shifting from abstract discussions about model behavior to practical concerns about how AI can be built, scaled, and regulated within the constraints of existing physical systems. As the AI era progresses, governance will increasingly look like industrial plumbing, focused on standards, infrastructure, and the physical realities of supporting this transformative technology. Policymakers will need to balance the need for regulation against the physical and economic constraints of the technology, ensuring that AI can grow responsibly and sustainably without sacrificing innovation or security.