Fathoming America’s plan to manage AI proliferation

The United States' announcement rescinding its Framework for AI Diffusion, a set of export controls on Artificial Intelligence (AI) technology unveiled earlier this year, has been widely welcomed. The Framework was considered counterproductive to both AI technology development and diplomatic relations. However, recent developments suggest that controls on AI are likely to persist, albeit in different forms.

A flawed blueprint

Earlier this year, during the final week of its tenure, the Joe Biden administration announced the AI Diffusion Framework. Combining export controls and export licences for AI chips and model weights, it effectively treated AI like nuclear weapons. Under the proposed framework, countries such as China and Russia were embargoed, trusted allies were favoured, and others were restricted in their access to advanced AI technology. The rationale for these rules was that computational power dictates AI capabilities: the greater the compute, the better the AI. Over the last decade, the compute used in advanced AI models has nearly doubled every 10 months. Following this logic, for the U.S. to preserve its lead, it needed to prevent adversaries from acquiring powerful compute while ensuring that AI development stayed within the U.S. and its close allies.

While export controls on AI hardware predated the Framework, they were not sweeping. The Framework aimed to tighten these controls and establish a predictable system to streamline regulatory processes and standardise conditions. However, imposing such sweeping restrictions, affecting adversaries and partners alike, produced many unintended effects and proved counterproductive.

The framework set a concerning precedent for technology cooperation with the U.S., especially for its allies. It signalled U.S. willingness to dictate how other nations conducted their affairs, incentivising them to hedge against U.S. actions. Consequently, U.S. allies had reasons to invest in alternatives to the U.S. ecosystem, pursuing their own strategic autonomy and technological sovereignty.

Additionally, the framework treated AI, a civilian technology with military applications, as if it were a military technology with civilian uses. Unlike nuclear technology, AI innovation is inherently civilian in its origins and international in scope. Confining its development geographically to the U.S. could prove counterproductive.

Finally, the system created an enduring incentive for the global scientific ecosystem to develop pathways to circumvent the need for powerful compute to make powerful AI, thereby undermining the very lever that the U.S. sought to employ. China’s DeepSeek R1 exemplifies this. Years of export controls spurred algorithmic and architectural breakthroughs, enabling DeepSeek to rival the best AI models from the U.S. with a fraction of the compute. Such trends can make export controls on AI chips an ineffective policy instrument.

It is for these reasons that the Trump administration revoked the AI Diffusion Framework. This is welcome news for India, which was not favourably placed under the framework. However, the underlying U.S. thinking and approach towards AI diffusion will likely persist, manifesting in other forms. The AI technology race is still on, and the U.S. intent to restrict Chinese access to AI chips still endures.

The possible replacement

Notwithstanding the rescinded Framework, the current U.S. administration has taken firm steps toward further preventing Chinese access to AI chips. For instance, in March 2025, the administration expanded the scope of the existing export controls and added several companies to its entity list (blacklist). It has also released several new guidelines to strengthen the enforcement of these controls.

New provisions are reportedly under consideration, such as on-chip features to monitor and restrict the usage of AI chips. These could include rules at the hardware level limiting chip functionality or restricting certain use cases. Recently, U.S. lawmakers introduced new legislation mandating built-in location tracking for AI chips to prevent their illicit diversion into China, Russia and other countries of concern. In effect, these measures seek to enforce the goals of the AI diffusion framework technologically rather than through trade restrictions.

The related concerns

Such measures are problematic in their own way. New concerns related to ownership, privacy and surveillance will proliferate. While malicious actors might be sufficiently motivated to circumvent these controls, legitimate and beneficial use by others could be inadvertently discouraged. Such developments undermine user autonomy and create trust deficits. Just like the old framework, this will raise concerns about the loss of strategic autonomy for any nation buying AI chips. Yet again, both adversaries and allies will feel compelled to hedge against their reliance on the U.S. AI ecosystem and invest in alternatives.

The rescission of the AI Diffusion Framework represents a notable policy reversal. Yet, it appears to be more a change in tactics than a fundamental shift in the U.S. strategy to manage AI proliferation. Should these technologically driven control measures gain traction in U.S. policy discourse and be implemented, they risk replicating the negative consequences of the original AI Diffusion Framework. Ultimately, should this path be pursued, it would indicate that the crucial lessons from the Framework and its eventual withdrawal have not been fully assimilated, potentially jeopardising the very U.S. leadership in AI that it ostensibly seeks to protect.

Rijesh Panicker is a Fellow at the Takshashila Institution. Bharath Reddy is an Associate Fellow at the Takshashila Institution. Ashwin Prasad is a Research Analyst at the Takshashila Institution

Published – June 27, 2025 12:08 am IST
