What happens when the world’s most powerful political leaders can’t name the company powering the AI revolution? During a high-profile speech in Washington, President Trump confessed, “What the hell is Nvidia? I’ve never heard of it before,” moments after threatening to break up the company over its market dominance. The admission, delivered at the unveiling of the new U.S. AI Action Plan, laid bare a jarring disconnect between the halls of political power and the technical engines powering global innovation.

Nvidia’s rise is seismic. Within two years, the firm’s market capitalization quadrupled from $1 trillion to $4 trillion, riding its dominance of AI hardware and the bottomless appetite for computational power across industries. This explosive expansion is not a matter of chance but the culmination of decades of engineering innovation, specifically the development of GPU architectures for parallel processing and AI acceleration. At the center of Nvidia’s success are CUDA cores and, more recently, Tensor Cores, which allow GPUs to execute enormous numbers of calculations simultaneously. Tensor Cores, highly specialized units that debuted with the Volta architecture, have been refined across the Turing, Ampere, and now Hopper generations, with each generational jump bringing order-of-magnitude speedups in AI training and inference. Tensor Cores are tuned for high-throughput tensor calculations, performing operations such as matrix multiplication and accumulation at rates that ordinary GPU cores cannot match.
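The multiply-and-accumulate operation described above can be sketched in a few lines. This is a minimal numpy analogy, not Nvidia’s hardware path: a Tensor Core performs a fused multiply-accumulate on small matrix tiles, D = A·B + C, taking half-precision inputs and accumulating at higher precision.

```python
import numpy as np

def tile_mma(a_fp16, b_fp16, c_fp32):
    """One Tensor-Core-style tile multiply-accumulate: D = A @ B + C."""
    # Widen the half-precision operands before multiplying, mirroring how
    # Tensor Cores accumulate float16 products in a float32 accumulator.
    return a_fp16.astype(np.float32) @ b_fp16.astype(np.float32) + c_fp32

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)).astype(np.float16)   # low-precision operands
B = rng.standard_normal((4, 4)).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)               # full-precision accumulator

D = tile_mma(A, B, C)
print(D.dtype)  # float32
```

The hardware performs this entire tile operation in a single fused step, which is why Tensor Cores reach throughputs that issuing individual multiply and add instructions on ordinary cores cannot.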
The technical acumen behind Nvidia hardware is not simply a tale of more cores or faster chips. Tensor Cores are designed for mixed-precision training, for instance, using lower-precision math for speed and higher-precision math for accuracy. The outcome: deep learning models can be trained faster, with less energy, and at larger scales. In the Ampere generation, the A100 GPU offers memory bandwidth of up to 1,555 GB/s, while the newer Hopper-based H100 delivers up to 6x the performance of its predecessor, unlocking new frontiers for large language models and generative AI. The H100 also introduces support for FP8 precision formats, which Nvidia says can boost large language model performance by as much as 30x over the previous generation.
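Why does the accumulator’s precision matter in mixed-precision training? A small, self-contained Python demonstration (illustrative only, using simple summation rather than a real training workload): summing 4,096 ones in a float16 accumulator stalls at 2048, because float16’s representable values above 2048 are spaced 2.0 apart, so adding 1.0 rounds away to nothing. A float32 accumulator reaches the exact total.

```python
import numpy as np

ones = np.ones(4096, dtype=np.float16)

# Naive approach: accumulate entirely in float16.
acc16 = np.float16(0.0)
for x in ones:
    acc16 = np.float16(acc16 + x)   # stalls once the sum reaches 2048

# Mixed-precision approach: float16 inputs, float32 accumulator.
acc32 = np.float32(0.0)
for x in ones:
    acc32 = np.float32(acc32 + np.float32(x))

print(acc16, acc32)  # 2048.0 4096.0
```

This is the essence of the trade-off: low-precision operands keep memory traffic and arithmetic cheap, while the wide accumulator prevents the small per-step contributions (in training, gradient updates) from being silently lost.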
But as Nvidia CEO Jensen Huang was being hailed from the podium, Trump’s ignorance of the company highlighted a more profound policy dilemma: how to regulate and govern technologies that move faster than politics can follow. The just-unveiled AI Action Plan tries to close the gap, outlining three pillars: pushing the pace of AI innovation by streamlining regulatory red tape, creating strong domestic AI infrastructure, and increasing the spread of American AI technologies among allies. The plan is intended to spur a “new industrial renaissance,” simplifying permits for data centers and semiconductor fabs, and initiating national efforts to develop the workforce required for an AI-driven economy. The federal government hopes to be both a catalyst and a customer, encouraging an atmosphere of permissiveness for AI development while maintaining values of free speech, security, and worker empowerment.
But the plan is not merely about growth. It also reins in strategic exports, particularly advanced semiconductors. The United States has spearheaded a worldwide effort to limit China’s access to state-of-the-art chips and production equipment, using the Foreign Direct Product Rule to exert control over technologies assembled anywhere with American tools or expertise. These export controls have reconfigured the international supply chain, leading firms such as Nvidia to create “nerfed” chips for China and prompting Beijing’s retaliatory efforts, such as SMIC’s indigenous 7nm chips and Huawei’s return to advanced silicon: SMIC was already making and selling 7nm chips no later than July 2022, and possibly even as early as July 2021, with no EUV machines.
The effects of these controls are far-reaching: U.S. companies have reported statistically significant drops in revenue, profitability, and market capitalization after the introduction of new regulations. The PHLX Semiconductor Sector Index, for example, dropped 8 percent following the announcement of the semiconductor export controls on October 7, 2022, and impacted U.S. semiconductor companies experienced a 2.5 percent decline in stock market valuation that lasted for at least 20 days, equating to a combined loss of $130 billion. While some policymakers hold that these controls are essential to preserving a technological lead, others warn of a “death spiral” if declining sales erode R&D spending and long-term competitiveness.
The tech arms race is now irrevocably tied to geopolitics. As America fortifies its semiconductor foundation, China is investing tens of billions in its own chip sector, pursuing creative workarounds and learning to do more with less. The global balance of power is increasingly determined not by who possesses the most powerful algorithms but by who owns the hardware, and the supply chains, that make those algorithms possible.
Trump’s blunt comments (“I figured we could go in and we could sort of break them up a little bit, get them a little competition, and I found it’s not easy in that business”) reflect the intimidating breadth of the AI hardware world. The truth is that Nvidia’s dominance is built on decades of relentless engineering, ecosystem lock-in, and the unforgiving economics of scale. As the world’s political leaders wrestle with these facts, the stakes could not be higher: the future of AI innovation, national security, and economic power all rest on choices made at the nexus of policy and technology.

