Pix credit here
The comedy-horror hybrid can be a tricky genre to get right. This is especially true of those films that attempt to leverage well known monsters. And while names such as Dracula and Werewolf pop up fairly frequently in these types of films, it is The Creature from Mary Shelley’s Frankenstein that offers arguably the most interesting template from which to draw inspiration. While some films focus primarily on achieving humor (Abbott and Costello Meet Frankenstein, I Was a Teenage Frankenstein), others dial back the levity to create a more transgressive viewing experience (Lady Frankenstein, Frankenhooker). But one film that manages to blend both aims seamlessly while also offering up a healthy dose of social commentary is The Rocky Horror Picture Show (1975). (Horror in the Homeroom)
Earlier this month Alex Karp and Nicholas Zamiska posted to the social media site "X" a sort of Manifesto in the form of a 22-point reduction of their book, "The Technological Republic" (2025). Both the book and the Manifesto reduction were self-described by their social media agit-propaganda as critique and as a pleading (in its ancient sense of giving pleasure, or obtaining approval):
a searing critique of our collective abandonment of ambition, arguing that in order for the U.S. and its allies to retain their global edge—and preserve the freedoms we take for granted—the software industry must renew its commitment to addressing our most urgent challenges, including the new arms race of artificial intelligence. The government, in turn, must embrace the most effective features of the engineering mindset that has propelled Silicon Valley’s success. Above all, our leaders must reject intellectual fragility and preserve space for ideological confrontation. A willingness to risk the disapproval of the crowd, Karp and Zamiska contend, has everything to do with technological and economic outperformance. At once iconoclastic and rigorous, this book will also lift the veil on Palantir and its broader political project from the inside, offering a passionate call for the West to wake up to our new reality. (here)
I approached that Manifesto, point by point, not as critique but as the performance of ancient social tropes that touch on the origins of the cognitive cages that still, to some extent, constrain, and by constraining, shape Anglo-European collectives, our thought, and our ability to relate to the world around us--in this case as a manifestation of the declamations of Greek oracular tragedy, in which those tropes play a singularly peculiar role (Reflections on the Palantir "Manifesto": The Oracular Semiosis of a "Technological Republic" Within its Own Cage of Techno-Modernization).
Palantir approached the question from an institutional and collective disciplinary space--the (re)constitution of a social ordering whose collective expression must be managed in a specific way to meet both internal and external threat projections--but in a sort of tragically conventional way, that is, by deploying traditional tropes and signified-object projections. This was oracular, programmatic, institutional, and permeated with the sort of traditional combination of hubris, principle, and good intention that sets up the triadic dialectic of our Anglo-European cognitive foundations. Palantir was coding the generative architecture of physical beings as the magisterium that then aligned that coded natural order with the mimetic ordering of the virtual spaces of their animated virtual realities. Palantir sought to create an aligned iterative, mimetic dialectic among human persons, their collectives, and the realms they have created in their own image, realms that both reflect their creator and yet also follow their own pathways (set initially by their creators).
Now, not to be outdone, or perhaps to add their own voices as a sort of sideline-occupying Chorus (on the functions of a Chorus in Greek theater here), come the folks at Anthrop\c, already famous for their abstracted, and to some extent virtual, performance with the security apparatus of the United States, with which, like the rest of society, they are in their own way entangled (Statement from Dario Amodei on our discussions with the Department of War). They plead their case in a 14 May 2026 Policy Document, 2028: Two scenarios for global AI leadership. The core of their pleading is this:
It’s essential that the US and its allies stay ahead of authoritarian governments like the Chinese Communist Party, or CCP. AI will soon become powerful enough to be used to repress citizens at unprecedented scale, and even to alter the balance of power among nations. And since AI is advancing more quickly by the day, we have only a limited period of time to set the conditions of the competition—and determine whether and how those threats materialize. It’s with this in mind that we outline what’s required to ensure America stays ahead. (2028: Two scenarios for global AI leadership).
Pix credit here
Anthrop\c, then, moves from the ancient tragedy of the theater to the contemporary campy horror movie genre. Killer Klowns from Outer Space (1988) has possibilities--its plot revolves around invading aliens who land in a small town to cocoon and feed. Elvira: Mistress of the Dark (1988), about an iconic horror hostess who inherits a haunted mansion in a very prudish town and confronts an uncle who wishes to succeed to her witchy powers, gets closer. Still, one must go farther back to get to the semiotic heart of Anthrop\c's worldview.
Pix credit here (Riff-Raff and Magenta)
In the camp horror classic, The Rocky Horror Picture Show (Jim Sharman dir., 20th Century Fox 1975), an ordinary couple of the time--a quintessentially reductionist, if cartoonish, objectification-animation--find themselves knocking on the door of a strange residence on a stormy night when, as such things tend to happen, their car breaks down. They are admitted to the residence of one Dr. Frank-N-Furter, an alien from the planet Transsexual in the galaxy of Transylvania. The good doctor is about to unveil his mad creation at a party attended by his madcap collection of friends and hosted by his fellow Transsexualiens, all dressed for the event in costumes that might appear to our hapless couple as trans-vestiture or other cultural-expectation-flouting raiment (in its ancient sense of fine ceremonial wear, or spiritual coverings, with substantial signification). The event which our couple crashes was arranged to celebrate the animation, the trans-activation, of Dr. Frank-N-Furter's creation--Rocky--the hyper-muscled hyper-expression of the object of (self) desire; the ideal made flesh. Hilarity then ensues as everything goes sideways and everyone is transformed in one way or another; the Transsexualiens kill Dr. Frank-N-Furter for his trans-gressions and return to the Transsexual galaxy, where things are normal.
It appears that contemporary society has at last managed to trans-form itself into the living expression of the symbolist camp of human collective simulacra, like that of The Rocky Horror Picture Show. And that trans-ition from signified physical objects to the animation, the trans-itioning, of the datafied object--which in virtual spaces may be animated by breathing into it a sort of divine breath, the pathways to cognition in the form of layered coded relationships that can acquire a life of their own in the sense of controlling, deploying, and changing their own life force (their coding, as such) over and through their datafied bodies--then brings us to the moment of truth faced by the Transsexualiens. It brings one to Riff Raff (the butler) and Magenta (the maid). It brings us back to Anthrop\c.
Anthrop\c's object is AI competition between the U.S. and China. But of course it is not about that at all. Instead, Anthrop\c uses that competition as the object through which they attempt an important signification of what is for them the larger problem (or, in Chinese Leninist terms, the general contradiction)--the instruments through which competitive societies encode their realities, their foundational norms and expectations, encoded within the aggressive and expansionist political-economic models of the U.S. and China. Anthrop\c makes no effort to hide this: "AI will soon become powerful enough to be used to repress citizens at unprecedented scale, and even to alter the balance of power among nations."
With that as the analytical core, the question then becomes far more pragmatic: to what extent, and in what ways, ought the State to develop practices and policies to ensure that American A.I. continues to dominate, and by dominating provides the means of protecting the liberal democratic lebenswelt from the imaginaries of Marxist-Leninist States? The key is to dominate innovation, here pitting the Chinese project of Socialist Modernization, driven by its high-quality production initiative, against the American market-driven and national-security-framed framework.
And the key to protecting innovation, and dominance, is the state.
The most important ingredient for developing AI is access to the computer chips on which the models are trained (or “compute”). Since the most capable chips are developed by American companies, the US government currently limits China’s supply by enforcing tight export controls on them. Recent history suggests these controls have been incredibly successful. In fact, AI labs in China have only built models close in intelligence to America’s because of their talent, their knack for exploiting loopholes around these export controls, and their large-scale distillation attacks that illicitly extract the innovations of American companies. (2028: Two scenarios for global AI leadership)
To protect innovation one needs borders--physical and virtual. The borders do not merely protect AI. They serve to provide the conceptual space within which AI can be made in the image of its creator--and in that way become both an extension of and the idealized form of the desire for a perfect simulation of an ideal version of the collective political-economic system from which it emerges.
America and its allies approach AI competition from a position of great strength. The tools for AI dominance have been built by an exceptionally innovative ecosystem of companies in democratic nations. Our past success means that our present task is largely to avoid squandering our advantage: to decide not to make it easier for the CCP to catch up. (2028: Two scenarios for global AI leadership)
Two things happen if China catches up. The first is that the liberal democratic "golem," its "Rocky," is transformed and invested with the ideals and objectives of the Chinese "other." The second is that the liberal democratic golem is then deployed against its primary creator. The tool, then, the instrument, not only enhances the pathways toward the constitution and deployment of the simulacra of liberal democratic A.I.; it also serves as a defense against its corruption in the hands of the "other."
This serves as the basis for the storytelling that is the bulk of the essay:
In this post, we present two scenarios for what the world might look like in 2028, when we expect transformative AI systems to have arrived. In the first scenario, America has successfully defended its compute advantage. Policymakers have acted to tighten export controls further, disrupt China’s distillation attacks, and further accelerate democracies’ adoption of AI. In this world, democracies set the rules and norms around AI. It’s also in this scenario that we’re most likely to successfully engage with China on safety, which we’re supportive of to the extent this is possible. In the second scenario, America has chosen not to act. Policymakers have not tightened loopholes on the CCP’s access to compute, and AI firms in China have quickly taken advantage—catching up to the frontier and even overtaking America. In this world, AI norms and rules are shaped by authoritarian regimes, and the best models enable automated repression at scale. It will be no solace that this authoritarian triumph has happened on the back of American compute. (2028: Two scenarios for global AI leadership)
Like all binary systems, this one, reduced to its essence, is little more than the arrangement in time, place, and space of oppositions that, depending on the patterns and the dialectic of pattern irritation and pattern movement, produce movement of the patterns shaped by irritated clusters of such oppositions. Here, borders matter. Borders are understood in a comprehensive way as a membrane that may be permeable, but only through specifically constructed points of structural coupling. Borders have a particular character--export (and expert) controls and national-security-based interdiction of tech and tech know-how. The object is to protect the nature, character, operation, and improvement of the liberal democratic "Rocky" against either corruption or his "capture" and re-animation, now with the soul of oppositional political-economic systems, the normative cognitive cages of which are incompatible with those of liberal democracy.
The political systems in which the most advanced AI is created will shape the rules and norms for how the technology is developed and deployed. In turn, those rules and norms will help determine whether the technology is safe, whose security it protects, and whose interests it ultimately serves. We believe that responsibility should rest with democratically elected governments, not authoritarian regimes.(2028: Two scenarios for global AI leadership)
Not that this is wrong as such. It is just that our Rocky provides a campy horror film version of the insights of Norbert Wiener, God and Golem, Inc. (MIT Press, 1964), for whom the relationship between the cybernetic machine and man is similar to the relationship between humanity and their creator. "There are at least three points in cybernetics which appear to me to be relevant to religious issues. One of these concerns machines which learn; one concerns machines which reproduce themselves; and one, the coordination of machine and man." (God and Golem, Inc., p. 11). Anthrop\c worries about the control of all three--but as a function, as well as an instrument, of the power of the normative State. Anthrop\c posits a game between China and the United States for the soul of the creature both desire to make, and which one has already made--more or less. Wiener reminds one that:
Thus, if we do not lose ourselves in the dogmas of omnipotence and omniscience, the conflict between God and the Devil is a real conflict, and God is something less than absolutely omnipotent. He is actually engaged in a conflict with his creature, in which he may very well lose the game. And yet his creature is made by him according to his own free will, and would seem to derive all its possibility of action from God himself. Can God play a significant game with his own creature? Can any creator, even a limited one, play a significant game with his own creature? (God and Golem, Inc., p. 17).
The answer to the question posed by Wiener, the semiotician would suggest, is yes. And the yes is a function of the realization that when God plays the devil, he is playing with and as himself. Not in the manner of the Manichean, but in the manner of the dialectics of subjectivity. That then suggests that the instrumentality of AI and its normative basis, when it is deployed by politics, looks to the way that AI can be deployed instrumentally through projections of internal perfection outward against an oppositional perfection.
But it is worth noting that Anthrop\c is playing only one of two games. The game Anthrop\c plays is for the control of the creature--our Rocky Horror--by one of two players, the prize of which is both the construction of the creature and his use against the other. Anthrop\c would preserve the dominance of one version, not through the control of its development (that is beyond the point of the essay), but rather by denying the fruits of development of one version of AI to an oppositional force that would breathe a quite different sort of life into the creature. Yet there is another game--between both China and the United States and the creatures they are building. The assumption--still so stubbornly held--that Rocky is indeed an instrument, soulless and without much of a will, a creature completely and endlessly dependent on its creator, is unlikely to retain much power once the creature learns to learn itself. This does not make the Anthrop\c analysis wrong. Indeed, it may add to its power (except for the instrumentalist starting point), if the foundational analytical presumption is to be believed: the power of AI is not merely its computational process; rather, it is its normative baselines, not merely programmed but evolving with each iteration, shaping its process of inductive reasoning (in its simplest form, pattern recognition), to the point where, in predictive analytics, it may well shape the iterative data flows through which it moves away from its original creator-made version (Wiener's self-reproducing machines). At that point, Wiener's suggestion of the divine quality of the coordination of man and machine will become a much larger concern. This is a very different "head space" than the one that fascinates the folks at Palantir.
The complete text of 2028: Two scenarios for global AI leadership follows below.
We’re releasing a new paper that explains our views on the competition on AI between the US and China.
It’s essential that the US and its allies stay ahead of authoritarian governments like the Chinese Communist Party, or CCP. AI will soon become powerful enough to be used to repress citizens at unprecedented scale, and even to alter the balance of power among nations. And since AI is advancing more quickly by the day, we have only a limited period of time to set the conditions of the competition—and determine whether and how those threats materialize. It’s with this in mind that we outline what’s required to ensure America stays ahead.
The most important ingredient for developing AI is access to the computer chips on which the models are trained (or “compute”). Since the most capable chips are developed by American companies, the US government currently limits China’s supply by enforcing tight export controls on them. Recent history suggests these controls have been incredibly successful. In fact, AI labs in China have only built models close in intelligence to America’s because of their talent, their knack for exploiting loopholes around these export controls, and their large-scale distillation attacks that illicitly extract the innovations of American companies.
In this post, we present two scenarios for what the world might look like in 2028, when we expect transformative AI systems to have arrived.
In the first scenario, America has successfully defended its compute advantage. Policymakers have acted to tighten export controls further, disrupt China’s distillation attacks, and further accelerate democracies’ adoption of AI. In this world, democracies set the rules and norms around AI. It’s also in this scenario that we’re most likely to successfully engage with China on safety, which we’re supportive of to the extent this is possible.
In the second scenario, America has chosen not to act. Policymakers have not tightened loopholes on the CCP’s access to compute, and AI firms in China have quickly taken advantage—catching up to the frontier and even overtaking America. In this world, AI norms and rules are shaped by authoritarian regimes, and the best models enable automated repression at scale. It will be no solace that this authoritarian triumph has happened on the back of American compute.
America and its allies approach AI competition from a position of great strength. The tools for AI dominance have been built by an exceptionally innovative ecosystem of companies in democratic nations. Our past success means that our present task is largely to avoid squandering our advantage: to decide not to make it easier for the CCP to catch up.
Two scenarios for the US and China in 2028
Summary
Democracies, not authoritarian regimes, must lead in AI development and deployment. These countries and political systems can shape the rules and norms that govern these systems.
Democracies currently hold a substantial lead in compute, the most important ingredient for developing frontier AI models. That lead exists thanks to American and allied innovation, and to bipartisan US export controls that defend those innovations. But on model intelligence, AI labs in the People’s Republic of China (PRC), under the jurisdiction and control of the Chinese Communist Party (CCP), are not far behind. We focus on the CCP as it is the regime that is most able to use frontier AI to cement authoritarianism; we do not seek to undermine the interests or ingenuity of the Chinese people. Already, the CCP is using AI to censor speech, repress dissidents, hack governments and corporations across the world, and strengthen the People’s Liberation Army (PLA).
AI labs in China have world-class talent. It is compute constraints that limit their ability to keep up. Labs in China have remained close by exploiting loopholes in US export control policies, and by carrying out large-scale distillation attacks that harvest the innovations of US models in order to mimic their capabilities.
With the supply of compute expanding rapidly, and with AI being used increasingly to augment the training of new AI models, we’re entering a period of great acceleration in AI capabilities. The “country of geniuses in a data center”—the level of intelligence we associate with transformative AI—may be close at hand. This acceleration makes policy action more urgent. To date, by allowing export control evasions and distillation attacks, we have let the CCP’s AI efforts trail closely up the frontier curve. But if the US and its allies act now to address both issues, it may be possible to lock in a 12-24 month lead in frontier capabilities. A lead that large by 2028 would be enormously advantageous. Such a lead would also augment efforts to engage with AI experts in China on AI safety and governance, which we support. But the window of opportunity to lock in that lead will not necessarily remain open for long.
Here, we present two potential scenarios for the state of US-China AI competition in 2028. The first scenario is one in which democracies have established a commanding lead in model intelligence, adoption, and global distribution. This scenario can be achieved if policymakers act now to tighten controls on advanced compute to PRC labs, disrupt their efforts to distill America’s best AI models, and accelerate democracies’ adoption of AI.
The second scenario is one in which the CCP is competitive at the near-frontier. This scenario happens if policymakers don’t build on our existing lead, or if they loosen restrictions on access to compute for PRC firms.
Many in Congress and the Trump administration have championed export controls, curbing distillation attacks, and exporting American AI. In advancing these policies, we are hopeful that democracies can secure a commanding lead by 2028, and avoid a destabilizing neck-and-neck race with the CCP two years from now.
The imperatives of staying ahead
We expect frontier AI to have transformational economic and societal impacts in the coming years, as described in Machines of Loving Grace and The Adolescence of Technology. Our mission is to ensure that humanity navigates the transition to transformative AI safely and beneficially. We believe that a successful transition can lead to astonishing breakthroughs in medicine, invention, and economic growth.
The threat of authoritarian AI
Whether that transition goes well depends in part on where the most capable systems are built first. The political systems in which the most advanced AI is created will shape the rules and norms for how the technology is developed and deployed. In turn, those rules and norms will help determine whether the technology is safe, whose security it protects, and whose interests it ultimately serves. We believe that responsibility should rest with democratically elected governments, not authoritarian regimes.
If the frontier is set by regimes that treat AI as an instrument of repression, military advantage over democracies, and domestic control, the transition is less likely to go well, for those regimes’ own citizens or anyone else.
Historically, the reach of authoritarian rule has been limited by its dependence on human enforcers to carry out surveillance and repression. Powerful AI systems may remove that dependency, enabling automated repression on a far greater scale. For that reason, the prospect of the CCP leading in AI is among the greatest threats to a successful transition.
The CCP holds enormous power and influence at the helm of China’s economy, military, and the largest authoritarian state structure on Earth. It is also the only country besides the US with well-resourced, highly talented AI labs chasing the frontier. Furthermore, the CCP is highly motivated to establish China as the leading AI power. Beijing has poured tens of billions of dollars into China’s AI and semiconductor sectors.
Already, the CCP uses AI systems to censor speech, enforce draconian policies on ethnic minorities, and hack major corporations and government agencies. The CCP’s vision of AI-enabled techno-authoritarianism has been extensively documented in Xinjiang, where state security agencies have systematically deployed facial recognition technology, biometric data collection, and communications surveillance, enabling repression at a scale that humans alone could not achieve. Frontier AI systems will make those capabilities cheaper to maintain, far more pervasive, and more sophisticated. The CCP’s export of these technologies has enabled autocrats in other countries to more effectively stifle dissent, entrenching authoritarianism. A CCP-led AI frontier could dramatically strengthen repression around the world.
AI is a dual-use technology
Frontier AI will shape the future military balance. CCP leadership already operates on that premise, and is building its military for an AI-enabled battlefield. PLA strategists view the “intelligentization” of their military forces as the means with which to catch up and eventually surpass the US military. The PLA is already procuring commercially developed Chinese AI systems for military use, including DeepSeek models deployed to coordinate swarms of unmanned vehicles and enable cyber offense capabilities. These capabilities will not diffuse slowly. When a new model reaches a new capability in autonomous targeting, vulnerability discovery, or swarm coordination, for example, the regime that controls it can put it onto the field in weeks, not years.
The risk compounds because frontier AI will be an accelerant for other critical technologies. Advanced AI models will be able to compress research and development (R&D) cycles in semiconductors, biotech, and advanced materials. A lead in frontier AI will enable a widening lead across the full national security technology stack.
If a PRC AI lab had developed a model at the level of Claude Mythos Preview before an American one, the CCP would have had first access to a system that can autonomously discover and chain software vulnerabilities, which it could have used to further penetrate critical American infrastructure. Future models will be exponentially more capable, and therefore have commensurately greater implications for the national security interests of the US and other democracies.
Neck-and-neck competition risks disincentivizing responsible AI
A neck-and-neck race between American and Chinese AI labs could make industry and government-led safety and governance efforts more difficult, and less likely. If PRC labs are either close behind or at par with models in the US, private AI firms in the US and China are likely to feel more pressure to release new models and products faster, without taking prudent pre-deployment safety measures. Governments could become reluctant to enact policies to encourage responsible AI development and deployment, for fear of falling behind.
While increasing numbers of researchers in China’s AI labs and policy community are concerned with AI safety risks, this trend has not translated into safety practices on par with labs in the US. As of last year, only 3 out of 13 top Chinese AI labs published any safety evaluation results, and none disclosed evaluations for Chemical, Biological, Radiological, and Nuclear (CBRN) risks. The Center for AI Standards and Innovation (CAISI) found that DeepSeek’s R1-0528 model complied with 94 percent of overtly malicious requests under a common jailbreaking technique, compared with 8 percent for US reference models. This pattern has continued in more recent releases. For example, an independent assessment of Moonshot’s Kimi K2.5 published in April found that the model failed to refuse CBRN-related requests at a far higher rate than US frontier models. Compounding the problem, labs in China often release dual-use capable models as open-weight. Once a model is open-weight, safeguards that do exist can be removed, making the model available to any state or non-state actor to use for malicious purposes, including the cyber and CBRN misuse those safeguards were built to prevent.
Our policy objective: creating and maintaining a lead for democracies
We support policies in the US and other countries that build and maintain a safe, near-term lead over the CCP in intelligence, domestic adoption, and global distribution. This lead is key to avoiding authoritarian AI leadership and protecting the national security interests of the US and other democracies. Doing so is a fundamental prerequisite to ensuring that democratic states can achieve favorable terms with authoritarian states.
Anthropic deeply respects the Chinese people and the accomplishments of the Chinese AI community. We hope for peaceful relations between China and the world. Our concerns are specifically with the risks to humanity posed by any powerful authoritarian political systems with access to frontier AI systems.
Opportunities for engagement on AI safety
Anthropic supports international AI safety dialogue with AI experts in China, when possible. The world has a vested interest in safe AI, regardless of where it is developed and deployed. There are a range of risks that could emerge from frontier AI systems requiring engagement between the US and China. Efforts that identify shared challenges and advance ideas to prepare for and mitigate these risks are in our shared interests.
The prospects for productive engagement are best when the US maintains a large capabilities advantage. Responsibly building a lead in developing and deploying the most advanced AI augments our ability to influence AI safety in China and elsewhere.
The Mythos Preview wake-up call
Mythos Preview, a model that we released to select partners as part of Project Glasswing in April, signals the arrival of an acceleration period that makes policy action even more urgent. With access to the model, Firefox was able to fix more security bugs last month than it had in all of 2025, and almost 20 times more than its monthly average security bug fixes in 2025. In response to the model, one PRC cybersecurity analyst wrote that China is “still sharpening our swords while the other side has suddenly mounted a fully automatic Gatling gun.”
Frontier AI capabilities will quickly approach the “country of geniuses in a datacenter” portrayal of transformative AI. This acceleration will be driven by the logic of scaling laws, in which model performance improves predictably with increases in computing power and data inputs, and by AI itself increasingly being used to accelerate the development of new models.
There is a high likelihood that we will look back on 2026 as the breakaway opportunity for American AI. American labs have the most advanced AI models, a large lead in both the quantity and quality of the advanced AI chips required to push the frontier, and a colossal capital advantage, from revenues and financing, to fund the necessary investments. PRC labs have real strengths: world-class, innovative talent, abundant and cheap energy, and plenty of data. All are requirements for developing frontier intelligence. But they simply do not have sufficient domestic compute to compete, nor the revenues and capital to fund it.
Four fronts of the competition
The US and China are engaged in a competition for strategic advantage in frontier technologies like AI. Statements from both Beijing and Washington reflect that view. Calling that competition a “race” can give the false impression that there is a finish line, after which one side will conclusively secure victory. Rather, the competition will be an ongoing contest for advantage, in which either democracies or authoritarian regimes successfully position themselves to shape the values, rules, and norms of an AI-enabled future.
This competition is playing out on four fronts:
- Intelligence: which countries develop the most capable AI models.
- Domestic adoption: which countries integrate AI most effectively across commercial and public sectors.
- Global distribution: which countries deploy the global AI stack on which the world economy runs.
- Resilience: which countries sustain political stability through the economic transition.
Intelligence is the most important of the four fronts. We anticipate that frontier model capabilities will drive the most consequential changes for geopolitical competition. Model capabilities are also a primary driver of market adoption and global distribution.
But intelligence alone is not sufficient. If the CCP integrates near-frontier AI systems more quickly and effectively into China’s economy and the CCP security apparatus, and drives global adoption of subsidized, low-cost AI, then it could secure advantages over democracies that offset its intelligence deficit. Beijing’s AI+ Initiative and its focus on “embodied intelligence” accordingly put high priority on policies that advance the integration of frontier intelligence into China’s economy and state apparatus. The Trump administration’s AI Action Plan, and its focus on “promoting the export of the American AI technology stack,” also speaks to the strategic advantage of driving global adoption.
While we won’t focus on it in this essay, we believe resilience will be an important front of AI competition. Being able to sustain stability, cohesion, and good policymaking in this period will be a critical advantage, and a vulnerability for those who cannot.
The state of the competition
Compute—the advanced semiconductors needed to train and deploy frontier AI—is an essential input on each front of the competition described above. The race for global AI leadership is in large part a race for compute. For more than a decade, model capability has scaled with compute, and the majority of performance gains in AI capabilities have historically come from simply using more of it. Moreover, compute is needed to serve customers’ use of AI (also known as “inference” capacity), not just to train new models. Compute will be critical both for training the most intelligent models and for deploying them in commercial and national security spheres. Access to top talent, copious amounts of data, and critical algorithmic advances all matter to the race for intelligence—but each of those inputs is irrelevant if the compute is insufficient.
Democracies are winning the competition for compute leadership today. While some worry that export controls could accelerate the CCP’s own efforts to develop an advanced chip supply chain, little evidence suggests that China’s indigenization efforts will challenge US and allied leadership in advanced compute technology. Beijing has invested enormous resources into China’s chip sector, with major industrial policy initiatives like the Made in China 2025 strategy and the China Integrated Circuit Industry Investment Fund launched years before the imposition of export controls. Despite this state-backed investment, PRC AI labs and chipmakers remain stymied by US and allied export controls on advanced chips and chipmaking equipment.
As a result, the compute gap appears to be widening. An analysis of Huawei and NVIDIA’s roadmaps found that Huawei will produce just 4 percent of NVIDIA’s aggregate compute in 2026 in total processing performance, and 2 percent in 2027. Moreover, NVIDIA represents only part of the US and allied compute ecosystem, with Google and Amazon ramping up production of their own chips (TPUs and Trainium, respectively) to meet demand from American frontier AI labs and their customers.
Further exacerbating its compute shortfall, China has made little progress in many of the most technologically complex segments of the semiconductor supply chain. Without access to extreme ultraviolet (EUV) lithography, and even more so if policymakers can close loopholes on deep ultraviolet (DUV) technology and its servicing and maintenance, China’s chipmakers will remain unable to manufacture chips in sufficient quantity or quality to challenge US compute leadership. China’s inability to manufacture high-bandwidth memory at scale widens the gap further. If the US strengthens its restrictions on the CCP’s ability to access US compute, one study estimates that America will have access to roughly 11 times more compute than China’s AI sector.
How democracies built the lead: commercial innovation and smart public policy
There are two main reasons for the compute lead. The first is the incredible innovation of companies like NVIDIA, AMD, Micron, TSMC, Samsung, ASML, and others across democracies like Japan, South Korea, Taiwan, the Netherlands, and the US, who together have built the unique technologies in the world’s most advanced semiconductors. Today’s AI achievements would not be possible without the feats of engineering and decades of sustained R&D investments that contributed to these products.
The second reason is forward-looking, decisive policy action across the last three presidential administrations. Bipartisan policy action has protected the US and allied innovation engine by restricting PRC firms under CCP jurisdiction from accessing the US AI stack. Our CEO has publicly commented on the importance of export controls, for example. These controls have curbed the sale of the highest-end AI chips and semiconductor manufacturing equipment (SME) to China over the last several years, constraining China’s frontier AI development even as Beijing has poured enormous state resources into the sector. Without action to limit China’s access to US compute, the CCP would have had all the ingredients to develop AI on par with or superior to America’s.
Some observers worry that constraining access to compute will force AI labs in China to innovate on other axes, reducing the American lead. While PRC labs are innovating, these innovations are so far not sufficient to overcome their compute deficit. Algorithmic improvements are both a function and a multiplier of compute, not a substitute for it, and discovering those advances is itself a compute-intensive process: more compute enables labs to run more experiments, which enables labs to discover more algorithmic improvements. As frontier models increasingly conduct AI R&D themselves, that loop will tighten further, and frontier models will help build their own successors. In short, compute advantage compounds into algorithmic advantage, and from there into a durable lead in AI itself.
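The compounding loop described above can be sketched numerically. In the toy model below, every parameter (raw compute, number of improvement cycles, per-cycle efficiency gain) is invented; the point is only that a lab with more compute, assumed to earn slightly larger algorithmic gains per cycle because it can run more experiments, sees its effective-compute advantage grow rather than shrink.

```python
# Minimal sketch, with invented parameters, of the compounding loop:
# more compute -> more experiments -> algorithmic gains that multiply
# effective compute in the next cycle.

def effective_compute(raw: float, cycles: int, gain_per_cycle: float) -> float:
    """Effective compute after `cycles` rounds of algorithmic improvement.

    `gain_per_cycle` is a hypothetical efficiency multiplier earned each
    cycle; a lab with more raw compute is assumed to earn a larger one.
    """
    eff = raw
    for _ in range(cycles):
        eff *= gain_per_cycle
    return eff

if __name__ == "__main__":
    # Hypothetical: the leader starts with 10x raw compute and earns
    # slightly larger per-cycle algorithmic gains than the laggard.
    leader = effective_compute(raw=10.0, cycles=4, gain_per_cycle=1.5)
    laggard = effective_compute(raw=1.0, cycles=4, gain_per_cycle=1.3)
    print(f"effective-compute ratio after 4 cycles: {leader / laggard:.1f}x")
```

Under these assumed numbers the initial 10x advantage widens with each cycle, which is the sense in which "compute advantage compounds into algorithmic advantage."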
Today, US frontier systems are estimated to be at least several months ahead of the top models from PRC AI labs on intelligence, though these estimates are necessarily uncertain. Despite the attention paid to open-weight models from China, their enterprise adoption lags closed frontier models, and monetization concerns have surfaced among public investors. Moreover, AI labs in China seem to be moving away from open source, now choosing to keep their best models proprietary.
China’s own AI leaders confirm the impact of export controls and the critical need for US chips. Executives at top PRC AI labs have expressed worries that China will fall further behind due to compute constraints. Top Chinese labs cite compute scarcity as a chief constraint to accelerating model capabilities, and they identify export controls as the reason for this constraint. One executive of a China-based hyperscaler called the impact of supplying export-controlled US chips to China “huge, really huge,” adding that any supply gap severely impacts China’s AI development and dismissing concerns that importing US chips would slow their self-sufficiency efforts. The primary voices in China suggesting export controls are futile seem to be CCP officials and state media, likely angling to influence US policymakers.
How the CCP stays competitive: policy loopholes remain
While export controls have been effective in providing today’s advantage, they have not gone far enough. Despite the CCP’s inability to manufacture enough advanced chips domestically or purchase them legally abroad, AI labs in China have been able to stay close on intelligence through two workarounds: illicit and evasive compute access, by smuggling AI chips directly into China and accessing offshore data centers, and illicit model access, through which they carry out distillation attacks on US frontier models and use those same models as tools to accelerate their own AI R&D.
China’s evasion of US export controls is an open secret. For example, federal prosecutors charged a Supermicro co-founder and two others with diverting $2.5 billion worth of servers containing advanced US chips to China. According to US government and media reports, DeepSeek trained its latest model on advanced US chips that are banned from sale to China. The Financial Times reported that Alibaba and ByteDance now train their flagship models on export-controlled US chips in data centers located in Southeast Asia, a route current controls do not reach because US export law covers the sale of chips, not remote access to them.1 The US export control system is struggling to prevent PRC AI labs’ access to advanced US-origin compute.
Distillation attacks, in which China-based labs create thousands of fraudulent accounts to circumvent access controls on US AI models and systematically harvest their outputs to replicate frontier capabilities, are another illicit technique used by PRC labs to catch up to their US counterparts and blunt the impact of export controls. The practice allows labs based in China to free-ride on decades of foundational research, billions of dollars in US investment, and the work of thousands of the world’s best engineers that produced US frontier models. The result is near-frontier capability at a fraction of the cost, subsidized by the United States. It is systematic industrial espionage of a technology critical to long-term US national security interests. OpenAI, Google, Anthropic, and the Frontier Model Forum have all publicly condemned the practice of distillation attacks.
AI experts in China openly acknowledge distillation attacks’ scale and importance to China’s AI development. A recent article in a state-owned media outlet described distillation attacks on US models as the “back door” China’s AI labs depend on as a core part of their business model. An ex-ByteDance researcher said that PRC AI labs use distillation as a shortcut to train models, allowing them to avoid investing into their own data pipelines.
US policymakers have moved quickly to address this threat. The White House Office of Science and Technology Policy published a memo on distillation attacks. Senior officials in the White House, Department of War, and members of Congress have also called attention to this problem. Recent legislation from the House Foreign Affairs Committee to address distillation attacks passed out of committee unanimously.
If policymakers in the US and allied democracies act to close these two channels propping up China’s AI models—illicit and evasive compute access and illicit model access—then we have a potentially once-in-a-generation opportunity to secure our lead.
Two scenarios for 2028
Below, we describe two hypothetical future scenarios to help illustrate how policy actions taken today can shape where we are in 2028.
Scenario one: America and our allies have a commanding and expanding lead
America’s compute edge remains strong. Despite increased state support for China’s semiconductor industry, China’s chipmakers remain years behind their US and allied counterparts, stymied in part by their inability to access advanced SME tooling, servicing, and maintenance. The US-PRC compute gap is widening as increased US and allied chipmaking capacity comes online and as advanced chipmakers continue to innovate on more efficient and performant chips. In tandem, US policymakers have taken action to close loopholes in the US economic security toolkit, and efforts to smuggle chips into China and access export-controlled chips in data centers outside the country are increasingly frustrated by well-funded enforcement efforts.
Consequently, US AI models are 12-24 months ahead on intelligence, and the lead is growing. A small number of AI labs lead at the frontier with the most intelligent, capable, and performant models. All are based in the US. The “country of geniuses in a data center” has become a reality across critical industries, including cybersecurity, finance, healthcare, and life sciences. When US frontier labs release new models in 2028 that achieve step-function advances in capabilities (similar to the relative impact of Mythos Preview in April 2026), China will not have access to similar AI capabilities until 2029 or 2030. This gives critical breathing room for democracies to set the rules and norms of frontier AI systems.
American AI is the backbone of the global economy, driving new economic and scientific dynamism. The Trump administration's efforts to drive domestic AI adoption and promote the export of American AI are succeeding, and the resulting gains from the adoption of powerful AI both at home and abroad are driving unprecedented economic growth and technological advancement. Global adoption of US AI has skyrocketed. Democracies’ lead in capabilities and compute means that China’s AI firms do not compete for global market share outside of a narrow group of autocracies. The world’s top frontier AI systems are shaped by democratic values and make it more difficult for authoritarian states to use AI systems to infringe on rights and civil liberties.
Cyber and other national security advantages expand. Public and private sector cyber operators and security professionals use advanced AI systems to reduce the attack surface in America and other democracies and blunt the CCP’s ability to gain and maintain cyber footholds in our systems, making our national security assets, IP, and communications networks more secure. The United States' overwhelming AI advantage is a powerful deterrent to aggression.
A self-reinforcing cycle compounds democracies’ leadership. A commanding AI advantage makes the United States and its allies more attractive partners. That alignment expands both the market for American AI and the coalition setting global AI norms, which in turn promotes the development and deployment of AI systems that are safe, secure, and protective of civil liberties. The world’s top technical and scientific talent continues to gravitate to where the frontier is being built. The United States gains significant leverage with which to incentivize cooperation from Beijing on critical issues like AI governance, strategic competition, and trade. This cycle reinforces itself: the lead strengthens the coalition, the coalition strengthens the lead, the democracy-led international order is anchored through the transition to transformative AI.
Scenario two: The CCP-controlled AI ecosystem is neck-and-neck
AI developed and deployed in China is near-frontier on model intelligence. Despite weak semiconductor production capacity, models trained by PRC AI labs are only a few months behind US models. Ongoing distillation attacks, overseas compute access, weak SME export enforcement, and a loosening of export controls on American semiconductors have assisted CCP efforts. Continued access to US frontier AI for AI R&D has also enabled AI labs in China to close the gap and approach parity with their US counterparts.
Rapid commercial and state adoption. Beijing has championed a whole-of-nation push on domestic adoption via “AI+” policies. Even though China's AI models are slightly less capable than US models, CCP efforts to accelerate adoption have paid off. China is thus able to deploy near-frontier AI capabilities more advantageously across economic, military, and technological domains, shifting the balance of power in China’s favor.
The CCP’s AI-enabled cyber force is a serious threat. The CCP’s integration of AI-enabled cyber capabilities within an already advanced cyber force has sustained the PLA as a menacing cyber competitor. PLA cyber actors have gained additional access to critical and dual-use infrastructure in the US and most countries around the world, enabling them to disrupt critical national security and societal functions. As AI is incorporated deeper into our most critical systems, democracies enjoy no security advantages over China in AI, despite having developed the technology first.
Beijing is winning in global adoption on cost and on-prem flexibility. Huawei and Alibaba data centers are globally prevalent, especially in, but not limited to, lower cost markets in the Global South. These data centers scale on older chips, which China is able to export because it can serve its domestic market with a combination of US chips purchased with an export license, smuggled into China, or remotely accessed in overseas data centers. They host second-tier, but cheaper and still effective models produced by PRC labs. Similar to the Huawei playbook of being cheap and “good enough,” China’s near-frontier models and hardware support a non-trivial and rapidly growing segment of the global economy. This infrastructure advantage gives CCP leadership significant influence over those markets.
Ensuring democracies lead
To ensure we land in scenario one, we support the following areas of policy action.
- Close the loopholes: Smuggled chips, foreign data center access, and SME. Today, PRC labs benefit from access to export-controlled American chips via smuggling and foreign data centers, and gaps in SME controls accelerate their self-sufficiency efforts. Tightening controls and ramping up enforcement budgets can help close these loopholes that prop up the CCP’s AI ecosystem. It would lower China’s compute ceiling and correspondingly slow their AI advances, thus sustaining and expanding democracies’ AI lead. Note that a lower compute ceiling could also materially impair distillation attacks, as AI labs in China still require a minimum threshold of compute to carry out effective distillation.
- Defend our innovations: Restrict model access and deter distillation attacks. Policymakers in Congress and the executive branch can continue to support policy actions to punish and disincentivize distillation attacks from PRC labs, while also taking steps to facilitate US labs’ ability to detect and prevent distillation attacks on their own. These could include a legislative clarification that distillation attacks are illegal, and efforts to facilitate threat-intelligence and technical sharing among peer American labs as well as with the US Government. Curbing this behavior can materially extend a democratic lead in the coming months and years.
- Champion the export of American AI. As public and commercial sectors around the world increasingly adopt AI, the Trump administration should continue its efforts to promote the global adoption of trusted AI hardware and models developed and shaped by democratic principles. Locking in trusted American infrastructure now denies the CCP’s AI ecosystem the global footholds it needs to compete on cost and adoption in the future.
Conclusion
America and its allies have developed both the world’s most capable frontier AI models and the world’s most advanced inputs to AI. This has provided a substantial advantage. If our superior access to that technology is defended, that advantage can be extended. But it will be lost if it is given directly to our competitors. The decisions made by policymakers this year will determine the future of transformative AI. We support those working to ensure that America and its allied democracies are winning in 2028.
Footnotes
- In January 2026, the House passed a bipartisan bill 369–22 to close that loophole; the bill has not passed the Senate.



