Monday, July 01, 2024

Satyricon, or Tragedy at Play--Artificial Intelligence and the New World Order: Leopold Aschenbrenner, "Situational Awareness"

 

Pix Credit here

This past month (June 2024), the carbon encased consciousness self-identifying as Leopold Aschenbrenner published "Situational Awareness", dedicated to Ilya Sutskever. It is meant to offer a roadmap describing "a" (but perhaps not "the") pathway toward the shaping of human interaction with the generative and virtual intelligence, silicon encased, and how that interaction will, in its dialectical mimesis, reshape both. These visions from the pragmatic side of things tend to align with what may already have been anticipated from the philosophical side (see my take, "The Soulful Machine, the Virtual Person, and the 'Human' Condition" here). In both cases, a phenomenological semiotics--inductive, iterative, mimetic--and uncontrollable, in the way that the Greeks understood hubris as a sort of lèse majesté in the form of a presumption toward the gods--suggests that there is no way to control the trajectories of human or generative consciousness as each is now compelled to "work" on the other, and that what follows will reshape both and the nature of the relations between them. But there may be a way to profit from these trajectories, or at least to protect oneself from their most negative effects.

Legal regulation, in this schema, assumes its traditional marginal role, as window dressing and risk allocation device. The pretensions of law as an instrument shaping events disappear in the world of those actually doing the shaping--coders, engineers, structuralists, and perhaps the money that fuels their work. In all of this, law making is a necessary but pathetically misguided performance by those who represent what is about to be swept aside--in both the coding and societal communities and their apparatus. But the past must busy itself at least with appearances, if only to create the necessary curtain behind which things will change. With respect to what comes after, Aschenbrenner believes he can see at least the glimmerings of a possibility, or at least a process that makes predictive analytics of trajectories possible--though perhaps less so on the generative side (if only because he is not it). The strength of his belief in these analytics (at least in their repercussions on the human side) is strong enough to induce him to put his (and others') money where his brain is. For that reason alone (though many would do it as a function of status and position within the human hierarchies of AI development), it is worth taking the journey with him, at least as far as he is willing to let us see.

Certainly within the tech community much of this, one way or another, is not missed, and they are going along for the ride, whether as critic, observer or otherwise. "Situational Awareness" has quickly started to make the rounds of the AI community, especially in the technological and financial vanguard (that is, the producers, financiers and potential users of big data technologies in their generative forms) (but see also here). It is also leaking into the spaces within which other elites guard their territories and prerogatives (the Wall Street Journal article touching on AI community preemptive contracts for energy from nuclear facilities is instructive). More importantly, as Aschenbrenner sees it at least, and not incorrectly, these include--always hovering in the background just out of sight--the security apparatus of states (primarily) but also of other collectives that may deploy military or quasi-military authority/power. That, in part, is a function of the status of the author within that community, as well as his now apparently well financed iconoclasm.

On the AI side of things, Timothy Lee, for "Understanding AI," reminded readers that "Ilya Sutskever and Jan Leike left OpenAI within days of one another. The two were co-leads of OpenAI’s superalignment team, which got disbanded shortly afterwards. Their departure brought renewed attention to an April report that Leopold Aschenbrenner, a member of the superalignment team, had been fired for allegedly leaking information." Nonetheless there is analytical power here as well.

Mike Allen, for Axios, in his ten takeaways, emphasized Aschenbrenner's impulse toward the inductive and the iterative, toward the mimesis of virtual consciousness and its sensibilities; but also what motivates both digital and analogue consciousness--wealth or welfare enhancing bits: data and money. And that makes inevitable the juxtaposition of wealth in the form of "effective altruism"--it is all in the measure of value.

Luke Dawes, for the Effective Altruism Forum, raises some quite interesting interpretive and perspective issues--from the arborescence of "Situational Awareness" to its embedding in the current historical era's contest among variations in the human construction of social collectives, in the form of liberal democratic and Marxist-Leninist variations. But always the arborescent schema--one that is easily coded into the mimetic virtual consciousness that is one form of the AI that may be coming. This even as the impulse is to honor the rhizomatic element in the creation of virtual intelligence in the image of its creators. It is likely that the rhizomatic element will triumph, one way or another, and in its own way--yet likely only in its own space. For the moment it will exist, like the market economy in Cuba, as the "non-state sector"--the unofficial economy, the alternative 'verse, both essential and an existential threat to the arborescent schema that lies at the heart of Aschenbrenner's analytics.

Dirk Songuer, for Medium, was a bit darker about what he terms Aschenbrenner's 165-page AI Manifesto. He suggests:
Don’t get me wrong — I do think that LLMs and generative AI in general have their use cases. But I wonder if their impact, their outcomes, their actual return on investment, is also growing in such an exponential manner. Because I don’t think it is. * * * This is why AI companies now work on a veneer of sexualized and manipulative user interfaces that talk at you flirty and funny. Because why wouldn’t you want to be attracted to your iPhone? But also to distract you from the fact that these things are not really turning into a revolution.

Pix credit here
And perhaps darker still are some of the reflections of Rob Bensinger for LessWrong. "I think this report is drastically mischaracterizing the situation. ‘This is an awesome exciting technology, let's race to build it so we can reap the benefits and triumph over our enemies’ is an appealing narrative, but it requires the facts on the ground to shake out very differently than how the field's trajectory currently looks. The more normal outcome, if the field continues as it has been, is: if anyone builds it, everyone dies." (Ibid.). In the face of this, more regulation, and more self-control, is necessary. Yet that, precisely, is what human hubris--fully aware of the necessity--is likely to compulsively ignore. As the Fable of the Scorpion and the Frog (often attributed to Aesop) reminds us, one cannot overcome one's nature even when it leads to one's doom. Perhaps. Ideologically compelled self-control may save some, though we may be quite different once we get to where we might wind up.

Sharaku Satoh, also for Medium, has produced a rebuttal. The issue of energy consumption is reconsidered. And indeed, one wonders whether the battles between those seeking to lock up power for electric cars, home cooling, and AI encasing may not play out differently. "In reality, the large-scale expansion of energy infrastructure requires a long time and faces many political and economic challenges." That is true, but prioritization in the face of crisis has a way of upending the complications of the administrative techno-bureaucratic state. The timeline for the creation of super-intelligence, much less its character (was God surprised when Eve followed the serpent's advice about the utility of eating the fruit of the tree of knowledge?), remains contested. But perhaps not its inevitability. On the other hand, Satoh concedes that the "prediction that AI technology will become the focal point of international competition is a realistically conceivable scenario." (Ibid.). But Satoh notes, again correctly absent a crisis that upends normality, that reality is more complicated and contested within stable orders. And so on.

Aschenbrenner remains both an insider and a player: From his website "For Our Posterity" he writes:

Hi, I'm Leopold Aschenbrenner. I recently founded an investment firm focused on AGI, with anchor investments from Patrick Collison, John Collison, Nat Friedman, and Daniel Gross.  Before that, I worked on the Superalignment team at OpenAI. In a past life, I did research on economic growth at Oxford's Global Priorities Institute. I graduated as valedictorian from Columbia at age 19. I originally hail from Germany and now live in the great city of San Francisco, California. My aspiration is to secure the blessings of liberty for our posterity. I'm interested in a pretty eclectic mix of things, from First Amendment law to German history to topology, though I'm pretty focused on AI these days.

And it is here that the contradiction occurs. Situational awareness is, of course, situational. But AI's context crosses the boundaries of the organization of human functional differentiation. It is here, perhaps, that the neural network of human insiders reaches its limits. "As Aschenbrenner wrote, there are probably only a few hundred people, most of them in AI labs in San Francisco, who have "situational awareness" about the industry. These people will most likely have the best insight into how AI will reshape the years ahead." (here). Yet they do not have the same awareness about the security apparatus of either of the major imperial centers, nor about their own neuroses, fears, and imaginaries around AI and its effects on their situation. And, indeed, these are all moving targets--even as AI metastasizes in accordance with its own logic, so do the security apparatuses of the United States/Europe and China (along with subaltern but much more militarily active dependencies). Each is coded differently--their bias profiles (clusters of datafied norms and priorities coded into their programs) may not align (current politics opens a small window onto coding bias privileging differences even within systems).

It is here that arborescence fails and the rhizomatic schema offers possibilities--but the cost is high, perhaps too high for humans--a loss of hierarchy and of control over direction. And that leaves open the question that once dominated and now has been shunted to the margins--where is human autonomy in the mix? Perhaps, at least, AI and its human collective responses finally bring clarity to Nietzsche's description of the 4th of the Great Errors (Twilight of the Idols (1888))--the error of free will ("Men were thought of as “free” in order that they might be judged and punished—in order that they might be held guilty: consequently every action had to be regarded as voluntary, and the origin of every action had to be imagined as lying in consciousness(—in this way the most fundamentally fraudulent character of psychology was established as the very principle of psychology itself)."). Note the misdirection--it is not that judgement and punishment are wrong, it is merely that they are misdirected through a presumption that everything is freely undertaken rather than compelled by context. That context, in turn, is constructed from out of the phenomenology of the premises from which a lebenswelt emerges (that cocktail of interactive premises from which the consciousness and ordering of things around humans may be said to emerge)--whether it is liberal democratic, Marxist-Leninist, altruistic, or that eventually imagined and programmed into AI super intelligence. Aschenbrenner, as one of those whom Nietzsche might call the priests, and others the influencers, seems to believe AI will assume a critical role in national security, and that within the larger security context of social solidarity and the allocation of its material needs and desires (themselves constructed from out of the aspirational objectives written into their ideological frameworks). This is one view of the arborescent schema--the hierarchical framework--that may emerge, in whole or in part, from the trajectories of AI development and the resistance/reactions to it from the human collectives.

This brings us squarely back to the contextually situated arborescence in "Situational Awareness." The situating of trajectories within premises of free will and choice, and the triumph of AI and its human collective responses (energy competition, social organization, evolution of the political-economic model as applied, "human" or "sentient" rights, etc.), as a consequence of the cascading limitations of choices (boxed into smaller spaces by time, space, place and perception of conscious possibility), still leaves open multiple pathways (as palatable or unpalatable as they may be as a function of the premises of the ideological framework within which one must learn to live with them). If one takes Deleuze and Guattari's "ecstatic elaboration of a metaphor" (Dan Clinton's suggestion) seriously, the result may not be so much a triumph as a melding--the rhizome may think itself autonomous and interconnected; it exists only within, and as an expression of, a different sort of schema of arborescence. The trick here is the avoidance of the binaries (triumph, defeat, one thing or another), in favor of bricolage--assemblages that are intimately inter-subjective and plural. That, more than the emergence of virtual "super" sentience, may be the hardest concept to grasp as coders, structuralists, prognosticators, planners--natural persons or generative intelligence--do what they must and plan for what they can, to their own advantage to be sure. This may, then, serve as the starting point for developing a sense of what is to come, but also the point of the greatest potential for tragedy, as both humans and generative intelligence will inevitably be compelled to follow their own coding. This was mapped out for us millennia ago as part of the celebrations of the Great Dionysia--the mimesis of the human condition in tragedy, comedy and satyr play--the great mimetic dialectic within which it is now possible to construct its plural. Aschenbrenner is helping to write (and in "Situational Awareness" is writing about) the tragedy, comedy and resulting satyr play that is the creation of generative intelligence in the likeness of its makers--our lived tragedy at play. And now back to the practical aspects of scripting this play. . . .

That, anyway, is suggested by the opening engagement of the AI community with "Situational Awareness" or at least with the demons lurking about it.

The Introduction and Table of Contents of "Situational Awareness" follow below. More thoughts in later posts.


Pix credit here; Fellini, Satyricon (1969)

Contents
Introduction 3
History is live in San Francisco.

I. From GPT-4 to AGI: Counting the OOMs 7

AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years. Tracing trendlines in compute (~0.5 orders of magnitude or OOMs/year), algorithmic efficiencies (~0.5 OOMs/year), and “unhobbling” gains (from chatbot to agent), we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027.

II. From AGI to Superintelligence: the Intelligence Explosion 46
AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into 1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic.

III. The Challenges 74

IIIa. Racing to the Trillion-Dollar Cluster 75

The most extraordinary techno-capital acceleration has been set in motion. As AI revenue grows rapidly, many trillions of dollars will go into GPU, datacenter, and power buildout before the end of the decade. The industrial mobilization, including growing US electricity production by 10s of percent, will be intense.

IIIb. Lock Down the Labs: Security for AGI 89
The nation’s leading AI labs treat security as an afterthought. Currently, they’re basically handing the key secrets for AGI to the CCP on a silver platter. Securing the AGI secrets and weights against the state-actor threat will be an immense effort, and we’re not on track.

IIIc. Superalignment 105
Reliably controlling AI systems much smarter than we are is an unsolved technical problem. And while it is a solvable problem, things could very easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense; failure could easily be catastrophic.

IIId. The Free World Must Prevail 126
Superintelligence will give a decisive economic and military advantage. China isn’t at all out of the game yet. In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers? And will we manage to avoid self-destruction along the way?

IV. The Project 141
As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 27/28 we’ll get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF, the endgame will be on.

V. Parting Thoughts 156
What if we’re right?


Appendix 162


SITUATIONAL AWARENESS: The Decade Ahead

Leopold Aschenbrenner, June 2024

You can see the future first in San Francisco. 

Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.

The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.

Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change. 

Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride. 

Let me tell you what we see.
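A side note on the arithmetic underlying sections I and II of the table of contents above: the argument reduces to adding and multiplying orders of magnitude (OOMs), and a minimal sketch makes that concrete. The ~0.5 OOM/year figures and the 2027 horizon come from the quoted text; the 2023 baseline, the constant and function names, and the treatment of "unhobbling" as a qualitative bonus are my own illustrative assumptions, not anything from the report.

```python
# Back-of-the-envelope sketch of the "counting the OOMs" argument
# (section I above). The ~0.5 OOM/year rates are quoted from the text;
# the 2023 baseline and all names here are illustrative assumptions.

COMPUTE_OOMS_PER_YEAR = 0.5  # physical compute scale-up (~0.5 OOMs/year)
ALGO_OOMS_PER_YEAR = 0.5     # algorithmic efficiency gains (~0.5 OOMs/year)

def effective_compute_ooms(years: float) -> float:
    """Orders of magnitude of 'effective compute' gained over `years`,
    counting compute and algorithmic progress together; "unhobbling"
    gains (chatbot -> agent) are treated as a further qualitative bonus."""
    return years * (COMPUTE_OOMS_PER_YEAR + ALGO_OOMS_PER_YEAR)

# GPT-4 (2023) to the report's 2027 horizon: four years at ~1 OOM/year,
# roughly the size of the GPT-2 -> GPT-4 jump.
print(f"~{effective_compute_ooms(2027 - 2023):.0f} OOMs by 2027")  # ~4 OOMs

# Section II's compression claim: automated AI research squeezing a
# decade of algorithmic progress (5+ OOMs) into roughly one year.
decade_of_algo = 10 * ALGO_OOMS_PER_YEAR
print(f"a decade of algorithmic progress ~ {decade_of_algo:.0f} OOMs")
```

The arithmetic is trivial by design; the entire force of the argument rests on whether those trendlines hold, which is precisely what the critics quoted above dispute.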
