Tuesday, May 30, 2023

The Handwringing Pandora: Brief Thoughts on the "Statement on AI Risk" Signed by AI Scientists and "Other Notable Figures"





"A group of top AI researchers, engineers, and CEOs have issued a new warning about the existential threat they believe that AI poses to humanity. . . This statement, published by a San Francisco-based non-profit, the Center for AI Safety, has been co-signed by figures including Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman, as well as Geoffrey Hinton and Yoshua Bengio — two of the three AI researchers who won the 2018 Turing Award (sometimes referred to as the “Nobel Prize of computing”) for their work on AI. At the time of writing, the year’s third winner, Yann LeCun, now chief AI scientist at Facebook parent company Meta, has not signed." (Top AI researchers and CEOs warn against ‘risk of extinction’ in 22-word statement).

The statement:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." (Statement on AI Risk Center for AI Safety)

 


The masses may also add their names to the statement (Sign the statement). And everyone should join the conversation, perhaps ultimately to give guidance to those who caused the problem and remain in control of the process.

The statement emerges in the afterglow of the 22 March 2023 Open Letter (Pause Giant AI Experiments: An Open Letter: We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4; news report here). "The letter was criticized on multiple levels. Some experts thought it overstated the risk posed by AI, while others agreed with the risk but not the letter’s suggested remedy." (Top AI researchers and CEOs warn against ‘risk of extinction’ in 22-word statement). It was reported that the brevity of this 22-word statement was meant to avoid the disagreements that greeted the earlier Open Letter (Top AI researchers and CEOs warn against ‘risk of extinction’ in 22-word statement).

For all that, it may be worth parsing the 22 words, or at least some of them. It is in the shortest text that the greatest misdirection may be had. Indeed, the most straightforward sentence tends to be the best exemplar of the crooked path towards an understanding that "cannot speak its name." So let's consider the words, and then the words together, to get a better sense of the sub-textual meaning meant to be conveyed. It is in that exercise that one encounters the handwringing Pandora; or, perhaps better put, the story of Lulu as put to music by Alban Berg. The principal point: neither Pandora nor Lulu is inherently evil, or for that matter moral; each is both a reflection and an augmenter of the energy and desires in the people and activities around them. From the statement one gets a sense not merely of the adherents but also of the environment around them. That analysis makes up the bulk of this short essay.

1. "Mitigating." That is an  interesting word to start the statement  The word remains close to its Latin parent--mitigare "soften, make tender, ripen, mellow, tame,"The precise object then is not to prevent or to avoid.  It is instead to soften, t make more gentle and to tame.  AI is here to stay.  It is just that it must be tamed or made tolerable.  Whatever that means. 

2. "risk of extinction." This gets one to the object of taming or of softening. What one wants to soften (
though not avoid entirely) is risk of extinction. That makes the statement even more interesting.  One ought not to avid AI because it represents (another) risk of extinction; instead it is the risk of extinction that ought to be mitigated--not avoided. One can then, in AI, bear a certain likelihood  of exposure to mischance or harm; one must strive to reduce that risk to acceptable levels (mitigation). Whatever that means. 

3. "from AI." AI is not defined,  But that as never been an issue. It is centered on programing that can free itself, to some extent, from its programmers, and make choices within the parameters for which it was written.  Or it may, by using programs of self-learning escape the confines of its original programing. . . to dominate the world. And so on. And not only would it escape its functional limits, bit, in a pathetically life like imitation of its creator, immediately then make it its business to exterminate its creator. 

4. "should be." Mitigating language--not "MUST BE", not  "SHALL BE" and the like, but should be. Its etymology suggests obligation--but in a subjunctive tense. Should be is what ought to be; not necessary what shall or must be.  That is should stands in for something like  this--"if you were as smart and well informed and embedded in the rue core values on which decisions like this ought to be grounded then you would. . ." This then dovetails nicely with the division of signatories: AI scientists and "notables" (those who ought to be known, and thus known relied on for guidance. 

5. "a global priority." This is easy enough--the problem is beyond states.  Yet states are themselves the core of the problem.  Tat problem originates either in an indifference to development, or in the conviction that development must be accelerated bt focused on objectives that hence state power int he world. Thus, perhaps, the mitigation is beyond states--and certainly beyond the enterprises that are themselves the drivers of development- But what is the international?  If states, standing alone are problematic stakeholders for the global, then who should be included? Enterprises probably; scientists certainly, other notables (whether self nominated or selected by other notables) and influencers, experts, and leaders of mass collectives irrespective of whatever it is they think they know about this subject (it is their interest rather than their expertise that would drive choice). And priority  in the sense of "precedence in right, place, or rank" does not mean primacy. One might view this as merely a call to suggest an equality of rank, in terms of the importance of the problem, with something else. That may be the intent of the phrase priority alongside."

6. "other societal-scale risks such as pandemics and nuclear war.." Welcome to the club. The idea here appears to suggest importance, again, by reference to the species threatening effect risk if AI. And in case one did not understand what a societal scale risk is, the references to pandemics and nuclear war try to make it clearer. We have no experience with nuclear war--those three generations or more of movies about nuclear war have taught people to know that the effects on society would be tremendously negative. And the global masses are just now coming out of the effects of the COVID pandemic. And that brings us back to extinction, the 5th word in the statement. 

So what is left? (1) AI can cause great harm; (2) the negative effects of AI make it as great a global priority as other world-changing threats, like COVID and World War III; (3) that harm or those negative effects create not an obligation but a desire to be obligated to (4) soften but not avoid those negative effects; (5) somehow; and (6) by someone who is capable.

We are precisely where we started: a group of people, and the institutions that bankroll them, along with governments and others who approach an ecstatic state anticipating the marvels that this will bring, who first blindly pursued an AI goal and now wish to pursue an AI goal with limits. Same people, same problem, same institutions, same hunger, same temptation--but now under law. But this is, in the end, law that merely serves as guidance, not necessarily compulsion.

The 22 March 2023 Open Letter (Pause Giant AI Experiments: An Open Letter: We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4) follows below.

Pause Giant AI Experiments: An Open Letter

 

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5]  We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

We have prepared some FAQs in response to questions and discussion in the media and elsewhere. You can find them here.

In addition to this open letter, we have published a set of policy recommendations which can be found here:

