"!The United Nations tends to be a useful mirror, reflecting the current version of orthodoxy among its various stakeholders, some of whom are states, many of whom are elements of the complex ecologies of techno-functionaries who in public and private institutional organs, constitute the only global focus group worth watching. Its advisory bodies, experts and related functionaries are especially useful for inscribing the forms into which orthodoxy, translated into the law and norms of social relations, will morph." (Made in Our Own Image; Animated as Our Servant; Governed as our Property: Interim Report "Governing AI for Humanity" and Request for Feedback).
In December 2023, the UN Secretary-General's AI Advisory Body launched its Interim Report: Governing AI for Humanity. Almost a year later it brought us the Global Digital Compact.
The Global Digital Compact is a comprehensive global framework for digital cooperation and governance of artificial intelligence. Twenty years after the World Summit on the Information Society, it charts a roadmap for global digital cooperation to harness the immense potential of digital technology and close digital divides. On 22 September 2024, world leaders convened in New York for the Summit of the Future, where they adopted a Pact for the Future that includes a Global Digital Compact. (HERE).
Now comes dialogue--or better put, the launch of the Global Dialogue on Artificial Intelligence (AI) Governance. (Global Dialogue on Artificial Intelligence Offers Platform to Build Safe Systems, Secretary-General Says at Launch)
Today, we lay the corner-stones of a global AI ecosystem that can keep pace with the fastest-moving technology in human history. A system that rests on three fundamental pillars — policy, science and capacity.

This is all very nice, one can suppose. One can also suppose that much of its work, and certainly its trajectories, have already been scripted. That assumption, at any rate, may not be unreasonable given the performances at the event announcing the Dialogue:
The first pillar, policy. Today, we launch the Global Dialogue on AI Governance at the United Nations — the world’s principal venue for collective focus on this transformative technology. * * * The goals of the Global Dialogue are clear: To help build safe, secure and trustworthy AI systems — grounded in international law, human rights and effective oversight; To promote interoperability between governance regimes — aligning rules, reducing barriers and boosting economic cooperation; And to encourage open innovation — including open-source tools and shared resources — accessible to all. * * * In short, this is about creating a space where governments, industry and civil society can advance common solutions together. Where innovation can thrive — guided by shared standards and common purpose. In an era of rapid disruption, policy dialogue must also be well-informed.
And so, the second pillar of an effective global AI system is science. The creation, within the United Nations, of the International Independent Scientific Panel on AI, represents another milestone — putting science at the centre of our efforts. Today, we are launching an open call for candidates — from all regions and disciplines — for the International Independent Scientific Panel on AI. This group of 40 experts will provide independent insights into the opportunities, risks and impacts associated with AI. The Panel will be the world’s early warning system and evidence engine — helping us separate signal from noise, and foresight from fear. Their independent assessments will inform the Global Dialogue and beyond — helping the international community anticipate emerging challenges; Make informed decisions about how to govern this unprecedented technology; And level the information playing field for policymakers worldwide.
The third AI cooperation pillar is capacity. I recently submitted a report on financing options for AI capacity building. The report sets out practical pathways to narrow the AI divide — in computing power, data, research, education, training and safety standards. It proposes innovative and blended approaches — from philanthropic capital to concessional instruments, from computing credits to shared regional centres of excellence and fellowships. Building on that, I will soon begin consultations with Member States, potential funders and partners on the establishment of a Global Fund for AI Capacity Development. (Global Dialogue on Artificial Intelligence Offers Platform to Build Safe Systems, Secretary-General Says at Launch)
To begin the initiative, dozens of U.N. member nations — and a few tech companies, academics and nonprofits — spent a portion of Thursday summarizing their hopes and concerns about A.I. In short snippets, the speakers extolled the promise of the technology to cure disease, expand food production and accelerate learning. But they also identified risks including mass surveillance, the spread of misinformation, the consumption of energy resources and worsening income gaps among people and nations. (Countries Consider A.I.’s Dangers and Benefits at U.N.)
One notes variations of these themes at virtually every public-facing event framing an underlying regulatory project. That is not a bad thing, but it does suggest that the semiotics of dialogue may be more narrowly understood here as a pathway toward an anticipated (regulatory) and normative goal--one already presupposed. “The future will not be shaped by algorithms alone,” said Annalena Baerbock, president of the U.N. General Assembly. “It will be shaped by the choices we make together.” (Countries Consider A.I.’s Dangers and Benefits at U.N.). Still, one has the sense that the choices were made before the dialogue, one that is pre-shaped by the normative rivalries of the U.S. and China, or better put, by the choices both are making about how they mean to use the U.N. apparatus strategically for the advancement of their respective interests.
“We totally reject all efforts by international bodies to assert centralized control and global governance of A.I.,” Michael Kratsios, director of the White House Office of Science and Technology Policy, said on Wednesday. As the United States has pulled back, China has increased its support for global A.I. initiatives and cast itself as a champion of developing nations. Speaking on Thursday, Ma Zhaoxu, China’s executive vice minister of foreign affairs, warned that A.I. must not become “a game of the club of wealthy nations” and a “tool of hegemony.” (Countries Consider A.I.’s Dangers and Benefits at U.N.; more on the U.S. current position here and here).
All of these performative acts continue to move forward a quite peculiar approach to generative intelligence in automated decision making and virtual systems. It is one that assumes that these systems are inanimate instruments that can be bent to conform to the variety of usually highly contested human institutional objectives, values, desires, and organizational (and policy) frameworks. In the process it reconstructs these systems as something that perhaps they are not--both as something greater and something less than what is being developed.
The text of the Press Release: Global Dialogue on Artificial Intelligence Offers Platform to Build Safe Systems, Secretary-General Says at Launch, follows below. The Open Call for Candidates may be accessed here and below.
Though, of course, one cannot help thinking that there are multiple tracks for selection, everyone with even the slightest interest ought to seriously consider submitting an application. One of the most curious elements of the selection process is its bias in favor of bubble and peer connections. In this the selection process appears to provide an ironic example of the sorts of bias that the appointed group is supposed to worry about. But then, perhaps the group is not meant to do that, but rather to protect the interests of the peer-reinforcing groups that produce the sort of bias loops that the Committee is, at least in theory, supposed to consider. For the moment, the bias of selection rewards "peer marker" collectors. And that bias is compounded by the choice of data markers used to replicate, in Committee form, the "forbidden cities" of experts who might have contributed to the problem and who certainly have as great an interest in peer marker collection as in anything else. In that sense, the UN might have considered skipping this tedious process and merely developing AI-enhanced analytics that can accumulate the data markers it values. That may be most appropriate in an institution the cognitive architecture of which is biased to ensure both self-replication and the protection of the normative bubbles which sustain its self-referencing normative systems. But then, that may be the point where the objectives and goals are pre-set and the Committee is selected to connect sequenced data dots to "scientifically" get from a pot of information to the preferred "insights into the opportunities, risks and impacts associated with AI" that follow from the normative cage constructed for that purpose.