Tuesday, September 24, 2024

UN AI Advisory Body: "Final Report - Governing AI for Humanity"

 


 

Large-scale techno-bureaucracies require a number of things to work well as a substitute for, or overlay on, traditional governance structures grounded in the political expression of the will of the masses, whether directly through elections (the liberal democratic model) or as the democratic centralist expression of a mass line articulated under the leadership and guidance of a vanguard of social forces (the Marxist-Leninist model).

First, they require an object of regulation that requires specialized knowledge. Second, they require that this specialized knowledge be dynamic, in the sense that knowledge must constantly be refreshed in order to be current. Third, techno-bureaucratic leadership and guidance requires a specialized language that is to be used with respect to the specialized knowledge required to manage the object of governance. Fourth, they require an apparatus that is self-referencing, that is, an apparatus whose members share a solidarity built on a set of core premises and expectations, a shared outlook, among those who form the techno-bureaucratic core. Fifth, that techno-bureaucratic core requires its own rules of membership and exclusion grounded in part in shared values; opposing voices, even techno-savvy voices, tend to be silenced or suppressed. Sixth, the techno-bureaucratic apparatus requires its own ecologies of authority, in this case built into interfaces with analogues in intellectual circles (as they may be constituted and operated within a larger social system) and non-governmental organs directly interested in the object of techno-bureaucratization. Seventh, the techno-bureaucracy reinforces its authority and disciplines its community through interactions among its interface forces. Eighth, the function of techno-bureaucracy is to displace the traditional structures of politics (in either liberal democratic or Marxist-Leninist regimes) with the well-managed discourse of specialized knowledge, the purpose of which is to fulfill the premises around which these knowledge systems are built. And ninth, techno-bureaucracies build their legitimacy by attaching themselves to the forms of traditional political structures, an attachment that gives the appearance of politics but might be better understood as using the political structures as a means of transposing knowledge power into political imposition.

None of this is bad per se, though it might have given those once in a position to do so cause to reconsider the relationship of techno-governance to the fundamental political structures of the systems onto or into which it was attached. But those decisions were made almost a century ago in the first flush of the triumph of the scientism of the social sciences and the acceptance of the immutable verities of scientific investigation (each of which, of course, proved to be as permanent as the state of research at any given point, but each of which displaced polities with knowledge communities). Global communities have celebrated that triumph for a long time, though to a greater or lesser extent depending on context. The ruling global ideology within international bureaucratic and institutional circles, however, has provided probably one of the most durable and powerful homes for this turn in governance. That, in part, was inevitable--one substitutes knowledge power and works through knowledge communities when one is denied, directly, political authority. The state system may remain triumphant, in this sense, but its international organs lead through a different means--in place of political authority there is knowledge authority and the cultivation of the absoluteness of knowledge and its primacy over politics. That is, in the face of the pronouncements of a knowledge community--so recognized by political bodies--politics must give way.

The regimes of knowledge communities within techno-bureaucratic ecologies may be at their most potent (subject to temporal shifts as knowledge grows and changes) where the "science" is at its most potent (an invitation to a conclusion that is usually hotly debated, at least at its margins). They are at their least potent when they attach to, or appear to serve as justification for, social scientific "truth" in the service of political agendas--especially agendas of control and management. The current debates around artificial intelligence appear to fit in neither category. To some extent they are based on conjecture--the application of crude predictive analytics based on current states of knowledge respecting the development of artificial intelligence (however that is defined--another contentious issue). But conjecture is based on current states of knowledge and current knowledge of the proclivity of people to seek to employ these generative and big data technologies in ways that appear to run counter to either deeply held principles of human social relations or more short-term political interests.

It is with this in mind that one might productively engage with the latest effort to develop a governing framework for artificial intelligence, one that might appear to be tied to related production among allied techno-bureaucracies in other state organs and elsewhere, in what appears to be a coordinated effort of a solidarity-based and widely dispersed techno-bureaucracy to speak with many voices in a variety of institutional frameworks at about the same time. The effort, of course, is the widely anticipated final report of the United Nations AI Advisory Body. The AI Advisory Body's self-description serves as a model of the techno-bureaucratic form and its solidarity networks:

To foster a globally inclusive approach, the UN Secretary-General convened a multi-stakeholder High-level Advisory Body on AI on 26 October 2023 to undertake analysis and advance recommendations for the international governance of AI. The Advisory Body comprised 39 preeminent AI leaders from 33 countries from across all regions and multiple sectors, serving in their personal capacity.
A call for Interdisciplinary Expertise
Selected from over 2,000 nominations, this diverse group combined cutting edge expertise across public policy, science, technology, anthropology, human rights, and other relevant fields.

A Multistakeholder, Networked Approach
The Body included experts from government, private sector and civil society, engaged and consulted widely with existing and emerging initiatives and international organizations, to bridge perspectives across stakeholder groups and networks.

An Agile, Dynamic Process
The Body worked at speed to deliver its interim report in under 2 months, engage over 2,000 AI experts and stakeholders across all regions in 5 months, and produce its final report in under 3 months. Keeping pace with technical and institutional developments let the Advisory Body provide high-level expert and independent contributions to ongoing national, regional, and multilateral debate. (About the UN Secretary-General’s High-level Advisory Body on AI)

I have written about the form and semiosis of this techno-bureaucratic product in the context of the AI Advisory Body's Interim Report (Made in Our Own Image; Animated as Our Servant; Governed as our Property: Interim Report "Governing AI for Humanity" and Request for Feedback). Now the techno-bureaucracy solidifies its claims to the need for global governance based on a synthesis of the specialized output of knowledge communities with the interests of political institutions. It is a rich and increasingly typical reduction of the democratic process to the technologies of knowledge production in the service of political management, the object of which is to enrich the lives of its objects--the individuals increasingly remote from the processes of knowledge or of politics. That, perhaps, cannot be helped. It might, however, at least have been worth a conversation and some better communication of what the masses were to be giving up (or, in the case of Marxist-Leninist systems, the political core of leadership). One is left, then, especially if one falls outside the authoritative knowledge communities out of which Reports like this are fashioned, to read, consider, and perhaps comment to no one in particular. And one might at the same time consider the politics of that exercise as its own semiotic transformation of the democratic impulse into one in which individuals are again reduced to some sort of benign passive receptacles of something that is good for them. Perhaps it is for the best, though one wonders whether knowledge communities have indeed considered alternatives other than self-serving ones. With respect to the underlying premises that inform its substance, see here: Just Published: 'The Soulful Machine, the Virtual Person, and the “Human” Condition', International Journal for the Semiotics of Law.

In any case the Report carries forward the usual potpourri of current sensibilities about AI as an object that permits the triggering of substantial amounts of management (of its own knowledge production and application), and of its relationship to individuals, institutions, and the constitution and operation of social relations. And indeed one cannot but note the first rule of knowledge communities--to serve themselves.



The Executive Summary follows

Read the Final Report

AR | EN | ES | FR | RU | ZH

Executive summary

i. Artificial intelligence (AI) is transforming our world. This suite of technologies offers tremendous potential for good, from opening new areas of scientific inquiry and optimizing energy grids, to improving public health and agriculture and promoting broader progress on the Sustainable Development Goals (SDGs).
ii. Left ungoverned, however, AI’s opportunities may not manifest or be distributed equitably. Widening digital divides could limit the benefits of AI to a handful of States, companies and individuals. Missed uses – failing to take advantage of and share AI-related benefits because of lack of trust or missing enablers such as capacity gaps and ineffective governance – could limit the opportunity envelope.
iii. AI also brings other risks. AI bias and surveillance are joined by newer concerns, such as the confabulations (or “hallucinations”) of large language models, AI-enhanced creation and dissemination of disinformation, risks to peace and security, and the energy consumption of AI systems at a time of climate crisis.
iv. Fast, opaque and autonomous AI systems challenge traditional regulatory systems, while ever-more-powerful systems could upend the world of work. Autonomous weapons and public security uses of AI raise serious legal, security and humanitarian questions.
v. There is, today, a global governance deficit with respect to AI. Despite much discussion of ethics and principles, the patchwork of norms and institutions is still nascent and full of gaps. Accountability is often notable for its absence, including for deploying non-explainable AI systems that impact others. Compliance often rests on voluntarism; practice belies rhetoric.

vi. As noted in our interim report,1 AI governance is crucial – not merely to address the challenges and risks, but also to ensure that we harness AI’s potential in ways that leave no one behind.

1 See https://un.org/ai-advisory-body.
1. The need for global governance
vii. The imperative of global governance, in particular, is irrefutable. AI’s raw materials, from critical minerals to training data, are globally sourced. General-purpose AI, deployed across borders, spawns manifold applications globally. The accelerating development of AI concentrates power and wealth on a global scale, with geopolitical and geoeconomic implications.
viii. Moreover, no one currently understands all of AI’s inner workings enough to fully control its outputs or predict its evolution. Nor are decision makers held accountable for developing, deploying or using systems they do not understand. Meanwhile, negative spillovers and downstream impacts resulting from such decisions are also likely to be global.
ix. The development, deployment and use of such a technology cannot be left to the whims of markets alone. National governments and regional organizations will be crucial, but the very nature of the technology itself – transboundary in structure and application – necessitates a global approach. Governance can also be a key enabler for AI innovation for the SDGs globally.
x. AI, therefore, presents challenges and opportunities that require a holistic, global approach cutting transversally across political, economic, social, ethical, human rights, technical, environmental and other domains. Such an approach can turn a patchwork of evolving initiatives into a coherent, interoperable whole, grounded in international law and the SDGs, adaptable across contexts and over time.
xi. In our interim report, we outlined principles2 that should guide the formation of new international AI governance institutions. These principles acknowledge that AI governance does not take place in a vacuum, that international law, especially international human rights law, applies in relation to AI.
2. Global AI governance gaps
xii. There is no shortage of documents and dialogues focused on AI governance. Hundreds of guides, frameworks and principles have been adopted by governments, companies and consortiums, and regional and international organizations.

xiii. Yet, none of them can be truly global in reach and comprehensive in coverage. This leads to problems of representation, coordination and implementation.
xiv. In terms of representation, whole parts of the world have been left out of international AI governance conversations. Figure (a) shows seven prominent, non-United Nations AI initiatives.3 Seven countries are parties to all the sampled AI governance efforts, whereas 118 countries are parties to none (primarily in the global South).
xv. Equity demands that more voices play meaningful roles in decisions about how to govern technology that affects us. The concentration of decision-making in the AI technology sector cannot be justified; we must also recognize that historically many communities have been entirely excluded from AI governance conversations that impact them.
xvi. AI governance regimes must also span the globe to be effective — effective in averting “AI arms races” or a “race to the bottom” on safety and rights, in detecting and responding to incidents emanating from decisions along AI’s life cycle which span multiple jurisdictions, in spurring learning, in encouraging interoperability, and in sharing AI’s benefits. The technology is borderless and, as it spreads, the illusion that any one State or group of States could (or should) control it will diminish.
xvii. Coordination gaps between initiatives and institutions risk splitting the world into disconnected and incompatible AI governance regimes. Coordination is also lacking within the United Nations system. Although many United Nations entities touch on AI governance, their specific mandates mean that none does so in a comprehensive manner.
xviii. However, representation and coordination are not enough. Accountability requires implementation so that commitments to global AI governance translate to tangible outcomes in practice, including on capacity development and support to small and medium enterprises, so that opportunities are shared. Much of this will take place at the national and regional levels, but more is also needed globally to address risks and harness benefits.


 

3. Enhancing global cooperation
xix. Our recommendations advance a holistic vision for a globally networked, agile and flexible approach to governing AI for humanity, encompassing common understanding, common ground and common benefits. Only such an inclusive and comprehensive approach to AI governance can address the multifaceted and evolving challenges and opportunities AI presents on a global scale, promoting international stability and equitable development.
xx. Guided by principles established in our interim report, our proposals seek to fill gaps and bring coherence to the fast-emerging ecosystem of international AI governance responses and initiatives, helping to avoid fragmentation and missed opportunities. To support these measures efficiently and to partner effectively with other institutions, we propose a light, agile structure as an expression of coherent effort: an AI office in the United Nations Secretariat, close to the Secretary-General, working as the “glue” to unite the initiatives proposed here efficiently and sustainably.
A. Common understanding
xxi. A global approach to governing AI starts with a common understanding of its capabilities, opportunities, risks and uncertainties. There is a need for timely, impartial and reliable scientific knowledge and information about AI so that Member States can build a shared foundational understanding worldwide, and to balance information asymmetries between companies housing expensive AI labs and the rest of the world (including via information-sharing between AI companies and the broader AI community).

xxii. Pooling scientific knowledge is most efficient at the global level, enabling joint investment in a global public good, and public interest collaboration across otherwise fragmented and duplicative efforts.
[Figure (a): Representation in seven non-United Nations international AI governance initiatives]
International scientific panel on AI
xxiii. Learning from precedents such as the Intergovernmental Panel on Climate Change (IPCC) and the United Nations Scientific Committee on the Effects of Atomic Radiation, an international, multidisciplinary scientific panel on AI could collate and catalyse leading-edge research to inform scientists, policymakers, Member States and other stakeholders seeking scientific perspectives on AI technology or its applications from an impartial, credible source.

xxiv. A scientific panel under the auspices of the United Nations could source expertise on AI-related opportunities. This might include facilitating “deep dives” into applied domains of the SDGs, such as health care, energy, education, finance, agriculture, climate, trade and employment.
xxv. Risk assessments could also draw on the work of other AI research initiatives, with the United Nations offering a uniquely trusted “safe harbour” for researchers to exchange ideas on the “state of the art”. By pooling knowledge across silos in countries or companies that may not otherwise engage or be included, a United Nations-hosted panel can help to rectify misperceptions and bolster trust globally.

xxvi. Such a panel should operate independently, with support from a cross-United Nations system team drawn from the below-proposed AI office and relevant United Nations agencies, such as the International Telecommunication Union (ITU) and the United Nations Educational, Scientific and Cultural Organization (UNESCO). It should partner with research efforts led by other international institutions, such as the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on Artificial Intelligence.


B. Common ground
xxvii. Alongside a common understanding of AI, common ground is needed to establish interoperable governance approaches anchored in global norms and principles in the interests of all countries. This is required at the global level to avert regulatory races to the bottom while reducing regulatory friction across borders; to maximize learning and technical interoperability; and to respond effectively to challenges arising from the transboundary character of AI.
Policy dialogue on AI governance
xxviii. An inclusive policy forum is needed so that all Member States, drawing on the expertise of stakeholders, can share best practices that are based on human rights and foster development, that foster interoperable governance approaches and that account for transboundary challenges that warrant further policy consideration. This does not mean global governance of all aspects of AI, but it can set the framework for international cooperation and better align industry and national efforts with global norms and principles.

xxix. Institutionalizing such multi-stakeholder exchange under the auspices of the United Nations can provide a reliably inclusive home for discussing emerging governance practices and appropriate policy responses. By edging beyond comfort zones, dialogue between non-likeminded countries, and between States and stakeholders, can catalyse learning and lay foundations for greater cooperation, such as on safety standards and rights, and for times of global crisis. A United Nations setting is essential to anchoring this effort in the widest possible set of shared norms.
xxx. Combined with capacity development (see recommendations 4 and 5), such inclusive dialogue on governance approaches can help States and companies to update their regulatory approaches and methodologies to respond to accelerating AI. Connections to the international scientific panel would enhance that dynamic, comparable to the relationship between IPCC and the United Nations Climate Change Conference.

xxxi. A policy dialogue could begin on the margins of existing meetings in New York (such as the General Assembly4) and in Geneva. Twice-yearly meetings could focus more on opportunities across diverse sectors in one meeting, and more on risks in the other meeting.5 Moving forward, a gathering like this would be an appropriate forum for sharing information about AI incidents, such as those that stretch or exceed the capacities of existing agencies.
xxxii. One portion of each dialogue session might focus on national approaches led by Member States, with a second portion sourcing expertise and inputs from key stakeholders – in particular, technology companies and civil society representatives. In addition to the formal dialogue sessions, multi-stakeholder engagement on AI policy could leverage other existing, more specialized mechanisms, such as the ITU AI for Good meeting, the annual Internet Governance Forum meeting, the UNESCO Global Forum on AI Ethics and the United Nations Conference on Trade and Development (UNCTAD) eWeek.

4 Analogous to the high-level political forum in the context of the SDGs that takes place under the auspices of the Economic and Social Council.

5 Relevant parts of the United Nations system could be engaged to highlight opportunities and risks, including ITU on AI standards; ITU, the United Nations Conference on Trade and Development (UNCTAD), the United Nations Development Programme (UNDP) and the Development Coordination Office on AI applications for the SDGs; UNESCO on ethics and governance capacity; the Office of the United Nations High Commissioner for Human Rights (OHCHR) on human rights accountability based on existing norms and mechanisms; the Office for Disarmament Affairs on regulating AI in military systems; UNDP on support to national capacity for development; the Internet Governance Forum for multi-stakeholder engagement and dialogue; the World Intellectual Property Organization (WIPO), the International Labour Organization (ILO), the World Health Organization (WHO), the Food and Agriculture Organization of the United Nations (FAO), the World Food Programme, the United Nations High Commissioner for Refugees (UNHCR), UNESCO, the United Nations Children’s Fund, the World Meteorological Organization and others on sectoral applications and governance.

AI standards exchange
xxxiii. When AI systems were first explored, few standards existed to help to navigate or measure this new frontier. More recently, there has been a proliferation of standards. Figure (b) illustrates the increasing number of standards adopted by ITU, the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the Institute of Electrical and Electronics Engineers (IEEE).

xxxiv. There is no common language among these standards bodies, and many terms routinely used with respect to AI – fairness, safety, transparency – do not have agreed definitions. There are also disconnects between those standards that were adopted for narrow technical or internal validation purposes, and those that are intended to incorporate broader ethical principles. We now have an emerging set of standards that are not grounded in a common understanding of meaning or are divorced from the values that they were intended to uphold.

xxxv. Drawing on the expertise of the international scientific panel and incorporating members from the various national and international entities that have contributed to standard-setting, as well as representatives from technology companies and civil society, the United Nations system could serve as a clearing house for AI standards that would apply globally.
