Wednesday, March 22, 2023

Coding Orthodoxy; Automated Law; and Quality Control in AI--CAIDP (Center for AI and Digital Policy): OPEN AI (FTC 2023)

 

©Larry Catá Backer; Pieter Bruegel, The Tower of Babel, 1563 (Kunsthistorisches Museum, Vienna)


 

The issue of the social relations between humans and the virtual spaces they inhabit--spaces populated both by human-dependent programs and, increasingly, by self-learning programs that have acquired a certain amount of autonomy from their coders--has begun to capture the imagination of social collectives. Having spent a tremendous amount of time and effort detaching humanity from exogenous supra-human forces that appeared to have dominion over the human, one now encounters a situation in which it may be possible to argue that humanity has (re)created God in its own image. 

The response has sought to deploy many of the categorical mechanics of human agency on the virtual processes they have created, or at least on the willingness of humans to accept the overlordship of the program, the simulation, and the like through acceptance of the judgments they produce. These deployments include constitutional principles and moral and ethical frameworks that are meant somehow to assert a (collective) human engagement with, or control of, the systems, processes, and judgments that humans themselves have coded into machines that now might think for themselves (at least within the parameters of their programming, and subject to the logic of the human condition that is represented by the instructions for data identification, generation, and analysis, and the principles embedded in coding). These self-referencing systems are, in a sense, both all too human and at the same time collectively supra-human in their good, bad, or indifferent habits of engaging with the stimuli that animate their programming. 

The problem, then, can be understood in semiotic terms. Where the language of social relations shifts from text to code, a transposition of the mechanics of orthodoxy is required. That mechanics requires both translation and quality control measures. That is, it requires a re-invention of the signification of the signs and objects through which meaning is described and applied in social relations. It also requires new supervisory structures--a shift from the discretionary decision making of human collectives (public and private, operating as an exogenous force against heresy) to the automated self-learning machines that serve that purpose in the ecologies of enormous data flows (public and private analytics tied to judgments of aggregated data representing a quantified vision of social relations in macro and micro relations, and endogenous (within) them). ChatGPT-like efforts represent the quality control element of this transposition; the self-learning machine coding and algorithms (judgment structures) represent the spaces where once text-based ordering principles are realized.  

These discussions have, of course, spilled over onto regulatory spaces--for where better to cement collective meaning making and a unified orthodoxy respecting humanity's creatures than the human spaces created for the incarnation of an aggregated human (virtual) person expressed within the apparatus of politics in its administrative organs (I dare not suggest a correspondence with the organs of the individual). Much good work has resulted--in the sense of achieving its intended effect; it is for history to judge both its value and its success, even in line with its own ambitions.

 

One of the more interesting efforts was recently announced by the CAIDP (Center for AI and Digital Policy), which, in its own words, "aims to promote a better society, more fair, more just — a world where technology promotes broad social inclusion based on fundamental rights, democratic institutions, and the rule of law." To those ends, 

[the Center for AI and Digital Policy,] joined by others, will file a complaint with the Federal Trade Commission, calling for an investigation of Open AI and the product chatGPT. We believe the FTC has the authority to act in this matter and is uniquely positioned as the lead consumer protection agency in the United States to address this emerging challenge. We will ask the FTC to establish a moratorium on the release of further commercial versions of GPT until appropriate safeguards are established. We will simultaneously petition the FTC to undertake a rulemaking for the regulation of the generative AI industry.

The announcement and justification follow. It may be accessed here in the original. 

POSTSCRIPT: The Press Release announcing the filing of the FTC Complaint may be accessed HERE (30 March 2023)

CAIDP Update 5.11


March 20, 2023

Dear Friends,


In 2019, many countries around the world, including the United States, committed to the development of human-centric and trustworthy AI. Yet less than a few years on, we appear to be approaching a tipping point with the release of Generative AI techniques, which are neither human-centric nor trustworthy. 


These systems produce results that cannot be replicated or proven. They fabricate and hallucinate. They describe how to commit terrorist acts, how to assassinate political leaders, and how to conceal child abuse. GPT-4 has the ability to undertake mass surveillance at scale, combining the ability to ingest images, link to identities, and develop comprehensive profiles.


As this industry has rapidly evolved so too has the secrecy surrounding the products. The latest technical paper on GPT-4 provides little information about the training data, the number of parameters, or the assessment methods. A fundamental requirement in all emerging AI policy frameworks – an independent impact assessment prior to deployment – was never undertaken. 


Many leading AI experts, including many companies themselves, have called for regulation. Yet there is little effort in the United States today to develop regulatory responses even as countries around the world race to establish legal safeguards. 


The present course cannot be sustained. The public needs more information about the impact of artificial intelligence. Independent experts need the opportunity to interrogate these models. Laws should be enacted to promote algorithmic transparency and counter algorithmic bias. There should be a national commission established to assess the impact of AI on American Society, to better understand the benefits as well as the risks.


This week the Center for AI and Digital Policy, joined by others, will file a complaint with the Federal Trade Commission, calling for an investigation of Open AI and the product ChatGPT. We believe the FTC has the authority to act in this matter and is uniquely positioned as the lead consumer protection agency in the United States to address this emerging challenge. We will ask the FTC to establish a moratorium on the release of further commercial versions of GPT until appropriate safeguards are established. We will simultaneously petition the FTC to undertake a rule-making for the regulation of the generative AI industry. 


We favor growth and innovation. We recognize a wide range of opportunities and benefits that AI may provide. But unless we are able to maintain control of these systems, we will be unable to manage the risk that will result or the catastrophic outcomes that may emerge. We are asking the FTC to “hit the pause button” so that there is an opportunity for our institutions, our laws, and our society to catch up. We need to assert agency over the technologies we create before we lose control. 


Merve Hickok and Marc Rotenberg


For the CAIDP 
