Saturday, March 05, 2022

Algorithmic Law: "ELI Model Rules on Impact Assessment of Algorithmic Decision-Making Systems Used by Public Administration" (Feb. 2022); Building a Disciplinary Ecology Around the Egoism of Algorithmic Decision Making Systems

 

Pix Credit HERE

  Aereon : They are an army unlike any other... crusading across the stars toward a place called UnderVerse, their promised land - a constellation of dark new worlds. Necromongers, they're called. And if they cannot convert you, they will kill you. Leading them, the Lord Marshal. He alone has made a pilgrimage to the gates of the UnderVerse... and returned a different being. Stronger. Stranger. Half alive and half... something else. If we are to survive, a new balance must be found. In normal times, evil would be fought by good. But in times like these, well, it should be fought by another kind of evil. (Opening Lines, Movie--The Chronicles of Riddick (2004))

 * * *

Since time immemorial it has been understood that people can be cruel, malicious, spiteful, greedy, narrow minded, ambitious, and utterly selfish in the protection of their personal interests and in the search for the greater glory of themselves. Society has learned to live with all of that. And, indeed, much of what passes for literature in many societies is dependent on the certainty of the cruelty of humans, especially in their relations with other humans, and especially when they abuse their power relationships in every conceivable way.

However weakly, collectives have sought to manage the cruel and selfish in the service of social stability and prosperity, and to the greater glory of whatever goals their ideological premises and beliefs propel them towards. That is not news--though it is the stuff of analysis, morals, ethics, strategy and the like. To that end, and in both public and private bodies, societies have sought to build cages of regulation against the exercise of discretion in decision making, and have elaborated complex standards for abuse of discretion (public institutions) and breach of duty (private institutions). They have deployed religious, moral, social, and other control mechanisms to provide the normative baselines against which these modalities of management are deployed. These have worked well enough from time to time--producing in the best of times enough 'saints' to counter 'sinners' in a collective in which most are passive and reactive (fodder for management by leading forces) and can be induced in either direction.

But technology has added a new wrinkle to an old human condition. What happens when one can construct a monster, a Frankenstein, into which one can breathe the life of all of our strengths and every bit of our weaknesses, lusts, selfishness, cruelty and the like, and then leverage that across the entirety of the terrain in which one may seek to assert power or to stamp a collective with one's own brand? Data driven governance--the management of people and productive forces across the entire breadth of organized life--proves an efficient means of projecting individual or collective prejudice in decision making (for either good or bad or both) far beyond the capacity of an individual or a leading group constrained by the human technologies of 20th century administrative capacity. That is scary--not because the technological possibilities are boundless, but precisely because they can make boundless the great capacity for abuse inherent in even the most well meaning individual. And it can be done in a way that detaches human culpability (and culpability enforcing systems) from the assertion of power through the apparently neutral and bloodless logic of the algorithm supported by its quantifiable analytics.

What, in fact, can be more neutral, more scientific, than a collection of numbers arranged in accordance with the logic of equations, and fed on a set of gathered "facts" from the world around us? Well . . . at least in the deepest reaches of an Enlightenment fueled culture . . . nothing. And so it is--within the logic and in accordance with the normative choices that produce data, that produce analytics of a specific kind, and that produce the algorithms that serve in a sense the function of the Old Testament supreme (and thus only) divinity. The problem, then, with data driven governance, its analytics and algorithms, is not that it doesn't work, but rather that it works all too well.

Successful systems built on the logic of facts and factual relations that can serve the present (managerial or regulatory analytics), and that are able to project themselves back to the past (descriptive analytics) and into the future (predictive analytics), are built on a highly subjective and fragile foundation. Humans develop analytics (in Freudian terms the data driven ego), humans must be able to 'see' the objects that analytics consumes (the data driven id), and humans develop the algorithms that serve as the judgment and direction for action out of that exercise (the data driven super-ego). These are, indeed, acts of egoism that then propel these modalities back to the well charted territory of managing human frailty in the service of the collective good.


It is with this in mind that it may be useful to see how the ELI has sought to temper the exportation and re-animation of the projected administrative ego in the form of disembodied data driven algorithmic systems by those institutions that had, well enough I suppose, sought to build a cage of rules around the abuse of administrative discretion. They seek to provide a new basis for meeting the challenge of abuse of discretion by inanimate processes in a way that, hopefully, will align with the more ancient systems for curbing abuse of discretion when undertaken directly by a human subject. For those ends a system is required--that is, a system is necessary to manage the abuse of the other systems that are its supervisory object. That system, in turn, is embedded in an architecture (or an ecology, to use the fashionable language of this era) of sub-systems that together are meant to re-inject the human, and human assessment, into the work product (and the system construction) of these algorithmic decision making modalities. That, of course, brings us back full circle. In order to manage (again in the fashionable language of 21st century administration--to prevent, mitigate, and remedy breaches) algorithmically produced abuses of discretion, it is necessary to reinsert humans (and the systems necessary to ensure that they themselves do not abuse discretion in their role of protecting collectives from algorithmic abuses of discretion). What is missing, then, is the re-invigoration of human directed systems to monitor human abuses of discretion in their role as monitors of algorithmically embedded abuses. And more than that--the insertion of structures of impact assessment invites the layering of systems of analytics that themselves may complicate the process of administration in ways that may also reproduce the challenge, but now in another context.
Nonetheless, that has been the model since time immemorial--to protect against abuse one must develop mechanisms that themselves may be sites of abuse. That ought not to be forgotten here.

And that, then, points to the obvious. It is an 'obvious', however, that is bound up in the necessary embrace of the original (systemic) sin of abuse (whether human or data driven): One can control for human deviation, and for human (selfish) deviations or projections within data based systems. But one cannot control for (and indeed systems are meant to enhance the power to project) the fundamental prejudices whose collective manifestations are the foundations of political, social, religious, moral, and other culture, and the basis for the construction of identity within, and the protection of differences among, identities. Where within or between systems those terrains of differentiation are battlegrounds, no amount of system tweaking will remove these contestations, either from the field of human combat or from its more abstract manifestations in systems. To pretend that data driven governance and its algorithmic decision making are not themselves heavily embedded in these battles, and cannot be made tools of and powerful mechanisms for their projection, is to misunderstand the intimate connection between the human and human systems, however much such systems, once birthed, are detached from direct and everyday human management. Thus the very worthy goals of the ELI:

The use of these techniques poses specific problems relating to the principle of good administration. In addition, issues such as transparency, accountability, compliance and non-discrimination are particularly relevant in the context of public administration. . . There are various ways in which concerns raised by algorithmic decision-making can be addressed. The central idea underlying the Model Rules is an Impact Assessment. (ELI Issues Guidance on the Use of Algorithmic Decision-Making Systems by Public Administration)

These are themselves a collection of sometimes deeply contested normative concepts, the resolution of which will substantially affect the way that system monitoring and evaluation (for abuse) is judged. Algorithmic systems are mirrors; it is clear that many humans do not like what they see.


For all that, the Model Rules provide both a step in the right direction, and a window into the normative foundations that guided choices in the development of the disciplinary and abuse mitigating ecology that is the "ELI Model Rules on Impact Assessment of Algorithmic Decision-Making Systems Used by Public Administration" (Feb. 2022). The framing concept of analytics to discipline analytics is useful--and its embedding of the human (assuming that the human is itself subject to monitoring for abuse) is an important means for ensuring that detachment does not itself open further opportunities for abuse. The construction of the "impact assessment" framework mirrors similar systemic measures related, for example, to the governance of the obligation of corporations to respect human rights through human rights due diligence assessments that are also contextually contingent. All who contributed ought to be commended for this preliminary effort to rationalize a significant challenge to evolving administration in the public (and to some extent in the private) sector. The embedding of the approach within the practices already developing in related areas is, or ought to be, welcome, especially to the extent it may make communication and operation across systems more seamless. The notion of impact analytics is intriguing, as is the notion that the management of one system creates the reaction of another, and perhaps the communication and dependency between multiple systems. And that perhaps brings us back to the quite ironic opening of the Chronicles of Riddick movie and the quote offered above.

The Table of Contents, Executive Summary, Reporters' Preface, and Black Letter follow. The entire Report may be accessed HERE. More information about the project and the full report are available here. A webinar on the topic, open to the public free of charge, will take place on 13 April 2022 from 12:00–13:30 CET. To register, please click here.

