Sunday, September 05, 2021

Bricolage and the Bricoleur--Data, Analytics, Human-Machine Learning and the Assemblage of Society and its Cultures Part I


 Bricolage is a popular French term borrowed by English speakers. It is commonly understood to refer to a construction or creation from a diverse range of available things. It speaks to intertextuality (the shaping of text and meaning by other text and meaning), and to the way in which meaning is made from the available objects around us--culture, politics, societal taboos and the like.

 It is as a bricoleur that I offer the first seven of a number of bursts of thought objects that seek to explore the foundations for the transformations of meaning from the objects around us even as we work furiously to pretend they are not there. 

For the first seven: Bricolage and the Bricoleur--Data, Analytics, Human-Machine Learning and the Assemblage of Society and its Cultures Part I.

For the next seven: Bricolage and the Bricoleur Part 2--The Constitution of Meaning and the Meaning of Constituting

For the third in the series: Bricolage and the Bricoleur Part 3--Institutional Self-Pleasuring and the Role of the Priests of Contemporary Collectives

And the last of this accumulation:  Bricolage and the Bricoleur Part 4--The State as a Consumable Object; and the Objects of Consumable States.


1. One asks the right question when one considers the need for accountability systems that align with the discretionary authority delegated to AI-based administrative systems. Now accountability becomes a function of both the exercise of administrative discretion under authority of law, and the oversight and operational responsibilities of the bureaucracy over the systems that now exercise operational and policy discretion. But equating accountability with constraints on use, and implementing it by the re-insertion of the human into AI analytics, itself raises further issues of self-reflexive accountability. In the end, the issue remains unresolved: accountability systems are meant to protect against AI abuses, yet those abuses are built into the programming and conceptualizations, into the data scraping, of the program itself--efforts undertaken by humans. Protecting against AI abuse, in the end, still requires human accountability, the solution to which may not necessarily be more human interaction. To get to that point, however, it is still necessary to patiently build the scientific knowledge of those inter-relationships--that self-contained ecology of human, machine, and analytical input--that are AI systems. Building AI supervisory systems that reflect AI systemicity provides the context that helps one better understand the misplaced tenor of arguments about transparency in AI accountability. People complain about "black boxes" in the context of AI, data driven governance, modeling and the like; but that is merely a metaphor for frustration at the borderlands between transparency and a deliberate, property-rights-based embrace of protecting the value of systems from close scrutiny. Transparency, then, acquires characteristics much like those of internal auditing around system secrets.
That is what is most challenging about accountability in this context--the way that accountability systems must carefully work their way around a public and quite conscious choice about the value of information and system protection (and there may be very good reasons for that choice) against the transparency necessary for robust accountability measures. It is a pity that this is rarely addressed by those who seek to influence the debates about AI-based governance.

2. Is the judicial function useful today only for its theater and as a bit of nostalgia, connecting a society to its past? That, at any rate, is what the substitution of predictive analytics for human discretion suggests when applied to the rendering of judicial decisions, particularly in criminal cases. The use of predictive analytics implicates critical principles of criminal justice. The automation of decision making in these respects has significant normative consequences for conceptions of the borderlands between justice (the exercise of discretion) and law (routinized, predictable and replicable results in the face of similar facts or data). When these are taken out of the hands of an individual human and placed within the routines and sub-routines of programs that synthesize a huge data set of historical progressions of the exercise of both justice and law, one might ask oneself whether the predictive analytics are truly more human (in a collective sense) than the individual judge who seeks to stand in for the collective and its present understanding of its collective wisdom and experience. But once machines learn all there is to learn from the aggregated collection of human decision making, and then learn to extrapolate from that on their own, does the human slowly vanish from the collective expression of human conduct? It is not so much that the machine and its predictive modelling cannot dispense either law or justice, but at some point is it human law and justice anymore? To ask that question is to implicate a more terrible idea--that the reason one fears this dehumanizing element in such models is that the perfect state of humanity is imperfection; a system that more closely attains perfection becomes not merely inhuman, but also in that sense unjust.

3. Perhaps Isaac Asimov was right when he wrote his Robot series in the 1950s: if the robot represents an essentialized reproduction of the humanity that built its functions and infused its programs with (human, all too human) values, then does the self-learning robot deceive when it elaborates that all too human, though essentialized, response to interactions with representatives (singular and autonomous to be sure) of its aggregated functional creators? Asimov provided no answer. Humanity worries overmuch about the deceptiveness of reproduction--though there is irony here, evidenced by every childbirth. That irony is lost on our societal leading forces, those who influence collective meaning making, many of whom have embraced the pathologies of a defining premise--that the simulation of humanity expressed through the robot (rather than the infant), the self-learning machine, the 'bot, and the like, is inherently deceptive and manipulative. The deception and manipulation itself is not the problem (or humanity would have to self-destruct), but rather that it comes from the simulacra of humanity which the robot represents. The problematic nature of this premise of inherent deception is enhanced by its consequence for a society that is both suspicious of and open to the insertion of robots in its organization and operation: the generalization and privileging of transparency which, paradoxically, when taken to its limits, makes it impossible to have a society that values equally its rights to privacy and private life, and to be protected against deception. The paradox is increasingly important as cultures of compliance and accountability increasingly bump up against cultures of personal autonomy and privacy. Adding robotic actors (programmed machine-responsive autonomy) complicates matters.
In this context the notion of consent becomes particularly interesting; one can, I suppose, consent to be deceived: role play is deeply ingrained in human interaction. But as the recent American experience with legislating consent in sexual encounters suggests, the concept is both slippery and redolent with cultural incoherence.

4. At a time when much academic research starts from the premise that one ought to be suspicious of the new data-driven technologies (ultimately a conservative position that is both ironic and consistent with the field-preserving and policing cultures of academia), it is refreshingly different to consider the possibilities from the other end of the spectrum. Might it be possible to harness the power of data-driven analytics to enhance the efficiency, predictability and integrity of dispute resolution by looking to data analytics to cut the human (and human bias, including illicit bias in the form of corruption) from the process, and thereby enhance the legitimacy of the system? The operative objective is trust building by enhancing systemic integrity--oddly enough, the fundamental orienting approach of Chinese Social Credit systems. And yet one wonders how analytics can build in a trigger for justice measures.

5. It is said by influential people that transactional and functional approaches best justify the political state as it has come to be developed over the last several centuries. These in turn are grounded in a fundamental premise of state exceptionalism. State exceptionalism has been the bane of Enlightenment thinking for several centuries. Bodies corporate--whether in the form of states, religious communities, Mahjong clubs, economic collectives, or affinity collectives--pose different versions of the same fundamental problem. The issue, then, is a peculiar conceit of the way in which society (in the West especially) has invested one particular form of collective with exceptional characteristics. Having imposed this exceptionalism as the fundamental starting point for meaning making, it is then necessary, from time to time, to justify it. That occupies a lot of time. But it is not a frivolous exercise by any means. The justification itself serves as the baseline premises against which it is possible to conceive and order the "ideal" state, and to measure the universe of current variations against it. And thus we arrive at the problem: the normative context against which an idealized baseline can be conceived. That is a critical problem to overcome; its resolution (note: not solution, for this is a collective expression of faith-fidelity to ideals and principles) produces an enormous apparatus of legitimation and justification, of the elaboration of borderlands and taboos against which constitutional orders are measured. And at the bottom of all this is sometimes found the foundation of (political) justice, however expressed.
And yet the Institutes of Justinian remind us that justice precedes the state--the state is merely one of several vessels for its organization; and in the West the God of the Old and New Testaments also makes clear that justice proceeds from a very different source, requiring a very different set of collectives in which, in place of coercion, one encounters the coercive force of agape. Still, people don't think this way anymore. And it is important to help develop knowledge along lines that can be encountered, ingested and absorbed to good effect.

6. One can start from a sensible position--that internet architectures tend to mirror the social and political organization of those who control them. The current dominant architecture aligns with the market-driven, individual-autonomy-enhancing, and democratic nature of liberal democratic globalization. For that reason it has been easily weaponized against states whose political-economic systems are inconsistent with liberal democratic values (Iran, the Arab Spring, the 2019 Hong Kong protests, etc.). This is neither hidden nor a subject of apology, especially where such alignment is said to reflect the global good (normatively speaking at least). It does not take a genius, then, to understand that political systems that reject liberal democratic values, if they mean to protect their integrity and legitimacy, must develop internet systems that reflect their own values--and, if such actors also embrace internationalism, project that internet system outward to serve as a global or regional baseline that aligns with and strengthens their interventions abroad. All of this is fair enough. What becomes problematic is when these efforts are undertaken under cover of participation in accordance with dominant rules and sensibilities (good strategy but terrible ethics), and when the end product poses a threat to the political-economic order of other states and their influence empires through their economic, social, and political networks. Worse, of course, follows where this proffered alternative can also serve as a means of surveillance, whether overt or covert.

7. Mao Zedong famously declared: "A revolution is not a dinner party, or writing an essay, or painting a picture, or doing embroidery; it cannot be so refined, so leisurely and gentle, so temperate, kind, courteous, restrained and magnanimous. A revolution is an insurrection, an act of violence by which one class overthrows another." Academics, especially those protective of field conceptions and boundaries, are particularly susceptible to the periodic revolutionary movement as the cycle of conception from radical to reactionary inevitably progresses over time. So it is, we are told, with the organization of international relations, of economics, of corporate governance, and of the organization of politics. I would tend to agree. But I have no stake in this revolutionary undertaking. Such a revolution requires the insights of semiotics and the ideologies of meaning (Bourdieu and Foucault) projected into the institutional silo (Luhmann) of an academic field. It is about time. But its mandarins will not be well pleased--unless, of course, the revolution can be co-opted. That, however, is a post-proposal realization problem.
