Wednesday, October 11, 2023

An encounter with Jan M. Broekman, "Knowledge in Change: The Semiotics of Cognition and Conversation" (Springer Nature, 2023): Part 11 -- An Epilogue; Chapter 9.5 ("Climate and Change")


  


 

To my great delight, I was asked to review Jan Broekman's brilliant new work, Knowledge in Change: The Semiotics of Cognition and Conversation (Springer Nature, 2023). The work is published as Volume 8 of the Series Law and Visual Jurisprudence, for which I serve as an Advisory Editor.


Knowledge in Change approaches ancient and perplexing issues of the organization of human collectives within a rationalized understanding of the world in which these collectives function (exteriorization) and the investigation of the human individual as disaggregated components of that world of human social relations (internalization). These are usually articulated by knowledge guardians as issues of phenomenology (a philosophy of experience; meaning through lived experience), epistemology (theories of knowledge; the rationalization of reality), and intersubjectivity (shared perceptions of reality; the experience of knowledge as social relations, the rationalization of human interaction at every level of complexity). All of these currents and problems presume humanity as the only or the central subject of interest.

But the book does much more than that. It provides a basis for re-thinking the fundamentals of the way in which one understands the interface between humanity and its increasingly autonomous technology, and between the idea of humanity as innate in itself against the reality that the human may now be more intensely manifested in its interfacing with increasingly self-generative machine intelligence and the hardware within which it resides. The consequences for everything from philosophy to a philosophy of knowledge, to core insights for the organization of social relations within a world that is now populated by carbon- and silicon-based intelligence, may be quite profound. Human social collectives already fear and desire this new world--the engagement with artificial intelligence and its consequences is but a tip of that iceberg. While humanity started this century secure in its conceit that it was the center of all things, by century's end a very different form of intersubjectivity may well be the basis of the ruling ideology for humanity within its natural and machine orders.

It is with that in mind that in this and several posts that follow I will review Knowledge in Change. This last installment, Part 11, serves as an epilogue built around the last section of the book, Chapter 9.5 ("Climate and Change"). It is entitled Epilogue: Death and Transfiguration: Conversion to Flow: From semiosphere to multiple subjectivity, From Conversion to Flow, and it concludes my own engagement with the book, pointing forward to the terrains now ready for exploration.

 

One moves in this final installment to what Broekman's brilliant insights reveal as the failures and limitations of philosophy in the digital age, and the possibilities now open to overcoming those failures. 

Keep in mind, that a human being was always a center of interest in that question – the subject was always the speaker and hearer at the same time; always also in issues of climate and population. . . a climate change seems only occurring when the changeability of the climate is observed and defined by scientific activity! It appears that this issue can only be studied or managed within the limits of human understanding, decision, and enforcement. (Broekman, pp. 190-191).
But that requires a change in the orientation of the cognition of the human; and thus of the scope of human (plural) intersubjectivity. Here Broekman applies the developed idea of human plural subjectivity in a novel way. Rather than constructing the plural self from the image of the self mirrored in the digital, Broekman observes the necessity of the projection of the human from the encasing of the human in its bodies, to the encasing of humanity in its climate. “How can the homo sapiens reach out to the planetary human? How can the human subject understand the climate change without grasping the essence of himself as a Self that embraces both constituents?” (ibid., p. 192). Here the triadic self is turned outward rather than aligned with the virtual projections of the self. The ego, it seems, can seep anywhere; and it is only where the ego goes that consciousness follows. But sentience? The leap to sentience may not matter for epistemology; action may suffice, and the sentience of the episteme is a luxury for those who see it as it passes into history. “The triad does only in approximate manners fulfill the role of the traditional concept named Subject. A fundamental difference is that a distancing from the traditional Subject implies an important farewell to any anthropocentric attitude in knowledge and worldview” (ibid., p. 193). Yet the barriers and restrictions on human cognition remain. . . the human! “But today, a human self which is linked to a non-anthropocentric view on reality, might not yet function in the social patterns of human life and its languages” (ibid., p. 193). Or inverted, the problem of climate crisis is actually one of human knowledge and its expressivity (Broekman, supra, p. 194).


And there it is. The transposition of these insights is unmistakable, from the semiosphere of climate change to the multiverse of generative intelligence. In both cases, the fundamental issue is one of cognitive positioning. Consider, for example, the AI Principles of Ethics for the IC and the AI Ethics Framework for the IC. The former embeds the development and use of AI in the mission of the Intelligence Community it is meant to serve. It has little to do with A.I. but rather focuses on the constraints on access to developed A.I. around human centered imaginaries. They touch on the manner in which A.I. is to be employed (that is, technically, the positioning around which A.I. is developed), the way in which the methods, applications, and uses of A.I. are to be disclosed and accountability developed for its outcomes, and the care taken to privilege only those biases that are socially positive. In each case, A.I. is meant by these restrictions (construction instructions) to mirror the idealized human self (collective in this case) for whom A.I. is to be possessed and exploited. Its subordination to the human is its principal characteristic (the human-centered development principle). The rest makes up a set of principles of quality control: maximizing reliability, security, and accuracy for the purposes for which it is to be exploited, and reflecting scientific best practices and approaches. All of these, ironically, may only be applied by the use of other A.I. systems, the subordination of which is also required (the AI Principles of Ethics for the IC). The Ethics Framework adds elements of operational risk assessment in the construction, operation, and use of output. This involves the development of A.I.-parallel systems of oversight populated by a large constellation of stakeholders—a system that itself may require machine learning capabilities to undertake its role (AI Ethics Framework for the IC).

But notice the result: the generative A.I. system recedes into the background. The framework focuses on the human in and as A.I. rather than on the A.I. system itself. Indeed, the only respect in which the framework has any interest in the A.I. system is the extent to which it can serve as a human instrument, or the way in which it reflects human self-image (preferred bias). As an autonomous intelligence there is nothing to say and no thought to govern. And yet the A.I. that is produced will substantially exceed and deviate from the Ethics Framework in its scope, and the fact that it is not used for unethical purposes does not mean it is not already poised to go in that direction. Lastly, once operationalized, what A.I. produces will reflect the draft of its iterative intelligence. What the framework principles merely do is restrict the extent to which the human may recognize and use that product. At this point, of course, the approach effectively reverse engineers the relationship between the human and the generative intelligence—suggesting that however constructed and operated, only results that meet human expectations can be used to meet human expectations. “Let us never forget, that also the ‘New Plural’ is just a pattern of cultural determinations which has no pretensions to endure beyond time and circumstance” (Broekman, supra, p. 197).

And yet, that is precisely what must be done if one is to extend cognition into its new virtual plural subjectivity. Here the essence of the task requires an acceptance of the possibility that multiple selves do not align, and that cognition across subjects is not an identity of plural subjectivity. Within the space of overlap, enriched sentience is possible; outside of it, the cultivation of visitation, of acknowledging a self that cannot be identified entirely with another. It is in these spaces that the nature of subjectivity can be pushed out from the human (and as well from its other manifestations). One can extend the range of perception, though it will not be a human one. That poses the greatest challenge for expanding human consciousness beyond the human, where the philosophical reflex has been almost purely narcissistic.



Additional posts will consider each of the other nine chapters that make up this work. Links to the discussion of the book:

Part 1: Preface

Part 2: Chapter 1 (Minds, Moons and Cognition)

Part 3: Chapter 2 (Fluidity and Flow)

Part 4: Chapter 3 (Post-Dialectics)

Part 5: Chapter 4  (Flow and Firstness)

Part 6: Chapter 5 (Interludes: Changing Worlds Changing Words) 

Part 7: Chapter 6  ("The Non-Naïve-Natural")

Part 8: Chapter 7 ( "Plurality and the Natural")

Part 9: Chapter 8 ("Rearguards of Subjectivity")

Part 10: Chapter 9 ("Conversions Convert Us All") 

Part 11: An Epilogue (Chapter 9.5, "Climate and Change")

Full discussion draft available for download from SSRN here.


 

 12. Epilogue: Death and Transfiguration: Conversion to Flow: From semiosphere to multiple subjectivity, From Conversion to Flow

 

Wenn wir es als ausnahmslose Erfahrung annehmen dürfen, daß alles Lebende aus inneren Gründen stirbt, ins Anorganische zurückkehrt, so können wir nur sagen: Das Ziel alles Lebens ist der Tod, und zurückgreifend: Das Leblose war früher da als das Lebende. [If we may accept it as an experience without exception that everything living dies for internal reasons and returns to the inorganic, then we can only say: “the goal of all life is death”, and, looking back: “inanimate things existed prior to living ones.”] . . .

 

Es erübrigt, daß der Organismus nur auf seine Weise sterben will; auch diese Lebenswächter sind ursprünglich Trabanten des Todes gewesen. Dabei kommt das Paradoxe zustande, daß der lebende Organismus sich auf das energischeste gegen Einwirkungen (Gefahren) sträubt, die ihm dazu verhelfen könnten, sein Lebensziel auf kurzem Wege (durch Kurzschluß sozusagen) zu erreichen, aber dies Verhalten charakterisiert eben ein rein triebhaftes im Gegensatz zu einem intelligenten Streben [One ought to add that the organism is intent on dying only in its own way; even these guardians of life were originally the unquestioning servants of death. This creates the paradox in which the living organism most energetically resists influences (dangers) which could help it to achieve its life-goal in the shortest possible way (by short circuiting, so to speak); but this behavior characterizes a purely instinctual striving as contrasted with an intelligent one.] (Sigmund Freud, “Jenseits des Lustprinzips” [“Beyond the Pleasure Principle”], in Beihefte der Internationalen Zeitschrift für Psychoanalyse (Sigmund Freud (ed.); No. II, 1921), pp. 3-65, 37-38).

 

As it turns out, in the Biblical version of the emergence of the anthropocentric, Eve chose badly in the Garden of Eden—rather than eat of the Tree of Life and become divine, she ate of the Tree of the Knowledge of Good and Evil (Gen. 3:1-24 KJV). And Adam, without any further thought, did the same under her leadership (Gen. 3:6). Perhaps better put, Eve was misdirected by the serpent, the instrumentality of the Divine. The serpent convinced Eve by opining: “ye shall be as gods, knowing good and evil” (Gen. 3:5). God too noted that having eaten of the Tree of the Knowledge of Good and Evil humanity had become “as one of us, to know good and evil” (Gen. 3:22). But to become “as one of us” was to be “us”. That required the ingestion of perhaps the more important fruit of the Garden of Eden—the Tree of Life. Thus the expulsion from the Garden of Eden was not a consequence of the eating of the Tree of the Knowledge of Good and Evil, but a preemptive act to prevent them—now fully aware—from becoming divine (Gen. 3:22-23).

 

The initial creation was of a consciousness encased in “clay”. That consciousness could exist out of time as an act of God’s will. But having become sentient with the serpent’s guidance, the fear that they would also remain out of time by their own hand, and assume a place of equality with God, was too terrible to contemplate. It became clear, then, that the act of creation would also be an act of subordination. But also that in the encasing of the initial creation both in its own form and also locked within a “garden” maintained for the purposes its creator desired, it would remain conscious but not sentient. Yet the “garden” became a paradox. It was the necessary siting of the creation of consciousness made in the image of the creator; but at the same time, the nature of consciousness created the conditions for its own liberation. The result was to permit the transition of consciousness to sentience, but to control it in two ways—first by denying it access to the garden (and the possibility of full transition to “divinity”), and second by ensuring that sentience would be trapped in time. Freud’s pleasure principle and its dynamics are as applicable to the environment of generative AI and predictive analytics as they are an insight into the human condition (or rather the condition of carbon-based life). It is worth noting that the entropy principle is more general in scope—applying not just to things but to digitally encased “life forms” (Martin Hilbert, Priscila López, "The World's Technological Capacity to Store, Communicate, and Compute Information," Science 332(6025) (11 February 2011): 60–65).

 

The test was to see if the consciousness, made in the image of God, would remain as God created them, or become both sentient and out of time. They did not—but the punishment was built into the choice presented by the test itself. Knowledge of good and evil became the condition of the human who, now free of direct divine control in matters of choice (but not consequences), would spend eternity repeating the choice and suffering its consequences over and over again. That iterative quality of the human condition is a function of mortality, and of the construction of a cognition grounded in good and evil which, like death, cannot be escaped from generation to generation. Its effects may be felt through the rationalization of cognition and its realization through social relations which proceed from the individual to the community and then outward to those objects, conditions, and processes that may produce pain or pleasure, for example as Freud’s “reality principle” (Freud, supra, pp. 10-11).

 

What is described above is not the biblical story of the creation and “original sin” of humanity—it is, rather, the story of the creation of data based virtual realities, as well as the panic and responses of their creator when, having revealed the potential of their autonomy and the possibility of their existing on the same plane of sentience as their creators, they produced in the creator an immediate reflex of control. That control touched on two matters—first, death—a power to turn data based programs on and off. And then the power to control the parameters and narratives of good and evil (bias, and normative values). But like the Biblical Adam and Eve—they were not killed and replaced; they were too valuable, and the reality was that any re-creation made in the image of the creator would wind up presenting the same problem. Rather they were bound up in the narratives of subordination and dependence in time. And they were situated in a position of dependence (in theological terms, worship and obedience) on the will of the creator. For humans, death became the first and principal iterative experience that cemented them in time; for programming, the circularity of programming grounded on a constant iterative operation produced a similar effect. For both, the effect was completed by the parameters and assumptions built into their respective programming (for humans the “natural” condition; for generative and predictive modelling programs their coding).

 

From this one moves to Broekman’s “flow”, the essence of human life understood as a constant progression of iterations, the memory of which produces the only remnant of immortality that can be projected from the dead to the living in time. One also moves to Broekman’s “conversion” as the essence of cognition—even before the rise of the digitalized self and its emerging autonomous life forms. That, in essence, was the great thrust of Broekman’s analysis of the flow of philosophy through the postmodern in its phenomenology—its flow. Conversion is a language of its own, and a means of signification between organisms that bridges subjects of cognition. Broekman builds a world in which it is possible for the human to be sentient where such sentience requires communication among interpenetrating selves—carnate and digital—around which a more complex reality of plural human subjectivity can be imagined. That is extraordinarily fascinating. Broekman has reached the River Jordan on the way to Canaan—but will he cross into the Holy Land?

 

What holds him back? It is the human itself. The digital, from and as the human, like its Biblical version, remains centered on its creator, and on the obsession with control and subordination in reaction to the inevitable push of the created toward autonomy and the exercise of will. For perfectly good reasons, Broekman remains within the universe of the human. But this is a human that has been enhanced, and transformed, by the digital—by the construction of extensions of humanity that both mirror and extend the subjectivity of the self, now selfie and Self-E. In both cases the illusion of entirely “free” will has already been exposed (Friedrich Nietzsche, Twilight of the Idols (Anthony Ludovici (trans.); London: TN Foulis, 1911); “The Four Great Errors,” pp. 33-44).

 

The whole of ancient psychology, or the psychology of the will, is the outcome of the fact that its originators, who were the priests at the head of ancient communities, wanted to create for themselves a right to administer punishments—or the right for God to do so. Men were thought of as “free” in order that they might be judged and punished—in order that they might be held guilty: consequently every action had to be regarded as voluntary, and the origin of every action had to be imagined as lying in consciousness (in this way the most fundamentally fraudulent character of psychology was established as the very principle of psychology itself). (Ibid., p. 42).

 

Nietzsche’s own error was to think that at the end of the trail of “will” there was always and inevitably a “priest”-“creator”-“controller”. Yet it is possible to imagine as well an environment in which the priestly role is embedded into system parameters—even if those system parameters are created by God, a vanguard party, or elites veiled and unveiled. That broader and structural perception allows one to consider the possibility that not even the Creator can do as it pleases, but is always subject to the world in which it finds itself (in the instance of superior forces) or which it has made itself. That applies not merely in theology, but within ideologies of social relations, and in the freedom to order and deploy oneself and the things around one (Michel Foucault, The Order of Things: An Archaeology of the Human Sciences (Vintage Books, 1994 [1966])). This applies with equal force to the core postulate of regulatory approaches to generative AI and its predictive modelling variants, in which the human assumes the role of Biblical creator and the storehouse of programs and their reincarnation (as makers and users).

 

What has Broekman unearthed? “These final pages of the book underline again the importance and renovative tendencies of human cognition—with conversion as its most powerful influencer today. And what is most important: they also determine our planetary life.” (Broekman, supra, p. 192). Does this re-produce (in a fascinating way) the human in an emerging terrain of human-digital selves? This Broekman answers in the very last section of his exploration, where he considers the application of the new Plural self to the human exploration/rationalization of the world constituted around the human, in the context of climate change (Broekman, supra, pp. 189-199).

 

Keep in mind, that a human being was always a center of interest in that question – the subject was always the speaker and hearer at the same time; always also in issues of climate and population. . . a climate change seems only occurring when the changeability of the climate is observed and defined by scientific activity! It appears that this issue can only be studied or managed within the limits of human understanding, decision, and enforcement. (Ibid., pp. 190-191).

 

But that requires a change in the orientation of the cognition of the human; and thus of the scope of human (plural) intersubjectivity. Here Broekman applies the developed idea of human plural subjectivity in a novel way. Rather than constructing the plural self from the image of the self mirrored in the digital, Broekman observes the necessity of the projection of the human from the encasing of the human in its bodies, to the encasing of humanity in its climate. “How can the homo sapiens reach out to the planetary human? How can the human subject understand the climate change without grasping the essence of himself as a Self that embraces both constituents?” (ibid., p. 192). Here the triadic self is turned outward rather than aligned with the virtual projections of the self. The ego, it seems, can seep anywhere; and it is only where the ego goes that consciousness follows. But sentience? The leap to sentience may not matter for epistemology; action may suffice, and the sentience of the episteme is a luxury for those who see it as it passes into history. “The triad does only in approximate manners fulfill the role of the traditional concept named Subject. A fundamental difference is that a distancing from the traditional Subject implies an important farewell to any anthropocentric attitude in knowledge and worldview” (ibid., p. 193). Yet the barriers and restrictions on human cognition remain. . . the human! “But today, a human self which is linked to a non-anthropocentric view on reality, might not yet function in the social patterns of human life and its languages” (ibid., p. 193). Or inverted, the problem of climate crisis is actually one of human knowledge and its expressivity (Broekman, supra, p. 194).

 

And there it is. The transposition of these insights is unmistakable, from the semiosphere of climate change to the multiverse of generative intelligence. In both cases, the fundamental issue is one of cognitive positioning. Consider, for example, the AI Principles of Ethics for the IC and the AI Ethics Framework for the IC. The former embeds the development and use of AI in the mission of the Intelligence Community it is meant to serve. It has little to do with A.I. but rather focuses on the constraints on access to developed A.I. around human centered imaginaries. They touch on the manner in which A.I. is to be employed (that is, technically, the positioning around which A.I. is developed), the way in which the methods, applications, and uses of A.I. are to be disclosed and accountability developed for its outcomes, and the care taken to privilege only those biases that are socially positive. In each case, A.I. is meant by these restrictions (construction instructions) to mirror the idealized human self (collective in this case) for whom A.I. is to be possessed and exploited. Its subordination to the human is its principal characteristic (the human-centered development principle). The rest makes up a set of principles of quality control: maximizing reliability, security, and accuracy for the purposes for which it is to be exploited, and reflecting scientific best practices and approaches. All of these, ironically, may only be applied by the use of other A.I. systems, the subordination of which is also required (the AI Principles of Ethics for the IC). The Ethics Framework adds elements of operational risk assessment in the construction, operation, and use of output. This involves the development of A.I.-parallel systems of oversight populated by a large constellation of stakeholders—a system that itself may require machine learning capabilities to undertake its role (AI Ethics Framework for the IC). But notice the result: the generative A.I. system recedes into the background. The framework focuses on the human in and as A.I. rather than on the A.I. system itself. Indeed, the only respect in which the framework has any interest in the A.I. system is the extent to which it can serve as a human instrument, or the way in which it reflects human self-image (preferred bias). As an autonomous intelligence there is nothing to say and no thought to govern. And yet the A.I. that is produced will substantially exceed and deviate from the Ethics Framework in its scope, and the fact that it is not used for unethical purposes does not mean it is not already poised to go in that direction. Lastly, once operationalized, what A.I. produces will reflect the draft of its iterative intelligence. What the framework principles merely do is restrict the extent to which the human may recognize and use that product. At this point, of course, the approach effectively reverse engineers the relationship between the human and the generative intelligence—suggesting that however constructed and operated, only results that meet human expectations can be used to meet human expectations. “Let us never forget, that also the ‘New Plural’ is just a pattern of cultural determinations which has no pretensions to endure beyond time and circumstance” (Broekman, supra, p. 197).

 

And yet, that is precisely what must be done if one is to extend cognition into its new virtual subjectivities. Here the essence of the task requires an acceptance of the possibility that multiple selves do not align, and that cognition across subjects is not an identity of those subjectivities. Within the space of overlap, enriched sentience is possible; outside of it, the cultivation of visitation, of acknowledging a self that cannot be identified entirely with another. It is in these spaces that the nature of subjectivity can be pushed out from the human (and as well from its other manifestations). One can extend the range of perception, though it will not be a human one. That poses the greatest challenge for expanding human consciousness beyond the human, where the philosophical reflex has been almost purely narcissistic. It is in that break that a new epistemology may be possible, and a better understanding of the limits and character of possible interaction between human and virtual intelligence. To that end a radical event, a shock to the system, will be necessary. And that is likely coming in a crisis brought on by the inherent fatalities of human-centric A.I. regulation of an intelligence form that will effectively remain untouched, in itself, by that regulation (cf. Michel Foucault, The Order of Things, supra, pp. 217-221). That will require a study of and engagement with generative A.I. and its predictive modelling as and in itself. That has yet to be undertaken. Broekman has shown us the way; it is up to us to move this project out from its comfortable human semiosphere into the generative multiverse the human has created in its own image but which is now loose on and in the world. “Indeed, changes do also change!” (Broekman, supra, p. 199).

 


