As both liberal democratic and Marxist-Leninist societies lurch inexorably forward in the construction of systems of algorithmic governance, the transposition of the political and normative preferences of society into those systems becomes a more significant and immediate issue. That transposition is necessary because, like words, neither data nor analytics has an ideology or a morality. Neither is inherently politically sensitive, and both have little regard for social taboos. That detachment produces the danger: data and analytics can be weaponized to undermine social taboos or, as in the early 21st century, harnessed to the concentrated efforts of leading forces in both liberal democratic and Marxist-Leninist states to engineer a specific form of morality, and of social relations, through the instruments of collective control. This is neither a terrible thing nor unusual in any respect. But it is an insight that must be foregrounded to avoid the great error of contemporary debates about the mechanisms of social control: the assumption that data, information, or analytics is inherently imbued with a soul, a consciousness, or a character sufficient to express a forbidden bias, given the proclivities and objectives of the leading ideology (expressed either in constitutional law or in the general program of leading forces constituted as the organized holders of political authority).
That error, and especially the consequences of investing data, information, or analytics with an inherent moral character, continues to spice the still nascent efforts by the vanguards of the traditional legal and constitutional orders to confront and embed these modalities (these languages and forms) of law within their political, economic, social, and legal systems. In the liberal democratic West, where many states continue to wrestle with the aftermath and consequences of a race- and ethnicity-based colonial and imperial system, the issues become acute and usually erupt where they touch on race (however constructed and deployed in national context). The issues are compounded where retrograde elements of society remain under-socialized to the realities of contemporary social justice moralities (however manifested in local context).
The folks at the Völkerrechtsblog have now confronted this issue in a very interesting online symposium: Racial Profiling in Germany. "In this symposium, scholars reflect on the European Court of Human Rights’ recent Basu v. Germany decision (215/19; Judgment 18.10.2022 [Section III Information Note published; Text of 3rd Section here and here]). They situate the decision within recent conversations surrounding race and racism in Germany and in international human rights discourse more broadly." (Racial Profiling in Germany Symposium). The Symposium Introduction provides a nice description:
In Basu v. Germany, an international body reminded Germany once again of its less-than-perfect human rights record regarding racial discrimination. In this case, the European Court of Human Rights (ECtHR) ruled that Germany had violated the right to privacy according to Article 8 of the European Convention on Human Rights (ECHR), in conjunction with the right to non-discrimination (Article 14 ECHR), by failing to provide a proper and independent investigation into allegations of racial profiling. This symposium takes the decision as a starting point to reflect on the practice of racial profiling in Germany and, more generally, on the place of race and racism in Germany and in international human rights discourse. (Racial Profiling in Germany)
While the symposium seeks to translate the practices at issue in Basu v. Germany into the language of race and racism (and this appears to be an easy case in that respect, given the sensibilities of the times), it may be worth looking at the larger issues lurking in the background.
I will be posting the Symposium contributions here and will also contribute some brief reflections on, and engagement with, each of the excellent and thought-provoking contributions. For this Part 1 we consider Dr. Sué González Hauck's excellent introduction.
If racism can be understood as a manifestation of profiling for the wrong reasons, that is, through an analytics that presupposes that race inevitably triggers something, does it follow that race is inherently racist as a point of data? (That cannot be true if one is looking at this from a racial justice or reparations perspective, for example.) If that is not the case, then the issue might be the challenge of transforming the use of race in analytics while preserving its criticality in data-based governance; or perhaps the equally challenging confrontation with the gaps between academic, political, cultural, and legal approaches to the identification of data on the basis of taxonomies that may reflect current culture but that have been slated for transformation by elites. We return, in an odd and roundabout way, to an insight somewhat inadvertently identified a generation ago in Posner's 1992 Sex and Reason, or more pointedly to the signification of things (Culturally Significant Speech: Law, Courts, Society, and Racial Equity). In other words, in an algorithmically focused system of governance, color, though in itself not a crime or an element of criminality (for example), may nonetheless have operational significance.
That raises a number of questions that touch on the evolving conversations in developed states about the nature and role of multi-ethnic, multi-religious, and multi-racial societies, a conversation first seriously attempted in the aftermath of the decision to break up some of the European empires in 1918. Among them: Is it even possible to consider such characteristics if they are connected to historical cultural assumptions? Would it make a difference if bias is exercised by an individual officer rather than systematically? Does balancing the costs and benefits of the recognition and use of such data for predictive or descriptive analytics serve a useful function? How does one distinguish between the use of such data for some purposes (racial or social justice campaigns and policies) and its prohibition in others (e.g., policing)? More generally, profiling, when attached to elaborated systems of interlinked predictive or descriptive analytics, nudges further thinking about the way that constitutional principles affect, or ought to affect, the signification of something (like color) in law, economics, politics, and social relations.
The repercussions are significant. They touch on issues of compliance and accountability as both move toward more metrics-based forms. They also suggest the difficulty of race-based therapeutics: racial justice requires the development of a semiotics of race (and, for example, of gender and other privileged categories), yet those same categories are also the subject of what may be forbidden to states and other actors. That suggests both the need for a greater development of the ideologies of data recognition around sensitive "facts" and a broader discussion about the use of that data (bias) both as a category and as an object of analytics. One is only at the beginning of these conversations.
Other Essays and Reflections produced for this online symposium may be accessed here:
Part 1: Introduction
Part 2 Observations on the Case Information Note
Part 3: Observations on Elisabeth Kaneza, "Human Rights Standards for Accountability and Effective Remedies"
Part 4: Observations on Anna Hankings-Evans, "Race and Empire in International Law"
Part 5: Observations on Lisa Washington, "Racist Police Practices"