Sunday, December 29, 2019

Ruminations 89(4) (Data, Discretion, and Analytics in the State-Enterprise Complex): Looking Back on 2019 in Epigrams and Aphorisms

The year 2019 is ending with the great rifts--opened in 2016, exposed in 2017, and acquiring greater urgency and revealing the power of their consequences in 2018--now fully exposed. More than exposed, 2019 marked their explosion, the aftermath of which, in 2020, will be marked by the start of a variety of end games in law, society, politics, culture and economics. Global divisions, already more acute in 2018, finally moved toward climax in virtually all states, and with respect to all systems--legal, compliance, religious, societal, cultural, and economic. While 2020 will likely be the year in which the climax events of 2019 play themselves out, the year 2019 was in many ways the year of the "big bang" for the third decade of the 21st century.

Indeed, 2019 was rich with rupture-climax events.  But it might also be said that 2019 was as much the year of the anti-climax--that is, the year in which events long anticipated finally burst, fully ripened. That was, of course, the story of the impeachment of President Trump by the US House of Representatives.  But it was also the case with the decoupling of the Chinese and US economies (and note, not their separation or segregation), marked by rupture at the beginning of the year and a first stage arrangement at its end. This was also the year of Brexit--not just Brexit itself, but the metaphor of Brexit for the great inversions of political affiliation that appeared to affect political communities worldwide.  In some sense, this was also the great year of Jew baiting--everyone, it seems, had something to say about the People of Israel, even as their actions usually belied their words.  It was also the year of explosions.  There were explosions in Hong Kong, in Bolivia, in the UK, and in that stew pot that is Syria-Lebanon.  This was also the year of the rise of the core of leadership--in Turkey, Russia, China, the United States, Germany, and France.  When one thinks about 2019 in the future, one will think--climax, explosion, rupture, and revelation.

With no objective in particular, this post and a number that follow provide my summary of the slice of 2019 to which I paid attention, through epigrams and aphorisms.  It follows an end of year tradition I started in 2016 (for those see here) and continued in 2017 (for those see here) and 2018 (for those see here).

This is Part 4 (Data, Discretion, and Analytics in the Administrative State-Economic Enterprise Complex), which considers 2019 as the year that data driven governance, and the legal management of administrative and political decision making, became a political, human rights, and trade issue.  One moves here across a broad arc of governance that jumped into popular consciousness even as it remains shrouded in complexity. 2019 brought us both a muscular law that has been used to constrain political decision making in the United States and Britain, and an increasing acceptance of the governmentalization of the private sector, which increasingly governs through data analytics rather than rules. 2019 was the year that liberal democracies embraced more warmly the notion of law as a constituting element in systems of discretionary decision making. At the same time, the hand wringing about machine governance (through machine learning mechanisms, so-called artificial intelligence) produced law that was itself a creature of data driven governance.
Share your own!

Ruminations 89: 2019 in Epigrams and Aphorisms:
Ruminations 89(1) (Blasphemies).
Ruminations 89(2) (Cults and Cult Objects).
Ruminations 89(3) (Impeachments).
Ruminations 89(4) (Data, Discretion, and Analytics in the State-Enterprise Complex).
Ruminations 89(5) (The "Jewish Question" as Global Social Ordering).
Ruminations 89(6) (Metamorphosis).

Data and discretion are the twin pillars of governance that is increasingly displacing the modernist state and its rule of law enterprise.  In that context it is important to consider meanings.

data (n.): 1640s, "a fact given or granted," classical plural of datum, from Latin datum "(thing) given," neuter past participle of dare "to give" (from PIE root *do- "to give"). In classical use originally "a fact given as the basis for calculation in mathematical problems." From 1897 as "numerical facts collected for future reference." Meaning "transmittable and storable information by which computer operations are performed" is first recorded 1946. Data-processing is from 1954; data-base (also database) "structured collection of data in a computer" is by 1962; data-entry is by 1970.
discretion (n.): c. 1300, dyscrecyounne, "ability to perceive and understand;" mid-14c., "moral discernment, ability to distinguish right from wrong;" c. 1400, "prudence, sagacity regarding one's conduct," from Old French discrecion and directly from Medieval Latin discretionem (nominative discretio) "discernment, power to make distinctions," in classical Latin "separation, distinction," noun of state from past-participle stem of discernere "to separate, distinguish" (see discern).  Phrase at (one's) discretion attested from 1570s (earlier in (one's) discretion, late 14c.), from sense of "power to decide or judge, power of acting according to one's own judgment" (late 14c.). The age of discretion (late 14c.) in English law was 14. 
Discretion is grounded in data but based on moral judgment that is itself tied to data from which the inferences necessary for choices become possible.  The infinite loop of data and discretion is now bound up in systems that are displacing the normative constructs of law (and their constraints on decision making).  Beyond these, discretion and data are tied to risk--the minimization of which has displaced morals as the fundamental predicate for the normative foundations of law.  And where once hermeneutics formed the core challenge of law when it moved from theory to working system, now the "facticity" of data becomes the foundation on which data driven governance, and compliance, are founded.

1.  We now acquire significance by the data we can contribute to the functioning of the systems in which we are embedded; the university as a learning factory has now acquired a side business, the harvesting of data that can be used to manage the social construction of its students.

"Short-range phone sensors and campuswide WiFi networks are empowering colleges across the United States to track hundreds of thousands of students more precisely than ever before. Dozens of schools now use such technology to monitor students’ academic performance, analyze their conduct or assess their mental health. . . . Americans say in surveys they accept the technology’s encroachment because it often feels like something else: a trade-off of future worries for the immediacy of convenience, comfort and ease. If a tracking system can make students be better, one college adviser said, isn’t that a good thing? But the perils of increasingly intimate supervision — and the subtle way it can mold how people act — have also led some to worry whether anyone will truly know when all this surveillance has gone too far. 'Graduates will be well prepared … to embrace 24/7 government tracking and social credit systems,' one commenter on the Slashdot message board said. 'Building technology was a lot more fun before it went all 1984.'" (Colleges are turning students’ phones into surveillance machines, tracking the locations of hundreds of thousands).

2. Data harvesting requires justification; but the justification need not remain connected to the harvesting; that is the great fallacy of efforts to constrain the use of data but not its harvesting.

“We have students so concerned about their privacy that they’re resorting to covering their [laptop] cameras and microphones with tape,” a junior said at the October 18, 2018 meeting. Woodbridge had recently joined hundreds of other school districts across the country in subscribing to GoGuardian, one of a growing number of school-focused surveillance companies. Promising to promote school safety and stop mass shootings, these companies sell tools that give administrators, teachers, and in some cases parents, the ability to snoop on every action students take on school-issued devices. . . . The capabilities of software programs like GoGuardian vary, but most can monitor the user’s browsing history, social media activity, and location, and some even log keystrokes. That surveillance doesn’t stop at the school doors, but continues everywhere children carry their school-issued computers and whenever they log into school accounts. The companies that make this software—popular brands include Securly, Gaggle, and Bark—say that their machine learning detection systems keep students safe from themselves and away from harmful online content. Some vendors claim to have prevented school shootings and intervened to save thousands of suicidal children.”

3.  Data has no ideology; the choice of data is an expression of ideology; and the analytics that consumes data has a normative bias--one identifies data because it furthers an analytics that in turn makes it possible to control; that is the new face of law; what is controlled is a matter of politics.

There are multiple reports that the Chinese government is gathering people’s biometric data to track church attendance at different locations in Hubei province. The news adds to the growing list of human and religious rights violations being committed by the Chinese Communist Party (CCP). “The president of the Two Chinese Christian Councils of Huangshi city explained to believers that congregants’ fingerprint and facial data collection is one of the priorities in the churches’ work this year,” says watchdog site Bitter Winter. “She also said that this initiative helps to monitor gatherings at state-run churches and record attendance, warning that those believers who do not have their biometric data in the system will not be allowed into churches in the future.” In addition to mandating that people have their faces and fingerprints scanned, the Two Chinese Christian Councils is recording church members’ personal and family information. This is a requirement for all Three-Self churches (which are government approved) in Huangshi city in Hubei.

4. Data driven governance has produced an intensification of the need to harvest data, to control data warehouses, and to develop machines that might use both effectively: what better instrument than human rights in business to mask these objectives.

When Beijing declared plans to become the world leader in artificial intelligence (AI) in 2017, it alarmed the US and the rest of the world, according to former US secretary of state John Kerry. In a conference in May, Kerry said Chinese President Xi Jinping’s announcement was not the “wisest” move: “It would have probably been smart to go try to do it and not announce it, because the announcement was heard in Washington and elsewhere.” His words foreboded a storm approaching Chinese AI firms. Reports days later indicated Washington was considering placing several Chinese surveillance companies on the US Entity List, the same export control blacklist telecoms equipment giant Huawei was put on, effectively banning them from purchasing core components from American companies. Trump administration officials confirmed the news in October, announcing that eight Chinese companies – including national AI champions SenseTime, Megvii and Yitu – were to be added to the Entity List, along with 20 police departments. “These entities have been implicated in human rights violations and abuses in the implementation of China’s campaign of repression, mass arbitrary detention, and high-technology surveillance against Uygurs, Kazakhs, and other members of Muslim minority groups”, the department filing said. (2019 was the year AI became a political, human rights and trade issue. Where does this leave China’s AI superstars?).

5. Data driven governance is said to be extra-legal--its judgment is based on algorithms without the complications of legal process; and yet from the perspective of governance, data driven analytics is the essence of law, one in which the proof is in the data, the law is in the analytics, and the sentence is pronounced by the algorithm and enforced by the list.

Nobody likes antisocial, violent, rude, unhealthy, reckless, selfish, or deadbeat behavior. What’s wrong with using new technology to encourage everyone to behave? The most disturbing attribute of a social credit system is not that it’s invasive, but that it’s extralegal. Crimes are punished outside the legal system, which means no presumption of innocence, no legal representation, no judge, no jury, and often no appeal. In other words, it’s an alternative legal system where the accused have fewer rights. Social credit systems are an end-run around the pesky complications of the legal system. Unlike China’s government policy, the social credit system emerging in the U.S. is enforced by private companies. If the public objects to how these laws are enforced, it can’t elect new rule-makers. (Uh-oh: Silicon Valley is building a Chinese-style social credit system).

6. Data driven governance systems inherently reflect the political-economic model in which they are used; Marxist Leninist Social Credit will be Marxist and Leninist in its principles and objectives, grounded in central planning and collective objectives; liberal democratic data driven governance will be free market and societally based in its principles and operation, grounded in markets, fracture, and a governmentalized private sector.

"Here are some of the elements of America’s growing social credit system. Insurance companies: The New York State Department of Financial Services announced earlier this year that life insurance companies can base premiums on what they find in your social media posts. That Instagram pic showing you teasing a grizzly bear at Yellowstone with a martini in one hand, a bucket of cheese fries in the other, and a cigarette in your mouth, could cost you. On the other hand, a Facebook post showing you doing yoga might save you money. . . . PatronScan: A company called PatronScan sells three products—kiosk, desktop, and handheld systems—designed to help bar and restaurant owners manage customers. . . It’s now easy to get banned by Uber, too. Whenever you get out of the car after an Uber ride, the app invites you to rate the driver. What many passengers don’t know is that the driver now also gets an invitation to rate you. Under a new policy announced in May: If your average rating is “significantly below average,” Uber will ban you from the service. WhatsApp: You can be banned from communications apps, too. For example, you can be banned on WhatsApp if too many other users block you. You can also get banned for sending spam, threatening messages, trying to hack or reverse-engineer the WhatsApp app, or using the service with an unauthorized app." (Uh-oh: Silicon Valley is building a Chinese-style social credit system).
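The Uber rule quoted above, a ban for riders whose average rating is "significantly below average," shows how easily this kind of sanction reduces to executable logic: the sentence is a set membership computed from data, and the "list" is the enforcement mechanism. A minimal sketch in Python (the threshold, names, and ratings here are hypothetical illustrations, not any company's actual policy):

```python
from statistics import mean

def enforce_rating_policy(ratings_by_user, threshold=0.5):
    """Ban any user whose average rating falls more than `threshold`
    stars below the population-wide average.  Purely illustrative:
    the cutoff and the notion of "significantly below average" are
    assumptions, not a real platform's rule."""
    averages = {user: mean(r) for user, r in ratings_by_user.items()}
    population_avg = mean(averages.values())
    # The "list": membership alone carries the sanction, with no
    # hearing, no judge, and no appeal.
    return {user for user, avg in averages.items()
            if avg < population_avg - threshold}

banned = enforce_rating_policy({
    "alice": [5, 5, 4],
    "bob":   [5, 4, 5],
    "carol": [2, 3, 2],
})
# carol's average sits well below the population average, so she
# lands on the list; alice and bob do not.
```

The point of the sketch is that every normative judgment (what counts as "significantly below," which ratings are collected at all) lives in the analytics, not in any statute.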

7.  The vanguard looks at the out-of-control masses as populism in the West and political chaos in China; the masses look at a self-satisfied and lecturing vanguard as a bloated, corrupt aristocracy in the West and as a party that fails to adhere to its own ideology in China; for both, social reconstruction is necessary, the construction of the ideal citizen, worker, official, leader--and for that the algorithm, not the statute, serves society best.

The OECD Principles on Artificial Intelligence were adopted on 22 May 2019 by OECD member countries upon approval of the OECD Council Recommendation on Artificial Intelligence. The OECD AI Principles are the first such principles signed up to by governments. The OECD's website announcing adoption expressed the hope that the "OECD AI Principles set standards for AI that are practical and flexible enough to stand the test of time in a rapidly evolving field. They complement existing OECD standards in areas such as privacy, digital security risk management and responsible business conduct." 
The Recommendation consists of five normative principles (what the OECD terms "values based") grounded in the sustainability enhancing notion of responsible stewardship that has gotten much traction in the business context among influential leaders in recent years (e.g., here).  They include:
1--AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
2--AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
3--There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
4--AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
5--Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
These five principles are then directed to the state, as is the habit of the OECD regulatory form.  That direction is summarized in five recommended actions that states can take. (OECD Principles on Artificial Intelligence and Some Brief Reflections; Recommendation of the Council on Artificial Intelligence)

8. The extent to which the courts must defer to the discretionary decisions of administrative bodies--as to both the meaning and application of their own rules--is the extent to which the state and its apparatus will defer to the discretionary decisions of coders and analysts as to both the identification of data and the construction (meaning) of the analytics to which data is fed; that courts and those who seek advantage through litigation are ignorant of the connection changes nothing.

"As we explain in this section, the possibility of deference can arise only if a regulation is genuinely ambiguous. And when we use that term, we mean it—genuinely ambiguous, even after a court has resorted to all the standard tools of interpretation. Still more, not all reasonable agency constructions of those truly ambiguous rules are entitled to deference. As just explained, we presume Congress intended for courts to defer to agencies when they interpret their own ambiguous rules. . . . But when the reasons for that presumption do not apply, or countervailing reasons outweigh them, courts should not give deference to an agency’s reading, except to the extent it has the “power to persuade.” . . . And although the limits of Auer deference are not susceptible to any rigid test, we have noted various circumstances in which such deference is “unwarranted.” Ibid. In particular, that will be so when a court concludes that an interpretation does not reflect an agency’s authoritative, expertise-based, “fair[, or] considered judgment.”  (Kisor v. Wilkie, 588 U.S. --- (2019), Judgment of the Court, slip op. pp. 11-12).

"Auer represents the apotheosis of this line of cases. In the name of what some now call the Auer doctrine, courts have in recent years “mechanically applied and reflexively treated” Seminole Rock’s dictum “as a constraint upon the careful inquiry that one might ordinarily expect of courts engaged in textual analysis.” Under Auer, judges are forced to subordinate their own views about what the law means to those of a political actor, one who may even be a party to the litigation before the court. After all, if the court agrees that the agency’s reading is the best one, Auer does no real work; the doctrine matters only when a court would conclude that the agency’s interpretation is not the best or fairest reading of the regulation." (Kisor v. Wilkie, 588 U.S. --- (2019), Gorsuch, J., concurring, slip op. at p. 9; see also Kagan and Gorsuch Clash Over Judicial Deference to the Administrative State: Who will rein in the ever-expanding administrative state?).

9. Even as judges seek to reduce the scope of administrative decision making that is beyond judicial reconsideration, they have begun as well to constrain the scope of political decision making; but the judges that would reduce deference to the discretion of administrators would afford the political actor a wide scope of discretion, while the judges that would protect the scope of the discretion of administrators would severely constrain the discretion of political decision making; it is in this context that the judicial control of data will be decided, but in the process the analytics and their algorithmic consequences may well break free of judicial control.

Earlier today (24 September 2019) the UK Supreme Court ruled on two questions regarding an Order to prorogue the UK Parliament. First, was it justiciable: in other words, could the decision to prorogue be subjected to scrutiny by the courts? Second, was it legal? The court found that it did have the power to rule on this question. It went on to find that the Government’s decision to prorogue Parliament for five weeks was unlawful, and that Parliament has not in fact been prorogued. . . The Court held that the power to prorogue Parliament is a prerogative power: “a power recognised by the common law and exercised by the Crown… on advice” of the Prime Minister. The Court did not express a view on whether Her Majesty is obliged to act on that advice. The Court then asserted a right to exercise supervisory jurisdiction over decisions of the executive, which was said to have ample judicial precedent. . . . The Court noted its historical role in protecting: "‘Parliamentary sovereignty from threats posed to it by the use of the prerogative powers and in doing so have demonstrated that prerogative powers are limited by the principle of Parliamentary sovereignty.’" [para. 41]. The Court went on: "‘The sovereignty of Parliament would, however, be undermined as the foundational principle of our constitution if the executive could, through the use of the prerogative, prevent Parliament from exercising its legislative authority for as long as it pleased.’" [para 42]. As a result, the Court determined that the power to prorogue cannot be unlimited and must, therefore, be subject to judicial review. On the legality of the prorogation, the Court concluded: "‘It is impossible for us to conclude, on the evidence which has been put before us, that there was any reason – let alone a good reason – to advise Her Majesty to prorogue Parliament for five weeks, from 9th or 12th September until 14th October.
We cannot speculate, in the absence of further evidence, upon what such reasons might have been. It follows that the decision was unlawful.’" [para. 61] (Decision of the Supreme Court on the Prorogation of Parliament).

"At the heart of this suit is respondents’ claim that the Secretary abused his discretion in deciding to reinstate a citizenship question. We review the Secretary’s exercise of discretion under the deferential “arbitrary and capricious” standard. See 5 U. S. C. §706(2)(A). Our scope of review is “narrow”: we determine only whether the Secretary examined “the relevant data” and articulated “a satisfactory explanation” for his decision, “including a rational connection between the facts found and the choice made.” Motor Vehicle Mfrs. Assn. of United States, Inc. v. State Farm Mut. Automobile Ins. Co., 463 U. S. 29, 43 (1983) (internal quotation marks omitted). We may not substitute our judgment for that of the Secretary, ibid., but instead must confine ourselves to ensuring that he remained “within the bounds of reasoned decisionmaking,” Baltimore Gas & Elec. Co. v. Natural Resources Defense Council, Inc., 462 U. S. 87, 105 (1983). The District Court set aside the Secretary’s decision for two independent reasons: His course of action was not supported by the evidence before him, and his stated rationale was pretextual." (Dept. of Commerce v. New York, 588 U.S. --- (2019), slip op. at p. 16).

10. In the face of data driven governance and the emerging power of discretionary decision making, the law is reduced to gesture; the less likely the gesture is to be effective, the grander its forms; gesture becomes politics.

"Legislators in San Francisco have voted to ban the use of facial recognition, the first US city to do so. The emerging technology will not be allowed to be used by local agencies, such as the city’s transport authority, or law enforcement. Additionally, any plans to buy any kind of new surveillance technology must now be approved by city administrators. Opponents of the measure said it will put people’s safety at risk and hinder efforts to fight crime. . . "With this vote, San Francisco has declared that face surveillance technology is incompatible with a healthy democracy and that residents deserve a voice in decisions about high-tech surveillance," said Matt Cagle from the American Civil Liberties Union in Northern California. "We applaud the city for listening to the community, and leading the way forward with this crucial legislation. Other cities should take note and set up similar safeguards to protect people's safety and civil rights.". . . The new rules will not apply to security measures at San Francisco’s airport or sea port, as they are run by federal, not local, agencies." [And of course it does nothing to regulate privatized uses of facial recognition.] (San Francisco is first US city to ban facial recognition).
