Friday, October 06, 2017

Flora Sapio: Reflections on 《不能让算法决定内容》 "Do Not Let Algorithms Decide Content"

In the run-up to the 19th Chinese Communist Party Congress one expects a certain quickening of the pace of political discussion as decisions become necessary around sometimes contentious choices for moving the nation forward. While most of these discussions occur within the CCP itself, some occasionally leak out to the public--in what might be assumed to be carefully measured disclosures in state-sanctioned media.

More specifically, what appears to be an unusual opinion essay found its way onto the pages of China's People's Daily. That essay, 《不能让算法决定内容》 "Do Not Let Algorithms Decide Content," suggests that there are some qualms about the scope and application of the "big data management" initiative and its related social credit architecture, which together appear to be the vanguard of a transformation of the governing apparatus of state and party. The qualms come from both the Chinese left and right. In both cases the qualms potentially reveal the weaknesses of current approaches to analysis and critique of the emerging governance structures represented by big data management initiatives and their social credit programs.

Flora Sapio and I have written short reflections on this article. The article raises fundamental issues of law, governance, and culture at a time when the modalities through which these great institutions of human societal organization operate are undergoing profound change.

My Reflections may be accessed HERE.

This post also includes the text of the article 《不能让算法决定内容》 "Do Not Let Algorithms Decide Content." An ENGLISH TRANSLATION is also available at the China Social Credit System Blog, with thanks to Flora Sapio: HERE.

Reflections on 《不能让算法决定内容》 "Do Not Let Algorithms Decide Content"
Flora Sapio
The run-up to each Congress of the CCP is a time when policy positions are advanced through channels and in ways which look esoteric to most observers. One opinion piece contributing to these ongoing debates, "One Should Not Let Algorithms Determine Content," was published on October 5 in the People's Daily, the official press organ of the Central Committee of the CCP.

I am commenting on the article, as opposed to ‘decoding’ or ‘explaining’ it, because the trends it discusses are real, and they transcend the conventional and often instrumental cognitive divide of “China vs. The Rest”. While the article is part of a domestic debate, it touches upon points essential to big data management and governance, as these exist across borders.

The world of human-computer interaction is the world of search and ranking algorithms (among others). Algorithms have changed the way we write for the public: short, hopefully agile web pieces are more popular than journal articles. Algorithms filter the content we are exposed to on social media, determine who or what appears on the first page of search results, calculate the fastest route for our car trips, and so on. They have changed our lives in ways unimaginable only 20 years ago.
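The filtering and ranking described above can be reduced, in caricature, to a scoring function applied to every candidate item. A toy sketch in Python follows; the posts, field names, and weights are invented for illustration and do not come from any real platform:

```python
# Toy feed ranking: each post receives a score, and the feed shows
# the highest-scored posts first. The weights below are arbitrary
# illustration values; real platforms use far richer signals.
posts = [
    {"title": "Long investigative report", "clicks": 120, "shares": 15},
    {"title": "Celebrity gossip",          "clicks": 900, "shares": 300},
    {"title": "Policy analysis",           "clicks": 200, "shares": 40},
]

def score(post):
    # Purely click- and share-driven scoring: precisely the
    # "traffic above all" incentive the People's Daily piece criticizes.
    return post["clicks"] + 2 * post["shares"]

feed = sorted(posts, key=score, reverse=True)
for post in feed:
    print(post["title"], score(post))
```

Even this minimal sketch makes the editorial stakes visible: nothing in the scoring function knows or cares whether the content is valuable, only whether it is clicked and shared.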

Rather than being ahead of this trend, many of us are just beginning to catch up with it. One of the most ancient publishing houses, for instance, has been educating academic authors in web publishing, web branding, e-commerce, and Search Engine Optimization. Despite the considerable weight they carry in determining an author's ranking, and hence an author's career opportunities, these activities are still frowned upon as being more suited to low-skilled technicians than to intellectuals.

Already in the 1980s, the Italian philosopher Umberto Eco (翁贝托·埃可) compelled his students to learn at least the basics of programming languages: intellectuals had to be the masters of human-computer interaction, rather than its slaves. Eco's pedagogy deeply scandalized all those with a deep-seated fear that computers would, one day, replace them.

Less than forty years later, distance learning has in part replaced face-to-face contact in classrooms... Is there anything intellectuals have missed about algorithms? To use the jargon popular among European intellectuals in the late 1970s and early 1980s:

Algorithms are not means of production. Labour, subjects of labour and instruments of labour are physical, while algorithms are immaterial. Algorithms do not produce anything tangible. They are mathematical formulas used to collect, sort, and organize information that is produced by or belongs to someone else.

An algorithm is just a procedure used to solve a problem by processing data, automating manual tasks, or performing calculations. Algorithms have always existed in the most diverse societies. What else are the methods of the 九章算术 (The Nine Chapters on the Mathematical Art), if not algorithms created to solve problems in engineering, trade, and taxation? Seen from this point of view, governance through the aid of data and algorithms is not new. What is new is rather our focus on algorithms as an object of study and contention.
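The fraction-reduction rule in the first chapter of the 九章算术 proceeds by "mutually subtracting and diminishing" (更相减损) the two terms until they are equal, the remaining number being their common divisor. It is an algorithm in the strict modern sense, and can be rendered directly in Python (the function name is my own; the procedure is the one described in the text):

```python
def jiuzhang_gcd(a, b):
    """Greatest common divisor by repeated subtraction, following the
    fraction-reduction rule of the Nine Chapters on the Mathematical Art:
    subtract the smaller term from the larger until the two are equal."""
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

# Reducing the fraction 49/91: divide both terms by their common divisor.
d = jiuzhang_gcd(49, 91)
print(d, 49 // d, 91 // d)  # prints 7 7 13, i.e. 49/91 reduces to 7/13
```

A two-thousand-year-old administrative text and a modern program thus describe the very same procedure; only the medium of execution has changed.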

Algorithms are part of the mode of production. Algorithmic procedures change, becoming increasingly complex and arcane, as societies move from one mode of production to another.

At the time when the 九章算术 was compiled, imperial mathematicians performed their calculations using brush and paper. Nowadays, imperial mathematicians have been replaced by algorithm engineers, and computers have taken the place of the abacus.

In the European Middle Ages one would go through a lengthy procedure to make an effigy, which one would then use to worship the undying spirit of kings and popes, or to destroy an enemy through witchcraft. For this reason, those found to possess the birth data of the Pope or anyone else could face grave accusations.

Today, what is praised or attacked is not the actual body of the King, the Pope, or the commoner, but their online persona. An online persona, however, is not created by algorithms. An algorithm is unable to protect data and information, much less the online persona, for the simple reasons that:

  • an algorithm is not the law, conventionally understood as regulation produced by the State (in the “West”) or by the Party-and-the-State (in China)

  • an algorithm may lack transparency, even though deducing how an algorithm works is always possible.

As noted in the People's Daily op-ed, this opens algorithms up to the possibility of self-serving manipulation (as in black hat SEO) rather than the opposite (i.e., a focus on valuable content, as in white hat SEO).

Relations of production, ideologies, and religions can be easily codified in the law, and regulated through the law. Algorithms cannot.

First, algorithms are at the same time a governance tool and a commodity.

Algorithms are procedures expressed in programming languages, rather than in natural language. These procedures are written by those with the relevant skills, and then sold, purchased, or licensed.

Algorithms are governance tools used by the state and by public and private enterprises, so their production, circulation, and licensing cannot be prohibited by law. Industry standards (行业标准), however, can be created to mitigate the risk that science be manipulated to lend legitimacy to spurious correlations.

Second, algorithms are a product of human creativity.

Algorithmic calculations were used, more than 2,000 years ago, by Greek, Arabic, Chinese, and Indian mathematicians to promote the progress of their respective societies. In the 1940s, Alan Turing's cryptographic calculations contributed to an earlier end of WWII. Principles, values, and goals beneficial to human development are already embodied in the law. Human creativity is what allows those principles, values, and goals to become more and more concrete.

Third, without data, algorithms are useless.

Algorithms indeed hide the standpoint of their designer. Algorithms are difficult to understand, because they are collections of mathematical formulas. So even though pure algorithms are public, only the very few have the ability to read them and understand what they mean. Algorithms can be protected under intellectual property legislation only when they are applied to a specific purpose, for instance in a piece of software.

Algorithms derive all of their power from data. Only when data and algorithms do not enjoy the same level of legal protection, when one stands naked before the algorithm, so to speak, can algorithms spin out of control.


The article follows, in English translation (original Chinese: 《不能让算法决定内容》, published under the People's Daily commentary byline 宣言 [Xuan Yan]):

People's Daily, October 5, 2017, page 4

With the wide application of big data and artificial intelligence, a number of commercial websites and mobile news clients (including livestreaming platforms, browsers, search engines, and audio-video software) are all using algorithms as a kind of "mind-reading" technique, tailoring information to each user and creating a new, personalized reading experience; obtaining information has moved from "fishing a needle out of the sea" to a "made-to-order" mode. Yet technology is often a cold, double-edged sword. On the scales of value and interest, the so-called algorithm has become a weight on the side of interest: everything revolves around traffic, click counts and share counts call the tune, "clickbait headlines" run rampant, value orientations go astray, and content is reduced to an appendage.

In an online world whose screens are flooded with "viral hits," human attention is the scarcest resource: readership of "100,000+" and views in the tens of billions have become the "eyeball economy" that everyone chases. Some platforms, under the banner of customized services and precision push, have turned algorithms into tools for skirting the rules; infringement and piracy of original works and the production of information in violation of regulations are commonplace. Whether one deliberately searches for a term or inadvertently opens a link, homogeneous, cookie-cutter information, irresponsible fallacies, commercial advertisements of every stripe, and the racy, the gossipy, and the slanderous take the stage in every flamboyant form, pushed out endlessly and all at once, forcing the broad audience to accept them. Like a hornets' nest that has been poked, they swarm into one's face in an instant, leaving people "stung all over" inside the "cage" of the algorithm.

Advanced technology should bring us Ali Baba's treasure cave, not Pandora's box. Just as finance must return to serving the real economy, it is time for algorithms to return to their original function of serving content; things must not be turned upside down, with content led by the nose by algorithms. However much technology changes, however much channels of dissemination change, the status of content as king has not changed and will not change. Cut off from authoritative, objective, and fair news reporting, and from positive, healthy, and uplifting content, even the most powerful algorithm is water without a source, a tree without roots. Some new media platforms keep insisting that they are merely "porters of news," but the result of such "porterage" is that large quantities of information of unknown origin, harboring every kind of filth, run rampant in cyberspace. What is "ported" must follow rules and shoulder responsibility; it must never be done willfully, still less excused by the pretext of automatically distributing content to satisfy users' reading demands.

Conduct arises from oneself; reputation is conferred by others. Algorithms hide the standpoint of their designers. Some new media platforms frequently adjust and change their algorithms: is this technological innovation? Of course not; it is nothing more than using algorithms to achieve the greatest push volume and the highest click-through rate, which in the end is the pursuit of maximum profit. Internet companies are not "enclaves of public opinion"; they too must uphold the unity of social and economic benefit, take on the social responsibility that corresponds to their role as media, remember the source of the water they drink, repay society, and benefit the people. Content distribution cannot do without an "editor-in-chief"; however refined an algorithm, it must be fitted with a "safety valve" and stronger content gatekeeping; it will not do for everyone to sing their own tune and blow their own horn. Platforms must increase the transparency of their operation, open up channels for user settings, and return the right to choose information to users, so that "bad money" does not drive out "good." They must uphold the correct guidance of public opinion, strengthen the leading role of values, and optimize push methods by combining "human recommendation + intelligent screening," vigorously spreading and promoting mainstream values, so that the merely "interesting" does not replace the "meaningful."

1 comment:

Unknown said...

The article "不能让算法决定内容" is the first of a People's Daily trilogy on algorithms. The other two articles can be found through the links below (both in Chinese only):
the second article 别被算法困在“信息茧房” ("Don't Let Algorithms Trap You in an 'Information Cocoon'")
the third article 警惕算法走向创新的反面 ("Beware of Algorithms Becoming the Opposite of Innovation")