Friday, August 09, 2024

"To Serve Man": 杨晓光, 陈凯华: 国家安全视角下的人工智能风险与治理 [Yang Xiaoguang and Chan Kaihua: Risks and Governance of Artificial Intelligence from the Perspective of National Security]

It seems everyone must have an AI Law. Europe's AI Act stands at the vanguard, though in a way it is merely a local expression of a developing consensus among globally interconnected leaders in the field, all of whom appear to share, in broad outline, a singular vision for taming, exploiting, and controlling AI, and for suppressing manifestations that do not conform to the orthodox vision. On the European version, see The Control of the Self and the Autonomous Virtual Collective Self: EU Parliament Approves Artificial Intelligence Act (With Links to High Level Summary); Useful Resources from the Future of Life Institute (FLI): "The AI Act Explorer". The Americans are sure to follow in one (piecemeal) way or another. The emerging consensus represents one way of approaching the "problem" of AI (The making of an Artificial Intelligence Law: Q&A with the creators of China's expert draft AI law [人工智能示范法2.0] (European Chinese Law Research Hub)).

"The environment as we perceive it is our invention." (Heinz von Foerster, "On Constructing a Reality," in  Observing Systems (Seaside, CA: Intersystems pub, 1981) , p. 288).

AI regulatory governance has always been a function of risk (eg here). The core issue, however, has been what sort of risk would serve as the basis around which governance could be built. That, in turn, is constructed out of choices respecting the understanding of what AI is (and isn't) and the relationship between AI and humans (my discussion here). Ethics is one perspective; human rights another; development of national productive forces is yet another (eg here). Most efforts at regulation are built on a deep foundation of risk-managing structures, and these, in turn, are dependent on the identified risks and their order of importance. Among them are the risk of bad data, risk to data sources (both human and institutional), the risk of bad analytics, the risk of misalignment between institutional objectives and analytics, and the risk of corruption, broadly understood. With generative systems comes the risk of autonomy--a risk that mimics, in part, the threat of unmanaged individual autonomy within collective social systems, irrespective of the ideology of individual autonomy created for the political economic system to which it is applied.

Tightly aligned with the notion of risk as the basic building block of governance are the parameters around which risk values are constructed. Principal among these in the West has been the hierarchically arranged regulatory trilogy of prevent, mitigate, and remedy--in that order of importance. In other places the hierarchies of risk may be different. The identification and measurement of risk is also highly contextual and may be a function of the value systems built into governing ideologies.

Lastly, the relationship of the State to the productive forces (including social, economic, cultural, religious and other forces) serves as the third leg of the tripod on which AI regulatory governance is built. For some, the State and its apparatus must be central as a directing, coordinating and overseeing force. For others the opposite is true: the State provides the platform and its rules but leaves it to producers and consumers to drive development and use (eg here; here). In a third variation, the State and the institutions of its productive forces develop a state-supervised system of interpenetrated techno-bureaucracies that oversee the development and consumption of AI with the object of ensuring that institutionalized productive forces express and further State objectives and policies, while providing a space for them to engage in market-based realizations of permitted forms and effects of activity.

All of these trajectories are driven to a large extent by an instrumental view of the object of regulation. The assumption appears to be that AI is an instrument, like a hammer. And like a hammer it can only be created by a human hand, and only a human hand can choose to use it to build a house or kill another human. More importantly, the human hand can be directed NOT to make the hammer, or not to use it in particular ways; the hammer has no volition, it exists only as and when it is picked up by the human hand or forged through human agency. A very nice and comforting narrative for those in the business of managing human power relationships. And one that has driven regulatory trajectories for the last decade. But AI can be animated--not so much Frankenstein (though that is the story that certain techno/intellectuals might like to use to scare the children into delegating authority to them) but more like a horse or a dog that has trained volition--and that training--that programming--is also directed "to serve man." But who or what is being served? And to whom? Indeed, "the emergence of algorithmic cultures is also accompanied by the blurring of clearly defined flows, creating an atmosphere of uncertainty about the identity of interactional partners" (Jonathan Roberge and Robert Seyfert, "What are algorithmic cultures?" in Algorithmic Cultures (Routledge 2016), 1, 19), and about the interactions themselves.

Pix credit here
That last reference is to the famous episode of the Twilight Zone television series (Season 3 Episode 24, 1962, based on a short story by Damon Knight (1950)); one that exposes the semiotic conundrums of the convergence of current approaches that share one thing passionately in common--the unalterable belief that humanity's relation to tech must be understood on humanity's terms--and no more. In the short story and the TV episode, a race of apparently benevolent aliens comes to Earth at a time of great need, offering advanced technology and bearing a book whose title is translated as "To Serve Man." That suggests sincerity, and eventually humans volunteer to travel to the alien home world. As the protagonist boards the aliens' ship, his friend reveals that the translation was literally correct but that its meaning was misunderstood--it was a cookbook. Nothing had changed, of course; but the fundamental understanding of the tool and its use shifts as the interpretive baseline shifts. The aliens are indeed serving man, but humanity is also serving them. Each uses the other as an instrument and both understand humanity as the object, in one case to better its collective life, in the other to eat people. None of this, of course, disturbs the sleep of those in the business of concocting the narratives and eventually the legalities of regulatory governance and whatever manner of human-oriented state supervision that now effectively defines the scope of approaches to the "regulation" of AI. And yet this is no "Twilight Zone"--and the world is not coming to an end. Rather, a sensitivity to the foundations of meaning and the power of perspective can expose its consequential effects. In this case, for example, the conundrum can be understood in a macro sense (the nightmare--AI will consume humanity), or in the micro sense (the way that humanity is centered changes the character of mutual consumption and also alters the character of what is created).

That brings us back to the human-centered perspectives that drive analysis and frame the issues and challenges of AI. While the trajectories of the framing of AI risk through regulatory governance as driven by European and American intellectuals, state officials, and the techno-administrators in both public and private organs worry about the ethics of AI, and its relationships to the rights of individuals and the overarching prerogatives of the state (eg here), Chinese theorists appear to have been more robustly considering its implications for State security as the grounding perspective for an otherwise similar approach to the management of AI from an instrumentalist perspective (eg here). Both are comprehensive systems that are given their distinctive character through the application of this instrumental lens. And both illustrate the scope of approaches to the structural construction of systems of human self-control in the engagement with tools that humans believe they control utterly and entirely. Both are based on the presumption that AI, like a light bulb, shines only when the electrical circuit is switched on, and then only if it is appropriately "lamped" (eg here).

A quite useful essay on the Chinese comprehensive-security-driven approach was recently published in 《国家治理》2024年第13期 ("National Governance" 2024 Issue 13): 杨晓光, 陈凯华: 国家安全视角下的人工智能风险与治理 [Yang Xiaoguang and Chen Kaihua: Risks and Governance of Artificial Intelligence from the Perspective of National Security]. It is well worth reading as a quite good analysis of the regulatory governance context for Chinese AI. In the end, however, all States may eventually flip to a security-oriented lens--contextually manifested (here, and here).

Following the current of the times, the authors develop their analysis as a function of risk--risks that cannot be underestimated (不容小觑的风险). They group risk into the following categories: (1) 国家竞争力风险 (national competitiveness risks); (2) 技术安全风险 (technical security risks); (3) 网络安全风险 (cybersecurity risks); (4) 经济社会风险 (economic and social risks); (5) 意识形态风险 (ideological risks). For these risks the authors offer structures of governance (人工智能风险的治理策略).

The only real question is the lens through which risk is understood, analyzed and aligned with objectives. Governance is the means, of course, but the lens gives it shape, purpose and structure. These are distilled at the end of the essay, through the lens of security: 

一是技术与制度双向赋能治理机制设计。面对飞速更迭的人工智能技术与复杂多样的国家治理应用场景,需加强技术应用与制度创新的协同治理,依靠制度设计引导技术方向的同时,通过技术发展帮助完善制度设计从而提升治理能力。二是加强人工智能意识形态风险的治理,大力开展人工智能内容识别民众素质教育,培养民众对人工智能获取的信息自觉进行多源验证;开发人工智能生成式内容溯源技术,高概率辨识可疑内容的来源;建立生成式人工智能信息失真检查和披露平台,把正确信息及时公示于社会。三是积极推动人工智能治理国际平台建设,参与国际规则制定。积极与国际社会合作,建立人工智能技术联盟,防范算法世界政治霸权和数据跨境安全风险;建立广泛、权威的国际对话机制,依托共建“一带一路”倡议、金砖国家、上合组织、东盟等具有国际影响力的多边机制,合理助力后发国家进步,促进人工智能全球治理成果的普惠共享;总结国内人工智能治理经验并强化国际交流,在促进互信共识的过程中,推动多方、多边主体间形成公开报告、同行评议等协同机制的标杆示范,切实推动人工智能治理原则落地。

First, design governance mechanisms in which technology and institutions empower each other. Faced with rapidly changing artificial intelligence technology and complex and diverse national governance application scenarios, it is necessary to strengthen the coordinated governance of technology application and institutional innovation, relying on institutional design to guide the direction of technology while using technological development to help improve institutional design and thereby enhance governance capabilities. Second, strengthen the governance of the ideological risks of artificial intelligence: vigorously carry out public literacy education in recognizing artificial intelligence content, and cultivate the public's habit of consciously verifying information obtained from artificial intelligence against multiple sources; develop tracing technology for artificial-intelligence-generated content that can identify the source of suspicious content with high probability; and establish a platform for inspecting and disclosing distorted generative artificial intelligence information, so that correct information is promptly made public to society. Third, actively promote the construction of an international platform for artificial intelligence governance and participate in the formulation of international rules. Actively cooperate with the international community to establish an artificial intelligence technology alliance to guard against algorithmic political hegemony in the world and cross-border data security risks; establish a broad and authoritative international dialogue mechanism, relying on the jointly built "Belt and Road" initiative, BRICS, the SCO, ASEAN and other multilateral mechanisms with international influence, to reasonably assist the progress of latecomer countries and promote the universal sharing of the results of global artificial intelligence governance; and summarize domestic artificial intelligence governance experience while strengthening international exchanges, promoting, in the process of building mutual trust and consensus, benchmark demonstrations of collaborative mechanisms such as public reporting and peer review among multiple and multilateral actors, so as to effectively implement artificial intelligence governance principles. (杨晓光, 陈凯华: 国家安全视角下的人工智能风险与治理 [Yang Xiaoguang and Chen Kaihua: Risks and Governance of Artificial Intelligence from the Perspective of National Security]).
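The second strategy names two concrete mechanisms: provenance tracing for generated content and multi-source verification by the public. Purely as an editorial illustration (not anything the essay's authors propose or specify), the sketch below shows one minimal way a provenance-tagging scheme of that general kind might work, assuming a hypothetical shared-key registry; the names KEY_REGISTRY, tag_content, and verify_content are invented for the example.

```python
# A minimal, hypothetical sketch of generated-content provenance tagging:
# the generator attaches an HMAC over its output, and a verifier later
# checks that record against a registry of known generator keys.
# Illustration only; real provenance schemes (e.g., watermarking or
# public-key signing) are considerably more involved.
import hmac
import hashlib

# Hypothetical registry mapping generator IDs to shared secret keys.
KEY_REGISTRY = {"model-A": b"model-A-secret-key"}

def tag_content(generator_id: str, content: str) -> dict:
    """Attach a provenance record: who generated the content, plus an
    HMAC over the content so later tampering is detectable."""
    key = KEY_REGISTRY[generator_id]
    digest = hmac.new(key, content.encode("utf-8"), hashlib.sha256).hexdigest()
    return {"generator": generator_id, "content": content, "hmac": digest}

def verify_content(record: dict) -> bool:
    """Recompute the HMAC; a mismatch means the content was altered or
    the claimed generator never produced it."""
    key = KEY_REGISTRY.get(record["generator"])
    if key is None:
        return False  # unknown source: cannot be traced
    expected = hmac.new(key, record["content"].encode("utf-8"),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["hmac"])

if __name__ == "__main__":
    record = tag_content("model-A", "An AI-generated news summary.")
    print(verify_content(record))   # True: traceable to model-A
    record["content"] = "A tampered summary."
    print(verify_content(record))   # False: provenance broken
```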

For all of that, the instrumental character of AI remains consistent. The issues are only around what the hammer will look like and how it is to be wielded. Those issues, in turn, reduce themselves to one--what are the structures of self-control that collectives can impose on themselves with respect to AI. The instrumentalist presumption is useful in that respect, certainly, but it is a fiction--quite useful, like all legal ideological fictions--and true enough when a collective believes profoundly in its "truth". The question is only when and how this will be transposed to the US and EU systems. Time will tell.

The essay follows below in the original Chinese and in a crude English translation.



长安理论 | 国家安全视角下的人工智能风险与治理
长安评论 2024年08月09日 02:01



The following article is from 国家治理杂志 Author 杨晓光 陈凯华


人工智能正改变着人类的生产生活方式,在赋能经济社会发展的同时也引发了多方面的风险,包括人工智能引发的国家竞争力风险、技术安全风险、网络安全风险、经济社会风险和意识形态风险等。中国科学院数学与系统科学研究院研究员杨晓光,中国科学院大学公共政策与管理学院特聘教授、国家前沿科技融合创新研究中心副主任陈凯华在《国家治理》撰文指出,我国应坚持包容审慎的监管方式,强化国家安全意识,在加强顶层设计、大力推进人工智能发展的同时,建立并逐步完善人工智能风险国家治理体系,促进我国人工智能健康有序安全发展,提高我国在人工智能领域的国际竞争力。

人工智能是新一轮科技革命和产业变革的重要驱动力量。加快发展人工智能是我们赢得全球科技竞争主动权的重要战略抓手,是推动我国科技跨越发展、产业优化升级、生产力整体跃升的重要战略资源。人工智能依靠其通用性、多模态和智能涌现能力,与千行百业深度融合,在引发生产方式、技术创新范式、内容生成方式和人机关系等多领域深刻变革的同时,也带来诸多风险。
党的十八大以来,以习近平同志为核心的党中央把发展人工智能提升到战略高度。习近平总书记指出:“我们要深入把握新一代人工智能发展的特点,加强人工智能和产业发展融合,为高质量发展提供新动能。”同时强调:“要加强人工智能发展的潜在风险研判和防范,维护人民利益和国家安全,确保人工智能安全、可靠、可控。”为确保人工智能安全、可靠、可控,我国逐步加强对人工智能发展的规范和治理。例如,2019年6月,国家新一代人工智能治理专业委员会发布《新一代人工智能治理原则——发展负责任的人工智能》;2023年10月,中央网信办发布《全球人工智能治理倡议》。在大力发展人工智能、提升我国在人工智能领域国际竞争力的同时,也要对人工智能带来的技术安全风险、网络安全风险、经济社会风险、意识形态风险,以及伴随的伦理、法律和安全问题进行约束和监管,建立并逐步完善人工智能风险治理体系,推动人工智能安全、健康、有序发展。


国家安全视角下的人工智能风险


人工智能技术逐步进入实用阶段,赋能千行百业,创造出新的产品、服务和商业模式,为人类社会发展带来重大改变。作为一种追赶人类智能的特殊技术,人工智能对社会发展的推动力、对社会伦理与秩序的冲击力,以及其些作用背后的复杂性,都是人类既往的技术发明所不具备的。这其中蕴含着许多前所未有的、不容小觑的风险。


国家竞争力风险。人工智能是发展新质生产力的重要引擎,不仅是经济的推动力,而且在科技创新、国防建设中都发挥着重要作用。当前,我国人工智能技术发展迅速,人工智能领域的学术论文和发明专利都居国际前列,但在人工智能原创性技术领域与美国还有一定的差距,诸如ChatGPT、Sora等里程碑式技术创新均最早出现于美国,我国尚处于“跟跑”阶段。近年来,以美国为首的西方国家实施“小院高墙”政策,意图对我国人工智能的发展进行系统性打压,对我国造成不利影响。目前,我国的人工智能发展在人才、数据、算力等方面尚难打破西方霸权的垄断。例如在人才方面,2019年全球顶尖人工智能研究人员中中国研究人员占10%,2022年这一比例增加到26%,美国仍然吸引最多的人工智能人才。在数据方面,截至2024年,美国Common Crawl开源数据集每月对全网爬取会增加大约386TB的数据,其每月增量比我国多数开源数据集的总量还大。在算力方面,国产芯片的计算速度与世界顶尖水平相比还有不小差距;与图形处理器(GPU)配套的国产编程环境及软硬件生态尚未成熟,制约了算力效率。整体而言,我国人工智能产业创新生态与国际人工智能产业创新生态相比尚有很大提升空间。


技术安全风险。一是人工智能“黑箱效应”可能生成偏误信息,从而误导使用者。人工智能模型存在非透明性的算法“黑箱”局限,使用者难以观察模型从输入到输出的过程。当数据来源良莠不齐时,特别是有人使用特殊设计的输入数据对训练模型进行对抗性攻击时,人工智能模型生成内容的事实解释性以及可理解性缺失,不仅使得输出信息秩序紊乱、传递内容扭曲失真,也使得审查和纠正偏见、错误或不当行为变得困难。再者,由于植根于人类社会根深蒂固的偏见以及因话语权不同导致的数据资料分布不均衡,人工智能模型训练的数据是有偏的,导致人工智能模型对某些人群或事物作出不公正或不均衡推断。二是人工智能模型训练需要运用大量个人、企业和组织的数据,存在隐私和国家机密泄露风险。人工智能生成文本、图像和视频所需的大量数据可能涉及敏感的国家信息、商业信息和个人信息,在大范围应用时经常面临数据过度采集、窃取、泄露和滥用风险。三是人工智能技术研发与应用过程中存在知识产权侵犯和知识生产生态破坏等风险。随着人工智能技术的快速发展,人工智能在艺术作品、科学研究、发明创新等领域的应用愈发深入,这些活动涉及到版权、专利权、商标权等多个方面知识产权。如何在知识产权、原创性等方面平衡技术使用和创作者权益目前仍存在争议,包括但不限于人工智能生成作品的版权归属和保护范围、人工智能作为发明者的专利申请资格等问题。


网络安全风险。一是人工智能技术可能引发新型、难以管控的网络犯罪。人工智能技术能够提高网络攻击的隐秘性,降低高级网络攻击的技术门槛,拓宽网络犯罪的时空范围,使犯罪从传统的个体或小团伙作案变为普及率高且低成本的活动,同时也使得国际网络攻击、渗透更加易发多发,加大了我国对网络攻击预防和溯源的难度。二是深度伪造等技术诱发传统犯罪模式升级。人工智能的深度应用涉及经济利益、保密利益、使用性能等多重维度,当技术被恶意利用、大规模生成误导性信息,将带来智能诈骗风险。这种手段比传统的网络钓鱼和电话诈骗更具欺骗性和破坏力。还有恶意用户通过公开或非法手段收集资料,利用深度伪造技术制作虚假甚至淫秽图片或者视频传播,侮辱霸凌他人。三是人工智能技术风险可能引发网络空间衍生灾变。随着人工智能技术与网络空间的深度融合,人工智能的本体风险有可能随着网络空间特别是物联网的泛在互联而加以放大,衍生演化出网络空间的重大灾变,给物理世界带来巨大风险。无人机、无人驾驶汽车、医疗机器人等无人化智能系统设计缺陷可能直接威胁公民生命权和健康权。例如,2018年Uber自动驾驶汽车凌晨撞死行人事故即起因于Uber系统将行人判定为塑料袋等漂浮物,导致未能及时刹车。


经济社会风险。一是人工智能对简单重复性劳动所产生的替代作用或将引发技术性失业。人工智能使得部分劳动密集型制造业实现了自动化生产,从而替代这部分行业中的劳动力。当人工智能适用于工业设计、药物研发、材料科学、信息服务等领域,其发展伴随的大规模技术性失业或将引发全球结构性失业,成为经济社会的不稳定因素。二是人工智能的技术依赖加深致使认知浅层化风险加剧。人工智能的深度应用将减少人类主动思考的机会,并限制了其视野和思维深度,或将造成认知能力缺陷。例如,Sora等生成式人工智能的应用将会抑制学生抽象逻辑和批判性思维的发展,给科学和教育事业带来巨大挑战。三是技术复杂性带来的数字鸿沟和信息茧房加剧社会阶层分化。不同社会群体在理解和使用人工智能技术方面存在显著差距的问题普遍存在。技术采纳者在通过人工智能技术不断优化效率、积累资源和权力的同时,致使技术知识或信息匮乏者被不断边缘化,从而引发社会不公平现象。此外,凭借人工智能算法绘制“数字脸谱”以精准迎合个体视觉偏好与信息需求的行为,加强了“信息茧房”的回音壁效应。用户长时间处于同质化信息空间,容易对固有观念产生偏执认同,滋生排他性倾向,进而诱发群体极化现象,导致经济社会不稳定。


意识形态风险。一是人工智能语料差异内嵌意识形态,具有价值取向不可控风险。大模型训练语料库中的意识形态差异很有可能被利用生成大量“可信”文本,以加深对客观事实认知的分歧。2023年5月8日,布鲁金斯学会评论文章《人工智能的政治:ChatGPT和政治偏见》指出ChatGPT存在政治偏见。华盛顿大学、卡内基梅隆大学和西安交通大学的一项研究发现,不同的人工智能模型具有不同的政治偏好,OpenAI的GPT-4是最左翼自由派,Meta的LLaMA是最右翼。具有不同国家立场的人工智能产品在敏感事件或国际关系等问题上容易生成具有预设倾向的内容,其传播将潜移默化地影响年轻一代的价值观,或将为西方国家向我国进行意识形态渗透及干涉提供便利,威胁社会意识形态安全。二是人工智能技术带来的创作者责任缺失导致各种假信息泛滥,蛊惑社会人心。技术黑箱导致确定人工智能系统行为的道德法律归属变得复杂,由此带来创作者责任缺失。随着人工智能的发展和应用,虚假新闻制作和传播的成本变得更低。借助人工智能文本、音频、视频大模型,恶意用户制造大量真假难辨的假信息,篡改历史,伪造事实,煽风点火带节奏,为吸引眼球不择手段。假信息泛滥可能导致人们对数字内容的真实性产生普遍怀疑,影响社会信任与秩序。三是新型“数字殖民”引发意识形态偏移风险。在国家对于数据和算法依赖程度日益增强的情况下,领先掌握新一代人工智能技术的国家凭借技术优势占据规则和标准制定主导地位,这种技术霸权可能产生新的“数字殖民地”。


人工智能风险的治理策略


我国应坚持包容审慎的监管方式,在大力发展人工智能的同时,加强对关键性风险的治理,统筹技术发展和规范引导,防范化解各种风险,构建安全与发展兼容的人工智能治理生态。
加强顶层设计和系统性治理。一是坚持系统性谋划和整体性推进,制定人工智能发展与治理规划。对人工智能的发展趋势和应用前景进行综合研判和分析,以系统观念协调好人工智能发展和安全。在发展方面,发挥新型举国体制优势,建立人工智能产业良性发展生态,通过技术预见识别关键技术和市场需求,重点布局,推动人工智能安全有序发展。在安全方面,国家层面上成立专家委员会或咨询委员会,遴选人工智能技术、伦理、法律、安全等领域的专家,并和社会公众代表一起,对人工智能的风险治理提出前瞻性建议,提升全方位多维度综合治理能力。二是构建完备法律体系,推进制度建设。加快建立健全协调安全与发展、统筹效率和公平、平衡活力与秩序、兼顾技术和伦理的法律法规体系,推动人工智能监管制度体系建设;依法出台人工智能发展与治理的地方性法规和地方政府规章,引导创意者公平、安全、健康、有责任地将人工智能技术用于地方特色创新活动中,更有针对性地满足地方需要。三是加快成立负责任人工智能技术的职能管理机构,加强政策引导与监管。建议成立专门的监管机构,通过制定有关人工智能技术开发和应用的政策,确保技术发展符合社会公共利益和道德标准;制定人工智能技术开发和应用的标准和规范,确保技术安全性、透明性和公平性;建立人工智能风险评估与预警机制,防止技术滥用和潜在危害。


降低数智社会转型对民众的冲击。一是持续谨慎观察人工智能带来的“数字鸿沟”与失业风险,通过综合性应对措施维护社会稳定。加强人工智能教育与普及,缓解数字鸿沟,提高民众对人工智能技术的认知与应用能力;密切关注人工智能技术发展可能带来的结构性失业、技术性失业与非对称性失业等“创造性破坏”,建立失业预警制度,加大就业指导培训,提供税收优惠,出台兜底性失业保障政策。二是面向未来培养新型技术治理和社会治理协同的复合型人才。在高校设置跨学科课程,支持人工智能相关学科建设,大力培养人工智能技术人才和管理人才,奖励优秀人才和团队,为人工智能技术发展与治理提供人才储备;建立终身学习机制,提供系统化的人工智能技术职业培训,鼓励在职人员持续学习人工智能新技术以适应快速变化的技术环境。三是推动包容性技术发展的同时加强网络安全保护。在人工智能技术研发和应用过程中充分考虑老年人、残疾人等弱势群体的特殊需求,设计和开发符合其使用习惯和能力的人工智能产品和服务,避免扩大“数字鸿沟”;建立健全数据保护法律法规,明确数据收集、存储、处理和共享规范,提高数据全生命周期的安全性;加大对网络犯罪的打击力度,组建专业的网络安全执法队伍,提升技术侦查能力,及时发现和遏制网络犯罪活动;提升企业信息安全责任,推动企业采取有效措施防止数据泄露和滥用;通过多种渠道和形式,向公众普及信息安全知识和防护技能,提高民众的信息安全意识和防范能力。


构建人工智能技术风险治理体系。一是夯实人工智能公共数字基础设施一体化平台建设,完善数据资源体系。加强对数据中心、超算中心、智能计算机中心等基础设备建设,夯实数字基础设施安全;加快推进数据共享平台建设,制定统一的数据标准和规范,建立高质量国家级数据资源库,以解决人工智能大模型的数据质量及算法偏见问题。二是前瞻布局人工智能大模型的风险防御技术体系,巩固发展人工智能技术底层架构以保障产业生态安全。针对人工智能全生命周期关键环节进行风险预判,打造动态升级、科学前瞻的防御技术体系,通过精准安全防范措施建立人工智能技术安全保障体系;在算法安全、数据安全、模型安全、系统安全、应用安全等方面加强前沿安全技术研发,并推动关键核心技术应用;加强对芯片、集成电路等基础产业的保护力度,推动国家和企业重点开发自主可控的人工智能技术,夯实核心技术安全。


完善人工智能国家治理体系。一是技术与制度双向赋能治理机制设计。面对飞速更迭的人工智能技术与复杂多样的国家治理应用场景,需加强技术应用与制度创新的协同治理,依靠制度设计引导技术方向的同时,通过技术发展帮助完善制度设计从而提升治理能力。二是加强人工智能意识形态风险的治理,大力开展人工智能内容识别民众素质教育,培养民众对人工智能获取的信息自觉进行多源验证;开发人工智能生成式内容溯源技术,高概率辨识可疑内容的来源;建立生成式人工智能信息失真检查和披露平台,把正确信息及时公示于社会。三是积极推动人工智能治理国际平台建设,参与国际规则制定。积极与国际社会合作,建立人工智能技术联盟,防范算法世界政治霸权和数据跨境安全风险;建立广泛、权威的国际对话机制,依托共建“一带一路”倡议、金砖国家、上合组织、东盟等具有国际影响力的多边机制,合理助力后发国家进步,促进人工智能全球治理成果的普惠共享;总结国内人工智能治理经验并强化国际交流,在促进互信共识的过程中,推动多方、多边主体间形成公开报告、同行评议等协同机制的标杆示范,切实推动人工智能治理原则落地。

来源:《国家治理》2024年第13期


Changan Theory | Risks and Governance of Artificial Intelligence from the Perspective of National Security
Changan Review August 9, 2024 02:01

The following article is from National Governance Magazine Author Yang Xiaoguang Chen Kaihua

Artificial intelligence is changing how human beings live and produce. While empowering economic and social development, it has also given rise to many risks, including risks to national competitiveness, technical security risks, cybersecurity risks, economic and social risks, and ideological risks. Yang Xiaoguang, a researcher at the Academy of Mathematics and Systems Science of the Chinese Academy of Sciences, and Chen Kaihua, a distinguished professor at the School of Public Policy and Management of the University of the Chinese Academy of Sciences and deputy director of the National Frontier Science and Technology Integration and Innovation Research Center, write in "National Governance" that China should adhere to an inclusive and prudent regulatory approach, strengthen national security awareness, and, while strengthening top-level design and vigorously promoting the development of artificial intelligence, establish and gradually improve a national governance system for artificial intelligence risks, so as to promote the healthy, orderly and safe development of artificial intelligence in China and enhance China's international competitiveness in the field.

Artificial intelligence is an important driving force of the new round of scientific and technological revolution and industrial transformation. Accelerating its development is an important strategic lever for winning the initiative in global scientific and technological competition, and an important strategic resource for promoting the leapfrog development of China's science and technology, the optimization and upgrading of its industries, and an overall leap in productivity. Relying on its versatility, multimodality and capacity for emergent intelligence, artificial intelligence integrates deeply with every industry; while triggering profound changes in production methods, technological innovation paradigms, content generation methods, and human-machine relationships, it also brings many risks.

Since the 18th National Congress of the Communist Party of China, the Party Central Committee with Comrade Xi Jinping as the core has elevated the development of artificial intelligence to a strategic level. General Secretary Xi Jinping pointed out: "We must deeply grasp the characteristics of the development of a new generation of artificial intelligence, strengthen the integration of artificial intelligence and industrial development, and provide new momentum for high-quality development." At the same time, he emphasized: "We must strengthen the research and prevention of potential risks in the development of artificial intelligence, safeguard the interests of the people and national security, and ensure that artificial intelligence is safe, reliable and controllable." In order to ensure that artificial intelligence is safe, reliable and controllable, China has gradually strengthened the regulation and governance of the development of artificial intelligence. For example, in June 2019, the National New Generation Artificial Intelligence Governance Professional Committee issued the "New Generation Artificial Intelligence Governance Principles - Developing Responsible Artificial Intelligence"; in October 2023, the Central Cyberspace Affairs Office issued the "Global Artificial Intelligence Governance Initiative". While vigorously developing artificial intelligence and enhancing China's international competitiveness in the field of artificial intelligence, we must also constrain and regulate the technical security risks, network security risks, economic and social risks, ideological risks, and accompanying ethical, legal and security issues brought by artificial intelligence, establish and gradually improve the artificial intelligence risk governance system, and promote the safe, healthy and orderly development of artificial intelligence.

Risks of artificial intelligence from the perspective of national security

Artificial intelligence technology has gradually entered the practical stage, empowering every industry, creating new products, services and business models, and bringing major changes to the development of human society. As a special technology that aspires to human-level intelligence, artificial intelligence's driving force on social development, its impact on social ethics and order, and the complexity behind these effects are all without precedent among humanity's past technological inventions. This entails many unprecedented risks that should not be underestimated.

Risks to national competitiveness. Artificial intelligence is an important engine for developing new quality productive forces. It is not only a driver of the economy but also plays an important role in scientific and technological innovation and national defense construction. At present, China's artificial intelligence technology is developing rapidly, and its academic papers and invention patents in the field are at the forefront internationally. However, a certain gap with the United States remains in original artificial intelligence technology: milestone innovations such as ChatGPT and Sora first appeared in the United States, and China is still in the "follower" stage. In recent years, Western countries led by the United States have implemented a "small yard, high fence" policy intended to systematically suppress the development of China's artificial intelligence, with adverse effects on China. At present, China's artificial intelligence development still finds it difficult to break the Western monopoly in talent, data, and computing power. In talent, Chinese researchers accounted for 10% of the world's top artificial intelligence researchers in 2019, a proportion that rose to 26% in 2022, but the United States still attracts the most artificial intelligence talent. In data, as of 2024 the US-based Common Crawl open-source dataset adds approximately 386TB of newly crawled data each month, a monthly increment larger than the total size of most open-source datasets in China. In computing power, the computing speed of domestic chips still lags considerably behind the world's top level, and the domestic programming environment and software-hardware ecosystem supporting graphics processing units (GPUs) are not yet mature, constraining computing efficiency. Overall, China's artificial intelligence industry innovation ecosystem still has much room for improvement compared with the international one.

Technical security risks. First, the "black box effect" of artificial intelligence may generate biased or erroneous information, thereby misleading users. Artificial intelligence models suffer from the limitation of the non-transparent algorithmic "black box": users find it difficult to observe the model's process from input to output. When data sources are of uneven quality, and especially when someone uses specially designed input data to mount adversarial attacks on the trained model, the content a model generates lacks factual interpretability and comprehensibility, which not only disorders the output and distorts the transmitted content but also makes it difficult to review and correct biases, errors or improper behavior. Furthermore, owing to prejudices deeply rooted in human society and the uneven distribution of data caused by unequal discursive power, the data on which artificial intelligence models are trained is biased, leading models to make unfair or unbalanced inferences about certain groups or things. Second, training artificial intelligence models requires large amounts of personal, corporate and organizational data, creating risks that privacy and state secrets will be leaked. The large volumes of data needed for artificial intelligence to generate text, images and videos may involve sensitive national, commercial and personal information, and large-scale application often faces risks of excessive data collection, theft, leakage and abuse. Third, the research, development and application of artificial intelligence technology carry risks of intellectual property infringement and damage to the ecology of knowledge production. With the rapid development of the technology, its application in artistic works, scientific research, invention and innovation has deepened, and these activities implicate copyright, patent, trademark and other intellectual property rights. How to balance the use of the technology against creators' rights in matters of intellectual property and originality remains contested, including but not limited to the ownership and scope of copyright protection for AI-generated works and the eligibility of artificial intelligence to apply for patents as an inventor.

Cybersecurity risks. First, artificial intelligence technology may give rise to new and hard-to-control forms of cybercrime. The technology can increase the stealth of cyberattacks, lower the technical threshold for advanced attacks, and broaden the temporal and spatial scope of cybercrime, turning crime from the traditional work of individuals or small gangs into a widespread, low-cost activity; it also makes international cyberattacks and infiltration more frequent, increasing the difficulty for China of preventing and tracing attacks. Second, technologies such as deepfakes drive the upgrading of traditional crime patterns. The deep application of artificial intelligence involves multiple dimensions such as economic interests, confidentiality interests, and performance in use; when the technology is maliciously exploited to generate misleading information at scale, it brings risks of intelligent fraud more deceptive and destructive than traditional phishing and telephone scams. Malicious users also collect material through public or illegal means and use deepfake technology to create and spread false or even obscene pictures or videos to insult and bully others. Third, the risks of artificial intelligence technology may trigger derivative catastrophes in cyberspace. With the deep integration of artificial intelligence and cyberspace, the intrinsic risks of artificial intelligence may be amplified by the ubiquitous interconnection of cyberspace, especially the Internet of Things, evolving into major cyberspace catastrophes that pose enormous risks to the physical world. Design defects in unmanned intelligent systems such as drones, driverless cars, and medical robots may directly threaten citizens' rights to life and health. For example, the 2018 accident in which an Uber self-driving car struck and killed a pedestrian before dawn was caused by the Uber system classifying the pedestrian as a floating object such as a plastic bag, so that the car failed to brake in time.

Economic and social risks. First, the substitution of artificial intelligence for simple repetitive labor may cause technological unemployment. Artificial intelligence has enabled automated production in some labor-intensive manufacturing industries, replacing the workforce in those sectors. As artificial intelligence extends to industrial design, drug research and development, materials science, information services and other fields, the large-scale technological unemployment accompanying its development may produce global structural unemployment and become a destabilizing factor for the economy and society. Second, deepening dependence on artificial intelligence heightens the risk of cognitive shallowing. Deep application of artificial intelligence will reduce opportunities for humans to think actively and limit the breadth and depth of their thinking, potentially producing cognitive deficits. For example, the use of generative artificial intelligence such as Sora may inhibit the development of students' abstract logic and critical thinking, posing great challenges to science and education. Third, the digital divide and information cocoons produced by technological complexity exacerbate social stratification. Significant gaps between social groups in understanding and using artificial intelligence technology are widespread. While technology adopters continuously optimize efficiency and accumulate resources and power through artificial intelligence, those lacking technical knowledge or information are progressively marginalized, generating social unfairness. In addition, the use of artificial intelligence algorithms to draw "digital portraits" that precisely cater to individual visual preferences and information needs strengthens the echo-chamber effect of the "information cocoon". Users who remain in a homogeneous information space for a long time easily develop rigid attachment to existing beliefs and exclusionary tendencies, inducing group polarization and economic and social instability.

Ideological risks. First, differences in artificial intelligence training corpora embed ideology, carrying the risk of uncontrollable value orientations. Ideological differences in large-model training corpora are likely to be exploited to generate large quantities of "credible" text that deepen divergences in the perception of objective facts. On May 8, 2023, the Brookings Institution commentary "The politics of AI: ChatGPT and political bias" noted that ChatGPT exhibits political bias. A study by the University of Washington, Carnegie Mellon University and Xi'an Jiaotong University found that different AI models have different political preferences, with OpenAI's GPT-4 the most left-wing liberal and Meta's LLaMA the most right-wing. Artificial intelligence products reflecting different national positions readily generate content with preset tendencies on sensitive events or international relations; their dissemination will subtly influence the values of the younger generation, and may facilitate ideological infiltration of and interference in China by Western countries, threatening the security of the social ideology. Second, the absence of creator responsibility brought about by artificial intelligence technology leads to a flood of false information that misleads the public. The technical black box complicates the attribution of moral and legal responsibility for the behavior of artificial intelligence systems, producing a gap in creator responsibility. With the development and application of artificial intelligence, the cost of producing and disseminating false news has fallen. Using large text, audio and video models, malicious users manufacture large amounts of information that is hard to distinguish as true or false, tampering with history, forging facts, fanning the flames, and steering opinion by whatever means attract attention. A flood of false information may lead to general doubt about the authenticity of digital content, undermining social trust and order. Third, a new form of "digital colonization" raises the risk of ideological drift. As states' reliance on data and algorithms deepens, countries that lead in new-generation artificial intelligence technology use their technological advantage to dominate the formulation of rules and standards; this technological hegemony may produce new "digital colonies".

Governance strategies for artificial intelligence risks

China should adhere to an inclusive and prudent regulatory approach. While vigorously developing artificial intelligence, it should strengthen the governance of key risks, coordinate technological development with normative guidance, prevent and resolve risks of all kinds, and build an artificial intelligence governance ecology in which security and development are compatible.

Strengthen top-level design and systematic governance. First, adhere to systematic planning and holistic advancement, and formulate a plan for artificial intelligence development and governance. Comprehensively assess and analyze the development trends and application prospects of artificial intelligence, and coordinate its development and security with a systems mindset. On the development side, leverage the advantages of the new whole-of-nation system, establish a healthy ecology for the artificial intelligence industry, identify key technologies and market needs through technology foresight, make focused arrangements, and promote the safe and orderly development of artificial intelligence. On the security side, establish an expert or advisory committee at the national level, selecting experts in artificial intelligence technology, ethics, law, security and related fields who, together with representatives of the public, offer forward-looking advice on artificial intelligence risk governance and enhance comprehensive, multidimensional governance capabilities. Second, build a complete legal system and advance institutional construction. Accelerate the establishment of a system of laws and regulations that coordinates security with development, efficiency with fairness, vitality with order, and technology with ethics, and promote the construction of an artificial intelligence regulatory system; issue local regulations and local government rules on artificial intelligence development and governance in accordance with the law, guiding creators to use the technology fairly, safely, healthily and responsibly in locally distinctive innovation, so as to meet local needs in a more targeted way. Third, accelerate the establishment of a functional management agency for responsible artificial intelligence technology, and strengthen policy guidance and supervision. A dedicated regulatory agency should be established to ensure, through policies on the development and application of artificial intelligence technology, that technological development accords with the public interest and ethical standards; to formulate standards and specifications for development and application that ensure technical security, transparency and fairness; and to establish risk assessment and early-warning mechanisms that prevent abuse of the technology and potential harms.

Reduce the impact of the transition to a digital-intelligent society on the public. First, continue to observe carefully the "digital divide" and unemployment risks brought by artificial intelligence, and maintain social stability through comprehensive responses. Strengthen artificial intelligence education and popularization to ease the digital divide and improve the public's understanding and use of the technology; pay close attention to the "creative destruction" its development may bring, such as structural, technological and asymmetric unemployment; establish an unemployment early-warning system, expand employment guidance and training, provide tax incentives, and introduce backstop unemployment insurance policies. Second, cultivate, with an eye to the future, interdisciplinary talent that combines new technology governance with social governance. Establish interdisciplinary courses in colleges and universities, support the construction of artificial-intelligence-related disciplines, vigorously train technical and management talent, reward outstanding individuals and teams, and build a talent reserve for the development and governance of the technology; establish lifelong learning mechanisms and systematic vocational training in artificial intelligence, encouraging those already employed to keep learning new techniques so as to adapt to a rapidly changing technological environment. Third, promote inclusive technological development while strengthening cybersecurity protection. In the research, development and application of artificial intelligence technology, give full consideration to the special needs of vulnerable groups such as the elderly and the disabled, designing products and services suited to their habits and abilities so as to avoid widening the "digital divide"; establish and improve data protection laws and regulations, clarify norms for data collection, storage, processing and sharing, and improve security across the data life cycle; intensify the crackdown on cybercrime, build professional cybersecurity law-enforcement teams, improve technical investigation capabilities, and promptly detect and curb cybercriminal activity; strengthen enterprises' information security responsibilities and push them to take effective measures against data leakage and abuse; and popularize information security knowledge and protective skills among the public through multiple channels and formats, raising public awareness and capacity for prevention.

Build a risk governance system for artificial intelligence technology. First, consolidate the construction of an integrated platform of public digital infrastructure for artificial intelligence and improve the data resource system. Strengthen the construction of basic facilities such as data centers, supercomputing centers and intelligent computing centers to secure the digital infrastructure; accelerate the construction of data-sharing platforms, formulate unified data standards and specifications, and establish a high-quality national data resource repository to address the data quality and algorithmic bias problems of large artificial intelligence models. Second, lay out in advance a risk defense technology system for large artificial intelligence models, and consolidate and develop the underlying architecture of artificial intelligence technology to safeguard the security of the industrial ecosystem. Anticipate risks at key links across the entire artificial intelligence life cycle, build a dynamically upgraded, scientific and forward-looking defense technology system, and establish a technical security assurance system through precise precautions; strengthen research and development of frontier security technologies in algorithm, data, model, system and application security, and promote the application of key core technologies; strengthen protection of foundational industries such as chips and integrated circuits, and push the state and enterprises to focus on developing independent and controllable artificial intelligence technologies, consolidating the security of core technologies.

Improve the national governance system for artificial intelligence. First, design governance mechanisms in which technology and institutions empower each other. Faced with rapidly changing artificial intelligence technology and complex and diverse national governance application scenarios, it is necessary to strengthen the coordinated governance of technology application and institutional innovation, relying on institutional design to guide the direction of technology while using technological development to help improve institutional design and thereby enhance governance capabilities. Second, strengthen the governance of the ideological risks of artificial intelligence: vigorously carry out public literacy education in recognizing artificial intelligence content, and cultivate the public's habit of consciously verifying information obtained from artificial intelligence against multiple sources; develop tracing technology for artificial-intelligence-generated content that can identify the source of suspicious content with high probability; and establish a platform for inspecting and disclosing distorted generative artificial intelligence information, so that correct information is promptly made public to society. Third, actively promote the construction of an international platform for artificial intelligence governance and participate in the formulation of international rules. Actively cooperate with the international community to establish an artificial intelligence technology alliance to guard against algorithmic political hegemony in the world and cross-border data security risks; establish a broad and authoritative international dialogue mechanism, relying on the jointly built "Belt and Road" initiative, BRICS, the SCO, ASEAN and other multilateral mechanisms with international influence, to reasonably assist the progress of latecomer countries and promote the universal sharing of the results of global artificial intelligence governance; and summarize domestic artificial intelligence governance experience while strengthening international exchanges, promoting, in the process of building mutual trust and consensus, benchmark demonstrations of collaborative mechanisms such as public reporting and peer review among multiple and multilateral actors, so as to effectively implement artificial intelligence governance principles.

Source: "National Governance" 2024 Issue 13
