Sunday, December 24, 2023

New On the European Chinese Law Research Hub Blogsite: Baiyang Xiao, "Making the Private Public: Regulating Content Moderation"

 

Capture of the video installation “Unerasable Characters II” by Winnie Soon: Drawing on the Weiboscope database, she designed software that visualizes Weibo posts that have been erased on a daily basis during the pandemic. Exhibit “Data Relations”, Australian Centre for Contemporary Art, Melbourne

 

The folks over at the European Chinese Law Research Hub (with thanks to Marianne von Blomberg, Editor ECLR Hub, Research Associate, Chair for Chinese Legal Culture, University of Cologne) have posted a marvelous essay by Baiyang Xiao, "Making the Private Public: Regulating Content Moderation." Xiao examines the legal measures China has adopted to serve the needs of content control and compares that framework with the regulatory approach of the EU.

The study is particularly interesting for the way it frames the core issue of regulation--the way that broad political principles are translated into policy, and the way that policy is then transposed into regulatory structures sensitive to those political principles. More interesting still is the way that this self-referencing regulatory framework is then plugged into what are extracted as emerging international norms and standards, but applied from a socialist perspective. Lastly, the objectification of platforms as a site of regulation--that is, as a virtual regulable space that assumes some of the characteristics of physical territory--is also quite useful. All of this comes at a price--one which all states and political systems are discovering must be paid.

I am cross-posting the link to the essay, which may also be read below. The original ECLRH post may be accessed HERE. And as a plug for the marvelous work at the European Chinese Law Research Hub: if you have observations, analyses, or pieces of research that are not publishable as a paper but should get out there, or want to spread event information, calls for papers, or job openings, or have a paper forthcoming, do not hesitate to contact Marianne von Blomberg.

 

Making the Private Public: Regulating Content Moderation

A paper by Baiyang Xiao
Capture of the video installation “Unerasable Characters II” by Winnie Soon: Drawing on the Weiboscope database, she designed software that visualizes Weibo posts that have been erased on a daily basis during the pandemic. Exhibit “Data Relations”, Australian Centre for Contemporary Art, Melbourne

Internet service providers (ISPs) globally are increasingly legally obliged to monitor and regulate content on their services. In general, such obligations may emanate from explicit legislative mandates, such as Article 17 of the EU’s Directive on Copyright in the Digital Single Market, or from the imposition of strict liability for user-generated content by judicial authorities, which effectively requires intermediaries to actively monitor and moderate illegal content in order to avoid liability. China has implemented a dual-track legal mechanism for content moderation that emphasizes the distinction between public and private law. Specifically, ISPs are exempted from monitoring obligations under private law, while public law explicitly imposes monitoring obligations on ISPs, requiring them to take on the role of gatekeepers with a responsibility towards the public interest. This study explains the legal measures China has adopted to serve the needs of content control and compares that framework with the regulatory approach of the EU.

What is the current legal framework for content moderation?

On the one hand, Chinese jurisprudence has reached a consensus that the principle prohibiting general monitoring obligations applies in the private sphere, while leaving room for monitoring obligations of a specific nature. In its authoritative interpretation of Article 1197 of the Civil Code, the Legislative Affairs Commission referred to international conventional practice and clarified that ‘ISPs that provide technical services are not subject to general monitoring obligations,’ but did not preclude the possibility of monitoring obligations of a specific nature. Moreover, the Supreme People’s Court (SPC) has clarified that a court shall not find an ISP at fault merely because it failed to conduct proactive monitoring of a user’s infringement. In another Guiding Opinion, the SPC explicitly stated that ‘[courts shall] not impose a general obligation of prior review and a relatively high degree of duty of care upon the ISPs […].’

On the other hand, under public law, ISPs are required to review, monitor, and inspect information whose dissemination is prohibited by laws and administrative regulations. When they ‘discover’ illegal content disseminated on their services, they must fulfil their proactive monitoring obligations by taking measures to prevent the transmission of such content. In addition to technical filtering mechanisms, platforms must also employ trained personnel to conduct human review of uploaded content; otherwise, they face penalties for failure to perform their monitoring obligations. Unsurprisingly, the scope of monitoring can be considered comprehensive, as ISPs are required to monitor almost all online content in accordance with various laws, administrative regulations, and even ‘relevant state provisions.’

How did online platforms implement legal rules in practice?

Law enforcement agencies make full use of platforms’ advantages in discovering, identifying, and handling illegal content, and enlist ISPs to engage proactively in collateral censorship through private ordering. Platforms’ house rules thus act as a critical supplement to state legislation by restricting otherwise-legal content or activities. In practice, these house rules classify all illegal, harmful, and undesirable content as prohibited content, ignoring the distinction between prohibited content and undesirable content drawn in the relevant administrative regulations. Major Chinese platforms have in fact adopted a crafty approach, introducing still more blurred and abstract concepts to explain the ambiguous language of legislation, thereby further reducing the predictability of their house rules. Although commentators voice concerns about the legal uncertainty deriving from ambiguous rules, the platforms frame them as ‘flexible.’ With their expansive monitoring and an erratic, opaque decision-making process, mega platforms exercise much stronger control over the flow of information, regardless of the more serious consequences for users’ fundamental rights.

On the one hand, broad T&Cs and Community Guidelines leave vast space for platforms to moderate content through alternative mechanisms that are often neither transparent nor subject to external oversight. Within this frame, platforms adopt diverse measures to conduct content moderation, both preventive (ex-ante) and reactive (ex-post). Reactive measures, such as region- and service-specific methods, are employed to control the availability, visibility, and accessibility of certain content, or to restrict users’ ability to provide information, either independently or in response to government mandates. Preventive content moderation, by contrast, makes the publication of content contingent on the prior consent of a designated public authority and usually takes the form of automated filtering of content before it is published.

On the other hand, platforms extend the scope of content moderation through the substantial quasi-legislative power their house rules confer. By introducing yet more uncertain concepts to elaborate on vague terms in public law, they further diminish the predictability and transparency of those house rules. Under this parental state, politically heterodox speech, as well as lawful speech that violates widely held social norms and moral beliefs or the infrastructural values of platforms, is removed or blocked in practice.

Absent systematic and institutional constraints, these constantly expanding content moderation practices are quasi-legislative (T&Cs and Community Guidelines), quasi-executive (content moderation measures), and quasi-judicial (determinations of illegal and harmful content). Evidently, under the top-down collateral censorship mechanism, platforms adopt ever stricter content moderation measures and further extend the scope of monitoring in order to eliminate potential uncertainties and risks. Such practices further empower platforms, giving them greater control over both the moderation technologies used and the making of norms for acceptable online content.

How did Chinese courts interpret content moderation in judicial practice?

Public law monitoring obligations encompass not only content that violates public law norms, but also content that violates private law norms. In judicial practice, the public law monitoring obligation is often interpreted as a duty of care.1 Courts thus deem that ISPs have failed to fulfil their duty of care where they failed to perform their public law monitoring obligations against illegal online content. The logic behind such legal reasoning is that, by virtue of their public law monitoring obligation, ISPs are presumed to have a corresponding monitoring obligation under private law. More importantly, courts have implied that platforms should bear civil liability if they fail to perform their public law monitoring obligations.

In addition, fulfilling public law monitoring obligations may expose platforms to civil liability due to their actual knowledge of the existence of infringing content. In other cases, courts ruled that platforms risk losing their safe harbor protection if they take proactive measures to address illegal and harmful content.2 In certain exceptional circumstances, the level of duty of care for ISPs may be significantly elevated. For example, an ISP providing information storage space services is deemed to have constructive knowledge of a user’s infringement of the right of communication to the public on information networks if it has substantially accessed the disputed content of popular movies and TV series or has, on its own initiative, established a dedicated ranking for them. The legal reasoning in such decisions implies that, since ISPs must fulfil their public law monitoring obligations, they should also be aware of potential copyright infringement within the content being monitored.

Platforms therefore face a dilemma: if they fail to fulfil the monitoring obligation set by public law, they are deemed to have committed an act contributing to the occurrence of the infringement, for which they must assume administrative liability; at the same time, fulfilling that obligation requires ex ante monitoring of uploaded content, which means they have constructive knowledge of the existence of infringing content and thus may bear a higher duty of care. Where infringing content appears on a platform, the platform is likely to be deemed to have knowledge of its existence and thus be held liable. In particular, law enforcement agencies are prone to ‘results-oriented’ reasoning, presuming that ISPs have failed to fulfil their monitoring obligations.

Overall, the regulation of content moderation serves as a ‘policy lever’ used by public authorities to obtain control over the big tech powerhouses. At the same time, platforms are vested with a potent power that has substantially mitigated not only illicit but also ‘lawful but awful’ online content. However, this has accelerated the fragmentation of online law enforcement and generated the need for algorithmic recommendation and filtering systems. In the long run, excessively vague rules and inconsistent enforcement, paired with excessive reliance on algorithms, will render the expansive collateral censorship of online content an inevitable failure, since it burdens ISPs with significant compliance costs and impacts freedom of expression, access to information, and media pluralism at large.

The paper Making the private public: Regulating content moderation under Chinese law was published in the Computer Law & Security Review. Baiyang Xiao is a PhD candidate at the University of Szeged, Institute of Comparative Law and Legal Theory. He is also a scholarship holder at the Max Planck Institute for Innovation and Competition. His main research interests are copyright law, intermediary liability, and AI governance in comparative perspective.

  1. E.g. (2004)苏中民三初字第098号民事判决书; (2008)穗中法民三终字第119号民事判决书
  2. E.g. (2021)京73民终220号民事判决书; (2019)京0491民初16240号民事判决书
