merics.org
Lofty principles, conflicting incentives: AI ethics and governance in China

Key findings
Rather than being driven entirely from the top, China’s AI ethics and governance landscape is shaped by multiple actors and their varied approaches, ranging from central and local governments to private companies, academia and the public. China’s regulatory approach to AI will emerge from the complex interactions of these stakeholders and their diverse interests.
Despite notable advances in tackling ethics issues in specific AI sectors and application areas, a large gap remains between defining broad ethical principles and norms to guide AI development and putting these into practice through standards, laws and government or corporate regulation.
This gap is not unique to China, but particularly pronounced in the Chinese context since AI is seen as a core means for fully achieving the governance vision of the Chinese Communist Party, which prioritizes state control and political security over individual rights. Genuine concern for AI ethics coexists with Beijing’s use of AI for mass surveillance and ethnic profiling.
Given China’s rapid AI advancements, its expanding presence in global standards bodies and Chinese tech companies’ growing global reach, it will be critical for the EU to engage with Chinese actors. However, European policymakers must treat the government’s rhetoric on AI ethics with great caution and push back against China’s use (and export) of AI for surveillance and other applications that threaten human rights and fundamental freedoms.
Exhibit 1
1. Introduction
Countries around the world are harnessing the transformative impact of artificial intelligence (AI) on their economies and societies. There has been much focus on the competition and rivalry between countries with advanced AI research and development (R&D) capabilities, with talk of an “AI race” between the United States and China – and to a lesser extent Europe. However, the ethical and safety risks of not getting AI right are as great as its beneficial potential. From facial recognition and recruitment algorithms carrying biases to self-driving cars endangering lives, the challenges associated with AI governance failures are enormous and require joint solutions.
Artificial Intelligence (AI) refers to both a scientific field and a broad suite of technologies that accomplish tasks generally believed to require human intelligence, such as making decisions through the collection, processing and interpretation of data. The EU Commission defines AI as “systems that display intelligent behavior by analyzing their environment and taking actions – with some degree of autonomy – to achieve specific goals.”
Under the umbrella term “AI ethics,” experts are discussing questions such as what role AI systems should play in our societies, what risks they involve and how we should control them. In recent years, professional associations, companies, governments and international organizations have published a plethora of AI ethics principles and guidelines. Several European countries and organizations have played a pivotal role in these efforts, with the EU strongly advocating for the development of risk frameworks and legislation to ensure “trustworthy” AI, cemented in April 2021 in a proposal for the world's first dedicated AI regulations.
Understanding Chinese approaches to AI ethics and governance is vitally important for European stakeholders. China will be a fundamental force in shaping the trajectory of AI innovation and adoption as well as the way in which AI will be governed. It has embraced AI and aims to become the world’s primary AI innovation center by 2030. Chinese policymakers are paying increasing attention to ethics in the context of AI governance, having issued multiple related principles. Behind such initiatives there is a web of public and private players, interests and voices.
This MERICS Monitor provides an analysis of China's emerging AI ethics and governance landscape. It examines three issues:
The various approaches to AI ethics taken by government, corporate, academic and civil society actors in China
Ethical issues related to specific applications (healthcare, autonomous driving and public security) and how they are being addressed
China’s role in global AI ethics and governance efforts and its implications for European stakeholders
2. Beijing's strategic consideration and approach to AI ethics and governance
The government’s ambition to lead the world in AI is accompanied by its growing attention to the technology’s governance. In 2018, President Xi Jinping called for the “healthy development” of AI through the establishment of laws, ethics, institutional mechanisms and regulations.1 In the leadership’s view, it is of utmost importance to research and prevent both the short-term risks AI systems pose, such as privacy and intellectual property infringements, and the long-term challenges they could pose to the economy, social stability and national security, such as unemployment and changes to social ethics.
2.1 China’s policymakers pay increasing attention to ethics in the context of AI governance
Starting with the publication in 2017 of the State Council’s New Generation Artificial Intelligence Development Plan (AIDP), the government expressed its intention to tackle ethical issues arising from AI systems. The plan states that by 2025 China will set up an initial system of laws, regulations, ethical norms and policies as well as a security assessment framework to “ensure the safe, reliable and controllable development of AI.” A comprehensive system should be established by 2030. The AIDP calls for strengthening research on legal, ethical and social issues. It also urges measures like an ethical framework for human-machine collaboration and codes of conduct for personnel in AI product R&D and design.2
Since then, several principles and white papers have been issued to guide AI governance (see Exhibit 1). In a 2018 AI Standardization White Paper, the Chinese Electronics Standards Institute (CESI) recommended three overarching ethical considerations for AI: “human interest,” “liability” and “consistency of rights and responsibilities.” The document discusses safety, ethical and privacy issues and reflects the government’s wish to use technical standardization as a tool in domestic and global AI governance efforts.3
In 2019, the Ministry of Science and Technology (MOST) issued the Governance Principles for a New Generation of AI, which put forward eight principles for developing “responsible AI.”4 Drafted by a dedicated expert group, the Governance Principles are the most official formulation of China’s approach to AI ethics to date.
Understanding the terms the government uses is necessary to gauge its vision for AI governance. Reference to “human rights” in the Governance Principles does not imply endorsement of liberal democratic values, while “societal security” implies maintaining stability by prioritizing collective wellbeing, as defined by the Chinese Communist Party (CCP), over individual freedoms. Additionally, the concept of human-machine harmony, read alongside the AIDP’s call for strengthening “public opinion guidance,” may indicate the intent to prepare society for greater data-driven monitoring and governance through AI.
2.2 The government directs a multi-stakeholder conversation on AI ethics
While the debate on AI ethics is overseen by Beijing and takes place within the strict limits of the party-state’s goals and interests, it is a multi-stakeholder conversation.
MOST’s AI governance committee comprises experts from leading universities, the Chinese Academy of Sciences and private AI companies. The Beijing AI Principles, a key document that preceded the Governance Principles, also resulted from a deliberation involving universities and companies under the leadership of the Beijing Academy of Artificial Intelligence (BAAI), a leading AI research institute backed by MOST and the Beijing municipal government.5
The third seminal set of principles, the Joint Pledge on AI Industry Self-Discipline, similarly emerged from a consultation between different players.6 Its process was launched by the Artificial Intelligence Industry Alliance, an association of universities and tech firms led by the China Academy of Information and Communications Technology (CAICT) of the Ministry of Industry and Information Technology (MIIT), the top government-affiliated think tank for tech policy issues.
Exhibit 2
A central feature of all these discussions is their applied approach. To drive implementation at the local level, MOST is encouraging municipal governments to step up relevant work in AI pilot zones (see Exhibit 2). Additionally, both the Beijing AI Principles and the Joint Pledge focus on applicable and action-oriented goals and measures to ensure that the trajectory of AI development throughout the lifecycle of systems, from R&D to commercialization, is beneficial for society.7
2.3 Safeguarding stability is a key objective of China’s AI strategy
The government’s rhetoric and attention to ethics can appear hypocritical given its use of AI for mass surveillance, repression and ethnic profiling (see section 4.3). However, from the perspective of China’s leadership and of its moral and ethical frameworks, this poses no contradiction. National security and stability are the highest collective goods, taking priority over personal privacy, transparency, accountability and individual human rights.
The CCP sees security and stability as preconditions as well as products of economic development, a key objective of China’s AI strategy. A major goal of the AIDP is the modernization of social governance, which entails not only the optimized provision of public services but also the construction of a modernized socialist society through, for example, the use of AI to “grasp group cognition and psychological changes.”8
Additionally, the party justifies its control over the legal system by arguing for the need to ward off threats from internal and external enemies to meet the superior goal of preserving political security.9 Thus, from the CCP’s perspective, the use of AI against a part of the population it sees as a terrorist threat to society, as is the case with Uighurs in Xinjiang, can coexist with efforts to ensure that AI systems do not cause harm to the majority.
Ethical questions about algorithmic decision-making are framed around the interests of the collective – of which the party-state claims to be the sole legitimate representative – rather than the individual.10 This logic also explains why the emerging data protection regime aims to impose restrictions on companies’ ability to collect personal information but leaves the government with nearly unrestrained power to harvest and use citizens’ data for public security and law enforcement.11
3. How industry, academia and civil society drive forward ethical AI
3.1 Industry plays a pivotal role in shaping Chinese discussions
China’s leadership sees industry as a key driver in coordinating self-regulation, research and education on AI ethics, though regulators ultimately set governance rules. It has highlighted the importance of corporate self-regulation, with a recent white paper published by the CAICT identifying companies as the main AI governance entities in the near-term.12 Many leading tech companies and startups have issued calls to address governance and ethics issues related to the development and commercialization of AI applications. They are also joining multi-stakeholder efforts to develop ethics principles and industry standards for responsible AI development, while initiating their own research and principles to tackle ethics issues.
Many companies were directly or indirectly involved in each of China’s three seminal AI documents, of which the Joint Pledge is an industry commitment to self-regulation. The seven members of MOST’s AI governance committee, for example, include two executives from e-commerce giant JD.com and facial recognition unicorn Megvii, demonstrating that companies are directly involved in the formulation of policy recommendations and guiding documents such as the Governance Principles.
Tech giants and AI startups are founding members of the previously mentioned BAAI and other key industry alliances behind AI principles and white papers.13 Baidu and Tencent have also submitted proposals on AI ethics directly to China’s leadership.14 Many companies are meanwhile active participants in domestic standard-setting activities related to AI.15
Corporate self-regulation has thus far primarily taken the shape of high-level ethics codes. Most notably, Baidu, Tencent and Megvii have issued documents that put forward ethics principles to guide their own and the industry’s development of AI. All three highlight similar notions such as the importance of technical robustness and safety, human oversight, data privacy and accountability. Tencent’s AI principles are the most detailed principles developed by a Chinese company so far. Issued in 2018, they urge for AI to be available, reliable, comprehensible and controllable, and highlight specific issues such as algorithmic transparency.16
Companies also conduct extensive research into governance and ethics issues through dedicated departments. Their research, much of which predates the government’s increased attention to AI ethics, ranges from techniques for preserving privacy in machine learning to methods for protecting against adversarial attacks on deep-learning systems.17
CEOs and AI executives also advocate for interdisciplinary exchanges and collaborative action on AI ethics, while positioning themselves as thought leaders on AI governance issues at key industry forums such as Shanghai’s annual World AI Conference. Some also raise public awareness of the risks of AI applications in everyday life, through campaigns such as AI for Good.18
While many tech companies and AI startups clearly recognize the importance of governing the societal and ethical impact of AI, few have institutionalized steps that turn high-level commitments into concrete procedures. Their AI ethics research and principles, while representing good-faith intentions, mostly lack concrete implementation measures that address the specific issues they identify, from algorithmic fairness to data privacy.
Megvii is one of the few companies to create internal structures such as an AI Ethics Committee to oversee the implementation of its AI principles. This committee is said to make recommendations to the board based on internal investigations and a whistleblowing procedure. However, one listed international member says he never joined the committee and it remains unclear what kinds of changes – if any – it has effected.19
It seems logical for companies to be at the forefront of identifying and addressing the harmful impacts of AI applications, given that they research, develop and deploy AI in real-life situations. They are also incentivized to anticipate and address the risks of their AI products and services to avoid backlash from regulators or the general public.
However, for now it is still unclear whether corporate AI ethics declarations are leading to meaningful changes in internal research and development processes, or whether they are ultimately empty commitments that serve only to enhance companies’ reputation. Companies are also commonly reluctant to implement potentially costly and time-intensive mechanisms to ensure safe and ethical AI products.
The close relationship of tech and AI companies with the government adds an additional layer of complication since the government not only provides extensive policy support but is also often a major client for corporates. Companies’ pledges on AI ethics thus often stand in stark contrast to their sale of AI products such as facial recognition or ethnic minority analytics tools to the public security apparatus (see section 4.3).
3.2 Chinese academic research also shapes AI ethics discussions
Academic research on the social and ethical implications of AI is increasingly informing discussions about AI governance in China. A review of relevant publications since 2017 reveals that although research efforts approach the issue from various angles, most are still limited to conceptualizing the changes brought about by AI and suggesting normative and regulatory frameworks. Critical research on specific applications is mostly lacking, although there are notable exceptions.20
Ethics research is conducted through state-sponsored projects and individual scholars’ initiatives. China’s two leading research institutes under the aegis of the State Council – the Chinese Academy of Sciences (CAS) and the Chinese Academy of Social Sciences (CASS) – undertake relevant work, some of which is sponsored by China’s largest public research fund for social sciences, the National Social Science Fund of China. One project led by the Institute of Automation at CAS explores issues like the relationship between humans and AI and challenges associated with determining liability. CAS-sponsored researchers also apply social science research to practical problems, such as social ethics issues caused by the introduction of robots into families.21
Several prominent scholars are particularly influential in driving forward ethics research. At CASS, Duan Weiwen (段伟文) – one of China’s most prominent thinkers on philosophical, ethical and social issues surrounding AI and Big Data – leads a Science, Technology and Society Research Center. Duan frequently emphasizes that innovation runs faster than ethics, which requires targeted work to tackle ethical risks in specific technology application scenarios rather than abstract prescriptions. He also advocates for public participation and oversight in ethics matters.22
Some researchers approach AI ethics from the perspective of traditional Chinese philosophy. CAS-affiliated Zeng Yi (曾毅) spearheaded the formulation of Harmonious Artificial Intelligence Principles, which are based on the concept of “harmony” in Chinese philosophy. These principles emphasize harmony between humans and machines, a concept that is also present in the Beijing AI Principles, and advocate for a positive symbiosis between the two. In addition to playing a leading role in drafting several seminal documents mentioned in this report, Zeng drives major applied ethics research efforts in areas like brain-inspired neural network architectures.23
Renmin University’s Guo Rui (郭锐), another prominent scholar and government advisor, focuses on translating ethical guidelines into an actionable governance system. Guo has advocated for companies to set up ethics committees, and in his latest book examines the ethical risks of specific AI applications, from precision marketing and content recommendation algorithms to sex robots and smart courts.24
Chinese academia actively engages in global exchanges on AI ethics. This aligns with the government’s call to increase the country’s “discourse power” (话语权) in the field. A prominent example of the interplay between scholarly exchanges and the state’s soft power ambitions is the Berggruen China Center, established by Peking University and the Berggruen Institute in 2018 with the stated goal of engaging Chinese thinkers to “examine, share and develop ideas to address global challenges.”25 AI ethics is one of the center’s main research areas. Additionally, in 2020 Tsinghua University established the Institute of Artificial Intelligence Global Governance to “actively contribute Chinese wisdom” and shape the field.26
While promoting official Chinese global governance concepts is an important goal behind these initiatives, it would be wrong to view all academic research and collaborations as being driven by the state’s aims. The diverse range of individual research initiatives reflects scholars’ genuine aspiration to make AI beneficial for mankind, as well as to overcome political tensions and cultural barriers between China and the West to advance cooperation. Xue Lan (薛澜), the Director of Tsinghua’s abovementioned institute, has warned that geopolitical tensions between China and the United States are having a chilling effect on industry and policy exchanges in the AI field, which may hinder cooperation on global AI governance.27
The BAAI has emerged as China’s leading AI research institute and a hub for multi-stakeholder and international collaboration. The institute has a research center, led by Zeng, which is dedicated to investigating AI ethics, governance and solutions for sustainable development. To foster international dialogue, a recently published study by BAAI and researchers at Cambridge University urges academia to play a greater role in overcoming cultural barriers to collaboration on AI ethics and governance.28
Chinese academia seems to be gaining influence in official government efforts to govern AI. Xue and Zeng, for instance, are also members of MOST’s AI Governance Committee. Yet it remains to be seen to what extent scholars will be able to directly influence government policy, corporate practices and regulation towards higher ethical standards.
3.3 Public pushback on AI risks has led to some regulatory changes
While the public is generally not seen as the decisive force in China’s AI development, Chinese citizens are pushing for ethical constraints on some use cases. Despite the common perception in the West that Chinese people are particularly trusting of new technologies, there is growing awareness, debate and occasional pushback related to the risks of AI. In some cases, this has led to policy changes and corporate self-regulation.
Chinese consumers care about the protection of their personal information. When in 2018 Baidu’s CEO Robin Li said Chinese people were less sensitive about privacy and more willing to trade it for convenience, he faced intense opposition on social media. During the Covid-19 outbreak, the use of monitoring apps that collect health information and location data also provoked public criticism due to concerns over discrimination and the erosion of privacy.29
In recent years, consumer backlash has played a key role in holding Chinese tech companies accountable for data privacy violations and