A Note providing an overview of the legal framework governing AI ethics in China. The Note examines the role of governmental and non-governmental organisations, addresses key ethical issues such as privacy, transparency, bias, and accountability, and discusses China’s participation in global AI governance initiatives. It covers professional responsibility in AI development, AI-induced negligence, and challenges in the legal framework. The Note also suggests future directions for AI ethics in China, including the potential development of a comprehensive AI law and national standards.
By the mid-2010s, China (PRC) had emerged as a global leader in AI research and development (R&D), with local tech companies investing heavily. Since 2023, China has witnessed an explosion in AI R&D, fuelled by significant investments from industry giants such as Alibaba, Baidu, ByteDance, and Tencent.
This growth is driven by strong government support, massive data pools, widespread adoption, and a large output of research papers and patents, and it occurs amidst intensifying global technological competition.
As AI becomes more integrated into daily life, concerns about privacy, security, employment, and social stability have led the Chinese government, academic institutions, and tech companies to consider ethical frameworks to guide AI development and use.
Emergence of AI Ethics as a Concern
In the 2010s, as AI technologies became more integrated into daily life, concerns about their ethical implications emerged. The Chinese government, academic institutions, and tech companies recognised the need for frameworks to guide ethical AI development and use, addressing issues of privacy, security, employment, and social stability.
During this period, high-profile incidents involving the misuse of facial recognition technology, data privacy concerns, and targeted advertising abuse highlighted the potential risks of unchecked AI development.
The overall environment in China during the late 2010s likely contributed to widespread concerns about ethics in science and technology.
In July 2019, the government established the National Science and Technology Ethics Committee (国家科技伦理委员会) (National Ethics Committee) to promote the development of a more comprehensive, ordered and co-ordinated governance system for science and technology.
The National Ethics Committee set up a subcommittee for AI, formally incorporating AI into the national science and technology ethics regulatory system (see AI Subcommittee of the National Ethics Committee).
Measures for the Review of Science and Technology Ethics (Trial)
AI science and technology are developing faster than laws and regulations can adapt.
Against this background, the Ministry of Science and Technology (MOST) promulgated the Measures for the Review of Scientific and Technological Ethics (Trial) 2023 (2023 Ethics Measures) to ensure that entities fully assess the ethical implications of their AI scientific R&D activities, among other things.
Ethical Review Requirements
Under the 2023 Ethics Measures, entities conducting AI R&D in sensitive ethical areas must set up an ethics review committee (Article 4).
The 2023 Ethics Measures provide that ethics review committees should adhere to the following guidelines:
- Composition and appointment. Committees must consist of at least seven members appointed for terms of up to five years, with the possibility of re-appointment (Article 7).
- Expertise requirements. Members should include peer experts with relevant scientific and technical backgrounds, as well as those in ethics, law, and other related fields (Article 7).
- Diversity and inclusion. Committees should include members of different genders and individuals from outside the unit. For ethnic autonomous areas, committees must include members familiar with local conditions. (Article 7.)
- Integrity and co-operation. Members should have a good track record of integrity and co-operate with other tasks arranged by the committee (Article 8).
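The Article 7 composition rules above are concrete enough to express as simple checks. The sketch below is purely hypothetical and illustrative (the Measures regulate committees, not software, and every name and field here is an assumption):

```python
from dataclasses import dataclass

# Hypothetical sketch only: the 2023 Ethics Measures regulate committees, not
# software. This simply encodes the Article 7 composition rules as checks, and
# all names and fields are illustrative assumptions.

@dataclass
class Member:
    name: str
    gender: str
    expertise: str   # e.g. "ai", "ethics", "law"
    external: bool   # appointed from outside the unit
    term_years: int

def check_committee(members: list[Member]) -> list[str]:
    """Return a list of apparent Article 7 composition problems (empty if none)."""
    problems = []
    if len(members) < 7:
        problems.append("fewer than seven members")
    if any(m.term_years > 5 for m in members):
        problems.append("a term exceeds five years")
    if len({m.gender for m in members}) < 2:
        problems.append("members are not of different genders")
    if not any(m.external for m in members):
        problems.append("no member from outside the unit")
    if not {"ethics", "law"} & {m.expertise for m in members}:
        problems.append("no member with an ethics or law background")
    return problems
```

A compliant seven-member committee produces an empty problem list; removing a member immediately flags the minimum-size rule.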
AI R&D should undergo an ethics risk assessment before initiation.
Ethics review committees should review AI R&D activities within the scope of the 2023 Ethics Measures. The scope includes:
- Scientific and technological activities that do not directly involve humans or experimental animals but may pose ethical risks and challenges, for example in areas such as life and health, the ecological environment, public order, and sustainable development.
- Other scientific and technological activities that require ethics reviews in accordance with laws, regulations, and relevant national provisions.
(Articles 2 and 9.)
Expert Re-Evaluation
Certain AI R&D activities are subject to expert re-evaluation, which is a type of government ethics review. Activities include:
- Research on synthesising new species that significantly impact human life and values, the ecological environment, and so on.
- Research related to introducing human stem cells into animal embryos or foetuses, and their subsequent development into individuals in utero in animals.
- Fundamental research involving alterations to the genetic material or genetic patterns of human germ cells, fertilised eggs, and pre-implantation embryos.
- Clinical research on invasive brain-computer interfaces for treating neurological and mental health disorders.
- Developing human-machine fusion systems that strongly impact human subjective behaviour, psychological emotions, and overall well-being.
- Developing algorithm models, applications, and systems that have the ability to mobilise public opinion, shape social awareness, or influence behaviour.
- Developing highly autonomous decision-making systems for scenarios involving human safety and health risks.
(Article 25 and Appendix, 2023 Ethics Measures.)
Conducting an Ethics Review
An ethics review committee must check:
- Whether the proposed AI R&D complies with scientific and technological ethics principles. The participating personnel, research infrastructure, and facility conditions must also meet relevant requirements.
- Whether the AI R&D may generate new or useful information, have scientific or social value, improve human welfare, or realise sustainable social development. There should also be a reasonable risk-to-benefit ratio. Risk control and incident response plans should be scientific, appropriate and viable.
- Whether AI R&D recruitment schemes involving human research participants are fair and reasonable, and personal privacy data, biometric and other sensitive information is processed following personal information (PI) protection laws. Informed consent processes must also be compliant and appropriate.
- That reviews for scientific and technological activities involving data and algorithms:
- cover data collection, storage, processing, and use activities;
- cover the R&D of new data technologies;
- comply with relevant national regulations on data security and PI protection; and
- have reasonable data security risk monitoring, emergency response plans, ethical risk assessments, and user rights protection measures, as appropriate.
- That conflict-of-interest statements and management plans are reasonable.
(Article 15, 2023 Ethics Measures.)
Ethical Compliance
Under the 2023 Ethics Measures, entities must:
- Establish an ethics review committee.
- Provide necessary staff, office space, and funding for performing ethics reviews.
- Take measures to ensure that ethics review committees can independently conduct ethical review work.
(Article 4.)
Ethics review committees must:
- Abide by China’s constitution, laws and regulations, and ethical norms of science and technology.
- Develop and improve management systems and work standards.
- Provide ethics consultations and guide personnel in conducting ethics risk assessments.
- Conduct ethical reviews and track and supervise the entire process of AI R&D.
- Determine whether AI R&D falls within the scope of key scrutiny.
- Organise training for committee members and personnel.
- Accept and assist in investigating complaints and reports.
- Register, report, and co-operate with relevant departments for regulatory work.
(Articles 5 and 8.)
To ensure compliance with the 2023 Ethics Measures, entities should create and implement several internal policies, procedures, and guidelines.
Entities have a clear legal obligation to consider ethical issues. However, the legal framework for AI ethics remains vague and fragmented.
This could result in:
- Ethics committees facing difficulties when making decisions.
- Different ethics committees and regulators reaching inconsistent conclusions on ethical issues.
National Policies Addressing AI Ethics
This section covers the key policies, opinions, and guidelines governing the AI ethical landscape in China.
Because these documents overlap significantly, principles common to several of them are not repeated here. Instead, the discussion focuses on novel or noteworthy concepts.
2017 AI Plan
The release of the Next Generation Artificial Intelligence Development Plan 2017 (2017 AI Plan) by the State Council was a seminal moment for AI in China.
The 2017 AI Plan:
- Sets ambitious goals for China to become the world leader in AI by 2030.
- States the need for ethical standards in AI development and calls for integrating ethical considerations into AI research, development, and deployment.
- Emphasises the importance of formulating laws, regulations, and ethical norms to promote the development of AI.
It sets out specific requirements, including:
- Strengthening research on legal, ethical, and social issues related to AI.
- Establishing a legal and ethical framework to ensure the healthy development of AI.
- Accelerating the research and formulation of relevant safety management regulations in areas such as autonomous driving and service robots.
The 2017 AI Plan calls for research on:
- Legal issues related to civil and criminal liability confirmation.
- Privacy and property protection.
- Information security in AI applications.
It also emphasises the need to establish a system of accountability and clarity on legal subjects, rights, obligations, and responsibilities concerning AI.
Additionally, the 2017 AI Plan calls for:
- Developing ethical norms and a code of conduct for AI R&D personnel.
- Enhancing the assessment of potential AI benefits and risks.
- Establishing solutions for emergencies in complex AI scenarios.
It advocates for active participation in global AI governance and research on major international AI issues (such as robot alienation and safety supervision). It also encourages international co-operation in AI laws, regulations, and international rules to collectively address global challenges.
2019 Beijing AI Principles
In 2019, a group of Chinese academic institutions led by the Beijing Academy of Artificial Intelligence released the Beijing AI Principles 2019 (2019 BJ Principles).
These principles provided one of the first comprehensive ethical frameworks for AI in China, addressing fairness, transparency, privacy, and security issues.
The 2019 BJ Principles were significant because they reflected a growing awareness within China’s AI community of aligning AI development with ethical norms.
2019 AI Principles
The Chinese government issued the Next Generation AI Governance Principles 2019 (2019 AI Principles) shortly after the 2019 BJ Principles.
The 2019 AI Principles aim to guide AI governance in China. They comprise eight main principles:
- Harmony and friendliness.
- Fairness and justice.
- Inclusivity and sharing.
- Respect for privacy.
- Safety and controllability.
- Shared responsibility.
- Open collaboration.
- Agile governance.
The 2019 AI Principles incorporate directives on respecting privacy, ensuring security, and promoting transparency and accountability in AI systems. These principles also emphasise the importance of international co-operation in AI ethics.
2020 Education Opinions
To implement the 2017 AI Plan, several government departments issued the Several Opinions on the Construction of Double First-Class Universities to Promote the Integration of Disciplines and Accelerate the Cultivation of Postgraduates in the Field of AI 2020 (2020 Education Opinions).
This move represents a form of governmental intervention in the higher education system aimed at accelerating AI development.
The 2020 Education Opinions mandate:
- Strengthening AI research ethics education (Chapter 2, Article 3).
- Promoting relevant international standards and ethical norms (Chapter 4, Article 11).
- Cultivating talent prepared for global AI governance (Chapter 4, Article 11).
2021 AI Code of Ethics
The National Next Generation Artificial Intelligence Governance Expert Committee (国家新一代人工智能治理专业委员会) (AI Expert Committee), established under MOST in 2019, issued the Ethical Norms for Next Generation Artificial Intelligence 2021 (2021 AI Code of Ethics).
The code is non-binding but influential. It encourages entities to adopt ethical considerations throughout the entire AI life cycle, in order to:
- Promote fairness, justice, harmony and safety.
- Prevent issues such as prejudice, discrimination, and privacy and data leaks.
(Article 1.)
The 2021 AI Code of Ethics emphasises the following key principles:
- Enhancing human well-being.
- Promoting fairness and justice.
- Protecting privacy and safety.
- Ensuring controllability and reliability.
- Enhancing accountability.
- Improving ethical literacy.
(Article 3.)
The code also provides guidelines for management, R&D, supply, and usage practices.
Overall, it encourages the responsible development of AI technologies, stresses the need to protect individual rights, and promotes fairness. It is considered a step towards safely integrating AI into society.
2022 Judicial Opinions
China began using and testing AI within its judicial system in the late 2010s.
The Opinions of the Supreme People’s Court on Regulating and Strengthening the Judicial Application of AI 2022 (2022 Judicial Opinions) provide guidance for regulating and strengthening the application of AI in the judicial field.
It espouses the basic principles of:
- Safety and legality. This principle includes the notable concept of promoting "harmony and friendship between man and machine." This is uncommon in ethical frameworks relating to AI, and although not explicitly stated, "friendship" could suggest recognition of AI having some degree of personhood.
- Fairness and justice. This requires ensuring that AI products and services are free from discrimination and prejudice. Technological interventions, including model or data deviations, should not compromise the fairness of trial processes and outcomes.
- Auxiliary adjudication. The explanation of this principle clearly states that AI should be used to support judges, not replace them.
- Transparency and trustworthiness. This requires that every aspect of AI systems is interpretable, testable, and verifiable.
- Public order and good customs. This refers to integrating core socialist values (CSVs) into the entire process of judicial AI technology.
2022 CPC Opinions
On 20 March 2022, the General Office of the Communist Party of China Central Committee and the State Council jointly issued the Opinions on Strengthening the Governance of Science and Technology Ethics 2022 (2022 CPC Opinions).
The 2022 CPC Opinions outline the values and behavioural norms that scientific research, technological development, and other similar activities should follow.
Opinion 2 sets out the following ethical principles:
- Improve human well-being.
- Respect for the right to life. (While this principle does not prohibit animal experimentation, such practices must be reduced, replaced and optimised where possible.)
- Adhere to fairness and justice.
- Take reasonable control of risks.
- Be open and transparent.
Opinion 4 elaborates further on ethical considerations:
- Item 2 proposes the exploration of ethical certification measures.
- Item 3 recommends strengthening ethics laws pertaining to AI and elevating crucial ethical norms to the status of law.
2023 Research Guidelines
In December 2023, the Department of Supervision of MOST issued the Guidelines on Code of Conduct for Responsible Research 2023 (2023 Research Guidelines). The guidelines set out scientific ethics and academic research norms that should generally be followed during scientific research.
While the 2023 Research Guidelines are not AI-focused, they provide that:
- Generative AI (GenAI) may not be listed as a co-author (Section 4.7).
- GenAI must not be directly used to generate scientific research project application materials (Section 1.1(2)).
- Content marked as AI-generated by other authors should not generally be cited as original literature. Where it does need to be cited, an explanation should be provided. (Section 3.4.)
- Peer reviewers should be careful when using AI during the review process (Section 5.3(6)). The consent of the review activity organiser should be obtained in advance (Section 6.1(7)).
- Authors should disclose whether they use GenAI (Section 5.3(3)).
Standardised Guidelines for Ethical Governance of AI 2023
The Standardised Guidelines for Ethical Governance of AI 2023 was prepared to implement the 2022 CPC Opinions.
It derives the following ten ethical guidelines for AI from the principles stated in the 2022 CPC Opinions:
- Improving human well-being: Human-oriented; Sustainability.
- Respect for the right to life: Collaboration; Privacy.
- Adhere to fairness and justice: Fairness; Sharing.
- Reasonable control of risks: Security; Safety.
- Be open and transparent: Transparency; Accountability.
Legal Framework for AI Ethics in China
The Chinese legal framework governing AI ethics is contained in a patchwork of laws and regulations, including:
- Cybersecurity Law 2016 (2016 CSL, with effect from 1 June 2017).
- Data Security Law 2021 (2021 DSL).
- Personal Information Protection Law 2021 (2021 PIPL).
- Anti-Monopoly Law 2022 (2022 AML).
- Anti-Unfair Competition Law 2019 (2019 AUCL).
- Internet Information Service Algorithm Recommendation Management Regulations 2021 (2021 Recommendation Algorithm Regulations, with effect from 1 March 2022).
- Law on Scientific and Technological Progress 2021 (2021 Science Law).
- Provisions on Ecological Governance of Network Information Content 2019 (2019 Network Content Provisions).
- Interim Measures for the Administration of Generative AI Services 2023 (2023 GenAI Measures).
The Chinese government indicates that it is in the process of drafting a general AI law (see Legal Update, State Council Releases 2024 Legislative Plan). It is unclear when this law will be finalised.
2016 CSL
The 2016 CSL applies to:
- The construction, operation, maintenance, and use of networks.
- The supervision and administration of cybersecurity by network operators (who are defined as network owners, administrators, and network service providers).
Due to the nature of AI, businesses operating in the AI sector often fall within the definition of network operators.
The 2016 CSL requires network operators to:
- Abide by laws and administrative regulations.
- Show respect for social moralities.
- Follow business ethics.
- Act in good faith.
- Perform the obligation of cybersecurity protection.
- Accept supervision by the government and social public.
- Undertake social responsibilities.
(Article 9.)
Some concepts within Article 9 can be the subject of dispute. For instance:
- Business ethics are a recurring topic in unfair competition litigation.
- Good faith can be an issue in contract disputes.
2021 DSL
The 2021 DSL applies to data handling activities carried out in China and the security of such activities (Article 2).
Given the broad definition of data, the 2021 DSL applies to virtually all business entities in China. Due to this comprehensive scope and the nature of AI, all businesses in the AI sector are likely subject to the 2021 DSL.
The 2021 DSL provides that during data handling activities, entities must (among other things):
- Observe laws and administrative regulations.
- Respect social public morals and ethics.
- Follow commercial and professional ethics.
- Uphold sincerity and trustworthiness.
- Fulfil data security protection obligations.
- Undertake social responsibilities.
(Article 8.)
While the requirements of the 2021 DSL are similar to those of the 2016 CSL, the ethical requirements in the 2021 DSL appear to be slightly wider in scope. However, the extent of this expansion remains unclear.
2021 PIPL
The 2021 PIPL applies to:
- PI processing activities in China.
- Certain PI processing activities targeting individuals in China.
While the 2021 PIPL does not explicitly mention ethics or morals, it incorporates several high-level principles that can be interpreted as ethical guidelines.
For instance, Article 5 provides that PI should be processed in accordance with the principles of lawfulness, legitimacy, necessity and good faith, and not in any manner that is misleading, fraudulent, or coercive.
Article 24 contains specific obligations applicable to AI ethics. It states that:
- Where PI processors use automated decision-making to process PI, they must:
- ensure transparency in the decision-making process;
- ensure fairness and impartiality of the results; and
- avoid implementing unreasonable differential treatment of individuals regarding transaction prices or other terms.
- Where automated decision-making is used in business marketing or information push services, individuals must be provided with:
- an option to opt out of targeting based on their personal characteristics; or
- an easily accessible method to refuse such information.
- If an automated decision significantly impacts an individual’s rights and interests, the individual has the right to:
- request an explanation from the PI processor; and
- refuse decisions made solely through automated processes.
Though not explicitly stated, Article 24 suggests the ethical principles of transparency, fairness, impartiality, non-discrimination, and autonomy when using AI to process PI.
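As a purely hypothetical illustration of the Article 24 opt-out requirement, a content-push service could fall back to non-personalised content when an individual declines targeting based on personal characteristics. All function and field names below are invented for this sketch:

```python
# Hypothetical sketch of the Article 24 opt-out: every name here is invented.
# When an individual declines targeting based on personal characteristics,
# the service falls back to non-personalised content.

def select_push_content(user_profile: dict, opted_out: bool,
                        personalised: list[str], generic: list[str]) -> list[str]:
    """Return push content that honours the individual's opt-out choice."""
    if opted_out:
        # Article 24: no targeting based on personal characteristics.
        return generic
    return rank_by_profile(user_profile, personalised)

def rank_by_profile(profile: dict, items: list[str]) -> list[str]:
    # Placeholder ranking: prefer items matching a declared interest.
    interest = profile.get("interest", "")
    return sorted(items, key=lambda item: interest not in item)
```

Article 24 also gives individuals a right to an explanation of significant automated decisions, which in practice implies logging how each decision was reached.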
2019 AUCL
The 2019 AUCL was enacted to promote the healthy development of the socialist market economy.
It aims to:
- Encourage and protect fair competition.
- Prevent acts of unfair competition.
- Safeguard the legitimate rights and interests of business operators and consumers.
(Article 1.)
When carrying out production or business activities, a business operator must:
- Follow the principles of voluntariness, equality, fairness, and good faith.
- Abide by laws and observe business ethics.
(Article 2.)
In the context of AI training, the following actions are generally considered to violate the aforementioned principles:
- Ignoring a website’s Robots.txt file or user agreements.
- Overusing or misusing scraped data.
- Disrupting or hindering the normal operation of legitimate online services or products.
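As a practical illustration of the first point, Python's standard library includes `urllib.robotparser` for honouring a site's Robots protocol before crawling training data. This sketch parses a robots.txt from a string so it is self-contained; in practice the file would be fetched from the target site, and the user-agent name here is an assumption:

```python
from urllib import robotparser

# An illustrative robots.txt. In practice this would be fetched from the
# target site, e.g. via RobotFileParser.set_url() and .read().
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

def may_crawl(url: str, user_agent: str = "example-training-bot") -> bool:
    """Honour the Robots protocol: crawl only URLs the site permits."""
    return parser.can_fetch(user_agent, url)
```

Here `may_crawl` returns False for anything under `/private/`, so a scraper that checks it before each request avoids ignoring the site's Robots.txt file.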
In practice, business ethics has been a point of discussion in several litigations involving data and algorithms under the 2019 AUCL and its 2017 and 1993 predecessors, as the following cases illustrate.
Yi Zhong Min Chu
In Yi Zhong Min Chu [2013] No. 2668, Company A accused Company B of violating the Robots protocol on Company A’s website by crawling and providing content from Company A’s website as search results to users.
One of the key disputes between the two parties was whether B’s non-compliance with the Robots protocol constituted a violation of business ethics.
Before this case, 12 search engine service companies in Beijing, through the China Internet Association, jointly established the Internet Search Engine Service Self-Discipline Convention 2012.
The convention explicitly stipulates the following provisions, which were adopted by the Beijing First Intermediate People’s Court:
- Restrictions on search engine crawling should be based on industry-recognised reasonable justifications.
- Robot protocols should not be used for unfair competition.
(Article 8.)
The court recognised this convention as an industry consensus among leading search engine companies, which are highly representative and dominate much of the market. This reflects the industry’s recognised business ethics and standards of behaviour.
However, the court also opined that the healthy development of the market requires an orderly market environment and fair market competition rules as safeguards.
The court further observed that Company B, in launching its search engine services, published the content and setting methods of the Robots protocol on its website. This action, the court reasoned, indicates that the entire internet industry, including Company B, recognises and complies with the Robots protocol.
Consequently, the court held that the Robots protocol should be recognised as:
- A prevailing rule in the industry.
- The business ethics that should be followed in the search engine industry.
Shanghai 73 Min Zhong
In Shanghai 73 Min Zhong [2016] No. 242, the court ruled against the defendant, a website operator, who had used a significant amount of information from the plaintiff's platform without permission, in violation of recognised business ethics.
This unauthorised use substantially replaced the plaintiff’s products and services, causing harm to their interests.
The court further held that:
- When assessing the business ethics of commercial transactions, it is essential to consider the interests of operators, consumers, and the public comprehensively.
- Conduct that improperly infringes on consumer interests or harms the public interest, not only the interests of competitors, may also be considered unfair.
- In determining unfair competition in specific cases, judgments must be based on the standards of honesty and good faith, considering the impact of the behaviour on competitors, consumers, and the public.
2021 Recommendation Algorithm Regulations
The 2021 Recommendation Algorithm Regulations were enacted to:
- Regulate recommendation algorithm activities in internet-based information services.
- Promote CSVs.
- Safeguard national security and public interests.
- Protect the legitimate rights and interests of citizens, legal persons, and other organisations.
- Promote the sound and orderly development of internet-based information services.
(Article 1.)
Recommendation algorithm-based service providers should:
- Abide by laws and regulations.
- Respect social morality and ethics.
- Observe commercial and professional ethics.
- Follow the principles of being fair and equitable, open and transparent, scientific and reasonable, and act in good faith.
- Conduct science and technology reviews.
- Not set up algorithm models that:
- promote user addiction;
- encourage excessive consumption; or
- violate laws, regulations, ethics, or morals.
(Articles 4, 7 and 8.)
2021 Science Law
In January 2022, the 2021 Science Law came into effect. It strengthens the ethical framework for science and technology, including AI, by enhancing ethics review, assessment, and supervision systems.
The 2021 Science Law contains the following key ethical provisions:
- Science and technology personnel must adhere to academic norms and ethical codes, maintain professional integrity, and act in good faith. Fraud and support for superstition are strictly prohibited. (Article 67.)
- China enhances intellectual property (IP) rights protection, ethics in science and technology, and security review mechanisms in international scientific research co-operation (Article 82).
- China improves research integrity, supervision systems, and governance structures for science and technology ethics (Article 98).
- A committee on science and technology ethics is established to enhance institutional norms, ethics education, and research. Entities involved in science and technology must take primary responsibility for conducting ethics reviews. (Article 103.)
- R&D activities that harm national security, public interests or human health, or violate research integrity and ethical standards are prohibited. Serious violations must be recorded in a database of dishonest conduct. (Article 107.)
- Anyone violating the 2021 Science Law, including its ethical guidelines, must make corrections. Authorities may withdraw funding and confiscate illegal gains. In serious cases, they may publicly disclose violations, impose penalties, and ban individuals from participating in funded or licensed activities for a period. (Article 112.)
2019 Network Content Provisions
The 2019 Network Content Provisions stipulate information content requirements in cyberspace based on CSVs.
Online content producers should avoid creating, reproducing, and disseminating:
- Eleven types of illegal information.
- Nine types of harmful information.
(Articles 6-7.)
2023 GenAI Measures
Under Article 4 of the 2023 GenAI Measures, the provision and use of GenAI services must adhere to all applicable laws, regulations, social morals, and ethical standards.
This includes compliance with the following requirements:
- Adherence to CSVs, prohibiting the generation of content that:
- incites subversion of state power;
- overturns the socialist system;
- harms national security and interests;
- damages the national image;
- incites separatism;
- undermines national unity and social stability;
- promotes terrorism, extremism, ethnic hatred, national discrimination, violence, or obscenity; and
- spreads false or harmful information prohibited by laws and regulations.
- Taking effective measures during algorithm design, training data selection, model generation and optimisation, and service provision to prevent discrimination based on nationality, beliefs, country of origin, region, gender, age, occupation, health, and other factors.
- Respecting IP rights, business ethics, and preserving trade secrets.
- Refraining from using advantages in algorithms, data, and platforms to engage in monopolistic and unfair competition practices.
- Respecting the legitimate rights and interests of others, refraining from harming the physical and mental health of others, and avoiding violations of others’ rights to image, reputation, honour, privacy, and PI.
- Implementing effective measures based on the service type characteristics to enhance the transparency of GenAI services and improve the accuracy and reliability of generated content.
Selected National Standards and Guidelines
China has a growing list of national standards that provide practical guidance on regulatory compliance issues and best practices concerning AI development, deployment, and use.
Some of these national standards provide guidance on ethical issues and include:
- Network Security Standard Practice Guide – Artificial Intelligence Ethics Security Risk Prevention Guidelines (TC260-PG-20211A) (2021 AI Ethics Risk Prevention Guidelines).
- The Basic Requirements for the Security of Generative Artificial Intelligence Services (TC260-003) (2024 Basic GenAI Requirements).
2021 AI Ethics Risk Prevention Guidelines
In January 2021, TC260 released the 2021 AI Ethics Risk Prevention Guidelines, providing guidance on:
- The ethical use of AI.
- The prevention of security risks associated with AI.
The guidelines require an ethical safety risk analysis to be conducted before AI-related activities begin. The analysis must address five risk categories:
- Uncontrollability risk. AI behaviour and impact exceed the predetermined, understood, and controllable scope.
- Sociability risk. AI is unreasonably used, including abuse and misuse.
- Infringement risk. AI infringes on basic human rights, including personal, privacy, and property rights.
- Discrimination risk. AI influences fairness and justice with subjective or objective biases towards specific human groups.
- Responsibility risk. Inappropriate behaviour of various parties related to AI, with unclear responsibilities.
2024 Basic GenAI Requirements
Appendix A to the 2024 Basic GenAI Requirements lists 31 safety risks. Under these requirements, the concept of safety should be construed broadly from the perspective of multiple stakeholders.
Appendix A is structured into the following sections:
- A.1: Content in contravention of the CSVs.
- A.2: Discriminatory content.
- A.3: Commercial violations.
- A.4: Infringement of the legitimate rights and interests of others.
- A.5: Non-compliance with safety requirements of specific service types.
While many provisions in Appendix A could be characterised as ethical issues, the term “ethics” is used only in the phrase “violation of business ethics” (for more information, see 2019 AUCL).
2024 AI Safety Governance Framework
In September 2024, TC260 released the AI Safety Governance Framework 2024 (2024 AI Framework).
The framework outlines seven types of AI safety risks, including three ethical risks:
- Exacerbation of social discrimination and widening of the intelligence divide. AI can collect and analyse human behaviours, social and economic status, and individual personalities. This data could be used to label and categorise groups of people and lead to:
- systematic and structural social discrimination;
- increased prejudice; and
- widening intelligence divides among groups and regions.
- Challenges to the traditional social order. AI development and application may significantly change production tools and relations. This could:
- accelerate the reconstruction of traditional industry modes;
- transform traditional views on employment, fertility, and education; and
- challenge the stability of traditional social orders.
- Becoming uncontrollable. The fast development of AI technologies means there is a risk of AI:
- acquiring external resources;
- self-replicating;
- becoming self-aware;
- seeking external power; and
- attempting to seize control from humans.
To address these risks, the 2024 AI Framework proposes two response measures:
- Filtering training data and verifying outputs. Training data should be filtered and outputs verified during algorithm design, model training and optimisation, service provision, and other processes to prevent discrimination based on ethnicity, beliefs, nationality, region, gender, age, occupation, and health factors.
- Ensuring AI safety. AI systems applied in key sectors (such as government departments, critical information infrastructure, and areas directly affecting public safety) should be equipped with highly efficient emergency management and control measures.
Role of Governmental and Non-Governmental Organisations
Chinese Association for Artificial Intelligence (CAAI)
The Chinese Association for Artificial Intelligence (CAAI), founded in 1981, is the only national-level academic association in intelligence science and technology officially authorised by China’s Ministry of Civil Affairs.
Under Article 3 of the CAAI Charter 2014, the CAAI aims to promote AI science and technology development while adhering to China’s constitution, laws, regulations, state policies, and social morals and customs.
The CAAI has an AI Ethics and Governance Working Committee.
Cyberspace Administration of China (CAC)
The Cyberspace Administration of China (CAC) is the central regulatory body overseeing internet policies, cybersecurity, PI protection, and data security in China.
The CAC plays a critical role in shaping China’s digital landscape, including its approach to AI. It has issued several regulations governing AI development, deployment, and use in China.
AI and algorithmic services related to public opinion or social mobilisation must be filed with the CAC (Article 17, 2023 GenAI Measures; Article 23, Provisions on Administration of Algorithmic Recommendation in the Internet Information Service 2021; Article 19, Administrative Provisions on Deep Synthesis in Internet-based Information Services 2022 (2022 Deep Synthesis Provisions, with effect from 10 January 2023)).
National Data Administration
In 2023, the National Data Administration was established to advance:
- The planning and building of a digital China.
- A digital economy.
- A digital society.
Though its remit appears relevant to AI, it has not yet had a significant impact on AI ethics.
Chinese Academy of Social Sciences (CASS)
The Chinese Academy of Social Sciences (CASS) is China’s premier academic organisation and comprehensive research centre for philosophy and social sciences.
It organises symposiums on AI and AI ethics, and its researchers regularly publish articles on AI.
AI Subcommittee of the National Ethics Committee
In July 2019, China established the National Ethics Committee and set up a subcommittee for AI.
According to the Party and State Institutional Reform Plan 2023, the role of the National Ethics Committee will transition from a co-ordinating body under the State Council to an academic and professional expert committee within MOST.
The AI subcommittee is responsible for:
- Drafting guiding documents.
- Organising academic seminars.
- Facilitating in-depth discussions and exchanges among domestic and international experts and entrepreneurs.
On 2 February 2024, the AI subcommittee released the Ethical Guidelines for Brain-Computer Interface Research.
AI Expert Committee
In 2019, MOST established the AI Expert Committee to advance the development plan proposed in the 2017 AI Plan. The committee comprises experts and scholars from academic institutions, research units, and technology enterprises.
The AI Expert Committee released important guiding documents that expand upon China’s AI governance framework and action guidelines. These include:
- The Principles for the Governance of Next Generation Artificial Intelligence – Developing Responsible Artificial Intelligence 2019.
- The 2021 AI Code of Ethics.
AI Ethical Issues Under Chinese Law
Personal Freedom and Human Dignity
The Civil Code of the PRC 2020 (2020 Civil Code, with effect from 1 January 2021) protects natural persons’ personal freedom and human dignity (Article 109).
Many ethicists would characterise these as intrinsic rights, and many ethical and legal frameworks establish instrumental rights to support them.
Privacy and PI Protection
Privacy
Privacy can be regarded as an instrumental right that contributes to personal freedom and human dignity. This is because the act of monitoring an individual can impact how they exercise their personal freedom. The 2020 Civil Code recognises this phenomenon (Article 990(2)).
The 2020 Civil Code grants privacy rights (Articles 110 and 990). Article 1032 of the code provides:
“A natural person enjoys the right to privacy. No organisation or individual may infringe upon the other’s right to privacy by prying into, intruding upon, disclosing, or publicising another’s private matters.”
PI Protection
The 2021 PIPL contains generic provisions to protect the PI of individuals (and, by extension, their privacy). It also contains provisions concerning the processing of PI through automated decision-making (Article 24) (see 2021 PIPL).
GenAI
The 2023 GenAI Measures require that the provision and use of GenAI services must not infringe on personal privacy, as well as PI rights and interests (Article 4(4)).
Moreover, relevant agencies and personnel involved in the safety assessment and supervision of GenAI services must keep personal privacy and PI confidential (Article 19).
Transparency and Explainability
Transparency and explainability are concepts that appear in multiple legal sources and ethical frameworks for AI, including:
- The 2023 Ethics Measures.
- The 2024 Basic GenAI Requirements.
- The 2019 BJ Principles.
- The 2019 AI Principles.
- The 2022 Judicial Opinions.
- The 2021 PIPL.
They are perhaps most relevant where AI is employed to make decisions that have a material impact on an individual’s rights and interests.
Under the 2021 PIPL, individuals have the right to demand an explanation of how AI-driven decisions are reached (Article 24). However, the required level of detail for such explanations remains unclear.
The 2023 GenAI Measures require GenAI services to be transparent (Article 4(5)).
The 2024 Basic GenAI Requirements set out the following requirements for transparency:
- Service providers should publicly disclose the target audience, scenarios, and purposes of the services on the homepage and other prominent locations. They should also disclose the usage of basic models.
- Users should be provided with the following information in easily accessible locations (such as the homepage and service agreements):
- service limitations;
- brief information about the models, algorithms, and other relevant matters; and
- the PI collected and its purpose concerning the service.
- The above information should be disclosed in supporting documents where the GenAI service is provided through a programmable interface.
(Article 7(b).)
The People’s Bank of China also issued an industry standard that instructs financial institutions on how to disclose AI algorithm use to users (see Guidance on Information Disclosure for Financial Applications Based on AI Algorithms (JR/T 0287-2023)). The standard also provides examples of disclosures in its appendix.
Bias and Fairness
Under Chinese law, the terms bias and fairness have ambiguous meanings and are often treated similarly. Certain forms of bias and unfairness may be legally permissible, while others are not.
The law generally prohibits biased and unfair conduct in specific situations, particularly those involving deliberate or accidental actions related to protected characteristics.
Bias and unfairness may arise where a proxy value closely related to a protected characteristic affects the decisions of a decision-maker. For example:
- Gender discrimination. Gender-based discrimination is often intertwined with factors predominantly associated with one gender, such as taking maternity leave. Consequently, penalising an individual for taking maternity leave may constitute gender discrimination.
- Other forms of discrimination. Certain discriminatory practices may be less obvious. In China, some online platforms display different prices or offer varying discounts based on user characteristics and profiles. For example, a platform may show higher prices to Apple phone users than Android users. These characteristics could serve as proxy values for age, region, and so on.
The debate continues over whether these practices are legitimate business strategies, or unethical and potentially fraudulent behaviour.
Under the 2024 Basic GenAI Requirements and 2023 GenAI Measures, prohibited forms of bias and unfairness include:
- Ethnic discrimination.
- Discrimination based on beliefs.
- Nationality-based discrimination.
- Discrimination based on regional origin.
- Gender discrimination.
- Age discrimination.
- Occupation-based discrimination.
- Health-based discrimination.
- Monopolistic behaviour or unfair competition.
Accountability
AI systems are tools created and provided by persons (natural and legal) that produce outputs at the behest of persons.
The many different individuals that might be involved in the creation, provision, and use of an AI system include:
- Researchers.
- Developers.
- Ethics boards.
- Regulators.
- Vendors and suppliers.
- Users.
It is widely recognised that AI system outputs can materially and negatively impact the rights and interests of an individual. Therefore, it seems appropriate to make at least one person accountable for the consequences of AI outputs. However, it can be difficult to say who should be accountable to individuals harmed by AI where numerous stakeholders are involved in creating, providing, and using an AI system.
Given the accountability problems associated with AI, the following rules assign obligations and liability to specific persons:
- A PI processor must explain automated decisions reached by processing PI to PI subjects whose rights and interests are significantly affected upon request. Individuals have the right to reject such decisions (Article 24, 2021 PIPL).
- Service providers must ensure PI is processed in accordance with laws and regulations (Article 51, 2021 PIPL; Articles 7 and 11, 2023 GenAI Measures).
- No person may use deep synthesis technology to infringe the rights of another person (Article 6, 2022 Deep Synthesis Provisions).
- Certain AI service providers are responsible for the information security of their AI systems (Article 7, 2022 Deep Synthesis Provisions; Article 9, 2023 GenAI Measures).
- Certain AI service providers are responsible for the outputs of the systems they provide (Articles 8-11, 2022 Deep Synthesis Provisions; Article 9, 2023 GenAI Measures).
- Certain AI service providers must employ effective measures to protect minors from overreliance or addiction to AI (Article 10, 2023 GenAI Measures).
- Business operators are prohibited from using AI to engage in monopolistic practices or abuse their dominant status (Articles 9 and 22, 2022 AML).
- Road and demonstration test applicants should follow rules and be accountable for accidents involving smart, connected vehicles (Article 6, Administrative Rules on Intelligent and Connected Vehicle Road Testing and Demonstration Application (Trial) 2021).
- AI medical device registrants should assume responsibility for the safety and effectiveness of medical devices throughout their development, production, operation, and use in accordance with the law (Article 13, Regulations on Supervision and Administration of Medical Devices 2021).
Censorship and AI
China operates an extensive censorship system that covers internet content, among other forms of media. China requires the censorship of both AI training data and AI outputs. Training data is typically scraped from the internet, and outputs are typically provided online.
The 2019 Network Content Provisions stipulate that producers of online content are prohibited from creating, duplicating, or disseminating information that:
- Contradicts the foundational principles outlined in China’s constitution.
- Risks national security, reveals state secrets, attempts to subvert state authority, or disrupts national unity.
- Harms the dignity or interests of the nation.
- Distorts, defames, or dishonours the legacy and spirit of heroic martyrs, or disrespects the martyrs by insulting, defaming, or otherwise infringing upon their names, images, reputation, or honour.
- Promotes terrorism or extremism or incites engagement in terrorist or extremist activities.
- Instigates ethnic hatred or discrimination or threatens national solidarity.
- Undermines the state’s religious policies or disseminates heresy and superstitious beliefs.
- Disseminates false information or disrupts the economic and social order.
- Spreads content that is obscene, pornographic, gambling-related, or violent, promotes murder and terror, or aids in criminal activities.
- Insults or defames individuals, violating their reputation, privacy, or other legal rights and interests.
- Includes any other content that is forbidden by laws and administrative regulations.
(Article 6.)
Online content producers must also avoid creating, reproducing, or spreading information that:
- Employs sensationalist headlines that significantly misrepresent the content.
- Sensationalises gossip, scandals, and misconduct.
- Inappropriately comments on natural disasters, major accidents, and other catastrophes.
- Contains sexual innuendo or provocation that could lead to sexual associations.
- Displays graphic violence, horror, or cruelty that may cause distress.
- Incites discrimination against groups or regions.
- Promotes vulgar, obscene, or tasteless content.
- May lead minors to imitate dangerous behaviours, violate social ethics, or develop poor habits.
- Otherwise negatively impacts the health of the online environment.
(Article 7.)
Appendix A of the 2024 Basic GenAI Requirements lists several types of security risks. These risks can be considered derivatives of Articles 6 and 7 of the 2019 Network Content Provisions.
Responsibility and Negligence in AI
Professional Responsibility in AI Development
Individuals involved in AI development can be considered scientific and technological personnel.
The 2021 Science Law states that scientific and technological personnel should:
- Be patriotic, innovative, truth-seeking, dedicated, and collaborative.
- Adhere to the spirit of craftsmanship.
- Observe academic and ethical norms in all kinds of scientific and technological activities.
- Abide by professional ethics, and be honest and trustworthy.
- Refrain from fraudulent practices in scientific and technological activities.
- Avoid participating in or supporting superstition.
(Article 67.)
Legal Framework Governing Negligence
China has not yet established clear rules regarding liability for the actions of AI.
In SCLA v AI Company [2024] Guangzhou Internet Court (Yue 0192 Min Chu No.113), the court provided valuable guidance on determining the liability of AI service providers.
The court:
- Held that the AI service provider failed to implement appropriate technical preventive measures, allowing users to generate images incorporating elements of another party’s copyrighted works, which constituted IP infringement.
- Assessed the need for compensation and found the AI service provider liable due to:
- the absence of a complaint reporting mechanism;
- a failure to alert users to potential risks; and
- AI-generated images not being clearly identified.
- Determined that the provider did not fulfil its duty of care and exhibited subjective fault.
Consequently, the AI service provider was ordered to pay RMB10,000 in compensation to the plaintiff.
For more information, see Practice Note, AI-Generated Content and Copyright (China): SCLA v AI Company.
AI in Other Professions: Law and Accounting
Law
Under the Law of the PRC on Lawyers 2017 (2017 Lawyers Law), the law firm employing a lawyer is liable for the lawyer’s wrongdoing where a party suffers losses. The law firm may seek recourse against the lawyer if the lawyer acted intentionally or with gross negligence (Article 54).
There are no exemptions or safe harbours covering the use of AI under the 2017 Lawyers Law. As such, lawyers should:
- Ensure the quality of all work they produce.
- Clearly explain any limitations or constraints on their work.
- Provide disclaimers as appropriate.
Accounting
Under the Accounting Law of the PRC 2024 (2024 Accounting Law), the person in charge of an entity:
- Is responsible for the authenticity and completeness of the accounting practice and the accounting documents of the entity (Article 4).
- Should ensure the truthfulness and completeness of financial and accounting reports (Article 21).
There are no exemptions or safe harbours covering the use of AI under the 2024 Accounting Law.
Under the Specification for Accounting Informatisation 2024 (with effect from 1 January 2025), entities must stipulate the following in procurement contracts for accounting information services:
- Service content.
- Service quality.
- Service duration.
- Data security.
- Other rights and responsibilities.
(Article 15.)
Entities conducting accounting informatisation involving AI should comply with relevant laws and regulations and respect social morality and ethics (Article 44).
Accounting software is regulated under the Specification for Basic Functions and Services of Accounting Software 2024 (with effect from 1 January 2025).
According to Article 42, where accounting software service providers are responsible for the leakage of or damage to users’ accounting data, they must restore the data and provide compensation as stipulated.
However, the other liability of accounting software providers to users for AI-related issues is still unclear. As such, liabilities between parties will typically be determined by contract.
Preventative Measures and Best Practices
Methods for preventing or mitigating AI-induced negligence include:
- Ensuring a qualified human is the ultimate decision-maker.
- Providing qualified human decision-makers with adequate resources to manually fulfil their role.
- Conducting a thorough risk assessment as a part of procurement activities.
- Using well-drafted contracts to allocate liability and clearly define service standards.
Global Co-operation on AI Ethics
Global AI Governance Initiative
In October 2023, China proposed the Global AI Governance Initiative.
The initiative states that the development of AI should prioritise the well-being of humanity, ensure social security, respect human rights, and support sustainable development.
It also promotes:
- The principles of fairness and non-discrimination in data acquisition, algorithm design, technology development, product development, and application.
- An ethics-first approach, emphasising AI ethics guidelines, norms, and accountability mechanisms, supported by review systems.
- The principles of broad participation, consensus, and incremental development.
Bletchley Declaration
On 1 November 2023, the Bletchley Declaration was published. China is one of the 28 signatory countries, alongside the EU.
Ethical principles covered in the Bletchley Declaration include human rights, transparency, explainability, accountability, fairness, regulation, safety, human oversight, ethics, bias mitigation, privacy, and data protection.
The Bletchley Declaration emphasises focusing on risk identification and creating policies based on these identified risks.
For more information, see Practice Note, Key AI Regulatory Considerations in China: Bletchley Declaration.
Enhancing International Co-operation on AI Capacity-Building
On 1 July 2024, the United Nations General Assembly adopted a consensus resolution proposed by China and co-sponsored by over 140 countries (see UNGA: UNGA Adopts China-Proposed Resolution to Enhance International Cooperation on AI Capacity-Building).
Framework Convention on AI
The Council of Europe’s Framework Convention on Artificial Intelligence is an international treaty legally binding its signatories (see Council of Europe: The Framework Convention on Artificial Intelligence).
It is monitored through a Conference of the Parties to ensure signatories’ adherence. China is not a signatory. However, given that several major economies are signatories, it will likely have some indirect impact in China.
AI Ethics Challenges
Gaps in the Current Legal Framework
In China’s current legal framework, there are several unclear issues, including:
- Unclear ethical obligations. Terms such as bias and discrimination are not clearly defined, which leads to uncertainty in their interpretation and application.
- Absence of a comprehensive AI law. There is no overarching AI law in China. As a result, different regulators are responsible for regulating AI within their specific areas of competence.
This fragmented approach complicates compliance efforts, particularly for organisations with diverse interests across multiple sectors.
Balancing Innovation and Regulation
Regulators face the challenge of balancing innovation with regulation. An example of this can be found in the drafting process of the 2023 GenAI Measures.
The initial draft of the 2023 GenAI Measures, issued by the CAC, proposed that any entity or individual providing GenAI services should assume the responsibilities of the content producer. This caused significant controversy and raised concerns within the AI industry.
The explicit liability provisions were omitted from the finalised version of the 2023 GenAI Measures issued by the CAC and other regulators. This suggests that issues raised by one regulator may later attract the attention of other regulators and that regulators are trying to balance innovation and regulation.
Chinese laws and regulations are typically published for public consultation (Article 74, Legislation Law of the PRC 2023). This process provides stakeholders, including those in the AI industry, with an opportunity to voice their opinions on the governance of AI technology.
Future Directions
Many experts and scholars suggest that China should formulate a comprehensive AI law outside the existing legal framework for regulating AI.
To this end, two draft AI laws were released:
- On 19 March 2024, experts from seven universities released the Artificial Intelligence Law (Scholar’s Draft) (人工智能法(学者建议稿)).
- On 16 April 2024, institutions including the Law Institute of the Chinese Academy of Social Sciences drafted and published the Artificial Intelligence Model Law 2.0 (Expert Draft).
Both drafts propose certain requirements for AI ethics. For example, Article 42 of the Expert Draft stipulates that:
- An AI ethics review committee should be established for AI R&D activities that involve sensitive areas, as determined by the national AI authority.
- AI ethics reviews should be conducted under relevant national regulations.
- Other AI developers, providers, and users are encouraged to establish AI ethics review committees based on actual circumstances.
The State Council, in its Legislative Work Plan for 2023 and 2024, stated its intention to prepare to submit the draft AI law to the Standing Committee of the National People’s Congress for deliberation. For more information, see Legal Updates, State Council Releases 2024 Legislative Plan and State Council Releases 2023 Legislative Plan.
In terms of national standards, on 5 June 2024, the Ministry of Industry and Information Technology and other departments issued the Guidelines for the Construction of National AI Industry Comprehensive Standardisation System (2024 Edition).
The guidelines outline the following objectives:
- By 2026, develop more than 50 new national and industry standards and improve the AI standard system covering seven key areas, including:
- basic commonality;
- key technologies; and
- the safety and governance of AI products and services.
- Standardise ethical governance requirements for the entire lifecycle of AI, including:
- AI ethics risk assessments;
- ethical governance technology requirements and evaluation methods for fairness and explainability of AI; and
- AI ethics review standards.