China’s Personal Information Protection Law (“PIPL”), enacted in 2021, establishes a structured regulatory framework for cross-border transfers of personal information (“PI”). Depending on the volume, sensitivity and context of PI being exported, exporters may face varying levels of compliance obligations. For instance, small-scale exports of non-sensitive PI may be exempt from formal applications to the Cyberspace Administration of China (“CAC”), while larger or more sensitive transfers may require a CAC-prepared standard PI transfer contract (“Standard Contract”) or a CAC-organized data security assessment (“Data Security Assessment”).

For multinational corporations (“MNCs”) operating in China, navigating these requirements can seem daunting. However, with the right approach, compliance is achievable and manageable. 

We recently supported an MNC in successfully passing a Data Security Assessment. Through the application process, we gained firsthand experience in engaging with both the provincial and national levels of the CAC. This was a valuable opportunity to better understand CAC’s approach to exercising its authority under the PIPL and its interpretation of the relevant regulations.

Notably, the number of successfully completed Data Security Assessments remains relatively low (only 285 as of December 2024), making this experience particularly rare and insightful.

This article summarizes our experience and provides a guide to key aspects of cross-border PI compliance, focusing on:

  1. Understanding the key regulatory requirements for PI export
  2. Identifying the appropriate level of compliance through a data audit
  3. Preparing for the impact assessment and application with CAC
  4. Maintaining post-approval compliance

1. Understanding the Regulatory Landscape: A Moving Target

PIPL provides the legal foundation for cross-border PI transfers, but the regulatory environment continues to evolve. Key requirements under PIPL Article 38 include:

  • Passing a CAC-organized Data Security Assessment.
  • Signing a CAC-Standard Data Transfer Contract with the overseas recipient.
  • Obtaining PI protection certification from a professional institution.

Condition 3 (certification) is less commonly used. Our focus is on Condition 1 (Data Security Assessment) and Condition 2 (Standard Contract).

When Standard Contracts Are Required

The Measures for Personal Information Export Standard Contract (2022) (“Standard Contract Measures”) (Article 4) outline scenarios requiring a Standard Contract with foreign recipients and its filing with CAC, including:

  • The exporter is not a Critical Information Infrastructure (CII) operator.
  • Handling PI of fewer than 1 million individuals in China.
  • Exporting sensitive PI of fewer than 10,000 individuals within a calendar year.
  • Exporting non-sensitive PI of fewer than 100,000 individuals within a calendar year.

When Security Assessments Are Required

The Measures for Security Assessment of Data Exports (2022) (“Security Assessment Measures”) (Article 4) specify scenarios requiring mandatory CAC risk assessments, including:

  • Exporting important data.
  • CII operators or processors handling PI of over 1 million individuals.
  • Exporting sensitive PI of 10,000 or more individuals, or non-sensitive PI of 100,000 or more individuals, within a calendar year.

Regulatory Relaxations Under the 2024 Provisions

The Provisions on Promoting and Standardizing Cross-Border Data Flows (2024) (“the Provisions”) introduce exemptions to ease compliance burdens for certain categories of data transfers. Businesses falling under these categories are exempt from Security Assessments or Standard Contracts:

  • Contractual necessity: For activities like cross-border shopping, payment processing, shipping, or services such as hotel bookings and visa applications.
  • Employment management: For cross-border human resource management, such as processing employee information for global payroll or benefits.
  • Emergency situations: To protect an individual’s life, health, or property in emergencies.
  • Low-volume, non-sensitive transfers: Non-sensitive PI of fewer than 100,000 individuals annually.

These changes are consolidated in the Regulations on the Management of Network Data Security, enacted shortly after the Provisions. The table below summarizes the regulatory requirements before and after the relaxation:

Level of Compliance | Before the Relaxation | After the Relaxation
Exemption applies (internal compliance is still required) | No exemption | Satisfying the necessity requirements in the three stipulated scenarios; or low-volume, non-sensitive transfers
Standard Contract | Exporting sensitive PI of fewer than 10,000 individuals, or non-sensitive PI of fewer than 100,000 individuals, within a calendar year | Exporting sensitive PI of fewer than 10,000 individuals, or non-sensitive PI of fewer than 1 million individuals, within a calendar year
Data Security Assessment | Handling PI of over 1 million individuals; or exporting sensitive PI of 10,000 or more individuals, or non-sensitive PI of 100,000 or more individuals, within a calendar year | Handling PI of over 1 million individuals; or exporting sensitive PI of 10,000 or more individuals, or non-sensitive PI of 1 million or more individuals, within a calendar year

2. Conducting a Comprehensive Data Audit: The Cornerstone of Compliance

A successful compliance strategy begins with a detailed data audit, which involves:

  • Evaluating the purpose, volume, and sensitivity of PI exports.
  • Assessing the security capabilities of both the exporter and the foreign recipient.
  • Reviewing legal agreements to ensure alignment with regulatory requirements.

The first point is critical, as the three elements correspond to the three key determinants of compliance obligations imposed on MNCs:

  • Purpose – The necessity of the PI export must be justified. Unnecessary PI cannot be exported.
  • Volume – Higher volumes of PI exports trigger stricter compliance requirements. For example, an MNC handling PI of over 1 million individuals must undergo a Data Security Assessment, even if exporting just one piece of PI.
  • Sensitivity – MNCs can now benefit from the Provisions and export more non-sensitive PI, but sensitive PI exports remain subject to stricter rules: even a very small volume of sensitive PI may require a Standard Contract.

If PI is deemed sensitive and the export volume reaches certain thresholds, the MNC must arrange a corresponding Security Assessment or Standard Contract to be filed with CAC. The MNC must also justify the necessity of the PI export, often by demonstrating how the transfer is essential to its business operations. Common justifications include:

  • Global customer relationship management (e.g., membership systems).
  • Cross-border analytics to improve customer experiences.
  • Compliance with international legal or contractual obligations.

3. Preparing the PIPIA and CAC Application: Building a Strong Justification

Even if an exemption under the Provisions applies, MNCs are not relieved of the obligation to prepare a Personal Information Protection Impact Assessment (“PIPIA”) to document their compliance efforts. After the data audit, if an MNC determines that a CAC application is not required, it is advisable to engage a reputable, independent and domestic third party to prepare a PIPIA. This serves as a critical record in case of future regulatory challenges.

If no exemption applies, the next step is determining whether to pursue a Standard Contract or a Security Assessment. Under the Provisions, non-sensitive PI benefits from relaxed thresholds, allowing annual exports of up to 1 million individuals’ data under a Standard Contract. In contrast, sensitive PI exports remain strictly regulated. Exporting sensitive PI of even one individual requires a Standard Contract, while exports exceeding 10,000 individuals annually trigger a mandatory Data Security Assessment.
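
To make these thresholds concrete, the sketch below expresses the post-relaxation routing logic as a small Python function. It is a hypothetical illustration only: the function name, parameters and labels are our own, the figures are assumed to come from a completed data audit, and it deliberately omits the important-data trigger, the necessity-based exemptions and other case-specific factors discussed in this article.

```python
# Illustrative sketch of the post-relaxation thresholds described above.
# It assumes the annual figures come from a completed data audit and ignores
# the "important data" trigger and the necessity-based exemptions, which
# still require separate analysis.

def compliance_route(is_cii_operator: bool,
                     individuals_handled: int,
                     sensitive_exported: int,
                     non_sensitive_exported: int) -> str:
    """Return the indicative compliance route for a calendar year of exports."""
    if is_cii_operator or individuals_handled > 1_000_000:
        return "Data Security Assessment"
    if sensitive_exported >= 10_000 or non_sensitive_exported >= 1_000_000:
        return "Data Security Assessment"
    if sensitive_exported >= 1 or non_sensitive_exported >= 100_000:
        return "Standard Contract filing"
    return "Possible exemption (internal compliance and a PIPIA are still advisable)"

# Example: a non-CII MNC exporting 3,000 sensitive and 50,000 non-sensitive records
print(compliance_route(False, 200_000, 3_000, 50_000))  # -> Standard Contract filing
```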

Accurate classification of PI is therefore pivotal. Misclassification could lead to unnecessary assessments or, worse, regulatory non-compliance.

According to PIPL Article 28, sensitive PI is defined as data that, if leaked or misused, could harm an individual’s dignity, personal safety, or property. This includes:

  • Biometric data
  • Religious beliefs
  • Political opinions
  • Health and medical information
  • Financial account details
  • Location tracking data
  • Information about minors under 14 years old

If an MNC exports sensitive PI of between 1 and 9,999 individuals in a calendar year, a Standard Contract must be signed with its overseas recipient, usually its headquarters outside China. Since the terms are standard, the application process is straightforward, requiring submission of the signed contract and a PIPIA to the provincial CAC. Approval typically takes 10 working days.

For exports of sensitive PI involving 10,000 or more individuals annually, a Security Assessment is required. The MNC must submit the following documents to the provincial CAC, which will forward them to the national CAC for approval:

  • Application form
  • Data Export Risk Self-Assessment Report
  • Data contract between exporter and foreign recipient
  • Other supporting materials as required

The national CAC is legally required to complete the assessment within 45 working days of formal acceptance. Before formal acceptance, however, MNCs should expect to respond to multiple rounds of inquiries from both the provincial and national CAC.

4. Embracing Post-approval Compliance: A Continuous Journey

Securing PI export approval is a significant milestone, but the journey doesn’t end there. Both the Security Assessment Measures and the Standard Contract Measures stipulate that any changes to the conditions recorded in the CAC application require a new application. Additionally, a Security Assessment is valid for only two years, requiring renewal even if no changes occur.

MNCs are encouraged to take the following measures to ensure full and continuous compliance with PIPL:

  • Implement Ongoing Risk Monitoring: Establish mechanisms for continuously monitoring data security risks and promptly addressing any emerging threats.
  • Conduct Regular Security Audits: Periodically assess data security posture to identify and rectify vulnerabilities.
  • Invest in Robust Security Technologies: Leverage encryption, data masking, and access control technologies to safeguard data.
  • Cultivate a Culture of Security Awareness: Regularly train employees on data security best practices.

Conclusion: MNCs’ PI Compliance in China Is Achievable and Manageable

While China’s PI export regulations may appear stringent, they are designed to protect individuals’ privacy without unduly burdening businesses. Recent relaxations under the Provisions demonstrate a pragmatic approach to balancing security and business needs.

For MNCs, successful compliance hinges on:

  • Understanding regulatory requirements and staying updated on changes.
  • Conducting thorough data audits to determine the appropriate level of compliance.
  • Engaging proactively with CAC authorities to address inquiries and justify data export activities.

Our experience assisting an MNC through the Data Security Assessment process underscores that compliance is both achievable and manageable with the right preparation and expertise. By adopting a structured approach and leveraging professional guidance, MNCs can confidently navigate China’s data export regulations and ensure their operations remain compliant and secure.

In short, while the process requires effort, it is far from insurmountable. With clear guidelines, practical exemptions, and a collaborative approach, MNCs can successfully meet China’s personal information compliance requirements and continue to thrive in one of the world’s most dynamic markets.

As Chinese insurance companies expand their overseas operations, the frequency of insurance disputes in foreign jurisdictions has also increased. In addition to traditional litigation and institutional arbitration, various alternative dispute resolution methods are available for resolving international insurance disputes. One such method is arbitration under the rules of an insurance industry association.

Recently, AnJie Broad assisted a Chinese P&C insurance company in defending a reinsurance arbitration under the ARIAS US arbitration rules. Throughout the case, we assisted the client in navigating the multiple reinsurance claims involved in the dispute and engaged in proactive communication with opposing counsel to request a stay of the arbitration proceedings. This effort bought our client valuable time to continue processing the claims. Ultimately, with our assistance, the parties reached an amicable settlement, avoiding the adverse consequences of a rushed defeat in overseas proceedings.

This article draws on our experience handling this case to provide an introduction to the basics of ARIAS US arbitration. We aim to offer guidance to Chinese insurance companies facing potential overseas disputes, helping them avoid the losses that can arise from rushed and disorganized responses to foreign disputes.

I. Introduction to ARIAS US

ARIAS stands for the AIDA Reinsurance and Insurance Arbitration Society, which is the international association for reinsurance and insurance arbitration under the International Insurance Law Association (AIDA).

AIDA (Association Internationale de Droit des Assurances) is a global organization that brings together lawyers, academics, regulators, and others with an interest in comparative insurance law and regulation. Founded in 1960 in Luxembourg, AIDA has since expanded to include approximately 50 national branches. The organization is dedicated to advancing the understanding and practice of international insurance law.

ARIAS US is the U.S. branch of ARIAS. Founded in 1994 and based in Chicago, ARIAS US is a non-profit organization committed to improving the arbitration process for both international and domestic insurance and reinsurance markets in the United States. ARIAS US has certified a group of qualified arbitrators, enabling parties involved in disputes to appoint professionals who can resolve disagreements in an efficient and expert manner.

As of now, ARIAS has a total of seven global branches, including the US branch, as detailed below:

Branch | Year of Foundation
ARIAS-UK | 1991
ARIAS-US | 1994
CEFAREA ARIAS France | 1995
ARIAS Germany | 2006
ARIAS LATAM | 2011
ARIAS ASIA (Hong Kong) | 2017
ARIAS-Ireland | 2021

The bylaws of ARIAS US describe its objectives as follows:

  • To promote the integrity of the private dispute resolution process, particularly in the insurance and reinsurance industry.
  • To promote just awards in accordance with industry practices and procedures.
  • To certify objectively qualified and experienced individuals to serve as arbitrators.
  • To provide training sessions in the skills needed to be certified as arbitrators.
  • To propose model rules of arbitration proceedings and model arbitration clauses.
  • To promote high ethical standards in the conduct of arbitration proceedings.
  • To foster the development of arbitration law and practice as a means of resolving national and international insurance and reinsurance disputes in an efficient, economical and just manner.

Therefore, we can see that ARIAS US is not a traditional arbitration institution. Instead, it serves as a specialized dispute resolution body under the umbrella of the industry association. In other words, it does not offer arbitration case management services typically provided by arbitration institutions. Rather, its mission and function are to support arbitration activities for the parties involved, enhancing the overall dispute resolution standards within the insurance industry. Its activities include formulating arbitration rules, providing a candidate panel of arbitrators, and offering training for arbitrators. The organization’s objectives strongly reflect its role as a service-oriented industry association.

Since ARIAS US does not manage or intervene in arbitration cases, the arbitration proceedings conducted under the ARIAS US rules fall under the category of “ad hoc arbitration”, which is a common practice in international arbitration. Ad hoc arbitration, in contrast to institutional arbitration, is a system where the parties, in accordance with their arbitration agreement, independently establish the arbitration tribunal. Even when a permanent arbitration institution is involved, the institution does not manage the procedural aspects; instead, the parties agree on a temporary procedure or refer to specific arbitration rules, or they may authorize the tribunal to determine its own procedures. Ad hoc arbitration and institutional arbitration are two different types within the arbitration framework. PRC domestic arbitration bodies like CIETAC and BAC are examples of institutional arbitration.

Compared to institutional arbitration, ad hoc arbitration offers more flexibility to meet the parties’ specific needs. The parties can freely agree on and select the arbitration rules, and they can also design, amend, or supplement the temporary arbitration rules according to their preferences or authorize others to do so. Arbitration conducted under the ARIAS US arbitration rules reflects a strong element of party autonomy, which will be further described below.

II. Introduction to ARIAS US Arbitration Rules

As mentioned earlier, one of the key functions of ARIAS US is to provide arbitration rules for the resolution of insurance and reinsurance industry disputes. The organization has developed the following arbitration rules:

  • ARIAS US Rules for the Resolution of U.S. Insurance and Reinsurance Disputes
  • ARIAS US Neutral Panel Rules for the Resolution of U.S. Insurance and Reinsurance Disputes
  • ARIAS US Streamlined Rules for the Resolution of U.S. Insurance and Reinsurance Disputes
  • ARIAS US Panel Rules for the Resolution of Insurance and Contract Disputes

The four sets of arbitration rules mentioned above are broadly similar, though each has its own distinct features, offering parties flexibility to choose the most appropriate set for their specific situation.

For example, the ARIAS US Rules for the Resolution of U.S. Insurance and Reinsurance Disputes is the standard set of rules for ARIAS US, characterized by its broad applicability and suitability for most insurance and reinsurance dispute cases. The ARIAS US Neutral Panel Rules for the Resolution of U.S. Insurance and Reinsurance Disputes establishes a more complex procedure for the appointment of arbitrators compared to the standard rules. The ARIAS US Streamlined Rules for the Resolution of U.S. Insurance and Reinsurance Disputes are designed for cases involving disputes under USD 1 million, and under such rules, a sole arbitrator will handle the case. The ARIAS US Panel Rules for the Resolution of Insurance and Contract Disputes introduces ARIAS US’s assistance in the arbitrator appointment process, enabling parties to designate arbitrators smoothly and facilitating the progression of the arbitration process.

It is worth noting that, with the exception of the streamlined rules, the other three sets of ARIAS US arbitration rules contain the following provisions:

“These Rules are not intended to supersede any express contractual agreement between the Parties. Accordingly, the Parties may agree on any rules or procedures not specified herein, or may alter these Rules by written agreement. These Rules shall control any matters not changed by the Party-agreed procedures.”

“The Panel shall have all powers and authority not inconsistent with these Rules, the agreement of the Parties, or applicable law.”

It is evident that the intention behind ARIAS US’s arbitration rules is not to compel parties to strictly adhere to its procedural framework. On the contrary, these rules reflect a high degree of respect for party autonomy, allowing parties to modify the arbitration rules as needed to facilitate the arbitration process. At the same time, the tribunal holds significant authority in managing and advancing the arbitration proceedings.

Therefore, we advise insurance companies to be mindful that, in specific cases, even if the parties have selected particular arbitration rules, they should also give due attention to any specific procedural provisions outlined in the disputed agreements or arbitration clauses. In cases where the parties’ agreement conflicts with the arbitration rules, the parties’ agreement should take precedence.

III. Introduction to the ARIAS US Arbitration Process

Taking the ARIAS US Rules for the Resolution of U.S. Insurance and Reinsurance Disputes as an example, the arbitration process under ARIAS US generally consists of the following stages:

  1. Commencement of Arbitration and Respondent’s Reply

The ARIAS US arbitration process is initiated when the claimant sends a Notice of Arbitration to the respondent. Once the respondent or its designated representative receives the claimant’s Notice of Arbitration, the arbitration proceedings officially commence.

The Notice of Arbitration shall include the following details: (1) Petitioner and the name of the contact person to whom all communications are to be addressed (including telephone and e-mail information); (2) Respondent against whom arbitration is sought; (3) contracts at issue; and (4) a short and plain statement of the nature of the claims and/or issues. In addition, the Claimant shall appoint one arbitrator in its Notice of Arbitration.

It is important to note that, unlike traditional institutional arbitration, initiating arbitration under the ARIAS US arbitration rules does not require the claimant to send any notification directly to ARIAS US itself. ARIAS US also typically does not intervene in the arbitration proceedings or send any notifications to the respondent regarding the initiation of the arbitration. This may be unfamiliar to domestic insurance companies that are accustomed to arbitration institutions managing the arbitration process.

In the case we recently handled, the Notice of Arbitration was sent by the overseas claimant to the brokers handling the disputed reinsurance business, thus completing service on the domestic respondent and initiating the arbitration. Throughout the entire case, no ARIAS US personnel were involved.

Furthermore, the Arbitration Rules explicitly state that the claims set out in the Notice of Arbitration may be amended prior to the Organizational Meeting. Any amendments made after the Organizational Meeting must be approved by the tribunal.

Once the respondent receives the Notice of Arbitration, they are required to respond within 30 days. The response shall include: (1) identification of the entities on whose behalf the Response is sent and the name of the contact person to whom all communications are to be addressed (including telephone and e-mail information); (2) designation of the Respondent’s Party-appointed arbitrator, in accordance with ¶ 6.3; (3) a short and plain response to the Petitioner’s statement of the nature of its claims and/or issues; and (4) a short and plain statement of any claims of the Respondent. Additionally, the respondent shall appoint one arbitrator in its response.

The respondent’s reply may be amended prior to the Organizational Meeting. Any modifications made after the Organizational Meeting require the tribunal’s consent.

2. Establishment of the Tribunal

According to the Arbitration Rules, both the claimant and respondent must each appoint one arbitrator within 30 days after the arbitration proceedings commence. If no appointment is made within this time, the other party may appoint the second arbitrator.

For the respondent, this 30-day period begins from the date it receives the Notice of Arbitration. As such, the respondent faces a relatively tight timeline: it must complete a range of tasks within 30 days, including reviewing the arbitrator panel list and candidate arbitrators’ backgrounds, selecting and contacting an arbitrator, and completing the appointment process, all without the assistance of an arbitration institution. This can be a significant challenge for parties unfamiliar with ARIAS US arbitration.

In the case we handled, the domestic insurance company failed to appoint an arbitrator within 30 days of receiving the Notice of Arbitration. The claimant then attempted to appoint the second arbitrator on behalf of the respondent, which would have placed the respondent at a substantial procedural disadvantage.

However, after carefully reviewing the relevant reinsurance policy in dispute, we discovered that the policy set a 45-day deadline for appointing arbitrators in the case of a consolidated arbitration. Since the claimant purported to bring a consolidated arbitration against the respondent and we intervened before the 45-day deadline had expired, we raised an objection with the claimant and successfully completed the respondent’s arbitrator appointment within this timeframe. The claimant ultimately accepted our position and recognized the arbitrator appointed by the respondent.

Regarding arbitrator qualifications, the Arbitration Rules specify that arbitrators must be current or former officers or executives of insurance or reinsurance companies and must be certified by ARIAS. Currently, ARIAS US has certified more than 100 arbitrators for appointment.

Once the parties have appointed one arbitrator each, the two party-appointed arbitrators shall select an Umpire within 30 days of the appointment of the second arbitrator.

As for arbitrator fees, the Arbitration Rules state that the appointing party shall bear the costs of its selected arbitrator, while the fees for the Umpire shall be shared equally between both parties.

3. Pre-Hearing Procedures: Organizational Meeting and Discovery

The arbitrators will convene a pre-hearing Organizational Meeting to confirm key arbitration matters, including reviewing the qualifications of the arbitrators and officially confirming the tribunal’s establishment, determining the arbitration schedule, and clarifying the disputed issues. After the Organizational Meeting, the fundamental process and schedule of the arbitration will be set, and the tribunal’s establishment will be confirmed.

Following the meeting, the parties will proceed with discovery according to the schedule confirmed in the Organizational Meeting. The tribunal will lead the discovery process.

4. Arbitration Hearing

The Arbitration Rules do not provide extensive detail on hearing procedures. Instead, they grant the tribunal significant discretion. The Arbitration Rules state that “The Panel shall not be obligated to follow the strict rules of law or evidence”. This highlights that decisions regarding hearing procedures will largely depend on the tribunal’s discretion.

Additionally, the Arbitration Rules provide that the parties may agree on the tribunal’s discretion as follows:

“The Panel shall interpret this contract as an honorable engagement, and shall not be obligated to follow the strict rules of law or evidence. In making their Decision, the Panel shall apply the custom and practice of the insurance and reinsurance industry, with a view to effecting the general purpose of this contract.”

We can see that the Arbitration Rules encourage the tribunal to adopt a “substance over form” approach in its rulings, emphasizing that the tribunal should address issues in accordance with the practices and customs of the insurance and reinsurance industry. This aligns with the rule requiring arbitrators to be professionals with industry experience in the insurance sector.

For foreign parties, this rule offers a dual effect. On the one hand, it alleviates concerns about unfamiliarity with U.S. insurance laws, enabling a more confident approach to the proceedings. On the other hand, it places significant demands on the legal counsel’s understanding of insurance industry practices, particularly those specific to the U.S. insurance market.

5. Issuance of the Arbitral Award

The Arbitration Rules specify that the tribunal should generally render its award within 30 days after the hearing concludes. The tribunal’s decision is made by majority vote, with any dissenting minority deferring to the majority’s ruling.

As for the scope of the arbitral award, the Arbitration Rules stipulate that: “The Panel is authorized to award any remedy permitted by the Arbitration Agreement or subsequent written agreement of the Parties. In the absence of explicit written agreement to the contrary, it is within the Panel’s power to award any remedy allowed by applicable law, including, but not limited to: monetary damages; equitable relief; pre- or post- award interest; costs of arbitration; attorney fees; and other final or interim relief”.

Therefore, we can see that the tribunal also enjoys considerable autonomy regarding the scope of the arbitral award.

Regarding the form of the award, the tribunal typically issues a simple award that outlines the outcome of the case, usually without detailed reasoning. However, the Arbitration Rules also stipulate that: “If both Parties request a written rationale for the Panel’s final award, the Panel shall provide one. If one Party requests a written rationale but the other party objects, the decision whether to issue one is at the Panel’s discretion.”

In conclusion, based on the above Arbitration Rules, we can observe the following significant characteristics of arbitration under the ARIAS US Arbitration Rules:

  • Parties’ agreements prevail
  • Emphasis on Industry Practices
  • Simple Procedure and Fast Pace
  • Tribunal-led Process

IV. Enforceability of ARIAS US Awards in Mainland China

For respondents in mainland China, a key concern regarding ARIAS US arbitration cases is whether the arbitral award can be enforced in mainland China.

Since both China and the U.S. are parties to the Convention on the Recognition and Enforcement of Foreign Arbitral Awards (the New York Convention), parties can apply to PRC courts to recognize and enforce an arbitral award made in the U.S.

According to the Notice of the Supreme People’s Court on the Enforcement of Foreign Arbitral Awards under the New York Convention, absent special circumstances such as an invalid arbitration agreement, serious flaws in the arbitration procedure, the tribunal exceeding its jurisdiction, defects in the award’s validity, or violations of China’s public policy, PRC courts will generally recognize and enforce foreign arbitral awards. In addition, according to the Supreme People’s Court’s Provisions on the Judicial Review of Arbitration Cases, if a PRC court intends to refuse recognition of a foreign arbitral award, it must report the proposed decision to the higher people’s court and, ultimately, to the Supreme People’s Court for approval. Therefore, the likelihood of a PRC court rejecting the recognition and enforcement of a foreign arbitral award is relatively low.

It is worth noting that although the arbitration under the ARIAS US rules is ad hoc arbitration rather than institutional arbitration, according to Article 543 of the Interpretation of the Civil Procedure Law of the People’s Republic of China, arbitration awards made by an ad hoc tribunal outside of China can still be recognized and enforced by PRC courts in accordance with the New York Convention. Therefore, the ad hoc nature of the ARIAS US arbitration does not affect the recognition and enforcement of related arbitral awards by PRC courts.

V. Conclusion and Insights

From the above, it is evident that the arbitration under the ARIAS US rules differs significantly from the arbitration procedures typically encountered by domestic insurance companies. However, this difference does not affect the enforceability of the arbitral award in mainland China. Therefore, insurance companies involved in such arbitration cases should still give these matters due attention.

Furthermore, since the arbitration process is fast-paced, once involved in a dispute, we recommend that insurance companies engage professional lawyers as soon as possible to safeguard their procedural and substantive interests and avoid potential losses resulting from unfamiliarity with international arbitration rules.

Following the winding-up order issued by the Hong Kong High Court against China Evergrande Group on January 29, 2024, China Evergrande Group filed a liquidation petition on September 12, 2024 for its wholly owned subsidiary, CEG Holdings (BVI) Limited, with a hearing scheduled for February 17, 2025. In the liquidation of a Hong Kong company, the maintenance and realisation of the company’s assets and the return of their value to its creditors and other stakeholders are key concerns. Clarifying the rules and procedures for the disposal of the company’s assets in Mainland China under a Hong Kong winding-up order is of great significance for reducing uncertainty and risk in commercial activities and enhancing the confidence of market participants.

I. Legal Consequences of a Winding-Up Order by the Hong Kong High Court

According to Section 178 of the Companies (Winding Up and Miscellaneous Provisions) Ordinance (hereinafter referred to as the “Winding Up Ordinance”), if a company owes a creditor a debt of HKD 10,000 or more that is due for payment, any one or more creditors, contributories (persons liable to contribute to the company’s assets in the event of liquidation), or the trustee or personal representative of a contributory may jointly or separately file a winding-up petition. The issuance of a winding-up order has the following main legal consequences:

1. Actions stayed on winding-up order

According to Section 186 of the Winding Up Ordinance, once a winding-up order is issued or a provisional liquidator is appointed, no legal actions or proceedings shall be proceeded with or commenced against the company except by leave of the court. Any permitted actions must comply with the terms imposed by the court, except in cases involving national security.

2. Cessation of Business Operations

According to Sections 194 and 228 of the Winding Up Ordinance, upon the issuance of a winding-up order by the Hong Kong High Court, the company must cease all business activities (unless the liquidator allows continued operations to facilitate the liquidation). All company assets and affairs will be taken over by the court-appointed liquidator, and the directors and shareholders will lose control of the company. However, directors may need to cooperate with the liquidator by providing financial records and other relevant information. All company assets will be frozen, and the liquidator has the authority to dispose of these assets to repay the debts. Any prior disposal of company assets (e.g., transactions before the winding-up order) may be reviewed by the liquidator and potentially reversed if fraud or misconduct is involved.

3. Protection of Employee Rights

According to Section 265 of the Winding Up Ordinance, the employment relationship between the company and its employees will generally terminate upon liquidation, resulting in employee layoffs. The company is required to pay wages, severance compensation, and other entitlements as prescribed by law. Employee wages and social security claims are paid in priority to all other debts. However, if the company’s assets are severely insufficient, employee rights may not be fully protected.

4. Initiation of Debt Repayment Mechanisms

According to Section 199B of the Winding Up Ordinance, after the court appoints a liquidator, the liquidator is responsible for managing the liquidation process, including recovering company assets such as fixed assets, liquid assets, and intangible assets to determine the quantity, value, and ownership of these assets. The liquidator will verify the assets, dispose of them, repay the debts and distribute the remaining assets accordingly.

5. Benefit to All Stakeholders

According to Section 187 of the Winding Up Ordinance, an order for winding up a company shall operate in favour of all the creditors and of all the contributories of the company as if made on the joint petition of a creditor and of a contributory. Once the winding-up order is made, any creditor has the right to file a claim for the debts owed. However, the priority of claims varies according to statutory rules. Under the Winding Up Ordinance, liquidation costs and liquidator remuneration are paid first, followed by employee wages and severance compensation. Creditors holding mortgages or pledges over company assets then have priority in repayment, as do government tax claims. Ordinary creditors are repaid proportionally after the above priorities are satisfied, while shareholders’ interests rank last.
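
To illustrate the priority order described above, the short Python sketch below runs a simplified repayment waterfall: each class is paid in full, in order, until the estate is exhausted, with anything left going to shareholders. The class names and amounts are hypothetical, and the sketch deliberately glosses over how security over specific assets, pari passu sharing within a class and statutory caps operate in practice; it illustrates the ordering only.

```python
# Simplified illustration of the repayment order described above.
# Class names and amounts (hypothetical, in HKD millions) are our own; real
# distributions involve pari passu sharing within classes, secured assets
# realised outside the estate, and statutory caps.

def run_waterfall(estate: float, classes: list[tuple[str, float]]) -> dict[str, float]:
    """Pay each class in the listed order until the estate is exhausted."""
    payouts = {}
    remaining = estate
    for name, claim in classes:
        paid = min(claim, remaining)
        payouts[name] = paid
        remaining -= paid
    payouts["shareholders (residual)"] = remaining  # anything left goes to shareholders
    return payouts

waterfall = [
    ("liquidation costs and liquidator remuneration", 5.0),
    ("employee wages and severance compensation", 20.0),
    ("secured creditors (mortgages/pledges)", 40.0),
    ("government tax claims", 10.0),
    ("ordinary creditors", 100.0),
]

print(run_waterfall(60.0, waterfall))
# -> costs and employees paid in full, secured creditors partially paid;
#    tax, ordinary creditors and shareholders receive nothing in this example.
```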

II. The Asset Scope and Judicial Assistance Practice of Hong Kong Companies in Liquidation in Mainland China

According to Article 5 of the Supreme People’s Court’s Opinion on Taking Forward a Pilot Measure in relation to the Recognition of and Assistance to Insolvency Proceedings in the Hong Kong Special Administrative Region (hereinafter referred to as the “Pilot Opinion”), if the debtor’s principal assets in the Mainland are located in a pilot area, or it has a place of business or a representative office in a pilot area, the Hong Kong administrator may apply to Mainland courts for recognition of and assistance to the Hong Kong insolvency proceedings. Accurately defining the scope of a debtor’s assets in Mainland China under a Hong Kong winding-up order is essential for ensuring a fair and effective liquidation process. The scope of the debtor’s assets is extensive, covering both tangible and intangible assets, which are crucial for repaying debts and protecting creditors’ rights. Based on cases recognized and assisted by Mainland courts, the following provides a preliminary overview of the Mainland assets of Hong Kong companies in liquidation.

1. Shenzhen Intermediate People’s Court Recognizes and Assists Samson Paper Company’s Hong Kong Liquidation Procedure

On August 14, 2020, the shareholders of Samson Paper Company Limited passed a written resolution to voluntarily wind up the company and appoint a liquidator. On August 30, 2021, the liquidator applied to the Shenzhen Intermediate People’s Court for recognition of the liquidator’s status and permission to perform duties in Mainland China. The application stated that Samson Paper’s main assets in Mainland China included equity investments, property assets, and accounts receivable. The equity investments were primarily in its wholly-owned subsidiaries in Mainland China, the property asset was an apartment in Beijing, and the accounts receivable were payments due from related parties in Mainland China. On December 15, 2021, the Shenzhen Intermediate People’s Court issued Civil Ruling (2021) Yue 03 Ren Gang Po No. 1, allowing the liquidator to perform duties in Mainland China, including taking over, managing, and disposing of Samson Paper’s assets.

2. Shanghai Intermediate People’s Court Recognizes and Assists Hong Kong Ze International Group’s Liquidation Proceedings

On March 17, 2021, following an application by a creditor (a Hong Kong branch of a bank), the Hong Kong High Court issued a winding-up order (HCCW 429/2020), initiating the liquidation of Hong Kong Ze International Group. The liquidator considered the significant value of the company’s direct investments in Mainland China and accordingly filed an application with the Hong Kong High Court. Hong Kong Ze International Group had four wholly-owned subsidiaries in Shanghai and held shares in three other companies in Shaanxi Province.

On March 30, 2023, the Shanghai No. 3 Intermediate People’s Court issued Civil Ruling (2022) Hu 03 Ren Gang Po No. 1, allowing the liquidator to perform duties in Mainland China, including taking over, managing, and disposing of the company’s assets and investigating its property. The court considered that the equity of the four wholly-owned subsidiaries registered in Shanghai constituted the main property of Hong Kong Ze International Group in Mainland China.

3. Xiamen Intermediate People’s Court Recognizes and Assists Husk’s Green Technology Holdings’ Liquidation Proceedings

On October 26, 2022, the Hong Kong High Court issued a compulsory winding-up order against Husk’s Green Technology Holdings Co., Ltd. On January 17, 2023, creditors applied to convert the compulsory liquidation into a voluntary liquidation. The judicial assistance request letter from the Hong Kong High Court stated that Husk’s Green Technology’s main assets in Mainland China were its wholly-owned subsidiaries: five in total, four of which were located in Xiamen.

The Hong Kong High Court requested that the Xiamen Intermediate People’s Court permit the liquidator to take over all assets and properties related to these subsidiaries in Mainland of China, investigate their affairs, and initiate legal proceedings in Mainland courts.

In 2024, the Xiamen Intermediate People’s Court issued Civil Ruling (2024) Min 02 Ren Gang Po No. 1, determining that Husk’s assets in Mainland China were limited to the equity of its wholly-owned subsidiaries. The Xiamen court did not narrowly define “principal assets” as traditional tangible assets but explored and recognized broader and more diverse forms of assets. This suggests that Mainland courts may further expand the scope of “principal assets” in the future to include various forms of indispensable corporate assets.

From the above cases, it can be seen that, at the current stage, the Mainland assets of Hong Kong companies in liquidation primarily involve equity investments, real estate, and accounts receivable, with wholly-owned subsidiaries being the main assets. After obtaining recognition and assistance from pilot-area courts, liquidators have the authority to investigate and manage the company’s assets in Mainland China. However, pilot-area courts retain the right to approve significant asset disposals, such as relinquishing property rights, creating security interests, borrowing, or transferring assets out of Mainland China. With the development of IP and AI technologies, the scope of recognized corporate assets will continue to expand.

III. Rules for Disposing of Mainland Assets in Hong Kong Company Liquidations

1. Preservation Arrangements Before Allowing Liquidators to Take Over Mainland Assets

After a request for recognition and assistance is submitted to a pilot-area court under the Pilot Opinion, if the Hong Kong liquidator applies for asset preservation, the People’s Court will handle the matter in accordance with Mainland laws. Specifically, the application for preservation measures must be made to the court with jurisdiction over the location of the assets to be preserved, the domicile of the respondent, or the court with jurisdiction over the case, and appropriate guarantees must be provided. Once the People’s Court recognizes the Hong Kong bankruptcy proceedings, preservation measures on the Hong Kong company’s Mainland assets will be lifted.

2. Application of Mainland Bankruptcy Laws to the Disposal of Mainland Assets

According to the Pilot Opinion, after recognizing Hong Kong bankruptcy proceedings, the People’s Court may appoint a Mainland administrator upon the application of the Hong Kong liquidator or creditors. After such designation, the Mainland administrator will assume responsibility for managing and disposing of the Hong Kong company’s Mainland assets as permitted by the pilot-area court. The debtor’s affairs and assets in Mainland China will be governed by the Enterprise Bankruptcy Law of the People’s Republic of China, under which the bankrupt enterprise may sell its assets either in whole or in part, with intangible assets and other properties potentially sold separately. Assets that are prohibited from being auctioned or restricted from transfer under national regulations must be handled in accordance with the prescribed legal procedures.

Regarding the Mainland assets of the Hong Kong companies in liquidation, prioritized Mainland debts must be paid first, and the remaining assets will be distributed and repaid under the Hong Kong liquidation process. Therefore, the scope of assets available for debt repayment in Hong Kong liquidation proceedings will be subject to coordination between the liquidators and the administrators of both jurisdictions in practice.

IV. Key Points of Collaboration Between Hong Kong Administrators and Mainland Administrators in Hong Kong Bankruptcy Procedures

1. Collaboration on the Declaration of Claims by Mainland Creditors

In the liquidation of Hong Kong companies, creditors can declare their claims through the website of the Official Receiver’s Office of the Hong Kong Special Administrative Region. After the liquidation order is issued, creditors can submit a claim by completing the “Proof of Debt” form. The Proof of Debt can be submitted by the creditor itself, or by a person authorized by or acting on behalf of the creditor who has knowledge of the facts. No distinction is made regarding the geographical location of creditors. In judicial practice, mainland creditors or their authorized representatives can register their claims with the Official Receiver’s Office directly.

According to Section 14 of the Pilot Opinion, after recognizing the Hong Kong bankruptcy proceedings, the people’s court may, upon application, rule to allow the Hong Kong administrator to perform duties in Mainland China, including accepting and reviewing claims from mainland creditors. If a mainland administrator is designated upon the request of the Hong Kong administrator or creditors, the relevant duties shall be undertaken by the mainland administrator. That is, in cases where a Hong Kong liquidation is recognized and assisted by a pilot-area court in the mainland, the relevant creditors may file their claims with, and have them reviewed by, the designated mainland administrator.

According to Section 69 of the Hong Kong Bankruptcy Ordinance, in the calculation and distribution of a dividend the trustee shall make provision for debts provable in bankruptcy appearing from the bankrupt’s statements, or otherwise, to be due to persons resident in places so distant from Hong Kong that in the ordinary course of communication they have not had sufficient time to tender their proofs or to establish them if disputed, and also for debts provable in bankruptcy the subject of claims not yet determined. This provision also reflects the arrangement for protecting the rights and interests of creditors outside Hong Kong.

2. Collaboration on the Investigation, Management, and Disposal of the Debtor’s Assets in the Mainland

Upon the request of the Hong Kong liquidator or provisional liquidator, a mainland administrator may be appointed to investigate the debtor’s asset status and to manage and dispose of the debtor’s property in the Mainland. If the debtor’s Mainland assets mainly consist of wholly-owned subsidiaries, the mainland administrator can generally obtain information on the financial status and relevant assets of the subsidiaries through the local administration for market regulation, the taxation bureau, and the banks where the subsidiaries are located. Additionally, information about employees may be obtained through the company’s internal channels or the human resources and social security departments.

The mainland administrator can also examine the intellectual property assets registered under the company’s name through platforms such as the Ministry of Industry and Information Technology’s government service platform, the trademark database of the China National Intellectual Property Administration, the China Copyright Protection Center, and the Patent Search and Analysis website of the China National Intellectual Property Administration. Furthermore, by consulting platforms such as China Judgments Online, the mainland administrator can review the litigation status of the debtor’s mainland subsidiaries to assess whether there are any accounts receivable or other disposable assets.

3. Collaboration in Representing the Debtor in Litigation, Arbitration and other Judicial Proceedings

The mainland administrator appointed upon the application of the Hong Kong liquidator or provisional liquidator can represent the debtor in litigation, arbitration, preservation measures and enforcement procedures relating to the debtor’s property in the Mainland. In such judicial cooperation, the mainland administrator should communicate with the Hong Kong liquidator promptly and keep timely records regarding the judicial procedures of the case and legal issues under Chinese law, so as to properly advance the management and disposal of the debtor’s assets.

V. Conclusion

The Evergrande Group liquidation case highlights the challenges posed by the massive scale of the debt and the barriers to legal and judicial cooperation between Mainland China and Hong Kong, which have led to slow progress in asset disposal. In the Samson Paper case, the Shenzhen Intermediate People’s Court recognized the Hong Kong bankruptcy proceedings and the liquidator’s status, setting a precedent for cross-border bankruptcy assistance. However, the execution process revealed issues such as inadequate information sharing and difficulties in procedural coordination. The disposal of the Mainland assets of Hong Kong companies in liquidation will continue to be a key issue for the liquidators in both jurisdictions to jointly explore and cooperate on.

From 26 to 28 February 2025, Dr Zhan Hao and Ms Song Ying, partners at AnJie Broad Law Firm, participated in the ABA Asia-Pacific Conference held at Singapore Management University.

Bringing together over 200 legal professionals from around the world, the conference served as a platform for in-depth discussions on key legal and regulatory developments in the Asia-Pacific region. Covering topics such as AI, Competition Law, Corporate Transactions, Dispute Resolution, Legal Ethics, Restructuring and Technology, and Trade and Sanctions, the forum facilitated valuable exchanges on emerging trends and practical challenges.

Dr Zhan Hao contributed as a speaker in the panel discussion “Convergence or Conflict: Decoding the New Balance Between Antitrust and Intellectual Property in Asia.” In this session, experts from various jurisdictions explored in depth the evolving intersection of antitrust and intellectual property rights across jurisdictions such as China, South Korea, Japan, Singapore, and Australia. Discussions highlighted recent precedents and legislative changes, focusing on the challenges faced by innovation-intensive industries like high-tech and pharmaceuticals amid increasing antitrust scrutiny in global mergers, governmental investigations, and antitrust litigation.

Dr Zhan Hao shared insights on the evolving relationship between intellectual property (IP) and competition law and emphasized the importance of balancing innovation incentives with fair competition principles. In particular, he stressed the constant efforts of China’s Supreme People’s Court (SPC) to refine the rules for applying antitrust law in IP matters, highlighting landmark cases such as Hitachi Metals and Huaming. Dr Zhan also addressed the increasing focus on standard essential patents (SEPs) and the potential rise of private antitrust litigation in China. He further advised stakeholders in the pharmaceutical and technology sectors to closely follow upcoming antitrust guidelines and heightened scrutiny in merger reviews. The attendees highly appreciated Dr Zhan Hao’s presentation and applauded his in-depth analysis and practical perspectives, which added valuable insights to the forum.

The event also provided opportunities to strengthen professional exchanges and communications through various networking sessions and evening gatherings at the Singapore Supreme Court and the Singapore Cricket Club.

We appreciate the opportunity to exchange insights with our global peers and look forward to continuing these conversations. Let’s stay connected!

Deep synthesis (“DS”) technology is widely applied in the field of artificial intelligence (“AI”) across a variety of scenarios, particularly audio and video production, media communication and information services. For instance, using AI technology to “resurrect” deceased people on digital devices became a visible business during the Qingming Festival in 2024.[1] But whether the technology and the related business infringe upon the rights and interests of others or upon public interests is open to question. To address the legal and ethical risks and harm posed by DS techniques, China has promulgated a series of laws, regulations and standards, aiming to implement an adaptable and agile governance approach, mitigate AI-related risks and ultimately ensure responsible AI development.

1. Key Regulatory Authorities and Fundamental Principles

On November 25, 2022, the Cyberspace Administration of China (“CAC”), the Ministry of Industry and Information Technology (“MIIT”) and the Ministry of Public Security (“MPS”) jointly issued the Administrative Provisions on Deep Synthesis of Internet Information Services (the “DS Administrative Provisions”), which took effect on January 10, 2023.

According to the DS Administrative Provisions, the regulatory authorities include: (i) the national cyberspace authority, which is responsible for the overall organization and coordination of the governance of DS services nationwide and the related supervision and regulation; and (ii) the telecommunications authority and the public security authority under the State Council, which are responsible for the supervision and regulation of DS services. The local counterparts of each regulatory authority are responsible for the overall organization and coordination of the governance, or the supervision and regulation, of DS services within their respective administrative regions.[2]

The DS Administrative Provisions were promulgated on the basis of the following laws and regulations: (i) the Cybersecurity Law of the People’s Republic of China (the “PRC”), (ii) the Data Security Law of the PRC, (iii) the Personal Information Protection Law of the PRC, and (iv) the Administrative Measures on Internet Information Services and other laws and administrative regulations. They can be regarded not only as one of the key components in guaranteeing the healthy and orderly development of the digital economy, but also as a response to the rapid development of AI technology in the new digital era.

To curb “deepfake” risks and the risk of infringing people’s rights or even committing crimes, the DS Administrative Provisions require DS service providers (i) to operate their business in accordance with relevant laws and regulations, (ii) to respect social morality and ethics, and (iii) to maintain the correct political direction, public opinion orientation and value orientation, so as to promote the development of DS services for good.[3]

Chapter II of the DS Administrative Provisions sets out the general rules that DS service providers shall comply with, which are summarized below.

What a DS service provider should do:

  • establish and improve management mechanisms and systems for user registration, algorithm review, scientific and technological ethics review, information release review, data security, personal information protection, anti-telecom and online fraud, and emergency response;[4] and formulate and disclose management rules, platform conventions and user/service agreements;[5]
  • verify the identity information of DS service users based on mobile phone numbers, ID card numbers, unified social credit codes or national online identity authentication services;[7]
  • strengthen the management of DS content and employ technical or manual methods to review the data input by DS service users and the synthesis results;[9]
  • establish a rumor refutation mechanism[10] and convenient channels for users and the public to file complaints and reports, and handle complaints and provide feedback in a timely manner.[11]

What a DS service provider should not do:

  • use DS services to produce, reproduce, release or distribute information, or to engage in activities, prohibited by laws or administrative regulations;[6]
  • use DS services to produce, reproduce, release or distribute fake news information.[8]


2. Basic Requirements of Legal Safeguard

(1) Data and information process

Massive data resources and rich application scenarios are among China’s strongest advantages in AI development. The application data generated by massive numbers of users continuously optimizes model performance, forming a positive cycle of “data-algorithm-scenario”.[12]

China has established a sound system of laws to protect people’s rights and interests in cyberspace and has built a legal line of defense for protecting personal information rights and interests, reflected in (i) the Civil Code, (ii) the Criminal Law (which added provisions on the crime of infringing upon citizens’ personal information), (iii) the Cybersecurity Law, (iv) the Data Security Law, and (v) the Personal Information Protection Law. The DS Administrative Provisions include a specific chapter regulating data-related activities,[13] which requires DS service providers to take necessary measures to ensure the security of training data[14] and to strengthen technical management by reviewing, assessing and verifying their algorithm mechanisms on a regular basis.[15]

In particular, at the heart of DS technology lies the application of generative AI to process sensitive personal data, including biometric features, which raises several compliance concerns. Key issues include the sourcing of training data, the risk of generating misleading or false information, and ensuring the security of the data.[16] If AI-generated content misrepresents a target object, such as the deceased in the business of digitally “resurrecting” the dead, or is used to deceive others, it could lead to widespread misinformation and potential harm.

Overall, the entire process of collecting, processing, and using data must adhere to the principles of legality, necessity, fairness, and transparency as outlined in the Personal Information Protection Law. The DS Administrative Provisions stipulate that (i) providers of DS services and functions such as editing biometric information should prompt users to inform, and obtain consent from, the individuals whose data is being edited,[17] and (ii) the application of DS technology must not infringe on citizens’ rights. In addition, providers of DS services such as face-swapping must provide “prominent identification functions” capable of identifying virtually synthesized “persons” if the service is “likely to cause public confusion or misidentification”.[18]

(2) AI-generated content labeling

On September 14, 2024, the CAC released a draft regulation, the Measures for Labeling AI-generated Synthetic Content (Draft for Comment) (the “Draft Regulations on Labeling AIGC”), aiming to standardize the labeling of AI-generated synthetic content in order to protect national security and public interests.

AI-generated synthetic content (“AIGC”), as defined by the Draft Regulations on Labeling AIGC, is any text, image, audio or video created using AI technologies.[19] Under the Draft Regulations on Labeling AIGC, (i) internet information service providers shall adhere to mandatory national standards when labeling AIGC, and providers offering functions such as downloading, copying or exporting AI-generated materials that may cause public confusion or misidentification must ensure that explicit labels are embedded in the files;[20] and (ii) platforms that distribute content are required to regulate the spread of AI-generated materials by offering identification functions and reminding users to disclose whether their posts contain AIGC.[21]

As a practical matter, requiring service providers to label their products is a sensible way of regulating DS technology. AIGC-related products should adhere to technical management norms. Relevant industry standards have been published and may be officially introduced to require technology service providers to use a unified code when labeling AIGC,[22] or even to add watermarks to generative content.
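
By way of illustration only, and not as a prescribed implementation of the Draft Regulations on Labeling AIGC or of any published coding standard, the following Python sketch shows how a provider might attach both an explicit (visible) label and an implicit (machine-readable) marker to a piece of AI-generated text. The label wording, the provider_code field and the function name are our own assumptions.

```python
import json

# Visible notice shown alongside the generated content (illustrative wording only)
EXPLICIT_LABEL = "AI-generated content"


def label_aigc_text(text: str, provider_code: str) -> dict:
    """Attach an explicit label and an implicit metadata marker to AI-generated text."""
    return {
        # Explicit label: appended so that end users can see the content is synthetic
        "content": f"{text}\n\n[{EXPLICIT_LABEL}]",
        # Implicit label: machine-readable metadata that downstream platforms could parse
        "metadata": json.dumps({
            "aigc": True,                    # flag marking the content as AI-generated
            "provider_code": provider_code,  # hypothetical unified service-provider code
        }),
    }


if __name__ == "__main__":
    print(label_aigc_text("Sample generated paragraph.", provider_code="DEMO-0001"))
```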

(3) Mandatory algorithm filing

It is well known that personal information security, fake information and algorithm discrimination are problems that have emerged in the development of AI technology. In response to problems raised by algorithms, China has issued laws and regulations, such as the Administrative Provisions on Recommendation Algorithms of Internet Information Services (the “Algorithms Administrative Provisions”),[23] to improve the legal regulation of AI applications.

According to the Algorithms Administrative Provisions, algorithm-based service providers shall comply with relevant laws and regulations, respect social morality and ethics, observe commercial and professional ethics, and follow the principles of fairness and equity, openness and transparency, scientific reasonableness, as well as honesty and credibility.[24]

In addition to (i) information management standards, such as adopting measures to ensure information security, and (ii) specific rules for protecting users’ rights and interests,[25] recommendation algorithm-based service providers with public opinion attributes or social mobilization capabilities shall complete the filing procedures within ten business days of starting to offer such services.[26]

Pursuant to the DS Administrative Provisions, DS service providers with public opinion attributes or social mobilization capabilities shall complete the filing procedures in accordance with the Algorithms Administrative Provisions.[27] CAC has also published a guide to completing such online filings. To date, CAC has released nine batches of algorithm filing information for DS service providers, and over 2,800 algorithms have been filed with CAC.

3. Rules to Address Ethical Challenge

AI presents enormous opportunities for humanity, but it also creates unpredictable risks and challenges, and both legal rules and ethical standards are needed to ensure that AI does not pose a threat to humanity. As AI continues to evolve, ethics are increasingly becoming the foundation for legislation, and AI governance is shifting from focusing solely on technical concerns to addressing broader ethical questions directly.

First, improving ethical review is one way to promote the healthy development of DS technology, and it is among DS service providers’ obligations.[28] Pursuant to the DS Administrative Provisions, service providers should demonstrate the value, morality and safety of their DS technology in accordance with relevant laws and regulations as well as industry standards.[29]

Meanwhile, relying on DS techniques, AI may be used to generate false information for illicit activities, as exemplified by prevalent face-swapping scams. The large-scale dissemination of AI-generated false information has the potential not only to infringe upon individuals’ legal rights, but also to degrade the online ecosystem and mislead the public. The abuse of AI face-swapping technology, on the one hand, poses a universal threat to rights related to personal dignity, such as reputation and image rights, and may further seriously infringe on people’s property rights. On the other hand, such technology can cause a crisis of trust in society, as the validity of some civil acts may be disputed; for example, it is now somewhat difficult to prove that the person signing a contract over a video link is a real person. To address these unintended harms, both legal requirements and ethical principles are reflected in the DS Administrative Provisions.[30]

On a separate note, in the development of AI technology, challenges remain in terms of algorithm improvement, privacy protection and data security, both domestically and globally. A similar case occurred in the United States involving OpenAI and Scarlett Johansson, who alleged that OpenAI misappropriated her voice.[31] The world has reached a consensus that technological innovation knows no bounds, but its application should stay within legal and ethical boundaries.[32] The application of DS technology should therefore adhere to both legal and ethical principles, as reflected in the evolving legal landscape and further addressed in relevant Chinese laws, regulations, guides, standards and norms.


[1] See Xinhua News Agency, AI ‘resurrection’ can’t be outside law’s purview, China Daily (April 9, 2024).

[2] See article 3 of the DS Administrative Provisions.

[3] See article 4 of the DS Administrative Provisions.

[4] See article 7 of the DS Administrative Provisions.

[5] See article 8 of the DS Administrative Provisions.

[6] See paragraph 1 of article 6 of the DS Administrative Provisions.

[7] See article 9 of the DS Administrative Provisions.

[8] See paragraph 2 of article 6 of the DS Administrative Provisions.

[9] See article 10 of the DS Administrative Provisions.

[10] See article 11 of the DS Administrative Provisions.

[11] See article 12 of the DS Administrative Provisions.

[12] See 21ST CENTURY BUSINESS HERALD, Rise of DeepSeek sheds light on Chinese AI path, China Daily (February 10, 2025).

[13] See chapter III Data and Technology Management Standards of the DS Administrative Provisions.

[14] See article 14 of the DS Administrative Provisions.

[15] See article 15 of the DS Administrative Provisions.

[16] See Zhang Linghan, AI ‘resurrection’? Let bygones be bygones, China Daily (January 3, 2025).

[17] See supra note 14.

[18] See article 17 of the DS Administrative Provisions.

[19] See article 3 of the Draft Regulations on Labeling AIGC.

[20] See article 4 of the Draft Regulations on Labeling AIGC and article 17 of the DS Administrative Provisions. 

[21] See article 6 of the Draft Regulations on Labeling AIGC.

[22] See Standard Practice Guide: Coding Rule for Service Providers of Network Security Artificial Intelligence Generated Synthetic Content Label (Draft for Comment), published on January 22, 2025.

[23] The Algorithms Administrative Provisions is jointly issued by CAC, MIIT, MPS and the State Administration for Market Regulation on December 31, 2021, taking effect on March 1, 2022. 

[24] See article 4 of the Algorithms Administrative Provisions.

[25] See chapter II and chapter III of the Algorithms Administrative Provisions.

[26] See article 24 of the Algorithms Administrative Provisions.

[27] See article 19 of the DS Administrative Provisions.

[28] See supra note 4.

[29] See article 4 and article 5 of the DS Administrative Provisions.

[30] See article 4, article 7 and article 17 of the DS Administrative Provisions. 

[31] See Todd Spangler, Scarlett Johansson Says She Was ‘Shocked’ and ‘Angered’ Over OpenAI’s Use of a Voice That Was ‘Eerily Similar to Mine’, Variety (May 20, 2024).

[32] See supra note 1.

Introduction

The primary objective of insurance law across jurisdictions is to balance the interests of policyholders, insureds, and insurers. However, the inherent complexity and standardization of insurance contracts, often drafted unilaterally by insurers, pose challenges for policyholders who may lack the expertise to fully comprehend their terms.

Insurance contracts are typically contracts of adhesion, where policyholders have little choice beyond electing among standardized provisions offered by insurers. Despite regulatory efforts to mandate certain provisions and prohibit others, insurers retain significant drafting autonomy. This imbalance often leads to unfair outcomes even when insurers fulfill their disclosure obligations, and it underscores the necessity of a legal mechanism to protect policyholders from unjust exclusions and limitations embedded in insurance policies.

The Principle of Reasonable Expectations (“PRE”) addresses this issue by prioritizing the objectively reasonable understanding of policyholders, ensuring that insurance contracts are interpreted in a manner that aligns with what an average policyholder would reasonably expect, even if strict policy wording might suggest otherwise. This principle is crucial in mitigating the imbalance of power between insurers and insureds and in promoting fairness in contractual interpretation. While not explicitly codified in Chinese law, Chinese courts have increasingly applied this principle in judicial practice. This article explores the development, judicial application, and comparative analysis of PRE in the U.S. and PRC jurisdictions.

Definition and Development of the Principle

The Principle of Reasonable Expectations was formally articulated by Judge Robert E. Keeton in his seminal 1970 article, Insurance Law Rights at Variance with Policy Provisions, published in the Harvard Law Review:

“The objectively reasonable expectations of applicants and intended beneficiaries regarding the terms of insurance contracts will be honored even though painstaking study of the policy provisions would have negated those expectations.”

This principle ensures that policyholders are not bound by unexpected exclusions or limitations hidden within complex insurance contracts. Keeton emphasized that “An important corollary of the expectations principle is that insurers ought not to be allowed to use qualifications and exceptions from coverage that are inconsistent with the reasonable expectations of a policyholder having an ordinary degree of familiarity with the type of coverage involved.” Over time, this principle has been adopted in various forms by jurisdictions in the United States and other common law countries, gradually evolving into a brand-new principle for interpreting insurance contracts.

While various doctrines share common goals with the Principle of Reasonable Expectations, it is distinct. Contra Proferentem dictates that ambiguities in a contract should be interpreted against the drafter – typically the insurer. While both doctrines mainly protect insureds where terms are ambiguous, the Principle of Reasonable Expectations extends further by protecting insureds even in the absence of ambiguity, focusing on their understanding of the policy. The Doctrine of Utmost Good Faith emphasizes the mutual duty of honesty and full disclosure between the insurer and the insured. Unlike Contra Proferentem, Utmost Good Faith is an ongoing obligation that applies from the pre-contractual stage throughout the life of the policy.

Development of the Principle in U.S. Judicial Practice

According to Roger C. Henderson, the Principle of Reasonable Expectations has been influential,[1] though its adoption across U.S. jurisdictions has been inconsistent. Some jurisdictions have embraced the principle broadly, prioritizing policyholders’ expectations over strict policy wording, while others have limited its application to cases involving ambiguous terms or misleading practices by insurers. Critics argue that an overly liberal application of the principle could undermine contractual freedom and create uncertainty in contract enforcement.

In the U.S., courts have adopted varied approaches to the Principle of Reasonable Expectations, falling into three main categories:

Position One: Interpreting Ambiguous Terms in Favor of the Insured

Case: Eli Lilly & Co. v. Home Ins. Co. (Indiana Supreme Court)[2]

Background Facts & Key Issues: Eli Lilly & Co., a prominent pharmaceutical company, faced environmental contamination liabilities related to diethylstilbestrol (DES), a drug later found to cause long-term health issues in the daughters of women who had taken it during pregnancy. Eli Lilly had purchased various Commercial General Liability (CGL) policies from Home Insurance Company. When the company sought compensation, the insurers denied coverage, arguing that the pollution incidents did not constitute an “occurrence” under the policy terms. A key issue was the determination of when the “occurrence” of injury took place under the terms of the policies – whether at the time of drug ingestion (policy coverage at the time of exposure or first sale, the insurers’ position) or at the time the injuries manifested (policy coverage at the time of discovery, Eli Lilly’s position).

Judgment: The Indiana Supreme Court ruled in favor of Eli Lilly, determining that the policy wording was ambiguous concerning when an “occurrence” should be recognized in cases of long-latency injuries. Applying the multiple-trigger interpretation, the court held that each insurer on the risk between ingestion of DES and the manifestation of a DES-related illness would be liable for indemnification, ensuring broader protection for policyholders facing delayed injury consequences.
Position Two: Rejecting Contract Terms That Conflict with Reasonable Expectations, usually referring to additional clauses

Case: Woodson v. Manhattan Life Insurance Co. of New York (Kentucky Supreme Court)[3]

Background Facts & Key Issues: On February 8, 1982, the policyholder, Woodson, was called before the Executive Committee of Kentucky Finance Company and was told to resign from his position. He signed a resignation letter on February 26, resigning from all officer and director positions but not from his employment or other duties as an employee. The company continued to pay his regular salary, including deductions for taxes and life insurance premiums, for six months following his resignation. Woodson was killed by his estranged wife on May 19, 1982. At the time of his death, the company was still remitting his life insurance premiums to Manhattan Life Insurance Company; after his death, the company ceased salary payments. Woodson’s estate filed a claim for life insurance benefits under the group policy, arguing that his coverage remained in effect at the time of his death; however, the insurer denied the claim, asserting that Woodson’s coverage had terminated upon his resignation. The core issue was whether Woodson was still a covered individual under the group life insurance policy at the time of his death. Specifically, the phrase “leave of absence” in the policy was key to whether Woodson was covered at the time of his death.

Judgment: The Kentucky Supreme Court held that the policy wording was ambiguous. The phrase “leave of absence” was not qualified as “temporary”, and the court found that the policy did not clearly exclude terminal leave. Therefore, the ambiguity should be resolved in favor of the insured. The court also applied the doctrine of reasonable expectations, finding that both Woodson and the company officials had treated him as a covered employee on leave until his death. The continuation of salary payments and life insurance premiums made it reasonable for Woodson to expect that his life insurance coverage remained in effect during the leave period.
Position Three: Overriding Explicit Policy Terms Based on Reasonable Expectations

Case: C&J Fertilizer, Inc. v. Allied Mutual Insurance Co. (Iowa Supreme Court)[4]

Background Facts & Key Issues: C&J Fertilizer, Inc. purchased a burglary insurance policy from Allied Mutual Insurance Co. to protect its property. The policy defined “burglary” as requiring visible signs of forcible entry into a locked building. C&J Fertilizer experienced a theft in which the burglars entered without visible signs of forced entry but stole property from inside the building. Allied Mutual denied the claim, citing the policy’s definition of burglary and the lack of visible marks indicating forced entry. C&J Fertilizer sued, arguing the policy terms were ambiguous and that the denial contradicted its reasonable expectations of coverage. The key issue was whether the insured’s reasonable expectations of coverage should override the policy’s explicit terms, especially when those terms were not clearly communicated or understood.

Judgment: The Iowa Supreme Court held that the policy’s definition of “burglary” was not consistent with the reasonable expectations of the insured, and that a layperson purchasing burglary insurance would reasonably expect coverage for theft regardless of visible signs of forced entry. Insurance contracts, especially adhesion contracts, should be interpreted in line with what an average policyholder would reasonably expect. The court applied the doctrine of reasonable expectations, holding that an insured’s reasonable understanding of the policy terms should prevail, even when the wording might technically limit coverage.

The Principle’s Application in China

Although the Principle of Reasonable Expectations is not formally codified in PRC law, courts have already consciously applied this principle in judicial decisions. Two cases are presented below for reference.

Case: (2019) Xiang 13 Minzhong No. 986

Background Facts & Key Issues: This was a health insurance dispute between the insureds, Zeng and her husband Zou, and the insurer. Zeng purchased a “360 Family Happiness Card Insurance” policy, which provided coverage for accidental injury and hospitalization. Zeng claimed insurance compensation after her husband fell from a makeshift staircase, suffered severe injuries, and was later diagnosed with an 8th-grade disability. The insurer denied coverage, arguing that the insurance card had not been activated and therefore the policy was not valid. The plaintiffs sued for the insurance payout. The main issue was the effectiveness of the clause in the insurance contract stating, “It is necessary to insure (activate) before the last insurance (activation) date stipulated on this card, in the manner provided by this card, and the insurance contract will only take effect after which the insured can enjoy insurance coverage.”

Judgment: The court treated the “activation” as a condition precedent to the policy’s effectiveness. Although such a condition is not an absolute exemption or limitation of the insurer’s liability, it creates a gap between the formation of the contract and the activation of coverage, effectively delaying the insurer’s liability. This delay, the court found, imposed additional obligations on the insured and was inconsistent with the insured’s reasonable expectations. The court emphasized that the insurer had a duty to clearly explain the activation requirement to the insured. Since the insurer failed to fulfill this obligation, the activation clause was deemed invalid. As a result, the court ruled that the policy became effective upon the purchase and payment of the premium, and the insurer was obligated to provide coverage for Zou’s injuries.

Case: (2020) Su Minzhong No. 372

Background Facts & Key Issues: This was an accident insurance dispute between the insureds, Wang and her son, and the insurer. Wang’s husband purchased an “Anxin Card H” insurance policy, which covered accidental death and accidental medical expenses. On October 13, 2017, Wang’s husband was involved in a traffic accident, and he died on May 25, 2018, due to complications from the injuries. The insurer refused to pay the insurance indemnities, arguing that Wang’s husband died more than 180 days after the accident, which was outside the policy’s coverage period. The plaintiffs sued for the insurance indemnities. The main issue was the effectiveness of the standard clause stating, “If the insured dies within 180 days from the date of the accidental injury due to the same cause, the company shall pay the death benefit according to the insurance amount of this contract”.

Judgment: The court ruled in favor of the insured, holding that the 180-day time limitation clause violated the Principle of Reasonable Expectations. The court reasoned that, from the perspective of a layperson, the insurer should pay death benefits if the insured dies as a result of an accidental injury, regardless of the time elapsed between the accident and the death. The 180-day limitation, the court found, was inconsistent with the reasonable expectations of the general public and contravened the principle of good faith. Furthermore, the court highlighted that the clause violated the principle of public policy and good morals and could create moral hazards, such as incentivizing the insured’s relatives to hasten death to ensure coverage within the 180-day window or discouraging them from providing timely medical treatment.

In addition, in certain circumstances, PRC courts also attempt to meet the insured’s reasonable expectations of the insurance contract by denying the effectiveness of exclusion clauses. According to Article 17 of the PRC Insurance Law, when entering into an insurance contract using standard terms provided by the insurer, the proposal form provided by the insurer to the policyholder shall be accompanied by the standard terms, and the insurer shall explain the content of the contract to the policyholder. For any clauses in the insurance contract that exempt the insurer from liability, the insurer shall, at the time of contract conclusion, provide a clear prompt on the proposal form, insurance policy, or other insurance vouchers that is sufficient to draw the policyholder’s attention, and shall clearly explain the content of such clauses in writing or orally to the policyholder; if no prompt or clear explanation is given, such clauses shall not be effective.

In practice, if an insurer cannot provide sufficient and compelling evidence to demonstrate that it has given clear and explicit explanations regarding the standard clauses that exclude its liabilities, the PRC courts are inclined to reject the validity of such exclusion clauses. This ensures that the policy coverage meets the expectations of the policyholders.

Conclusion

    The principle of Reasonable Expectations plays a critical role in addressing the inherent complexities and imbalances in insurance contracts. In the U.S., its application has been influential, though inconsistent, with courts prioritizing the reasonable understanding of policyholders in cases involving ambiguous terms or misleading practices. PRC courts are aligning insurance practices with the needs and expectations of policyholders. This judicial approach continues to serve as a vital tool for balancing the interests of insurers and insured parties, promoting fairness, and ensuring the integrity of insurance contracts. As the principle continues to evolve in Chinese jurisprudence, the Principle of Reasonable Expectations will likely remain a cornerstone of judicial reasoning, adapting to new challenges and contexts in the global insurance landscape.

    *The contribution of our intern, Chen Mengjun, to this article is also acknowledged with thanks.


    [1] Roger C. Henderson, The Doctrine of Reasonable Expectations in Insurance Law after Two Decades, 51 Ohio St. L.J.

    [2] Eli Lilly & Co. v. Home Insurance Co., 482 N.E.2d 467 (Ind. 1985). https://law.justia.com/cases/indiana/supreme-court/1985/685s243-2.html

    [3] Woodson v. Manhattan Life Insurance Co. of New York, 743 S.W.2d 835 (Ky. 1987). https://law.justia.com/cases/kentucky/supreme-court/1987/87-sc-247-dg-1.html

    [4]  C&J Fertilizer, Inc. v. Allied Mutual Insurance Co., 227 N.W.2d 169 (Iowa 1975). https://law.justia.com/cases/iowa/supreme-court/1975/2-56355-0.html

    Multinational enterprises adeptly allocate insurance resources on a global scale. However, China’s laws and regulations impose restrictions on overseas insurance arrangements, requiring policyholders to navigate regulatory compliance challenges, particularly in insurance and foreign exchange regulation, when selecting coverage for their Chinese subsidiaries. Drawing on practical experience, this article examines the insurance and foreign exchange regulation risks multinational enterprises might face when an overseas parent company acts as the policyholder, arranging insurance for its Chinese subsidiaries.

    (I) Insurance Regulation

    According to Article 7 of the Insurance Law, “legal persons and other organizations within the People’s Republic of China that need to obtain domestic insurance shall purchase insurance from insurers within the People’s Republic of China.” However, the Insurance Law and related regulations do not explicitly define what constitutes “domestic insurance.” The former China Insurance Regulatory Commission (“CIRC”) issued several letters to clarify the application of Article 7 of the Insurance Law, which can be referenced as follows:

    Letter on the Interpretation of Provisions in the Insurance Law (Yin Jian Ban Han [2002] No. 112)

    From the wording of Article 6 of the Insurance Law, it can be understood that the requirement to purchase insurance from domestic insurance companies contains two conditions: First, the policyholder or the insured is a domestic legal entity or organization; second, the insurance is primarily for insurance objects within China.

    Letter on the Interpretation of Article 7 of the New Insurance Law (Yin Jian Ban Han [2003] No. 19)

    The interpretation of Article 7 of the new Insurance Law should focus on two aspects: First, the domestic legal entity or other organization is the policyholder paying the insurance premium; second, the insurance subject and risk are located within China.

    Letter on the Interpretation of Article 7 of the Insurance Law (Yin Jian Ting Han [2009] No. 124)

    According to the Letter on the Interpretation of Provisions in the Insurance Law (Yin Jian Ban Han [2002] No. 112), the Insurance Law requires that insurance be purchased from domestic insurance companies when the policyholder or the insured is a domestic legal entity or organization, and the insurance object is located within China.

    Based on the above letters, insurance must be purchased from domestic insurance companies when both of the following conditions are met: (1) the policyholder or the insured is a domestic legal entity or organization, and (2) the insurance subject is located within the People’s Republic of China. In the scenario discussed here, the policyholder is an overseas entity, the premium is paid by the overseas parent company, the insured is a domestic entity, and the insurance subject is usually located within China. On a literal reading, since both the insured and the insurance subject are within China, this model could theoretically be considered as falling within the scope of “domestic insurance”.

    However, since only one of the insured parties is a domestic company and the policyholder is an overseas parent company, the arrangement could also be regarded as falling outside the scope of “domestic insurance”, and the compliance risk is therefore relatively low. Based on currently available information, no penalty cases for this scenario have been identified through public research.

    (II) Foreign Exchange Regulation

    Under the abovementioned model, the Chinese subsidiary will share the premium payment with the parent company and may receive insurance benefits from the overseas insurer when an insured event occurs. The analysis of these capital inflows and outflows is as follows:

    First, the nature of such payments and receipts needs to be clarified. According to the Q&A section on the official website of the State Administration of Foreign Exchange (SAFE), service trade foreign exchange receipts and payments refer to current account foreign exchange receipts and payments other than those related to goods trade, collectively known as service trade foreign exchange transactions. Specifically, service trade foreign exchange transactions include: (1) transactions under the following categories: transportation services, travel, construction, insurance services, financial services, telecommunications, computer and information services, other commercial services, cultural and entertainment services, etc.; (2) primary income (earnings), including wages, investment income, and other primary income; and (3) secondary income (current transfers), including donations, non-life insurance compensation, social security, and other secondary income. Thus, premiums shared by the Chinese subsidiary and insurance benefits received from overseas insurers both fall under the category of service trade foreign exchange transactions. No prohibitive foreign exchange regulations on the outflow of shared premiums or the inflow of insurance claims have been identified.

    However, the foreign exchange transactions in this model still need to comply with the relevant requirements for service trade foreign exchange transactions. According to Article 49 of the Circular on the Current Account Foreign Exchange Business Guidelines (2020 Edition), “For service trade foreign exchange transactions of a single amount equivalent to $50,000 or less, banks are generally not required to review transaction documents. For foreign exchange transactions where the nature of funds is unclear, the bank must require domestic institutions and individuals to submit transaction documents for reasonable review. For service trade foreign exchange transactions of a single amount exceeding $50,000 (exclusive of $50,000), the bank should confirm that the transaction documents align with the transaction entity, amount, and nature stated in the foreign exchange application.” Therefore, for premiums shared with the parent company and insurance benefits received from overseas, if the transaction amount is $50,000 or less, the bank generally does not need to review the transaction documents; if the amount exceeds $50,000, the transaction documents should be submitted for bank review.

    In addition, for foreign exchange outflows from the Chinese subsidiary to share premiums with the parent company, Article 49 also stipulates: “(1) Costs of advance payments or shared expenses between related domestic and foreign entities should generally not exceed 12 months.” The Chinese subsidiary should therefore pay attention to the relevant time limit when remitting shared premiums, ensuring that the remittance is made within 12 months.
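
For illustration of the two quantitative rules described above (the USD 50,000 document-review threshold and the 12-month window for remitting shared costs), the following Python sketch encodes them as simple checks. It is a minimal sketch of our reading of Article 49; the function names and the 365-day approximation of “12 months” are our own assumptions, not part of the regulation.

```python
from datetime import date, timedelta

# Figures taken from the text above; variable names are our own.
DOC_REVIEW_THRESHOLD_USD = 50_000   # single-transaction review threshold under Article 49
SHARED_COST_WINDOW_DAYS = 365       # rough approximation of the 12-month remittance window


def bank_document_review_required(amount_usd: float, funds_nature_clear: bool = True) -> bool:
    """Return True if the bank should review transaction documents.

    Single transactions above USD 50,000 (exclusive) require document review;
    transactions whose nature is unclear require review regardless of amount.
    """
    return amount_usd > DOC_REVIEW_THRESHOLD_USD or not funds_nature_clear


def shared_premium_remitted_on_time(cost_incurred: date, remittance: date) -> bool:
    """Check that a shared premium is remitted within roughly 12 months of the cost arising."""
    return remittance <= cost_incurred + timedelta(days=SHARED_COST_WINDOW_DAYS)


if __name__ == "__main__":
    print(bank_document_review_required(48_000))   # False: at or below USD 50,000
    print(bank_document_review_required(60_000))   # True: exceeds USD 50,000
    print(shared_premium_remitted_on_time(date(2024, 1, 15), date(2024, 11, 30)))  # True
```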

    In conclusion, multinational enterprises engaging in cross-border insurance arrangements for their Chinese subsidiaries must carefully navigate both insurance and foreign exchange regulatory risks. While the structure of having an overseas parent company as the policyholder and a domestic subsidiary as the insured (coinsured) can be a cost-effective and efficient way to manage risks, it must be done in compliance with China’s regulatory frameworks.

    In recent years, the AnJie Broad insurance team has handled several arbitration cases in the United States, the United Kingdom, and other jurisdictions involving reinsurance contract disputes, all of which concerned situations where Chinese companies, acting as reinsurers, assumed risks from overseas. Owing to the technical complexity inherent in international reinsurance business—often compounded by excessively long retrocession chains, incomplete documentation, and missing information during the ceding process—different parties may later interpret the scope of the reinsurer’s assumed risk differently following a loss. For example, there may be disputes as to which specific policy is deemed to have been ceded. This article presents observations and views primarily based on disputes arising from the coexistence of local policies and global policies commonly encountered in international reinsurance transactions.

    For multinational enterprises, which typically have locations, operating offices, and factories around the world, business activities are subject not only to domestic risks but also to risks faced by their subsidiaries, affiliates, and branches located in various jurisdictions subject to different policies, regulations, and sometimes special rules applicable to foreign companies. This reality has driven international insurance companies to develop integrated global risk solutions for such multinationals. One such solution is the Controlled Master Program (“CMP”). This program generally combines a Master Policy issued in the home country of the multinational enterprise with a series of Local Policies issued in regions where the enterprise has risk exposure. In an ideal scenario, the terms of the Master Policy and the Local Policies would be completely consistent; however, in practice, they often are not. Due to differing legal and regulatory requirements across countries, variations in the risks involved, and differences in the insured’s local operating circumstances, even if the vast majority of the terms contained in the Master Policy and Local Policies are consistent, significant differences usually exist. Moreover, with respect to the selection of the insurer issuing the Local Policy, international insurers may leverage their global resources and networks by issuing local policies through their subsidiaries, affiliates, or partner local insurers. By adopting the CMP, an insurer can provide a multinational enterprise with both an integrated global insurance solution and, at the same time, tailor policy designs to specific local needs, thereby meeting the enterprise’s unified yet diversified insurance requirements worldwide.

    Both the Master Policy and the Local Policies almost invariably contain “Difference in Conditions” (DIC) and “Difference in Limits” (DIL) clauses. The DIC clause typically provides that if a claim is made under the Local Policy but its terms do not apply or are insufficient to cover the loss, the broader coverage under the Master Policy will then apply. The DIL clause generally stipulates that if the limits of the Local Policy are exhausted, the higher limits of the Master Policy may be used to cover the claim. More importantly, both the Master Policy and the Local Policies typically specify that if a loss from a single insured event falls within the coverage scope of both policies, the Local Policy will serve as the primary responder. Only once the Local Policy’s limits are exhausted or its coverage is otherwise insufficient will the Master Policy respond to any remaining uncovered loss.
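
To make the payment sequence concrete, the following Python sketch allocates a single covered loss between the Local Policy and the Master Policy on the simplified assumptions described above: the Local Policy responds first up to its limit, and the Master Policy covers the remainder up to its (usually higher) limit. Deductibles, DIC coverage differences and exclusions are ignored. This is illustrative only, not a statement of how any particular CMP operates.

```python
def allocate_cmp_claim(loss: float, local_limit: float, master_limit: float) -> dict:
    """Split a covered loss between the Local Policy and the Master Policy.

    Simplified model: the Local Policy pays first up to its limit (local-first clause),
    and the Master Policy then pays the remainder up to its own limit (DIL-style top-up).
    """
    local_paid = min(loss, local_limit)
    master_paid = min(loss - local_paid, master_limit)
    uncovered = loss - local_paid - master_paid
    return {"local": local_paid, "master": master_paid, "uncovered": uncovered}


# Example: a 15m loss against a 5m Local Policy limit and a 20m Master Policy limit
print(allocate_cmp_claim(15_000_000, 5_000_000, 20_000_000))
# {'local': 5000000, 'master': 10000000, 'uncovered': 0}
```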

    On the face of these provisions, the contractual language appears clear: when a loss resulting from a risk falls under the coverage responsibilities of both the Local Policy and the Master Policy, the Local Policy should pay first, with the Master Policy serving only as supplementary coverage. Such a provision can resolve the complex issues of double insurance or overlapping coverages that might arise with two or more policies. However, in international reinsurance transactions—especially among Chinese insurers—when ceding CMP risks, the reinsurance slip typically only lists the Master Policy number and records risk information identical to that of the Master Policy, with no reference to the Local Policy or the CMP as a whole. Consequently, in the event of a loss, because the Local Policy is contractually obligated to respond first, the insured and the insurer usually settle and agree upon the claim under the Local Policy. But when the original insurer or the cedant forwards the settlement agreement and the corresponding loss adjuster’s report to the reinsurer for contribution, the reinsurer may contend that it only assumed risk under the Master Policy—not under the Local Policy. Although there is a certain commercial linkage between the Local Policy and the Master Policy, they remain legally independent policies. If an incident falls within the coverage of both policies, yet the loss is entirely settled under the Local Policy (i.e., the Master Policy’s coverage is not triggered), the reinsurer is well within its rights to argue that because its assumed liability under the Master Policy has not been activated, it need not bear any responsibility at the reinsurance layer.

    In a recent case handled by AnJie Broad, an in-depth examination of the parties’ true intent at the time of reinsurance contract formation revealed that both the ceding company and the reinsurer did not limit their understanding of the ceded risk merely to the Master Policy but, in fact, intended to include the entire global integrated risk program, the CMP. For instance, during the retrocession phase of that case, the ceding company, via its broker, sent an email to the reinsurer that effectively ceded the entire CMP as the risk, discussing a global reinsurance arrangement without limiting the scope solely to the Master Policy or deliberately distinguishing between the Master Policy and the Local Policy. However, because the final reinsurance slip lacked sufficient clarity (for example, it listed only the Master Policy number without expressly stating that the reinsurer was assuming the entire CMP), the reinsurer, based on the plain language of the slip, interpreted its assumed risk as confined solely to that under the Master Policy; i.e., only losses paid under the Master Policy would trigger its liability. Particularly when the underwriting and claims functions are handled by different personnel within a company, it is difficult to expect the reinsurer’s claims department to fully understand the true intent of the underwriting personnel when the reinsurance contract was originally concluded.

    It should be noted that under Chinese law, when interpreting reinsurance contracts, the principle of contra proferentem under Article 30 of the Insurance Law (which works to the detriment of the insurer) does not apply. Reinsurance contracts are generally regarded as commercial contracts between two equally sophisticated and professional parties that have engaged in extensive negotiations; hence, the general rules of contract interpretation apply. As stipulated in Article 142 of the Civil Code of PRC, “the meaning of an expression shall be determined in light of the words used, in conjunction with the relevant clauses, the nature and purpose of the conduct, trade usages, and the principles of good faith.” Therefore, in typical cases where no other evidence can more clearly demonstrate that the true intent of the parties was to cede the entire global risk program (CMP), the content stated in the reinsurance slip will directly form the basis for determining the parties’ rights and obligations, including the scope of assumed risk. This is why reinsurers often contend that, since the reinsurance slip only contains the Master Policy number and related information, losses occurring under the Local Policy should not trigger their liability. However, the customary practices between the parties may also serve as an interpretative method, and the reinsurer’s past practices in settling claims might be used as a basis for determining liability in future similar cases.

    Disputes regarding the relationship between the Local Policy and the Master Policy may further arise from several additional aspects:

    As mentioned above, while the Master Policy and the Local Policy are commercially linked, they remain two legally independent policies, with different insured parties and insurers. Moreover, the risks insured, the limits of indemnity, deductibles, and exclusion clauses may vary. Although an insurer may design the Master Policy and the Local Policy with the intention of having the Local Policy pay first and then supplement any deficiency with the Master Policy, the sequence-of-payment clauses (such as the DIC, DIL, and Local Policy-first payment clauses) are often only stipulated in the Master Policy and not in the Local Policy. This may lead the insurer of the Local Policy to deny the applicability of the DIC, DIL, and Local Policy-first payment clauses contained in the Master Policy, thereby asserting that the Local Policy and the Master Policy constitute two “parallel policy structures” rather than an “umbrella policy structure,” with both insurers bearing losses in accordance with the principles of double insurance or insurance concurrence.

    Furthermore, additional issues arise when the Master Policy expressly excludes “U.S. policies” from the definition of Local Policy, without a clear provision as to which specific policy or which insurer’s policy is meant by “U.S. policy.” This ambiguity can ultimately result in a situation where, after a loss is settled under the so-called “U.S. policy,” no link can be established between the indemnity under that “U.S. policy” and the Master Policy, thereby increasing the risk of reinsurer denial.

    In summary, the CMP integrated global risk solution provided by international insurers to multinational enterprises is now widely applied. Both domestic Chinese insurers and reinsurers participate in these arrangements either directly or indirectly. However, regardless of how industry practitioners interpret such arrangements, it is imperative that the parties accurately and unambiguously incorporate their true intent into the contract; otherwise, disputes can easily arise. As noted above, while interpretative tools such as contextual interpretation, purposive interpretation, and reference to industry practice can be employed to ascertain the parties’ true intent, these methods are often less reliable than a clear and unequivocal contractual provision in delineating the parties’ rights and obligations. This is especially true for complex international reinsurance transactions, where both insurers and brokers must fully recognize the critical importance of the contractual text and endeavor to preemptively address potential risks or disputes during the underwriting stage.

    Introduction

    In the digital age, data is a vital asset, and its security is of the utmost importance, particularly within the financial services industry which is relied on by all levels of society.

    The National Financial Regulatory Administration (“NFRA”) has introduced the Measures for Data Security Management of Banking and Insurance Institutions (the “2024 FinSec Measures”; effective 27 December 2024), nine months after consulting the public in March 2024. The 2024 FinSec Measures are designed to strengthen data and financial security, promote the rational development and use of data, protect the rights and interests of natural and legal persons, and safeguard national security and public interests (Article 1).

    The 2024 FinSec Measures build upon a robust legal framework that includes:

    • Cybersecurity Law of the People’s Republic of China (“2016 CSL”)
    • Data Security Law of the People’s Republic of China (“2021 DSL”)
    • Personal Information Protection Law of the People’s Republic of China (“2021 PIPL”)
    • Banking Regulation Law of the People’s Republic of China
    • Law of the People’s Republic of China on Commercial Banks
    • Insurance Law of the People’s Republic of China
    • Other laws and regulations

    (Article 1.)

    The 2024 FinSec Measures target the following types of banking and insurance entities:

    “… policy banks, commercial banks, rural cooperative banks, rural credit cooperatives, financial asset management companies, finance companies of enterprise groups, financial leasing companies, automotive finance companies, consumer finance companies, currency brokerage companies, trust companies, wealth management companies, insurance companies, insurance asset management companies, and insurance group (holding) companies established within the territory of the People’s Republic of China.”

    They also apply mutatis mutandis to other financial institutions in the banking and insurance sectors and financial holding companies established with the approval of the NFRA, entities managed by the NFRA, as well as financial organisations established with the approval of local financial regulatory authorities (Article 80).

    (The above entities are collectively referred to as “Institutions” below.)

    Key Provisions and Principles

    The 2024 FinSec Measures are based on several core principles and operational requirements. We discuss some key provisions and principles below:

    Definitions and Scope

    The 2024 FinSec Measures begin by defining key terms (Article 3) including:

    • “Data” refers to records of information in electronic or other forms. This aligns with the definition of data in the 2021 DSL.
    • “Data Processing” refers to activities including the collection, storage, use, editing, transmission, provision, sharing, transfer, disclosure, deletion, and destruction of data. This roughly aligns with the definitions of processing found in the 2021 PIPL and 2021 DSL.
    • “Data Security” refers to managing and controlling Data Processing activities and data application scenarios through necessary measures, ensuring that data is effectively protected and always lawfully utilised, as well as possessing the capability to ensure continuous security. This aligns with the definition provided by the 2021 DSL.
    • “Data Subjects” refers to natural persons identified by data or their guardians, or to enterprises, institutions, social organisations or other organisations. While this definition is not controversial, it is novel in Chinese data legislation, as it is the first time that this concept has been legally defined.
    • “Personal Information” refers to various kinds of information recorded in electronic or other forms relating to an identified or identifiable natural person, excluding information that has been anonymised. This aligns with the definition found in the 2021 PIPL. In context, Personal Information is a category of Data.

    High-Level Principles

    Institutions are expected to abide by laws and regulations, respect social morality and ethics, observe business ethics and professional ethics, act in good faith and with integrity, fulfil Data Security protection obligations, assume social responsibilities, and not undermine national security, political security, financial security, or public interests, or infringe upon the lawful rights and interests of individuals or organisations (Article 6).

    Moreover, Institutions must adhere to the principles of legality, legitimacy, necessity, and good faith when collecting data, define the purposes, methods, scope, and rules of data collection and processing, and ensure Data Security and traceability during collection. Institutions must not collect data from Data Subjects beyond the scope of consent unless permitted by law (Article 24). It is interesting to note that the concept of Data Subjects, as defined in the 2024 FinSec Measures, also includes legal persons. We understand that consent in the context of legal persons refers to any authorisation granted in agreements with Institutions.

    Governance Framework

    For effective data governance, Institutions must establish a structured, risk-based Data Security framework that aligns with the Institution’s development goals, covers the entire data lifecycle, and complies with the Multi-level Protection Scheme (Article 5).

    Organisational Roles

    Institutions should define the responsibilities of the board, senior management, specialised departments (Article 9) and business segments (Article 12). Leadership roles should be clearly defined, with internal Party structures and the board of directors or board of supervisors holding primary accountability for Data Security (Article 10).

    Institutions are expected to have specialised departments that centrally manage Data Security. The specific obligations of those specialised departments are outlined in detail within Article 11 of the 2024 FinSec Measures. Information technology departments are described separately from centralised Data Security departments, and their responsibilities appear to be limited to the technical aspects of Data Security, including the development of baseline security controls, establishing technical standards, ensuring the implementation of the technical measures, establishing technical management mechanisms, and organising technological research (Article 14).

    Risk management, internal control and compliance, and audit departments are expected to incorporate the requirements of the 2024 FinSec Measures into internal controls and audits (Article 13).

    Moreover, each business segment’s Data Security responsibilities and management requirements shall be defined following the principle that those “who manage the business manage the data and Data Security of that business” (Article 12). This is regarded as a noteworthy regulatory development as it clearly requires data management and Data Security roles to be distributed along business lines.  

    Training and Awareness

    Institutions are expected to organise Data Security awareness promotion and training to enhance employees’ awareness and skills in Data Security protection (Articles 11 and 15). Moreover, they should establish a “sound Data Security culture” (Article 15), which could be a more challenging task, and conduct regular emergency response drills (Article 68).

    Data Classification and Management

    Data classification and grading are central to the 2024 FinSec Measures. Data is categorised as core, important, sensitive, or general (in descending order of risk), each with specific handling requirements (Article 16) and the following characteristics (a simple classification sketch follows the list below):

    • Core data can, for discussion purposes, be understood as a special class of especially sensitive important data.
    • Important data refers to data covering a specified field, group, or region, or data reaching a certain level of precision and scale, the leakage, tampering, or destruction of which may directly endanger national security, economic operation, social stability, or public health and safety.
    • Sensitive data refers to data the leakage, tampering, or destruction of which may have a certain impact on economic operation, social stability, or public interests, or may significantly impact the organisation itself or individual citizens.
    • General data is any data that is not core data, important data or sensitive data.
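
As referenced above, a minimal Python sketch of the four-tier scheme is set out below. The decision factors and their mapping are a simplified paraphrase of the characteristics listed above, not the statutory test; the function and flag names are our own assumptions.

```python
from enum import Enum


class DataCategory(Enum):
    CORE = "core"
    IMPORTANT = "important"
    SENSITIVE = "sensitive"
    GENERAL = "general"


def categorise(
    endangers_national_security_or_economy: bool,
    especially_sensitive_important: bool = False,
    impacts_public_interest_or_individuals: bool = False,
) -> DataCategory:
    """Map the impact of leakage/tampering/destruction onto the four-tier scheme."""
    if endangers_national_security_or_economy:
        # Core data is treated here as an especially sensitive subset of important data.
        return DataCategory.CORE if especially_sensitive_important else DataCategory.IMPORTANT
    if impacts_public_interest_or_individuals:
        return DataCategory.SENSITIVE
    return DataCategory.GENERAL


print(categorise(False, impacts_public_interest_or_individuals=True))  # DataCategory.SENSITIVE
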

    Institutions are required to maintain dynamic data inventories and implement security measures appropriate to their data’s classification (Article 19). This requirement, combined with the risk-based approach to categorising data, essentially requires data to be reviewed and assessed on an ongoing basis.

    We note that China already has detailed national standards for classifying data in the financial industry, such as JR/T 0197-2020, issued by the PBoC, which contains a 5-level classification standard that could be leveraged to support compliance with the 2024 FinSec Measures. It should be noted that important data is identified based on the relevant catalogues published by the regulatory authority. We understand many local financial industry regulators are actively reaching out to Institutions that they supervise as part of their efforts to formulate important data catalogues.

    Lifecycle Data Security

    Data Security management spans the entire lifecycle of data. Institutions must develop robust systems to manage data from acquisition to disposal (Article 20). This includes data mapping, conducting risk assessments for Data Processing activities (Article 22) and ensuring compliance with regulations governing external data sharing and outsourcing (Articles 26, 30 and 61).

    Given the assessment requirements under the 2024 FinSec Measures, the Personal Information protection impact assessment templates issued by the CAC or contained in national standard GB/T 39335-2020 might provide a starting point for Institutions to begin structuring their assessment activities.

    Third-party Risk Management

    Under the 2024 FinSec Measures, external data procurement should be centrally approved and backed by a robust procurement process to ensure that the Data has been acquired and will be provided in accordance with the law (Article 26). It should be noted that Institutions are already subject to broad and detailed procurement obligations under the Regulatory Measures for Risks in the Outsourcing of Information Technology by Banking and Insurance Institutions (2021), which were issued by the predecessor to the NFRA. As such, the 2024 FinSec Measures appear to offer some additional clarity and emphasis to the existing regulatory framework.

    Firewalls

    In the context of the 2024 FinSec Measures, firewalls have the extended meaning of data isolation. Institutions are expected to implement firewalls to prevent other organisations within their group (i.e., affiliates, parents, subsidiaries, etc.) from accessing Data (i) without the consent of the Data Subject or (ii) unless permitted by law (Article 29). Strictly speaking, this should not be viewed as controversial under pre-existing law, given that entities within a group would typically have independent legal existence and compliance obligations. However, the firewall requirement does appear to be somewhat novel because it seems to extend the Data rights of natural persons (i.e., consent to transfers of Personal Information under Article 23 of the PIPL) to legal persons. We understand that the consent of legal persons would typically be contained in contracts with Institutions. 

    Technical Protections

    The 2024 FinSec Measures require Institutions to implement cybersecurity measures tailored to a diverse range of environments (Article 39). Article 43 requires Institutions to control access to sensitive data, important data and core data (using rules and technology), and ensure Data use is necessary and secure. All processing, except for general Data Processing, must be logged. Logs must be kept for up to 3 years and audited at least every six months.
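
    As a rough illustration of the Article 43 requirements described above, the sketch below (with hypothetical function and field names) records processing of non-general Data, tracks a six-monthly audit date, and identifies log entries older than the retention period. It is a simplified model rather than a compliant logging system.

        from datetime import datetime, timedelta

        LOG_RETENTION = timedelta(days=3 * 365)   # the three-year period referenced in Article 43
        AUDIT_INTERVAL = timedelta(days=182)      # audits at least every six months

        access_log = []  # in practice, a tamper-evident, centrally stored log

        def log_access(user, dataset, classification, action):
            """Record processing of sensitive, important, and core Data; general Data is exempt."""
            if classification == "general":
                return
            access_log.append({
                "timestamp": datetime.now(),
                "user": user,
                "dataset": dataset,
                "classification": classification,
                "action": action,
            })

        def next_audit_due(last_audit):
            """Date by which the next log audit should be completed."""
            return last_audit + AUDIT_INTERVAL

        def retention_expired(now):
            """Log entries past the retention period (subject to any legal hold)."""
            return [e for e in access_log if now - e["timestamp"] > LOG_RETENTION]

        log_access("analyst01", "retail_customer_master", "sensitive", "read")
        print(len(access_log), "logged entries")
        print("next audit due:", next_audit_due(datetime(2025, 1, 10)).date())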

    Secure storage and transmission are mandated (Article 45). In particular: “Personal identity authentication data shall not be stored, transmitted, or displayed in plaintext.” According to financial industry standard JR/T 0197-2020 issued by the PBoC, “personal identity authentication data” is defined as the information relied upon for personal identity authentication, the leakage of which can cause serious harm to the property safety of the Data Subject, including bank card magnetic stripe data, card verification codes, bank card passwords, payment passwords, account login passwords, etc.  
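
    The sketch below illustrates, in simplified form, two common techniques for avoiding plaintext handling of personal identity authentication data: masking a card number for display and storing a salted hash of a password instead of the password itself. The parameter choices are illustrative assumptions, not requirements of the 2024 FinSec Measures or JR/T 0197-2020.

        import hashlib
        import os

        def mask_card_number(card_number):
            """Mask a bank card number for display, showing only the last four digits."""
            digits = card_number.replace(" ", "")
            return "*" * (len(digits) - 4) + digits[-4:]

        def hash_password(password, salt=None):
            """Store a salted hash rather than the plaintext password (illustrative parameters)."""
            salt = salt or os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
            return salt, digest

        print(mask_card_number("6222 0212 3456 7890"))  # prints ************7890
        salt, digest = hash_password("user-chosen-password")
        print(len(salt), len(digest))                   # 16-byte salt, 32-byte digest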

    Personal Information Protection

    Personal Information protection is a critical focus of the 2024 FinSec Measures. Institutions must obtain explicit and informed consent before processing Personal Information unless provided for by law (Article 54). They must adhere to the principle of necessity, collecting only the minimum information required to achieve the financial business processing purpose (Article 55). Excessive data collection is prohibited, and transparency is emphasised through rules (i.e., privacy policies) for informing individuals about their Data Processing (Article 56).

    Risk Monitoring and Incident Response

    Risk monitoring and incident response mechanisms are provided in the 2024 FinSec Measures. Data breaches must immediately be reported to the NFRA (Article 63). Article 69 elaborates on the reporting timescale by stating, among other things, that Institutions “shall report a Data Security incident to the NFRA or its local offices within two hours of its occurrence, and submit a formal written report within 24 hours after the incident.” It is unclear whether separate reports still need to be made to other authorities. However, it would be prudent to assume that such reports are still required.

    Institutions must continuously monitor for threats, including unauthorised access or data breaches, and have systems and procedures in place to deal with such threats (Article 64).

    Incidents are graded based on their severity, with clear guidelines for containment and remediation (Article 67).

    In the event of a Data Security incident, Institutions are expected to have robust reporting and emergency response systems in place and to undertake appropriate post-mortem activities (Article 68).

    Institutions need to conduct Data Security risk assessments annually, while audit departments should conduct comprehensive Data Security audits at least every 3 years and conduct special audits after major Data Security incidents (Article 66). While the specific content of audits is not prescribed, it is possible at present to conduct Data Security audits based on various standards.

    Where third-party Data Security audits occur, Institutions are barred from using products provided by those third-party auditors for an unspecified period. This is presumably intended to prevent auditors from having a conflict of interest.

    Regulatory Oversight and Compliance

    The NFRA supervises compliance with the 2024 FinSec Measures, conducts inspections, and enforces penalties for non-compliance (Article 70).

    The NFRA is obliged to develop an important and core data catalogue (Article 71). It is unclear when this catalogue will be issued. However, if the NFRA takes the same approach as the CAC, then any data not listed in the catalogue will not be important or core data.

    Regulatory Filing and Reporting Obligations

    It is important to note the triggers and timelines for certain regulatory filing and reporting obligations under the 2024 FinSec Measures:

    • Security incidents: Institutions shall report a Data Security incident to the NFRA or its local offices within 2 hours of its occurrence and submit a formal written report within 24 hours after the incident. In the case of a particularly major Data Security incident, Institutions shall immediately take disposal measures, inform users as required, and report the incident to the NFRA or its local offices, as well as to the local public security authorities. Institutions shall report on the progress of the disposal every 2 hours until the disposal is completed. Upon having disposed of a Data Security incident, Institutions shall submit a report, including an assessment and summary of the disposal and related improvements, within 5 working days to the NFRA or its local offices (Article 69).
    • Transfers and outsourcing: Institutions shall report data sharing, outsourced processing, trading and data transfers involving batches of sensitive data, important data or core data to the NFRA or its local offices within 20 working days before the processing activity or execution of the contract unless otherwise provided by law (Article 73).
    • Annual report: Institutions shall submit a Data Security risk assessment report for the previous year to the NFRA or its local offices before 15 January each year. The annual report should describe Data Security governance, technical protection, Data Security risk monitoring and disposal measures, Data Security incidents and their disposal, outsourced and joint processing, outbound cross-border data transfers, Data Security assessments and reviews, and Data Security-related complaints and their handling, among other things (Article 74).

    We note that the timelines associated with these reporting obligations could be challenging. To meet these requirements, Institutions are advised to prepare relevant policies and templates to support regulatory filings and promptly begin filing procedures once filings are triggered.  
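
    As a purely illustrative aid to the Article 69 timelines above, the sketch below computes the initial two-hour report, the 24-hour written report, and the five-working-day post-disposal report from an incident timestamp. The working-day calculation skips weekends only and ignores public holidays, so it is a simplifying assumption rather than a method endorsed by the regulator.

        from datetime import datetime, timedelta

        def add_working_days(start, days):
            """Naive working-day counter: skips weekends only (public holidays ignored)."""
            current = start
            added = 0
            while added < days:
                current += timedelta(days=1)
                if current.weekday() < 5:  # Monday to Friday
                    added += 1
            return current

        def article_69_deadlines(incident_time, disposal_completed):
            return {
                "initial report to NFRA": incident_time + timedelta(hours=2),
                "written report": incident_time + timedelta(hours=24),
                "post-disposal report": add_working_days(disposal_completed, 5),
            }

        deadlines = article_69_deadlines(
            incident_time=datetime(2025, 3, 3, 9, 30),
            disposal_completed=datetime(2025, 3, 5, 18, 0),
        )
        for label, due in deadlines.items():
            print(f"{label}: {due:%Y-%m-%d %H:%M}")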

    Penalties

    Under Article 77, and in line with the Banking Regulation Law and the Insurance Law, the NFRA and its local offices may issue correction orders and impose fines of up to CNY500,000 for banks and CNY300,000 for insurers if they violate the 2024 FinSec Measures. In more severe cases, or if corrections are not promptly made, the institutions may face suspension or business licence revocation. Directly responsible directors, executives, and other personnel can also be subject to disciplinary actions, fines, and disqualification from their roles, potentially for life.

    It is worth noting that a violation of the 2024 FinSec Measures may also attract penalties under the 2016 CSL, 2021 DSL or 2021 PIPL if the same conduct also breaches those laws.

    Implications for Financial Institutions

    The 2024 FinSec Measures represent a significant step towards strengthening Data Security in the financial sector and developing a culture of security and compliance. Essentially, they require Institutions to establish and maintain a comprehensive and robust data governance, data security and data compliance framework. Institutions will need to invest in the mandated corporate governance, legal compliance activities, technology, training, and process optimisation to align with these regulations.

    Fully complying with the 2024 FinSec Measures will also help financial institutions foster trust among customers and stakeholders, reduce operational risks associated with data breaches, and enhance competitive advantage in a data-driven economy.

    A Note providing an overview of the legal framework governing AI ethics in China. The Note examines the role of governmental and non-governmental organisations, addresses key ethical issues such as privacy, transparency, bias, and accountability, and discusses China’s participation in global AI governance initiatives. It covers professional responsibility in AI development, AI-induced negligence, and challenges in the legal framework. The Note also suggests future directions for AI ethics in China, including the potential development of a comprehensive AI law and national standards.

    By the mid-2010s, China (PRC) had emerged as a global leader in AI research and development (R&D), with local tech companies investing heavily. Since 2023, China has witnessed an explosion in AI R&D, fuelled by significant investments from industry giants such as Alibaba, Baidu, ByteDance, and Tencent.

    This growth is driven by significant government support, massive data pools, widespread adoption, and relatively high quantities of research papers and patents. These advancements occur amidst increasing global technological competition.

    As AI becomes more integrated into daily life, concerns about privacy, security, employment, and social stability have led the Chinese government, academic institutions, and tech companies to consider ethical frameworks that guide AI development and use.

    This Note provides an overview of the legal framework governing AI ethics in China, examines the role of governmental and non-governmental organisations, addresses key ethical issues such as privacy, transparency, bias, and accountability, and discusses China’s participation in global AI governance initiatives. It covers professional responsibility in AI development, AI-induced negligence, and challenges in the legal framework. The Note also suggests future directions for AI ethics in China, including the potential development of a comprehensive AI law and national standards.

    Emergence of AI Ethics as a Concern

    In the 2010s, as AI technologies became more integrated into daily life, concerns about their ethical implications emerged. The Chinese government, academic institutions, and tech companies recognised the need for frameworks to guide ethical AI development and use, addressing issues of privacy, security, employment, and social stability.

    During this period, high-profile incidents involving the misuse of facial recognition technology, data privacy concerns, and targeted advertising abuse highlighted the potential risks of unchecked AI development.

    The overall environment in China during the late 2010s likely contributed to widespread concerns about ethics in science and technology.

    In July 2019, the government established the National Science and Technology Ethics Committee (国家科技伦理委员会) (National Ethics Committee) to promote the development of a more comprehensive, ordered and co-ordinated governance system for science and technology.

    The National Ethics Committee set up a subcommittee for AI, formally incorporating AI into the national science and technology ethics regulatory system (see AI Subcommittee of the National Ethics Committee).

    Measures for the Review of Science and Technology Ethics (Trial)

    AI science and technology are developing faster than laws and regulations.

    Against this background, the Ministry of Science and Technology (MOST) promulgated the Measures for the Review of Scientific and Technological Ethics (Trial) 2023 (2023 Ethics Measures) to ensure that entities fully assess the ethical implications of their AI scientific R&D activities, among other things.

    Ethical Review Requirements

    Under the 2023 Ethics Measures, entities conducting AI R&D in sensitive ethical areas must set up an ethics review committee (Article 4).

    The measures propose that ethics review committees should adhere to the following guidelines:

    • Composition and appointment. Committees must consist of at least seven members appointed for terms of up to five years, with the possibility of re-appointment (Article 7).
    • Expertise requirements. Members should include peer experts with relevant scientific and technical backgrounds, as well as those in ethics, law, and other related fields (Article 7).
    • Diversity and inclusion. Committees should include members of different genders and individuals from outside the unit. It is essential to include members familiar with local conditions for ethnic autonomous areas. (Article 7.)
    • Integrity and co-operation. Members should have a good track record of integrity and co-operate with other tasks arranged by the committee (Article 8).

    AI R&D should undergo an ethics risk assessment before initiation.

    Ethics review committees should review AI R&D activities within the scope of the 2023 Ethics Measures. The scope includes:

    • Scientific and technological activities that do not directly involve humans or experimental animals but may pose ethical risks and challenges. For example, in areas such as life and health, the ecological environment, public order, and sustainable development.
    • Other scientific and technological activities that require ethics reviews in accordance with laws, regulations, and relevant national provisions.

    (Articles 2 and 9.)

    Expert Re-Evaluation

    Certain AI R&D activities are subject to expert re-evaluation, which is a type of government ethics review. Activities include:

    • Research on synthesising new species that significantly impact human life and values, the ecological environment, and so on.
    • Research related to introducing human stem cells into animal embryos or foetuses, and their subsequent development into individuals in the animal uterus.
    • Fundamental research involving alterations to the genetic material or genetic patterns of human germ cells, fertilised eggs, and pre-implantation embryos.
    • Clinical research on invasive brain-computer interfaces for treating neurological and mental health disorders.
    • Developing human-machine fusion systems that strongly impact human subjective behaviour, psychological emotions, and overall well-being.
    • Developing algorithm models, applications, and systems that have the ability to mobilise public opinion, shape social awareness, or influence behaviour.
    • Developing highly autonomous decision-making systems for scenarios involving human safety and health risks.

    (Article 25 and Appendix, 2023 Ethics Measures.)

    Conducting an Ethics Review

    An ethics review committee must check:

    • Whether the proposed AI R&D complies with scientific and technological ethics principles. The participating personnel, research infrastructure, and facility conditions must also meet relevant requirements.
    • Whether the AI R&D may generate new or useful information, have scientific or social value, improve human welfare, or realise sustainable social development. There should also be a reasonable risk-to-benefit ratio. Risk control and incident response plans should be scientific, appropriate and viable.
    • Whether AI R&D recruitment schemes involving human research participants are fair and reasonable, and personal privacy data, biometric and other sensitive information is processed following personal information (PI) protection laws. Informed consent processes must also be compliant and appropriate.
    • That reviews for scientific and technological activities involving data and algorithms:
      • cover data collection, storage, processing, and use activities;
      • cover the R&D of new data technologies;
      • comply with relevant national regulations on data security and PI protection; and
      • have reasonable data security risk monitoring, emergency response plans, ethical risk assessments, and user rights protection measures, as appropriate.
    • That conflict-of-interest statements and management plans are reasonable.

    (Article 15, 2023 Ethics Measures.)

    Ethical Compliance

    Under the 2023 Ethics Measures, entities must:

    • Establish an ethics review committee.
    • Provide necessary staff, office space, and funding for performing ethics reviews.
    • Take measures to ensure that ethics review committees can independently conduct ethical review work.

    (Article 4.)

    Ethics review committees must:

    • Abide by China’s constitution, laws and regulations, and ethical norms of science and technology.
    • Develop and improve management systems and work standards.
    • Provide ethics consultations and guide personnel in conducting ethics risk assessments.
    • Conduct ethical reviews and track and supervise the entire process of AI R&D.
    • Determine whether AI R&D falls within the scope of key scrutiny.
    • Organise training for committee members and personnel.
    • Accept and assist in investigating complaints and reports.
    • Register, report, and co-operate with relevant departments for regulatory work.

    (Articles 5 and 8.)

    To ensure compliance with the 2023 Ethics Measures, entities should create and implement several internal policies, procedures, and guidelines.

    Entities have a clear legal obligation to consider ethical issues. However, the legal framework for ethics in AI can be considered vague and fragmented.

    This could result in:

    • Ethics committees facing difficulties when making decisions.
    • Different ethics committees and regulators reaching inconsistent conclusions on ethical issues.

    National Policies Addressing AI Ethics

    This section covers the key policies, opinions, and guidelines governing the AI ethical landscape in China.

    Due to significant overlaps between these documents, some of their principles are not addressed individually. However, novel or noteworthy concepts are discussed.

    2017 AI Plan

    The release of the Next Generation Artificial Intelligence Development Plan 2017 (2017 AI Plan) by the State Council was a seminal moment for AI in China.

    The 2017 AI Plan:

    • Sets ambitious goals for China to become the world leader in AI by 2030.
    • States the need for ethical standards in AI development and calls for integrating ethical considerations into AI research, development, and deployment.
    • Emphasises the importance of formulating laws, regulations, and ethical norms to promote the development of AI.

    It sets out specific requirements, including:

    • Strengthening research on legal, ethical, and social issues related to AI.
    • Establishing a legal and ethical framework to ensure the healthy development of AI.
    • Accelerating the research and formulation of relevant safety management regulations in areas such as autonomous driving and service robots.

    The 2017 AI Plan calls for research on:

    • Legal issues related to civil and criminal liability confirmation.
    • Privacy and property protection.
    • Information security in AI applications.

    It also emphasises the need to establish a system of accountability and clarity on legal subjects, rights, obligations, and responsibilities concerning AI.

    Additionally, the 2017 AI Plan calls for:

    • Developing ethical norms and a code of conduct for AI R&D personnel.
    • Enhancing the assessment of potential AI benefits and risks.
    • Establishing solutions for emergencies in complex AI scenarios.

    It advocates for active participation in global AI governance and research on major international AI issues (such as robot alienation and safety supervision). It also encourages international co-operation in AI laws, regulations, and international rules to collectively address global challenges.

    2019 Beijing AI Principles

    In 2019, a group of Chinese academic institutions led by the Beijing Academy of Artificial Intelligence released the Beijing AI Principles 2019 (2019 BJ Principles).

    These principles provided one of the first comprehensive ethical frameworks for AI in China, addressing fairness, transparency, privacy, and security issues.

    The 2019 BJ Principles were significant because they reflected a growing awareness within China’s AI community of the need to align AI development with ethical norms.

    2019 AI Principles

    The Chinese government issued the Next Generation AI Governance Principles 2019 (2019 AI Principles) shortly after the 2019 BJ Principles.

    The 2019 AI Principles aim to guide AI governance in China. They comprise eight main principles:

    • Harmony and friendliness.
    • Fairness and justice.
    • Inclusivity and sharing.
    • Respect for privacy.
    • Safety and controllability.
    • Shared responsibility.
    • Open collaboration.
    • Agile governance.

    The 2019 AI Principles incorporate directives on respecting privacy, ensuring security, and promoting transparency and accountability in AI systems. These principles also emphasise the importance of international co-operation in AI ethics.

    2020 Education Opinions

    To implement the 2017 AI Plan, several government departments issued the Several Opinions on the Construction of Double First-Class Universities to Promote the Integration of Disciplines and Accelerate the Cultivation of Postgraduates in the Field of AI 2020 (2020 Education Opinions).

    This move represents a form of governmental intervention in the higher education system aimed at accelerating AI development.

    The 2020 Education Opinions mandate:

    • Strengthening AI research ethics education (Chapter 2, Article 3).
    • Promoting relevant international standards and ethical norms (Chapter 4, Article 11).
    • Cultivating talent prepared for global AI governance (Chapter 4, Article 11).

    2021 AI Code of Ethics

    The National Next Generation Artificial Intelligence Governance Expert Committee (国家新一代人工智能治理专业委员会) (AI Expert Committee), established under MOST in 2019, issued the Ethical Norms for Next Generation Artificial Intelligence 2021 (2021 AI Code of Ethics).

    The code is non-binding but influential. It encourages entities to adopt ethical considerations throughout the entire AI life cycle, in order to:

    • Promote fairness, justice, harmony and safety.
    • Prevent issues such as prejudice, discrimination, and privacy and data leaks.

    (Article 1.)

    The 2021 AI Code of Ethics emphasises the following key principles:

    • Enhancing human well-being.
    • Promoting fairness and justice.
    • Protecting privacy and safety.
    • Ensuring controllability and reliability.
    • Enhancing accountability.
    • Improving ethical literacy.

    (Article 3.)

    The code also provides guidelines for management, R&D, supply, and usage practices.

    Overall, it encourages the responsible development of AI technologies, stresses the need to protect individual rights, and promotes fairness. It is considered a step towards safely integrating AI into society.

    2022 Judicial Opinions

    China began using and testing AI within its judicial system in the late 2010s.

    The Opinions of the Supreme People’s Court on Regulating and Strengthening the Judicial Application of AI 2022 (2022 Judicial Opinions) provide guidance for regulating and strengthening the application of AI in the judicial field.

    It espouses the basic principles of:

    • Safety and legality. This principle contains an interesting commitment to “promote harmony and friendship between man and machine.” This is uncommon in many ethical frameworks relating to AI, and though not explicitly stated, “friendship” could suggest the recognition of AI having some degree of personhood.
    • Fairness and justice. This requires ensuring that AI products and services are free from discrimination and prejudice. Technological interventions, including model or data deviations, should not compromise the fairness of trial processes and outcomes.
    • Auxiliary adjudication. The explanation of this principle clearly states that AI should be used to support judges, not replace them.
    • Transparency and trustworthiness. This requires that every aspect of AI systems is interpretable, testable, and verifiable.
    • Public order and good customs. This refers to integrating core socialist values (CSVs) into the entire process of judicial AI technology.

    2022 CPC Opinions

    On 20 March 2022, the General Office of the Communist Party of China Central Committee and the State Council jointly issued the Opinions on Strengthening the Governance of Science and Technology Ethics 2022 (2022 CPC Opinions).

    The 2022 CPC Opinions outline the values and behavioural norms that scientific research, technological development, and other similar activities should follow.

    Opinion 2 sets out the following ethical principles:

    • Improve human well-being.
    • Respect for the right to life. (While this principle does not prohibit animal experimentation, such practices must be reduced, replaced and optimised where possible.)
    • Adhere to fairness and justice.
    • Take reasonable control of risks.
    • Be open and transparent.

    Opinion 4 elaborates further on ethical considerations:

    • Item 2 proposes the exploration of ethical certification measures.
    • Item 3 recommends strengthening ethics laws pertaining to AI and elevating crucial ethical norms to the status of law.

    2023 Research Guidelines

    In December 2023, the Department of Supervision of MOST issued the Guidelines on Code of Conduct for Responsible Research 2023 (2023 Research Guidelines). The guidelines set out scientific ethics and academic research norms that should generally be followed during scientific research.

    While the 2023 Research Guidelines are not AI-focused, they provide that:

    • Generative AI (GenAI) may not be listed as a co-author (Section 4.7).
    • GenAI must not be directly used to generate scientific research project application materials (Section 1.1(2)).
    • Content marked as AI-generated by other authors should not generally be cited as original literature. Where it does need to be cited, an explanation should be provided. (Section 3.4.)
    • Peer reviewers should be careful when using AI during the review process (Section 5.3(6)). The consent of the review activity organiser should be obtained in advance (Section 6.1(7)).
    • Authors should disclose whether they use GenAI (Section 5.3(3)).

    Standardised Guidelines for Ethical Governance of AI 2023

    The Standardised Guidelines for Ethical Governance of AI 2023 were prepared to implement the 2022 CPC Opinions.

    It derives the following ten ethical guidelines for AI from the principles stated in the 2022 CPC Opinions:

    • Improving human well-being: Human-oriented; Sustainability.
    • Respect for the right to life: Collaboration; Privacy.
    • Adhere to fairness and justice: Fairness; Sharing.
    • Reasonable control of risks: Security; Safety.
    • Be open and transparent: Transparency; Accountability.

    Legal Framework for AI Ethics in China

    The Chinese legal framework governing AI ethics is contained in a patchwork of laws and regulations, which are discussed in the sections below.

    The Chinese government indicates that it is in the process of drafting a general AI law (see Legal Update, State Council Releases 2024 Legislative Plan). It is unclear when this law will be finalised.

    2016 CSL

    The 2016 CSL applies to:

    • The construction, operation, maintenance, and use of networks.
    • The supervision and administration of cybersecurity by network operators (who are defined as network owners, administrators, and network service providers).

    Due to the nature of AI, businesses operating in the AI sector often fall within the definition of network operators.

    The 2016 CSL requires network operators to:

    • Abide by laws and administrative regulations.
    • Show respect for social moralities.
    • Follow business ethics.
    • Act in good faith.
    • Perform the obligation of cybersecurity protection.
    • Accept supervision by the government and social public.
    • Undertake social responsibilities.

    (Article 9.)

    Some concepts within Article 9 can be the subject of dispute. For instance:

    • Business ethics are a recurring topic in unfair competition litigation.
    • Good faith can be an issue in contract disputes.

    2021 DSL

    The 2021 DSL applies to data handling activities carried out in China and the security of such activities (Article 2).

    Given the broad definition of data, the 2021 DSL applies to virtually all business entities in China. Due to its comprehensive scope and the nature of AI, it seems that all businesses in the AI sector are subject to the 2021 DSL.

    The 2021 DSL provides that during data handling activities, entities must (among other things):

    • Observe laws and administrative regulations.
    • Respect social public morals and ethics.
    • Follow commercial and professional ethics.
    • Uphold sincerity and trustworthiness.
    • Fulfil data security protection obligations.
    • Undertake social responsibilities.

    (Article 8.)

    While the requirements of the 2021 DSL are similar to those of the 2016 CSL, the ethical requirements in the 2021 DSL appear to be slightly wider in scope. However, the extent of this expansion remains unclear.

    2021 PIPL

    The 2021 PIPL applies to:

    • PI processing activities in China.
    • Certain PI processing activities targeting individuals in China.

    While the 2021 PIPL does not explicitly mention ethics or morals, it incorporates several high-level principles that can be interpreted as ethical guidelines.

    For instance, Article 5 provides that PI should be processed in accordance with the principles of lawfulness, legitimacy, necessity and good faith, and not in any manner that is misleading, fraudulent, or coercive.

    Article 24 contains specific obligations applicable to AI ethics. It states that:

    • Where PI processors use automated decision-making to process PI, they must:
      • ensure transparency in the decision-making process;
      • ensure fairness and impartiality of the results; and
      • avoid implementing unreasonable differential treatment of individuals regarding transaction prices or other terms.
    • Where automated decision-making is used in business marketing or information push services, individuals must be provided with:
      • an option to opt out of targeting based on their personal characteristics; or
      • an easily accessible method to refuse such information.
    • If an automated decision significantly impacts an individual’s rights and interests, the individual has the right to:
      • request an explanation from the PI processor; and
      • refuse decisions made solely through automated processes.

    Though not explicitly stated, Article 24 suggests the ethical principles of transparency, fairness, impartiality, non-discrimination, and autonomy when using AI to process PI.
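
    By way of illustration only, the sketch below shows one way a service might honour an Article 24-style opt-out and attach a plain-language explanation to an automated recommendation. The data structures, ranking logic, and wording are hypothetical; the 2021 PIPL does not prescribe any implementation.

        from dataclasses import dataclass

        @dataclass
        class UserPreferences:
            # Hypothetical per-user flag recording an opt-out from targeting
            # based on personal characteristics.
            opted_out_of_personalisation: bool = False

        def recommend(items, user_profile, prefs):
            """Return recommendations plus a human-readable explanation of how they were chosen."""
            if prefs.opted_out_of_personalisation:
                ranked = sorted(items, key=lambda i: i["popularity"], reverse=True)
                explanation = "Ranked by overall popularity; no personal characteristics were used."
            else:
                ranked = sorted(items,
                                key=lambda i: i["relevance"].get(user_profile["segment"], 0),
                                reverse=True)
                explanation = (f"Ranked using the interest segment '{user_profile['segment']}' "
                               "derived from your activity. You may opt out at any time.")
            return ranked, explanation

        items = [
            {"name": "Fund A", "popularity": 0.9, "relevance": {"retail": 0.4}},
            {"name": "Fund B", "popularity": 0.5, "relevance": {"retail": 0.8}},
        ]
        prefs = UserPreferences(opted_out_of_personalisation=True)
        results, why = recommend(items, {"segment": "retail"}, prefs)
        print([i["name"] for i in results], "->", why)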

    2019 AUCL

    The 2019 AUCL was enacted to promote the healthy development of the socialist market economy.

    It aims to:

    • Encourage and protect fair competition.
    • Prevent acts of unfair competition.
    • Safeguard the legitimate rights and interests of business operators and consumers.

    (Article 1.)

    When carrying out production or business activities, a business operator must:

    • Follow the principles of voluntariness, equality, fairness, and good faith.
    • Abide by laws and observe business ethics.

    (Article 2.)

    In the context of AI training, the following actions are generally considered to violate the aforementioned principles:

    • Ignoring a website’s Robots.txt file or user agreements (a robots.txt compliance check is sketched after this list).
    • Overusing or misusing scraped data.
    • Disrupting or hindering the normal operation of legitimate online services or products.
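
    As an illustration of the first point, the sketch below uses Python’s standard urllib.robotparser to check a site’s robots.txt before fetching a page for a training corpus. The site URL and user agent string are hypothetical, and observing robots.txt is only one element of lawful data collection.

        from urllib import robotparser

        # Hypothetical target; in practice the crawler would repeat this per site.
        SITE = "https://example.com"
        USER_AGENT = "example-training-data-bot"

        rp = robotparser.RobotFileParser()
        rp.set_url(SITE + "/robots.txt")
        rp.read()  # fetches and parses the site's robots.txt

        url = SITE + "/articles/page-1"
        if rp.can_fetch(USER_AGENT, url):
            print("Allowed to fetch:", url)
            # ... fetch the page and add it to the training corpus ...
        else:
            print("robots.txt disallows fetching:", url)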

    In practice, the concept of business ethics has been a point of discussion in several litigations involving data and algorithms under the 2019 AUCL and its 2017 and 1993 predecessors, as illustrated by the cases below.

    Yi Zhong Min Chu

    In Yi Zhong Min Chu [2013] No. 2668, Company A accused Company B of violating the Robots protocol on Company A’s website by crawling and providing content from Company A’s website as search results to users.

    One of the key disputes between the two parties was whether B’s non-compliance with the Robots protocol constituted a violation of business ethics.

    Before this case, 12 search engine service companies in Beijing, through the China Internet Association, jointly established the Internet Search Engine Service Self-Discipline Convention 2012.

    The convention explicitly stipulates the following provisions, which were adopted by the Beijing First Intermediate People’s Court:

    • Restrictions on search engine crawling should be based on industry-recognised reasonable justifications.
    • Robot protocols should not be used for unfair competition.

    (Article 8.)

    The court recognised this convention as an industry consensus among leading search engine companies, which are highly representative and dominate much of the market. This reflects the industry’s recognised business ethics and standards of behaviour.

    However, the court also opined that the healthy development of the market requires an orderly market environment and fair market competition rules as safeguards.

    The court further observed that Company B, in launching its search engine services, published the content and setting methods of the Robots protocol on its website. This action, the court reasoned, indicates that the entire internet industry, including Company B, recognises and complies with the Robots protocol.

    Consequently, the court held that the Robots protocol should be recognised as:

    • A prevailing rule in the industry.
    • The business ethics that should be followed in the search engine industry.

    Shanghai 73 Min Zhong

    In Shanghai 73 Min Zhong [2016] No. 242, the court ruled against the defendant, a website operator, who had used a significant amount of information from the plaintiff’s platform without permission, in violation of recognised business ethics.

    This unauthorised use substantially replaced the plaintiff’s products and services, causing harm to their interests.

    The court further held that:

    • When assessing the business ethics of commercial transactions, it is essential to consider the interests of operators, consumers, and the public comprehensively.
    • Unfair competition is not limited to conduct aimed at competitors; actions that improperly infringe on consumer interests or harm the public interest may also be considered unfair.
    • In determining unfair competition in specific cases, judgments must be based on the standards of honesty and creditworthiness, considering the impact of the behaviours on competitors, consumers, and the public.

    2021 Recommendation Algorithm Regulations

    The 2021 Recommendation Algorithm Regulations were enacted to:

    • Regulate recommendation algorithm activities in internet-based information services.
    • Promote CSVs.
    • Safeguard national security and public interests.
    • Protect the legitimate rights and interests of citizens, legal persons, and other organisations.
    • Promote the sound and orderly development of internet-based information services.

    (Article 1.)

    Recommendation algorithm-based service providers should:

    • Abide by laws and regulations.
    • Respect social morality and ethics.
    • Observe commercial and professional ethics.
    • Follow the principles of being fair and equitable, open and transparent, scientific and reasonable, and act in good faith.
    • Conduct science and technology reviews.
    • Not set up algorithm models that:
      • promote user addiction;
      • encourage excessive consumption; or
      • violate laws, regulations, ethics, or morals.

    (Articles 4, 7 and 8.)

    2021 Science Law

    In January 2022, the 2021 Science Law came into effect. It strengthens the ethical framework for science and technology, including AI, by enhancing ethics review, assessment, and supervision systems.

    The 2021 Science Law contains the following key ethical provisions:

    • Science and technology personnel must adhere to academic norms and ethical codes, maintain professional integrity, and act in good faith. Fraud and support for superstition are strictly prohibited. (Article 67.)
    • China enhances intellectual property (IP) rights protection, ethics in science and technology, and security review mechanisms in international scientific research co-operation (Article 82).
    • China improves research integrity, supervision systems, and governance structures for science and technology ethics (Article 98).
    • A committee on science and technology ethics is established to enhance institutional norms, ethics education, and research. Entities involved in science and technology must take primary responsibility for conducting ethics reviews. (Article 103.)
    • R&D activities that harm national security, public interests or human health, or violate research integrity and ethical standards are prohibited. Serious violations must be recorded in a database of dishonest conduct. (Article 107.)
    • Anyone violating the 2021 Science Law, including its ethical guidelines, must make corrections. Authorities may withdraw funding and confiscate illegal gains. In serious cases, they may publicly disclose violations, impose penalties, and ban individuals from participating in funded or licensed activities for a period. (Article 112.)

    2019 Network Content Provisions

    The 2019 Network Content Provisions stipulate information content requirements in cyberspace based on CSVs.

    Online content producers should avoid creating, reproducing, and disseminating:

    • Eleven types of illegal information.
    • Nine types of harmful information.

    (Articles 6-7.)

    2023 GenAI Measures

    Under Article 4 of the 2023 GenAI Measures, the provision and use of GenAI services must adhere to all applicable laws, regulations, social morals, and ethical standards.

    This includes compliance with the following requirements:

    • Adherence to CSVs, prohibiting the generation of content that:
      • incites subversion of state power;
      • overturns the socialist system;
      • harms national security and interests;
      • damages the national image;
      • incites separatism;
      • undermines national unity and social stability;
      • promotes terrorism, extremism, ethnic hatred, national discrimination, violence, or obscenity; and
      • spreads false or harmful information prohibited by laws and regulations.
    • Taking effective measures during algorithm design, training data selection, model generation and optimisation, and service provision to prevent discrimination based on nationality, beliefs, country of origin, region, gender, age, occupation, health, and other factors.
    • Respecting IP rights, business ethics, and preserving trade secrets.
    • Refraining from using advantages in algorithms, data, and platforms to engage in monopolistic and unfair competition practices.
    • Respecting the legitimate rights and interests of others, refraining from harming the physical and mental health of others, and avoiding violations of others’ rights to image, reputation, honour, privacy, and PI.
    • Implementing effective measures based on the service type characteristics to enhance the transparency of GenAI services and improve the accuracy and reliability of generated content.

    Selected National Standards and Guidelines

    China has a growing list of national standards that provide practical guidance on regulatory compliance issues and best practices concerning AI development, deployment, and use.

    Some of these national standards provide guidance on ethical issues and include:

    2021 AI Ethics Risk Prevention Guidelines

    In January 2021, TC260 released the 2021 AI Ethics Risk Prevention Guidelines, providing guidance on:

    • The ethical use of AI.
    • The prevention of security risks associated with AI.

    The guidelines require that an ethical safety risk analysis be conducted before AI-related activities begin. The analysis must address five risk categories:

    • Uncontrollability risk. AI behaviour and impact exceed the predetermined, understood, and controllable scope.
    • Sociability risk. AI is unreasonably used, including abuse and misuse.
    • Infringement risk. AI infringes on basic human rights, including personal, privacy, and property rights.
    • Discrimination risk. AI influences fairness and justice with subjective or objective biases towards specific human groups.
    • Responsibility risk. Inappropriate behaviour of various parties related to AI, with unclear responsibilities.

    2024 Basic GenAI Requirements

    Appendix A to the 2024 Basic GenAI Requirements lists 31 safety risks. Under these requirements, the concept of safety should be construed broadly from the perspective of multiple stakeholders.

    Appendix A is structured into the following sections:

    • A.1: Content in contravention of the CSVs.
    • A.2: Discriminatory content.
    • A.3: Commercial violations.
    • A.4: Infringement of the legitimate rights and interests of others.
    • A.5: Non-compliance with safety requirements of specific service types.

    While many provisions in Appendix A could be characterised as ethical issues, the term ethics is only used in the phrase “violation of business ethics” (for more information, see 2019 AUCL).

    2024 AI Framework

    In September 2024, TC260 released the AI Safety Governance Framework 2024 (2024 AI Framework).

    The framework outlines seven types of AI safety risks. This includes three ethical risks:

    • Exacerbation of social discrimination and widening of the intelligence divide. AI can collect and analyse human behaviours, social and economic status, and individual personalities. This data could be used to label and categorise groups of people and lead to:
      • systematic and structural social discrimination;
      • increased prejudice; and
      • widening intelligence divides among groups and regions.
    • Challenges to the traditional social order. AI development and application may significantly change production tools and relations. This could:
      • accelerate the reconstruction of traditional industry modes;
      • transform traditional views on employment, fertility, and education; and
      • challenge the stability of traditional social orders.
    • Becoming uncontrollable. The fast development of AI technologies means there is a risk of AI:
      • acquiring external resources;
      • self-replicating;
      • becoming self-aware;
      • seeking external power; and
      • attempting to seize control from humans.

    To address these risks, the 2024 AI Framework proposes two response measures:

    • Filtering training data and verifying outputs. Training data should be filtered and outputs verified during algorithm design, model training and optimisation, service provision, and other processes to prevent discrimination based on ethnicity, beliefs, nationality, region, gender, age, occupation, and health factors (a crude filtering sketch follows this list).
    • Ensuring AI safety. AI systems applied in key sectors (such as government departments, critical information infrastructure, and areas directly affecting public safety) should be equipped with highly efficient emergency management and control measures.
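
    For illustration only, the sketch below applies a very crude keyword screen to candidate training records, standing in for the training data filtering the framework describes. The flagged terms are placeholders; real systems would rely on trained classifiers and human review rather than simple keyword matching.

        # Placeholder term list; a production system would use trained classifiers
        # and human review rather than simple keyword matching.
        FLAGGED_TERMS = {"placeholder_slur_1", "placeholder_slur_2"}

        def screen_record(text):
            """Return True if the record passes the (very crude) discrimination screen."""
            lowered = text.lower()
            return not any(term in lowered for term in FLAGGED_TERMS)

        candidate_records = [
            "Savings products available to all customers.",
            "Loans should not be offered to placeholder_slur_1 customers.",
        ]
        training_corpus = [r for r in candidate_records if screen_record(r)]
        rejected = [r for r in candidate_records if not screen_record(r)]
        print(f"kept {len(training_corpus)}, rejected {len(rejected)}")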

    Role of Governmental and Non-Governmental Organisations

    Chinese Association for Artificial Intelligence (CAAI)

    The Chinese Association for Artificial Intelligence (CAAI), founded in 1981, is the only national-level academic association in intelligence science and technology officially authorised by China’s Ministry of Civil Affairs.

    Under Article 3 of the CAAI Charter 2014, the CAAI aims to promote the development of AI science and technology while adhering to China’s constitution, laws, regulations, state policies, and social morals and customs.

    The CAAI has an AI Ethics and Governance Working Committee.

    Cyberspace Administration of China (CAC)

    The Cyberspace Administration of China (CAC) is the central regulatory body overseeing internet policies, cybersecurity, PI protection, and data security in China.

    The CAC plays a critical role in shaping China’s digital landscape, including its approach to AI. It has issued several regulations governing AI development, deployment, and use in China.

    AI and algorithmic services related to public opinion or social mobilisation must be filed with the CAC (Article 17, 2023 GenAI Measures; Article 23, Provisions on Administration of Algorithmic Recommendation in the Internet Information Service 2021; Article 19, Administrative Provisions on Deep Synthesis in Internet-based Information Services 2022 (2022 Deep Synthesis Provisions, with effect from 10 January 2023)).

    National Data Administration

    In 2023, the National Data Administration was established to advance:

    • The planning and building of a digital China.
    • A digital economy.
    • A digital society.

    Though its remit appears relevant to AI, it has yet to impact AI ethics.

    Chinese Academy of Social Sciences (CASS)

    The Chinese Academy of Social Sciences (CASS) is China’s premier academic organisation and comprehensive research centre for philosophy and social sciences.

    It organises symposiums on AI and AI ethics, and its researchers regularly publish articles on AI.

    AI Subcommittee of the National Ethics Committee

    In July 2019, China established the National Ethics Committee and set up a subcommittee for AI.

    According to the Party and State Institutional Reform Plan 2023, the role of the National Ethics Committee will transition from a co-ordinating body under the State Council to an academic and professional expert committee within MOST.

    The AI subcommittee is responsible for:

    • Drafting guiding documents.
    • Organising academic seminars.
    • Facilitating in-depth discussions and exchanges among domestic and international experts and entrepreneurs.

    On 2 February 2024, the AI subcommittee released the Ethical Guidelines for Brain-Computer Interface Research.

    AI Expert Committee

    In 2019, MOST established the AI Expert Committee to advance the development plan proposed in the 2017 AI Plan. The committee comprises experts and scholars from academic institutions, research units, and technology enterprises.

    The AI Expert Committee has released important guiding documents that expand upon China’s AI governance framework and action guidelines. These include the 2019 AI Principles and the 2021 AI Code of Ethics (see 2019 AI Principles and 2021 AI Code of Ethics above).

    AI Ethical Issues Under Chinese Law

    Personal Freedom and Human Dignity

    The Civil Code of the PRC 2020 (2020 Civil Code, with effect from 1 January 2021) protects natural persons’ personal freedom and human dignity (Article 109).

    Many ethicists would refer to these as intrinsic rights. Many ethical and legal frameworks contain instrumental rights to support intrinsic rights.

    Privacy and PI Protection

    Privacy

    Privacy can be regarded as an instrumental right that contributes to personal freedom and human dignity. This is because the act of monitoring an individual can impact how they exercise their personal freedom. The 2020 Civil Code recognises this phenomenon (Article 990(2)).

    The 2020 Civil Code grants privacy rights (Articles 110 and 990). Article 1032 of the code provides:

    “A natural person enjoys the right to privacy. No organisation or individual may infringe upon the other’s right to privacy by prying into, intruding upon, disclosing, or publicising another’s private matters.”

    PI Protection

    The 2021 PIPL contains generic provisions to protect the PI of individuals (and, by extension, their privacy). It also contains provisions concerning the processing of PI through automated decision-making (Article 24) (see 2021 PIPL).

    GenAI

    The 2023 GenAI Measures require that the provision and use of GenAI services must not infringe on personal privacy, as well as PI rights and interests (Article 4(4)).

    Moreover, relevant agencies and personnel involved in the safety assessment and supervision of GenAI services must keep personal privacy and PI confidential (Article 19).

    Transparency and Explainability

    Transparency and explainability are concepts that appear in multiple legal sources and ethical frameworks for AI, including:

    • The 2023 Ethics Measures.
    • The 2024 Basic GenAI Requirements.
    • The 2023 GenAI Measures.
    • The 2019 BJ Principles.
    • The 2019 AI Principles.
    • The 2022 Judicial Opinions.
    • The 2021 PIPL.

    They are perhaps most relevant where AI is employed to make decisions that have a material impact on an individual’s rights and interests.

    Under the 2021 PIPL, individuals have the right to demand an explanation of how AI-driven decisions are reached (Article 24). However, the required level of detail for such explanations remains unclear.

    GenAI services should be transparent under the 2023 GenAI Measures (Article 4(5)).

    The 2024 Basic GenAI Requirements set out the following requirements for transparency (an illustrative disclosure sketch follows the list):

    • Service providers should publicly disclose the target audience, scenarios, and purposes of the services on the homepage and other prominent locations. They should also disclose the usage of basic models.
    • Users should be provided with the following information in easily accessible locations (such as the homepage and service agreements):
      • service limitations;
      • brief information about the models, algorithms, and other relevant matters; and
      • the PI collected and its purpose concerning the service.
    • The above information should be disclosed in supporting documents where the GenAI service is provided through a programmable interface.

    (Article 7(b).)
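
    As an illustration of how these items might be disclosed in supporting documents for a programmable interface, the sketch below assembles a machine-readable disclosure document. All field names and values are hypothetical assumptions; the 2024 Basic GenAI Requirements do not prescribe a schema.

        import json

        # Hypothetical disclosure document; the requirements list the items to
        # disclose but do not prescribe any particular structure.
        service_disclosure = {
            "service_name": "Example GenAI Assistant",
            "target_audience": "Retail users aged 18 and over",
            "use_scenarios": ["customer Q&A", "document summarisation"],
            "purpose": "General-purpose text generation for consumer use",
            "foundation_model": "example-base-model-v1",
            "service_limitations": [
                "Outputs may be inaccurate and should be independently verified.",
                "Not intended for medical, legal, or financial advice.",
            ],
            "personal_information_collected": {
                "items": ["account identifier", "prompt text"],
                "purpose": "Providing and improving the service",
            },
        }

        # Published in supporting documents or served alongside the API.
        print(json.dumps(service_disclosure, indent=2, ensure_ascii=False))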

    The People’s Bank of China also issued an industry standard that instructs financial institutions on how to disclose AI algorithm use to users (see Guidance on Information Disclosure for Financial Applications Based on AI Algorithms (JR/T 0287-2023)). The standard also provides examples of disclosures in its appendix.

    Bias and Fairness

    Under Chinese law, the terms bias and fairness have ambiguous meanings and are often treated similarly. Certain forms of bias and unfairness may be legally permissible, while others are not.

    The law generally prohibits biased and unfair conduct in specific situations, particularly those involving deliberate or accidental actions related to protected characteristics.

    Bias and unfairness may arise where a proxy value closely related to a protected characteristic affects the decisions of a decision-maker. For example:

    • Gender discrimination. Gender-based discrimination is often intertwined with factors predominantly associated with one gender, such as taking maternity leave. Consequently, penalising an individual for taking maternity leave may constitute gender discrimination.
    • Other forms of discrimination. Certain discriminatory practices may be less obvious. In China, some online platforms display different prices or offer varying discounts based on user characteristics and profiles. For example, a platform may show higher prices to Apple phone users than Android users. These characteristics could serve as proxy values for age, region, and so on.

    The debate continues over whether these practices are legitimate business strategies, or unethical and potentially fraudulent behaviour.

    Under the 2024 Basic GenAI Requirements and 2023 GenAI Measures, prohibited forms of bias and unfairness include:

    • Ethnic discrimination.
    • Discrimination based on beliefs.
    • Nationality-based discrimination.
    • Discrimination based on regional origin.
    • Gender discrimination.
    • Age discrimination.
    • Occupation-based discrimination.
    • Health-based discrimination.
    • Monopolistic behaviour or unfair competition.

    Accountability

    AI systems are tools created and provided by persons (natural and legal) that produce outputs at the behest of persons.

    The many different parties that might be involved in the creation, provision, and use of an AI system include:

    • Researchers.
    • Developers.
    • Ethics boards.
    • Regulators.
    • Vendors and suppliers.
    • Users.

    It is widely recognised that AI system outputs can materially and negatively impact the rights and interests of an individual. Therefore, it seems appropriate to make at least one person accountable for the consequences of AI outputs. However, it can be difficult to say who should be accountable to individuals harmed by AI where numerous stakeholders are involved in creating, providing, and using an AI system.

    Given the accountability problems associated with AI, the following rules assign obligations and liability to specific parties:

    • A PI processor must explain automated decisions reached by processing PI to PI subjects whose rights and interests are significantly affected upon request. Individuals have the right to reject such decisions (Article 24, 2021 PIPL).
    • Service providers must ensure PI is processed in accordance with laws and regulations (Article 51, 2021 PIPL; Articles 7 and 11, 2023 GenAI Measures).
    • No person may use deep synthesis technology to infringe the rights of another person (Article 6, 2022 Deep Synthesis Provisions).
    • Certain AI service providers are responsible for the information security of their AI systems (Article 7, 2022 Deep Synthesis Provisions; Article 9, 2023 GenAI Measures).
    • Certain AI service providers are responsible for the outputs of the systems they provide (Articles 8-11, 2022 Deep Synthesis Provisions; Article 9, 2023 GenAI Measures).
    • Certain AI service providers must employ effective measures to protect minors from overreliance or addiction to AI (Article 10, 2023 GenAI Measures).
    • Business operators are prohibited from using AI to engage in monopolistic practices or abuse their dominant status (Articles 9 and 22, 2022 AML).
    • Road and demonstration test applicants should follow rules and be accountable for accidents involving smart, connected vehicles (Article 6, Administrative Rules on Intelligent and Connected Vehicle Road Testing and Demonstration Application (Trial) 2021).
    • AI medical device registrants should assume responsibility for the safety and effectiveness of medical devices throughout their development, production, operation, and use in accordance with the law (Article 13, Regulations on Supervision and Administration of Medical Devices 2021).

    Censorship and AI

    China operates an extensive censorship system that covers internet content, among other forms of media. China requires the censorship of both AI training data and AI outputs; training data is typically scraped from online sources, and outputs are typically provided online.

    The 2019 Network Content Provisions stipulate that producers of online content are prohibited from creating, duplicating, or disseminating information that:

    • Contradicts the foundational principles outlined in China’s constitution.
    • Risks national security, reveals state secrets, attempts to subvert state authority, or disrupts national unity.
    • Harms the dignity or interests of the nation.
    • Distorts, defames, or dishonours the legacy and spirit of heroic martyrs, or disrespects the martyrs by insulting, defaming, or otherwise infringing upon their names, images, reputation, or honour.
    • Promotes terrorism or extremism or incites engagement in terrorist or extremist activities.
    • Instigates ethnic hatred or discrimination or threatens national solidarity.
    • Undermines the state’s religious policies or disseminates heresy and superstitious beliefs.
    • Disseminates false information or disrupts the economic and social order.
    • Spreads content that is obscene, pornographic, gambling-related, or violent, promotes murder and terror, or aids in criminal activities.
    • Insults or defames individuals, violating their reputation, privacy, or other legal rights and interests.
    • Includes any other content that is forbidden by laws and administrative regulations.

    (Article 6.)

    Online content producers must also avoid creating, reproducing, or spreading information that:

    • Employs sensationalist headlines that significantly misrepresent the content.
    • Sensationalises gossip, scandals, and misconduct.
    • Inappropriately comments on natural disasters, major accidents, and other catastrophes.
    • Contains sexual innuendo or provocation that could lead to sexual associations.
    • Displays graphic violence, horror, or cruelty that may cause distress.
    • Incites discrimination against groups or regions.
    • Promotes vulgar, obscene, or tasteless content.
    • May lead minors to imitate dangerous behaviours, violate social ethics, or develop poor habits.
    • Otherwise negatively impacts the health of the online environment.

    (Article 7.)

    Appendix A of the 2024 Basic GenAI Requirements lists several types of security risks. These risks can be considered to derive from Articles 6 and 7 of the 2019 Network Content Provisions.

    Responsibility and Negligence in AI

    Professional Responsibility in AI Development

    Individuals involved in AI development can be considered scientific and technological personnel.

    The 2021 Science Law states that scientific and technological personnel should:

    • Be patriotic, innovative, truth-seeking, dedicated, and collaborative.
    • Adhere to the spirit of craftsmanship.
    • Observe academic and ethical norms in all kinds of scientific and technological activities.
    • Abide by professional ethics, and be honest and trustworthy.
    • Refrain from fraudulent practices in scientific and technological activities.
    • Avoid participating in or supporting superstition.

    (Article 67.)

    Legal Framework Governing Negligence

    China has not yet established clear rules regarding liability for the actions of AI.

    In SCLA v AI Company [2024] Guangzhou Internet Court (Yue 0192 Min Chu No.113), the court provided valuable guidance for determining the liability of AI service providers and other related parties.

    The court:

    • Held that the AI service provider failed to implement appropriate technical preventive measures. This allowed users to generate images containing copyrighted elements belonging to other rights holders, which constituted IP infringement.
    • Assessed the need for compensation and found the AI service provider liable due to:
      • the absence of a complaint reporting mechanism;
      • a failure to alert users to potential risks; and
      • AI-generated images not being clearly identified.
    • Determined that the provider did not fulfil its duty of care and exhibited subjective fault.

    Consequently, the AI service provider was ordered to pay RMB10,000 in compensation to the plaintiff.

    For more information, see Practice Note, AI-Generated Content and Copyright (China): SCLA v AI Company.

    AI in Other Professions: Law and Accounting

    Law

    Under the Law of the PRC on Lawyers 2017 (2017 Lawyers Law), the law firm employing a lawyer is liable for the lawyer’s wrongdoing where a party suffers losses. The law firm may seek recourse against the lawyer if the lawyer acted intentionally or with gross negligence (Article 54).

    There are no exemptions or safe harbours covering the use of AI under the 2017 Lawyers Law. As such, lawyers should:

    • Ensure the quality of all work they produce.
    • Clearly explain any limitations or constraints on their work.
    • Provide disclaimers as appropriate.

    Accounting

    Under the Accounting Law of the PRC 2024 (2024 Accounting Law), the person in charge of an entity:

    • Is responsible for the authenticity and completeness of the accounting practice and the accounting documents of the entity (Article 4).
    • Should ensure the truthfulness and completeness of financial and accounting reports (Article 21).

    There are no exemptions or safe harbours covering the use of AI under the 2024 Accounting Law.

    Under the Specification for Accounting Informatisation 2024 (with effect from 1 January 2025), entities must stipulate the following in procurement contracts for accounting information services:

    • Service content.
    • Service quality.
    • Service duration.
    • Data security.
    • Other rights and responsibilities.

    (Article 15.)

    Entities conducting accounting informatisation involving AI should comply with relevant laws and regulations and respect social morality and ethics (Article 44).

    Accounting software is regulated under the Specification for Basic Functions and Services of Accounting Software 2024 (with effect from 1 January 2025).

    Under Article 42, where accounting software service providers are responsible for the leakage or damage of users' accounting data, they must restore the data and pay compensation as stipulated.

    However, the broader liability of accounting software providers to users for AI-related issues remains unclear. As such, liability between the parties will typically be determined by contract.

    Preventative Measures and Best Practices

    Methods for preventing or mitigating AI-induced negligence include:

    • Ensuring a qualified human is the ultimate decision-maker.
    • Providing qualified human decision-makers with adequate resources to manually fulfil their role.
    • Conducting a thorough risk assessment as a part of procurement activities.
    • Using well-drafted contracts to allocate liability and clearly define service standards.

    Global Co-operation on AI Ethics

    Global AI Governance Initiative

    In October 2023, China proposed the Global AI Governance Initiative.

    The initiative states that the development of AI should prioritise the well-being of humanity, ensure social security, respect human rights, and support sustainable development.

    It also promotes:

    • The principles of fairness and non-discrimination in data acquisition, algorithm design, technology development, product development, and application.
    • An ethics-first approach, emphasising AI ethics guidelines, norms, and accountability mechanisms, supported by review systems.
    • The principles of broad participation, consensus, and incremental development.

    Bletchley Declaration

    On 1 November 2023, the Bletchley Declaration was published. China is a signatory to the declaration, along with 27 other countries and the EU.

    Ethical principles covered in the Bletchley Declaration include human rights, transparency, explainability, accountability, fairness, regulation, safety, human oversight, ethics, bias mitigation, privacy, and data protection.

    The Bletchley Declaration emphasises focusing on risk identification and creating policies based on these identified risks.

    For more information, see Practice Note, Key AI Regulatory Considerations in China: Bletchley Declaration.

    Enhancing International Co-operation on AI Capacity-Building

    On 1 July 2024, the United Nations General Assembly adopted a consensus resolution on enhancing international co-operation on AI capacity-building, proposed by China and co-sponsored by over 140 countries (see UNGA: UNGA Adopts China-Proposed Resolution to Enhance International Cooperation on AI Capacity-Building).

    Framework Convention on AI

    The Council of Europe’s Framework Convention on Artificial Intelligence is an international treaty that is legally binding on its signatories (see Council of Europe: The Framework Convention on Artificial Intelligence).

    It is monitored through a Conference of the Parties to ensure signatories’ adherence. China is not a signatory. However, given that several major economies are signatories, it will likely have some indirect impact in China.

    AI Ethics Challenges

    Gaps in the Current Legal Framework

    In China’s current legal framework, there are several unclear issues, including:

    • Unclear ethical obligations. Terms such as bias and discrimination are not clearly defined, which leads to uncertainty in their interpretation and application.
    • Absence of a comprehensive AI law. There is no overarching AI law in China. As a result, different regulators are responsible for regulating AI within their specific areas of competence.

    This fragmented approach complicates compliance efforts, particularly for organisations with diverse interests across multiple sectors.

    Balancing Innovation and Regulation

    Regulators face the challenge of balancing innovation with regulation. An example of this can be found in the drafting process of the 2023 GenAI Measures.

    The initial draft of the 2023 GenAI Measures, issued by the CAC, proposed that any entity or individual providing GenAI services should assume the responsibilities of the content producer. This caused significant controversy and raised concerns within the AI industry.

    The explicit liability provisions were omitted from the finalised version of the 2023 GenAI Measures issued by the CAC and other regulators. This suggests that issues raised by one regulator may later attract the attention of other regulators and that regulators are trying to balance innovation and regulation.

    Chinese laws and regulations are typically published for public consultation (Article 74, Legislation Law of the PRC 2023). This process provides stakeholders, including those in the AI industry, with an opportunity to voice their opinions on the governance of AI technology.

    Future Directions

    Many experts and scholars suggest that China should formulate a comprehensive AI law in addition to the existing legal framework for regulating AI.

    To this end, two draft AI laws were released:

    • On 19 March 2024, experts from seven universities released the Artificial Intelligence Law (Scholar’s Draft) (人工智能法(学者建议稿)).
    • On 16 April 2024, institutions including the Law Institute of the Chinese Academy of Social Sciences drafted and published the Artificial Intelligence Model Law 2.0 (Expert Draft).

    Both drafts propose certain requirements for AI ethics. For example, Article 42 of the Expert Draft stipulates that:

    • An AI ethics review committee should be established for AI R&D activities that involve sensitive areas, as determined by the national AI authority.
    • AI ethics reviews should be conducted under relevant national regulations.
    • Other AI developers, providers, and users are encouraged to establish AI ethics review committees based on actual circumstances.

    The State Council, in its Legislative Work Plan for 2023 and 2024, stated its intention to prepare to submit the draft AI law to the Standing Committee of the National People’s Congress for deliberation. For more information, see Legal Updates, State Council Releases 2024 Legislative Plan and State Council Releases 2023 Legislative Plan.

    In terms of national standards, on 5 June 2024, the Ministry of Industry and Information Technology and other departments issued the Guidelines for the Construction of National AI Industry Comprehensive Standardisation System (2024 Edition).

    The guidelines outline the following objectives:

    • By 2026, develop more than 50 new national and industry standards and improve the AI standard system covering seven key areas, including:
      • basic commonality;
      • key technologies; and
      • the safety and governance of AI products and services.
    • Standardise ethical governance requirements for the entire lifecycle of AI, including:
      • AI ethics risk assessments;
      • ethical governance technology requirements and evaluation methods for fairness and explainability of AI; and
      • AI ethics review standards.