Introduction

As artificial intelligence (“AI”) technology continues to advance, it has increasingly been exploited for infringing activities that violate general personality rights protected under the Civil Code of the People’s Republic of China (the “Civil Code”), including the right to name, right to image, and right to reputation.  In many reported cases, infringers upload unauthorized content through online platforms, exploiting the speed and scale that AI technology enables while evading traditional enforcement mechanisms.  In response to these circumstances, on April 3, 2026, the Cyberspace Administration of China (“CAC”) released the Administrative Measures for Digital Virtual Human Information Services (Draft for Comments) (《数字虚拟人信息服务管理办法(征求意见稿)》, the “Draft Measures”), inviting public comment on this pioneering regulatory framework.  This legislative initiative represents China’s latest effort to address the legal gaps and potential risks arising from the rapid advancement of AI technologies capable of generating sophisticated virtual digital representations of human beings.

This article provides a preliminary legal analysis of the Draft Measures, examining its definitional framework, core regulatory provisions concerning rights protection and service standards, platform obligations, and its position within the broader legislative landscape governing AI-generated content in China.

  1. Definitional Framework: Establishing Regulatory Scope

The Draft Measures establishes a comprehensive definitional framework that systematically categorizes the various actors and technologies involved in the digital virtual human ecosystem.  This approach reflects a deliberate effort to bring clarity and predictability to the regulatory landscape, addressing concerns that have emerged as AI-generated virtual human technology has proliferated across entertainment, education, commerce, and social media platforms.

Digital Virtual Human is defined as “a virtual digital image that exists in a non-physical world, utilizing computer graphics, digital image processing, or AI technologies, driven by real humans or computation, simulating human appearance, and possessing characteristics such as voice, behavior, interaction capabilities, or personality.”  This definition encompasses both motion-capture-driven virtual humans that map real human expressions, movements, and speech in real-time, as well as computationally-generated virtual entities.  The breadth of this definition ensures comprehensive regulatory coverage while remaining technology-neutral enough to accommodate future technological developments.

The Draft Measures further distinguishes between three categories of regulated entities, each subject to different regulatory expectations:

Digital Virtual Human Service Providers (the “DSPs”) are defined as organizations or individuals that provide digital virtual human services.  This category captures the primary commercial actors in the ecosystem, including platforms that offer virtual human creation tools and businesses that deploy virtual human customer service representatives.

Digital Virtual Human Technical Supporters (the “DTSs”) are organizations or individuals that provide technical support for digital virtual human services.  This category addresses the underlying technology providers, including AI model developers and cloud infrastructure providers that enable digital virtual human services without directly providing end-user services.

Digital Virtual Human Service Users (the “DSUs”) are organizations or individuals that use digital virtual human services to produce, reproduce, or publish information.

This tripartite classification enables differentiated allocation of legal obligations and responsibilities based on the role each entity plays in the digital virtual human value chain.

  2. Core Regulatory Provisions: Rights Protection and Service Standards

The Draft Measures is organized into five major sections, with the core regulatory content concentrated in Chapter II (“Rights Protection”) and Chapter III (“Service Standards”).  This structure reflects the dual objectives of the regulation: safeguarding individual rights while establishing operational standards for DSPs.

  • Rights Protection Framework

Chapter II of the Draft Measures establishes a multi-dimensional rights protection framework that addresses several critical concerns:

General Personality Rights Protection: The Draft Measures explicitly addresses the need to prevent the creation of digital virtual humans that infringe upon individuals’ general personality rights as protected under the Civil Code.  This includes the right to name, right to image, and right to reputation.  The Draft Measures recognizes that digital virtual human technology, while offering significant creative potential, can be readily exploited to produce content that violates these fundamental rights through unauthorized face-swapping and voice-cloning applications.

Personal Information Protection: The Draft Measures incorporates requirements for lawful collection and processing of personal information in the development and deployment of digital virtual humans, aligning with the Personal Information Protection Law of the People’s Republic of China (the “PIPL”).

Intellectual Property Protection: The Draft Measures includes provisions aimed at preventing IP infringement in the creation and dissemination of digital virtual human content, addressing concerns about unauthorized use of copyrighted materials in content generation.

Minor Protection and Anti-Addiction Measures: Reflecting broader regulatory trends in China’s digital economy, the Draft Measures incorporates requirements for protecting minors from potential harms associated with digital virtual human services, including anti-addiction mechanisms.

  • Service Standards and Platform Obligations

Chapter III of the Draft Measures imposes detailed obligations on DSPs and DSUs concerning content management compliance, lawful acquisition of personal information, data processing compliance, and labeling of AI-generated synthetic content.  In particular, Articles 15, 16, and 20 set forth the following specific measures to be taken by DSPs and DSUs in the course of their daily business operations:

Risk Monitoring and Emergency Response: DSPs are required to establish mechanisms for monitoring and early warning of security risks, emergency response protocols, and anti-addiction alerts.  They must also develop comprehensive content-oriented management systems.

Technical and Human Oversight: DSPs must deploy technical capabilities and personnel commensurate with their scale of operations, employing AI, big data, and other technical means combined with human review to enhance identification, monitoring, and early warning of risks associated with digital virtual human services.  Relevant logs must be recorded and retained.

Enforcement Measures: When DSPs discover illegal activities conducted through their services, they must promptly implement measures including dynamic identity verification, warnings, feature restrictions, and service termination.  Where significant risks are identified, immediate suspension or termination of digital virtual human services, deregistration of the virtual human, and elimination of effects are required.

User Grievance Mechanisms: DSPs shall establish user complaint and public reporting mechanisms, setting up convenient channels for grievance submission with requirements for timely processing and feedback.

Service Agreements: DSPs must enter into service agreements with DTSs and DSUs, specifying rights and obligations concerning content security and data collection, usage, and storage standards.

  • Algorithmic Accountability

For DSPs and DTSs with public opinion attributes or social mobilization capabilities, the Draft Measures requires compliance with algorithmic filing and change/cancellation procedures as prescribed under the Administrative Provisions on Internet Information Service Algorithmic Recommendation Management.  This requirement subjects such entities to additional regulatory oversight concerning their algorithmic systems.

  • Penalties and Enforcement

Article 24 of the Draft Measures establishes a tiered penalty structure:

  • warnings, public criticism, and orders to rectify within a specified period;
  • for refusal to rectify or serious circumstances, suspension of services with fines of 10,000 to 100,000 yuan;
  • for violations involving endangerment of citizens’ life and health with harmful consequences, fines of 100,000 to 200,000 yuan.

  3. Legislative Context and Regulatory Evolution

The Draft Measures represents the latest addition to China’s evolving regulatory framework governing AI-generated content and platform responsibilities in the digital age.  This legislative ecosystem includes:

  • Administrative Provisions on Internet Information Service Algorithmic Recommendation Management (《互联网信息服务算法推荐管理规定》): Governing algorithmic recommendation systems,
  • Administrative Provisions on Deep Synthesis in Internet Information Services (《互联网信息服务深度合成管理规定》): Addressing deep synthesis technologies including virtual human generation,
  • Interim Measures for the Management of Generative Artificial Intelligence Services (《生成式人工智能服务管理暂行办法》): Governing general generative AI services, and
  • Measures for Labeling AI-Generated or Composed Content (《人工智能生成合成内容标识办法》): Establishing content labeling requirements.

In addition, complementary technical standards such as the Cybersecurity Standard Practice Guide—Methods for Labeling Generative AI Service Content (《网络安全标准实践指南——生成式人工智能服务内容标识方法》) provide detailed implementation guidance for compliance with these regulatory requirements.

This comprehensive regulatory architecture demonstrates China’s proactive approach to governing emerging technologies, seeking to establish clear rules of the road before potential harms become widespread.

  4. Assessment and Proposed Recommendations

  • Proportionality in Regulatory Design

While the Draft Measures represents a commendable effort to address regulatory gaps in the digital virtual human domain, several refinements would enhance its effectiveness and proportionality.  The current “one-size-fits-all” approach to platform obligations may require differentiation based on entity size, user base, and risk profile.  While large-scale platform enterprises have already deployed comprehensive technical and human review mechanisms in response to existing regulatory requirements, the application of identical standards to small and medium-sized platforms may impose substantial compliance burdens that could impede their development.

This regulatory challenge is not unique to China.  The European Union’s approach to platform regulation under the Digital Services Act distinguishes between different categories of platforms based on their user base and systemic impact, with correspondingly differentiated obligations.  Adopting principles similar to the EU’s tiered platform regulation would better balance regulatory objectives with the need to foster innovation and competition, particularly regarding technical deployment and human review requirements among smaller market participants.

  • Harmonization with Existing Frameworks

The Draft Measures must be read in conjunction with existing provisions of the Civil Code and other sector-specific regulations.  The Civil Code imposes specific duties on platforms in this context: when a rights holder notifies a platform of infringing conduct, the platform must take necessary measures such as takedown, blocking, or disconnection of links.  However, the Civil Code does not mandate platforms to act solely on user reports; it adopts the mechanism of “forwarding notices” to balance technological development with the protection of individual rights.  The relationship between the emergency response measures and user grievance mechanisms under the Draft Measures and the Civil Code provisions requires careful calibration to avoid potential conflicts or unintended expansion of platform liability that could indirectly increase compliance burdens.

Greater clarity is needed regarding the relationship between the Draft Measures and existing legal provisions, particularly concerning the interplay between emergency suspension requirements and user grievance mechanisms with Civil Code provisions governing platform liability.  Explicit guidance on how these new obligations interact with existing rights-protection mechanisms would reduce legal uncertainty and compliance costs.

Given the rapidly evolving nature of digital virtual human technology, a graduated implementation approach that allows for regulatory learning and adaptation would be beneficial.  Establishing review mechanisms to assess the effectiveness of the Draft Measures and make necessary adjustments based on technological and market developments would enhance long-term regulatory coherence.

  • Classification-Based Responsibility Allocation

The Draft Measures currently adopts a relatively generalized approach to defining platform obligations across the tripartite classification of DSPs, DTSs, and DSUs.  While this classification framework provides a useful starting point, the actual scope of obligations imposed on each category remains broadly similar, potentially creating challenges for effective enforcement and compliance.  Future regulatory development should consider adopting a more granular classification-based system that better matches responsibilities with the specific roles, capabilities, and risk profiles of different types of market participants.

For instance, DTSs who provide foundational infrastructure but do not directly interface with end users might reasonably bear different obligations than DSPs who exercise greater control over content creation and dissemination.  Similarly, commercial DSPs operating at scale might reasonably face more stringent requirements than individuals using digital virtual human technology for personal creative projects.  The development of detailed implementing rules or regulatory guidance that provides concrete examples and criteria for different categories would significantly enhance practical compliance.

Conclusion

The Draft Measures represents a significant and timely regulatory initiative by Chinese authorities to address the legal challenges posed by rapidly advancing AI technologies capable of generating sophisticated virtual human representations.  By establishing comprehensive definitions, detailed platform obligations, and enforcement mechanisms, the Draft Measures seeks to protect individual rights while fostering responsible development of the digital virtual human industry.  The Draft Measures reflects a broader trend in Chinese technology regulation toward targeted, sector-specific governance frameworks.

However, several areas warrant further refinement.  These include the adoption of proportionality-based regulatory standards differentiated by entity size and risk profile, improved harmonization with existing legal frameworks, particularly the Civil Code’s notice-and-takedown provisions, and graduated implementation mechanisms that accommodate technological evolution.  The development of detailed implementing rules and regulatory guidance would significantly enhance practical compliance and regulatory effectiveness.

As digital virtual human technology continues to penetrate daily life and as more startups enter the market as DSPs, the need for nuanced, adaptive regulatory frameworks becomes increasingly urgent.  The ongoing public consultation provides an important opportunity for stakeholders to contribute to the development of sound regulatory policy in this critical area of AI governance.