A Brief Analysis on the AI Safety Governance Framework
1. Background and Structure of the AI Framework
On September 9, 2024, the National Technical Committee 260 on Cybersecurity of Standardization Administration of China ("TC260") promulgated the AI[1] Safety Governance Framework (V1.0) (the "AI Framework"), which aims to implement the Global AI Governance Initiative, promote consensus and coordinated efforts on AI security governance among governments, international organizations, companies, research institutes, civil organizations, and individuals, and effectively prevent and mitigate AI security risks.[2]
On October 18, 2023, the Cyberspace Administration of China issued the Global AI Governance Initiative (the "AI Initiative"), which puts forward an open, fair, and efficient approach to the development, security, and governance of AI, intending to harness this transformative technology for the benefit of humanity.[3] According to its preface, the AI Framework was formulated to implement the AI Initiative, and it highlights that development and security shall be equally guaranteed and facilitated, reflecting China's commitment to addressing frontier AI safety issues and its proactive stance in shaping a secure AI landscape.
In general, the AI Framework does two things: it identifies AI-related security risks, and it stipulates measures that all stakeholders involved, such as technology research institutions, product and service providers, users, governmental agencies, and social organizations, should take to prevent and respond to those risks.
To address AI-related security concerns thoroughly, the AI Framework is structured around the following aspects:
- Safety/security risks;
- Technical countermeasures;
- Comprehensive governance measures; and
- Safety guidelines for AI development and application.
2. AI Security Risks and Proposed Measures
The main body of the AI Framework outlines AI-related risks and proposed solutions, which consist of technical measures, comprehensive governance measures, and guidelines for securing the development and application of AI.
To begin with, because AI risks stem mainly from two sources, the AI Framework presents AI-related security risks in two categories. The first comprises security issues originating from the AI technology itself, such as models and algorithms, training data and data output, and the AI system, which the AI Framework classifies as inherent safety risks.
In addition to the inherent safety risks, risks arising during the application of AI, such as personal information leakage, misuse of AI technology, amplification of "information cocoons", exacerbation of social bias and discrimination, and even the potential uncontrollability of AI, are classified as safety risks in AI application.
Despite the security risks and challenges posed by AI and its application, the AI Framework emphasizes coordinating development with security, reflecting the consensus that stagnation of development is the greatest insecurity.[4] It therefore sets out countermeasures at both the technical level and other levels, aiming to build an agile and collaborative governance system and to ensure that the technology develops in an orderly manner under human control and serves the growing needs of mankind.[5]
Below is the table attached to the last section of the AI Framework, which maps each AI-related security risk to the corresponding technical countermeasures and comprehensive governance measures.
| Safety risk category | Risk source | Specific risk | Technical countermeasures | Comprehensive governance measures |
| --- | --- | --- | --- | --- |
| Inherent safety risks | Risks from models and algorithms | Risks of explainability | 4.1.1(a) | Advance research on AI explainability; create a responsible AI R&D and application system |
| | | Risks of bias and discrimination | 4.1.1(b) | |
| | | Risks of robustness | 4.1.1(b) | |
| | | Risks of stealing and tampering | 4.1.1(b) | |
| | | Risks of unreliable output | 4.1.1(a)(b) | |
| | | Risks of adversarial attack | 4.1.1(b) | |
| | Risks from data | Risks of illegal collection and use of data | 4.1.2(a) | Improve AI data security and personal information protection regulations |
| | | Risks of improper content and poisoning in training data | 4.1.2(b)(c)(d)(e)(f) | |
| | | Risks of unregulated training data annotation | 4.1.2(e) | |
| | | Risks of data leakage | 4.1.2(c)(d) | |
| | Risks from AI systems | Risks of exploitation through defects and backdoors | 4.1.3(a)(b) | Strengthen AI supply chain security; share information on, and conduct emergency response to, AI safety risks and threats |
| | | Risks of computing infrastructure security | 4.1.3(c) | |
| | | Risks of supply chain security | 4.1.3(d) | |
| Safety risks in AI applications | Cyberspace risks | Risks of information and content safety | 4.2.1(a) | Implement a tiered and category-based management system for AI applications; establish a traceable management system for AI services; increase efforts to train talent in AI safety and security; establish and improve mechanisms for AI safety and security education, industry self-regulation, and social supervision; promote international exchange and cooperation on AI safety governance (applicable to all safety risks in AI applications) |
| | | Risks of confusing facts, misleading users, and bypassing authentication | 4.2.1(a) | |
| | | Risks of information leakage due to improper usage | 4.2.1(b) | |
| | | Risks of abuse for cyberattacks | 4.2.1(a) | |
| | | Risks of security flaw transmission caused by model reuse | 4.2.1(a)(b) | |
| | Real-world risks | Risks of inducing traditional economic and social security risks | 4.2.2(b) | |
| | | Risks of using AI in illegal and criminal activities | 4.2.2(a)(b) | |
| | | Risks of misuse of dual-use items and technologies | 4.2.2(a)(b) | |
| | Cognitive risks | Risks of amplifying the effects of "information cocoons" | 4.2.3(b) | |
| | | Risks of usage in launching cognitive warfare | 4.2.3(a)(b)(c) | |
| | Ethical risks | Risks of exacerbating social discrimination and prejudice, and widening the intelligence divide | 4.2.4(a) | |
| | | Risks of challenging traditional social order | 4.2.4(a)(b) | |
| | | Risks of AI becoming uncontrollable in the future | 4.2.4(b) | |
With respect to the comprehensive governance measures, the AI Framework sets out a series of measures to tackle security risks arising in the development and application of AI, offering multiple stakeholders the chance to participate and collaborate in the governance process. For example, according to the AI Framework, research on transparency, trustworthiness, and error-correction mechanisms in the AI decision-making process shall be organized and conducted on the basis of machine learning theory, training methods, and human-computer interaction, thereby enhancing the explainability and predictability of AI systems and preventing malicious consequences arising from unintended decisions made by AI.[6]
Furthermore, security risks relating to AI development and application are inevitably intertwined with data and network security, personal privacy, and intellectual property issues. To coordinate with existing laws and regulations, the AI Framework follows the practices currently adopted in those areas. For example, a tiered and category-based management mechanism should also be employed in AI application, imposing requirements on specific users utilizing AI technologies in specific scenarios as a way of effectively preventing the abuse of AI systems.[7]
3. Guidelines and Practical Advice
It is worth noting that the last part of the AI Framework offers guidelines for the various market players engaged in AI development and application, including model and algorithm developers, AI service providers, users in Key Areas,[8] and general users.
With respect to practitioners developing AI models and algorithms or providing AI-related services, the guidelines are directed at the pertinent phases of each process. AI model and algorithm developers, for instance, shall take measures such as holding internal discussions, organizing expert evaluations, conducting technological ethical reviews, listening to public opinions, communicating and exchanging ideas with potential target audiences, and strengthening employee safety education and training at key stages such as requirement analysis, project initiation, model design and development, and training data selection and use.[9]
For users, whether in Key Areas or in the general public, the safety guidelines aim to raise awareness of AI-related security issues, especially regarding personal information and privacy protection, prevention of critical information leakage, and improvement of network security capabilities.[10]
As a practical matter, practitioners could adopt and implement these guidelines in their daily operations when developing and applying AI. Several pieces of practical advice merit attention:
- Always keep in mind the red lines drawn by laws and regulations on network security, information security, personal information and privacy, and intellectual property protection throughout the AI development and application processes.
- Establish and embrace mechanisms for risk tracking, internal review, self-testing and evaluation, and internal reporting, so as to guard against AI's inherent security risks.
- Provide ongoing training for both AI developers and public users, to promote awareness of, and engagement in, secure AI development and application.
On a separate note, and particularly for the purpose of eliminating AI-related security risks, practitioners should bear in mind not only the guidelines under the AI Framework but also the tiered and classification-based requirements for protecting data sources, such as personal information, under current laws and regulations, including the Network Security Law, the Data Security Law, and the Personal Information Protection Law. Compliance with these requirements helps avoid security risks (including but not limited to data leakage and abuse) and ensures the safe and steady development and application of AI.
Additionally, as mentioned above, AI is closely intertwined with both security and development policies. As a core driver of the fourth industrial revolution, AI plays an irreplaceable role in both development and security.[11] For AI to reach its full potential, it is widely recognized that governments should be thoughtful about protecting citizens while also creating room for the positive innovation that AI can bring.[12] The current PRC legal system addresses these interconnected issues through a spectrum of laws, regulations, guidelines, opinions, and the like.
To take full advantage of this cutting-edge technology and maximize its benefits for key industrial sectors, the relevant laws and regulations focus on the application of AI technology in those sectors, further promoting AI development and facilitating its actual application in particular scenarios.[13] For instance, on June 18, 2024, the National Medical Products Administration of the People's Republic of China issued the List of Typical Application Scenarios of Artificial Intelligence for Drug Governance (the "AI Drug List"), presenting fifteen application scenarios that can play a leading demonstration role, possess development potential, address pain points in practice, and cater to urgent needs.[14]
In conclusion, the AI Framework identifies risks at both the AI development and application levels, to which practitioners should pay attention in their customary practice. To deal with those risks and the accompanying challenges, practitioners can embrace and implement the measures and guidelines set forth in the AI Framework at each relevant stage of the AI development and application process. Moreover, within any specific industrial sector, it is also worth noting and complying with the specialized rules governing the security and development of AI technology in that field.
[1] AI stands for artificial intelligence.
[2] See the preface of the AI Framework.
[3] See the second paragraph of the AI Initiative.
[4] See Alibaba Group, China Electronics Standardization Institute, Alibaba Cloud, and Alibaba Damo Academy, Generative Artificial Intelligence Governance & Practice White Paper (October 31, 2023).
[5] See supra note 4.
[6] See section 5.6 of the AI Framework.
[7] See section 5.1 of the AI Framework.
[8] According to the AI Framework, the Key Areas refer to governmental departments, critical information infrastructure, and areas directly affecting public security and people's health and safety.
[9] See section 6.1(a) of the AI Framework.
[10] See sections 6.3 and 6.4 of the AI Framework.
[11] See Han Na, China promotes coordination of AI governance, China Daily (July 2, 2024).
[12] See Catherine Jewell, Artificial intelligence: the new electricity, WIPO Magazine (June 2019).
[13] See Xiangxiang Ma, A Substantial Move to Advance AI Application in Pharmaceutical Industry — A Brief Analysis on the List of Typical Application Scenarios of Artificial Intelligence for Drug Governance (July 1, 2024), https://www.lexiscn.com/mnl/detail.php?meta_content_id=3618.
[14] See supra note 13.