On December 19, 2024, the Personal Information Protection Commission (“PIPC”) announced the AI Privacy Risk Management Model for Safe Utilization of AI and Data (“Model”) (available in Korean).
The Model is designed to present the direction and principles of AI privacy risk management for companies and institutions that intend to establish and maintain privacy-related internal management systems when adopting and applying AI technology. It replaces the PIPC’s Self-Checklist for Privacy Protection in Artificial Intelligence (May 31, 2021).
The Model guides AI developers and providers in managing AI privacy risks so that personal data is processed safely in the course of providing AI services. The risk management process involves (i) identifying the types and applications of the AI model and system, (ii) identifying and measuring the privacy risks associated with each type and application, and (iii) preparing measures to mitigate those risks. The key details of the Model are as follows:
1. Identifying the Types and Applications of AI
The Model distinguishes the types and uses of AI across two stages: (i) the planning/development stage, in which training data is collected and used to develop the AI; and (ii) the service provision stage, in which AI services are actually provided to users. The service provision stage is further divided into generative AI and discriminative AI, depending on the type of service, as set out below.
Generative AI
- A system that generates text, images, audio, video, etc., based on the user’s input and context
- Examples: ChatGPT, image generative AI

Discriminative AI
- A system that classifies the user’s input into specific classes or makes predictions by scoring that input
- Examples: recruiting AI, credit rating AI, Fraud Detection System (“FDS”), targeted advertisement/recommendation, medical assistant AI, autonomous vehicle sensors
2. Identifying and Measuring Privacy Risks
The Model sets out the key privacy risks corresponding to each type and application of AI model and system, as listed below. AI developers and providers are advised to assess the probability that each risk occurs and the significance of its impact on the organization, individuals, and society if it materializes, and on that basis to decide whether to accept the risk and how to prioritize response measures (a minimal scoring sketch follows the list).
Planning/Development Stage
- Unlawful collection and use of training data
- Improper storage and management of AI training data
- Greater complexity in data flows, and in guaranteeing data subjects’ rights, due to the diversification of the AI value chain

Service Provision Stage
- The Model sets out separate risk lists for generative AI and for discriminative AI, corresponding to the two service types described above
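Purely as an illustration of that assessment, the sketch below records risks on a five-point likelihood and impact scale and ranks them by a likelihood-times-impact score. The scales, threshold, and example entries are assumptions made for the sketch; the Model itself does not prescribe any particular scoring formula.

```python
from dataclasses import dataclass

@dataclass
class PrivacyRisk:
    """One entry in an illustrative AI privacy risk register."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- illustrative scale
    impact: int      # 1 (negligible) .. 5 (severe)   -- illustrative scale

    @property
    def score(self) -> int:
        # Simple likelihood x impact product; the formula is an assumption.
        return self.likelihood * self.impact

risks = [
    PrivacyRisk("Unlawful collection of training data", likelihood=3, impact=5),
    PrivacyRisk("Improper storage of training data", likelihood=2, impact=4),
    PrivacyRisk("Personal data exposed in model output", likelihood=4, impact=5),
]

# Rank risks to set response priority; accept only those below a threshold.
ACCEPTANCE_THRESHOLD = 6  # illustrative
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    action = "accept" if risk.score < ACCEPTANCE_THRESHOLD else "mitigate"
    print(f"{risk.score:>2}  {action:<8} {risk.name}")
```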
3. Measures to Mitigate Privacy Risks
AI developers and providers should consider and implement managerial and technical measures to mitigate privacy risks, based on the results of their identification and measurement of those risks. The managerial and technical mitigation measures introduced in the Model are listed below; the Model states, however, that implementing them is not mandatory.
Managerial Mitigation Measures
Management of the Source/History of Training Data
- Reviewing the legality of the collection and use of personal information for each source of training data
- Disclosing the standards for the collection, processing, and use of training data in privacy policies, technical documents, FAQs, etc.

Safe Storage/Destruction of Training Data
- Implementing security measures for training data, such as access control and restrictions on access authority
- Destroying training data in an irrecoverable manner once the purpose of processing has been achieved

Clarification of the Roles of Participants in the AI Value Chain
- Reviewing how the processing of personal information is delegated and how personal information is transferred overseas
- Considering means of ensuring the appropriate allocation and execution of roles among the various participants (e.g., contracts, licenses, guidelines)

Preparation and Disclosure of the Acceptable Use Policy (“AUP”)

Organization and Operation of an AI Privacy Red Team

Preparation of Means for Data Subjects to Report Incidents and Response Measures
- Preparing measures to respond to requests to delete faces, voices, etc. that appear in AI output against the data subject’s intent, and implementing a function to report inappropriate output

Compliance with Automated Decision-Making Related Obligations

Privacy Impact Assessment
Technical Mitigation Measures
Processing of Training Data
- Minimizing the collection of training data and, where possible, pseudonymizing or anonymizing it immediately after collection, before further use and retention
- Removing redundant sentences, words, etc. from the training data through de-duplication (a minimal sketch follows)
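As an illustration of the de-duplication and pseudonymization measures above, the following sketch removes exact duplicate sentences and replaces e-mail addresses with salted one-way hashes. The function names, salt, and regular expression are hypothetical; real pipelines use more robust PII detection and near-duplicate matching.

```python
import hashlib
import re

# Hypothetical salt for the sketch; in practice it must be generated
# randomly and stored securely, separately from the data.
SALT = b"replace-with-a-securely-stored-salt"

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def scrub(sentence: str) -> str:
    """Pseudonymize e-mail addresses found in a training sentence."""
    return EMAIL_RE.sub(lambda m: pseudonymize(m.group()), sentence)

def deduplicate(sentences: list[str]) -> list[str]:
    """Drop exact duplicate sentences while preserving order."""
    seen: set[str] = set()
    kept = []
    for s in sentences:
        key = s.strip().lower()
        if key not in seen:
            seen.add(key)
            kept.append(s)
    return kept

corpus = [
    "Contact alice@example.com for details.",
    "Contact alice@example.com for details.",  # exact duplicate, removed
    "An unrelated sentence.",
]
print([scrub(s) for s in deduplicate(corpus)])
```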
Use of Synthetic Data
- Training the AI model on synthetic data, i.e., simulated or artificial data generated by learning the format, structure, and statistical distribution characteristics and patterns of the original data for a specific purpose (a simplified sketch follows)
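A deliberately simplified sketch of the idea behind synthetic data: learn summary statistics from the original records, then sample artificial records from them. The example records are invented, and production synthetic-data generators also preserve cross-column correlations and apply privacy tests, which this per-column sketch does not.

```python
import random
import statistics

# Illustrative original records: (age, monthly spend) of real users.
original = [(34, 120.0), (29, 80.5), (45, 200.0), (52, 150.0), (38, 95.0)]

# Learn simple statistical characteristics of each column.
ages, spends = zip(*original)
age_mu, age_sd = statistics.mean(ages), statistics.stdev(ages)
spend_mu, spend_sd = statistics.mean(spends), statistics.stdev(spends)

# Sample synthetic records that mimic the learned distribution but
# correspond to no real individual.
synthetic = [
    (round(random.gauss(age_mu, age_sd)), round(random.gauss(spend_mu, spend_sd), 2))
    for _ in range(5)
]
print(synthetic)
```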
Fine-Tuning
Input and Output Filtering
- Refusing to generate a response, or returning a pre-determined response, to prompts that solicit personal profiling or privacy-infringing output
- Applying filter technology that detects and removes personal information appearing in the output (an illustrative sketch follows)
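A minimal sketch of input and output filtering, assuming a small set of hand-written patterns. The block list, refusal message, and regular expressions are illustrative stand-ins for the maintained PII detectors a production service would use.

```python
import re

# Illustrative patterns only; a production service would rely on a
# maintained PII detector rather than a handful of regular expressions.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "phone": re.compile(r"\b\d{2,3}-\d{3,4}-\d{4}\b"),  # Korean-style numbers
}
BLOCKED_PROMPTS = re.compile(r"(home address|phone number)\s+of\b", re.IGNORECASE)
REFUSAL = "I can't help with requests for personal information about individuals."

def filter_input(prompt: str) -> str | None:
    """Return a pre-determined refusal for prompts seeking personal data."""
    return REFUSAL if BLOCKED_PROMPTS.search(prompt) else None

def filter_output(text: str) -> str:
    """Mask personal information detected in model output."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(filter_input("What is the phone number of Hong Gildong?"))
print(filter_output("Reach me at 010-1234-5678 or hong@example.com."))
```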
Differential Privacy Techniques (a sketch of one standard mechanism follows)
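The Model names differential privacy without detailing a mechanism here. One standard technique is the Laplace mechanism; the sketch below applies it to a single count query, with the query and parameters assumed for illustration.

```python
import math
import random

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise calibrated to epsilon-DP.

    Adding or removing one person's record changes a simple count by at
    most 1 (the sensitivity), so noise drawn with scale
    sensitivity / epsilon yields epsilon-differential privacy for this
    single query.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5          # uniform in [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# E.g., privately releasing how many users triggered a fraud alert.
print(dp_count(true_count=1203, epsilon=1.0))
```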
Measures to Track Data Sources and Detect Synthetic Content

Pseudonymization or Anonymization of Biometric Information
- Applying various pseudonymization and anonymization technologies to images, video, voice, etc. (an illustrative sketch follows)
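As one example of masking visual biometric information, the sketch below blurs detected faces in an image using OpenCV, an assumed dependency; the file paths and detector are illustrative, and whether blurring alone satisfies legal anonymization standards depends on the circumstances.

```python
import cv2  # OpenCV (opencv-python); an assumed dependency for this sketch

def blur_faces(path_in: str, path_out: str) -> None:
    """Blur detected faces in an image file so it no longer identifies people.

    The Haar-cascade detector is a simple illustrative choice; production
    systems typically use stronger detectors and handle video and voice
    in the same spirit.
    """
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    image = cv2.imread(path_in)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        # Replace each face region with a heavily blurred copy.
        image[y:y + h, x:x + w] = cv2.GaussianBlur(
            image[y:y + h, x:x + w], (51, 51), 30
        )
    cv2.imwrite(path_out, image)
```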
4. AI Privacy Risk Management Framework
The Model emphasizes the role of the Chief Privacy Officer and recommends that an organization responsible for AI privacy be formed to establish an AI privacy risk management framework. In addition, the Model provides a self-assessment checklist regarding AI privacy risk management methods for use by AI developers and providers.
The Model states that compliance with it is optional and that AI companies may establish their own AI privacy risk management systems in light of their particular circumstances. However, it also notes that companies and institutions that have made their best efforts to ensure safety in accordance with the Model may be deemed to have complied with the Personal Information Protection Act, or may have those efforts considered as a mitigating factor when administrative sanctions are imposed. Companies and institutions that develop or provide AI should therefore consider establishing an AI privacy risk management system by reference to the Model, carefully review how the Model is applied in practice, and continue to monitor its further development.