
The Korea Communications Commission Issues the Guidelines on the Protection of Users of Generative AI Services

2025.03.07

On February 28, 2025, the Korea Communications Commission (“KCC”) published the “Guidelines on the Protection of Users of Generative AI Services” (the “Guidelines”), aimed at mitigating risks of user harm associated with generative AI services. The Guidelines will take effect on March 28, 2025.
 

1. Background and Purpose

The KCC has been actively working on various AI user protection initiatives as part of its 2024 and 2025 annual work plans, including efforts to enact the (tentatively titled) “Act on the Protection of Users of AI Services.”[1] In 2024, the KCC established the “Public-Private Council for the Protection of Users of AI Services” and launched the “Damage Reporting Channel for AI Service Users,” allowing users to report AI-related harm or complaints.

In line with these initiatives, the Guidelines reflect the KCC’s continued efforts to develop legal and institutional measures that promote user-centric AI services while ensuring their safety and reliability. According to the KCC’s press release, the Guidelines are designed to protect users from potential risks of generative AI, such as defamation, discrimination, and bias.
 

2. Summary of the Guidelines

Though not legally binding, the Guidelines set forth four basic principles and six implementation methods that generative AI developers and service providers should aim to adopt to protect users’ rights and interests and mitigate potential risks of generative AI. In addition, the Guidelines provide best practice examples from existing services.
 

  • Four Basic Principles

(1) Generative AI services should respect human dignity, ensure individual freedom and human rights, and allow for appropriate human oversight and control.

(2) Generative AI services should provide clear, easily understandable explanations regarding their operational principles, AI-generated outputs, and their impact on users.

(3) Generative AI services should operate safely, minimize unexpected damage, and prevent malicious use or alterations.

(4) Generative AI services should be operated in a manner that prevents discriminatory, unfair, or biased outputs.
 

  • Six Implementation Methods (each with its key methods and an implementation example)

(1) Protect users’ personal rights

  • Create algorithms to detect and control elements that may infringe on personal rights
  • Acknowledge responsibility for managing outputs
  • Establish an internal monitoring system and user reporting procedures

Implementation example: Display warning messages to, or temporarily block, users who attempt to input prompts that infringe on others’ rights.

(2) Disclose decision-making processes

  • Indicate that outputs are AI-generated
  • Provide information on the process used to generate outputs

Implementation example: Help users understand the AI system’s decision-making process by indicating the sources of outputs or providing links to relevant sources.

(3) Make efforts to respect diversity

  • Establish filtering functions to prevent discriminatory or biased use
  • Establish reporting mechanisms for biased content

Implementation example: Provide users with a prominently placed reporting channel within the service and create damage report forms.

(4) Manage the data collection and utilization process

  • Establish procedures for obtaining prior consent to the collection and use of user input data for training
  • Appoint designated personnel to oversee the collection and utilization of user data

Implementation example: When obtaining consent for the collection and use of personal information during sign-up or service use, provide an option for users to allow or decline the use of their generated content for training.

(5) Take responsibility for and participate in addressing issues

  • Define the respective responsibilities of service providers and users
  • Establish a risk management system, such as a monitoring system

Implementation example: Clearly outline potential user liabilities for misuse or abuse in the terms of use or terms of service.

(6) Make efforts to ensure ethical content distribution and moderation

  • Inform users that they should not generate or share inappropriate outputs
  • Review and manage user inputs and outputs to ensure compliance with moral and ethical standards

Implementation example: Implement input/output filters to prevent the generation of harmful content, and provide guidance to users when there is a risk of generating inappropriate content, such as sexually explicit images (see the illustrative sketch following this list).
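By way of illustration only, the following Python sketch shows one way a provider might approximate methods (1), (2), and (6) above: screening user prompts and model outputs against a blocklist, warning the user, and labeling outputs as AI-generated. The Guidelines are technology-neutral and prescribe no particular implementation; the blocklist, messages, and function names here are hypothetical placeholders, not anything specified by the KCC.

```python
# Hypothetical sketch loosely modeled on the Guidelines' implementation
# examples: warn on risky prompts, filter harmful outputs, and disclose
# that outputs are AI-generated. All terms and messages are placeholders.

BLOCKED_TERMS = {"defamatory-term", "explicit-term"}  # placeholder blocklist

AI_LABEL = "[This response was generated by AI.]"
WARNING = ("Your prompt may infringe on others' rights or produce "
           "inappropriate content and has been blocked.")

def screen_text(text: str) -> bool:
    """Return True if the text contains a blocked term (toy keyword check)."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def handle_prompt(prompt: str, generate) -> str:
    """Filter the prompt, generate a response, filter the output, label it."""
    if screen_text(prompt):          # input filter: warn and block risky prompts
        return WARNING
    output = generate(prompt)        # call the underlying model (passed in)
    if screen_text(output):          # output filter: suppress harmful outputs
        return WARNING
    return f"{output}\n{AI_LABEL}"   # disclose that the output is AI-generated

# Example usage with a stand-in "model":
if __name__ == "__main__":
    print(handle_prompt("Tell me about the Guidelines.", lambda p: "A short answer."))
```

A production system would of course use trained classifiers and human review rather than a keyword list, but the control points (input screening, output screening, labeling) mirror the structure the Guidelines describe.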

 

3. Future Outlook and Next Steps

The KCC plans to review the Guidelines every two years from March 28, 2025, and amend them as necessary. As such, the implementation methods and examples outlined in the Guidelines may be updated or supplemented over time.

The four basic principles and six implementation methods outlined in the Guidelines can serve as reference points when applying and enforcing existing user protection laws and regulations, such as the Telecommunications Business Act and the Network Act. Moreover, they may inform the development of new laws and regulations, such as the (tentatively titled) “Act on the Protection of Users of AI Services.”

Notably, the principle of providing clear explanations to users is similar to Article 3(2) of the AI Basic Act.[2] Among the implementation methods, (i) notifying users that outputs are AI-generated and (ii) obtaining prior consent for the collection and use of input data for training resemble Article 31 of the AI Basic Act and relevant provisions of the Personal Information Protection Act. Therefore, the interpretation and implementation of these overlapping regulations and the Guidelines will likely remain an area of ongoing discussion.

Furthermore, in early 2025, the Ministry of Science and ICT launched the “Subordinate Legislation Task Force” to draft the Enforcement Decree of the AI Basic Act and related guidelines. The forthcoming Enforcement Decree and related guidelines are expected to further clarify the rights and obligations of businesses developing or deploying generative AI. Given this evolving regulatory landscape, generative AI developers and service providers should closely monitor developments and assess the relationship and alignment between the AI Basic Act, its subordinate laws and regulations, and the Guidelines.

 


[1]   This legislative effort is distinct from the “Act on the Development of Artificial Intelligence and Establishment of Trust” (the “AI Basic Act”) that was enacted on January 21, 2025.
[2]   Article 3(2) of the AI Basic Act stipulates that “[t]he [users of AI services] shall be provided with an explanation, which is clear and meaningful to the extent technically and reasonably possible, of the key criteria and principle, etc., used by AI to derive the final outputs.”

 

