On February 28, 2025, the Korea Communications Commission (“KCC”) published the “Guidelines on the Protection of Users of Generative AI Services” (the “Guidelines”), aimed at mitigating risks of user harm associated with generative AI services.[1] The Guidelines will take effect on March 28, 2025.
1. Background and Purpose

2. Summary of the Guidelines
- Four Basic Principles
(1) Generative AI services should respect human dignity, ensure individual freedom and human rights, and allow for appropriate human oversight and control.
(2) Generative AI services should provide clear, easily understandable explanations regarding their operational principles, AI-generated outputs, and their impact on users.
(3) Generative AI services should operate safely, minimize unexpected damage, and prevent malicious use or alteration.
(4) Generative AI services should be operated in a manner that prevents discriminatory, unfair, or biased outputs.
- Six Implementation Methods
The Guidelines pair each of the six implementation methods with key methods and implementation examples, including the following:

(1) Protect users’ personal rights
Implementation examples: Display warning messages or temporarily block users attempting to input prompts that infringe on others’ rights.

(2) Disclose decision-making processes
Implementation examples: Provide information to help users understand the AI system’s decision-making process, such as by indicating the source of an output or providing links to relevant sources.

(3) Make efforts to respect diversity
Implementation examples: Provide users with a prominently placed reporting channel within the service and create damage report forms.

(4) Manage the data collection and utilization process
Implementation examples: When obtaining consent for the collection and use of personal information during sign-up or service use, provide an option for users to allow or decline the use of their generated content for training.

(5) Take responsibility for and participate in addressing issues
Implementation examples: Clearly outline potential user liabilities for misuse or abuse in the terms of use or terms of service.

(6) Make efforts to ensure ethical content distribution and moderation
Implementation examples: Implement input/output filters to prevent harmful content generation, and provide guidance to users when there is a risk of inappropriate content being generated, such as lewd photos (an illustrative sketch of such a filter follows this list).
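The Guidelines describe these measures at a policy level only and do not prescribe any particular technical implementation. Purely for illustration, the sketch below shows in Python how a service might combine a prompt-level warning/block step with an output filter along the lines of examples (1) and (6). The term lists, messages, and function names are hypothetical placeholders, and a production system would rely on dedicated moderation models and legal review rather than simple keyword matching.

```python
# Illustrative sketch only: a minimal input/output filter for a generative AI
# service, loosely following the Guidelines' examples of warning users about
# rights-infringing prompts and filtering harmful outputs. All term lists and
# messages below are hypothetical placeholders, not part of the Guidelines.

from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    WARN = "warn"    # show a guidance/warning message to the user
    BLOCK = "block"  # temporarily block the request


@dataclass
class FilterResult:
    action: Action
    message: str = ""


# Placeholder term lists standing in for a real moderation model or policy.
RIGHTS_INFRINGING_TERMS = {"deepfake of", "impersonate", "private photo of"}
HARMFUL_OUTPUT_TERMS = {"explicit image", "graphic violence"}


def check_prompt(prompt: str) -> FilterResult:
    """Input filter: warn or block prompts that may infringe others' rights."""
    lowered = prompt.lower()
    if any(term in lowered for term in RIGHTS_INFRINGING_TERMS):
        return FilterResult(
            Action.BLOCK,
            "This request may infringe on another person's rights and has been blocked.",
        )
    return FilterResult(Action.ALLOW)


def check_output(generated_text: str) -> FilterResult:
    """Output filter: guide the user when inappropriate content may be generated."""
    lowered = generated_text.lower()
    if any(term in lowered for term in HARMFUL_OUTPUT_TERMS):
        return FilterResult(
            Action.WARN,
            "The generated content may be inappropriate and has been withheld pending review.",
        )
    return FilterResult(Action.ALLOW)


if __name__ == "__main__":
    # Example: the first prompt passes, the second is blocked by the input filter.
    for prompt in ["Summarize this article", "Create a deepfake of my coworker"]:
        result = check_prompt(prompt)
        print(f"{prompt!r} -> {result.action.value} {result.message}")
```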
3. Future Outlook and Next Steps
[1] This legislative effort is distinct from the “Act on the Development of Artificial Intelligence and Establishment of Trust” (the “AI Basic Act”) that was enacted on January 21, 2025.
[2] Article 3(2) of the AI Basic Act stipulates that “[t]he [users of AI services] shall be provided with an explanation, which is clear and meaningful to the extent technically and reasonably possible, of the key criteria and principle, etc., used by AI to derive the final outputs.”