1. Content Regulation and Copyright - Introduction of Labeling Obligations of AI-Generated Content under the AI Act
The Framework Act on the Development of Artificial Intelligence and the Establishment of Foundation for Reliability (the “AI Act”), which was recently passed by the National Assembly and will come into effect in January 2026, sets forth labeling obligations for AI-generated content. As the specific requirements will be detailed in subordinate regulations, companies should monitor these developments and establish measures to comply with the labeling obligations and minimize legal risks.
2. Privacy - PIPC Announces AI Privacy Risk Management Model as well as Guidebook on Synthetic Data
The Personal Information Protection Commission (the “PIPC”) published its AI Privacy Risk Management Model, which outlines how companies can address major privacy risks relating to AI. The PIPC also released a guidebook explaining key considerations for generating and using synthetic data.
3. Antitrust and Competition - Competition Law Issues and Regulatory Trends in Generative AI and Cloud Computing Services
As generative AI services become more advanced, cloud services are gaining importance in the AI value chain. In particular, the expansion of cloud services may lead to vertical integration within the AI value chain, which in turn may raise competition law issues, including concerns over dependency on large-scale cloud services and restrictions on multi-homing. Competition authorities around the world have expressed concerns about anti-competitive issues in the cloud services and AI markets, such as market concentration, data monopolies, and the exclusion of competitors. The Korea Fair Trade Commission has also announced that it will scrutinize competition law concerns in the AI and cloud services markets based on the results of its surveys of those markets.
4. Labor, Employment and ESG - Regulatory Trends Concerning the Use of AI in Labor and Employment Matters in Key Countries and Implications for Korea
The use of AI in labor and employment matters is prompting more robust regulatory responses in many countries, in the form of new regulations and administrative guidelines.
– United States: The US Department of Labor has issued guidance on AI that emphasizes eight principles aimed at mitigating the risks AI poses to workers.
– United Kingdom: The UK Department for Science, Innovation, and Technology has issued a guide on the use of AI in recruitment, which aims to prevent discrimination and ensure data protection, among other objectives.
In Korea, the AI Act, once it takes effect, will regulate high-risk uses of AI.
In light of these regulatory responses to the growing use of AI, it will be necessary to closely monitor how different countries regulate AI, together with the policy frameworks underlying those regulations.
5. Governance and Risk Management - Main Obligations and Compliance Measures under the AI Act from Governance and Risk Management Perspectives
The AI Act imposes various obligations on AI business operators. From a governance and risk management perspective, the AI Act requires high-impact AI business operators to implement measures to ensure safety and reliability. Moreover, this obligation extends to AI business operators whose AI systems exceed a predetermined threshold of cumulative compute used for training, even if those systems are not classified as high-impact AI.
The details of these obligations will be specified in subordinate regulations. It is therefore advisable to closely monitor legislative developments concerning the enforcement decree of the AI Act and the notifications of the Ministry of Science and ICT. To minimize legal risks, we recommend that business operators identify the obligations relevant to them and take the necessary steps to comply before the AI Act takes effect.
6. Foundation Models and Platforms - Ensuring Safety under the AI Act: Using Red Teaming to Promote Compliance
Red teaming is emerging as a key methodology for AI safety evaluation. Since this technique is likely to be recognized as a statutory method for satisfying some of the safety requirements under the AI Act (specifically, risk identification, evaluation, and mitigation), we recommend monitoring these developments.
Attachment AI Issues and Implications in Q4, 2024.pdf