Launch of the National Artificial Intelligence Strategy Commission, and Announcement of the Initiatives for the Korea AI Basic Action Plan and National AI Computing Centers
On September 8, 2025, the Ministry of Science and ICT (“MSIT”) announced the official launch of the National Artificial Intelligence Strategy Commission (“NAISC”) and released a draft Enforcement Decree (the “Enforcement Decree”) of the Act on the Development of Artificial Intelligence and Establishment of Trust (the “AI Basic Act”).
The Enforcement Decree sets forth the standards and procedures delegated to it by the AI Basic Act, including the eligibility criteria for recipients of government support to foster the AI industry, the methods for complying with the transparency and safety obligations imposed on AI business operators, and the scope of related exceptions. Notably, the Enforcement Decree includes provisions addressing: (i) the methods for implementing the transparency obligations applicable to generative and high-impact AI business operators, (ii) the criteria for defining high-performance AI, (iii) the standards for determining whether an AI system qualifies as high-impact AI and the corresponding measures for implementing the relevant obligations, and (iv) the scope of AI business operators required to designate a domestic agent. These provisions are expected to significantly affect the relevant business operators and should be closely monitored.
In September, the MSIT will seek public comments on the Enforcement Decree from a broad range of stakeholders, including industry, academia, civil society, and relevant government agencies. Following this public comment period, the MSIT plans to issue an advance legislative notice in October, with the goal of finalizing and promulgating the Enforcement Decree by year-end. According to its official press release, the MSIT also expects to publish accompanying drafts of the relevant notifications and guidelines[1] by December. Accordingly, close monitoring of these developments will be essential.
The key details and implications of the Enforcement Decree are summarized below.
1. Obligation to Secure Transparency
(1) Advance notice obligation for high-impact and/or generative AI (Article 31 (1) of the AI Basic Act, and Article 22 (1) of the Enforcement Decree)
(2) Labeling obligation for generative AI (Article 31 (2) of the AI Basic Act, and Article 22 (2) of the Enforcement Decree)
(3) Notification or labeling obligation for “deepfake” content (Article 31 (3) of the AI Basic Act, and Article 22 (3) of the Enforcement Decree)
(4) Exceptions to the Obligation to Secure Transparency (Article 31 (4) of the AI Basic Act, and Article 22 (4) of the Enforcement Decree)
2. Obligation to Ensure Safety for “High-Performance” AI (Article 32 of the AI Basic Act, and Article 23 of the Enforcement Decree)
3. Confirmation of High-Impact AI (Article 33 of the AI Basic Act, and Article 24 of the Enforcement Decree)
4. Obligation to Ensure Safety and Reliability of High-Impact AI[2] (Article 34 of the AI Basic Act, and Article 26 of the Enforcement Decree)
5. High-Impact AI Impact Assessment (Article 35 of the AI Basic Act, and Article 27 of the Enforcement Decree)
The Enforcement Decree specifies the matters to be covered in the impact assessment, including:
- Identification of individuals whose fundamental rights could potentially be affected by products or services using high-impact AI systems
- Identification of the types of fundamental rights that may be impacted in connection with high-impact AI
- Details and scope of the potential social and economic impact on the fundamental rights of individuals resulting from the use of high-impact AI
- Usage patterns of the high-impact AI system
- Quantitative or qualitative evaluation indicators used in the impact assessment and the methodology for calculating the results
- Measures for preventing risks and remedying damages arising from the use of high-impact AI
- Action plans for implementing any necessary improvements identified as a result of the impact assessment
6. Criteria for Designating a Domestic Agent (Article 36 of the AI Basic Act, and Article 28 of the Enforcement Decree)
The Enforcement Decree prescribes the criteria that trigger the obligation to designate a domestic agent, including:
- Total revenue exceeding KRW 1 trillion in the previous fiscal year
- Revenue from AI services exceeding KRW 10 billion in the previous fiscal year
- An average daily number of domestic end-users exceeding 1 million during the three months preceding the end of the previous year
- Receipt of a request from the Minister of Science and ICT for the submission of relevant items and documents in connection with an actual or potential incident that substantially compromises the safety of AI service usage due to violations of the AI Basic Act
7. Introduction of the Statutory Basis for Exemption from Fact-Finding Investigations (Article 40 of the AI Basic Act, and Article 31 of the Enforcement Decree) and a Grace Period for Administrative Fines
8. Conclusion: Launch of the NAISC, and Announcement of the Initiatives for Korea's AI Basic Action Plan and National AI Computing Centers
[1] (i) Guidelines on the Criteria and Examples of High-Impact AI, (ii) Notification and Guidelines on the Responsibilities of High-Impact AI Business Operators, (iii) Notification and Guidelines on the Obligation to Ensure AI Safety, (iv) Guidelines on Securing AI Transparency, and (v) Guidelines on AI Impact Assessment.
[2] (i) Establishment and operation of risk management measures, (ii) establishment and implementation of measures to explain, to the extent technologically feasible, the final results produced by AI, the major criteria used in producing such results, and an overview of the training data used in the development and utilization of AI, (iii) establishment and operation of user protection measures, (iv) human supervision and oversight of high-impact AI, (v) preparation and retention of documentation that can confirm the measures taken to ensure safety and reliability, and (vi) other matters deliberated and resolved by the Commission to ensure the safety and reliability of high-impact AI.