Legal Update

The MSIT Releases Draft Enforcement Decree of the AI Basic Act

2025.09.10

Launch of the National Artificial Intelligence Strategy Commission, and Announcement of the Initiatives for the Korea AI Basic Action Plan and National AI Computing Centers

On September 8, 2025, the Ministry of Science and ICT (“MSIT”) announced the official launch of the National Artificial Intelligence Strategy Commission (“NAISC”) and released a draft Enforcement Decree (the “Enforcement Decree”) of the Act on the Development of Artificial Intelligence and Establishment of Trust (the “AI Basic Act”).
 
The Enforcement Decree sets forth the standards and procedures delegated by the AI Basic Act, including the eligibility and recipient criteria for government support to foster the AI industry, the methods for complying with the transparency and safety obligations imposed on AI business operators, and the scope of related exceptions. Notably, the Enforcement Decree includes provisions addressing: (i) the methods for implementing the transparency obligations applicable to generative and high-impact AI business operators, (ii) the criteria for defining high-performance AI, (iii) the standards for determining whether an AI system qualifies as high-impact AI and how the corresponding obligations are to be implemented, and (iv) the scope of AI business operators obliged to designate a domestic agent. These provisions are expected to significantly affect the relevant business operators and should be closely monitored.
 
In September, the MSIT will seek public comments on the Enforcement Decree from a broad range of stakeholders, including industry, academia, civil society, and relevant government agencies. Following this public comment period, the MSIT plans to issue an advance legislative notice in October, with the goal of finalizing and promulgating the Enforcement Decree by year-end. According to its official press release, the MSIT also expects to publish accompanying drafts of relevant notifications and guidelines[1] by December. Accordingly, close monitoring of these developments will be essential.
 
The key details and implications of the Enforcement Decree are summarized below.
 

1. Obligation to Secure Transparency

The AI Basic Act requires generative and high-impact AI business operators to provide advance notice to the end-users of such services, and requires generative AI business operators to label AI-generated outputs for their end-users (together, the “Obligation to Secure Transparency”). The Enforcement Decree further clarifies how these transparency obligations are to be implemented and sets out certain exceptions. The MSIT will provide additional details and exceptions in the upcoming “Guidelines on Securing AI Transparency.”
 

(1) Advance notice obligation for high-impact and/or generative AI (Article 31 (1) of the AI Basic Act, and Article 22 (1) of the Enforcement Decree)

Under the AI Basic Act, AI business operators providing high-impact and/or generative AI products and services are required to provide advance notice to their end-users that the relevant services are operated based on AI. The Enforcement Decree allows operators to meet this advance notice obligation not only by labeling the product or service itself, but also by: (i) including a statement in contracts, user manuals, or terms of service, or displaying the information on the user interface or device, and (ii) posting a notice in an easily recognizable manner at the location where the product or service is provided.
 

(2) Labeling obligation for generative AI (Article 31 (2) of the AI Basic Act, and Article 22 (2) of the Enforcement Decree)

Under the AI Basic Act, AI business operators providing generative AI products and services are required to label outputs to indicate that they were created using generative AI. The Enforcement Decree provides that such labeling may be made in a “human- or machine-readable format,” explicitly permitting the use of invisible watermarks, which may not be readily apparent to humans but can be detected by machines.
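For context on the distinction between human-readable and machine-readable labels, the minimal sketch below (Python, using the Pillow library) shows one way a generative AI service could embed a machine-detectable marker in an image file's metadata. The field names are hypothetical and the code is an illustration only; the Enforcement Decree does not prescribe a particular format, and in practice providers tend to rely on provenance standards (e.g., C2PA) or robust invisible watermarks, since plain metadata can be stripped when a file is re-encoded.

```python
# Illustrative sketch only: embed and read back a machine-readable
# "AI-generated" marker in PNG metadata using Pillow. The key names are
# hypothetical, not an official or standardized labeling scheme.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Save a copy of the image with a machine-readable provenance tag."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")                     # hypothetical key
    metadata.add_text("generator", "example-generative-service")  # hypothetical key
    image.save(dst_path, pnginfo=metadata)


def is_labeled_ai_generated(path: str) -> bool:
    """Machine-readable check: detect the embedded tag without human review."""
    return Image.open(path).text.get("ai-generated") == "true"
```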
 

(3) Notification or labeling obligation for “deepfake” content (Article 31 (3) of the AI Basic Act, and Article 22 (3) of the Enforcement Decree)

For “deepfake” content (audio, images, or video generated by AI that are difficult to distinguish from reality), AI business operators must notify users or provide labeling in a manner that allows users to clearly recognize that the content was generated by AI. The Enforcement Decree sets out standards requiring AI business operators to (i) implement the notification or labeling so that users can easily identify the content through visual or auditory cues, or with the aid of software, and (ii) tailor the notification method to the primary users’ age, physical abilities, and social circumstances.
 

(4) Exceptions to the Obligation to Secure Transparency (Article 31 (4) of the AI Basic Act, and Article 22 (4) of the Enforcement Decree)

The Enforcement Decree provides that the Obligation to Secure Transparency does not apply (i) where it is already obvious that a product or service uses high-impact and/or generative AI—for example, from the product or service name, user interface, or external labeling, or (ii) where the AI product or service is used exclusively for internal business purposes. The Enforcement Decree also grants the Minister of the MSIT the authority, through subordinate regulations, to exempt all or part of the Obligation to Secure Transparency, considering product type or nature, outputs, manner of use, and degree of technical sophistication.
 

2. Obligation to Ensure Safety for “High-Performance” AI (Article 32 of the AI Basic Act, and Article 23 of the Enforcement Decree)

The Enforcement Decree defines “high-performance” AI, which is subject to the obligation to ensure safety, as a system with a computational capability of at least 10²⁶ floating-point operations (FLOPs). In addition, such a system must meet detailed criteria to be established by the Minister of the MSIT through subordinate notification, taking into account factors such as technical advancement and risk level.
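As a rough sense of scale for the 10²⁶ FLOPs figure, the back-of-envelope sketch below applies the widely used approximation that training compute ≈ 6 × (parameter count) × (training tokens). The model sizes and token counts are hypothetical, and the calculation is purely illustrative; it is not a method prescribed by the AI Basic Act, the Enforcement Decree, or the MSIT.

```python
# Illustrative back-of-envelope estimate of cumulative training compute,
# compared against the 10^26 FLOPs figure in the draft Enforcement Decree.
# Approximation: training FLOPs ≈ 6 × parameters × training tokens.
# All model figures below are hypothetical.
THRESHOLD_FLOPS = 1e26


def estimated_training_flops(parameters: float, tokens: float) -> float:
    return 6 * parameters * tokens


examples = {
    "hypothetical 70B-parameter model, 2T tokens": (70e9, 2e12),
    "hypothetical 2T-parameter model, 15T tokens": (2e12, 15e12),
}

for name, (params, tokens) in examples.items():
    flops = estimated_training_flops(params, tokens)
    status = "exceeds" if flops >= THRESHOLD_FLOPS else "falls below"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} the 10^26 threshold)")
```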
 

3. Confirmation of High-Impact AI (Article 33 of the AI Basic Act, and Article 24 of the Enforcement Decree)

Under the AI Basic Act, AI business operators may request the Minister of the MSIT to confirm whether a particular system qualifies as “high-impact AI.” The Enforcement Decree provides that this determination will consider the following factors: (i) the sector in which the AI is used, (ii) the impact, severity, and frequency of potential risks to human life, physical safety, and fundamental rights, as well as sector-specific characteristics, (iii) the results of the operator’s preliminary review as to whether the system constitutes high-impact AI, and (iv) the advice of the High-Impact AI Expert Committee (established under Article 25 of the Enforcement Decree, with members appointed by the Minister of the MSIT). The Minister of the MSIT may also prescribe additional factors through subordinate regulations.
 

4. Obligation to Ensure Safety and Reliability of High-Impact AI (Article 34 of the AI Basic Act, and Article 26 of the Enforcement Decree)

AI business operators providing high-impact AI systems are required to establish and implement specific measures[2] under the AI Basic Act to ensure system safety and reliability. The Enforcement Decree further requires AI business operators to disclose the following information on their websites (unless it constitutes a trade secret): (i) key risk management measures, (ii) essential information regarding the measures for explaining the AI system’s results and its training data, (iii) user protection measures, and (iv) the names and contact information of the individuals responsible for managing and overseeing high-impact AI systems. AI business operators are also required to retain documentary evidence demonstrating compliance with these safety and reliability obligations for five years.
 
Additionally, AI deployers may request AI developers to provide necessary information for complying with the relevant obligations, and AI developers are required to make reasonable efforts to cooperate with such requests.

 

5. High-Impact AI Impact Assessment (Article 35 of the AI Basic Act, and Article 27 of the Enforcement Decree)

Under the AI Basic Act, AI business operators that provide products or services using high-impact AI are required to make efforts to conduct an impact assessment. The Enforcement Decree further specifies the key elements that such assessment must cover and authorizes the Minister of the MSIT to establish additional details and methodologies. The impact assessment must address the following elements:
 

  • Identification of individuals whose fundamental rights could potentially be affected by products or services using high-impact AI systems

  • Identification of the types of fundamental rights that may be impacted in connection with high-impact AI

  • Details and scope of the potential social and economic impact on fundamental rights of individuals resulting from the use of high-impact AI

  • Usage patterns of the high-impact AI system

  • Quantitative or qualitative evaluation indicators used in the impact assessment and the methodology for calculating the result

  • Measures for preventing risks and remedying damages arising from the use of high-impact AI

  • Action plans for implementing any necessary improvements identified as a result of the impact assessment
     

6. Criteria for Designating a Domestic Agent (Article 36 of the AI Basic Act, and Article 28 of the Enforcement Decree)

AI business operators without an address or place of business in Korea must designate a domestic agent in writing and report this to the Minister of the MSIT. The Enforcement Decree stipulates that a domestic agent must be appointed if the business operator meets any of the following criteria:
 

  • Total revenue exceeding KRW 1 trillion in the previous (fiscal) year

  • Revenue from AI services exceeding KRW 10 billion in the previous (fiscal) year

  • An average daily number of domestic end-users exceeding 1 million during the three months preceding the end of the previous year

  • A request from the Minister of the MSIT for the submission of relevant materials and documents in connection with an actual or potential incident that substantially compromises the safety of AI service use due to a violation of the AI Basic Act
     

7. Introduction of the Statutory Basis for Exemption from Fact-Finding Investigations (Article 40 of the AI Basic Act, and Article 31 of the Enforcement Decree) and a Grace Period for Administrative Fines

The Minister of the MSIT may impose suspension and/or corrective orders after conducting a fact-finding investigation into certain violations of statutory obligations. However, the Enforcement Decree allows the authority to dispense with such investigation if: (i) sufficient evidence of the violations or alleged violations has already been secured, or (ii) the authority determines that a report or complaint was filed for improper purposes, such as for the complainant’s personal gain or obstruction of official duties.
 
Also, in a recent press release, the MSIT announced its plans to introduce a “grace period for administrative fines” during the initial enforcement phase of the AI Basic Act in order to minimize confusion for enterprises. The MSIT will decide the length of this grace period after reviewing stakeholder comments. Therefore, we recommend continuous monitoring of these developments.

 

8. Conclusion: Launch of the NAISC, and Announcement of the Initiatives for Korea's AI Basic Action Plan and National AI Computing Centers

Alongside the public release of the Enforcement Decree, the MSIT also announced the launch of the NAISC, established under the President, the initiative to develop the Korea AI Basic Action Plan, and the establishment of the National AI Computing Centers.
 
The AI Basic Act currently requires the creation of a “National Artificial Intelligence Commission” under the President to deliberate and decide on major policies regarding AI development and advancement. However, the MSIT has indicated its intention to amend the AI Basic Act to rename the body as the NAISC to strengthen its function, and to reflect these changes in the Enforcement Decree. Under the proposed changes, the NAISC would assume expanded roles in inter-ministerial policy coordination, compliance monitoring, and performance management. The MSIT also noted that it will establish a legal basis for appointing a Chief Artificial Intelligence Officer (“CAIO”) and forming and operating a CAIO Council.
 
At its first plenary session, the NAISC approved several key initiatives and discussed the direction of subordinate legislation, such as the Enforcement Decree of the AI Basic Act. The approved initiatives include “the Korea AI Basic Action Plan,” “the Initiative to Establish National AI Computing Centers to Build an AI Highway,” and “the Proposed Operating Rules for the NAISC.”
 
With respect to the National AI Computing Centers, the government announced its plans to establish a Special Purpose Company (“SPC”) to attract private sector investment and to leverage private expertise. The goal is to secure more than 15,000 high-tech GPUs by 2028, with a long-term target of expanding capacity to 50,000, thereby creating an “AI highway.” As the government plans to issue a project announcement and hold a briefing session in September, stakeholders are recommended to monitor these developments closely.
 
As the effective date of the AI Basic Act approaches, the Enforcement Decree further clarifies the obligations of AI business operators. However, the current text remains a draft, and the MSIT will collect stakeholder comments from industry, academia, civil society and relevant government agencies in September.
 
The draft Enforcement Decree delegates to the Minister of the MSIT the authority to determine specific matters such as: (i) the circumstances “where equivalent measures are taken under other relevant laws,” in which high-impact AI business operators may be exempt from the safety and reliability obligations, and (ii) the specific standards for imposing administrative fines and other details to be set out in appendices and attachments. The MSIT will finalize these details after reviewing comments from the relevant ministries and publish the final version thereafter.

Therefore, business operators considering or already engaging in AI-related businesses or services are encouraged to closely monitor the development of the forthcoming subordinate regulations. It is essential to assess the potential impact of the draft Enforcement Decree, as well as subsequent announcements and guidelines, and to proactively prepare for their implications for business and service operations.

 


[1]   (i) Guidelines on the Criteria and Examples of High-Impact AI, (ii) Notification and Guidelines on the Responsibilities of High-Impact AI Business Operators, (iii) Notification and Guidelines on the Obligation to Ensure AI Safety, (iv) Guidelines on Securing AI Transparency, and (v) Guidelines on AI Impact Assessment.
[2]   (i) Establishment and operation of risk management measures, (ii) establishment and implementation of measures to explain, to the extent technologically feasible, the final results produced by AI, the major criteria used in producing such results, and an overview of the training data used in the development and utilization of AI, (iii) establishment and operation of user protection measures, (iv) human supervision and oversight of high-impact AI, (v) preparation and retention of documentation that can confirm the measures taken to ensure safety and reliability, and (vi) other matters deliberated and resolved by the Commission to ensure the safety and reliability of high-impact AI.

 

