
Recent Developments in AI Basic Act

2026.01.21

Subordinate statutes and guidelines are under refinement as the Framework Act on the Development of Artificial Intelligence and Establishment of Trust (the “AI Basic Act” or the “Act”) approaches its effective date (i.e., January 22, 2026).

After releasing a preliminary draft of the Enforcement Decree of the AI Basic Act (the “Draft”) on September 8, 2025, the Ministry of Science and ICT (the “MSIT”) gathered comments from stakeholders and issued an advance notice regarding the bill for the Enforcement Decree (the “Bill”) (MSIT Notice No. 2025-0970) on November 12, 2025. The public comment period was open until December 22, 2025. In addition, on September 17, 2025, the MSIT released seven relevant draft notifications/guidelines.

The subordinate regulations for each obligation imposed on AI business operators are as follows:
 

Obligation to Secure Transparency (Act: Article 31; Enforcement Decree: Article 22)
  • Draft Guidelines on Securing AI Transparency (“Transparency Guidelines”)

Obligation to Ensure Safety of High-Performance AI (Act: Article 32; Enforcement Decree: Article 23)
  • Draft Notification on Obligations to Ensure Safety (“Safety Notification”)
  • Draft Guidelines for Ensuring Safety of AI (“Safety Guidelines”)

Determination of High-Impact AI (Act: Article 2, Subparagraph 4 and Article 33; Enforcement Decree: Article 24)
  • Draft Guidelines for Determining High-Impact AI (“Determination Guidelines”)

Obligation to Ensure Safety and Reliability of High-Impact AI (Act: Article 34; Enforcement Decree: Article 26)
  • Draft Notification on Responsibilities of Business Operators (“Responsibilities Notification”)
  • Draft Guidelines for Responsibilities of AI Business Operators of High-Impact AI (“Responsibilities Guidelines”)

AI Impact Assessment for High-Impact AI (Act: Article 35; Enforcement Decree: Article 27)
  • Draft Guidelines for AI Impact Assessment (“Impact Assessment Guidelines”)

 

In light of the subordinate statutes and guidelines, the key obligations of AI business operators under the AI Basic Act and their implications are as follows:
 

1. Obligation to Secure Transparency

With respect to the obligation to secure transparency, Article 31 of the AI Basic Act stipulates the obligations to (i) notify users in advance that certain operations use generative or high-impact AI (Paragraph (1), the “advance notice obligation”), (ii) label certain outputs as produced by generative AI (Paragraph (2), the “output labeling obligation”), and (iii) notify and label deepfake outputs (Paragraph (3)). However, unlike the Draft, the Bill exempts an AI business operator from the output labeling obligation where it has fulfilled its obligation to notify and label deepfake outputs. The specific methods for implementing the obligation to secure transparency are provided in Article 22 of the Enforcement Decree and the Transparency Guidelines. Regarding the output labeling obligation, the Transparency Guidelines warrant particular attention, as they set out the specific labeling method for each content type (i.e., image, video, audio and text).
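
For illustration only, since the Transparency Guidelines prescribe the actual labeling method for each content type: a minimal sketch of embedding a machine-readable label in a generated image’s metadata, assuming Python with the Pillow library (the metadata keys and values below are hypothetical, not those required by the guidelines), might look as follows.

    # Illustrative sketch only; the Transparency Guidelines define the
    # required labeling method for each content type (image, video,
    # audio, text). This merely shows one way to attach a machine-readable
    # label to a PNG image using Pillow's text-chunk support.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_generated_image(src_path: str, dst_path: str) -> None:
        """Re-save an AI-generated image with an embedded label."""
        image = Image.open(src_path)
        metadata = PngInfo()
        metadata.add_text("ai_generated", "true")           # hypothetical key
        metadata.add_text("generator", "example-model-v1")  # hypothetical value
        image.save(dst_path, pnginfo=metadata)

    label_generated_image("output.png", "output_labeled.png")

Whether a metadata-level label alone would suffice depends on the content type and the visible-labeling requirements in the guidelines themselves.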

Since the obligation to secure transparency is imposed on the AI business operator that ultimately provides AI products or services to users, AI developers that provide their products or services directly to users may also be subject to the same requirements.

However, as an exception, the obligation to secure transparency will not apply if (i) it is apparent that certain operations are based on high-impact or generative AI, or (ii) AI is used only for internal business purposes. Note that to qualify for exception (i), simply indicating the use of AI in the name of the product/service is not enough; users should be able to clearly identify “which part” of the product/service uses generative AI.
 

2. Determination of High-Impact AI

The AI Basic Act defines “high-impact AI” as “AI systems that may have a significant effect on, or pose risks to, human life, physical safety or fundamental rights,” and imposes more stringent obligations on business operators that provide such AI systems.

The Determination Guidelines introduce a two-step test for classifying high-impact AI: (i) first, the AI system must be utilized in one of the sectors specified in the Items of Article 2, Subparagraph 4 of the AI Basic Act (i.e., energy, drinking water, healthcare, medical devices, nuclear energy, biometrics, employment, credit evaluation, transportation, public services, student evaluation, etc.); and (ii) second, the AI system must have a significant impact on, or pose risks to, human life, physical safety or fundamental rights. The assessment of the system’s impact or risks should focus on negative impacts or risks and take into account the AI system’s intended purpose, function and context of use. Further, not only the direct users of the AI system but also those indirectly affected are taken into consideration.
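
As a purely illustrative aid, the two-step structure of the test can be sketched as a simple self-review helper, assuming Python; the sector labels and the boolean risk input below are hypothetical simplifications of the sector-specific criteria in the Determination Guidelines, not a substitute for them.

    # Illustrative sketch of the two-step test; not part of the
    # Determination Guidelines. Step 1 screens for the sectors enumerated
    # in Article 2, Subparagraph 4 of the Act; Step 2 reflects the
    # assessment of significant impacts or risks to human life, physical
    # safety or fundamental rights.
    HIGH_IMPACT_SECTORS = {  # hypothetical labels for the statutory sectors
        "energy", "drinking_water", "healthcare", "medical_devices",
        "nuclear_energy", "biometrics", "employment", "credit_evaluation",
        "transportation", "public_services", "student_evaluation",
    }

    def is_high_impact(sector: str, significant_impact_or_risk: bool) -> bool:
        if sector not in HIGH_IMPACT_SECTORS:
            return False  # Step 1 not met: outside the enumerated sectors
        # Step 2: the real assessment weighs negative impacts and risks in
        # light of intended purpose, function, context of use, and persons
        # directly or indirectly affected.
        return significant_impact_or_risk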

Since high-impact AI is subject to various statutory obligations, it is necessary to thoroughly review whether an AI system will be classified as high-impact AI at the initial planning stage. As the Determination Guidelines set out sector-specific “high-impact evaluation criteria,” it is important to actively utilize the guidelines to conduct a self-review and retain written records that explain the rationale for that determination. In addition, as AI business operators may request an official confirmation from the Minister of Science and ICT on whether a particular system falls within the “high-impact AI” category, they may consider using such a confirmation process strategically.
 

3. Obligation to Ensure Safety and Reliability of High-Impact AI

Business operators that provide high-impact AI are required to establish and operate risk management plans, maintain explainability of AI systems and their training data, and implement user protection measures.

The specific details of each obligation are explained in the Responsibilities Notification and Guidelines. For example, the establishment and operation of risk management plans, which are the core of the obligation to ensure safety and reliability of High-Impact AI (the “safety and reliability obligation”), are subdivided into: (i) establishment and implementation of risk management policies (establishment of plans → identification of risks → analysis and assessment of risks → handling of risks → improvement of policies), (ii) formation of a risk management organizational system, and (iii) cooperation with relevant organizations. In addition, since the Responsibilities Notification and Guidelines provide a self-inspection checklist for the safety and reliability obligation, each business operator should use the checklist to establish an internal compliance system.

Meanwhile, in order to prevent redundant regulation, the AI Basic Act provides that a person who has implemented safety measures under other laws and regulations that are equivalent to the safety and reliability obligation will be deemed to have met that obligation, and Annex 1 of the Bill sets out the specific standards and procedures for recognizing such measures. Below are some representative examples.
 

  • Common to all obligations: AI business operators that adhere to the requirements in Chapter 4 (Safe Management of Personal Information) and Chapter 5 (Guarantee of Data Subjects’ Rights) of the Personal Information Protection Act are deemed to have satisfied the obligations under each Subparagraph of Article 34, Paragraph (1) of the AI Basic Act, to the extent that those obligations relate to the processing and protection of personal information.

  • Establishment and operation of risk management plans (Subparagraph 1): Where a product has a quality management system pursuant to Article 8, Paragraph (4) or Article 12, Paragraph (3) of the Digital Medical Products Act and is determined to be in compliance with the quality management standards pursuant to Article 24, Paragraph (2) of the same Act.

  • Maintaining explainability (Subparagraph 2): Where the obligation to explain under Article 35-2 of the Credit Information Use and Protection Act is complied with, and the procedures for fulfilling the obligation to explain under Article 36-2 of the same Act are in place.

  • User protection measures (Subparagraph 3): Where a financial product seller has fulfilled its obligations under Article 10 of the Financial Consumer Protection Act.
     

However, unlike the Draft, the Bill deletes the provision that deemed satisfaction of all the requirements for an electronic investment advisory device under Article 2, Subparagraph 6 of the Enforcement Decree of the Financial Investment Services and Capital Markets Act to constitute compliance with the “human management and supervision” measures under the AI Basic Act.
 

4. AI Impact Assessment for High-Impact AI

Under the AI Basic Act, AI business operators that offer a product or service using high-impact AI should make efforts to conduct an impact assessment. The Impact Assessment Guidelines explain the matters to be considered at each stage of an impact assessment: (i) the pre-assessment stage, (ii) the main assessment stage, and (iii) the post-assessment stage.

During (ii) the main assessment stage, operators must record the various potential risk scenarios that may arise during actual service operations, identify the fundamental rights that may be affected in each scenario, and describe the impacts on those rights in detail. The Impact Assessment Guidelines not only present examples of affected fundamental rights and example scenarios for each right, but also provide examples of how to prepare an AI impact assessment. It is therefore advisable to conduct the impact assessment in good faith based on such examples.
 

5. Obligation to Ensure Safety of “High-Performance” AI

The AI Basic Act imposes an obligation to ensure safety on high-performance AI as well. This reflects an intent to apply rules similar to those for high-impact AI to high-performance AI that does not formally fall within the high-impact category, in light of its potential influence.

Accordingly, AI business operators that provide high-performance AI must establish and implement a system to identify, assess, mitigate and manage risks throughout the AI lifecycle and submit the results thereof to the MSIT.

The Safety Notification and Safety Guidelines provide specific criteria for determining the high-performance AI that is subject to these obligations. An AI system is deemed “high-performance AI” if it (i) involves a cumulative computational volume of 10^26 FLOPs or greater, (ii) incorporates the most advanced AI technologies available in light of the latest global AI developments, and (iii) poses significant risks of broadly affecting human life, physical safety or fundamental rights.
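
As a rough worked example of the computational-volume criterion, assuming it refers to cumulative training compute: a common heuristic for dense transformer training estimates compute as 6 × N × D FLOPs (N parameters, D training tokens). This heuristic is not part of the Safety Notification and is shown purely for orientation.

    # Rough heuristic only; the Safety Notification sets the authoritative
    # criteria. Training compute for a dense transformer is commonly
    # estimated as 6 * N * D FLOPs (N = parameters, D = training tokens).
    THRESHOLD_FLOPS = 1e26

    def estimated_training_flops(n_params: float, n_tokens: float) -> float:
        return 6.0 * n_params * n_tokens

    # Hypothetical model: 1e12 parameters trained on 2e13 tokens.
    flops = estimated_training_flops(1e12, 2e13)  # 1.2e26 FLOPs
    print(f"{flops:.1e} FLOPs; meets 10^26 threshold: {flops >= THRESHOLD_FLOPS}")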

Alongside the criteria for determining high-performance AI, the Safety Notification and Safety Guidelines also provide guidance on specific risk identification, assessment and mitigation measures, the operation of risk management systems and procedures for submitting implementation results. Thus, AI business operators should review them carefully.

Besides the AI Basic Act, each relevant government agency is preparing various guidelines for AI business operators in relation to the duties and laws under its jurisdiction.
 

  • Personal Information Protection Commission: Announced the “AI Privacy Risk Management Model for Safe Utilization of AI Data” (December 2024) and the “Guidelines for Processing Personal Information for Development and Utilization of Generative AI” (August 2025)

  • Korea Communications Commission: Announced the “Guidelines for Protection of Users of Generative AI Services” (February 2025)

  • Ministry of Culture, Sports and Tourism: Announced the “Guidelines for Registration of Copyrights of Copyrighted Works Using Generative AI” and “Guidelines for Prevention of Copyright Disputes Due to Generative AI Outputs” (June 2025), and released the Draft of the “Guidelines for Fair Use of AI Copyrighted Works” (December 2025)
     

In addition, under the EU Artificial Intelligence Act (the “EU AI Act”), (i) the prohibition on certain AI systems took effect in February 2025, (ii) the rules for general-purpose AI took effect in August 2025, and (iii) the rules for high-risk AI systems will take effect partially in August 2026 and fully in August 2027. The EU AI Act can apply to business operators outside of the EU if their AI systems are offered or distributed on the EU market or if the systems’ outputs are used within the EU. Meanwhile, in California, Senate Bill No. 243, which strengthens AI chatbot providers’ obligations to protect children and adolescents, was enacted and will take effect in January 2026; it may also apply to operators outside of California serving California users. Therefore, business operators already operating in or planning to enter overseas markets should assess whether these overseas AI regulations apply to them and implement measures to ensure compliance.
 

AI business operators should proactively establish internal compliance and risk management systems by, for example, clearly identifying all applicable regulations and self-assessing and managing the potential risks of the AI systems they operate and provide.

 
