
AI Basic Act and the Revised Key Guidelines Now in Effect

2026.01.27

The Act on the Development of Artificial Intelligence and Establishment of Trust (the “AI Basic Act”) and its Enforcement Decree (the “Enforcement Decree”) officially took effect on January 22, 2026. In addition, key AI regulatory guidelines, initially released in September 2025, have been revised and published based on stakeholder briefings and public consultations.

This newsletter covers (i) major revisions to the AI Transparency Guidelines (the “Transparency Guidelines”), (ii) the key provisions of the Enforcement Decree, (iii) the grace period under the AI Basic Act, and (iv) the Ministry of Science and ICT (“MSIT”)’s plan for the AI Support Desk (the “Support Desk”) and the system improvement research team established in connection with the AI Basic Act. For a detailed analysis of the AI Basic Act’s subordinate statutes and key guidelines, please see here (Link).
 

1.

Key Provisions of the Transparency Guidelines

Among the five guidelines published concurrently with the implementation of the AI Basic Act, the Transparency Guidelines underwent the most substantial revisions following industry consultations (e.g., different levels of labeling obligations by service type) to allow for operational flexibility.
 

(1)

Scope of the Obligation to Ensure Transparency

The Transparency Guidelines clearly provide that “the obligation to ensure transparency” applies only to “AI deployers” – entities that directly provide AI products and services to users. Accordingly, internal use of AI as a productivity “tool” by employees does not trigger these obligations.
 

(2)

Key Requirements of the Obligation to Ensure Transparency

With respect to the labeling obligation for generative AI (Article 31(2) and (3) of the AI Basic Act), the Transparency Guidelines distinguish between (i) AI outputs provided within the service environment and (ii) AI outputs exported outside the service environment (e.g., via download or sharing). This distinction effectively permits more flexible labeling in the former case while requiring stricter labeling in the latter.
 

Category: AI Products Provided Only Within the Service Environment

  • Domain: AI products displayed or provided only within the service environment (e.g., UI), such as on-screen or in-app

  • Labeling Obligation: Flexible labeling. Disclosure may be made through the UI, including via a symbol or logo.

  • Key Requirements & Examples:
    – For interactive services (e.g., chatbots), the obligation may be satisfied by providing pre-use guidance and/or displaying a symbol or logo on screen.
    – For games or virtual environments (e.g., metaverse), guidance may be provided at log-in, or by indicating that the characters are AI-generated.

Category: AI Products Exported Outside the Service Environment

  • Domain: AI products exported outside the service environment through downloading or sharing

  • Labeling Obligation: Strict labeling. Clear and reliable disclosure that the content was created using AI.

  • Key Requirements & Examples:
    – For AI-generated content that users may download or share (e.g., texts, images, videos), AI business operators must either (i) provide a “human-readable indication” (e.g., a visible or audible watermark), or (ii) where only a machine-readable indication is used (e.g., metadata), provide users with both text and audio guidance that the content is AI-generated.
    – For virtual content (e.g., deepfakes) that may be difficult to distinguish from reality, users must be clearly informed to prevent misunderstanding; however, where the content qualifies as an artistic/creative expression, labeling may be implemented in a manner that does not interfere with artistic appreciation.

 

2.

Major Changes to the Enforcement Decree

The Enforcement Decree was finalized and took effect following the legislative notice period (November to December 2025) and the cabinet approval. While it does not introduce major new obligations, the revised Enforcement Decree provides more detailed guidance regarding AI business operators’ obligations, including the transparency obligations (Article 23 of the Enforcement Decree).
 
Notably, the revised Enforcement Decree clarifies the category of AI business operators required to designate a domestic agent, namely: “AI business operators who have been subject to administrative fines for failing to comply with suspension and/or corrective orders issued by the MSIT.” This revision replaces the earlier, more ambiguous reference to AI business operators deemed to be “at risk of incidents.”

 

3.

One-Year Grace Period for the AI Basic Act

The MSIT announced its plans to implement a grace period of at least one year to allow companies to adapt and to minimize disruption. During this grace period, fact-finding investigations are expected to be conducted only in exceptional circumstances, such as where serious social issues arise (e.g., fatal accidents or human rights violations). The MSIT has also indicated its plan to revise the Transparency Guidelines based on stakeholder feedback collected during the grace period. AI business operators should closely monitor these developments and consider participating in relevant consultations.
 

4.

MSIT Support Desk and System Improvement Research Team

At a public briefing held on January 20, 2026, shortly before the AI Basic Act took effect, the MSIT emphasized that the AI Basic Act is intended to promote, not restrict, AI innovation. The MSIT further announced plans to refine the Act and related guidance throughout the grace period by incorporating feedback from industry and other stakeholders.
 
To this end, the MSIT plans to operate an AI Basic Act Support Desk through the Korea AI and Software Industry Association (“KOSA”), aimed at supporting companies experiencing compliance-related challenges. The Support Desk is expected to be staffed by organizations and professionals with AI expertise and will offer confidential consultations. The MSIT has indicated that general inquiries will be addressed within 72 hours, and more complex legal inquiries within 14 days.
 
In addition, the MSIT plans to launch a system improvement research team beginning in February 2026 to continuously review potential improvements and revisions to the AI Basic Act and the applicable guidelines, based on feedback from industry, civil society, and academia.

 
