On November 12, 2025, the Ministry of Science and ICT (“MSIT”) issued an advance notice regarding the bill for the Enforcement Decree (the “Bill”) of the Act on the Development of Artificial Intelligence and Establishment of Trust (the “AI Basic Act”) (MSIT Notice No. 2025-0970) with the advance notice period open until December 22, 2025.
As background, on September 8, 2025, the MSIT released a preliminary draft of the Enforcement Decree (the “Draft”) (refer to our newsletter (Link) dated September 10, 2025). The current Bill incorporates refinements in language as well as substantive changes to certain requirements. We summarize the principal amendments below.
1. Revisions to Labeling Obligations for Generative AI Outputs
Under the AI Basic Act, AI business operators providing generative AI or related products and services are required to clearly label the outputs as being generated by AI (Article 31(2); the “Labeling Obligations”). The Bill modifies the scope of this obligation and provides further details on acceptable methods of compliance, as compared to the earlier Draft.
1) Obligation to Provide Additional Information for Machine-Readable Labels
The Draft permitted AI business operators to fulfill the Labeling Obligations using either human- or machine-readable formats. While the Bill retains both options, it now specifically requires that, when a machine-readable format is used, users must also be notified at least once, via text or voice message, that the output is AI-generated.
2) Exemption When “Deepfake” Transparency Obligations Are Satisfied
The AI Basic Act establishes three separate transparency obligations: (i) advance notice obligations regarding generative AI (Article 31(1)), (ii) Labeling Obligations (Article 31(2)), and (iii) the notification or labeling obligations for “deepfake” content (Article 31(3)). According to the Guidelines on Securing AI Transparency announced by the MSIT (see our newsletter (Link) dated September 26, 2025), each obligation generally must be fulfilled separately. However, the Bill introduces an exemption: where notification or labeling requirements for “deepfake” content under Article 31(3) are met, the Labeling Obligations under Article 31(2) need not be separately fulfilled.
Draft dated September 8, 2025

Article 22 (Obligation to Secure Transparency for AI)
(2) An AI business operator may choose to use either human- or machine-readable formats to make the labeling under Article 31(2) of the Act when providing generative AI or any products, etc. utilizing the same.

Bill dated November 12, 2025

Article 22 (Obligation to Secure Transparency for AI)
(2) An AI business operator may choose one of the following methods to satisfy the labeling under Article 31(2) of the Act (other than for outputs that are difficult to distinguish from reality and for which users are clearly notified, or which are labeled, as AI-generated pursuant to Article 31(3) of the Act):
1. Human-readable formats; or
2. Machine-readable formats, subject to a mandatory user notification, at least once, by text or voice that the content is AI-generated.
2. Revisions to the Criteria for Recognizing Compliance for High-Impact AI
The AI Basic Act imposes obligations to ensure the safety and reliability of high-impact AI. The Act allows these obligations to be deemed satisfied if equivalent measures under other laws have been implemented (Articles 34(1) and (2)), with details set out in Annex 1 of the Enforcement Decree.
Compared to the Draft, the Bill introduces a new provision stating that AI business operators adhering to the requirements in Chapters 4 and 5 of the Personal Information Protection Act shall be deemed to meet the obligations under Article 34(1) of the AI Basic Act, to the extent that those obligations relate to the processing and protection of personal information (Annex 1, Paragraph 7 of the Bill). (However, where compliance is only partial, recognition will likewise be limited to the corresponding parts of the safety and reliability obligations.) The Bill also removes the prior provision under which full compliance with the “robo-advisor” requirements defined in Article 2, Subparagraph 6 of the Enforcement Decree of the Financial Investment Services and Capital Markets Act was recognized as satisfying the “human oversight” obligations under the AI Basic Act (Annex 1, Paragraph 4).
In addition, the Bill introduces specific additions, such as (i) mandating the MSIT to publish the results, statistics, and indices of any AI status surveys on its website (Article 29(3)); and (ii) authorizing a reduction of up to 50% in administrative fines based on mitigating factors, such as the severity and intent of the violation (Annex 2, Criteria for Imposition of Administrative Fines, Item C).
3. Next Steps
If you wish to submit comments on the Bill, you may do so online (Link) or by post by December 22, 2025.
The Bill provides greater clarity regarding the scope of specific obligations under the AI Basic Act and sets forth detailed compliance pathways. AI business operators planning to develop or launch AI-based products or services are strongly encouraged to carefully review the Bill and assess its potential impact on their business operations.
[Korean Version]