Subordinate statutes and guidelines are under refinement as the Framework Act on the Development of Artificial Intelligence and Establishment of Trust (the “AI Basic Act” or the “Act”) approaches its effective date (i.e., January 22, 2026).
After releasing a preliminary draft of the Enforcement Decree of the AI Basic Act (the “Draft”) on September 8, 2025, the Ministry of Science and ICT (the “MSIT”) gathered comments from stakeholders and issued an advance notice regarding the bill for the Enforcement Decree (the “Bill”) (MSIT Notice No. 2025-0970) on November 12, 2025. The public comment period was open until December 22, 2025 (Link). In addition, on September 17, 2025, the MSIT released seven relevant draft notifications/guidelines (Link).
The subordinate regulations for each obligation imposed on AI business operators are as follows:
| Obligation | Act | Enforcement Decree | Notification/Guideline |
| --- | --- | --- | --- |
| Obligation to Secure Transparency | Article 31 | Article 22 | |
| Obligation to Ensure Safety of High-Performance AI | Article 32 | Article 23 | |
| Determination of High-Impact AI | Article 2, Subparagraph 4; Article 33 | Article 24 | |
| Obligation to Ensure Safety and Reliability of High-Impact AI | Article 34 | Article 26 | |
| AI Impact Assessment for High-Impact AI | Article 35 | Article 27 | |
In light of the subordinate statutes and guidelines, the key obligations of AI business operators under the AI Basic Act and their implications are as follows:
1. Obligation to Secure Transparency

2. Determination of High-Impact AI

3. Obligation to Ensure Safety and Reliability of High-Impact AI
- In common: AI business operators that comply with the requirements in Chapter 4 (Safe Management of Personal Information) and Chapter 5 (Guarantee of Data Subjects’ Rights) of the Personal Information Protection Act are deemed to have satisfied the obligations under each Subparagraph of Article 34, Paragraph (1) of the AI Basic Act, to the extent that those obligations relate to the processing and protection of personal information.
- Establishment and operation of risk management plans (Subparagraph 1): Where a product has a quality management system pursuant to Article 8, Paragraph (4) or Article 12, Paragraph (3) of the Digital Medical Products Act and is determined to be in compliance with the quality management standards pursuant to Article 24, Paragraph (2).
- Maintaining explainability (Subparagraph 2): Where the obligation to explain under Article 35-2 of the Credit Information Use and Protection Act is complied with, and the procedures for fulfilling the obligation to explain under Article 36-2 of the same Act are in place.
- User protection measures (Subparagraph 3): Where a financial product seller has fulfilled its obligations under Article 10 of the Financial Consumer Protection Act.
However, unlike the Draft, the Bill deletes the provision that deemed “cases where all the requirements for an electronic investment advisory device under Article 2, Subparagraph 6 of the Enforcement Decree of the Financial Investment Services and Capital Markets Act are satisfied” to constitute compliance with the “human management and supervision” measures under the AI Basic Act.
4. AI Impact Assessment for High-Impact AI

5. Obligation to Ensure Safety of “High-Performance” AI
- Personal Information Protection Commission: Announced the “AI Privacy Risk Management Model for Safe Utilization of AI Data” (December 2024) and the “Guidelines for Processing Personal Information for Development and Utilization of Generative AI” (August 2025)
- Korea Communications Commission: Announced the “Guidelines for Protection of Users of Generative AI Services” (February 2025)
- Ministry of Culture, Sports and Tourism: Announced the “Guidelines for Registration of Copyrights of Copyrighted Works Using Generative AI” and “Guidelines for Prevention of Copyright Disputes Due to Generative AI Outputs” (June 2025), and released the Draft of the “Guidelines for Fair Use of AI Copyrighted Works” (December 2025)
In addition, under the EU Artificial Intelligence Act (the “EU AI Act”), (i) the prohibition on certain AI systems took effect in February 2025, (ii) the rules for general-purpose AI took effect in August 2025, and (iii) the rules for high-risk AI systems will take effect partially in August 2026 and fully in August 2027. The EU AI Act can apply to business operators outside of the EU as well if their AI systems are offered or distributed to the EU market or if the systems’ outputs are used within the EU. Meanwhile, in California, Senate Bill No. 243, which strengthens AI chatbot providers’ obligations to protect children and adolescents, was enacted and will take effect in January 2026; it may also apply to operators outside of California that serve California users. Therefore, to ensure compliance, business operators already operating in, or planning to enter, overseas markets should assess whether these overseas AI regulations apply to them and implement measures to ensure compliance.
AI business operators should proactively establish internal compliance and risk management systems by, for example, clearly identifying all applicable regulations and self-assessing and managing the potential risks of the AI systems they operate and provide.