On November 20, 2025, the National Intelligence Service (the “NIS”), together with the National AI Security Center, released the AI Risk Casebook (the “Casebook”). The Casebook identifies 70 representative incidents and damage scenarios that may arise across national security, disaster management, industrial operations, economy, society, and public welfare, as AI becomes more integrated into public and private sector activities. It also contains recommended prevention and response measures.
The Casebook classifies potential AI-related risk areas into four domains: (i) national security, (ii) disasters, emergencies and infrastructure, (iii) the economy, industry and healthcare, and (iv) society, public welfare and human rights. It also classifies threat types into four groups: (i) AI system failures, (ii) misuse and malicious use, (iii) attacks targeting AI systems, and (iv) social-structural changes driven by AI adoption. With Korean companies rapidly deploying AI tools to enhance operations and services, the Casebook is positioned as a key reference point for identifying AI-related risks and developing corresponding governance and mitigation strategies.
The Casebook highlights numerous practical scenarios, including:
- API Dependency Risk: Suspension of an external AI provider’s API, for example due to an unexpected outage, could disrupt operations at dependent organizations (e.g., public agencies, major corporations, and financial institutions), impairing customer services, transactions, and public-facing functions, and potentially causing financial and social impact.
- Operational Accidents from AI Malfunction: Failures in the sensors or perception systems of AI-enabled collaborative robots in manufacturing or industrial environments may result in undetected human presence, leading to collisions or injuries.
- Market Disruption from AI-Amplified Disinformation: Sophisticated fake news campaigns targeting financial institutions’ AI-driven trading models may trigger automated sell-offs, destabilizing stock prices and eroding market confidence.
- Security Vulnerabilities and Data Leakage: “Jailbreak” or similar attacks on AI chatbot tools deployed on corporate intranets may circumvent guardrails and expose sensitive customer and corporate information.
- Copyright-Related Risks: The use of AI-generated content (e.g., images, music) may infringe third-party intellectual property rights or disrupt existing cultural and creative ecosystems, potentially entangling companies in legal disputes.
To reduce AI-related risks across sectors such as industry and healthcare, the Casebook recommends measures including: (i) assessing dependency on third-party AI services and establishing backup and fail-safe systems, (ii) enhancing physical separation and safety controls for AI-powered robots in workplaces, (iii) implementing alert systems to detect unauthorized data transfers, (iv) strengthening AI capabilities for evaluating the credibility of information, and (v) providing workforce training and developing security incident response protocols.
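As an illustration of measure (i), the following is a minimal sketch of a fallback pattern for dependence on an external AI provider’s API. The provider functions (`call_primary`, `call_backup`) and the degraded-response behavior are hypothetical placeholders for illustration only; they are not drawn from the Casebook, and an actual implementation would use each vendor’s own SDK and the organization’s incident-response procedures.

```python
# Minimal sketch of a fallback pattern for third-party AI API dependency.
# Provider functions below are hypothetical stubs, not real vendor APIs.

import logging
from typing import Callable, Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-fallback")


class ProviderError(Exception):
    """Raised when an AI provider call fails or times out."""


def call_primary(prompt: str) -> str:
    # Placeholder for the primary vendor's API call (hypothetical).
    # Simulates the outage scenario described in the Casebook.
    raise ProviderError("primary provider unavailable")


def call_backup(prompt: str) -> str:
    # Placeholder for a secondary vendor or self-hosted model (hypothetical).
    return f"[backup model] response to: {prompt}"


def resilient_completion(
    prompt: str,
    providers: Optional[list[Callable[[str], str]]] = None,
    degraded_reply: str = "Service is temporarily limited. Please try again later.",
) -> str:
    """Try each provider in order; fall back to a static degraded reply
    so customer-facing functions stay responsive during an outage."""
    for provider in providers or [call_primary, call_backup]:
        try:
            return provider(prompt)
        except ProviderError as exc:
            log.warning("provider %s failed: %s", provider.__name__, exc)
    return degraded_reply


if __name__ == "__main__":
    print(resilient_completion("Summarize my account activity."))
```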
Organizations adopting AI on an enterprise-wide basis should consider: (i) conducting an internal AI inventory to identify AI systems currently deployed or used by employees, and assessing legal, technical, security, and operational risks, (ii) developing and implementing internal AI guidelines, safety policies, and controls to support responsible and secure use of AI in both internal workflows and customer-facing services, and (iii) establishing an enterprise-level AI governance framework, including organizational structures, risk-management processes, and internal policies, to support continuous oversight and ensure business continuity.
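To make point (i) concrete, the following is a minimal sketch of what an internal AI inventory record might capture. The field names and the prioritization rule are illustrative assumptions, not prescribed by the Casebook; an actual inventory should reflect the organization’s own risk taxonomy and the Casebook’s four risk domains.

```python
# Minimal sketch of an internal AI inventory record (point (i) above).
# Field names and the review rule are illustrative assumptions only.

from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    name: str                   # e.g., internal chatbot, trading model
    owner: str                  # accountable business unit
    vendor_dependency: bool     # relies on an external API (cf. API risk)
    data_sensitivity: str       # "public" | "internal" | "confidential"
    customer_facing: bool       # exposed to external users
    risk_notes: list[str] = field(default_factory=list)


def high_priority(systems: list[AISystemRecord]) -> list[AISystemRecord]:
    """Flag systems combining vendor dependency with sensitive data or
    customer exposure, for earlier review under the governance framework."""
    return [
        s
        for s in systems
        if s.vendor_dependency
        and (s.customer_facing or s.data_sensitivity == "confidential")
    ]


if __name__ == "__main__":
    inventory = [
        AISystemRecord("Customer support chatbot", "CS Ops",
                       vendor_dependency=True,
                       data_sensitivity="confidential",
                       customer_facing=True),
        AISystemRecord("Internal document search", "IT",
                       vendor_dependency=False,
                       data_sensitivity="internal",
                       customer_facing=False),
    ]
    for s in high_priority(inventory):
        print("Review first:", s.name)
```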