[Artificial Intelligence (AI): A Guide to Regulations and Issues for AI Business] #9. Korea's New AI Law is in Effect: What Businesses Need to Know

▶ I.  Introduction

On January 22, 2026, the Framework Act on the Development of Artificial Intelligence and Establishment of Trust (the “AI Basic Act” or the “Act”) came into effect, making South Korea the first country in the Asia-Pacific region — and only the second jurisdiction in the world, after the European Union — to enforce comprehensive AI legislation.

This article provides an overview of the Act’s key provisions, summarizes the obligations it imposes on AI business operators (both domestic and foreign), and offers practical guidance for companies seeking to navigate this new regulatory landscape.


▶ II.  Overview of the AI Basic Act

A. Scope and Definitions

The Act applies broadly to all “AI Business Operators” offering AI-enabled products or services in the Korean market, a category that encompasses both “AI Development Business Operators” (those who develop and provide AI systems) and “AI Utilization Business Operators” (those who provide products or services using another party’s AI). Notably, the Act’s jurisdictional reach extends extraterritorially, meaning that foreign companies whose AI products or services affect the Korean market may fall within its scope. AI developed exclusively for national defense or security purposes is exempt.

The Act establishes a risk-based regulatory framework organized around three categories of AI systems, each subject to escalating obligations:

  • High-Impact AI
    Definition: AI systems that may significantly affect human life, safety, or fundamental rights, across 11 designated sectors (e.g., healthcare, employment, finance, transportation, education, public administration).
    Key obligations: pre-deployment self-assessment; risk management plans; explainability of outputs; human oversight; documentation retention (5 years); fundamental rights impact assessments.

  • Generative AI
    Definition: AI that generates text, images, audio, video, or other content by learning from input data.
    Key obligations: advance notification to users; labeling of AI-generated outputs; conspicuous labeling for deepfakes; non-visible watermarking (e.g., C2PA) accepted for other outputs.

  • High-Performance AI
    Definition: AI systems whose cumulative computational learning capacity exceeds 10^26 FLOPs, a threshold ten times higher than the EU's benchmark for general-purpose AI with systemic risk.
    Key obligations: lifecycle risk identification, assessment, and mitigation; incident response systems; reporting obligations to MSIT.
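For readers who prefer a schematic view, the three categories above can be expressed as a purely illustrative decision sketch. The sector set, field names, and function structure below are our own assumptions for illustration, not language from the Act; the sector list is abbreviated to the examples named above, and a real classification would turn on the Enforcement Decree's full sector definitions.

```python
# Illustrative only: the Act's three risk categories as a rough classifier.
# All names here are assumptions; the 11 sectors are abbreviated to the
# examples cited in the summary above.
HIGH_IMPACT_SECTORS = {
    "healthcare", "employment", "finance",
    "transportation", "education", "public administration",
}

HIGH_PERFORMANCE_FLOPS = 1e26  # cumulative training-compute threshold under the Act


def classify(sector, generates_content, training_flops):
    """Return the (possibly overlapping) risk categories a system may trigger."""
    categories = set()
    if sector in HIGH_IMPACT_SECTORS:
        categories.add("high-impact")
    if generates_content:
        categories.add("generative")
    if training_flops > HIGH_PERFORMANCE_FLOPS:
        categories.add("high-performance")
    return categories
```

Note that the categories are not mutually exclusive: a single system (for example, a generative diagnostic tool trained above the compute threshold) can trigger all three sets of obligations at once.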

B. Governance Structure

The Act establishes a multi-layered institutional architecture. At the apex sits the National AI Strategy Committee, chaired by the President, which serves as the top cross-ministerial policymaking body. Below it, the AI Policy Center (housed within the Ministry of Science and ICT, or “MSIT”) handles day-to-day policy development and societal impact analysis. The AI Safety Research Institute, launched in 2024, is charged with evaluating AI risk, developing safety benchmarks, and addressing content authentication issues such as deepfakes. MSIT itself functions as the lead regulatory authority, with investigative and enforcement powers.


▶ III.  Key Obligations for AI Business Operators

A.  Transparency and Labeling

All operators deploying high-impact AI or generative AI must provide advance notification to users. Generative AI outputs must be labeled, with a heightened “conspicuous labeling” standard for deepfake content — defined as AI-generated material that is difficult to distinguish from reality. An exception exists for artistic or creative expression, where adapted labeling methods are permitted. Non-visible watermarking compliant with provenance standards such as C2PA is accepted for non-deepfake generative outputs. The Act also carves out limited exceptions for AI used solely for internal business purposes or where the AI basis of a system is self-evident.

B.  High-Impact AI Compliance

Operators of high-impact AI systems bear the most substantial obligations. Before deploying such a system, the operator must conduct a self-assessment to determine whether the system falls within one of the 11 designated sectors. Operators may also request a formal confirmation from MSIT—a useful mechanism for managing regulatory uncertainty. Once classified, the operator must implement risk management plans, ensure explainability of AI outputs, maintain user protection measures, institute human oversight mechanisms, and retain documentation for a minimum of five years. The Act encourages (though does not strictly mandate) fundamental rights impact assessments, and operators who voluntarily conduct them may receive preferential treatment in public procurement.

C. Foreign Operator Requirements

Foreign AI businesses that do not maintain a physical office in Korea must designate a domestic representative if they meet any one of the following thresholds: total revenue exceeding KRW 1 trillion (approximately US$681 million), AI services revenue exceeding KRW 10 billion (approximately US$6.8 million), or an average of more than one million daily Korean users over the preceding three months. The domestic representative bears legal responsibility for responding to government inquiries and supporting the foreign operator’s compliance efforts.
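Because any one of the three thresholds suffices, the rule reduces to a simple disjunction. The following sketch restates it for clarity; the function and parameter names are our own, and it applies only to foreign operators without a physical office in Korea.

```python
def needs_domestic_representative(total_revenue_krw,
                                  ai_services_revenue_krw,
                                  avg_daily_korean_users):
    """True if any one of the Act's three thresholds is met (foreign
    operators with no physical office in Korea). Names are illustrative."""
    return (
        total_revenue_krw > 1_000_000_000_000        # KRW 1 trillion in total revenue
        or ai_services_revenue_krw > 10_000_000_000  # KRW 10 billion in AI services revenue
        or avg_daily_korean_users > 1_000_000        # >1M avg daily Korean users (3-month avg)
    )
```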


▶ IV.  Enforcement and Penalties

Perhaps the most telling feature of the Act is the modesty of its penalty regime. Maximum administrative fines are capped at KRW 30 million (approximately US$21,000) — a figure that stands in sharp contrast to the EU AI Act’s potential fines of up to €35 million or 7% of global annual turnover. Criminal penalties apply only in narrow circumstances, specifically for the unauthorized disclosure of confidential business information obtained through duties under the Act, and carry a maximum sentence of three years’ imprisonment. Equally significant is MSIT’s announced enforcement posture. The Ministry has committed to a minimum one-year grace period — extending through approximately January 2027 — during which guidance, consultations, and corrective orders will take priority over sanctions. 


▶ V.  Practical Takeaways

While the enforcement grace period provides a meaningful runway, businesses would be well advised to use this window strategically rather than passively. The following steps merit consideration:

  • Conduct a preliminary self-assessment. 
Determine whether any of your AI products or services fall within the Act’s 11 high-impact sectors. If the classification is ambiguous, consider requesting a formal determination from MSIT — the Act provides a mechanism for doing so, and early engagement with the regulator can reduce future uncertainty.


  • Implement transparency and labeling infrastructure.
For generative AI operators, the most immediate practical obligation is establishing systems for advance user notification and output labeling, such as C2PA-compatible watermarking infrastructure, which aligns with both the Act’s requirements and emerging global standards.


  • Designate a domestic representative.
Foreign operators meeting the revenue or user thresholds should appoint a qualified domestic representative in Korea without delay. This individual or entity will serve as the primary point of contact for regulatory communications.


  • Monitor secondary regulations closely.
The Enforcement Decree is largely finalized, but sector-specific guidelines continue to evolve. Notable developments include the Ministry of Food and Drug Safety's pioneering guidelines on generative AI medical devices, the Financial Services Commission's AI guidelines for the financial sector, and ongoing revisions to the personal data protection framework as it intersects with AI governance.


  • Build compliance documentation during the grace period.
Risk management plans, human oversight protocols, incident response procedures, and record-keeping systems should be developed now, while the regulatory environment remains consultative rather than punitive. 


▶ VI.  Conclusion

The AI Basic Act represents a considered effort to chart a middle course between the EU's prescriptive regulatory model and the lighter-touch approaches that have prevailed elsewhere in Asia. By pairing substantive but measured compliance obligations with robust industrial promotion mechanisms and a generous enforcement runway, Korea has signaled that it intends to be both a serious regulator and an attractive destination for AI innovation. Businesses operating in or targeting the Korean market would benefit from a thorough analysis of where they sit within the Act's new legal framework and whether any immediate action is needed.


This article is for informational purposes only and does not constitute legal advice. For guidance tailored to your specific circumstances, please contact the professionals at SEUM Law. Copyright ©2026 SEUM Law.