
South Korea and India AI Labeling Laws: What Global Brands Need to Know


The conversation about AI content regulation has largely focused on Europe. The EU AI Act gets the headlines. But South Korea and India — two of the largest and fastest-growing digital media markets in the world — have enacted their own AI labeling requirements. For global brands, ignoring them is not an option.

South Korea: The AI Basic Act and Content Disclosure

South Korea's AI Basic Act, passed in late 2024 and entering enforcement phases through 2025-2026, establishes mandatory disclosure requirements for AI-generated and AI-manipulated content. Key requirements include:

  • Disclosure labeling: Content generated or significantly modified by AI must be clearly labeled as such before distribution
  • Watermarking obligations: High-risk AI-generated content — deepfakes, synthetic media used in news or advertising — must carry technical watermarks that enable authenticity verification
  • Traceability requirements: Organizations must maintain records of AI-generated content sufficient to trace origin and verify compliance
  • Platform liability: Platforms distributing unlabeled AI content face significant liability, creating downstream pressure on content suppliers to comply
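To make the watermarking obligation concrete, here is a toy bit-level embedding sketch in Python. This is an illustrative assumption, not a compliance-grade technique: it hides a short ASCII tag in the least-significant bits of a raw byte buffer, whereas real regulatory watermarks must be imperceptible and survive compression and re-encoding. The function names and the carrier format are hypothetical.

```python
# Toy watermark embed/extract round trip (illustrative only; real
# compliance watermarks are robust to re-encoding, this one is not).

def embed_watermark(data: bytearray, tag: bytes) -> bytearray:
    # Flatten the tag into bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    if len(bits) > len(data):
        raise ValueError("carrier too small for tag")
    out = bytearray(data)
    for i, bit in enumerate(bits):
        # Overwrite the least-significant bit of each carrier byte.
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract_watermark(data: bytes, tag_len: int) -> bytes:
    tag = bytearray()
    for i in range(tag_len):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (data[i * 8 + j] & 1)
        tag.append(byte)
    return bytes(tag)

carrier = bytearray(range(256))   # stand-in for raw pixel bytes
marked = embed_watermark(carrier, b"AI")
assert extract_watermark(marked, 2) == b"AI"
```

The point of the sketch is the round trip: a verifier that knows the scheme can recover the "synthetic" tag from the distributed bytes, which is what the Korean verification requirement presupposes.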

The law applies to content distributed to South Korean audiences regardless of where the content was created. A US-based brand running an AI-generated advertising campaign in South Korea is subject to Korean disclosure requirements.

India: IT Rules and AI Content Guidelines

India's approach has evolved through amendments to the Information Technology Rules and guidance from the Ministry of Electronics and Information Technology (MeitY). Key requirements include:

  • Synthetic media disclosure: AI-generated images, video, and audio used in public communications must be labeled with clear disclosure of their synthetic origin
  • Watermarking for deepfakes: Content that depicts real individuals in AI-generated scenarios must carry technical markers enabling identification as synthetic
  • Platform obligations: Intermediary platforms are required to implement detection and labeling systems for AI-generated content
  • Pharmaceutical and healthcare content: Particularly strict requirements apply to AI-generated content in healthcare communications, given the potential for harm from synthetic medical misinformation

India's market scale makes compliance commercially significant. With over 800 million internet users and rapidly growing digital advertising spend, India is not a market global brands can treat as a compliance afterthought.

What These Laws Have in Common

South Korea's and India's requirements both converge on the same technical foundations as the EU AI Act:

  • Mandatory disclosure of AI involvement in content creation
  • Technical watermarking for synthetic media
  • Traceability and record-keeping obligations
  • Particular scrutiny for high-stakes categories: news, advertising, healthcare, government communications

This convergence is not coincidental. Regulators across jurisdictions are arriving at the same technical conclusions: metadata-based labeling is insufficient without watermarking, and watermarking is insufficient without a verifiable provenance chain. The C2PA standard, referenced explicitly in EU regulation and increasingly cited in Asian regulatory guidance, provides the technical framework that satisfies requirements across all three major jurisdictions.

The Global Compliance Challenge

For a multinational brand or media organization, the practical challenge is managing compliance across jurisdictions with different enforcement timelines, different specific requirements, and different liability structures — while maintaining a single content workflow.

Building separate compliance systems for the EU, South Korea, and India is not viable. The solution is a single provenance infrastructure layer that satisfies all three:

  • C2PA metadata generation and embedding at the point of content creation
  • Imperceptible watermarking that survives distribution across all markets
  • Provenance certificates for text and document content
  • Audit trails that satisfy traceability requirements in all jurisdictions
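The audit-trail piece of such a layer can be sketched as a signed per-asset record. Everything below is a hypothetical illustration: the field names, the HMAC signing scheme, and the JSON shape are assumptions for this sketch, not Limbo's actual API and not the C2PA manifest format (a real C2PA manifest is a certificate-signed binary structure embedded in the asset).

```python
# Hypothetical provenance/audit record for one AI-generated asset.
# Schema and signing scheme are illustrative assumptions only.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"demo-key"  # stand-in for a real managed signing credential

def provenance_record(content: bytes, generator: str, market: str) -> dict:
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,       # model or tool that produced the asset
        "ai_generated": True,         # the disclosure flag itself
        "market": market,             # jurisdiction the asset ships to
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    # Sign a canonical serialization so any field change is detectable.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

rec = provenance_record(b"<ad creative bytes>", "image-model-x", "KR")
```

One record like this per asset, written at creation time and keyed to a content hash, is the shape of evidence that traceability rules in all three jurisdictions ask for: what was generated, by what, for which market, and proof the record has not been altered.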

This is the architecture global brands need — and it's the architecture Limbo provides, as an API-first platform that integrates with existing content workflows regardless of scale or geography.

The Window to Act

Enforcement timelines vary across jurisdictions, but the direction is uniform: requirements are tightening, not loosening. The EU AI Act's August 2026 deadline, South Korea's ongoing enforcement, and India's evolving guidance all point to a world where unverified content faces increasing legal and commercial risk.

Global brands that build provenance infrastructure now are building compliance capability that compounds: each new jurisdiction's requirements become easier to satisfy when the underlying infrastructure is already in place.

Talk to Limbo about building a global content compliance strategy.
