
Provenance is an old word. In the art world, it refers to the documented history of an artwork — who created it, who owned it, where it's been. A painting without provenance can't be authenticated. It can't be insured. It can't be sold at auction.
Digital content has never had the equivalent. And for a long time, that didn't matter. Now it does.
Content provenance is the ability to answer, with certainty, three questions about any digital asset:

- Who created it?
- What has happened to it since — every edit, every transformation?
- What rights govern how it can be used?
Without answers to these questions, content is essentially anonymous. It can be copied, modified, misattributed, and used in ways the creator never intended — with no technical mechanism to challenge any of it.
Digital files have always carried metadata — information embedded in the file about its origin and properties. The problem is that metadata is fragile. It gets stripped when files are uploaded to social platforms. It disappears when images are screenshotted or reposted. It's lost when video is transcoded, when audio is compressed, when PDFs are converted.
Over decades of digital distribution, the norm became content without context. A photograph might be downloaded and reshared thousands of times with no record of who took it, when, or under what license. A video clip might be repurposed in ways that directly contradict the creator's intent, with no way to prove it.
This wasn't anyone's fault. The infrastructure to maintain provenance at scale simply didn't exist.
AI systems trained on the internet inherited all of this broken provenance. They trained on content scraped from across the web — content without authorship, without rights information, without context. The result is models that reproduce creative work without attribution, that generate content indistinguishable from authentic sources, and that operate in a legal gray zone because the content they were trained on had no verifiable status to begin with.
AI didn't create the provenance problem. It made the consequences impossible to ignore.
The C2PA standard — developed by Adobe, Microsoft, Google, the BBC, and the Associated Press — defines what verifiable content provenance looks like in practice. A C2PA-compliant asset carries a cryptographically signed manifest that records:

- Who created the asset
- Which tool or device produced it — including whether generative AI was involved
- What edits and transformations have been applied along the way
If any of this information is altered, the signature breaks. The tampering is detectable. Provenance is either intact or it isn't — there's no ambiguity.
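The tamper-evidence property is easy to see in miniature. The sketch below signs a toy manifest and shows that changing any field invalidates the signature. It is a simplified illustration only: the field names are invented for the example, and it uses a symmetric HMAC where real C2PA manifests use COSE signatures backed by X.509 certificate chains.

```python
import hashlib
import hmac
import json

# Stand-in for a real signing key; C2PA uses asymmetric keys with
# certificates, not a shared secret like this.
SIGNING_KEY = b"demo-signing-key"

def sign_manifest(manifest: dict) -> str:
    """Serialize the manifest deterministically, then sign the bytes."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_manifest(manifest: dict, signature: str) -> bool:
    """Re-sign and compare in constant time; any edit changes the bytes."""
    return hmac.compare_digest(sign_manifest(manifest), signature)

# Hypothetical manifest fields, for illustration only.
manifest = {
    "creator": "Jane Photographer",
    "created": "2025-03-01T12:00:00Z",
    "tool": "ExampleCam 2.1",
    "edits": ["crop", "color-balance"],
}
signature = sign_manifest(manifest)

print(verify_manifest(manifest, signature))   # True: provenance intact
manifest["creator"] = "Someone Else"          # tamper with a single field
print(verify_manifest(manifest, signature))   # False: the signature breaks
```

There is no partial state: the manifest either verifies exactly as signed, or it fails.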
Regulatory pressure is accelerating the shift from optional to mandatory. The EU AI Act requires C2PA-compliant metadata for AI-generated content by August 2026. South Korea and India have enacted AI labeling requirements. More regulations are coming.
Beyond compliance, provenance is becoming a competitive differentiator. Brands that can prove their content is authentic — that it wasn't AI-generated without disclosure, that it hasn't been manipulated, that rights are clear — are building trust that brands without provenance infrastructure simply can't claim.
Most organizations know provenance matters. Few have the infrastructure to maintain it. The gap isn't awareness — it's implementation. Embedding C2PA metadata, maintaining it through distribution, applying watermarks that survive transformation, and issuing provenance certificates at scale requires infrastructure that CMS platforms and DAMs don't natively provide.
That's the problem Limbo solves. See how it works.