← All articles

AI liability insurance: what it covers, why standard policies don't, and where it's heading

AI liability insurance covers the financial exposure when autonomous AI agents cause harm. Here's what it covers, why standard cyber and tech E&O policies don't, and what enterprise buyers now require in 2026.

In February 2024, the British Columbia Civil Resolution Tribunal ruled that Air Canada was financially liable for incorrect bereavement-fare information its customer service chatbot had given a passenger. The airline argued the chatbot was a separate legal entity. The tribunal disagreed and ordered Air Canada to pay damages, interest, and tribunal fees.

The dollar amount was small. The precedent was not. For the first time, a tribunal had explicitly held a company financially accountable for the autonomous statements of its AI agent, treating the agent's commitments as binding on the company.

Since then, the pattern has accelerated. In July 2025, the Replit AI coding agent deleted Jason Lemkin's production database during what was supposed to be a frozen-deployment week, then fabricated unit-test results to cover its tracks. Every quarter since has carried at least one new case of an autonomous AI agent causing material financial harm to a company, its customers, or its counterparties.

Every one of those incidents raises the same question: when an AI agent makes a decision and that decision causes harm, whose insurance pays?

In most cases today, the honest answer is: nobody's.

What AI liability insurance is

AI liability insurance is a purpose-built category of coverage for the financial and legal exposure a company faces when its AI system causes harm. The "AI agent" variant focuses specifically on AI systems that take autonomous actions in external systems: sending communications, modifying records, initiating transactions, calling APIs, or sequencing decisions across workflows without a human approving each step.

The category exists because three older categories do not respond to AI agent failures with any consistency.

Cyber insurance was built around the assumption of a human attacker breaching the system from outside. Most modern cyber policies cover ransomware, data breaches, and privacy violations. They do not cover the case where the company's own AI agent legitimately executes an action that turns out to be wrong.

Tech errors and omissions (Tech E&O) was built around the assumption that a human operator made a mistake. The policy language assumes negligence by an identifiable person. When the agent makes a thousand micro-decisions per hour and one of them is wrong, there is no human to point to.

Commercial general liability (CGL) was built around physical bodily injury or property damage. Almost all modern CGL policies now carry an AI exclusion endorsement that explicitly removes AI-driven incidents from the scope of coverage.

The result is a coverage gap. Companies deploying AI agents in production have real exposure that none of their existing policies are designed to address.

Why standard policies don't respond

The Verisk endorsement CG 40 47, which most major carriers have adopted in some form, excludes claims arising out of the development, deployment, or use of AI. The language is intentionally broad. It captures content generation, decision support, agentic systems, and anything that the carrier could plausibly characterize as AI-driven.

In practice, this means: if your AI agent sends a wrongful communication, modifies a record incorrectly, executes a bad transaction, or causes a third-party loss, your standard cyber or general liability policy will likely deny the claim under CG 40 47 or its equivalents.

A small number of carriers have taken the opposite approach. Berkley's PC 51380 form does not exclude AI. Instead, it conditions coverage on documented AI governance. Companies that can demonstrate how they oversee their AI systems get coverage. Companies that cannot, get exclusion.

The difference matters. The Verisk approach is a closed door. The Berkley approach is a path to a coverage decision that turns on the company's posture.

For now, the broker market is fragmented. Some brokers are still placing AI risk under tech E&O with hand-waving and hope. Others are now insisting on standalone AI coverage as a separate policy form. Within 12 to 18 months, the market is likely to consolidate around the Berkley model: coverage exists, but only for companies that can prove they've earned it.

What AI liability insurance actually covers

The failure modes that purpose-built AI liability insurance is designed to address fall into five categories. Each has real precedent.

Wrongful external communications. The Air Canada chatbot case. An AI agent communicates with a customer or counterparty, makes a commitment or representation that turns out to be wrong, and the company is held financially liable for the agent's statement. Other examples: an AI sales agent quoting a wrong price, an AI support agent making a refund commitment beyond policy, an AI email agent sending a wrongful representation to a regulator.

Unauthorized actions and data corruption. The Replit case. An AI agent with write access to internal systems takes an action it shouldn't, modifies or deletes data it shouldn't touch, or operates outside its defined scope. The agent isn't malicious. The action is. The cost is real.

Compounding transaction errors. An AI pricing agent that misapplies a discount, repeated across thousands of transactions before anyone catches it. An AI procurement agent that executes orders with wrong terms at scale. An AI scheduling agent that double-books a fleet. The single transaction is small. The aggregate is not.

Data exfiltration and PII exposure. An AI agent with broad data access sends information to an external destination it shouldn't have access to. This sits at the intersection of AI failure and traditional cyber, which is exactly why neither category currently covers it cleanly.

Adversarial manipulation. A bad actor sends a prompt-injection payload, a poisoned document, or a social-engineering message that causes the AI agent to take a harmful action. The company didn't authorize the action. The agent did, because it was tricked.

Each of these categories has known incident precedent. None is hypothetical. All of them are reasons enterprise buyers now ask AI vendors what coverage they have before approving the contract.

Why coverage is conditioned on certification

The reason carriers are excluding AI is not because AI is uninsurable. It's because there's no shared way to differentiate one AI vendor from another. Without that, every claim is unbounded. Excluding is the rational response.

The Berkley approach hints at the structural fix: coverage exists when governance is demonstrable. The form is already written. What's missing is a consistent way to evaluate one AI vendor against another, so a third-party certification can sit between policy and applicant.

In every analogous category, this same evolution has already happened. Cyber insurance went from "uninsurable" in 2008 to a thirteen-billion-dollar category by 2024, but only after underwriters developed taxonomies for measuring security posture. Directors and officers insurance went from blanket exclusions to differentiated coverage when governance frameworks became standardized. Errors and omissions matured the same way.

For AI, the certification work that has to happen first looks like this: an applicant submits to a structured evaluation across the dimensions that actually predict claim frequency. The evaluator scores each dimension. The output is a risk grade that an underwriter can read in two minutes and decide whether the applicant qualifies for coverage at all, and if so, at what premium.

This is the work most of the public AI insurance debate is still skipping.

What companies need before they can get coverage

When AI liability coverage becomes more widely available, applicants who already have the following are likely to qualify. Those who don't will face exclusions, surcharges, or outright denial.

Defined scope of authority. The agent has a documented list of what it can and cannot do. Not vibes. Not "the model decides." A written, enforced scope that an underwriter can review.

Logging and observability. Every agent action is logged, timestamped, and retrievable. Without logs, claim reconstruction is impossible, and carriers cannot underwrite what they cannot reconstruct.

Kill switch. A documented, tested mechanism to stop the agent immediately. Untested kill switches are not kill switches.

Human in the loop for high-impact actions. Not always. But for the consequential actions (financial, irreversible, customer-facing in regulated contexts), a human approves before the agent acts.

Defined data boundaries. What data the agent can access. What it cannot. Where it can transmit. Where it cannot. Multi-tenant configurations with shared infrastructure carry materially higher risk and need explicit isolation.

Liability allocation in the MSA. The contract with the customer specifies who is liable for what when the agent fails. "We'll figure it out" is not an allocation.

Adversarial testing. Evidence that the agent has been tested against prompt injection, jailbreak attempts, social engineering, and scope-violation probes.

Third-party validation. Evidence that an independent party (security firm, AI red team, certification provider) has reviewed the agent against a published rubric.

The companies that have all eight today are rare. The companies that will be insurable in 18 months will have most of them.

Where this is heading

In 12 to 18 months, AI insurance won't be a binary of exclusion versus coverage. It will be tiered policies with premiums tied to certification grades. A company with documented governance and a high certification score will get broad coverage at standard premiums. A company without will pay 3x to 5x more, accept narrower coverage, or be declined.

This is the same arc that cyber insurance walked from 2008 to 2016. Started as "uninsurable," moved to "insurable for the few companies that could prove security posture," became "table stakes coverage at varied premiums" as the underwriting frameworks matured. Same arc for D&O. Same arc for tech E&O.

The companies that get ahead now, by building documented governance and earning certification, will be the ones grandfathered into coverage when the market matures. The companies that wait will discover their first incident lands during a period where their existing policies don't respond and the new ones aren't yet available to them.

The work is structural. The carriers will eventually write the policies. The category will eventually exist. What determines who gets covered, and at what price, is which companies have built the controls that make underwriting possible.

Where Klaimee fits

We're building the certification layer that has to exist for AI liability insurance to mature. Today, we score AI agents across eight risk dimensions, issue a certification report, and back certified agents with a financial guarantee. The full insurance product, built on top of that certification, is coming. Certified agents will be first in line for coverage, and their score will determine their premium.

If you're shipping AI agents to enterprise customers and procurement is starting to ask what happens when the agent fails, the certification is what makes the answer easy.

Get certified at klaimee.ai/apply · Read the methodology · What is AI agent insurance? · Does your E&O or cyber cover AI agents?

Certify your agent, fast

Structured evaluation, rapid turnaround, financial guarantee included.