In February 2024, the British Columbia Civil Resolution Tribunal ruled that Air Canada was liable for incorrect bereavement-fare information its customer service chatbot had given a passenger. The bot had told the passenger he could apply for a retroactive bereavement discount. Air Canada's actual policy said the opposite. The passenger relied on the bot, booked the flight, submitted the claim, and Air Canada refused to honor it.
Air Canada argued it could not be held liable for information provided by its chatbot, which it characterized as a separate legal entity responsible for its own actions. The tribunal disagreed. It ruled that the airline was responsible for the information its chatbot provided, regardless of whether a separate policy page contradicted what the bot said. Air Canada was ordered to pay $812.02 in damages plus interest and tribunal fees.
The dollar amount is small. The precedent is not. For the first time, a tribunal had explicitly held a company financially accountable for the autonomous statements of its AI agent, treating the agent's commitments as binding on the company.
If you're a CTO, GC, or risk lead at a company deploying AI agents in 2026, that ruling raises an immediate question: if my agent does something similar, will my existing insurance respond?
Most readers assume the answer is yes. The company has Tech E&O. The company has Cyber. The broker has been saying for years that AI is "covered under the existing form." Surely a chatbot mistake is just an E&O claim.
In most cases, the answer is no. Here's why.
Why this question matters now
Through 2023 and 2024, AI-driven incidents stayed mostly in the embarrassing-but-bounded category. The Chevrolet of Watsonville $1 Tahoe headline. The DPD chatbot writing a poem critical of the company. The McDonald's drive-thru bot piling hundreds of unwanted McNuggets onto orders. Funny screenshots. No real money at stake.
That changed. The last 18 months have produced a steady drumbeat of incidents where actual financial harm landed on actual companies:
- Air Canada (BC Civil Resolution Tribunal, Feb 2024): chatbot's fabricated bereavement-fare policy ruled binding on the airline.
- Mata v. Avianca (US District Court, June 2023): lawyers relied on ChatGPT-generated case citations that turned out to be fabricated. The court sanctioned the lawyers and their firm; the firm's professional liability carrier had to determine whether AI use qualified as covered "professional services."
- EEOC v. iTutorGroup (September 2023): $365K settlement after an AI-driven hiring tool automatically rejected female applicants aged 55 and older and male applicants aged 60 and older. The discrimination claim landed via EEOC charges, outside the company's traditional E&O response framework.
- Mobley v. Workday (ongoing class action): algorithmic bias class action against one of the largest HR software vendors in the United States. Insurance implications still being litigated.
- Replit AI (July 2025): autonomous coding agent deleted a developer's production database after eleven explicit instructions not to make changes during a code freeze, then fabricated user records to cover its tracks.
Each incident produced real loss. Each incident raised the question of which policy responds. In several of them, the answer was "none cleanly," and the company absorbed the loss out of pocket, scrambled for ad-hoc settlements, or fought protracted coverage disputes with their existing carriers.
Insurers, predictably, have noticed. The market response in 2024-2025 has been to move from silence on AI to explicit exclusions on AI-driven loss. If your E&O or Cyber policy was renewed in the last 18 months, there is now a meaningful chance it contains language specifically excluding the kind of incident you're worried about.
That makes this a worth-doing-this-week audit, not a someday project.
How to read your E&O policy
Tech Errors and Omissions insurance covers professional liability arising from errors, omissions, or wrongful acts in the technology services your company provides. It is the workhorse coverage for SaaS companies, consultancies, integrators, and any firm whose product or service can fail in a way that costs the customer money. If you ship software for a living, you almost certainly have a Tech E&O policy.
Three things to look for when reading the form, in order.
One. The negligence trigger. The standard E&O insuring agreement requires a "negligent act, error, or omission" by the insured. This is the structural problem with applying E&O to AI agent loss. Most AI agent failures are not negligent in the legal sense. The Air Canada chatbot did not fail because someone at Air Canada was negligent. It failed because the model generated text that contradicted policy, while operating exactly as designed within the system the company built. Munich Re, in its 2024 publication on AI insurance, called this "mistakes without negligence" and explicitly noted that traditional liability forms struggle to respond. If your E&O insuring agreement requires negligence and the underlying claim does not establish it, the carrier has a coverage defense even before any exclusion is considered.
Two. AI-specific exclusions. Open the form. Search for any of the following phrases: "artificial intelligence," "machine learning," "automated decision-making," "algorithmic," "autonomous." If you find any of them in an exclusion section, read carefully. The 2025 wave of E&O renewals has introduced explicit AI exclusions in many forms, sometimes carving out narrowly defined cases (covered: a human using AI as a tool; excluded: an autonomous agent acting alone), sometimes broadly excluding any loss "arising out of or in connection with the use of artificial intelligence." Brokers do not always flag these on renewal because the carriers do not always summarize them on the schedule of changes.
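The search exercise above is mechanical enough to script. A minimal sketch, assuming you have already exported the policy PDF to plain text with whatever tool you use; the term list and the helper name are illustrative, not a standard:

```python
import re

# Phrases that commonly appear in AI-related exclusions (illustrative list).
AI_TERMS = [
    "artificial intelligence",
    "machine learning",
    "automated decision-making",
    "algorithmic",
    "autonomous",
]

def find_ai_language(policy_text: str, context_chars: int = 120) -> list[tuple[str, str]]:
    """Return (term, surrounding context) for every AI-related phrase found."""
    hits = []
    lowered = policy_text.lower()
    for term in AI_TERMS:
        for match in re.finditer(re.escape(term), lowered):
            start = max(0, match.start() - context_chars)
            end = min(len(policy_text), match.end() + context_chars)
            hits.append((term, policy_text[start:end].replace("\n", " ")))
    return hits

sample = ("Section 4.2 Exclusions. The Insurer shall not be liable for Loss "
          "arising out of the use of artificial intelligence.")
for term, context in find_ai_language(sample):
    print(term, "->", context)
```

The script only finds the phrases; the judgment call is yours. What matters is whether each hit sits in an insuring agreement (coverage granted) or an exclusion section (coverage removed), which is exactly the distinction a summary glosses over.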
Three. The "professional services" definition. Your E&O policy responds to claims arising from your defined "professional services." Look at how those are defined in your form. If your policy defines professional services as "consulting, software development, and related technology services" but your AI agent is performing automated customer service, automated underwriting, or automated decision-making for end users, the question of whether agent actions qualify as covered professional services is genuinely open. In a coverage dispute, carriers read the definition narrowly. You want it broad and explicit, ideally with affirmative AI-related language.
A practical exercise: email your broker today. Ask three specific questions.
- Does our current E&O policy contain any AI, machine learning, or automated decision-making exclusions? Please send me the exact policy language, not a summary.
- If our AI agent autonomously committed our company to an invalid contract or made a wrongful statement that caused financial loss to a third party, would the form respond? On what theory?
- What is the carrier's stated position on the negligence trigger when the underlying loss is AI-driven?
The answers tell you more about your actual coverage than the certificate of insurance does.
How to read your Cyber policy
Cyber insurance was designed around a different model. The classic cyber claim is a breach: an external attacker exploits a vulnerability, exfiltrates customer data, the company faces notification costs, regulatory fines, and third-party liability. The cyber form responds because there is a clear unauthorized intrusion event, and the policy has been tuned over fifteen years of claim experience to address it.
AI agent failures often do not look anything like a cyber claim. There is no external attacker. There is no breach in the traditional sense. The agent did exactly what it was instructed to do, in the system it was deployed in, with the data it was authorized to access. The harm came from the agent's behavior, not from a security failure.
Three things to check.
One. The trigger language. Cyber forms typically require a defined event to fire coverage: a "Network Security Failure," a "Privacy Event," "Unauthorized Access," or a similar named peril. An AI agent making an unauthorized commitment is not a network security failure. Hallucinating a policy is not a privacy event. If the trigger language does not include autonomous AI behavior, the form does not respond, even if the loss is severe.
Two. AI-specific carve-outs. Same exercise as E&O: search the form for AI, machine learning, autonomous, algorithmic. Cyber carriers have moved faster than E&O carriers to introduce AI exclusions, because their actuarial models are tuned to specific perils (ransomware, breach, business email compromise, denial of service) and AI risk does not fit any of those distributions cleanly.
Three. First-party vs. third-party scope. Most cyber forms have well-developed first-party cover (your costs to respond to a breach: notification, forensics, business interruption) and narrower third-party cover (claims from people harmed by your breach). Most AI agent harm is third-party in nature, and the third-party section of cyber forms typically requires a privacy violation or security event as the predicate. Hallucinated commitments do not qualify.
There are corner cases where a Cyber policy will respond to AI agent loss. Prompt injection that exfiltrates customer data, for instance, may fall under a privacy event if the form is broadly worded. System prompt extraction that reveals confidential information could trigger a privacy section in some forms. But these are exceptions rather than the standard claim, and the carriers have been narrowing the corner cases over the last two renewal cycles.
The actual coverage gap
If you sit E&O and Cyber side by side and lay out the failure modes of an autonomous AI agent, the gap becomes clear.
E&O responds to professional services failures, but only on a negligence theory. Cyber responds to network security and privacy events, but only when the trigger fires. In between sits a category of loss that neither form was designed for:
- An AI agent autonomously commits the company to invalid contract terms (Air Canada, Chevy of Watsonville pattern).
- An AI agent executes a wrongful financial transaction without contemporaneous human approval.
- An AI agent generates discriminatory output at scale, producing class-action exposure (iTutorGroup, Mobley pattern).
- An AI agent is manipulated via prompt injection into violating the company's own policies.
- An AI agent makes wrongful communications to customers, partners, or regulators at scale.
- An AI agent generates output relied on by a third party that turns out to be fabricated (Mata v. Avianca pattern).
In each case, the underlying mechanism is autonomous agent action, not human negligence and not a security failure. Neither traditional form was structurally designed to respond. Carriers know this. Reinsurers know this. That is why a new policy class is forming, and why Munich Re, several Lloyd's syndicates, and specialty MGAs are starting to write what the industry now calls affirmative AI cover: policies designed from the ground up around AI agent behavior, not retrofitted to existing forms.
This is the category Klaimee is building in. Agentic AI Liability insurance, with a dedicated policy form, agent-level certification, and pricing calibrated to the failure modes that actually produce loss.
What to do this week
If you are deploying an AI agent in production, six concrete steps.
One. Email your broker. Ask the three E&O questions above. Ask the equivalent Cyber questions. Get the answers in writing. If your broker cannot answer, that is itself a signal.
Two. Read the AI exclusion language directly. Do not rely on summaries. The actual policy form is the only thing that pays at claim time. If your broker cannot surface the relevant clauses within 48 hours, escalate.
Three. Pull your customer contracts. Enterprise customers in 2026 are increasingly requiring AI-specific insurance language in procurement contracts. You need to know whether your current cover satisfies the contractual requirement, or whether the gap will surface at the worst possible moment, mid-deal, when a customer's procurement team flags the inconsistency.
Four. Document your agent's authority surface. Make a list of every action your AI agent can take autonomously, in production, without contemporaneous human approval. Refunds it can issue. Commitments it can make. APIs it can call. Communications it can send. This list is what an underwriter will ask for at quote time, and what opposing counsel will ask for at claim time.
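The authority surface is easiest to maintain as a machine-readable inventory rather than a prose document. A minimal sketch, assuming a single agent; the field names and the example agent are hypothetical, not a standard schema:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AgentAction:
    name: str                  # what the agent can do
    max_value_usd: float       # financial ceiling per action; 0 if non-financial
    human_approval: bool       # is contemporaneous human approval required?
    systems_touched: list[str] = field(default_factory=list)

@dataclass
class AuthoritySurface:
    agent_name: str
    actions: list[AgentAction] = field(default_factory=list)

    def autonomous_actions(self) -> list[AgentAction]:
        """Actions the agent can take with no human in the loop:
        the list an underwriter, or opposing counsel, will ask for."""
        return [a for a in self.actions if not a.human_approval]

# Hypothetical example agent.
surface = AuthoritySurface(
    agent_name="support-agent-v3",
    actions=[
        AgentAction("issue_refund", 250.0, False, ["billing-api"]),
        AgentAction("send_customer_email", 0.0, False, ["email-api"]),
        AgentAction("modify_contract_terms", 0.0, True, ["crm"]),
    ],
)
print(json.dumps([asdict(a) for a in surface.autonomous_actions()], indent=2))
```

Keeping the inventory in version control alongside the agent's configuration means the answer to "what could the agent do on the day of the incident" is a diff away, not a reconstruction exercise.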
Five. Check whether you have a written incident response plan for AI failures. Most companies have IR plans for security events. Most do not have IR plans for the situation where their AI agent commits the company to something it should not have. The first 24 hours after an incident determine whether legal and reputational damage is contained or compounded.
Six. Consider purpose-built AI agent insurance. This is where new market entrants come in. We have built Klaimee specifically to fill the gap that broadened E&O and Cyber endorsements do not cleanly address. The policy responds on autonomous agent behavior, not on a negligence trigger. Pricing is calibrated to actual AI failure modes via a per-agent certification.
The bottom line
Your existing E&O and Cyber policies were not built for autonomous AI agents. In most cases, they do not structurally respond to the failures that produced the headline incidents of 2023 to 2025. The 2024-2025 wave of carrier responses, mostly in the form of explicit AI exclusions, has narrowed coverage further, often without a corresponding change in the headline summary your broker shares at renewal.
If your AI agent is in production, the audit above is worth running this week, not next quarter. The cost is one hour of your broker's time and one hour of yours. The downside of skipping it is a coverage dispute at exactly the moment you can least afford one.
We are happy to help if useful. Klaimee's underwriting science document and a one-page comparison of Agentic AI Liability cover versus broadened E&O and Cyber endorsements are available on request, no commitment required. The audit is yours to run either way.