Who pays when your AI makes a mistake? The new liability gap small businesses need to know about.
Three things happened in the first quarter of 2026. Colorado's AI Act went live in February. California followed with disclosure rules for automated decisions. And Berkshire Hathaway, Chubb, and Travelers quietly got approval to exclude AI-related damages from tens of thousands of general liability policies.
Most small business owners who adopted AI tools in the past eighteen months did so without changing their insurance. That made sense at the time. The tools were cheap, the vendors promised compliance, and nobody was asking hard questions about what happens when a chatbot gives bad advice or a resume screener filters out the wrong candidate.
The assumption that existing coverage would quietly absorb the risk is now breaking down. Not because of a wave of lawsuits (though those are coming), but because the insurance industry has decided it does not want to cover AI liability under standard policies.
What just changed
State regulators approved more than 80 percent of requests from major carriers to add AI exclusions to commercial general liability, errors and omissions, and directors and officers policies. The exclusions started appearing in renewed policies in January. Florida, Connecticut, and Maryland approved the highest volume of these filings.
The language varies, but the pattern is consistent. If an AI system you deploy causes harm, your standard policy may not respond. That includes the scheduling assistant that double-books a client meeting, the pricing algorithm that accidentally violates fair lending rules, the customer service bot that makes a promise your company cannot keep, or the applicant tracking system that screens out protected groups.
The scope is broader than most operators expect. Insurers are not just excluding cutting-edge AI agents. The exclusions often cover any automated decision-making tool, which can include the autocomplete feature in your CRM, the spam filter routing customer inquiries, or the inventory forecasting model someone built in Excel three years ago.
Why insurers are pulling back
The insurance industry has seen this pattern before. A decade ago, businesses argued that cyber incidents should be covered under existing property and liability policies because those policies did not explicitly exclude them. Some of those arguments succeeded. Insurers responded by carving cyber out of standard coverage and building standalone products.
AI is following the same trajectory, but faster. Insurers cannot price a risk they do not understand, and the range of ways AI tools can create liability is still expanding. Rather than wait for claims to clarify the exposure, carriers are excluding it now and creating separate AI liability products for businesses willing to pay for them.
Those standalone products exist, but they are expensive and inconsistent. Premiums range from a few hundred dollars to several hundred thousand annually, depending on coverage limits (typically $2 million to $50 million) and the specific AI use cases involved. Most are being sold to software companies building AI products, not to small businesses using off-the-shelf tools.
Where the exposure sits
Small businesses face AI liability from four directions, and most do not have clear coverage for any of them.
First, there is direct operational risk. Your AI tool makes a decision that costs someone money or denies them an opportunity. A loan application gets rejected by an automated underwriting system. A job candidate gets filtered out by a resume screener. A customer gets quoted the wrong price by a chatbot. If that decision violates anti-discrimination law, consumer protection rules, or contractual obligations, you own the liability.
Second, there is vendor risk. You use a third-party AI tool, and that tool fails or produces biased outcomes. The vendor's terms of service almost certainly disclaim liability and put the responsibility on you as the deployer. Your insurer may argue that you should have conducted due diligence before adopting the tool. The vendor's insurance will not cover your losses.
Third, there is data and privacy risk. Your team puts customer information, employee records, or proprietary data into an AI tool that was not designed to handle it securely. The data gets exposed, misused, or incorporated into a training dataset you never agreed to. Cyber policies are starting to cap AI-related data losses or exclude them entirely.
Fourth, there is reputational and IP risk. Your AI tool generates content that infringes someone's copyright, mimics a competitor's trademark, or makes claims you cannot substantiate. These risks intersect with advertising injury coverage, but insurers are increasingly arguing that AI-generated content does not fit within traditional policy definitions.
What the regulations actually require
The compliance picture is a patchwork. There is no federal AI law yet, though that may change. What exists now is a growing list of state rules that apply if you do business in those states, regardless of where your company is located.
Colorado's law, which took effect February 1, requires businesses deploying high-risk AI systems (those affecting employment, housing, credit, healthcare, education, or insurance decisions) to conduct impact assessments, exercise reasonable care to prevent algorithmic discrimination, and disclose to consumers when AI is making a consequential decision about them. Penalties go up to $20,000 per violation.
California's rules require businesses to notify users before an automated decision-making system interacts with them, particularly in customer service, pricing, or personalization contexts. Violations carry penalties up to $7,500 each, plus exposure to private lawsuits under consumer protection law.
Illinois requires notification when AI assists with hiring, performance reviews, promotions, or disciplinary actions. The state also passed rules requiring disclosure when customers interact with chatbots.
New York City's Local Law 144 has been in effect since 2023 and requires independent bias audits for any automated employment decision tool, regardless of company size. A five-person startup using an AI resume screener faces the same audit, notice, and penalty structure as a multinational.
Most small businesses using AI tools have not conducted bias audits, have not written AI usage policies, and have not assessed whether their tools qualify as high-risk under any of these frameworks. That gap creates liability whether or not your insurance responds.
What to do before renewal
The next time your business insurance comes up for renewal, three questions matter.
First, does your policy now exclude AI? Read the exclusions section. If you see language referencing artificial intelligence, automated decision-making, algorithmic systems, or machine learning, your coverage has likely narrowed. Ask your broker to explain exactly what is excluded.
Second, what AI tools are you actually using? Make a list. Include the obvious ones (chatbots, transcription services, content generators) and the less obvious ones (CRM auto-routing, applicant tracking systems, pricing tools, fraud detection). For each tool, identify what decisions it makes and what data it touches.
Third, where does your liability actually sit? For each tool, ask whether your vendor's terms of service put liability on you as the deployer. Ask whether the tool makes decisions that could be considered high-risk under Colorado, California, Illinois, or New York rules. Ask whether your current insurance would respond if that tool caused harm.
If the answer to the third question is no or unclear, you have four options. You can buy standalone AI liability coverage (expensive, and availability is limited for non-software companies). You can stop using the highest-risk tools (operationally difficult if they are embedded in your workflow). You can implement controls and documentation to reduce your exposure (time-consuming but necessary). Or you can accept the risk and move forward uninsured (not recommended, but that is what most small businesses are currently doing by default).
The compliance minimum
If you are going to keep using AI tools, three things reduce your liability exposure whether or not your insurance responds.
First, document what you are using and why. Create a simple inventory of AI tools in use, what business function each serves, and who approved the purchase. If you ever face a regulatory inquiry or lawsuit, being able to show you knew what you had is the baseline.
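That inventory can live in a spreadsheet. For teams that would rather keep it somewhere version-controlled and shareable with a broker or auditor, here is a minimal sketch in Python of what the record might look like. The tool names, columns, and file name are illustrative assumptions, not a required format.

```python
import csv

# Illustrative inventory of AI tools in use; every entry below is a placeholder.
# Columns mirror the questions in this article: what the tool does, what data it
# touches, what decisions it makes, and who approved it.
AI_TOOL_INVENTORY = [
    {
        "tool": "Example chatbot (vendor-hosted)",
        "business_function": "Customer service: answers pricing and scheduling questions",
        "data_touched": "Customer names, order history",
        "decisions_made": "Quotes prices, promises delivery dates",
        "approved_by": "Operations manager, 2025-03",
    },
    {
        "tool": "Applicant tracking system resume screener",
        "business_function": "Hiring: ranks incoming applications",
        "data_touched": "Applicant resumes, contact details",
        "decisions_made": "Filters candidates before human review",
        "approved_by": "HR lead, 2024-11",
    },
]


def write_inventory(path: str = "ai_tool_inventory.csv") -> None:
    """Write the inventory to a CSV that can be handed to a broker or auditor."""
    fieldnames = list(AI_TOOL_INVENTORY[0].keys())
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(AI_TOOL_INVENTORY)


if __name__ == "__main__":
    write_inventory()
```

The point is not the tooling but the columns: what a regulator, plaintiff's lawyer, or insurer will want to see is that you knew what each system does, what data it touches, and who signed off.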
Second, write a usage policy. It does not need to be complicated. Specify what employees can and cannot put into AI tools (no customer PII, no confidential business data, no employee records unless the tool is specifically designed for that purpose). Specify that AI outputs require human review before being sent to customers or used in decisions about people. Make sure the team has actually read it.
Third, audit your highest-risk tools. If you use AI for hiring, lending, tenant screening, insurance quoting, or pricing, get an independent bias audit or ask your vendor to provide evidence they have done one. If you use AI for customer-facing decisions in Colorado, California, or Illinois, confirm you are providing the required notices. These steps do not eliminate liability, but they demonstrate reasonable care, which matters both to regulators and to insurers deciding whether to cover a claim.
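For hiring and screening tools specifically, the audit has to be independent, but it helps to understand the headline number an audit reports: an impact ratio, meaning each group's selection rate divided by the highest group's selection rate, with ratios below roughly 0.8 (the EEOC's informal four-fifths rule) treated as a warning sign. The sketch below runs that arithmetic on invented numbers; it illustrates the concept and is not a substitute for an independent audit.

```python
from collections import Counter

# Hypothetical screening outcomes: (group label, whether the tool advanced the candidate).
# The groups and numbers are invented for illustration only.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]


def selection_rates(records):
    """Share of candidates in each group that the tool advanced."""
    advanced, total = Counter(), Counter()
    for group, selected in records:
        total[group] += 1
        if selected:
            advanced[group] += 1
    return {g: advanced[g] / total[g] for g in total}


def impact_ratios(rates):
    """Each group's selection rate relative to the highest-rate group."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}


rates = selection_rates(outcomes)
for group, ratio in impact_ratios(rates).items():
    flag = "  <-- below 0.8, possible adverse impact" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f}{flag}")
```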
The insurance industry is not wrong to treat AI as a distinct risk. The tools are powerful, the failure modes are not fully understood, and the legal landscape is still forming. But the gap between when insurers exclude AI and when affordable standalone coverage becomes widely available is going to leave a lot of small businesses exposed.
The operators who come through that gap in the best shape will be the ones who treated AI deployment as a risk management question from the start, not just a productivity tool they bought because everyone else was buying it.
Related: From ChatGPT to action: giving AI safe access to your business data • When your team won't touch the AI tools you bought