BRIEFING 010

Do small businesses need to comply with AI regulations? What actually applies and what you can ignore.

Sixty-five percent of small businesses say they're worried that new AI regulations could harm their operations. The concern is understandable. Colorado's AI Act took effect February 1, 2026, California added new disclosure requirements for AI systems, and on August 2, 2026, the majority of the EU AI Act's rules became fully applicable. But most of the alarm is premature. Here's what small business operators actually need to know.

The gap between headlines and reality

If you've been reading business media lately, you might think every company using ChatGPT needs a compliance officer. Ninety-five percent of small businesses expect compliance challenges from proposed AI laws, but the actual enforcement surface is much narrower than the fear suggests.

The reason for the confusion is simple. Most AI laws passed in 2026 target specific high-risk scenarios (hiring, lending, housing, healthcare decisions) or apply only to companies meeting revenue or data volume thresholds that most 10- to 50-person operators don't hit.

That doesn't mean you can ignore the topic entirely. But it does mean you can stop worrying about the things that don't apply to you.

Three scenarios that actually require attention

1. You use AI to make hiring, promotion, or termination decisions

Three states lead the charge with immediate compliance obligations that affect common small business AI use cases. In California, the AB 1008 amendment brings AI systems capable of outputting personal information under the California Consumer Privacy Act (CCPA), and the California Privacy Protection Agency has approved final regulations requiring pre-use notices for automated decision-making technology: businesses using AI for customer service, pricing, or content personalization must notify users before the AI interaction begins.

If you're screening resumes with an AI tool, scoring candidates, or using any automated system to filter applicants, you're in scope. California's Civil Rights Department has issued comprehensive regulations on automated decision systems in employment, requiring record-keeping, bias testing, and accommodation procedures.

What you need: written documentation of what the tool does, how it was tested for bias, and clear disclosure to applicants that AI is part of the process. You don't need a legal team. You do need to be able to answer the question "how does this system work?" in plain terms.

2. You serve customers in California, Colorado, or the EU and use AI for pricing, credit, or access decisions

Colorado's AI Act, effective February 1, 2026, is the most comprehensive state-level AI legislation in the U.S. For high-risk AI systems (those making consequential decisions about housing, employment, education, healthcare, insurance, or lending), it requires detailed impact assessments evaluating discrimination risks. Even small businesses using AI for loan applications, insurance quotes, or tenant screening must comply if they serve Colorado customers.

You can sit in the U.S. and still face the EU AI Act when you sell into the EU, serve EU users, or support an EU customer through a partner. The Act sorts systems into four risk tiers: unacceptable, high-risk, limited, and minimal. Hiring, credit scoring, education access, and parts of healthcare can fall into the high-risk tier.

If your business operates entirely in states without AI-specific laws and you don't serve EU customers, this likely doesn't apply. But if you do cross those lines, the compliance obligation follows the customer, not your headquarters.

3. You're handling customer data through third-party AI tools

This is the scenario that catches most operators off guard. Data privacy is the most consistently cited concern, particularly around how customer data is handled when processed through AI models. Small businesses with fewer resources to dedicate to compliance research show the highest anxiety on this issue.

According to OpenAI, 27 percent of ChatGPT consumer messages in June 2025 were work-related, meaning more than a quarter of consumer usage involves professional or business content. Much of that is happening on personal, free accounts rather than on secure enterprise versions.

The risk isn't regulatory in most cases. It's contractual and reputational. If an employee pastes a customer list into a free AI tool to draft an email campaign, you've just sent regulated data to a third party without a data processing agreement. That can void your own privacy commitments, trigger breach notification rules, or create liability under your vendor contracts.

What you need: a written policy that says which AI tools are approved for which data types. It doesn't need to be complicated. "Don't put customer names, emails, or payment info into unapproved tools" is a policy. Enforcing it is harder than writing it, but writing it is step one.
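
If you want a lightweight technical backstop for that policy, here is a minimal sketch of one way to screen text for obvious customer identifiers before it goes into an unapproved tool. Everything in it is illustrative: the file name, the patterns, and the sample draft are assumptions, and a regex scan only catches formats you anticipate, not every kind of personal data.

    # prompt_check.py -- rough pre-send scan for obvious customer identifiers.
    # Illustrative only: these patterns catch common formats (email addresses,
    # card-like digit runs, US-style phone numbers), nothing more.
    import re

    PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "card-like number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
        "phone number": re.compile(r"\b\d{3}[ .-]?\d{3}[ .-]?\d{4}\b"),
    }

    def flag_identifiers(text: str) -> list[str]:
        """Return the kinds of identifiers found in the text, if any."""
        return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

    if __name__ == "__main__":
        draft = "Follow up with jane.doe@example.com about her renewal."
        hits = flag_identifiers(draft)
        if hits:
            print("Hold on -- this text contains:", ", ".join(hits))
        else:
            print("No obvious customer identifiers found.")

A check like this won't catch everything, but it gives the written policy a little teeth at the one moment that matters: before the data leaves your hands.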

What you don't need to worry about (yet)

Most small businesses using AI for internal productivity, drafting marketing copy, summarizing meeting notes, or generating images are not in scope for the regulations that took effect in early 2026. The main caveat is the patchwork: because state rules differ, a business operating nationally effectively faces the most stringent requirements of any state where it has customers or employees.

The regulations target consequential decisions. That means decisions that materially affect someone's rights, opportunities, or access to services. Writing a blog post with AI assistance is not consequential. Deciding who gets interviewed for a job is.

A checklist you can actually use

Here's what most 10- to 50-person companies should do in the next 30 days:

  • Inventory where AI is being used. Not a formal audit. A spreadsheet with three columns: tool name, what it's used for, what data it sees.
  • Flag anything touching hiring, lending, pricing, or customer access decisions. Those get a second look.
  • Check your vendor agreements for any AI tools that process customer data. Make sure you have data processing terms in place.
  • Write a one-page policy: approved tools, prohibited data types, who to ask if you're unsure.
  • If you're using AI for hiring or other high-risk decisions, document what the tool does and how you validated that it doesn't introduce bias. Keep it simple. "We tested the tool on 50 past candidates and reviewed outcomes by gender and ethnicity" is sufficient for most tools; a minimal sketch of what that review can look like follows this list.
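
To make that last step concrete, here is a minimal sketch of one way to run such a review. It assumes you can export past candidates to a CSV with hypothetical columns named group and hired, and it compares selection rates across groups, flagging any group selected at less than 80 percent of the top rate. That threshold is the common four-fifths heuristic, a screening signal rather than a legal test, and the file and column names are assumptions for illustration.

    # bias_check.py -- minimal selection-rate review (illustrative only).
    # Assumes a CSV export of past candidates with two hypothetical columns:
    #   group -> a self-reported category such as gender or ethnicity
    #   hired -> "yes" or "no" (or 1/0)
    import csv
    from collections import defaultdict

    def selection_rates(path):
        counts = defaultdict(lambda: {"total": 0, "hired": 0})
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                g = row["group"].strip()
                counts[g]["total"] += 1
                if row["hired"].strip().lower() in ("yes", "1", "true"):
                    counts[g]["hired"] += 1
        return {g: c["hired"] / c["total"] for g, c in counts.items() if c["total"]}

    if __name__ == "__main__":
        rates = selection_rates("past_candidates.csv")
        best = max(rates.values())
        for group, rate in sorted(rates.items()):
            # Flag groups selected at less than 80% of the top group's rate
            # (the four-fifths heuristic -- a prompt for review, not a verdict).
            flag = " <-- review" if rate < 0.8 * best else ""
            print(f"{group}: {rate:.0%} selected{flag}")

A lopsided result doesn't prove the tool is biased, but it tells you which outcomes deserve a closer look and what to write down.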

AI regulations are no longer something small and mid-sized businesses can ignore. If your team uses chatbots, screening tools, pricing automation, or AI-powered workflows in daily operations, compliance needs to become part of the process. The good news is that getting ready does not have to mean building a heavy legal program. A clear inventory, simple risk checks, documented controls, and honest disclosures can go a long way.

The compliance question that actually matters

The question isn't "do I need to comply with AI regulations?" It's "am I using AI to make decisions that matter to people outside my company?"

If the answer is no, you're mostly in the clear. If the answer is yes, you need documentation, transparency, and in some cases formal testing. None of it requires a law degree. All of it requires honesty about what the tool does and attention to where the data goes.

Survey data from the U.S. Chamber of Commerce reveals that among non-adopters, 33 percent express concerns about tool quality and 28 percent about legal or compliance issues. These concerns, which include reliability, data privacy, regulatory compliance, and vendor accountability, pertain to all businesses, but smaller ones may have fewer resources to evaluate associated risks.

The good news is that the compliance bar for most small businesses is reachable. It just requires knowing which bar you're actually being measured against.


Related: Who pays when your AI makes a mistake? · From ChatGPT to action: giving AI safe access to your business data
