AI for small business: a realistic 90-day plan.
If you've been told you need an "AI strategy" but every consultant pitch costs more than your annual office rent, this is for you. A real 90-day plan, written for a real small operation, with what to skip and what to invest in.
You've read four articles this week telling you that "AI is transforming every industry." None of them tell you what to actually do on Monday morning. This one does.
This plan is written for a specific operation: a 10-to-50 person company with at least one tech-comfortable person on staff (doesn't have to be a developer), a few SaaS tools, and the faint suspicion that some of the manual work could be automated. If that's you, this is the playbook.
Days 1–14. The audit (don't skip this)
The single biggest mistake small operations make with AI is starting with the technology instead of the work. The work first. The technology second.
Get a notebook (paper, ideally; it slows you down in the right way). For two weeks, every time you or someone on your team does a task that feels repetitive, write down three things:
- What was the task? (One sentence.)
- How long did it take?
- What's the input and what's the output?
Don't try to judge if it's "automatable." Just write it down. At the end of two weeks, you'll have somewhere between 20 and 80 entries. Roughly 80% of them are uninteresting noise (one-offs, mostly-cognitive work, things that change every time). Roughly 20% follow a pattern: same input shape, same output shape, runs on a regular cadence, takes a known amount of time.
Those 20% are your candidate list.
Days 15–21. Pick exactly one
This is where small operations diverge from Fortune 500s. They run six pilots in parallel because they have six budgets. You should run exactly one pilot, because if it fails, everyone in your operation will remember it, and the next pilot won't get political support.
From your candidate list, pick the one task that meets all of these:
- It runs on a schedule (not "whenever something happens"). Daily, weekly, monthly, doesn't matter, but predictable.
- It has a clear good/bad outcome. After it runs, you can tell within seconds whether it worked.
- The data lives in a tool that has an API (every modern SaaS does; if a tool only lets you export to CSV by hand, defer it).
- It currently takes someone 2+ hours per run. Anything less and the project economics don't work for the first one.
- If it goes wrong, the consequences are reversible. Do NOT pick the task that emails customers. Pick the one that posts a summary internally.
The most common right answer for first projects: a recurring internal report. Weekly KPIs, monthly compliance digest, daily ops snapshot. Boring, internal, low-stakes if it goofs once.
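If it helps to make that filter mechanical, here's a minimal sketch in Python that applies the five criteria to your candidate list. Every field name here is made up for illustration; the point is that once the audit data exists, the shortlist falls out almost automatically.

```python
# Minimal sketch: apply the five criteria above to the audit entries.
# All field names are illustrative; match them to whatever you wrote down.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    scheduled: bool        # runs on a predictable cadence
    clear_outcome: bool    # you can tell within seconds whether it worked
    has_api: bool          # the data source exposes an API
    hours_per_run: float   # current human time, per run
    reversible: bool       # a bad run can be quietly undone

def first_project_shortlist(candidates: list[Candidate]) -> list[Candidate]:
    """Keep tasks that pass all five criteria, biggest time-saver first."""
    passing = [c for c in candidates
               if c.scheduled and c.clear_outcome and c.has_api
               and c.hours_per_run >= 2 and c.reversible]
    return sorted(passing, key=lambda c: c.hours_per_run, reverse=True)

shortlist = first_project_shortlist([
    Candidate("weekly KPI report", True, True, True, 3.0, True),
    Candidate("reply to customer emails", False, False, True, 5.0, False),
])
print([c.name for c in shortlist])  # ['weekly KPI report']
```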
Days 22–35. Build it (or have it built)
You have two paths.
Path A: Build it yourself, if you have someone on staff who codes
The right tools as of mid-2026: Claude Code or a similar AI-assisted coding tool (the AI writes most of the code), an existing MCP server for whichever data tool you're pulling from, and a cron service or scheduled task runner (GitHub Actions on the free tier works for most small operations).
Budget: ~40 hours of one person's time, spread over two weeks. The AI does most of the typing; the human does the integration testing and the final "is this right?" review.
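To make "the AI does most of the typing" concrete, here's roughly what a weekly-report agent boils down to. This is a sketch, not a reference implementation: it assumes the Anthropic Python SDK and a Slack incoming webhook, `fetch_weekly_metrics()` is a stand-in for your real data source, and a production version would pull that data through an MCP server instead.

```python
# Sketch of a weekly internal report agent (Path A). Assumes ANTHROPIC_API_KEY
# and SLACK_WEBHOOK_URL are set in the environment, e.g. as repo secrets in
# the GitHub Actions workflow that runs this on a weekly cron.
import json
import os
import urllib.request

import anthropic

def fetch_weekly_metrics() -> str:
    # Stand-in for the real data pull: an API call (or MCP tool call)
    # against whichever SaaS tool holds the numbers.
    return json.dumps({"new_signups": 42, "churned": 3, "mrr_delta": "+1150"})

def draft_report(metrics_json: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # model names change; check current docs
        max_tokens=800,
        messages=[{
            "role": "user",
            "content": "Write a short weekly KPI summary for an internal "
                       "Slack channel. Plain language, no hype. Data:\n"
                       + metrics_json,
        }],
    )
    return msg.content[0].text

def post_to_slack(text: str) -> None:
    req = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"],
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    post_to_slack(draft_report(fetch_weekly_metrics()))
```

Schedule that with a GitHub Actions workflow on a weekly cron and the deployment story is essentially done.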
Path B: Hire a small specialist
The right partner is not a 50-person agency; they'll quote you $80k for a 6-month project. The right partner is a one- or two-person shop that specializes in this exact category of work: small, scoped, fixed-price. Look for someone who quotes ≤3 weeks, ≤$15k, and refuses to start without seeing the task in detail. (Disclosure: WildBreeze is one of these. There are others. If we're a fit, we're a fit; if not, we'll point you elsewhere.)
Days 36–60. Watch it run
This phase is mostly waiting and watching. The agent runs on its schedule. Every time it runs, you read the output. In the first 4-6 runs you'll find small things: wrong number formatting, missing edge cases, a phrasing you don't like. Fix those. By run 8-10, it'll be running clean.
Resist the urge to add scope. The whole point of this phase is to prove that one well-scoped agent can run reliably for a month. If you start adding "and also...", you're back to the Fortune 500 mistake.
What to track during this phase:
- Did it run when it was supposed to?
- Did the output need any human correction?
- How much time did it save the person who used to do it?
If after 30 days the answer is "yes / rarely / hours per week," you have a green light. Move on.
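Those three questions are worth logging rather than remembering. A minimal sketch, one CSV row per run (the file name and fields are arbitrary):

```python
# Append one row per run: when it ran, whether the output needed a human
# correction, and how many minutes of human time the run still consumed.
import csv
import datetime
import pathlib

LOG = pathlib.Path("agent_runs.csv")

def log_run(agent: str, needed_correction: bool, human_minutes: int) -> None:
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp", "agent", "needed_correction",
                             "human_minutes"])
        writer.writerow([datetime.datetime.now().isoformat(timespec="minutes"),
                         agent, needed_correction, human_minutes])

log_run("weekly-kpi-report", needed_correction=False, human_minutes=2)
```

Thirty days later, the green-light decision is a glance at one file instead of an argument.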
Days 61–75. The second project
Now go back to your candidate list and pick the next one. This time is much faster: you already know the tools, you have an inventory of MCP servers, and you have a deployment pattern that worked. What took two weeks to scope and build the first time often takes 4-5 days the second.
Common second projects: a daily monitoring agent that pings someone if a critical pipeline failed overnight; an inbox triage agent that drafts replies (drafts only, humans send); an inventory anomaly detector for retail/ecommerce.
Days 76–90. Institutionalize
You now have two production AI agents. This is the moment to do three things, in order:
1. Document them properly
Each agent gets a one-page README: what it does, when it runs, who owns it, what to do if it breaks, who to call. This is the highest-leverage hour you'll spend in the whole 90 days. The cost of "the person who knew how the agent worked left the company" is enormous.
2. Set up alerts
If an agent fails to run, someone needs to know. Email or Slack notification. This is usually 30 minutes of work and saves you the "wait, when's the last time we got that report?" panic.
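Here's the Slack version of that alert, as a sketch (same webhook assumption as above): wrap the agent's entry point so any crash posts to a channel before the process dies. Note this catches crashes; catching "it never started at all" is the scheduler's job, and GitHub Actions, for instance, can notify you about failed workflow runs.

```python
# Wrap an agent's entry point so an unhandled exception pings Slack
# instead of failing silently. Assumes SLACK_WEBHOOK_URL in the environment.
import json
import os
import traceback
import urllib.request

def alert(text: str) -> None:
    req = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"],
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def run_with_alerting(run, agent_name: str) -> None:
    try:
        run()
    except Exception:
        # The tail of the traceback is usually enough to triage.
        alert(f"{agent_name} failed:\n{traceback.format_exc()[-500:]}")
        raise  # still fail the job so the scheduler records the failure
```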
3. Add the third project to the queue
By now your team has opinions about what should be next. Take them seriously; they're closer to the work than you are. The right cadence for a small operation is roughly one new agent per quarter. That sounds slow. It's the right speed. It compounds.
What to skip in the first 90 days
- Customer-facing AI (chatbots, AI customer service, AI sales emails). Higher stakes, harder to evaluate, more brand risk. Wait until you have internal experience.
- "AI strategy" consulting engagements over $10k. The advice you'll get for $50k is the advice in this article. The work is the work.
- Large language model fine-tuning, custom models, anything with the word "training" in it. Almost certainly not what you need. Off-the-shelf models with custom MCP servers cover 95% of small-operation needs.
- Centralized "AI dashboards" or "AI command centers." Another product category that exists only to sell to operations that haven't built anything yet. Build first; centralize later, if at all.
Realistic budget for the 90 days
Path A (in-house): ~$0 in cash, ~80 hours of one person's time across the period. AI API costs for two production agents: typically $20-100/month combined.
Path B (hired specialist): $8-15k per agent, two agents = $16-30k total. Same ongoing AI API cost.
Either path is dramatically less than the consulting engagements that dominate small-business AI conversations. That's the secret nobody tells you: the actual technical work has gotten cheap. The expensive part is figuring out what to build, and that's a conversation, not a project.
Related: Replacing the spreadsheet someone updates at 11pm · From ChatGPT to action: giving AI safe access to your business data