Glossary

AI for small business, defined in plain English.

Every term a small-business owner actually runs into when looking at AI for their operation. No jargon, no hand-waving. Each entry has a one-sentence definition, a longer plain-English explainer, and a real-world example.

Agent (a.k.a. AI agent)

An AI assistant that does work on its own. It has a job, a schedule, and access to your tools. Not a chatbot.

Example: an agent that pulls last week's revenue from QuickBooks every Monday at 7am, formats it as a report, and posts it to a Slack channel before the leadership meeting starts.


API (application programming interface)

A standardized way for software to talk to other software. The phone number a program uses to call another program.

Example: when your Shopify store sends order data to your accounting tool automatically, that's an API call. APIs are how every modern integration works under the hood.
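If you're curious what's under the hood, a single API call is just one program sending a structured request to another over the web. A minimal Python sketch; the endpoint URL and order fields are invented for illustration, not a real Shopify or accounting API.

```python
# One program (your store) sending order data to another (your accounting
# tool). The URL and payload are hypothetical examples.
import json
import urllib.request

order = {"order_id": 1234, "total": 59.90, "currency": "EUR"}

req = urllib.request.Request(
    "https://accounting.example.com/api/orders",  # hypothetical endpoint
    data=json.dumps(order).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req) would actually send it; left out here
# because the endpoint is made up.
```

Every integration you've ever clicked "Connect" on is doing some version of this behind the scenes.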

Chatbot

An AI that waits in a chat window for you to ask it something, answers, then forgets. The thing most people mean when they say "AI" in 2026. Useful, but the floor of what AI can do for an operation, not the ceiling.

Example: ChatGPT, Claude, Gemini in their default chat mode. You type a question, it answers. Compare this to an agent, which acts without being asked.

Context window

How much information an AI can hold in its working memory during a single conversation. Measured in tokens (roughly, words). When the context window fills up, the model starts forgetting earlier parts of the conversation.

Example: a 200,000-token context window (roughly 150,000 words) is large enough to read a full novel in one go. Small operations rarely hit context window limits. The exception: long technical documents or full codebases.

Embedding

A mathematical representation of a piece of text that lets a computer find similar pieces of text. The technology behind "semantic search" — finding things by meaning, not by exact keyword match.

Example: an embedding-powered search lets a customer ask "do you ship to Italy?" and find a help article titled "European delivery policies" even though those words never overlap.
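For the technically curious, the trick is that each piece of text becomes a list of numbers, and "similar meaning" becomes "similar numbers." A toy Python sketch: the four-number vectors below are invented for illustration; real embedding models produce hundreds or thousands of dimensions.

```python
# Toy semantic search: find the stored document whose embedding points in
# the most similar direction to the query's embedding (cosine similarity).
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up embeddings for two help articles.
docs = {
    "European delivery policies": [0.9, 0.1, 0.0, 0.3],
    "Refunds and returns":        [0.1, 0.8, 0.4, 0.0],
}

# Made-up embedding for the question "do you ship to Italy?"
query = [0.85, 0.15, 0.05, 0.25]

best = max(docs, key=lambda title: cosine(query, docs[title]))
print(best)  # → "European delivery policies"
```

Note that no words overlap between the question and the winning title; the match happens entirely in the numbers.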

Fine-tuning

Taking an existing AI model and training it further on your specific data so it picks up your style, your terminology, your domain. Expensive. Rarely the right answer for a small operation. Usually a custom MCP server gets you 90% of the result for 1% of the cost.

Example: a law firm fine-tuning a model on its past briefs so it writes drafts in their voice. Real use case, real cost (often $20k+ per fine-tuning run, repeated as the underlying model evolves).

Hallucination

When an AI confidently states something that is not true. Not a malfunction, but a byproduct of how language models work. The reason every AI output that touches business data needs a verification step or a human review before action.

Example: asking an AI for "the title of the Q3 earnings call PDF" and getting a plausible-sounding filename that doesn't actually exist. The defense: ground the AI in real data via MCP servers and RAG, not freeform recall.

LLM (large language model)

The kind of AI that powers ChatGPT, Claude, Gemini. Trained to predict the next word given a context, scaled up until it can do useful work. The engine. Everything else (agents, chatbots, RAG) is built on top.

Example: GPT-5, Claude Opus 4.6, Gemini 2.5 Ultra. Each has different strengths; small operations rarely need the absolute frontier model.

MCP (Model Context Protocol)

An open standard that lets AI assistants talk to your tools. The bridge between Claude or ChatGPT and your CRM, Slack, database, or internal API. USB for AI. Designed by Anthropic in 2024, now adopted everywhere.

Example: an MCP server for ClickUp lets an AI agent read your tasks, post comments, and update statuses. The same server works whether the agent is run by Claude, ChatGPT, or any other AI client.


Prompt

The instruction you give an AI. Can be a single sentence ("summarize this email") or a multi-page system prompt with detailed rules. Quality of prompt matters less than people think; quality of input data matters more.

Example: "Read this customer email and draft a reply in our usual tone" is a prompt. "Read these 10,000 emails and tell me what people are complaining about" is also a prompt. Different requirements.

RAG (retrieval-augmented generation)

A pattern where the AI looks up relevant information from your documents before answering. Used to make AI answers grounded in your actual data instead of the model's training data. The right answer for "I want AI to answer questions about our internal documentation."

Example: a customer service AI that looks up the relevant section of your help center before drafting a reply. The same pattern works for legal contracts, internal wikis, product manuals.
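The pattern itself is short enough to sketch in a few lines of Python. Here `call_llm` is a hypothetical stand-in for whatever AI provider you use, and the keyword-overlap retriever is a deliberate simplification (real systems use embeddings, see above).

```python
# RAG in miniature: look up the most relevant document, then hand it to
# the model alongside the question. Article text is invented for the example.
HELP_CENTER = {
    "European delivery policies": "We ship to all EU countries plus the UK.",
    "Refunds and returns": "Items can be returned within 30 days of purchase.",
}

def retrieve(question: str) -> str:
    """Return the article that shares the most words with the question.
    (A stand-in for embedding search, to keep the sketch simple.)"""
    best_title = max(
        HELP_CENTER,
        key=lambda t: sum(w in HELP_CENTER[t].lower()
                          for w in question.lower().split()),
    )
    return HELP_CENTER[best_title]

def answer(question: str, call_llm) -> str:
    context = retrieve(question)
    prompt = f"Using only this help article:\n{context}\n\nAnswer: {question}"
    return call_llm(prompt)
```

The key move is the first line of `answer`: the model never answers from memory alone; it always sees your actual document first.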

Token

The unit AI providers bill in. Roughly: 1 token = 0.75 words in English. A typical email is 100-300 tokens. Pricing is usually per million tokens, with input and output billed separately.

Example: at current pricing (Claude Sonnet 4.5), a 1,000-word email reply costs about $0.005 in input + output combined. Useful to know when sizing whether an automation is economically viable.
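The math is simple enough to do yourself before committing to an automation. A quick Python sketch using illustrative prices ($3 per million input tokens, $15 per million output tokens, both assumptions); check your provider's current price sheet before relying on these numbers.

```python
# Back-of-envelope cost estimate for one AI task, with made-up prices.
WORDS_PER_TOKEN = 0.75  # rough rule of thumb for English

def words_to_tokens(words: int) -> int:
    return round(words / WORDS_PER_TOKEN)

def cost_usd(input_tokens: int, output_tokens: int,
             in_price_per_m: float = 3.0,      # assumed input price / 1M tokens
             out_price_per_m: float = 15.0) -> float:  # assumed output price
    return (input_tokens / 1e6 * in_price_per_m
            + output_tokens / 1e6 * out_price_per_m)

in_tok = words_to_tokens(1000)   # the email you paste in
out_tok = words_to_tokens(300)   # the reply the model writes
print(f"{cost_usd(in_tok, out_tok):.4f}")  # → 0.0100 (about a cent)
```

Multiply by your monthly volume and you have a defensible budget estimate in thirty seconds.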

Tool use (function calling)

When an AI calls an MCP server, an API, or a piece of code to get information or take an action, instead of just generating text. The mechanism that turns a language model into something useful for an operation.

Example: an AI agent uses tool use to call your weather API, get the current forecast, then write a friendly reply to a customer asking if their delivery will arrive on time.
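A stripped-down Python sketch of the loop: the model requests a tool by name, your code runs it, and the result goes back into the conversation. The tool name, the dispatcher, and the weather function are all illustrative; they don't correspond to any specific provider's API.

```python
# A minimal tool-use dispatcher. The model doesn't call your API directly;
# it emits a request, and your code decides whether and how to run it.
def get_forecast(city: str) -> str:
    # Stand-in for a real weather API call.
    return f"Sunny in {city}, no delivery delays expected."

TOOLS = {"get_forecast": get_forecast}

def run_tool_call(name: str, args: dict) -> str:
    if name not in TOOLS:
        return f"Unknown tool: {name}"
    return TOOLS[name](**args)

# Mid-conversation, the model emits something like:
requested = {"name": "get_forecast", "args": {"city": "Milan"}}
result = run_tool_call(requested["name"], requested["args"])
print(result)
```

That indirection is the safety valve: your code sits between the model and the real world, so you decide which tools exist and what they're allowed to do.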

Vector database

A specialized database that stores embeddings (see above) and lets you search them quickly. The infrastructure under most RAG systems. Most small operations don't need to know the brand name; they just need to know one is involved when an AI assistant searches their documents.

Example: Pinecone, Weaviate, pgvector (Postgres extension). Choice rarely matters for small operations; off-the-shelf options work fine up to several million documents.


Don't see a term? Tell us and we'll add it.