A business owner has three workflows they'd like to hand to an AI agent. All three seem automatable. One of them is. Two require human judgment that nobody has written down — and until someone does, no agent will handle them reliably.
The question most teams ask is: "Can an AI agent do this?" That question returns the same answer, yes, for almost every workflow, so it filters nothing. The better question is different.
The readiness question most businesses ask wrong
"Can an AI agent handle this workflow?" is almost always answered yes. AI agents can handle pattern-matching, drafting, routing, triggering, and data entry. For most business workflows, the technology is not the constraint.
The real constraint is specification. An agent cannot run a workflow that hasn't been written down precisely enough to follow. The readiness question is not about AI capability. It is about whether the business has ever made this process explicit.
Most businesses haven't. The workflows that seem most automatable are often the ones where the team knows exactly what to do — and has never needed to write it down, because everyone doing the task already knows it. That implicit knowledge is invisible when assessing automation potential. It becomes visible the moment someone tries to write the brief.
The new-employee test for workflow readiness
Handing a workflow to an AI agent requires a written procedure. Most businesses discover, at this point, that no such procedure exists.
The most reliable readiness test is simple: could a new employee do this correctly on day one, given only a written procedure?
Not a new employee who has been trained for a week. Not one who has watched someone else do it twice. A new employee with a document, no prior context, and a task to complete.
If the answer is yes, the workflow is specifiable. A specifiable workflow is automatable.
If the answer involves "they'd need to know a bit about how we handle X" or "they'd need to check with someone first" — that knowledge is the gap. The agent hits the same gap, produces a wrong output, and the team calls the agent unreliable. The agent was not unreliable. The specification was incomplete.
What a ready workflow looks like
A workflow is ready for an AI agent when it has four properties.
A defined trigger — something specific and observable starts the workflow. Not "when a lead needs following up" — that requires judgment to evaluate. "When a lead has not replied within five business days and their status is Proposal Sent" is observable and testable.
A consistent input format — the agent receives the same type of information every time it runs. The CRM record has the same fields filled in. The email arrives from the same channel. The form submission contains the same structure. Variation in input format is a readiness problem, not a technology problem.
A clear success criterion — it is possible to evaluate whether the agent completed the task correctly without reading the output in detail. "The email was sent" is checkable. "The response was appropriate" requires reading and judgment.
A finite exception set — the cases where the workflow behaves differently are enumerable. Not "it depends on the situation" — a named list of specific situations and what happens in each.
What an unready workflow looks like
Unready workflows share recognizable patterns.
The task description contains the word "appropriate." As in: the agent should send an appropriate response. "Appropriate" is not a specification. It is a placeholder for judgment that hasn't been defined.
The team says "it depends" when asked how an exception is handled. Every workflow has exceptions. Unready workflows handle them through experience and intuition — knowledge that lives in someone's head and has never been made explicit.
The person who does the task is the only one who can review the agent's output. If the only way to verify the agent's work is to ask whoever normally does it, the task has not been fully specified. The verification criteria are implicit.
The process changes frequently. A workflow that shifts based on client preferences, seasonal variation, or team decisions requires constant brief updates. That maintenance cost often exceeds the benefit of automation until the process stabilizes.
How to make an unready workflow ready
If you can't write the procedure, you can't brief the agent.
An unready workflow does not mean automation is off the table. It means the process work comes first.
Narrow the scope. "Handle customer communication" is not a workflow. "Respond to refund requests submitted via the contact form with a confirmation and next-steps email" is. Every workflow that is too broad to specify can be narrowed into one that is.
Document the exception cases. Sit with the person who currently does the task and walk through the last twenty instances together. For each one handled differently from the others, write down why. Those reasons become the exception rules. A workflow with fifteen documented exceptions is specifiable. A workflow with "it depends" is not.
Add an explicit escalation path. Every brief needs a named action for inputs the agent wasn't designed to handle. Not "the agent will figure it out" — a specific step. Flag for review, route to a queue, reply with a holding message. The escalation path keeps the agent from producing wrong outputs on edge cases.
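The three steps above can be sketched as code for the narrowed refund-request workflow. Everything here is a hypothetical illustration, not a real API: the rule names stand in for the documented exception cases, and the escalation action is the named step the brief would specify.

```python
# Explicit escalation path: a named action, never "the agent will figure it out".
ESCALATION = "flag_for_human_review"

# Finite exception set: each documented case maps to a defined action.
# These rules come from walking through past instances with the person
# who does the task; the names here are illustrative.
EXCEPTION_RULES = {
    "order_over_500": "route_to_manager_queue",
    "repeat_refund_request": "route_to_manager_queue",
    "request_outside_30_days": "send_policy_explanation",
}

def handle_refund_request(case_tags: set[str]) -> str:
    """Dispatch a refund request using the documented rules."""
    for tag, action in EXCEPTION_RULES.items():
        if tag in case_tags:
            return action
    if "refund_request" in case_tags:
        return "send_confirmation_and_next_steps"  # the standard path
    return ESCALATION  # an input the workflow wasn't designed to handle
```

The structure mirrors the brief: a standard path, an enumerated exception list, and a fallback that routes anything unrecognized to a human instead of guessing.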
A workflow that fails the new-employee test today can pass it in a week if someone sits down and writes the procedure. That document is not preparation for automation. It is the brief. The agent runs from it directly.