The lead qualification agent has been running for two months. The right contacts get flagged. The right deals get prioritized. A colleague asks whether to remove the approval step for outbound introduction emails — the agent is getting it right, so why keep the gate?
Because accurate is not the same as safe to automate.
Accuracy measures how often the agent is right. Recoverability measures what happens when it is wrong. Those are different questions, and only the second one tells you what should run without a human decision.
Why accuracy is the wrong measure for autonomous action
A 95% accuracy rate means one in twenty actions is wrong. For an agent handling twenty interactions per day, that is one mistake every day. The question is not whether the mistake happens — at any meaningful scale, it will. The question is what that mistake costs in the specific workflow where it occurs.
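The arithmetic scales linearly with volume, which is what makes mistakes inevitable rather than hypothetical. A minimal sketch (the function name is illustrative, using the article's example numbers):

```python
def expected_errors(accuracy: float, actions_per_day: int, days: int = 1) -> float:
    """Expected number of wrong actions over a period: error rate x volume."""
    return (1.0 - accuracy) * actions_per_day * days

# 95% accuracy, twenty actions per day -- the article's example.
per_day = expected_errors(0.95, 20)
per_month = expected_errors(0.95, 20, days=30)
print(round(per_day, 2))    # 1.0 mistake per day
print(round(per_month, 1))  # 30.0 mistakes per month
```

The point of the multiplication is that accuracy alone never drives the expected error count to zero; only volume and error cost change the picture.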
An agent that mislabels an internal CRM tag costs thirty seconds to fix. An agent that sends an introduction to the wrong contact costs a relationship and a credibility hit that takes weeks to repair. The agent may have been wrong at the same rate in both cases. The damage is not comparable.
Accuracy is the right question when evaluating whether an agent is capable of a task. Recoverability is the right question when deciding how much autonomy to give it.
The three conditions that make autonomous action safe
Three conditions must all be true for full autonomy to be appropriate for a given action type.
The action is reversible. A tag can be removed. A draft can be deleted. A calendar entry can be moved. An email cannot be unsent. A deleted record cannot be trivially restored. If correcting an error requires another person's involvement — a client, a partner, a colleague in a different system — the action is not reversible in any operationally useful sense.
The blast radius of an error is bounded. An error that affects only internal state — a row in a tracker, a tag on a contact, a note in a CRM — stays within the system. An error that reaches an external party does not. Any action that crosses the boundary between your system and someone else's carries a blast radius large enough to make a human checkpoint meaningful.
The decision space is narrow enough that edge cases are rare and identifiable. An agent given a narrow, well-defined task — categorize incoming support tickets by type — encounters a finite number of edge cases. An agent asked to "handle client follow-up" encounters an unbounded input space. Narrow decision spaces keep edge cases predictable. Wide ones produce surprises.
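The three conditions compose as a simple conjunction: fail any one, and the approval gate stays. A minimal sketch, with field names invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ActionProfile:
    """Illustrative profile of one agent action type (names are assumptions)."""
    reversible: bool     # can one person undo it without outside involvement?
    internal_only: bool  # does an error stay inside your own systems?
    narrow_scope: bool   # is the input space small enough that edge cases are rare?

def safe_to_automate(action: ActionProfile) -> bool:
    # All three conditions must hold; failing any one keeps the human gate.
    return action.reversible and action.internal_only and action.narrow_scope

ticket_tagging = ActionProfile(reversible=True, internal_only=True, narrow_scope=True)
outbound_email = ActionProfile(reversible=False, internal_only=False, narrow_scope=False)

print(safe_to_automate(ticket_tagging))  # True
print(safe_to_automate(outbound_email))  # False
```

The conjunction matters: an action that is reversible but externally visible, or internal but open-ended, still fails the test.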
Actions that should almost always require approval
Some action types carry enough weight that autonomous execution is not appropriate regardless of accuracy.
Outbound communication. Any message sent to a client, prospect, or partner in your name represents your judgment. An agent that drafts and sends that message without review is making judgment calls about tone, timing, and relationship context it cannot fully assess. The draft is useful. The send requires a human.
Financial actions. Payments, invoices, refunds, and adjustments to financial records affect parties outside your control. Getting one wrong is not a private failure.
High-visibility CRM updates. Deal stage changes, close dates, and account health flags are inputs to decisions made by other people. A deal marked closed incorrectly affects pipeline forecasting, commission calculations, and team expectations. The downstream consequences of an error extend far past the record itself.
Anything with legal or contractual implications. Contract sends, compliance communications, and terms updates are not candidates for autonomous execution at any accuracy level.
Actions that are safe to automate fully
Some action types are reversible, bounded, and routine enough that an approval gate defeats the purpose of automation.
Internal classification and tagging. Labeling support tickets, categorizing leads, tagging contacts by type — these are reversible with a single edit and affect only internal state.
Draft creation. An agent that prepares a reply, populates a template, or formats a document for human review is not acting autonomously — the draft is an input to a decision, not a decision. Automating draft creation while keeping the send or publish step behind a gate is a sound design.
Data formatting and enrichment. Normalizing field formats, pulling company data from a lookup, filling empty CRM fields from known sources — these are low-stakes and easily corrected.
Internal routing and assignment. Assigning a ticket to the right queue, routing an inbound inquiry to the right person, marking an item as reviewed — errors stay inside the system and are easily fixed.
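Taken together, the two lists behave like a policy table keyed by action type. A hypothetical sketch — the keys and the default-to-approval rule are illustrative choices, not a prescribed schema:

```python
# Hypothetical policy table mapping the action types discussed above to an
# autonomy level. "approve" means a human gate; "auto" means full autonomy.
POLICY = {
    # almost always require approval
    "outbound_communication": "approve",
    "financial_action":       "approve",
    "crm_stage_change":       "approve",
    "legal_contractual":      "approve",
    # safe to automate fully
    "internal_tagging":       "auto",
    "draft_creation":         "auto",
    "data_enrichment":        "auto",
    "internal_routing":       "auto",
}

def requires_approval(action_type: str) -> bool:
    # Unknown action types fall back to the approval gate -- the safe default
    # the article recommends for any new action type.
    return POLICY.get(action_type, "approve") == "approve"

print(requires_approval("internal_tagging"))  # False
print(requires_approval("contract_send"))     # True (unknown type, gated)
```

Defaulting unknown action types to the gate mirrors the article's stance: autonomy is earned per action type, never assumed.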
How to move an action from approved to autonomous
Starting with an approval gate on a new action type is the right default. The gate is not a sign of distrust — it is how you collect the evidence needed to make the autonomy decision safely.
Watch the approval history for the action type. Track how often the reviewer approves without editing, approves with small changes, makes significant edits, or dismisses entirely. After a meaningful sample — fifty decisions is a reasonable minimum, more for high-stakes action types — review the pattern.
If approvals are consistent, edits are minor, and dismissals are rare: the action is a candidate for autonomy. Remove the gate, monitor the first few weeks closely, and reintroduce it if new conditions produce unexpected outputs.
If the history shows regular significant edits or a pattern of dismissals: the agent's judgment on that action type is not reliable enough for autonomous execution. Narrow the scope before removing the gate.
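The promotion rule can be made mechanical. A sketch assuming the four review outcomes named above; the 5% threshold for significant edits and dismissals is an illustrative choice, since the article fixes only the fifty-decision minimum sample:

```python
from collections import Counter

def autonomy_candidate(history: list[str], min_sample: int = 50,
                       max_bad_rate: float = 0.05) -> bool:
    """True if the approval history supports removing the gate.

    `history` holds one label per reviewed decision: "approved",
    "minor_edit", "significant_edit", or "dismissed". The 5% bad-rate
    threshold is an illustrative assumption.
    """
    if len(history) < min_sample:
        return False  # not enough evidence yet; keep the gate
    counts = Counter(history)
    bad = counts["significant_edit"] + counts["dismissed"]
    return bad / len(history) <= max_bad_rate

# 55 decisions: mostly clean approvals, two significant edits.
history = ["approved"] * 48 + ["minor_edit"] * 5 + ["significant_edit"] * 2
print(autonomy_candidate(history))  # True
```

A result of True marks the action type as a candidate only; per the section above, the gate comes off alongside a few weeks of close monitoring, and comes back if conditions change.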