An agent observes its environment, decides what to do next, calls a tool, sees the result, and repeats until the goal is met. Powerful, early-stage, governance-critical.
The ReAct loop: observe state, reason about it, take an action via a tool, read the result, and decide what comes next. Each iteration spends tokens and compounds the chance of cascading errors. Cap the iteration count. Audit every tool call.
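The loop above can be sketched in a few lines. This is a minimal illustration, not a real framework: `call_model`, `tools`, and `goal_met` are hypothetical stand-ins you would supply. The two governance points from the text are built in as a hard iteration cap and an audit entry written before every tool call.

```python
# Minimal sketch of a capped, audited ReAct loop.
# call_model, tools, and goal_met are hypothetical stand-ins, not a real API.
from datetime import datetime, timezone

MAX_ITERATIONS = 8  # hard cap: stop before errors can cascade
audit_log = []      # every tool call is recorded before it runs

def run_agent(goal, call_model, tools, goal_met):
    state = {"goal": goal, "history": []}
    for step in range(MAX_ITERATIONS):
        # Reason: ask the model for the next action given current state
        action = call_model(state)  # e.g. {"tool": "search", "args": {...}}
        if goal_met(state, action):
            return state
        # Audit before acting: which tool, which arguments, when
        audit_log.append({
            "step": step,
            "tool": action["tool"],
            "args": action["args"],
            "ts": datetime.now(timezone.utc).isoformat(),
        })
        # Act, observe the result, fold it back into state
        result = tools[action["tool"]](**action["args"])
        state["history"].append({"action": action, "result": result})
    raise RuntimeError(f"loop cap of {MAX_ITERATIONS} reached without meeting goal")
```

Raising on a hit cap, rather than silently returning, forces the failure into monitoring instead of letting a runaway loop look like success.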
No autonomous agent actions in finance, HR, legal, security, or customer commitments. Human-in-the-loop on any tool call with real-world side effects. This is the BIITS rule. Write it into the policy before you write the code.
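One way to enforce that policy in code is a dispatch gate: tools tagged as having real-world side effects are queued for human approval instead of executed. This is an illustrative sketch; the names (`SIDE_EFFECT_TOOLS`, `request_approval`, the example tool names) are assumptions, not part of any specific framework.

```python
# Sketch of a human-in-the-loop gate. Tool names and the approval hook
# are illustrative assumptions.
SIDE_EFFECT_TOOLS = {"send_email", "issue_refund", "update_crm"}
pending_approvals = []

def dispatch(tool_name, args, tools, request_approval=None):
    if tool_name in SIDE_EFFECT_TOOLS:
        # Real-world side effect: never auto-execute; park it for a human
        ticket = {"tool": tool_name, "args": args, "status": "pending"}
        pending_approvals.append(ticket)
        if request_approval:
            request_approval(ticket)  # e.g. notify a reviewer
        return {"status": "awaiting_human_approval"}
    # Read-only or draft-only tools run immediately
    return {"status": "done", "result": tools[tool_name](**args)}
```

Keeping the side-effect list as an explicit allowlist means a newly added tool defaults to autonomous only if someone consciously leaves it off the list, so the safe failure mode is to tag too many tools, not too few.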
Anything with a financial, regulatory, or customer-visible consequence gets an autonomy level of "none." Read-only and draft-only actions are where agentic AI earns its keep right now.