When American Express published its agentic commerce architecture, the headline most readers picked up was “AMEX is preparing for AI agents to make purchases on cards.” The deeper signal was the architecture itself. According to VentureBeat's reporting on the AMEX stack, the company built two specific control primitives — intent contracts and single-use tokens — for letting an AI agent transact under bounded authority. That is the part the rest of the market should be reading carefully, because the same control pattern is what every business letting an AI agent touch a procurement card, a vendor portal, or a recurring-payment system needs to put in place.
For a mid-market business in Fort Wayne, Indianapolis, or Detroit, the question this raises is not “are we doing agentic commerce yet” — most are not, and AMEX-grade infrastructure is a long way from a 40-person operations team. The question is “what do we do when our AI Employee starts placing repeat orders, paying SaaS renewals, dispatching parts requests, or settling vendor invoices on its own?” That is the agentic-commerce surface for a real Northeast Indiana business, and the AMEX pattern tells you what the safe version of it looks like before a payment goes wrong.
This post explains intent contracts and single-use tokens in plain English, walks through three concrete mid-market scenarios where the pattern applies — professional services, manufacturing back office, and operations-team SaaS — and lays out the smallest practical version a Fort Wayne business can deploy this quarter using a Secure AI Gateway and a human approval gate.
Key Takeaways
- AMEX's published agentic-commerce architecture pairs two control primitives: intent contracts (rules that bound what an AI agent may buy) and single-use tokens (payment instruments that bound how much exposure each transaction carries).
- The same pattern applies far below enterprise scale — any AI Employee that can spend money on behalf of a business needs a bounded-authority architecture, not a “we trust the model” architecture.
- OWASP's 2025 LLM Top 10 names Excessive Agency (LLM06) as a top-tier risk, and intent contracts are the direct architectural mitigation: the agent is given less authority than the underlying credential carries.
- For Fort Wayne mid-market businesses — HVAC, light manufacturing, distribution, professional services — the agentic-commerce surface starts with repeat orders, vendor invoicing, and SaaS renewals at $5K to $50K monthly volumes, not with consumer e-commerce.
- The minimum-viable version is a per-spend-authority rule (the intent contract), a brokered payment instrument that scopes exposure per transaction (the single-use token), and a human approval gate above a defined threshold.
What Is an Intent Contract, in Plain English?
An intent contract is a structured rule, written in advance, that bounds what an AI agent is authorized to buy on a business's behalf. It is the agentic-commerce equivalent of a corporate purchasing-card policy that has been encoded so a software agent can read it before placing an order, and so the payment network can validate the order against it at transaction time.
A complete intent contract typically encodes at least four dimensions: who is buying (which agent identity), what is being bought (item category, vendor, SKU, or service description), how much (per-transaction cap, weekly cap, monthly cap), and under what conditions (time window, approval requirements, exception triggers). According to VentureBeat's reporting on AMEX's stack, the contract is the network-side enforcement primitive, not just a guardrail on the agent: a transaction that violates the contract gets rejected at authorization, not after settlement.
The reason this matters is that an AI agent is, by default, given the full authority of whatever credential it holds. If your AI Employee has access to a procurement card, the card's full credit limit is its potential blast radius. The intent contract pattern inverts that default — the agent is given strictly less authority than the credential carries, and the network refuses transactions outside the bound. That is the architectural answer to the OWASP-named Excessive Agency risk (LLM06), which sits in the top half of the 2025 LLM Top 10 specifically because granting an LLM operational authority that exceeds what its decision-making warrants is a recurring source of incidents.
A mid-market business's intent contract for a procurement-bot AI Employee might read: “This agent may spend up to $1,500 per transaction, up to $8,000 per week, on items in office-supplies and IT-peripherals categories, from approved vendors X, Y, Z, on Monday through Friday between 7 AM and 6 PM Eastern. Any transaction above $500 requires a human approval acknowledgment within four hours.” That sentence is the contract. The hard part is wiring it into a payment instrument that enforces it.
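As a concrete sketch, that sentence-length contract can be encoded as a small policy artifact. Everything here is illustrative (the field names and `check` method are our own, not AMEX's schema or any vendor's API), but it shows how the four dimensions, who, what, how much, and under what conditions, become a machine-checkable object:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class IntentContract:
    """Illustrative intent contract: bounds what one agent may buy."""
    agent_id: str
    allowed_categories: frozenset
    allowed_vendors: frozenset
    per_txn_cap: float         # dollars per transaction
    per_week_cap: float        # dollars per rolling week
    approval_threshold: float  # human ack required above this amount
    allowed_weekdays: frozenset = frozenset(range(0, 5))  # Mon-Fri
    hours: tuple = (7, 18)     # 7 AM to 6 PM local

    def check(self, vendor, category, amount, week_spend, when: datetime):
        """Return (allowed, needs_human_approval) for a proposed purchase."""
        in_bounds = (
            vendor in self.allowed_vendors
            and category in self.allowed_categories
            and amount <= self.per_txn_cap
            and week_spend + amount <= self.per_week_cap
            and when.weekday() in self.allowed_weekdays
            and self.hours[0] <= when.hour < self.hours[1]
        )
        return in_bounds, in_bounds and amount > self.approval_threshold

# The procurement-bot contract from the paragraph above, encoded.
contract = IntentContract(
    agent_id="procurement-bot-01",
    allowed_categories=frozenset({"office-supplies", "it-peripherals"}),
    allowed_vendors=frozenset({"vendor-x", "vendor-y", "vendor-z"}),
    per_txn_cap=1500.0,
    per_week_cap=8000.0,
    approval_threshold=500.0,
)

# A $600 order at an approved vendor on a Tuesday morning: in bounds,
# but above the $500 threshold, so it routes to a human for acknowledgment.
ok, needs_ack = contract.check(
    "vendor-x", "office-supplies", 600.0,
    week_spend=2000.0, when=datetime(2026, 3, 3, 10, 0),
)
```

The enforcement point matters as much as the data shape: in the AMEX pattern this check runs on the network side at authorization time, not only inside the agent.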

What Are Single-Use Tokens and Why Do They Matter for AI Agents?
A single-use token is a payment instrument issued for one specific transaction (or a tightly-bounded set of transactions) that becomes invalid after use. The pattern has been part of corporate payments for years — virtual card numbers, single-use account credentials, ephemeral transaction IDs — and the PCI Security Standards Council has long recognized tokenization as a risk-reduction primitive in the payment-card data lifecycle.
The agentic-commerce twist is that the token is bound to a specific intent contract instance. The AI agent does not hold a long-lived credential. It holds, on a per-transaction basis, a token that authorizes exactly that transaction within exactly that contract's bounds, and the token expires at the end of the authorization window. If the agent is compromised — through a prompt-injection-class attack, through a credential leak, through a model gone wrong — the blast radius is one transaction's worth of exposure, not the credential's underlying credit limit.
This control primitive maps directly onto the credential-isolation pattern we covered in our zero-trust AI agents credential isolation analysis, which built on VentureBeat's earlier reporting on AI agent zero-trust architecture and the per-request audit and credential-scoping primitives the major AI security vendors started shipping in early 2026. The architectural assumption is simple: any agent will, eventually, do something its operator did not intend. The job of the surrounding architecture is to make that eventual misbehavior bounded and recoverable, not unbounded and catastrophic.
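A minimal sketch of the token side makes the binding concrete. The issuer call and validity check below are hypothetical stand-ins (a production deployment would call a virtual-card provider's API), but they show the three bounds a single-use token carries: one vendor, one exact amount, one expiry window, plus one-shot replay protection:

```python
import secrets
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class SingleUseToken:
    """Illustrative single-use token bound to one contract-approved purchase."""
    token_id: str
    contract_id: str   # the intent-contract instance this token enforces
    vendor: str        # valid only at this vendor
    amount: float      # valid only for exactly this amount
    expires_at: datetime

def issue_token(contract_id: str, vendor: str, amount: float,
                ttl_minutes: int = 15) -> SingleUseToken:
    """Mint a one-shot credential scoped to a single approved transaction.
    A local stand-in for a virtual-card issuer call, showing the binding."""
    return SingleUseToken(
        token_id=secrets.token_urlsafe(16),
        contract_id=contract_id,
        vendor=vendor,
        amount=amount,
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )

def token_valid(token: SingleUseToken, vendor: str, amount: float,
                spent: set) -> bool:
    """Network-side check: unspent, right vendor, exact amount, unexpired."""
    return (
        token.token_id not in spent
        and token.vendor == vendor
        and token.amount == amount
        and datetime.now(timezone.utc) < token.expires_at
    )

spent: set = set()
tok = issue_token("contract-042", "vendor-x", 600.0)
```

Once the token is used (its `token_id` lands in the spent set) or its window passes, it is inert, which is exactly the one-transaction blast radius described above.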
| Control Primitive | What It Bounds | Equivalent Without It |
|---|---|---|
| Intent contract | What categories, vendors, amounts, and conditions the agent may transact under | Agent has full credential authority; only model behavior bounds spend |
| Single-use token | The dollar exposure of any single transaction; the validity window | Agent holds a long-lived card credential with full credit limit exposure |
| Human approval gate | Which transactions require a human acknowledgment before completing | Agent settles transactions silently; human only sees aftermath |
| Per-transaction audit log | What the agent did, on what authority, against what intent contract | Reconciliation done after the fact from card statements |
The combination of all four is what AMEX is doing at enterprise scale. The combination is also what a Fort Wayne mid-market business can do at much smaller scale using a deployed AI Employee on a Secure AI Gateway plus a virtual-card provider that supports per-transaction issuance. The technical primitives are no longer enterprise-only.

What Three Concrete Mid-Market Scenarios Make This Real?
The AMEX-class pattern lands at mid-market scale in three places, and each has a distinct intent-contract shape.
Scenario one: a 40-person professional-services firm running repeat office-supply purchases. The AI Employee handles the recurring office-supply, IT-peripheral, and small-equipment orders that an office manager would otherwise spend two to four hours per week on. The intent contract bounds the agent to the firm's approved-vendor list, a per-transaction cap (say $500), a per-week cap (say $2,500), and a human approval acknowledgment for any line item above $250. The single-use token is issued at the moment of order placement, scoped to the exact transaction amount and the specific vendor. The exposure if the agent goes wrong is, at most, $500 on one order at one approved vendor — recoverable through the dispute process without any meaningful business risk.
Scenario two: a Fort Wayne-area HVAC service company dispatching parts orders. The AI Employee monitors the field-service dispatch system and pre-orders parts for tomorrow's job tickets — compressors, refrigerant, electrical components — from approved distributor accounts. The intent contract bounds vendor (approved distributors only), category (parts only, no equipment), per-transaction cap ($2,000), and a hard requirement that any order above $750 routes to a service manager for approval. The single-use token is issued per dispatch ticket, scoped to the ticket's expected parts cost. The operational gain is that the next-morning crew arrives with parts already at the supply house — and the financial exposure on any single agent error is bounded to one job's parts ceiling, not a quarter's worth of card spend.
Scenario three: an operations manager letting an AI Employee handle SaaS renewals. The AI Employee monitors upcoming SaaS contract renewals across the business's tool stack, negotiates within a pre-approved range, and processes renewals that fall inside that range. The intent contract bounds the renewal range (up to a 5% increase year-over-year, no more), the vendor list (existing approved SaaS contracts only, no new vendors), and a hard-stop human approval gate for any renewal above the prior year's contract value plus the agreed increase. The single-use token is issued per renewal, scoped to the renewal amount. The operational gain is that no renewal slips into auto-renew at an unexpected price, and the financial exposure on any single renewal is bounded.
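The renewal bound in scenario three reduces to a one-line check. This is an illustrative sketch of our own, not any vendor's renewal API:

```python
def renewal_in_bounds(prior_value: float, proposed: float,
                      max_increase_pct: float = 5.0) -> bool:
    """Illustrative renewal check: the agent may only settle a renewal
    priced at or below last year's value plus the agreed increase cap."""
    return proposed <= prior_value * (1 + max_increase_pct / 100)

# A $10,000 contract may renew autonomously up to roughly $10,500;
# anything above that routes to the human approval gate instead.
```

The single-use token for the renewal is then scoped to the proposed amount, so even an in-bounds renewal cannot settle for more than the agent quoted.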
In each scenario, the intent contract is the policy artifact, the single-use token is the payment instrument that enforces it at the network level, and the human approval gate covered in our AI Employee human approval gate post is the safety net that catches the edge cases the contract did not anticipate. None of the three scenarios requires AMEX-grade infrastructure. All three require the same control pattern AMEX built — at appropriately small scale. The cross-application coordination side, including the approval-dialog UX that exposes these decisions to a human operator, is covered in our cross-app AI agent governance and approval dialogs post.

How Does This Land in Fort Wayne and Northeast Indiana Mid-Market Practice?
Fort Wayne's mid-market commercial base — professional services, light manufacturing, distribution, HVAC and home services — runs on a combination of in-person operations management and a relatively thin tooling stack. The agentic-commerce conversation here is not “are you ready for AI shopping agents on consumer e-commerce.” It is more concrete: when an HVAC operations manager retires next year, or when a manufacturer's purchasing coordinator goes on FMLA, the back-office work of placing repeat orders, dispatching parts, paying recurring vendor invoices, and settling SaaS renewals does not go away. It either sits on someone's desk or it gets handed to a tool.
The current default in Northeast Indiana mid-market businesses is the first option — it sits on someone's desk, and the operations team picks up the slack. The AMEX-grade pattern makes the second option — handing it to an AI Employee — practical for the first time without exposing the business to a runaway-credential class of risk. The broader market context is consistent with this read: Stanford HAI's 2026 AI Index documented widespread enterprise AI adoption alongside an accelerating incident rate, which is exactly the combination — high adoption, real failure rate — that makes bounded-authority architectures a now-not-later procurement decision rather than a theoretical posture.
Three specific Fort Wayne mid-market verticals are the immediate applicability surface:
Light manufacturing. Allen, DeKalb, and Whitley County manufacturers running quote-to-cash and parts-procurement workflows already touch monthly card spend in the $10K–$100K range across MRO, packaging, freight, and small-tool categories. The intent-contract pattern here scopes by SKU category, by approved vendor list, and by job-cost-code association. Our AI Employees for Fort Wayne manufacturing post covers the broader operations-floor framing; the procurement layer sits on top.
Professional services. Fort Wayne's CPA firms, law firms, architecture and engineering firms, and consulting practices run a recurring-spend pattern that is a textbook intent-contract fit: a defined SaaS stack, a small approved-vendor list, predictable monthly volumes, and partner-level oversight that reads exception reports rather than every transaction. The agentic-commerce question here is which of the office manager's procurement and renewal workflows hand off cleanly to an AI Employee with a bounded card.
HVAC, plumbing, and skilled-trades operations. The Fort Wayne home-services market — dozens of multi-truck operations across Allen, DeKalb, and surrounding counties — runs on next-day parts availability and tight job-margin discipline. The agentic-commerce pattern here is parts pre-ordering against tomorrow's dispatch board, with intent-contract caps that match average parts cost per job class and single-use tokens issued per dispatch ticket. The operational and financial efficiency gains compound; the risk floor is bounded by design.
Across all three, the implementation substrate is the same: a deployed AI Employee with credential brokering through a Secure AI Gateway, a virtual-card or single-use-token issuer integrated with the gateway, an intent-contract policy artifact stored alongside the agent's configuration, and an approval gate that routes high-value transactions to a named human within a defined response window.
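The gateway-side decision flow that ties those pieces together can be sketched as a three-way branch (all names here are hypothetical, a simplified stand-in for a real Secure AI Gateway policy engine):

```python
from enum import Enum

class Decision(Enum):
    REJECT = "reject"           # outside the intent contract: hard stop
    NEEDS_APPROVAL = "approve"  # in bounds, above threshold: route to a human
    ISSUE_TOKEN = "issue"       # in bounds, below threshold: mint a token

def gateway_decide(amount: float, vendor: str, approved_vendors: set,
                   per_txn_cap: float, approval_threshold: float) -> Decision:
    """Illustrative gateway decision: contract check first, then the
    human-approval tier, then single-use-token issuance."""
    if vendor not in approved_vendors or amount > per_txn_cap:
        return Decision.REJECT
    if amount > approval_threshold:
        return Decision.NEEDS_APPROVAL
    return Decision.ISSUE_TOKEN
```

Using the HVAC scenario's numbers (a $2,000 per-transaction cap and a $750 approval threshold), a $400 parts order gets a token immediately, a $1,200 order waits on the service manager, and a $2,500 order is refused outright.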

What Does the Minimum-Viable Version of This Look Like for a 30-to-100-Person Business?
The AMEX architecture is a useful target, not a prerequisite. The minimum-viable version a Fort Wayne mid-market business can stand up this quarter has four moving parts.
- A deployed AI Employee with bounded-scope credentials. The agent does not hold a long-lived procurement-card number. It holds, per transaction, an ephemeral payment credential brokered by a Secure AI Gateway. This is the credential-isolation primitive from our zero-trust AI agents and credential isolation post.
- An intent contract written as a policy artifact. Four dimensions: who, what, how much, under what conditions. Stored alongside the agent's configuration. Read by the gateway at every transaction request. Versioned and auditable. The AI Employee governance playbook gives the broader policy frame; the intent contract is the transactional sub-component.
- A virtual-card or single-use-token issuer. Most mid-market virtual-card providers — including offerings from the major commercial-card issuers — support per-transaction card issuance with a defined amount, vendor, and expiration. The integration is a few API calls, not a multi-month implementation.
- A human approval gate above a threshold. Below the threshold, the agent transacts autonomously and reports. Above the threshold, the gate requires a named-human acknowledgment within a defined window or the transaction is canceled. This is the safety net our human approval gate post walks through in detail.
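The fourth part, the approval gate, hinges on one detail worth making explicit: it fails closed. A minimal sketch (names hypothetical) is a pending-approval record whose deadline cancels the transaction if no named human acknowledges in time:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class PendingApproval:
    """Illustrative approval-gate record: a named human must acknowledge
    within the response window or the transaction is canceled."""
    txn_id: str
    amount: float
    approver: str                # a named human, not a role alias
    deadline: datetime
    acknowledged_at: Optional[datetime] = None

    def resolve(self, now: datetime) -> str:
        if self.acknowledged_at and self.acknowledged_at <= self.deadline:
            return "proceed"     # ack arrived inside the window
        if now > self.deadline:
            return "canceled"    # window expired: fail closed, log it
        return "waiting"

now = datetime.now(timezone.utc)
gate = PendingApproval("txn-7", 900.0, "service-manager-jane",
                       deadline=now + timedelta(hours=4))
```

The "canceled" branch is the important one: a missed approval never degrades into a silent settlement, it degrades into a logged non-event the operations team reviews.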
The compounding effect of the four parts is that the failure modes covered in our 42 ways AI breaks business prevention catalog — runaway agents, prompt-injection-driven exfiltration, model-induced unintended transactions, credential leakage — are bounded at the network and policy level rather than at the model-behavior level. The operator does not need the model to be perfect. The operator needs the architecture to assume the model will eventually be wrong and to make that wrongness recoverable. This is precisely the policy-and-architecture cycle the NIST AI Risk Management Framework (GOVERN, MAP, MEASURE, MANAGE) is structured around, and the four-part stack above is the agentic-commerce specialization of that scaffold for a mid-market business.
At 30-to-100-person scale, this stack costs meaningfully less than one full-time procurement specialist, and the visibility-and-control gain over the existing default (a manager running the cards from a personal-laptop browser, with monthly statement reconciliation a week behind real time) is substantial. That is the agentic-commerce buying conversation for the mid-market in 2026.
Frequently Asked Questions
Q1. What is an intent contract in agentic commerce?
An intent contract is a structured rule, written in advance, that bounds what an AI agent may purchase on a business's behalf — typically scoped by vendor, category, transaction amount, time window, and approval requirement. Unlike a soft guardrail, the intent contract is enforced at the payment-network layer: a transaction outside the contract is rejected at authorization, not flagged afterward.
Q2. What is a single-use token in this context?
A single-use token is a payment instrument issued for one specific transaction (or a tightly-bounded set of transactions) that becomes invalid after use. The token is bound to a specific intent contract instance, scoped to the exact transaction amount, and tied to an expiration window. The blast radius if the agent is compromised is one token's worth of exposure, not the underlying credential's full credit limit.
Q3. Do mid-market businesses really need this, or is it enterprise-only?
The pattern applies any time an AI agent has the ability to spend money on a business's behalf. The scale of the underlying spend determines how much infrastructure is required, but the control primitives — bounded authority, per-transaction exposure, human approval gating — are the same at $5K monthly spend as they are at $5M monthly spend. The mid-market version uses commercial virtual-card products and a Secure AI Gateway rather than a custom payment-network integration.
Q4. What is the OWASP Excessive Agency risk and how does this address it?
Excessive Agency (LLM06 in the OWASP 2025 LLM Top 10) describes the risk of granting an LLM-based system more operational authority than its decision-making warrants. Intent contracts directly address this by encoding strict authority bounds the agent cannot exceed, with payment-network enforcement that rejects transactions outside those bounds.
Q5. Can our Fort Wayne business deploy this with an existing card program?
Most major commercial-card programs now offer virtual-card or single-use-token products that support per-transaction issuance with defined limits and expirations. The integration with a Secure AI Gateway is an API integration, typically scoped in days to a few weeks. The existing card program is usually retained as the funding source; the single-use tokens are issued against it.
Q6. What happens if the AI Employee tries to make a purchase outside its intent contract?
The transaction is rejected at network authorization. The agent does not get to "try harder" or escalate; the network refuses the request based on the contract bounds. The rejection is logged, the operator is notified per the agent's monitoring configuration, and the operations team can review whether the contract needs to be expanded for a legitimate edge case or whether the rejection caught an actual misbehavior.
Q7. How does the human approval gate fit alongside the intent contract?
The intent contract is the hard ceiling — outside it, no transaction. The human approval gate is the conditional layer below the ceiling — for transactions that fall within the contract but exceed a configurable threshold (say, any single transaction above $500), the agent must receive a named-human acknowledgment within a defined window or the transaction is canceled. The two layers together cover the routine spend (autonomous, fast, bounded) and the exception spend (human-acknowledged, slower, still bounded).
Sources & Further Reading
- VentureBeat: venturebeat.com/orchestration/inside-amexs-agentic-commerce-stack — Inside AMEX's agentic commerce stack: how intent contracts and single-use tokens enforce AI transactions.
- VentureBeat: venturebeat.com/security/ai-agent-zero-trust-architecture — AI agent zero-trust architecture: audit, credential isolation, Anthropic, NVIDIA NemoClaw.
- OWASP: genai.owasp.org/llm-top-10/ — OWASP Top 10 for LLM Applications (2025), including LLM06 Excessive Agency.
- NIST: nist.gov/itl/ai-risk-management-framework — AI Risk Management Framework (GOVERN, MAP, MEASURE, MANAGE).
- PCI Security Standards Council: pcisecuritystandards.org — Tokenization and payment-card data lifecycle guidance.
- Stanford HAI: hai.stanford.edu/ai-index/2026-ai-index-report — 2026 AI Index Report on enterprise adoption and incident rate.
Want to Scope an Intent-Contract Pattern for Your Fort Wayne Business?
Cloud Radix deploys AI Employees for Fort Wayne, Auburn, and Northeast Indiana businesses across HVAC, manufacturing, distribution, and professional services. If your business is letting (or considering letting) an AI tool place orders, pay invoices, or process renewals, reach out for a 30-minute scoping conversation and we will walk through the intent-contract template, the virtual-card integration, and the approval-threshold math for your specific operations footprint.