The line at the demo booth did not move for forty minutes. According to VentureBeat's RSAC 2026 coverage, the Cisco and CrowdStrike joint demo showed an AI agent connecting to a Fortune 50 enterprise security console, reading a stack of policy documents, identifying a gap, and rewriting a production rule — using the credentials of the human user who had logged in at the start of the demo. The agent did everything correctly. The system did everything correctly. And nobody could tell you, from the audit log, whether the change had been authorized by a person or by a piece of software.
That is the agent IAM gap in one sentence: most enterprises are letting AI agents authenticate as humans, and pretending they have an identity story. They don't. They have a human's identity story, and an agent borrowing it. When the agent does something useful, the audit trail says the human did it. When the agent does something harmful, the audit trail says the human did it. The lawyer reading that log six months later cannot distinguish which is which, because at the identity layer there is no difference.
For Fort Wayne and Northeast Indiana mid-market IT — Allen County, DeKalb County, Whitley and Noble counties, the 50-to-500-seat M365 and Google Workspace shops that make up most of our client base — this gap is sharper at the SMB scale than at the enterprise scale. The Fortune 50 in that demo can at least afford to staff a non-human identity team and roll out a CIEM platform. The 150-seat law firm in downtown Fort Wayne, the 80-seat dental group in Auburn, the 220-seat back-office at a DeKalb County manufacturer — these organizations cannot, and the same agent IAM gap shows up in their deployments first, at lower visibility, with worse failure modes. This is the gap Cloud Radix's Secure AI Gateway exists to address.
Key Takeaways
- The Cisco/CrowdStrike RSAC 2026 demo showed an AI agent rewriting a Fortune 50 security policy using a human's credentials — and exposed that most enterprises cannot tell from their audit logs whether a human or an agent took a given action.
- The fix is non-human identity (NHI): every AI agent gets its own credentials, its own audit trail, and a time-boxed authorization scope, distinct from any human's SSO identity.
- A four-tier agent identity model — read-only research → bounded-write workflow → cross-system orchestrator → policy-change agent — gives mid-market IT a practical framework for matching identity controls to agent authority.
- Cloud Radix's Secure AI Gateway is the IAM enforcement layer for AI Employees: it issues per-agent identities, scopes credentials per request, and produces a clean audit trail separated from human SSO.
- The 90-day audit checklist at the end of this post is the specific work a Fort Wayne IT lead should run against their current AI deployments — including the “who logged in as the agent at 3am” question almost no one can answer today.
What did the Cisco and CrowdStrike RSAC demo actually prove?
The demo paired a security-console workflow agent with a representative enterprise control plane. As VentureBeat reported, the agent ingested policy documentation, identified a gap in a production rule, drafted the rewrite, and committed the change. The demo was meant to show capability — and it did. What it also showed, less intentionally, was the missing identity layer underneath it: the agent did all of this while authenticated as a human user, which meant the change appeared in the audit log under that human's name.
That detail is the entire story. It is not a critique of the demo, which performed exactly as advertised. It is a critique of the architectural assumption every enterprise control plane currently makes: that the actor making a change is a person, that the credentials presenting at the API are owned by that person, and that the audit log is a sufficient legal record because identity is one-to-one with action. None of those assumptions survives the introduction of AI agents that operate inside human user sessions.
The architectural response is non-human identity — a category that already exists in the IAM literature for service accounts, machine-to-machine workflows, and CI/CD pipelines, but which most enterprises have not extended to AI agents. The NIST AI Risk Management Framework names governance, mapping, measurement, and management as its four core functions; the agent IAM gap is a governance failure first (we did not write the policy that says agents need their own identity) and a measurement failure second (we cannot count agent actions because they are indistinguishable from human ones). The frameworks exist. The implementation has not caught up.
Why is the agent IAM gap worse at SMB scale than at enterprise?
There is a counterintuitive result here that Fort Wayne IT leaders should sit with: the smaller the organization, the worse the agent IAM gap looks in practice. The reasoning is not flattering, but it is structural.
Enterprise IT has spent fifteen years cleaning up service-account sprawl, rotating machine credentials, and standing up CIEM tooling. When the agent question lands at a Fortune 500, there is at least an existing program to attach it to. The 80-seat dental group in Auburn does not have that scaffolding. Their service accounts are over-permissioned because nobody had time to scope them, the same admin password is shared across three SaaS tools, and the existing credential discipline is the IT lead trying to remember which contractor still has access to what. Bolt an AI Employee onto that environment, and a single credential compromise gives the attacker not just a user — it gives the attacker an autonomous agent with the user's full surface area, operating at machine speed.
The threat model gets sharper, not softer. We covered the credential-isolation foundation in zero-trust AI agents and credential isolation, and the prompt-injection vector in Fort Wayne Microsoft Copilot prompt injection risk. What ties those posts together — and what the RSAC demo crystallized — is that without a separate non-human identity layer, every other AI security control is degraded. Prompt injection is a worse problem when the injected instruction inherits a real human's permissions. Credential leak is a worse problem when there is no separate agent credential to revoke. Audit response is a worse problem when the log cannot distinguish actor type. The gap is the precondition that makes the rest harder.
The honest local picture: most Fort Wayne mid-market AI deployments today look like a staff member opening Copilot in their authenticated M365 session and letting the agent operate inside that session. There is no separate identity. There is no separate audit. There is no scoped credential that can be revoked without revoking the human. That is the current reality, not a doom scenario.

What is the four-tier agent identity model, and which tier needs what gate?
A pragmatic mid-market identity model has to do two things at once: give every agent its own identity (closing the gap), and right-size the controls so the IT lead is not approving every API call. We use a four-tier model in our engagements. Each tier raises the authority of the agent and raises the corresponding control:
| Tier | Agent type | What it can do | Identity & gate |
|---|---|---|---|
| 1 | Read-only research agent | Pull data from sanctioned sources, summarize, draft | Per-agent service identity, read-only scope, full audit log |
| 2 | Bounded-write workflow agent | Update records inside a single system within defined fields | Per-agent identity, scoped write credential per session, schema-bounded |
| 3 | Cross-system orchestrator | Coordinate work across two or more business systems | Per-agent identity, per-system scoped credentials, cross-system action log |
| 4 | Policy-change agent | Modify rules, policies, or governance objects | Per-agent identity, per-action human approval gate, immutable change record |
The point of the model is not the specific tier names — those are a Cloud Radix convention — but the principle that identity scope must scale with action authority, and that the most powerful agent type (the one in the RSAC demo) is the one that should never operate without a per-action human approval gate. The standardized approval-dialog pattern we covered in cross-app AI agent approval dialogs is the natural fit for tier 4. The OWASP LLM Top 10 names “Excessive Agency” as one of its top categories — that is the failure mode this tier model is designed to prevent.
The mid-market translation: most Fort Wayne deployments currently have tier-1 work being done by an agent operating with tier-3 credentials, because the identity gap means there is no way to scope down. The first benefit of fixing the IAM gap is not adding new capability — it is removing latent authority. An agent that only needs to read calendar data should have credentials that only let it read calendar data, with its own identity, and its own audit record. That sounds like a small thing. It is the entire game.
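The tier table above can be sketched as a small policy lookup. This is an illustrative sketch only — the tier names, scope strings, and `requires_human_gate` flag are hypothetical conventions for this post, not a real gateway API — but it shows the core principle: identity scope scales with action authority, and tier 4 always routes through a human gate.

```python
# Illustrative sketch of the four-tier policy table. Tier names, scope
# strings, and the approval flag are hypothetical conventions, not a real API.
from dataclasses import dataclass


@dataclass(frozen=True)
class TierPolicy:
    name: str
    scopes: tuple              # credential scopes this tier may ever be granted
    requires_human_gate: bool  # per-action human approval before execution


TIERS = {
    1: TierPolicy("read-only research", ("read",), False),
    2: TierPolicy("bounded-write workflow", ("read", "write:scoped-fields"), False),
    3: TierPolicy("cross-system orchestrator",
                  ("read", "write:scoped-fields", "cross-system"), False),
    4: TierPolicy("policy-change", ("read", "write:policy-objects"), True),
}


def allowed(tier: int, scope: str) -> bool:
    """An agent request is allowed only if its assigned tier grants the scope."""
    return scope in TIERS[tier].scopes


# A tier-1 research agent asking for write access is denied outright,
# and a tier-4 agent always routes through a human approval gate first.
assert not allowed(1, "write:scoped-fields")
assert TIERS[4].requires_human_gate
```

The useful property is that the deny decision is a property of the tier assignment, not of the agent's good behavior — which is exactly the "removing latent authority" benefit described above.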

How does Cloud Radix's Secure AI Gateway enforce non-human identity?
The Secure AI Gateway is the architectural piece that makes the four-tier model operational. It sits between AI Employees and the business systems they touch. Three things matter about its design:
First, identity issuance is gateway-side, not application-side. Every AI Employee that runs through the gateway gets its own service identity issued by the gateway. That identity is distinct from any human SSO identity. When the agent makes a call to M365, the call presents the agent's identity, not the user's. The audit trail at the destination system shows the agent took the action, with a unique identifier that ties back to the specific Employee, the specific job, and the specific tier of authority granted to that job.
Second, credentials are scoped per request, not per deployment. A tier-2 workflow agent that needs to update a single CRM field gets a credential that lets it update that single CRM field, for the duration of that single workflow, and that credential expires. There is no long-lived broad-access credential the agent can keep using. This is the zero-trust principle that NIST SP 800-207 describes, applied specifically to AI agents — never trust, always verify, scope the trust to the request.
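To make the per-request scoping concrete, here is a minimal sketch of what a gateway-minted credential looks like, assuming a hypothetical `mint_credential` helper — the scope string format, the 5-minute TTL, and the field names are illustrative assumptions, not the gateway's actual interface:

```python
# Minimal sketch of per-request credential scoping. mint_credential, the
# scope-string format, and the TTL default are hypothetical, not a real API.
import secrets
import time
from dataclasses import dataclass


@dataclass
class AgentCredential:
    agent_id: str      # the agent's own identity, never a human SSO id
    token: str         # opaque bearer token, unique per mint
    scope: str         # e.g. "crm:update:deal.close_date" -- one field, one system
    expires_at: float  # epoch seconds; the credential is useless after this


def mint_credential(agent_id: str, scope: str, ttl_seconds: int = 300) -> AgentCredential:
    """Issue a credential bound to one agent, one scope, one short window."""
    return AgentCredential(agent_id, secrets.token_hex(16), scope,
                           time.time() + ttl_seconds)


def is_valid(cred: AgentCredential, requested_scope: str) -> bool:
    """Never trust, always verify: scope must match exactly and not be expired."""
    return cred.scope == requested_scope and time.time() < cred.expires_at


cred = mint_credential("ai-employee-billing-01", "crm:update:deal.close_date")
assert is_valid(cred, "crm:update:deal.close_date")
assert not is_valid(cred, "crm:delete:deal")  # scope mismatch -> denied
```

The design choice that matters is the expiry: because every credential dies on its own, revocation becomes the exception path rather than the only containment mechanism.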
Third, the gateway produces a clean audit log that separates agent actions from human actions. Every call through the gateway is recorded with both the agent identity and (where applicable) the human who initiated the workflow. Six months later, when General Counsel asks “did a person or an agent rewrite this rule,” the gateway answer is unambiguous. That is the specific architectural property that closes the RSAC demo gap. We covered the broader threat-class question of agents-as-attack-surface in AI defender compromise — the IAM gap is the precondition for many of those threats, and closing it is the precondition for most of the defenses.
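The audit property can be sketched as a record shape. The field names below are illustrative assumptions — the point is structural: actor type, agent identity, and initiating human are distinct fields, so "person or agent?" is answerable from the log itself rather than from memory.

```python
# Sketch of a gateway audit record that separates actor type. Field names
# are hypothetical; what matters is that agent_id and initiated_by are
# separate fields, so the log distinguishes the actor from the initiator.
import datetime
import json
from typing import Optional


def audit_record(agent_id: str, action: str, target: str,
                 initiated_by_human: Optional[str] = None) -> str:
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor_type": "agent",
        "agent_id": agent_id,                # always the agent, never the human
        "initiated_by": initiated_by_human,  # the human who kicked off the workflow, if any
        "action": action,
        "target": target,
    })


rec = json.loads(audit_record("ai-employee-policy-04", "rule.rewrite",
                              "fw-policy/egress-rule-17", "sarah@example.com"))
assert rec["actor_type"] == "agent"
assert rec["initiated_by"] == "sarah@example.com"
```

Six months later, General Counsel's question reduces to a field lookup instead of a forensic reconstruction.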
The honest trade-off: gateway-issued identity and per-request credential scoping are more work than letting the agent run inside a human session. They require standing up the gateway, integrating it with M365 / Google Workspace / line-of-business systems, and writing per-tier policies for what each Employee can do. The work is real. It is also the work that the 85/5 AI agent trust gap post identified as the architectural difference between AI agents that ship and AI agents that stay in pilot. Identity is not optional; it is one of the five layers.

What does the agent IAM gap look like in Fort Wayne mid-market IT?
Northeast Indiana's mid-market IT environment is not Silicon Valley, and it is not a Fortune 500 SOC. It is a relatively small group of people running surprisingly complex stacks at organizations that have grown faster than their IT discipline. Lutheran Health Network's IT team supports a multi-county hospital footprint with the regulated weight of healthcare; Sweetwater Sound's IT runs a national e-commerce operation out of an Allen County campus; Steel Dynamics' back-office IT supports a Fortune 500 manufacturer with operations across the country. None of these are the bottom of the SMB scale, and all of them face the agent IAM gap.
Underneath those names is the long tail that defines the Fort Wayne mid-market: the dozens of 40-to-250-seat manufacturers across DeKalb County and the broader I-69 corridor, the 150-seat law firms downtown, the multi-location dental and orthopedic groups across Allen and Whitley counties, the 60-seat CPA firms in Auburn and Columbia City, the regional financial services and real estate brokerages serving Northeast Indiana. Each of these has the same shape of problem: enthusiastic AI adoption, lean IT, regulated data, and no current architectural distinction between "Sarah from accounting" and "the AI Employee that helps Sarah from accounting." County-government IT shops face the same problem with the additional weight of public records law.
For these organizations, the agent IAM gap is not theoretical and it is not far away. It is the moment six months from now when the state Attorney General's office, a regulator, or an insurance carrier asks for an audit trail and the IT lead has to explain that the system records every action under a real person's name — including the actions a piece of software took on that person's behalf. That conversation goes badly. The way to avoid it is to install the identity layer before the conversation, not after. The mid-market scale is exactly where Cloud Radix's gateway design is sized; we did not build it for Fortune 50 demos, we built it for the 150-seat firm that needs the same identity discipline at a mid-market budget. The Mend AI security governance framework we wrote up earlier provides the broader governance scaffolding; identity is one specific operational layer inside that frame.
The 90-day agent identity audit checklist for Fort Wayne IT leaders
Run this against your current AI deployments. The point is not to score well; the point is to know where the gaps are. Each item is a specific, answerable question. If the answer is “I don't know,” that is the gap.
Days 1–30: Inventory and identity baseline
- List every AI tool, agent, or assistant in use across the organization — including Copilot, ChatGPT Enterprise, Gemini, Claude, internal agents, third-party SaaS AI features, and anything a department is “trying out.”
- For each tool, document the identity it operates under: a per-tool service account, a generic admin account, a specific human's account, a shared SSO identity, or unknown.
- Identify which tools currently produce an audit log that distinguishes agent actions from human actions. Mark each as Yes / No / Partial.
- Catalog the data classes each tool can read and write — at minimum: PII, PHI, financial records, intellectual property, internal communications, public information.
- Flag every tool that is operating in tier 3 or tier 4 (cross-system or policy-change authority) without a separate non-human identity. These are your priority remediations.
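The days 1–30 inventory is spreadsheet work, but it helps to fix the columns up front. A minimal sketch of the record shape — the tool names and field values below are made-up examples, and "unknown" is a legitimate (and common) answer for the identity column:

```python
# Sketch of the days 1-30 inventory as structured rows. Tool names and
# values are illustrative examples; the columns mirror the checklist items.
import csv
import io

FIELDS = ["tool", "identity_type", "audit_distinguishes_agent", "data_classes", "tier"]

rows = [
    {"tool": "Copilot", "identity_type": "human session",
     "audit_distinguishes_agent": "No", "data_classes": "internal comms;PII", "tier": "3"},
    {"tool": "Dept. SaaS AI feature", "identity_type": "unknown",
     "audit_distinguishes_agent": "Partial", "data_classes": "financial", "tier": "2"},
]

# Write the inventory out as CSV -- a spreadsheet is entirely adequate here.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)

# The last checklist item, as a query: tier-3/4 tools that lack their own
# per-tool service identity are the priority remediations.
priority = [r["tool"] for r in rows
            if r["tier"] in ("3", "4") and r["identity_type"] != "per-tool service account"]
assert priority == ["Copilot"]
```

The query at the end is the whole reason to structure the inventory: the priority-remediation list should fall out of a filter, not a meeting.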
Days 31–60: Policy and tiering
- Write a one-page agent identity policy that defines the four tiers, the identity requirements per tier, and the approval gate per tier. The policy is short by design.
- For each inventoried tool, assign a tier (1 / 2 / 3 / 4) based on actual current authority, not advertised authority.
- For tier-3 and tier-4 tools, assign a remediation owner and a target date for moving to a non-human identity.
- Stand up an immutable audit storage location for agent action logs with a retention period that matches your regulatory regime — for healthcare, six years minimum; for financial services and legal, jurisdiction-specific; for general business, three years recommended.
- Establish a quarterly review cadence to re-inventory and re-tier, because new AI tools land in your environment every month whether you sanctioned them or not.
Days 61–90: Implementation and validation
- Deploy a gateway, a CIEM extension, or a managed identity layer that issues per-agent identities for the highest-priority tier-3 and tier-4 deployments. (Cloud Radix's Secure AI Gateway is one option; an enterprise CIEM is another; for very small deployments, even a careful service-account discipline with per-tool credentials is a meaningful improvement over the status quo.)
- Validate that audit logs at destination systems now record agent actions with agent identities. Run the “who logged in as the agent at 3am” test — pick a recent agent action and trace it from initiation to destination log. The trace should be unambiguous.
- Run a tabletop exercise: simulate an agent credential compromise. Confirm you can revoke the agent's credential without revoking any human's credential. If the answer is no, the identity layer is not yet separated.
- Document the residual risk for tier-1 and tier-2 tools where full identity isolation is impractical, and accept it explicitly with management sign-off rather than implicitly through silence.
- Schedule the next quarterly review and brief the executive team on the current posture, with the highest-priority remaining gap and a specific recommended next investment.
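The tabletop revocation test above has a simple pass condition that can be sketched in code. The credential store and identifiers below are hypothetical; the invariant is the real content — revoking the agent must be possible without touching any human credential, and a store that cannot express that distinction fails the test by construction:

```python
# Sketch of the tabletop revocation test. The store and ids are hypothetical;
# the invariant is real: agent revocation must never touch a human credential.
active_credentials = {
    "sarah@example.com":       {"type": "human", "status": "active"},
    "ai-employee-billing-01":  {"type": "agent", "status": "active"},
}


def revoke_agent(cred_id: str) -> None:
    """Revoke an agent credential; refuse if the id belongs to a human."""
    entry = active_credentials[cred_id]
    if entry["type"] != "agent":
        raise ValueError("refusing to revoke a human credential via the agent path")
    entry["status"] = "revoked"


revoke_agent("ai-employee-billing-01")
assert active_credentials["ai-employee-billing-01"]["status"] == "revoked"
assert active_credentials["sarah@example.com"]["status"] == "active"  # human untouched
```

If your real environment cannot replicate this — if the only way to cut off the agent is to disable Sarah's account — the identity layer is not yet separated, and that is the finding to write down.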
The checklist is not theoretical. We run a version of it as the first 90 days of every AI Employee engagement, because without the identity foundation the rest of the deployment is built on sand. CISA's threat advisories are increasingly addressing non-human identity compromise as a category of incident. Stanford's 2026 AI Index report documents the rising tide of enterprise agent deployment. The infrastructure is moving faster than the identity discipline. Closing that delta is the work.

Ready to run the agent identity audit on your AI deployments?
Cloud Radix runs a fixed-fee 90-day agent identity audit for Fort Wayne and Northeast Indiana mid-market organizations. We start with the inventory, walk you through the four-tier model against your specific deployments, install the Secure AI Gateway as the identity layer for your highest-priority agents, and hand back a written audit memo that the IT lead can take into a board meeting, an insurance renewal, or a regulator conversation. No slide decks. No vendor lock-in. A defensible audit trail and a working gateway. Contact Cloud Radix to schedule the audit and we will come back within one business day with a calendar hold and a pre-call inventory questionnaire.
Frequently Asked Questions
Q1. What exactly is the "agent IAM gap" and why is it new?
The agent IAM gap is the architectural state in which AI agents operate inside business systems using a human user's authenticated session — no separate agent identity, no separate audit trail, no separately revocable credentials. It is not a theoretical concept; it is the default deployment shape of most AI assistants today. It is "new" in the practical sense that the volume of AI agent activity inside enterprise environments has only become large enough to matter in the last twelve to eighteen months. The IAM problem itself — non-human identity for service accounts and machine workflows — has been an open enterprise-IT topic for over a decade. AI agents are the latest and largest non-human actor class to surface it.
Q2. Doesn't Microsoft Copilot or Google Workspace AI already handle agent identity?
Partially, and inconsistently. Both vendors have moved toward distinct service principals or workload identities for some agent operations, but in many real-world deployments the agent still inherits the user's session permissions when operating on the user's behalf. The audit-trail granularity varies by service and by configuration. The honest position for an IT leader is: do not assume the platform handles agent identity correctly by default, run the audit-trail test on your own data, and fix the gaps you find. The vendors are moving in the right direction; the field implementation is uneven.
Q3. Is non-human identity a Cloud Radix product or a general industry pattern?
Non-human identity (NHI) is a general industry pattern with multiple commercial implementations, including dedicated CIEM platforms, identity-provider extensions, and gateway architectures like Cloud Radix's Secure AI Gateway. The principle — agents get their own identities distinct from human SSO — is not vendor-specific. The reason we built our own gateway is that the existing enterprise CIEM tooling is sized and priced for Fortune 500 environments and most Fort Wayne mid-market organizations cannot adopt it without painful trade-offs. The pattern is general; the mid-market implementation needs an answer that fits mid-market budgets and operations teams.
Q4. How does the four-tier model handle a single agent that performs different tier-level actions?
In our deployment pattern, the same Employee can have multiple identities, one per tier of work it does. A research agent that occasionally needs tier-2 write authority is issued a separate scoped credential for that write action, time-boxed to the specific workflow, then drops back to tier-1 read-only authority. The tier classification is per-action, not per-Employee. This is one of the operational reasons gateway-side issuance matters: the gateway can mint and revoke per-action credentials at machine speed, where a static service-account approach would be too coarse to keep up with that per-action churn.
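That mint-then-drop-back pattern maps naturally onto a scoped-elevation construct. A minimal sketch, assuming a hypothetical `elevate` helper — the class and scope names are illustrative, not a gateway API:

```python
# Sketch of per-action tier elevation: a tier-1 agent briefly holds a tier-2
# write scope, then drops back. Agent and elevate() are hypothetical helpers.
from contextlib import contextmanager


class Agent:
    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self.scopes = {"read"}  # resting tier-1 authority


@contextmanager
def elevate(agent: Agent, scope: str):
    """Grant one extra scope for the duration of one workflow, then revoke."""
    agent.scopes.add(scope)
    try:
        yield agent
    finally:
        agent.scopes.discard(scope)  # drop back to tier-1, even on error


bot = Agent("ai-employee-research-02")
with elevate(bot, "write:crm.note"):
    assert "write:crm.note" in bot.scopes  # tier-2 authority, boxed to this block
assert bot.scopes == {"read"}              # back to read-only after the workflow
```

The `finally` clause is the point: the elevated scope is revoked even if the workflow fails midway, which is the property a long-lived static service account cannot give you.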
Q5. What should a Fort Wayne IT lead do this week if their organization has zero agent identity discipline today?
Run inventory item 1: list every AI tool, agent, or assistant in use across the organization. That is it. Do not try to fix anything until you can see what you have. The most common reason Fort Wayne mid-market AI security work stalls is not budget or technology; it is that the IT lead does not have a current inventory of what is running. The Allen and DeKalb County practices and firms we work with consistently find shadow AI agents in the inventory exercise that no one knew were operating against company data. Fix that visibility first, in a spreadsheet, this week. The rest of the program builds on it.
Q6. Does Cloud Radix's Secure AI Gateway only work with Cloud Radix's AI Employees, or can it sit in front of third-party AI tools?
The gateway is designed to handle both. The reason it exists as a separate architectural layer rather than a feature of the Employees is precisely so that it can sit in front of third-party tools — Copilot, ChatGPT Enterprise, Gemini, Claude, departmental SaaS AI features — and provide a unified identity and audit layer across the whole AI surface area. For organizations that have already adopted multiple third-party AI tools, the gateway is often the first piece of architecture we install, before any custom Employees, because it gives the IT lead visibility and control across the existing AI footprint immediately.
Q7. What happens to existing audit-log data once we install per-agent identities?
Existing logs continue to record actions under the human identities that were used at the time. Going forward, the gateway-issued agent identities show up in the logs separately. This means there is a clear "before" and "after" boundary in the audit trail; for regulators, auditors, or General Counsel asking about events after the cutover, the answer is unambiguous. For events before the cutover, the honest answer is that the architecture did not yet distinguish agent actions, and the trail reflects the older model. We document the cutover date in the audit memo so that future inquiries have a clear boundary to reference.
Sources & Further Reading
- VentureBeat: venturebeat.com/security/cisco-crowdstrike-rsac-2026-agent-identity-iam-gap — Cisco and CrowdStrike at RSAC 2026: When an AI Agent Rewrote a Fortune 50 Security Policy.
- NIST: nist.gov/itl/ai-risk-management-framework — AI Risk Management Framework.
- OWASP: genai.owasp.org/llm-top-10 — OWASP Top 10 for LLM Applications.
- NIST: csrc.nist.gov/publications/detail/sp/800-207/final — Zero Trust Architecture (NIST SP 800-207).
- Stanford HAI: hai.stanford.edu/ai-index/2026-ai-index-report — Stanford HAI 2026 AI Index Report.
- CISA: cisa.gov/topics/cyber-threats-and-advisories — Cybersecurity Threats and Advisories.
Close the Agent IAM Gap in 90 Days
Cloud Radix runs a fixed-fee 90-day agent identity audit for Fort Wayne and Northeast Indiana mid-market organizations. Inventory, four-tier scoring, gateway install, and a written audit memo — no slide deck.



