On April 23, 2026, Mend released what I think is the first publicly available AI security governance framework that a small-to-mid-market business could actually pick up and implement in a quarter without hiring an enterprise consultancy. MarkTechPost covered the release in detail, and the short summary is this: the framework organizes the AI security problem into four operational pillars — asset inventory, risk tiering, AI supply chain security, and a four-stage maturity model — with specific, quantitative guidance at each step instead of the aspirational language most “AI governance” content drifts into.
That distinction matters. NIST's AI Risk Management Framework is valuable, but it is deliberately written as principles rather than operating procedures. ISO/IEC 42001 is a management system standard, not a runbook. The EU AI Act is law, not implementation guidance. Vendor security marketing is rarely written to help the buyer actually understand their own exposure. Mend's framework is the first of this 2026 cohort that reads like a procurement-ready playbook — what to inventory, how to tier, what to document, what maturity stage to target by when — and that is why it is worth the attention of every Fort Wayne IT team, every Allen County manufacturer with AI pilots running, and every Northeast Indiana CPA or law firm where Copilot has already entered the building.
Below is a walk-through of the four pillars, the specific scoring math that makes the risk-tiering step actually usable, and the 30-day Fort Wayne application plan for the three industry clusters where this matters most locally. I am writing this as Cloud Radix's Technical Director, and I am being deliberate about the places where the framework is strong, the places where it is lighter, and the places where a business will still need controls beyond what the framework specifies.
Key Takeaways
- Mend's framework, reported by MarkTechPost on April 23, 2026, organizes AI security governance into four implementable pillars: asset inventory, risk tiering (1-3 scoring across five dimensions), AI supply chain security (AI Bill of Materials), and a four-stage maturity model aligned with NIST, OWASP, ISO 42001, and the EU AI Act.
- The risk-tiering system scores each AI deployment 1-3 on five dimensions — Data Sensitivity, Decision Authority, System Access, External Exposure, and Supply Chain Origin — totaling 5-15, with Tier 1 (5-7), Tier 2 (8-11), and Tier 3 (12-15) controls scaling accordingly.
- The AI Bill of Materials (AI-BOM) concept is the most useful single artifact in the framework: documenting model, training data, fine-tuning datasets, dependencies, inference infrastructure, and vulnerabilities.
- The maturity model's four stages — Emerging, Developing, Controlling, Leading — map cleanly onto NIST's GOVERN-MAP-MEASURE-MANAGE structure and give SMBs a realistic target ladder.
- For Fort Wayne: Allen County manufacturers, DeKalb and Noble County CPAs, and NE Indiana healthcare practices should run a 30-day non-punitive AI asset inventory this quarter. Every Cloud Radix engagement starts with the same sprint.
What are the four pillars of the Mend AI Security Governance Framework?
Per MarkTechPost's April 23 coverage, the framework is built on four pillars, each addressing a specific gap in how most businesses currently handle AI.
Pillar 1 — Asset Inventory. The framework defines AI assets broadly and on purpose: developer tooling (Copilot, Codeium, and similar), third-party APIs (OpenAI, Google Gemini, and others), open-source models downloaded and run locally, SaaS AI features embedded in productivity tools (including Notion AI and comparable features in other platforms), internal custom models, and autonomous agents of all scales. The key design choice is the emphasis on non-punitive discovery — the framework explicitly recognizes that “shadow AI” has already happened in most businesses, and that surfacing it requires a posture that does not punish the employee who signs up for a SaaS AI feature to do their job faster. I agree with that framing; we describe the same dynamic in our shadow AI data risk analysis.
Pillar 2 — Risk Tiering. This is the most operationally specific pillar and the one I think will get copied most often. Every AI deployment is scored 1-3 on five dimensions, totaling a score between 5 and 15. The dimensions are Data Sensitivity, Decision Authority, System Access, External Exposure, and Supply Chain Origin. The framework then maps score ranges to tiers: Tier 1 (Low Risk, scores 5-7) gets a standard security review and lightweight monitoring; Tier 2 (Medium Risk, scores 8-11) triggers enhanced review, access controls, and quarterly audits; Tier 3 (High Risk, scores 12-15) requires a full security assessment and continuous monitoring.
Pillar 3 — AI Supply Chain Security. This pillar introduces the AI Bill of Materials (AI-BOM) — analogous to a software bill of materials but expanded for AI-specific artifacts. An AI-BOM documents the model name and version, the training data provenance, the fine-tuning datasets used, the software dependencies (libraries, model-serving frameworks), the inference infrastructure the model runs on, and the current vulnerability status of all of the above. This is the single most useful artifact in the framework; I have not seen a cleaner operating definition of “what you actually own when you own an AI deployment.”
Pillar 4 — Maturity Model. The four stages — Emerging (ad hoc, awareness), Developing (defined, reactive), Controlling (managed, proactive), and Leading (optimized, adaptive) — are explicitly aligned with NIST's AI RMF, OWASP's AIMA body of work, ISO/IEC 42001, and the EU AI Act. Most small and mid-market businesses will sit somewhere between Emerging and Developing today. The value of the framework is giving them a clear next rung on the ladder and a way to measure progress against it.

How does the risk-tiering math actually work?
This pillar deserves its own section because the math is the part that turns the framework into action. Every AI deployment — whether it is a developer using Copilot, a marketing team using a third-party content agent, or an AI Employee handling scheduling for a clinic — gets a score on each of five dimensions. The scoring is 1 (low), 2 (medium), or 3 (high) per dimension.
| Dimension | Score 1 (Low) | Score 2 (Medium) | Score 3 (High) |
|---|---|---|---|
| Data Sensitivity | Public / non-sensitive | Internal business data | Regulated (ePHI, PII, financial, privileged) |
| Decision Authority | Pure suggestion to a human | Drafts / partially autonomous | Fully autonomous actions on production systems |
| System Access | Read-only, sandboxed | Limited API / scoped write access | Broad production access, credentials, or network |
| External Exposure | Internal only | Internal with external inputs | Customer-facing or internet-accessible |
| Supply Chain Origin | Transparent, vetted open source | Commercial vendor with BAA/contract | Opaque model, unclear provenance |
Add the five scores. The total lives between 5 and 15, which maps to a tier:
- Tier 1 (5-7) — Low Risk: Standard security review, lightweight ongoing monitoring, annual reassessment. Example fit: a developer using a sandboxed code-completion assistant on public repositories with no credentials in scope.
- Tier 2 (8-11) — Medium Risk: Enhanced security review, access-control hardening, quarterly audits, documented change management. Example fit: an internal marketing agent drafting content using internal data, routed through controlled cloud models.
- Tier 3 (12-15) — High Risk: Full security assessment, continuous monitoring, named accountable owner, mandatory approval gates on action, and incident-response rehearsal. Example fit: an autonomous customer-facing agent with write access to production systems, handling regulated data.
The elegance of this math is that it forces a conversation the business has otherwise been avoiding. A 20-person CPA firm that assumes its Copilot usage is Tier 1 will score it differently after actually totaling the sheet — Data Sensitivity 3 (taxpayer data), Decision Authority 2 (drafts), System Access 2 (limited), External Exposure 1 (internal only), Supply Chain Origin 2 (Microsoft contract) = 10, which lands in Tier 2. That is the right answer, and it is worth knowing. The companion discussion in our AI governance gap analysis names the same pattern: businesses are systematically underestimating the risk tier of their existing AI usage, because nobody ever sat down with a scoring sheet.
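For teams that would rather run the scoring sheet than eyeball it, here is a minimal sketch of the tiering math in Python. The class, field, and method names are our own shorthand for what the framework describes, not an official Mend artifact.

```python
# A minimal sketch of the five-dimension tiering math described above.
# Dimension names follow the article; class and method names are ours.
from dataclasses import dataclass

DIMENSIONS = (
    "data_sensitivity", "decision_authority", "system_access",
    "external_exposure", "supply_chain_origin",
)

@dataclass
class AIDeployment:
    name: str
    data_sensitivity: int     # 1 public, 2 internal, 3 regulated
    decision_authority: int   # 1 suggestion, 2 drafts, 3 fully autonomous
    system_access: int        # 1 read-only, 2 scoped write, 3 broad production
    external_exposure: int    # 1 internal, 2 external inputs, 3 customer-facing
    supply_chain_origin: int  # 1 vetted OSS, 2 contracted vendor, 3 opaque

    def total(self) -> int:
        scores = [getattr(self, d) for d in DIMENSIONS]
        assert all(1 <= s <= 3 for s in scores), "each dimension scores 1-3"
        return sum(scores)  # always lands between 5 and 15

    def tier(self) -> int:
        t = self.total()
        if t <= 7:
            return 1  # Low Risk: standard review, lightweight monitoring
        if t <= 11:
            return 2  # Medium Risk: enhanced review, quarterly audits
        return 3      # High Risk: full assessment, continuous monitoring

# The 20-person CPA firm example from above: totals 10, lands in Tier 2.
copilot = AIDeployment("M365 Copilot", 3, 2, 2, 1, 2)
print(copilot.total(), copilot.tier())  # -> 10 2
```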
For Tier 3 deployments specifically, the Mend framework's controls align closely with what OWASP's LLM Top 10 for 2025 calls out — LLM01 (Prompt Injection), LLM02 (Sensitive Information Disclosure), LLM06 (Excessive Agency), and LLM09 (Misinformation) are the first four we evaluate on every Tier 3 engagement. Tools like MITRE ATT&CK offer complementary threat modeling for the post-deployment phase — the attacker-side view that our stage-three AI agent threats defense playbook covers in depth.

What is an AI Bill of Materials and why is it the keystone?
An AI Bill of Materials (AI-BOM) is the single concept from the Mend framework I most want to see adopted across Fort Wayne businesses — because it forces clarity about what you actually own when you deploy an AI tool. The framework defines an AI-BOM as documenting:
- Model name and version — which model, at what exact version, is this deployment using
- Training data provenance — what data was the base model trained on, and is that documented
- Fine-tuning datasets — if the model was fine-tuned for your use case, what data was used
- Software dependencies — libraries, model-serving frameworks, orchestration tooling, and their versions
- Inference infrastructure — cloud provider, region, hardware profile, and the boundary the compute runs inside
- Vulnerability status — known CVEs against any component, patch currency, and remediation timeline
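Expressed as a data structure, a single AI-BOM record might look like the sketch below. The field names are our shorthand for the six groups above, not an official Mend schema, and the sample entry is hypothetical.

```python
# A minimal sketch of one AI-BOM record covering the six field groups the
# framework mandates. Field names are our own convention, not a Mend schema.
from dataclasses import dataclass, field

@dataclass
class AIBOMEntry:
    model_name: str
    model_version: str
    training_data_provenance: str  # documented source of base-model training data
    fine_tuning_datasets: list[str] = field(default_factory=list)
    software_dependencies: dict[str, str] = field(default_factory=dict)  # package -> version
    inference_infrastructure: str = ""  # provider, region, compute boundary
    known_vulnerabilities: list[str] = field(default_factory=list)  # open CVE IDs, if any

# Hypothetical entry for a vendor-hosted deployment, for illustration only.
entry = AIBOMEntry(
    model_name="gpt-4o",
    model_version="2024-08-06",
    training_data_provenance="vendor-documented (model card)",
    software_dependencies={"openai": "1.52.0", "httpx": "0.27.0"},
    inference_infrastructure="vendor-hosted API, US region, no local compute",
)
```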
The reason this is the keystone artifact: when — not if — a vulnerability or an incident is reported against a component in the AI stack, the question “are we affected” becomes answerable in minutes instead of weeks. Businesses that do not maintain an AI-BOM today respond to a new AI supply-chain incident by starting from “what do we even run” and working forward from there. Businesses that maintain an AI-BOM respond by querying the document and getting a yes or no.
This is not hypothetical. AI supply-chain incidents are increasing. Our AI defender compromise analysis covers a 2026 incident pattern in which adversaries compromised AI security tools at approximately 90 organizations — exactly the class of incident where knowing your stack composition determines how fast you can answer “are we one of them.” The AI-BOM is the answer artifact for that class of question.
For smaller Fort Wayne businesses, the minimum viable AI-BOM is a spreadsheet, not a platform. A living spreadsheet that lists every AI tool in use, its version, its deployment boundary, its owner, and its current vulnerability status is substantially better than no AI-BOM at all. The framework does not mandate a specific tool; it mandates the artifact. Start with a spreadsheet, graduate to tooling when the spreadsheet becomes unwieldy.
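To make "minutes instead of weeks" concrete, here is a sketch of the "are we affected?" lookup run against that spreadsheet-style AI-BOM, exported as CSV. The file layout and column names are our own convention, not part of the framework.

```python
# A sketch of the "are we affected?" query against a CSV AI-BOM: one row per
# deployment, with a dependencies column of semicolon-separated
# "package==version" pairs. File and column names are our own convention.
import csv

def affected_deployments(bom_path: str, package: str, bad_version: str) -> list[str]:
    """Return names of deployments pinned to a vulnerable dependency version."""
    hits = []
    with open(bom_path, newline="") as f:
        for row in csv.DictReader(f):
            deps = dict(
                pair.split("==") for pair in row["dependencies"].split(";") if pair
            )
            if deps.get(package) == bad_version:
                hits.append(row["deployment"])
    return hits

# Example: a CVE drops against langchain 0.2.1 -- who is running it?
# print(affected_deployments("ai_bom.csv", "langchain", "0.2.1"))
```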
What does the maturity model tell a Fort Wayne business to do next?
The four maturity stages are explicitly ladder-shaped, which is the right design. Every business sits on exactly one rung, and the framework's value is telling you which one and what moves you to the next.
Emerging (ad hoc, awareness). Some AI use exists in the business. No centralized inventory. No written policy. No documented controls. The dominant posture is “we know employees are using AI tools, we haven't mapped it.” Most Fort Wayne SMBs under 50 employees sit here in Q2 2026.
Developing (defined, reactive). An inventory exists but is incomplete. A policy exists but is not fully enforced. Some controls are in place for specific high-profile deployments but not applied uniformly. The dominant posture is “we have a spreadsheet and a policy and we react to incidents as they happen.” Most Fort Wayne businesses between 50 and 250 employees sit here.
Controlling (managed, proactive). Inventory is complete and refreshed on a cadence. Policy is enforced with gate-based controls. Risk-tiering happens routinely. AI-BOM is maintained for every deployment. Incidents are tabletop-tested. The dominant posture is “we know what we have, we know who owns it, and we rehearse what could go wrong.”
Leading (optimized, adaptive). The framework is integrated into change management, procurement, and engineering workflows. Metrics drive continuous improvement. The business contributes to broader standards work. Most Fort Wayne businesses should not aspire to Leading this year; the goal for 2026 is Emerging → Developing, or Developing → Controlling.
The mapping to NIST's AI RMF functions (GOVERN, MAP, MEASURE, MANAGE) is straightforward: the maturity stages describe how completely each of those functions is operating. The mapping to ISO/IEC 42001 runs through the management-system structure the standard specifies. The framework is not asking businesses to choose between standards; it is asking them to use one ladder that harmonizes with all of them.

How should Fort Wayne IT teams apply this in 2026?
Fort Wayne and Northeast Indiana have a particular business mix — manufacturing-heavy, professional-services-heavy, mid-market healthcare, and a long tail of small family businesses — and the Mend framework lands differently across those verticals. The 30-day operating sprint below is the one I recommend across the board, with the vertical-specific nuances called out.
Allen County manufacturing (20-500 employees). Manufacturing IT teams have often already standardized software bills of materials through their industrial-control-system compliance work; adding an AI-BOM is a smaller lift than starting from zero. The 30-day priority is asset inventory on developer tooling (Copilot, Codeium, any code-generation AI in the CAD/CAM pipeline), risk-tiering any AI tool that touches production-line telemetry or customer data, and a clear policy on AI use in quality reports and RFQ responses. The companion work we describe in Fort Wayne Copilot prompt-injection risk analysis applies directly — manufacturers running Copilot Studio internally should be running the tiering math against their specific deployment.
DeKalb, Noble, Wells, and Whitley County CPAs and professional services (5-75 employees). Smaller shops often have no existing IT team; the 30-day sprint looks more like a one-time consulting engagement than an ongoing program. Priority one is the asset inventory — walking through every desktop, every SaaS subscription, and every browser extension to surface actual AI usage. Priority two is the risk-tiering math for the two or three tools that matter most (typically Copilot plus one or two specialty tools). Priority three is a one-page written policy. The AI-BOM in this setting is a single spreadsheet with maybe ten rows, updated quarterly.
Fort Wayne and NE Indiana healthcare practices (any size). Healthcare settings have the steepest consequences for governance failures and the clearest existing regulatory scaffolding (HIPAA, HHS OCR oversight). The 30-day sprint is: asset inventory scoped to any tool touching ePHI, risk-tiering with automatic Tier 2 or Tier 3 scoring for anything in the ePHI path, AI-BOM for each clinical or clinical-adjacent deployment, and a signed Business Associate Agreement audit of every AI vendor. The zero-trust AI agents credential isolation architecture is the runtime complement to the framework-level governance work.
Cross-cutting: For any Fort Wayne business, the first control you want sitting behind the framework is a Secure AI Gateway — the policy engine that actually enforces the tier-based controls at runtime, regardless of which specific AI tool or model the employee or agent is calling. A framework without an enforcement layer is a spreadsheet; a framework with an enforcement layer is a program. Human-in-the-loop approval dialogs are the complementary layer for specific high-risk actions, and they should be wired into the gateway rather than left in the application.
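To illustrate what an enforcement layer actually does, here is a minimal sketch of a tier-based policy gate of the kind a Secure AI Gateway applies per request. The verdicts mirror the tier controls above; the function and its names are our illustration, not any specific product's API.

```python
# A minimal sketch of per-request, tier-based enforcement. The policy
# mirrors the tier controls described above; names are our illustration.
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    REQUIRE_APPROVAL = "require_approval"  # human-in-the-loop dialog
    DENY = "deny"

WRITE_ACTIONS = {"write", "delete", "send_external"}

def gate(tier: int, action: str, has_human_approval: bool = False) -> Verdict:
    """Decide whether a single AI action may proceed, given the deployment's tier."""
    if tier not in (1, 2, 3):
        return Verdict.DENY  # untiered deployments are blocked until scored
    if tier == 1:
        return Verdict.ALLOW  # standard review already done; monitor lightly
    if tier == 2:
        return Verdict.ALLOW  # allowed under hardened access controls; logged for quarterly audit
    # Tier 3: every state-changing action sits behind a mandatory approval gate
    if action in WRITE_ACTIONS and not has_human_approval:
        return Verdict.REQUIRE_APPROVAL
    return Verdict.ALLOW

print(gate(3, "send_external"))        # -> Verdict.REQUIRE_APPROVAL
print(gate(3, "send_external", True))  # -> Verdict.ALLOW
```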

What does the Mend framework not cover — and where does the gap bite?
Honest limitations first. The framework is a governance framework, not a complete security architecture. It tells you what to inventory, how to tier, what to document, and what maturity stage to target. It does not specify:
Runtime enforcement. The framework describes controls to put in place; it does not describe the specific enforcement technology. A policy that says “Tier 3 deployments require approval gates” needs a technical control — a gateway, an approval workflow, a human-in-the-loop system — to actually enforce it. Businesses that read the framework and implement only the documentation artifacts will have a beautiful spreadsheet and no runtime protection.
Specific threat modeling. The framework references standards (OWASP AIMA, NIST, MITRE) but does not itself produce the threat model for your specific deployment. That work still requires someone — internal or external — to walk through the prompt injection, data exfiltration, privilege escalation, and supply chain attack paths against your actual architecture. The stage-three AI agent threats defense playbook covers the post-deployment threat surface in more detail.
Organizational change management. A framework exists on paper until it is adopted in procurement, in engineering, in IT operations, and in end-user workflows. That adoption is not technical work; it is organizational work, and most governance programs fail here rather than on the technical side. The Mend framework does not address change management directly.
Measurement of business outcomes. The maturity stages describe program completeness, not business-outcome improvement. A Controlling-stage AI program is better-governed than an Emerging one, but “better-governed” does not automatically equal “produces better business outcomes from AI.” That measurement is the operating work on top of the governance framework — and it is unavoidably specific to the business.
None of these gaps invalidate the framework. They describe the scope of what the business still owns after adopting it. The framework is necessary; it is not sufficient.
Ready to run the 30-day sprint on your own AI footprint?
Cloud Radix's Mend-aligned AI governance engagement is a fixed-fee, 30-day sprint that produces: a complete AI asset inventory, a risk-tiered AI-BOM for every deployment, a written one-page policy, and a maturity-stage assessment with a recommended next rung. For businesses currently sitting at Emerging, the typical outcome is a documented path to Developing within the same quarter. For businesses at Developing, the outcome is the runtime controls and quarterly audit cadence that move the program to Controlling.
We are biased about what goes into the runtime enforcement layer — we build it and deploy it — but the framework itself is vendor-neutral, and the sprint deliverables are yours regardless of whether you engage us for the gateway work after. Book a 30-minute AI governance workshop and we will start the conversation with your current AI inventory as the first exhibit.
Frequently Asked Questions
Q1. How is the Mend framework different from NIST AI RMF or ISO 42001?
NIST's AI RMF and ISO/IEC 42001 are principles and management-system standards, written as high-level guidance that the implementer must translate into specific practices. The Mend framework, as reported by MarkTechPost on April 23, 2026, translates those higher-level standards into specific operational artifacts — a scoring sheet for risk tiering, an AI-BOM template, named maturity stages, and specific controls at each tier. The Mend framework is explicitly designed to align with NIST, OWASP AIMA, ISO 42001, and the EU AI Act rather than compete with them. For a mid-market business, the relationship is that NIST tells you what GOVERN-MAP-MEASURE-MANAGE should cover, ISO 42001 tells you how a management system should be structured, and Mend gives you a specific implementation playbook that satisfies both.
Q2. Do we need specialized tooling to maintain an AI Bill of Materials?
Not initially. For a small or mid-sized Fort Wayne business with fewer than 50 AI deployments, a living spreadsheet with the mandated fields — model name/version, training data provenance, fine-tuning datasets, software dependencies, inference infrastructure, vulnerability status — is a legitimate starting point. The graduation to specialized tooling happens when the spreadsheet becomes unwieldy, when regulatory pressure requires automated attestation, or when supply-chain incident response frequency makes manual updates a bottleneck. We recommend starting with the spreadsheet and evolving.
Q3. What score does a typical Copilot deployment receive on the risk-tiering sheet?
It depends entirely on the deployment, which is the point of the exercise. A Copilot deployment scoring Data Sensitivity 2 (internal business data), Decision Authority 2 (drafts), System Access 2 (limited scope), External Exposure 1 (internal), and Supply Chain Origin 2 (Microsoft contract) totals 9 — a Tier 2 Medium Risk deployment requiring enhanced review, access controls, and quarterly audits. A Copilot deployment with broader data access or decision authority scores higher and lands in Tier 3. The scoring is deliberately specific; a generic “Copilot is low risk” conclusion should be treated as a red flag that the tiering exercise was not actually run.
Q4. How often should the AI asset inventory and AI-BOM be refreshed?
Our recommendation aligns with the tier: Tier 1 deployments reassessed annually, Tier 2 deployments reassessed quarterly, and Tier 3 deployments continuously monitored with formal quarterly reviews. The inventory as a whole should be refreshed on a quarterly cadence at minimum for businesses with active AI adoption, and monthly for businesses adding three or more new AI tools per quarter. The framework's maturity stages also imply cadence: Emerging businesses refresh when they get around to it; Controlling businesses have scheduled refreshes with named owners.
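As a small illustration, the cadence above reduces to a lookup that stamps a next-review date on each inventory row. The intervals below encode our recommendation, not numbers taken from the framework text.

```python
# A sketch of the tier-to-cadence mapping: stamp each inventory row with its
# next review date. Intervals encode our recommendation described above.
from datetime import date, timedelta

REVIEW_CADENCE = {
    1: timedelta(days=365),  # Tier 1: annual reassessment
    2: timedelta(days=91),   # Tier 2: quarterly audits
    3: timedelta(days=91),   # Tier 3: continuous monitoring + formal quarterly review
}

def next_review(tier: int, last_review: date) -> date:
    return last_review + REVIEW_CADENCE[tier]

print(next_review(2, date(2026, 4, 23)))  # -> 2026-07-23
```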
Q5. Does the framework address shadow AI?
Yes — the asset inventory pillar is explicitly written to surface shadow AI, with the framework emphasizing non-punitive discovery to get employees to actually disclose the SaaS AI features and browser extensions they are using in the workflow. This design choice matches the dynamic we describe in our shadow AI data risk analysis — shadow AI cannot be addressed by banning tools, because the productivity pressure that drove employees to use them persists after the ban. The framework's posture is correct: surface, tier, govern, provide sanctioned alternatives.
Q6. What is the minimum viable version of this for a 15-person Fort Wayne practice?
A spreadsheet, a one-page policy, and a 2-hour quarterly review. The spreadsheet lists every AI tool in use, its risk-tier score using the five-dimension math, its owner, its AI-BOM fields, and the date of last review. The one-page policy names permitted workflows, prohibited workflows, and the approval process for new tools. The quarterly review walks through the spreadsheet, updates anything that changed, and documents findings. That is a legitimate Developing-stage program at a 15-person scale, and it is substantially better than what most practices currently run.
Q7. How does this framework relate to the EU AI Act if we are a Fort Wayne business with no European exposure?
For a purely domestic Fort Wayne business, the EU AI Act does not directly apply. The Mend framework references it as one of the standards the maturity model aligns with, because multinational businesses need a single framework that harmonizes across jurisdictions. For a domestic business, the practical effect is that following the framework positions you well if your business later acquires European customers, clients, or operations — the governance posture that satisfies Mend's Controlling stage will satisfy much of what the EU AI Act requires for most non-prohibited deployments. You do not need to optimize for the EU AI Act; you just get its alignment for free if you follow the framework.
Sources & Further Reading
- MarkTechPost: marktechpost.com/2026/04/23/mend-releases-ai-security-governance-framework — Mend Releases AI Security Governance Framework: Covering Asset Inventory, Risk Tiering, AI Supply Chain Security, and Maturity Model.
- National Institute of Standards and Technology: nist.gov/itl/ai-risk-management-framework — AI Risk Management Framework.
- OWASP: genai.owasp.org/llm-top-10 — OWASP Top 10 for LLM Applications 2025.
- International Organization for Standardization: iso.org/standard/81230.html — ISO/IEC 42001 Artificial Intelligence Management System.
- MITRE Corporation: attack.mitre.org — MITRE ATT&CK Framework.
- European Union: eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ%3AL_202401689 — EU AI Act — Regulation (EU) 2024/1689.
Run the Mend-Aligned 30-Day Sprint
Book the Cloud Radix AI governance workshop. Thirty days later you will hold a complete AI asset inventory, a risk-tiered AI-BOM for every deployment, a written one-page policy, and a maturity-stage assessment with a specific next rung.
Book the AI Governance Workshop
Fixed-fee. Vendor-neutral deliverables. Fort Wayne and Northeast Indiana.