The reassuring sentence in modern software supply chain security has been this one: if the provenance signature is valid, the package was built by the publisher you think it was built by, on the infrastructure you think it was built on. The Shai-Hulud worm — the npm and PyPI incident VentureBeat broke open on Tuesday — is the moment that sentence stops working as a security primitive. According to the VentureBeat reporting, Shai-Hulud pushed malicious code into 172 npm and PyPI packages with valid provenance signatures, meaning every defender check that boiled down to “the green provenance badge is present” silently passed the poisoned build through. The badge was not lying. The build pipeline that produced the badge had been subverted upstream.
That is a fundamentally different shape of supply chain attack from the AI-extension supply chain class we covered three weeks ago in the Anthropic Skill scanners writeup. That class targeted AI agent extensions riding in on test files. Shai-Hulud targets the standard developer dependency channel — npm, PyPI, GitHub Actions — which is the exact channel every web agency, in-house app team, and internal IT shop in Auburn, Fort Wayne, DeKalb, Allen, Whitley, and Noble Counties is already using on every Next.js, Astro, Python, or Node service in production this week.
This post is the six-step incident response action plan for Northeast Indiana dev teams and mid-market IT directors who are reading the headline today and need to know what to do tomorrow. The plan is intentionally tactical, intentionally checklist-shaped, and intentionally sized for the 25-to-250-seat firms Cloud Radix works with. Skip the strategic thinkpiece; this is the “do it this week” piece.
Key Takeaways
- The Shai-Hulud npm worm compromised 172 npm and PyPI packages with valid provenance signatures, breaking the “green provenance badge means safe” assumption used by most mid-market CI/CD audit checklists.
- The vector is the standard developer dependency channel — npm, PyPI, GitHub Actions OIDC — not an AI-agent-extension channel. Every Next.js, Astro, Node, and Python team in Fort Wayne and the rest of NE Indiana is exposed by default.
- The six-step incident response plan: freeze installs and rotate recent fetches, audit lockfiles against the canonical 172-package list, rotate CI tokens and deploy keys, upgrade provenance verification from single-key to two-key signing, install a runtime postinstall-script monitor, and institute a 14-day no-new-transitive-dependency freeze.
- For NE Indiana mid-market firms with no dedicated app-sec FTE — a typical Fort Wayne web agency, an in-house manufacturer app team, a healthcare-adjacent SaaS shop — the plan is two days of focused work, not a six-week engagement.
- A Secure AI Gateway-style egress chokepoint catches outbound command-and-control traffic from a poisoned dependency that would otherwise look like a normal API call.
- Indiana notification obligations apply if customer data was exfiltrated. Document the response timeline now, before you need it.
What is the Shai-Hulud npm worm and why does “valid provenance” matter?
The original Shai-Hulud incident — named after the giant sandworm of Frank Herbert's Dune — surfaced in late 2025 as a self-replicating npm worm that stole maintainer credentials from infected developer workstations and used them to push poisoned package updates downstream. The 2026-05-12 wave reported by VentureBeat is the next-generation iteration, and the change in technique is the part that matters. Earlier worm waves left fingerprints in the provenance chain: the malicious build came from a different runner, a different repo, or a different signing identity, and any mid-market team with a moderately tight CI/CD audit checklist could catch it by inspecting the provenance metadata. Shai-Hulud's new wave reportedly rides on valid provenance — the build came from a legitimate runner, signed with a legitimate key, in a legitimate repo, after the attacker compromised the maintainer's upstream identity rather than fabricating one downstream.
That distinction matters because npm's provenance feature, documented in detail by the npm Docs, is built on Sigstore and is the cornerstone of the standard mid-market supply chain defense. It establishes “a verifiable record of where and how a package was built” — federating with OIDC providers, verifying build information from the OIDC token, and logging the signing certificate to an immutable transparency ledger. Most defenders treat the presence of a valid provenance attestation as the end of the audit. Shai-Hulud forces the audit to keep going past the green badge.
The reason this hits NE Indiana mid-market firms harder than enterprise is structural. Enterprise dev shops have a dedicated app-sec function that runs a second layer of behavioral analysis on top of provenance checks. A 12-developer Fort Wayne web agency does not. A 30-person in-house app team at an Auburn-area injection-molding plant does not. A healthcare-adjacent SaaS shop with a HIPAA Business Associate Agreement and four engineers does not. The whole point of npm provenance was to bring a meaningful supply chain control within reach of those teams. The whole point of Shai-Hulud is that an attacker noticed the same thing.
The general shape of the new pattern matches the OWASP CI/CD Top 10's two highest-relevance categories for this incident — CICD-SEC-3 (Dependency Chain Abuse) and CICD-SEC-4 (Poisoned Pipeline Execution) — both of which OWASP authors Daniel Krivelevich and Omer Gil explicitly built around the dependency-confusion and Codecov-class incidents. Shai-Hulud is the 2026 lineal descendant of those incidents, with the addition that the integrity-validation layer (CICD-SEC-9) is now part of the attack surface, not part of the defense.

What is the 6-step Shai-Hulud incident response plan for NE Indiana dev teams?
The plan is six steps in the order we recommend executing them. Run them in order — the early steps reduce the blast radius while the later steps close the longer-term gap. Total wall-clock time for a 12-to-30-developer Fort Wayne shop is roughly two working days plus a 14-day soak.
Step 1 — Freeze pnpm, npm, and PyPI installs; rotate anything fetched in the last 30 days
The first action is to stop new poisoned packages from landing on developer workstations or build runners. Pause CI installs in your pipeline (GitHub Actions, GitLab CI, or whatever you run), pause local installs on developer machines, and identify every package version pulled in the last 30 days for review. For Node projects, this looks like:
```bash
# List every dependency added or updated in the last 30 days
npm ls --all --json > /tmp/deps.json
git log --since="30 days ago" -- package-lock.json pnpm-lock.yaml

# pnpm equivalent
pnpm ls --recursive --depth Infinity --json > /tmp/deps-pnpm.json
```
For Python projects:
```bash
# pip-audit reads from your installed environment, requirements.txt, or pyproject.toml
pip install pip-audit
pip-audit --strict --requirement requirements.txt
git log --since="30 days ago" -- poetry.lock requirements.txt
```
The output is the candidate list you cross-reference in Step 2. Do not skip the 30-day window because Shai-Hulud's earlier waves used time-delayed activation logic — a package fetched on day 5 might not start beaconing until day 20.
Step 2 — Audit lockfiles against the canonical 172-package list
This is the step that depends on the canonical list of 172 affected packages, which the VentureBeat reporting publishes as the authoritative reference. Pull the canonical list, then grep your lockfiles for any name-version pair that appears:
```bash
# Save the canonical list (one name@version per line) to ./shai-hulud-172.txt
# Then grep package-lock.json and pnpm-lock.yaml
grep -F -f ./shai-hulud-172.txt package-lock.json pnpm-lock.yaml poetry.lock
```
If you have a hit, treat the entire build that produced that artifact as compromised. The remediation is not “upgrade the package” — it is “rebuild the artifact from a known-clean base, then verify the rebuild with two independent signatures (Step 4).” If you have no hits, you are not in the clear; you are in the not-yet-detected state, which is why the remaining four steps still apply.
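The grep above is the fast first pass, but it can produce substring false positives (a package name that happens to appear inside another name or inside a URL). A short script that parses the lockfile and matches exact name-plus-version pairs is more reliable. This is a minimal sketch, assuming an npm lockfile v2/v3 layout (the `packages` map keyed by `node_modules/` paths) and the canonical list saved as one `name@version` per line; the function name is ours, not from any tool:

```python
import json

def find_compromised(lockfile_path, bad_list_path):
    """Cross-reference a package-lock.json against a name@version list.

    bad_list_path: one "name@version" entry per line (e.g. the canonical
    172-package list from Step 2). Matches exact name+version pairs,
    avoiding the substring false positives a plain grep can produce.
    """
    with open(bad_list_path) as f:
        bad = {line.strip() for line in f if line.strip()}

    with open(lockfile_path) as f:
        lock = json.load(f)

    hits = []
    # npm lockfile v2/v3 keeps entries under "packages", keyed by paths
    # like "node_modules/left-pad"; the root project entry has key "".
    for path, meta in lock.get("packages", {}).items():
        if not path:
            continue  # skip the root project entry
        name = path.split("node_modules/")[-1]
        pair = f"{name}@{meta.get('version', '')}"
        if pair in bad:
            hits.append(pair)
    return sorted(hits)
```

For `poetry.lock` and `pnpm-lock.yaml` you would swap in the corresponding parser; the exact-match principle is the same.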
Step 3 — Rotate GitHub Actions OIDC tokens and deploy keys
If you publish from GitHub Actions, your OIDC tokens are the credentials Shai-Hulud is structurally interested in. Rotate the GitHub App credentials, regenerate any long-lived tokens in your repo or org secret stores, and rotate any cloud deploy keys (AWS access keys, Vercel deploy hooks, Cloudflare API tokens, Render deploy secrets) that were exposed in CI within the 30-day window. The credential-rotation discipline here overlaps directly with the framework we laid out in the credential attack vector on AI coding agents writeup — the principle is identical, the only difference is the threat actor's entry point.
GitHub Actions OIDC token rotation is a UI operation; the harder step is making sure the rotated tokens are picked up by every consuming runner, every connected cloud, and every linked deployment platform. For a 12-developer Fort Wayne agency, that audit is half a day. For a 30-developer in-house app team with three cloud accounts, it is a full day.
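Verifying that the rotation actually happened everywhere is easier with a script than with a spreadsheet. The GitHub REST API endpoint `GET /repos/{owner}/{repo}/actions/secrets` returns each secret's `updated_at` timestamp, so you can flag anything not re-created after rotation day. The filtering function below is a sketch of that check; the payload shape matches the documented API response, but the surrounding workflow (how you fetch per repo, how you page results) is left as an assumption:

```python
import datetime

def stale_secrets(secrets_payload, rotated_after):
    """Given the JSON payload of GET /repos/{owner}/{repo}/actions/secrets,
    return the names of secrets whose last update predates the rotation
    cutoff -- i.e., secrets that were missed during Step 3."""
    stale = []
    for s in secrets_payload.get("secrets", []):
        # GitHub timestamps are ISO 8601 with a trailing "Z"
        updated = datetime.datetime.fromisoformat(
            s["updated_at"].replace("Z", "+00:00"))
        if updated < rotated_after:
            stale.append(s["name"])
    return stale
```

Run it once per repo after the rotation pass; an empty result is your written evidence that the rotation reached every secret store.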
Step 4 — Upgrade provenance verification from green-badge to two-key signing
The structural fix Shai-Hulud forces is to stop treating provenance as a single point of trust. npm audit signatures will tell you which dependencies have verified registry signatures and attestations, per the npm provenance docs. That is a necessary baseline. The next layer is an organizational publisher allow-list — for each direct dependency, you declare the expected publisher identity and reject any package update that doesn't match, even if its provenance attestation is otherwise valid. The Sigstore transparency log gives you the audit trail; your allow-list gives you the policy.
For the typical NE Indiana mid-market shop, the practical implementation is a JSON manifest checked into the repo:
```json
{
  "trusted_publishers": {
    "next": { "registry_signature": true, "publisher_id": "vercel" },
    "react": { "registry_signature": true, "publisher_id": "fb" },
    "@aws-sdk/client-s3": { "registry_signature": true, "publisher_id": "aws" }
  },
  "policy": "fail-closed"
}
```

A pre-install hook reads the manifest and rejects any installation that violates it. This is the layer of defense that the green-badge era encouraged shops to skip.
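The policy evaluation inside that hook is small. A minimal sketch, assuming the manifest shape above: `observed_publisher` and `signature_ok` would come from registry metadata and `npm audit signatures` output in a real hook, and the function name is ours for illustration:

```python
def check_publisher(manifest, package, observed_publisher, signature_ok):
    """Evaluate one package against the trusted-publisher manifest.

    Returns True to allow the install, False to reject. Under the
    "fail-closed" policy, any package without a manifest entry is rejected.
    """
    rule = manifest["trusted_publishers"].get(package)
    if rule is None:
        # fail-closed: unlisted packages are rejected outright
        return manifest.get("policy") != "fail-closed"
    if rule["registry_signature"] and not signature_ok:
        return False  # signature required but missing or invalid
    return observed_publisher == rule["publisher_id"]
```

The fail-closed branch is the important design choice: a new transitive dependency that nobody has reviewed yet should block the install, not slip through because nobody wrote a rule for it.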
Step 5 — Install a runtime postinstall-script monitor
Even with Steps 1–4, you want one more layer: a runtime monitor that observes what npm and pip do during install — what files they touch, what network connections they open, what processes they spawn. The shape of the control is exactly what CISA's ICT supply chain security guidance recommends as the “monitor, don't just gate” layer of defense.
The Cloud Radix Secure AI Gateway sits in this role for the AI agent surface — it watches outbound calls from any process running on a developer workstation or build runner and flags any destination that is not on the organizational allow-list. The reason that matters for Shai-Hulud is that a poisoned postinstall script will almost always beacon out to a command-and-control endpoint, and that beacon is the loudest detectable signal the worm makes. A gateway chokepoint catches it; a green-badge audit does not.
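The core of any egress chokepoint is the allow-list decision itself. This sketch shows the shape of that check; the host names in the allow-list are hypothetical examples of what a dev-tooling allow-list might contain, not a recommended set:

```python
from urllib.parse import urlparse

# Hypothetical organizational allow-list for developer tooling egress
ALLOWED_HOSTS = {
    "registry.npmjs.org",
    "pypi.org",
    "files.pythonhosted.org",
    "api.github.com",
}

def egress_allowed(url, allow_list=ALLOWED_HOSTS):
    """Return True if an outbound URL's host (or a subdomain of an
    allowed host) is on the allow-list. A postinstall beacon to an
    unknown command-and-control host fails this check."""
    host = urlparse(url).hostname or ""
    return any(host == a or host.endswith("." + a) for a in allow_list)
```

A production gateway adds logging, alerting, and a break-glass override on top, but the detection logic is this one predicate: the beacon's destination is not on the list, so the beacon is the alarm.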
Step 6 — Institute a 14-day no-new-transitive-dependency freeze
Cap the response with a soak: for 14 days, no new transitive dependency gets added to any net-new business app, and any direct dependency update goes through a manual two-developer review. This is the un-glamorous step that most shops skip and that has the highest cost-to-benefit ratio across the whole plan. Fourteen days is short enough that product velocity recovers; it is long enough for the registry, the security community, and the canonical list to converge on the actual blast radius.
This step is also where you re-baseline your dependency posture. A typical Fort Wayne shop running this freeze finds two to four packages that should have been removed years ago, one or two packages whose maintainer status had quietly deteriorated, and a handful of ^-range version specifiers that should be pinned to specific patch versions. Those finds compound — and the next Shai-Hulud-class incident is easier to absorb.
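Pinning those `^`-range specifiers is mechanical enough to script. A minimal illustration follows; in practice you would pin to the versions actually resolved in the lockfile rather than simply stripping the range operator, and the function name is ours:

```python
import json
import re

def pin_ranges(package_json_text):
    """Rewrite ^ and ~ range specifiers in a package.json to exact pins.

    Simplified: strips the leading range operator. A real pass would
    look up each resolved version in the lockfile instead.
    """
    pkg = json.loads(package_json_text)
    for section in ("dependencies", "devDependencies"):
        for name, spec in pkg.get(section, {}).items():
            pkg[section][name] = re.sub(r"^[\^~]", "", spec)
    return json.dumps(pkg, indent=2)
```

With pins in place, any future dependency change shows up as an explicit lockfile diff in code review, which is exactly the visibility the 14-day freeze is meant to institutionalize.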

How does Shai-Hulud play out for a Fort Wayne web agency, an in-house manufacturer app team, and a healthcare-adjacent SaaS shop?
The action plan is shape-invariant, but the implementation realities differ by shop profile. The three NE Indiana scenarios we work with most are worth naming explicitly because the response cadence is different in each.
Fort Wayne web agency (8–25 developers, mixed-client portfolio). The agency typically runs Next.js, Astro, or WordPress-with-headless-CMS for 30 to 80 client sites, deploys to Vercel or Cloudflare Pages, and publishes a couple of internal npm packages for shared component libraries. The fastest path to remediation is to freeze installs on Monday, audit every active client's lockfile against the canonical list by Tuesday, rotate Vercel and Cloudflare deploy hooks Wednesday, and stand up the two-key publisher allow-list across the agency's internal packages over the rest of the week. Client notification is the strategic question: any client whose lockfile contained a flagged package gets a written notification that conforms to whatever data breach addendum sits in the master services agreement. The agency's professional liability carrier will want a written timeline; producing one mid-incident is half as hard as producing one a week later.
In-house manufacturer app team (12–30 developers, an Auburn-area metal-stamping or injection-molding plant). The team typically runs an internal MES dashboard, a customer ordering portal, a shop-floor IoT ingestion pipeline, and a couple of integrations to a SAP or NetSuite ERP. The dependency surface is smaller per service but wider in total. The remediation order is reversed from the agency: rotate first (because the ERP and IoT credentials are the high-value secrets), audit second, freeze third. Operational considerations include coordination with the OT (operational technology) team — any rotated cloud credential that talks to the shop-floor PLC layer needs a maintenance window. We covered the broader manufacturer-AI governance shape in the Fort Wayne manufacturers' SAP AI governance playbook; the Shai-Hulud response folds in cleanly on top.
Healthcare-adjacent SaaS shop (4–10 engineers, HIPAA BAA in place). The smallest team has the highest regulatory exposure. The HIPAA Security Rule does not name npm provenance as a control, but the breach notification rule kicks in if the worm exfiltrated any PHI from the SaaS environment during the soak window. The remediation has to be sequenced with a written incident-response document that the HIPAA security officer can sign. Indiana's data breach notification statute applies in parallel for any non-PHI consumer data exposure; the Indiana Attorney General's Consumer Protection Division is the relevant state notification authority, and the practical cadence is a written notification within 45 days of the discovery of any unauthorized disclosure. A 4-engineer team usually needs an outside hand to run Steps 4–6; Steps 1–3 are achievable in-house.
In all three scenarios, the shape of the response is the same as what we laid out for the dev-team-level data leak class in the Fort Wayne vibe-coded shadow AI S3 data leak playbook — speed of action matters more than perfection of action. The shops we have watched handle these incidents well were the ones that started Step 1 within four hours of the public disclosure, not the ones that started Step 1 the following Monday.

How does this fit with the AI agent supply chain attacks we already cover?
A subtle point worth making for any IT director who reads both the Shai-Hulud headline and our existing AI-supply-chain coverage: these are two distinct attack classes, and the defense investments don't fully transfer. The Anthropic Skill scanners writeup covered the AI extension supply chain — an attacker poisons a Skill or MCP extension that an AI agent then trusts. The confused-deputy AI agent audit matrix covered the delegation-of-authority failure class — an agent acts on an unauthorized request using its own legitimate credentials. The zero-trust AI agents and credential isolation playbook covered the runtime credential surface — what happens when an agent gets compromised mid-run.
Shai-Hulud is upstream of all of them. It hits the standard developer dependency channel that the AI agent surface and the rest of the application surface both consume. A Shai-Hulud-poisoned package that lands in your Next.js project will be inside both your customer-facing app and your AI agent backend, because both pull from the same lockfile. The defenses are layered: provenance verification at the dependency layer (this post), agent identity and credential isolation at the runtime layer (the zero-trust piece), authority and approval boundaries at the action layer (the confused-deputy matrix). A mid-market shop that defends one layer and not the others is exposed at the layers it skipped. The same threat actor that poisoned the upstream dependency would not have to do much more work to weaponize the AI agent that consumes it.
The AI agent surface adds one specific complication: agentic systems often run with broader scopes than the human user who triggered them, and they call out to more endpoints. The OWASP Top 10 for LLM Applications 2025 calls this LLM06 — Excessive Agency — and a Shai-Hulud-class incident that lands inside an agentic codebase amplifies that excessive agency by giving the attacker a quiet outbound channel that looks like normal agent egress. The NIST AI Risk Management Framework's Map/Manage/Measure functions are the right organizing scaffold here: Map the dependency surface, Manage the rotation cadence, Measure runtime egress against baseline.
How does Cloud Radix run this in production?
The honest answer is that the Cloud Radix Secure AI Gateway is one chokepoint, not the whole solution. The gateway sits between the agent runtime and the outside world and enforces the egress allow-list — it catches the postinstall-script beacon a poisoned package would emit at Step 5 of the plan. It does not run the lockfile audit; it does not rotate the OIDC tokens; it does not maintain the publisher allow-list. Those steps are dev-team work, not gateway work. The reason we ship the gateway is that the egress monitoring layer is the one most mid-market teams do not have time to build themselves and is the layer that catches the worm even when an earlier defense has already failed.
For the AI Employee side of the stack, Cloud Radix AI Employees ship behind the gateway by default — the agent's outbound calls are policy-bound at the boundary, not at the model. That means a poisoned dependency that landed inside the agent's runtime cannot quietly call out to a command-and-control host even if the runtime is fully compromised. It is not a substitute for the six-step plan above; it is the safety net that catches the case where the plan was executed late or imperfectly.
Cloud Radix offers a regional Shai-Hulud incident response audit pilot for Auburn, Fort Wayne, DeKalb, Allen, Whitley, and Noble County firms — we run the six-step plan against your current CI/CD posture, deliver a written remediation log, and (if useful) stand up the Secure AI Gateway as the egress chokepoint. The pilot is sized for 8-to-30-developer shops and is intentionally short: two days for the active response, fourteen days of monitored soak, and a one-hour debrief. Contact Cloud Radix if you want the audit run against your stack before your next compliance review.

Frequently Asked Questions
Q1. What is the Shai-Hulud npm worm?
The Shai-Hulud npm worm is a self-replicating supply-chain attack on the npm and PyPI package registries that, in its 2026-05-12 wave reported by VentureBeat, compromised 172 packages while producing valid provenance signatures. Earlier waves of the worm — first seen in late 2025 — stole maintainer credentials from infected developer workstations and used them to push poisoned package updates. The new wave is more dangerous because the malicious builds carry legitimate-looking provenance attestations, breaking the assumption that a green provenance badge means a package is safe.
Q2. Why doesn't a valid provenance signature mean a package is safe anymore?
npm provenance attests to where and how a package was built — the source repo, the build pipeline, the signing identity — federated through Sigstore and an OIDC provider. Shai-Hulud subverts the publishing identity itself rather than fabricating a different one downstream, so the resulting build is signed correctly by the wrong code path. The provenance metadata is truthful about a process that has been compromised upstream. The fix is to combine provenance verification with an organizational publisher allow-list and runtime egress monitoring, not to abandon provenance.
Q3. Is my Fort Wayne dev shop exposed to Shai-Hulud?
If your shop installs from the public npm or PyPI registries on any developer workstation, CI runner, or production deployment runner, you are exposed by default. The exposure is not zero even if none of your direct dependencies are on the canonical 172-package list, because transitive dependencies — the packages your packages depend on — are part of the attack surface. The six-step incident response plan in this post is the recommended action regardless of whether you find a hit during the Step 2 audit.
Q4. How long does it take a mid-market team to run the 6-step plan?
For an 8-to-25-developer Fort Wayne web agency or in-house app team, the active response (Steps 1–5) is roughly two working days, and the 14-day no-new-transitive-dependency freeze (Step 6) runs in parallel with normal product work. For a 4-to-10-engineer healthcare-adjacent SaaS shop with HIPAA obligations, plan an additional day for documentation and notification work. For a 30-developer in-house manufacturer app team with operational technology integrations, plan an additional half-day for OT coordination on credential rotation.
Q5. What is the difference between Shai-Hulud and the AI-extension supply chain attacks Cloud Radix has covered?
Shai-Hulud targets the standard developer dependency channel — npm and PyPI packages — which is consumed by every application surface a shop runs, including its AI agent runtime. The AI-extension supply chain attacks Cloud Radix covered in the Anthropic Skill scanners writeup target the AI agent's extension layer specifically — Skills, MCP servers, plugins — and rely on the agent trusting an extension descriptor. The defenses are different: provenance and lockfile audit defeat Shai-Hulud at the dependency layer, while signed tool descriptors and runtime authority boundaries defeat the AI-extension class at the agent layer. A mid-market shop needs both.
Q6. Are there Indiana notification obligations if a Shai-Hulud-poisoned dependency caused a data exposure?
Yes. Indiana's data breach notification statute requires notification to affected consumers and to the Indiana Attorney General's Consumer Protection Division if personal information was, or is reasonably believed to have been, exposed without authorization. HIPAA-regulated SaaS shops have an additional, parallel set of obligations under the HIPAA Breach Notification Rule. The practical recommendation is to document the response timeline contemporaneously, even if you ultimately conclude no notification is required — a documented "we investigated and confirmed no exposure" is a much stronger defensible position than reconstructing the timeline later.
Q7. What is the role of the Secure AI Gateway in defending against Shai-Hulud?
The Secure AI Gateway sits at the egress boundary and enforces an outbound allow-list, so a poisoned package that beacons to a command-and-control host is blocked at the network boundary even when the runtime itself is compromised. The gateway does not perform the lockfile audit or rotate OIDC tokens — those are dev-team activities in Steps 2 and 3 of the action plan. The gateway is the safety net that catches Step 5's runtime-monitor layer, particularly for shops that have not built equivalent egress monitoring on their own.
Sources & Further Reading
- VentureBeat: venturebeat.com/security/shai-hulud-worm-172-npm-pypi-packages-valid-provenance-ci-cd-audit — Protect your enterprise now from the Shai-Hulud worm and npm vulnerability in 6 actionable steps.
- OWASP Foundation: owasp.org/www-project-top-10-ci-cd-security-risks — OWASP Top 10 CI/CD Security Risks.
- Sigstore / Open Source Security Foundation: sigstore.dev — A new standard for signing, verifying, and protecting software.
- npm Docs: docs.npmjs.com/generating-provenance-statements — Generating Provenance Statements.
- Cybersecurity and Infrastructure Security Agency (CISA): cisa.gov/topics/cyber-threats-and-advisories/information-communications-technology-supply-chain-security — ICT Supply Chain Security.
- OWASP GenAI Security Project: genai.owasp.org/llm-top-10 — OWASP Top 10 for LLM Applications 2025.
- State of Indiana: in.gov/attorneygeneral/consumer-protection-division — Consumer Protection Division — Indiana Attorney General.
- NIST: nist.gov/itl/ai-risk-management-framework — AI Risk Management Framework.
Run the Shai-Hulud Six-Step Audit on Your Stack
Cloud Radix runs a focused two-day Shai-Hulud incident response audit for Auburn, Fort Wayne, DeKalb, Allen, Whitley, and Noble County dev shops — written remediation log, optional Secure AI Gateway standup, fourteen-day soak.