Seven steps to AI supply chain visibility — before a breach forces the issue

Four out of 10 enterprise applications will include task-specific AI agents this year. Yet research from Stanford University’s 2025 AI Index report shows that only 6% of organizations have an advanced AI security strategy.

Palo Alto Networks predicts that 2026 will bring the first major lawsuit holding executives personally liable for rogue AI actions. Many organizations are struggling to keep pace with the rapid, unpredictable nature of AI threats, and the governance gap does not yield to quick fixes such as bigger budgets or more headcount.

Few organizations have visibility into how, where, when, and through which workflows and tools LLMs are being used or modified. Model SBOMs are the Wild West of governance today, one CISO told VentureBeat. Without visibility into which models are running where, AI security is reduced to guesswork – and incident response becomes impossible.

Over the past several years, the US government has moved to mandate SBOMs for all software it acquires. AI models need the equivalent even more urgently, and the lack of progress on that front is one of the most significant risks in enterprise AI today.

The visibility gap is the vulnerability

Harness surveyed 500 security practitioners in the US, UK, France and Germany, and the findings should concern every CISO: 62% of their peers have no way of telling where LLMs are in use across their organizations. Greater rigor and transparency at the SBOM level is needed to trace models, data usage, integration points and usage patterns by department.

Enterprises continue to report rising levels of prompt injection (76%), insecure LLM code (66%) and jailbreaking (65%). These are among the most damaging attack methods adversaries use against an organization’s AI models and LLM initiatives. Despite spending millions on cybersecurity software, many organizations cannot see these intrusion attempts because they are wrapped in stealth techniques and attack tradecraft that legacy perimeter systems cannot detect.

“Shadow AI has become the new enterprise blind spot,” said Adam Arellano, Field CTO at Harness. “Traditional security tools were built for static code and predictable systems, not adaptive, learning models that evolve daily.”

IBM’s Cost of a Data Breach 2025 report puts a price on the problem: 13% of organizations reported a breach of AI models or applications last year, and 97% of those breaches involved systems lacking AI access controls. One in five reported breaches was due to shadow AI or unauthorized AI use, and shadow AI incidents cost $670,000 more than comparable baseline intrusions. When no one knows which model runs where, incident response cannot even scope the impact.

Why software SBOMs stop short of the model file

Executive Order 14028 (2021) and OMB Memorandum M-22-18 (2022) require federal vendors to have software SBOMs. NIST’s AI Risk Management Framework, released in 2023, explicitly calls for AI-BOMs as part of its “map” function, acknowledging that traditional software SBOMs do not capture model-specific risks. But software dependencies are resolved at build time and remain stable.

In contrast, model dependencies are resolved at runtime, often pulling weights from HTTP endpoints during initialization, and they change constantly through retraining, fine-tuning and feedback loops. LoRA adapters modify weights without version control, making it impossible to track which model version is actually running in production.

Why this matters to security teams: When AI models are saved in pickle format, loading them is like opening an email attachment that executes code on your computer – except these files are trusted by default in production systems.

A PyTorch model saved this way is a serialized Python object stream that is deserialized – and executed – when loaded. When torch.load() runs, the pickle opcodes are executed sequentially, and any callable embedded in the stream becomes active. That can include os.system() calls, network connections and reverse shells.
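
The mechanism is easy to demonstrate. In the deliberately benign sketch below (illustrative only, not an exploit), a class’s __reduce__ method instructs pickle to call os.system() during deserialization – exactly the hook a malicious model file abuses:

```python
import os
import pickle

# Benign illustration: __reduce__ tells pickle to "rebuild" this object
# by calling os.system with an argument of the author's choosing.
class NotAModel:
    def __reduce__(self):
        return (os.system, ("echo arbitrary code ran at load time",))

payload = pickle.dumps(NotAModel())

# Merely deserializing the bytes runs the embedded command -- nothing on
# the resulting object ever has to be called explicitly.
pickle.loads(payload)
```

Recent PyTorch releases mitigate this by defaulting torch.load() to weights_only=True, which restricts deserialization to tensor data, but older checkpoints and older library versions still walk the full pickle path.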

SafeTensors, an alternative format that stores only numerical tensor data with no executable code, addresses the inherent risks of pickle. Still, migration means rewriting load functions, re-verifying model accuracy and potentially losing access to older models whose original training code no longer exists. That engineering effort, not policy, is one of the primary factors slowing adoption in many organizations.
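
For teams weighing that migration, the mechanical conversion itself is usually small – the real cost is the re-verification and the edge cases described above. A minimal sketch, assuming a plain PyTorch state dict and the safetensors package (file names are illustrative):

```python
import torch
from safetensors.torch import load_file, save_file

# One-time conversion: load the legacy pickle checkpoint in a trusted,
# isolated environment, then re-save only the numerical tensors.
state_dict = torch.load("legacy_model.pt", map_location="cpu", weights_only=True)
save_file(state_dict, "model.safetensors")

# From here on, production code never touches the pickle path at all.
safe_state_dict = load_file("model.safetensors", device="cpu")
# model.load_state_dict(safe_state_dict)  # then re-verify accuracy, as above
```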

Model files are not passive artifacts – they are executable supply chain entry points.

Standards have been in place for years, but adoption continues to be delayed. CycloneDX 1.6 added ML-BOM support in April 2024; SPDX 3.0, released the same month, includes AI profiles. ML-BOMs complement but do not replace documentation frameworks such as model cards and datasheets for datasets, which focus on performance characteristics and training data ethics rather than supply chain provenance. Enterprises continue to lag in adoption even as this part of the supply chain becomes a growing threat to models and LLMs.

A June 2025 Lineaje survey found that 48% of security professionals admit their organizations are falling behind on SBOM requirements. ML-BOM adoption is lower still.

Bottom line: The tooling exists. What is lacking is operational urgency.

AI-BOMs enable response, not prevention

AI-BOMs are forensics, not firewalls. When ReversingLabs discovered the nullifAI-compromised models, documented provenance would have immediately identified which organizations had downloaded them. That knowledge is invaluable for incident response and practically useless for prevention – a distinction that needs to be factored into AI-BOM security budgeting.

The ML-BOM tooling ecosystem is maturing rapidly, but it is not yet where software SBOM tooling is. Tools like Syft and Trivy generate complete software inventories in minutes; ML-BOM tooling is earlier on that curve. Vendors are shipping solutions, but integration and automation still require additional steps and effort. Organizations starting now may need manual processes to fill the gaps.

AI-BOMs will not prevent model poisoning, because poisoning happens during training, often before an organization has downloaded the model. They also won’t stop prompt injection, because that attack exploits the model at inference time, not its provenance. Prevention requires runtime security: input validation, prompt firewalls, output filtering and tool-call validation for agentic systems. AI-BOMs are visibility and compliance tools – valuable, but not a substitute for runtime security. CISOs and security leaders increasingly rely on both.
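
To make that visibility role concrete, the sketch below shows roughly what a minimal CycloneDX-style ML-BOM entry captures – enough to answer which models were pulled, from where, and what exactly arrived. The names, URL and hash placeholder are illustrative, and real deployments would use CycloneDX libraries or vendor tooling rather than hand-built dictionaries:

```python
import json

# A minimal CycloneDX-style ML-BOM: one machine-learning-model component
# with just enough provenance to support incident response.
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "customer-support-classifier",  # illustrative
            "version": "2025-06-11",
            "supplier": {"name": "Internal ML Platform"},
            "externalReferences": [
                {
                    "type": "distribution",
                    "url": "https://huggingface.co/example-org/example-model",  # illustrative
                }
            ],
            "hashes": [
                {"alg": "SHA-256", "content": "<sha256 of the model artifact>"}
            ],
        }
    ],
}

print(json.dumps(ml_bom, indent=2))
```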

The attack surface keeps expanding

JFrog’s 2025 Software Supply Chain Report documented more than 1 million new models arriving on Hugging Face in 2024 alone, along with a 6.5x increase in malicious models. As of April 2025, Protect AI’s scans of 4.47 million model versions had flagged 352,000 insecure or suspicious issues across 51,700 models. The attack surface is expanding faster than anyone can track it.

In early 2025, ReversingLabs discovered malicious models it dubbed "nullifAI" using stealth techniques that evaded PickleScan detection. Hugging Face responded within 24 hours, removing the models and updating PickleScan to detect similar evasion techniques – a sign that platform security is improving even as attacker sophistication increases.

“Many organizations are enthusiastically adopting public ML models to drive rapid innovation,” said Yoav Landman, CTO and co-founder of JFrog. “However, more than a third still rely on manual efforts to manage access to secure, approved models, leading to potential oversights.”

Seven Steps to AI Supply Chain Visibility

The difference between hours and weeks in AI supply chain incident response comes down to preparation. Organizations that build visibility before a breach have the insight they need to respond with speed and accuracy; those that don’t scramble. None of the following requires new budget – just a decision to take AI model governance as seriously as software supply chain security.

  1. Commit to a process for building a model inventory and keeping it current. Survey ML platform teams. Scan cloud spend for SageMaker, Vertex AI and Bedrock usage. Review Hugging Face downloads in network logs. A spreadsheet does the trick: model name, owner, data classification, deployment location, source and last validation date (a minimal sketch follows this list). You can’t protect what you can’t see.

  2. Hunt down shadow AI across the apps, tools and platforms employees already use. Survey every department. Check for API keys in environment variables. Recognize that accounting, finance and consulting teams can be running sophisticated AI apps with multiple APIs that connect directly to proprietary company data. The 62% visibility gap exists because no one asked.

  3. Require human-in-the-loop approval for every production model deployment. Every model that touches customer data needs a named owner, a documented purpose and an audit trail showing who approved the deployment. Red teams at Anthropic, OpenAI and other AI companies build human-in-the-loop approval into every model release; enterprises should do the same.

  4. Consider mandating SafeTensors for new deployments. The policy change costs nothing: SafeTensors stores only numerical tensor data, so no code executes on load. Grandfather existing pickle-format models with documented risk acceptance and sunset deadlines.

  5. Consider piloting ML-BOMs for the riskiest 20% of models first. Choose the ones that touch customer data or drive business decisions. Document architecture, training data sources, base model lineage and framework dependencies. Use CycloneDX 1.6 or SPDX 3.0 (the minimal CycloneDX sketch above shows the idea). If you haven’t started, start now – incomplete provenance still beats none when an incident unfolds.

  6. Treat every model download as a supply chain decision until it becomes part of your organization’s muscle memory. Verify cryptographic hashes before load (a minimal hash-check sketch follows this list). Cache models internally. Block network access from the model execution environment at runtime. Apply the same rigor enterprises learned from left-pad, event-stream and colors.js.

  7. Add AI governance to vendor contracts at the next renewal cycle. Require SBOMs, training data provenance, model versioning and incident notification SLAs. Ask whether your data will train future models. Asking costs nothing.
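
For step 1, the inventory can literally start as a script that writes the spreadsheet. A minimal sketch – the record fields mirror the columns listed above, and the example row is invented:

```python
import csv
from dataclasses import astuple, dataclass, fields

# Columns from step 1: model name, owner, data classification,
# deployment location, source and last validation date.
@dataclass
class ModelRecord:
    model_name: str
    owner: str
    data_classification: str
    deployment_location: str
    source: str
    last_validation_date: str

inventory = [
    ModelRecord(
        model_name="support-ticket-summarizer",
        owner="ml-platform@example.com",
        data_classification="confidential",
        deployment_location="prod-eks/us-east-1",
        source="huggingface.co/example-org/example-model",
        last_validation_date="2025-11-01",
    ),
]

with open("ai_model_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([field.name for field in fields(ModelRecord)])
    writer.writerows(astuple(record) for record in inventory)
```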
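
For step 6, the hash check itself is only a few lines. A minimal sketch – the pinned digest would come from the model inventory or ML-BOM recorded at approval time, and the file name is illustrative:

```python
import hashlib

# Recorded in the inventory / ML-BOM when the model was vetted and approved.
EXPECTED_SHA256 = "replace-with-the-digest-pinned-at-approval-time"

def verify_model_artifact(path: str, expected_sha256: str) -> None:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        raise RuntimeError(f"{path} does not match its pinned hash; refusing to load")

verify_model_artifact("model.safetensors", EXPECTED_SHA256)
# Only after this passes should the artifact be cached internally and loaded.
```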

2026 will be the year of reckoning for AI SBOM

Securing AI models is becoming a boardroom priority. EU AI Act sanctions are already in effect, with fines reaching €35 million or 7% of global revenue. EU Cyber Resilience Act SBOM requirements begin this year. Full AI Act compliance is required by August 2, 2027.

Watch the cyber insurance carriers, too. Given the $670,000 premium on shadow AI breaches and emerging executive liability risk, expect AI governance documentation to become a policy requirement this year, just as ransomware preparedness became table stakes after 2021.

The SEI Carnegie Mellon SBOM Harmonization PlugFest analyzed 243 SBOMs from 21 tool vendors for similar software and found significant variation in component counts. For AI models with embedded dependencies and executable payloads, the stakes are higher.

The first poisoned-model incident with a seven-figure response cost and regulatory fines will make the case that should already be clear.

Software SBOMs became essential after the supply chain proved to be the easiest target for attackers. AI supply chains are more dynamic, less visible, and harder to control. The only organizations that will scale AI securely are those building visibility now – before they need it.


