The next phase of software supply chain security isn’t about better inventory management—it’s the realization that distributed microservices architectures expand an application’s “supply chain” beyond the walls of isolated, monolithic containers to a dynamic graph of interconnected services working in concert.
Kate Stewart, co-founder of SPDX and one of the most influential voices in software supply chain security, discovered this firsthand while developing SPDX 3.0. Users were importing SBOMs into databases and asking interconnection questions that the legacy format couldn’t answer. Her key insight: “It’s more than just software now, it really is a system.” The goal became transforming the SBOM format into a graph-native data structure that captures the complex interdependencies between constellations of services.
In a recent interview with Anchore’s Director of Developer Relations on the Future of SBOMs, Stewart shared insights shaped by decades of in-the-trenches collaboration with SBOM users and of shaping SBOM standards around ground-truth needs. Her perspective is uniquely suited to illuminate the challenge of adapting traditional security models, designed for fully self-contained applications, to the world of distributed microservices architectures.
The architectural evolution from monolithic, containerized application to interconnected constellations of single-purpose services doesn’t just change how software is built—it fundamentally changes what we’re trying to secure.
When Software Became Systems
In the containerized monolith era, traditional SBOMs (think: SPDX < 2.2) were perfectly suited for their purpose. They were designed for self-contained applications with clear boundaries where everything needed was packaged together. Risk assessment was straightforward: audit the container, secure the application.
Thing to scan 👇
================
+-------------------------------------------------+
|                    Container                    |
|  +-------------------------------------------+  |
|  |          Monolithic Application           |  |
|  |   +----------+ +---------+ +----------+   |  |
|  |   | Frontend | | Backend | | Database |   |  |
|  |   +----------+ +---------+ +----------+   |  |
|  +-------------------------------------------+  |
+-------------------------------------------------+
                 [ User ]
                    |
                    v
              +------------+
              |  Frontend  | (container) 👈 Thing...
              +------------+
                    |
                    v
             +--------------+
             |  API Server  | (container) 👈 [s]...
             +--------------+
                 /      \
                v        v
        +----------+     +--------+
        | Auth Svc |     | Orders | (container) 👈 to...
        +----------+     +--------+
                \          /
                 v        v
              +------------+
              |  Database  | (container) 👈 scan.
              +------------+
But the distributed architecture movement changed everything. Cloud-native architectures spread components across multiple domains. Microservices created interdependencies that span networks, data stores, and third-party services. AI systems introduced entirely new categories of components including training data, model pipelines, and inference endpoints. Suddenly, the neat boundaries of traditional applications dissolved into complex webs of interconnected services.
Even with this evolution in software systems, the fundamental question of software supply chain security hasn’t changed. Security teams still need to know “what showed up; at what point in time AND do it at scale.” The new challenge is that system complexity has exploded exponentially and legacy SBOM standards weren’t built for it.
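To make that question concrete, here is a minimal sketch in Python of the data structure it implies: an append-only index of SBOM snapshots per service, queryable for any moment in history. The class and names below are illustrative, not taken from any real tool.

import bisect
from collections import defaultdict

class SBOMTimeline:
    """Append-only index of SBOM snapshots, queryable at any point in time."""

    def __init__(self):
        # service name -> time-sorted list of (unix_timestamp, sbom_digest)
        self._snapshots = defaultdict(list)

    def record(self, service, timestamp, sbom_digest):
        bisect.insort(self._snapshots[service], (timestamp, sbom_digest))

    def at(self, service, timestamp):
        """Return the SBOM digest that was live for `service` at `timestamp`."""
        snaps = self._snapshots[service]
        times = [t for t, _ in snaps]
        i = bisect.bisect_right(times, timestamp)
        return snaps[i - 1][1] if i else None

timeline = SBOMTimeline()
timeline.record("api-server", 1_700_000_000, "sha256:aaa...")
timeline.record("api-server", 1_700_086_400, "sha256:bbb...")
print(timeline.at("api-server", 1_700_050_000))  # -> sha256:aaa...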
Supply chain risk now flows through connections, not just components. Understanding what you’re securing requires mapping relationships, not just cataloging parts.
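A small sketch makes the distinction concrete. Using the open-source networkx library and the hypothetical service names from the diagram above, the component view is just the node list; the risk that matters lives in the edges:

import networkx as nx

# Model the distributed application as a directed graph of service-to-service calls.
system = nx.DiGraph()
system.add_edges_from([
    ("frontend", "api-server"),
    ("api-server", "auth-svc"),
    ("api-server", "orders"),
    ("auth-svc", "database"),
    ("orders", "database"),
])

# Component view (the old SBOM mindset): a flat inventory of parts.
print(sorted(system.nodes))

# Relationship view: everything a compromised api-server can reach downstream.
print(nx.descendants(system, "api-server"))  # {'auth-svc', 'orders', 'database'}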
But if the structure of risk has changed, so has the nature of the vulnerabilities themselves.
Where Tomorrow’s Vulnerabilities Will Hide
The next generation of critical vulnerabilities won’t just be in code—they’ll emerge from the connections and interactions within complex webs of software services.
Traditional security models relied on a castle-and-moat approach: scan containers at build time, stamp them for clearance, and trust them within the perimeter. But distributed architectures expose the fundamental flaw in this thinking. When applications are decomposed into atomic services, the holistic application context is lost. A low-severity vulnerability in one component, allowlisted for the sake of product delivery speed, can still be exploited to alter a payload that is benign to the exploited component but disastrous to a downstream one.
The shift to interconnected services demands a zero-trust security paradigm where each interaction between services requires the same level of assurance as initial deployment. Point-in-time container scans can’t account for the dynamic nature of service-to-service communication, configuration changes, or the emergence of new attack vectors through legitimate service interactions.
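A minimal sketch of what that can look like in code, where fetch_attested_digest is a hypothetical stand-in for whatever attestation mechanism a deployment actually uses: the client re-verifies its peer on every call, not only at deploy time.

# Trusted build digests per service, e.g. sourced from signed SBOM attestations.
TRUSTED_DIGESTS = {
    "auth-svc": {"sha256:aaa..."},
    "orders": {"sha256:bbb..."},
}

def call_service(service, request, fetch_attested_digest, send):
    """Refuse to talk to a peer whose current build we cannot verify."""
    digest = fetch_attested_digest(service)  # re-check on EVERY interaction...
    if digest not in TRUSTED_DIGESTS.get(service, set()):
        raise PermissionError(f"{service} presents unverified build {digest}")
    return send(service, request)            # ...not only at initial deployment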
To achieve this new security paradigm, SPDX needed a facelift. An SBOM that can store the entire application context across independent services is sometimes called a SaaSBOM. SPDX 3.0 implements this idea through a new concept called profiles: an application profile can be built from a collection of individual service profiles, while operations and infrastructure profiles capture data about the build and runtime environments.
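As a rough illustration of that composition (a schematic sketch only, not the normative SPDX 3.0 serialization; every field name below is invented), a system-level SBOM joins per-service SBOM fragments with the relationships and environment data that connect them:

def compose_system_sbom(service_sboms, service_links, environment):
    """Join per-service SBOM fragments into one system-level document."""
    return {
        "documentType": "system-sbom",        # the "SaaSBOM" in spirit
        "services": service_sboms,            # one software-level SBOM per service
        "relationships": [                    # the graph edges between services
            {"from": src, "to": dst, "type": "callsService"}
            for src, dst in service_links
        ],
        "environment": environment,           # build/runtime context
    }

system_sbom = compose_system_sbom(
    service_sboms={"auth-svc": {"packages": []}, "orders": {"packages": []}},
    service_links=[("api-server", "auth-svc"), ("api-server", "orders")],
    environment={"runtime": "kubernetes-1.30", "builder": "ci-runner-v2"},
)

The point is only the shape: each service stays independently described, and the system-level document adds the edges between them plus the operational context.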
Your risk surface isn’t just your code anymore—it’s your entire operational ecosystem, from hardware component suppliers to data providers to third-party cloud services.
Understanding these expanding risks requires a fundamental shift from periodic snapshots (i.e., castle-and-moat posture) to continuous intelligence (i.e., zero-trust posture).
From Periodic Audits to Continuous Risk Intelligence
The shift to zero-trust architectures requires more than just changing security policies—it demands a fundamental reimagining of how we monitor and verify the safety of interconnected systems in real-time.
Traditional compliance operates on snapshot thinking: quarterly audits, annual assessments, point-in-time inventories. This approach worked when applications were monolithic containers that changed infrequently. But when services communicate continuously across network boundaries, static assessments become obsolete before they’re complete. By the time audit results are available, dozens of deployments, configuration changes, and scaling events have already altered the system’s risk profile.
Kate Stewart’s vision of “continuous compliance” addresses this fundamental mismatch between static assessment and dynamic systems. S(ystem)BOMs capture dependencies and their relationships in real time as they evolve, enabling automated policy enforcement that can keep pace with DevOps-speed development. This continuous visibility means teams can verify that each service-to-service interaction maintains the same security assurance as initial deployment, fulfilling the zero-trust requirement.
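What inline policy enforcement can mean in practice is easiest to see in miniature. A minimal sketch, assuming a hypothetical policy expressed as a set of approved service-to-service connections, run on every deployment event rather than every quarter:

# Policy: the only service-to-service connections this system may have.
APPROVED_EDGES = {
    ("frontend", "api-server"),
    ("api-server", "auth-svc"),
    ("api-server", "orders"),
    ("auth-svc", "database"),
    ("orders", "database"),
}

def check_deployment(observed_edges):
    """Return the set of edges that violate policy; empty means compliant."""
    return set(observed_edges) - APPROVED_EDGES

# In a pipeline this gate runs on every deploy, not quarterly:
violations = check_deployment([("frontend", "api-server"), ("orders", "billing-svc")])
if violations:
    raise SystemExit(f"unapproved service connections: {violations}")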
The operational transformation is profound. Teams can understand blast radius immediately when incidents occur, tracing impact through the actual dependency graph rather than outdated documentation. Compliance verification happens inline with development pipelines rather than as a separate audit burden. Most importantly, security teams can identify and address misconfigurations or policy violations before they create exploitable vulnerabilities.
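Blast radius falls out of the same graph. A short sketch, again with networkx, reusing the hypothetical service graph from the earlier example:

import networkx as nx

def blast_radius(graph, compromised):
    """Trace impact both ways through the live dependency graph."""
    return {
        "downstream": nx.descendants(graph, compromised),  # services it can reach
        "upstream": nx.ancestors(graph, compromised),      # services that rely on it
    }

# With the service graph from the earlier sketch:
#   blast_radius(system, "auth-svc")
#   -> {"downstream": {"database"}, "upstream": {"api-server", "frontend"}}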
This evolution transforms security from a periodic checkpoint into continuous strategic intelligence, turning what was once a cost center into a competitive advantage that enables faster, safer innovation.
The Strategic Imperative—Why This Matters Now
Organizations that adapt to system-level visibility will have decisive advantages in risk management, compliance, and operational resilience as the regulatory and competitive landscape evolves.
The visibility problem remains foundational: you can’t secure what you can’t see. Traditional tools provide component-level visibility, but emergent system risks only surface through relationship mapping. Kate emphasizes this point, noting that “safety is a system property.” If you want to achieve system-level guarantees of security or risk, being able to see only the trees and not the forest won’t cut it.
Regulatory evolution is driving urgency around this transition. Emerging regulations (e.g., EO 14028, the EU CRA, DORA, FedRAMP) increasingly focus on system-level accountability, making organizations liable for the security of entire systems, including interactions with trusted third parties. Evidence requirements are evolving from point-in-time documentation to continuously demonstrable evidence, as seen in initiatives like FedRAMP 20x. Audit expectations are moving toward continuous verification rather than periodic assessment.
Competitive differentiation emerges through comprehensive risk visibility that enables faster, safer innovation. Organizations achieve reduced time-to-market through automated compliance verification. Customer trust builds through demonstrable security posture. Operational resilience becomes a competitive moat in markets where system reliability determines business outcomes.
Business continuity integration represents perhaps the most significant strategic opportunity. Security risk management aligns naturally with business continuity planning. System understanding enables scenario planning and resilience testing. Risk intelligence feeds directly into business decision-making. Security transforms from a business inhibitor into an enabler of agility.
This isn’t just about security—it’s about business resilience and agility in an increasingly interconnected world.
The path forward requires both vision and practical implementation.
The Path Forward
The transition from S(oftware)BOMs to S(ystem)BOMs represents more than technological evolution—it’s a fundamental shift in how we think about risk management in distributed systems.
Four key insights emerge from this evolution.
- Architectural evolution demands corresponding security model evolution—the tools and approaches that worked for monoliths cannot secure distributed systems.
- Risk flows through connections, requiring graph-based understanding that captures relationships and dependencies.
- Continuous monitoring and compliance must replace periodic audits to match the pace of modern development and deployment.
- System-level visibility becomes a competitive advantage for organizations that embrace it early.
Organizations that make this transition now will be positioned for success as distributed architectures become even more complex and regulatory requirements continue to evolve. The alternative—continuing to apply monolithic security thinking to distributed systems—becomes increasingly untenable.
The future of software supply chain security isn’t about better inventory—it’s about intelligent orchestration of system-wide risk management.
If you’re interested in how to make the transition from generating static software SBOMs to dynamic system SBOMs, check out Anchore SBOM or reach out to our team to schedule a demo.