Welcome to the fourth installment in our 5-part series on software bill of materials (SBOMs). In our previous posts, we’ve covered SBOM fundamentals, SBOM generation, and scalable SBOM management. Now, we shift our focus to the bigger picture, exploring strategic perspectives from software supply chain thought leaders.
Understanding the evolving role of SBOMs in software supply chain security requires more than just technical knowledge—it demands strategic vision. In this post, we share insights from industry experts who are shaping the future of SBOM standards, practices, and use-cases.
Insights on SBOMs in the LLM Era
LLMs have impacted every aspect of the software industry, and software supply chain security is no exception. To understand how industry luminaries like Kate Stewart are thinking about the future of SBOMs through this evolution, watch Understanding SBOMs: Deep Dive with Kate Stewart.
This webinar highlights several key points:
LLMs pose unique transparency challenges: The emergence of large language models reduces transparency since behavior is stored in datasets and training processes rather than code
Software introspection limitations: Already difficult with traditional software, introspection becomes both harder AND more important in the LLM era
Dataset lineage tracking: Stewart draws a parallel between SBOMs for supply chain security and the need for dataset provenance for LLMs
Behavior traceability: She advocates for “SBOMs of [training] datasets” that allow organizations to trace behavior back to a foundational source
“Transparency is the path to minimizing risk.” —Kate Stewart
This perspective expands the SBOM concept beyond mere software component inventories to encompass the broader information needed for transparency in AI-powered systems.
Steve Springett, creator of the CycloneDX standard, offers a complementary perspective: SBOMs are better understood as compliance data containers than as static supply chain documents. His key points:
Content over format debates: Springett emphasizes that “content is king”—the actual data within SBOMs and their practical use-cases matter far more than format wars
Machine-readable attestations: Historically manual compliance activities can now be automated through structured data that provides verifiable evidence to auditors
Business process metadata: CycloneDX can include compliance process metadata like security training completion, going beyond component inventories
Compliance flexibility: The ability to attest to any standard, from government requirements to custom internal company policies
Quality-focused approach: Springett introduces five dimensions for evaluating SBOM completeness and a maturity model with profiles for different stakeholders (AppSec, SRE, NetSec, Legal/IP)
“The end-goal is transparency.” — Steve Springett
Echoing Kate Stewart’s belief, Springett reinforces the purpose of SBOMs as transparency tools. His perspective transforms our understanding of SBOMs from static component inventories into versatile data containers that attest to broader security and compliance activities.
Kelsey Hightower, Google’s former distinguished engineer, offers a pragmatic perspective that reframes security in developer-friendly terms. Watch Software Security in the Real World with Kelsey Hightower to learn how his “Security as Unit Tests” mental model helps developers integrate security naturally into their workflow by:
Treating security requirements as testable assertions
Using SBOMs as the source of truth for supply chain data in those tests
Integrating verification into the CI/CD pipeline
Making security outcomes measurable and reproducible
Hightower’s perspective helps bridge the gap between development practices and security requirements, with SBOMs serving as a foundational element in automated verification.
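To make the mental model concrete, here is a minimal sketch of a security requirement expressed as a pipeline “unit test.” This is our illustration using the open source Syft and Grype tools, not code from the webinar, and the image name is a placeholder:

```bash
#!/usr/bin/env bash
# Security requirement as a testable assertion: the build must ship no
# components with known critical vulnerabilities.
set -euo pipefail

# The SBOM is the source of truth for the supply chain data under test
syft myapp:latest -o cyclonedx-json > sbom.cdx.json

# The assertion: grype exits non-zero if any critical vulnerability is found
grype sbom:./sbom.cdx.json --fail-on critical

echo "PASS: no critical vulnerabilities in myapp:latest"
```

Run in CI, this behaves exactly like a failing unit test: the security outcome is measurable, reproducible, and surfaced in the developer’s existing workflow.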
As we’ve seen from these expert perspectives, SBOMs are not just a technical tool but a strategic asset that intersects with many aspects of software development and security. In our final post, we’ll explore these intersections in depth, examining how SBOMs relate to DevSecOps, open source security, and regulatory compliance.
Stay tuned for the final installment in our series, “SBOMs as the Crossroad of the Software Supply Chain,” where we’ll complete our comprehensive exploration of software bills of materials.
Welcome to the third installment in our 5-part series on software bill of materials (SBOMs)—check here for day 1 and day 2. Now, we’re leveling up to tackle one of the most significant challenges organizations face: scaling SBOM management to keep pace with the velocity of modern, DevOps-based software development. After you’ve digested this part, jump into day four, “SBOM Insights on LLMs, Compliance Attestations and Security Mental Models”.
As your SBOM adoption graduates from proof-of-concept to enterprise implementation, several critical questions emerge:
How do you manage thousands—or even millions—of SBOMs?
How do you seamlessly integrate SBOM processes into complex CI/CD environments?
How do you extract maximum value from your growing SBOM repository?
Let’s explore three powerful resources that form a roadmap for scaling your SBOM initiative across your organization.
SBOM Automation: The Key to Scale
After you’ve generated your first SBOM and discovered the value, the next frontier is scaling across your entire software environment. Without robust automation, manual SBOM processes quickly become bottlenecks in fast-moving DevOps environments.
Key benefits:
Eliminates time-consuming manual SBOM generation and analysis
Ensures consistent SBOM quality across all repositories
Enables real-time security and compliance insights
The webinar Understanding SBOMs: How to Automate, Generate & Manage SBOMs delivers practical strategies for building automation into your SBOM pipeline from day one. This session unpacks how well-designed SBOM management services can handle CI/CD pipelines that process millions of software artifacts daily.
Real-world SBOMs: How Google Scaled to 4M+ SBOMs Daily
Nothing builds confidence like seeing how industry leaders have conquered the same challenges you’re facing. Google’s approach to SBOM implementation offers invaluable lessons for organizations of any size. This case study covers:
How Google architected their SBOM ecosystem for massive scale
Integration patterns that connect SBOMs to their broader security infrastructure
Practical lessons learned during their implementation journey
This resource transforms theoretical SBOM scaling concepts into tangible strategies you can adapt for your environment. If an organization as large and complex as Google can successfully deploy an SBOM initiative at scale—you can too!
Building a scalable SBOM data pipeline with advanced features like vulnerability management and automated compliance policy enforcement represents a significant engineering investment. For many organizations, leveraging purpose-built solutions makes strategic sense.
Anchore Enterprise offers an alternative path with three integrated components:
Anchore SBOM: A turnkey SBOM management platform with enterprise-grade features
Anchore Secure: Cloud-native vulnerability management powered by comprehensive SBOM data
Anchore Enforce: An SBOM-driven policy enforcement engine that automates compliance checks
As you scale your SBOM initiative, keep one eye on emerging trends and use cases. The SBOM ecosystem continues to evolve rapidly, with new applications emerging regularly.
In our next post, we’ll explore insights from industry experts on the future of SBOMs and their strategic importance. Stay tuned for part four of our series, “SBOM Insights on LLMs, Compliance Attestations and Security Mental Models”.
This post is designed for hands-on practitioners—the engineers, developers, and security professionals who want to move from theory to implementation. We’ll explore practical tools and techniques for generating, integrating, and leveraging SBOMs in your development workflows. What you’ll learn:
A list of the 4 most popular SBOM generation tools
How to install and configure Syft
How to scan source code, a container image or a file directory to map its supply chain composition
How to generate an SBOM in CycloneDX or SPDX formats based on the supply chain composition scan
A decision framework for evaluating and choosing an SBOM generator
Generating accurate SBOMs is the foundation of your software supply chain transparency initiative. Without SBOMs, valuable use-cases like vulnerability management, compliance audit management or license management become low-value time sinks instead of efficient, value-add activities.
If you’re looking for step-by-step guides for popular ecosystems like JavaScript, Python, GitHub or Docker, follow the links (👈).
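As a quick preview of what you’ll learn, here’s a minimal sketch of the Syft workflow described above (the install command is taken from the anchore/syft project; the scan targets are placeholders):

```bash
# Install Syft via the official install script
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin

# Scan a container image and print a human-readable table of components
syft nginx:latest

# Scan the current source directory and emit a CycloneDX JSON SBOM
syft dir:. -o cyclonedx-json > sbom.cdx.json

# The same scan, emitting SPDX JSON instead
syft dir:. -o spdx-json > sbom.spdx.json
```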
Under the Hood: How SBOM Generation Works
For those interested in the gory technical details of how a software composition analysis (SCA) tool and SBOM generator actually work, How Syft Scans Software to Generate SBOMs is the perfect blog post to scratch that intellectual itch.
What you’ll learn:
The scanning algorithms that identify software components
How Syft handles package ecosystems (npm, PyPI, Go modules, etc.)
Performance optimization techniques for large codebases
Ways to contribute to the open source project
Understanding the “how” behind the SBOM generation process enables you to troubleshoot edge cases and customize tools when you’re ready to squeeze the most value from your SBOM initiative.
Pro tip: Clone the Syft repository and step through the code with a debugger to really understand what’s happening during a scan. It’s the developer equivalent of taking apart an engine to see how it works.
Advancing with Policy-as-Code
Our guide, The Developer’s Guide to SBOMs & Policy-as-Code, bridges the gap between generating SBOMs and automating the SBOM use-cases that align with business objectives. A policy-as-code strategy allows many of the use-cases to scale in cloud native environments and deliver outsized value.
What you’ll learn:
How to automate tedious compliance tasks with PaC and SBOMs
How to define security policies (via PaC) that leverage SBOM data
Integration patterns for CI/CD pipelines
How to achieve continuous compliance with automated policy enforcement
Combining SBOMs with policy-as-code creates a force multiplier for your security efforts, allowing you to automate compliance and vulnerability management at scale.
Pro tip: Start with simple policies that flag known CVEs, then gradually build more sophisticated rules as your team gets comfortable with the approach.
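Following that pro tip, a first policy can be as simple as a single CLI gate in your build script. Here’s a minimal sketch using the open source Grype scanner against an SBOM you generated with Syft (the filename is a placeholder):

```bash
# Fail the build if the SBOM contains any component with a known
# vulnerability at or above the "high" severity threshold
grype sbom:./sbom.cdx.json --fail-on high
```

Because the policy lives in the pipeline definition, tightening it later (e.g., lowering the threshold to medium) is a one-line code review instead of a process change.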
Taking the Next Step
After dipping your toes into the shallow end of SBOM generation and integration, the learning continues with an educational track on scaling SBOMs for enterprise-grade deployments. In our next post, we’ll lay out how to take your SBOM initiative from proof-of-concept to production, with insights on automation, management, and real-world case studies.
Stay tuned for part three of our series, “DevOps-Scale SBOM Management,” where we’ll tackle the challenges of implementing SBOMs across large teams and complex environments.
This blog post is the first in our 5-day series exploring the world of SBOMs and their role in securing the foundational but often overlooked 3rd-party software supply chain. Whether you’re just beginning your SBOM journey or looking to refresh your foundational knowledge, these resources will provide a solid understanding of what SBOMs are and why they matter. Day two is a guide to “SBOM Generation Step-by-Step”, day three presents “DevOps-Scale SBOM Management”, and day four, “SBOM Insights on LLMs, Compliance Attestations and Security Mental Models”.
Learn SBOM Fundamentals in 1 Hour or Less
Short on time but need to understand SBOMs yesterday? Start your educational journey with this single-serving webinar on SBOM fundamentals—watch it at 2x for a true speedrun.
This webinar features Anchore’s team of SBOM experts who guide you through all the SBOM basics. Topics covered:
Defining SBOM standards and formats
Best practices for generating and automating SBOMs
Integrating SBOMs into existing infrastructure and workflows
Practical tips for protecting against emerging supply chain threats
“You really need to know what you’re shipping and what’s there.” —Josh Bressers
This straightforward yet overlooked insight demonstrates the foundational nature of SBOMs to software supply chain security. Operating without visibility into your components creates significant security blind spots. SBOMs create the transparency needed to defend against the rising tide of supply chain attacks.
Improve SBOM Initiative Success: Crystallize the Core SBOM Mental Models
Inside the eBook, you’ll find:
An explanation of how SBOMs are the central component of the software supply chain
A quick reference table of SBOM use-cases
This gives you a strong foundation to build your SBOM initiative on. The mental models presented in the eBook help you:
avoid common implementation pitfalls,
align your SBOM strategy with security objectives, and
communicate SBOM value to stakeholders across your organization.
Rather than blindly following compliance requirements, you’ll learn the “why” behind SBOMs and make informed decisions about automation tools, integration points, and formats that are best suited for your specific environment. Here’s how SBOMs deliver value to teams across your organization:
Security teams: Rapidly identify vulnerable components when zero-days hit the news
Engineering teams: Make data-driven architecture decisions about third-party dependencies to incorporate
Compliance teams: Automate evidence collection for compliance audits
Legal teams: Proactively manage software license compliance and IP risks
Sales teams: Accelerate sales cycles by using transparency as a tool to build trust fast
“Transparency is the path to minimizing risk.” —Kate Stewart, VP of Embedded Systems at The Linux Foundation and Founder of SPDX
This core SBOM principle applies across all business functions. Our white paper shows how properly implemented SBOMs create a unified source of truth about your software components that empowers teams beyond security to make better decisions.
Perfect for technical leaders who need to justify SBOM investments and drive cross-team adoption.
After completing the fundamentals, you’re ready to get your hands dirty and learn the nitty-gritty of SBOM generation and CI/CD build pipeline integration. In our next post, we’ll map out a technical learning path with deep-dives for practitioners looking to get hands-on experience. Stay tuned for part two of our series, “SBOM Generation Step-by-Step”.
Anchore has been leading the SBOM charge for almost a decade: providing educational resources, tools, and insights to help organizations secure their software supply chains. To help organizations navigate this critical aspect of software development, we’re excited to announce SBOM Learning Week!
Each day of the week we will be publishing a new blog post that provides an overview of how to progress on your SBOM educational journey. By the end of the week, you will have a full learning path laid out to guide you from SBOM novice to SBOM expert.
Why SBOM Learning Week, Why Now?
With recent executive orders (e.g., EO 14028) mandating SBOMs for federal software vendors and industry standards increasingly recommending their adoption, organizations across sectors are racing to weave SBOMs into their software development lifecycle. However, many still struggle with fundamental questions:
What exactly is an SBOM and why does it matter?
How do I generate, manage, and leverage SBOMs effectively?
How do I scale SBOM practices across a large organization?
What do leading experts predict for the future of SBOM adoption?
How do SBOMs integrate with existing security and development practices?
SBOM Learning Week answers these questions through a carefully structured learning journey designed for both newcomers and experienced practitioners.
What to Expect Each Day
Monday: SBOM Fundamentals
We’ll start with the fundamentals, exploring what SBOMs are, why they matter, and the key standards that define them. This foundational knowledge will prepare you for the more advanced topics to come.
Tuesday: Technical Deep-dives
Day two focuses on hands-on implementation, with practical guidance for generating SBOMs using open source tools, integrating them into CI/CD pipelines, and examining how SBOM generation actually works under the hood.
Wednesday: DevOps-Scale SBOM Management
Moving beyond initial implementation, we’ll explore how organizations can scale their SBOM practices across enterprise environments, featuring real-world examples from companies like Google.
Thursday: SBOM Insights on LLMs, Compliance Attestations and Security Mental Models
On day four, we’ll share insights from industry thought leaders on how software supply chain security and SBOMs are adapting to LLMs, how SBOMs are better thought of as compliance data containers than supply chain documents, and how SBOMs and vulnerability scanners fit into existing developer mental models.
Friday: SBOMs as the Crossroad of the Software Supply Chain
On our final day, we’ll examine how SBOMs sit at the crossroads of DevSecOps, open source security, and regulatory compliance, tying together everything covered during the week. Whether you’re a security leader looking to strengthen your organization’s defenses, a developer seeking to integrate security into your workflows, or an IT professional responsible for compliance, SBOM Learning Week offers valuable insights for your role.
Each day’s post will build on the previous content, creating a comprehensive resource you can reference as you develop and mature your organization’s SBOM initiative. We’ll also be monitoring comments and questions on our social channels (LinkedIn, BlueSky, X) throughout the week to help clarify concepts and address specific challenges you might face.
Mark your calendars and join us starting Monday as we embark on this exploration of one of today’s most important cybersecurity technologies. The journey to a more secure software supply chain begins with understanding what’s in your code—and SBOM Learning Week will show you exactly how to get there.
That’s why we’re excited to announce our new white paper, “Unlocking Federal Markets: The Enterprise Guide to FedRAMP.” This comprehensive resource is designed for cloud service providers (CSPs) looking to navigate the complex FedRAMP authorization process, providing actionable insights and step-by-step guidance to help you access the lucrative federal cloud marketplace.
From understanding the authorization process to implementing continuous monitoring requirements, this guide offers a clear roadmap through the FedRAMP journey. More than just a compliance checklist, it delivers strategic insights on how to approach FedRAMP as a business opportunity while minimizing the time and resources required.
⏱️ Can’t wait till the end? 📥 Download the white paper now 👇👇👇
FedRAMP is the gateway to federal cloud business, but many organizations underestimate its complexity and strategic importance. Our white paper transforms your approach by:
Clarifying the Authorization Process: Understand the difference between FedRAMP authorization and certification, and learn the specific roles of key stakeholders.
Streamlining Compliance: Learn how to integrate security and compliance directly into your development lifecycle, reducing costs and accelerating time-to-market.
Establishing Continuous Monitoring: Build sustainable processes that maintain your authorization status through the required continuous monitoring activities.
Creating Business Value: Position your FedRAMP authorization as a competitive advantage that opens doors across multiple agencies.
What’s Inside the White Paper?
Our guide is organized to follow your FedRAMP journey from start to finish. Here’s a preview of what you’ll find:
FedRAMP Overview: Learn about the historical context, goals and benefits of the program.
Key Stakeholders: Understand the roles of federal agencies, 3PAOs and the FedRAMP PMO.
Authorization Process: Navigate through all phases—Preparation, Authorization and Continuous Monitoring—with detailed guidance for each step.
Strategic Considerations: Make informed decisions about impact levels, deployment models and resource requirements.
Compliance Automation: Discover how Anchore Enforce can transform FedRAMP from a burdensome audit exercise into a streamlined component of your software delivery pipeline.
You’ll also find practical insights on staffing your authorization effort, avoiding common pitfalls and estimating the level of effort required to achieve and maintain FedRAMP authorization.
Transform Your Approach to Federal Compliance
The white paper emphasizes that FedRAMP compliance isn’t just a one-time hurdle but an ongoing commitment that requires a strategic approach. By treating compliance as an integral part of your DevSecOps practice—with automation, policy-as-code and continuous monitoring—you can turn FedRAMP from a cost center into a competitive advantage.
Whether your organization is just beginning to explore FedRAMP or looking to optimize existing compliance processes, this guide provides the insights needed to build a sustainable approach that opens doors to federal business opportunities.
Download the White Paper Today
FedRAMP authorization is more than a compliance checkbox—it’s a strategic enabler for your federal market strategy. Our comprehensive guide gives you the knowledge and tools to navigate this complex process successfully.
📥 Download the white paper now and unlock your path to federal markets.
Learn how to navigate FedRAMP authorization while avoiding all of the most common pitfalls.
If you’re a developer, this vignette may strike a chord: You’re deep in the flow, making great progress on your latest feature, when someone from the security team sends you an urgent message. A vulnerability has been discovered in one of your dependencies and has failed a compliance review. Suddenly, your day is derailed as you shift from coding to a gauntlet of bureaucratic meetings.
This is an unfortunate reality for developers at organizations where security and compliance are bolt-on processes rather than integrated parts of the whole. Your valuable development time is consumed with digging through arcane compliance documentation, attending security reviews and being relegated to compliance training sessions. Every context switch becomes another drag on your productivity, and every delayed deployment impacts your ability to ship code.
Two niche DevSecOps/software supply chain technologies have come together to transform the dynamic between developers and organizational policy—software bills of materials (SBOMs) and policy-as-code (PaC). Together, they dramatically reduce the friction between development velocity and risk management requirements by making policy evaluation and enforcement:
Automated and consistent
Integrated into your existing workflows
Visible early in the development process
In this guide, we’ll explore how SBOMs and policy-as-code work, the specific benefits they bring to your daily development work, and how to implement them in your environment. By the end, you’ll understand how these tools can help you spend less time manually doing someone else’s job and more time doing what you do best—writing great code.
Interested to learn about all of the software supply chain use-cases that SBOMs enable? Read our new white paper and start unlocking enterprise value.
You’re probably familiar with Infrastructure-as-Code (IaC) tools like Terraform, AWS CloudFormation, or Pulumi. These tools allow you to define your cloud infrastructure in code rather than clicking through web consoles or manually running commands. Policy-as-Code (PaC) applies this same principle to policies from other departments of an organization.
What is policy-as-code?
At its core, policy-as-code translates organizational policies—whether they’re security requirements, licensing restrictions, or compliance mandates—from human-readable documents into machine-readable representations that integrate seamlessly with your existing DevOps platform and tooling.
Think of it this way: IaC gives you a DSL for provisioning and managing cloud resources, while PaC extends this concept to other critical organizational policies that traditionally lived outside engineering teams. This creates a bridge between development workflows and business requirements that previously existed in separate silos.
Why do I care?
Let’s play a game of would you rather. Choose the activity from the table below that you’d rather do:
| Before Policy-as-Code | After Policy-as-Code |
|---|---|
| Read lengthy security/legal/compliance documentation to understand requirements | Reference policy translated into code with clear comments and explanations |
| Manually review your code for policy compliance and hope you interpreted the policy correctly | Receive automated, deterministic policy evaluation directly in the CI/CD build pipeline |
| Attend compliance training sessions because you didn’t read the documentation | Learn policies by example, as concrete connections to actual development tasks |
| Set up meetings with security, legal or compliance teams to get code approval | Get automated approvals through policy evaluation, without review meetings |
| Wait till the end of the sprint and hope the VP of Eng can get an exception to ship with policy violations | Identify and fix policy violations early, when changes are simple to implement |
While the game is a bit staged, it isn’t divorced from reality. PaC is meant to relieve much of the development friction associated with the external requirements that are typically foisted onto the shoulders of developers.
From oral tradition to codified knowledge
Perhaps one of the most underappreciated benefits of policy-as-code is how it transforms organizational knowledge. Instead of policies living in outdated Word documents or in the heads of long-tenured employees, they exist as living code that evolves with your organization.
When a developer asks “Why do we have this restriction?” or “What’s the logic behind this policy?”, the answer isn’t “That’s just how we’ve always done it” or “Ask Alice in Compliance.” Instead, they can look at the policy code, read the annotations, and understand the reasoning directly.
In the next section, we’ll explore how software bills of materials (SBOMs) provide the perfect data structure to pair with policy-as-code for managing software supply chain security.
A Brief Introduction to SBOMs (in the Context of PaC)
If policy-as-code provides the rules engine for your application’s dependency supply chain, then software bills of materials (SBOMs) provide the structured supply chain data that the policy engine evaluates.
What is an SBOM?
An SBOM is a formal, machine-readable inventory of all components and dependencies used in building a software artifact. If you’re familiar with Terraform, you can think of an SBOM as analogous to a dev.tfstate file: it stores the state of your application’s 3rd-party dependency supply chain, which is then reconciled against a main.tf file (i.e., the policy) to determine whether the software supply chain is compliant or in violation of the defined policy.
SBOMs vs package manager dependency files
You may be thinking, “Don’t I already have this information in my package.json, requirements.txt, or pom.xml file?” While these files declare your direct dependencies, they don’t capture the complete picture:
They don’t typically include transitive dependencies (dependencies of your dependencies)
They don’t include information about the components within container images you’re using
They don’t provide standardized metadata about vulnerabilities, licenses, or provenance
They aren’t easily consumable by automated policy engines across different programming languages and environments
SBOMs solve these problems by providing a standardized format that comprehensively documents your entire software supply chain in a way that policy engines can consistently evaluate.
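You can see the difference for yourself. Here’s a hedged sketch, assuming a Node.js project with a built container image (jq is used for readability; names are placeholders):

```bash
# Direct dependencies only: what package.json declares
jq '.dependencies' package.json

# The full picture: a CycloneDX SBOM of the built image also captures
# transitive dependencies, OS packages, and standardized component metadata
syft myapp:latest -o cyclonedx-json > sbom.cdx.json
jq '.components | length' sbom.cdx.json   # typically far more entries
```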
A universal policy interface: How SBOMs enable policy-as-code
Think of SBOMs as creating a standardized “policy interface” for your software’s supply chain metadata. Just as APIs create a consistent way to interact with services, SBOMs create a consistent way for policy engines to interact with your software’s composable structure.
This standardization is crucial because it allows policy engines to operate on a known data structure rather than having to understand the intricacies of each language’s package management system, build tool, or container format.
For example, a security policy that says “No components with critical vulnerabilities may be deployed to production” can be applied consistently across your entire software portfolio—regardless of the technologies used—because the SBOM provides a normalized view of the components and their vulnerabilities.
In the next section, we’ll explore the concrete benefits that come from combining SBOMs with policy-as-code in your development workflow.
How Do I Get Started with SBOMs and Policy-as-Code?
Now that you understand what SBOMs and policy-as-code are and why they’re valuable, let’s walk through a practical implementation. We’ll use Anchore Enterprise as an example of a policy engine that has a DSL to express a security policy which is then directly integrated into a CI/CD runbook. The example will focus on a common software supply chain security best practice: preventing the deployment of applications with critical vulnerabilities.
Tools we’ll use
For this example implementation, we’ll use the following components from Anchore:
AnchoreCTL: A software composition analysis (SCA) tool and SBOM generator that scans source code, container images or application binaries to populate an SBOM with supply chain metadata
Anchore Enforce: The policy engine that evaluates SBOMs against defined policies
Anchore Enforce JSON: The Domain-Specific Language (DSL) used to define policies in a machine-readable format
While we’re using Anchore in this example, the concepts apply to other SBOM generators and policy engines as well.
Step 1: Translate human-readable policies to machine-readable code
The first step is to take your organization’s existing policies and translate them into a format that a policy engine can understand. Let’s start with a simple but effective policy.
Human-Readable Policy:
Applications with critical vulnerabilities must not be deployed to production environments.
{"id": "critical_vulnerability_policy","version": "1.0","name": "Block Critical Vulnerabilities","comment": "Prevents deployment of applications with critical vulnerabilities","rules": [ {"id": "block_critical_vulns","gate": "vulnerabilities","trigger": "package","comment": "Rule evaluates each dependency in an SBOM against vulnerability database. If the dependency is found in the database, all known vulnerability severity scores are evaluated for a critical value. If match if found policy engine returns STOP action to CI/CD build task","parameters": [ { "name": "package_type", "value": "all" }, { "name": "severity_comparison", "value": "=" }, { "name": "severity", "value": "critical" }, ],"action": "stop" } ]}
This policy code instructs the policy engine to:
Examine all application dependencies (i.e., packages) in the SBOM
Check if any dependency/package has vulnerabilities with a severity of “critical”
If found, return a “stop” action that will fail the build
If you’re looking for more information on the Anchore Enforce DSL, our documentation covers the full capabilities of the Anchore Enforce policy engine.
Step 2: Deploy Anchore Enterprise with the policy engine
With the example policy defined, the next step is to deploy Anchore Enterprise (AE) and configure the Anchore Enforce policy engine. The high-level steps are:
Configure access controls/permissions between AE deployment and CI/CD build pipeline
If you’re interested to get hands-on with this, we have developed a self-paced workshop that walks you through a full deployment and how to set up a policy. You can get a trial license by signing up for our free trial.
Step 3: Integrate SBOM generation into your CI/CD pipeline
Now you need to generate SBOMs as part of your build process and have them evaluated against your policies. Here’s an example of how this might look in a GitHub Actions workflow:
```yaml
name: Build App and Evaluate Supply Chain for Vulnerabilities

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build-and-evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Build Application
        run: |
          # Build application as container image
          docker build -t myapp:latest .

      - name: Generate SBOM
        run: |
          # Install AnchoreCTL
          curl -sSfL https://anchorectl-releases.anchore.io/v1.0.0/anchorectl_1.0.0_linux_amd64.tar.gz | tar xzf - -C /usr/local/bin

          # Execute supply chain composition scan of container image,
          # generate SBOM and send to policy engine for evaluation
          anchorectl image add --wait myapp:latest

      - name: Evaluate Policy
        run: |
          # Get policy evaluation results
          RESULT=$(anchorectl image check myapp:latest --policy critical_vulnerability_policy)

          # Handle the evaluation result
          if [[ $RESULT == *"Status: pass"* ]]; then
            echo "Policy evaluation passed! Proceeding with deployment."
          else
            echo "Policy evaluation failed! Deployment blocked."
            exit 1
          fi

      - name: Deploy if Passed
        if: success()
        run: |
          # Your deployment steps here
```
This workflow:
Builds your application as a container image using Docker
Installs AnchoreCTL
Scans container image with SCA tool to map software supply chain
Generates an SBOM based on the SCA results
Submits the SBOM to the policy engine for evaluation
Gets evaluation results from policy engine response
Continues or halts the pipeline based on the policy response
Step 4: Test the integration
With the integration in place, it’s time to test that everything works as expected:
Create a test build that intentionally includes a component with a known critical vulnerability
Push the build through your CI/CD pipeline
Confirm that:
The SBOM is correctly generated
The policy engine identifies the vulnerability
The pipeline fails as expected
If all goes well, you’ve successfully implemented your first policy-as-code workflow using SBOMs!
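For step 1 of the test, one quick way to manufacture a known-bad build is to bake in a component with a famous critical CVE. A sketch under our own assumptions (log4j-core 2.14.1 is affected by Log4Shell, CVE-2021-44228; the Dockerfile and tag names are illustrative):

```bash
# Build a throwaway image containing a known-vulnerable component
cat > Dockerfile.vulntest <<'EOF'
FROM eclipse-temurin:17-jre
ADD https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-core/2.14.1/log4j-core-2.14.1.jar /app/log4j-core-2.14.1.jar
EOF
docker build -f Dockerfile.vulntest -t myapp:vuln-test .

# Push this tag through the pipeline and confirm the policy gate fails it
```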
Step 5: Expand your policy coverage
Once you have the basic integration working, you can begin expanding your policy coverage to include:
Security policies
Compliance policies
Software license policies
Custom organizational policies
Environment-specific requirements (e.g., stricter policies for production vs. development)
Work with your security and compliance teams to translate their requirements into policy code, and gradually expand your automated policy coverage. This process is a large upfront investment but creates recurring benefits that pay dividends over the long-term.
Step 6: Profit!
With SBOMs and policy-as-code implemented, you’ll start seeing the benefits almost immediately:
Fast feedback on security and compliance issues
Reduced manual compliance tasks
Better documentation of what’s in your software and why
Consistent evaluation and enforcement of policies
Certainty about policy approvals
The key to success is getting your security and compliance teams to embrace the policy-as-code approach. Help them understand that by translating their policies into code, they gain more consistent enforcement while reducing manual effort.
Wrap-Up
As we’ve explored throughout this guide, SBOMs and policy-as-code represent a fundamental shift in how developers interact with security and compliance requirements. Rather than treating these as external constraints that slow down development, they become integrated features of your DevOps pipeline.
Key takeaways
Policy-as-Code transforms organizational policies from static documents into dynamic, version-controlled code that can be automated, tested, and integrated into CI/CD pipelines.
SBOMs provide a standardized format for documenting your software’s components, creating a consistent interface that policy engines can evaluate.
Together, they enable “shift-left” security and compliance, providing immediate feedback on policy violations without meetings or context switching.
Integration is straightforward with pre-built plugins for popular DevOps platforms, allowing you to automate policy evaluation as part of your existing build process.
The benefits extend beyond security to include faster development cycles, reduced compliance burden, and better visibility into your software supply chain.
Get started today
Ready to bring SBOMs and policy-as-code to your development environment? Anchore Enterprise provides a comprehensive platform for generating SBOMs, defining policies, and automating policy evaluation across your software supply chain.
Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.
Two cybersecurity buzzwords are rapidly shaping how organizations manage risk and streamline operations: Continuous Monitoring (ConMon) and Software Bill of Materials (SBOMs). ConMon, rooted in the traditional security principle—“trust but verify”—has evolved into an iterative process where organizations measure, analyze, design, and implement improvements based on real-time data. Meanwhile, SBOMs offer a snapshot of an application’s composition (i.e., 3rd-party dependency supply chain) at any given point in the DevSecOps lifecycle.
Understanding these concepts is crucial because together they unlock significant enterprise value—ranging from rapid zero-day response to scalable vulnerability management and even automated compliance enforcement. By integrating SBOMs into a robust ConMon strategy, organizations can not only mitigate supply chain risks but also ensure that every stage of software development adheres to stringent security and compliance standards.
A Brief Introduction to Continuous Monitoring (ConMon)
Continuous Monitoring is a wide-ranging topic applied across various contexts such as application performance, error tracking, and security oversight. In the most general sense, ConMon is an iterative cycle of:
Measure: Collect relevant data.
Analyze: Analyze raw data and generate actionable insights.
Design: Develop solution(s) based on the insights.
Implement: Execute solution to resolve issue(s) or improve performance.
Repeat
ConMon is underpinned by the well-known management theory maxim, “You can’t manage what you don’t measure.” Historically, the term has its origins in the federal compliance world—think FedRAMP and cATO—where continuous monitoring was initially embraced as an evolution of traditional point-in-time security compliance audits.
So, where do SBOMs fit into this picture?
A Brief Introduction to SBOMs (in the Context of ConMon)
In the world of ConMon, SBOMs are a source of data that can be measured and analyzed to extract actionable insights. An SBOM is specifically a structured store of software supply chain metadata: it tracks the evolution of a software artifact’s supply chain as it develops throughout the software development lifecycle.
An SBOM catalogs information like software supply chain dependencies, security vulnerabilities and licensing contracts. In this way SBOMs act as the source of truth for the state of an application’s software supply chain during the development process.
The ConMon process utilizes the supply chain data stored in the SBOMs to generate actionable insights like:
uncovering critical supply chain vulnerabilities,
identifying legally risky open source licenses, or
catching new software dependencies that break key compliance frameworks like FedRAMP.
SBOMs are the key data source that allows ConMon to apply its goal of continuously monitoring and improving an organization’s software development environment—specifically, the software supply chain.
Benefits of using SBOMs as the central component of ConMon
As the cornerstone of the software supply chain, SBOMs play a central role in supply chain security which falls under the purview of the continuous monitoring workflow. Given this, it shouldn’t be a surprise that there are cross-over use-cases between the two domains. Leveraging the standardized data structure of SBOMs unlocks a wealth of opportunities for automating supply chain security use-cases and scaling the principles of ConMon. Key use-cases and benefits include:
Rapid incident response to zero-day disclosure
Reduced exploitation risk: SBOMs reduce the risk of supply chain exploitation by dramatically reducing the time to identify if vulnerable components are present in an organization’s software environment and how to prioritize remediation efforts.
Prevent wasted organizational resources: A centralized SBOM repository enables organizations to automate manual dependency audits into a single search query. This prevents the need to re-task software engineers away from development work to deal with incident response.
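As a sketch of what this looks like in practice, here’s a hedged example that sweeps a directory of stored SBOMs for a specific zero-day using the open source Grype scanner (the CVE ID and paths are illustrative):

```bash
# Re-scan every stored SBOM against current vulnerability data and report
# which artifacts contain the component affected by the zero-day
for sbom in sboms/*.json; do
  grype "sbom:$sbom" -o json | jq -r --arg f "$sbom" \
    '.matches[]
     | select(.vulnerability.id == "CVE-2021-44228")
     | "\($f): \(.artifact.name)@\(.artifact.version)"'
done
```

With a centralized SBOM repository, this audit becomes a query rather than an all-hands fire drill.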
Software dependency drift detection
Reduced exploitation risk: When SBOM generation is integrated natively into the DevSecOps pipeline a historical record of the development of an application becomes available. Each development stage is compared against the previous to identify dependency injection in real-time. Catching and remediating malicious injections as early as possible significantly reduces the risk of exploitation by threat actors.
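A naive version of this drift check can be done with nothing more than jq and comm over two CycloneDX SBOMs captured at different pipeline stages (a sketch; filenames are placeholders):

```bash
# Components present at each stage, normalized to name@version
jq -r '.components[] | "\(.name)@\(.version)"' build-stage.cdx.json | sort > build.txt
jq -r '.components[] | "\(.name)@\(.version)"' deploy-stage.cdx.json | sort > deploy.txt

# Anything in the deploy SBOM that wasn't in the build SBOM is drift
# worth investigating as a potential dependency injection
comm -13 build.txt deploy.txt
```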
Proactive and scalable vulnerability management
Reduced security risk: SBOMs empower organizations to decouple software composition scanning from vulnerability scanning, enabling a scalable vulnerability management approach that meets cloud-native expectations. By generating an SBOM early in the DevSecOps pipeline, teams can continuously cross-reference software components against known vulnerability databases, proactively detecting risks before they reach production.
Reduced time on vulnerability management: This streamlined process reduces the manual tasks associated with legacy vulnerability management. Teams are now enabled to focus their efforts on higher-level issues rather than be bogged down with tedious manual tasks.
Reduce time spent generating compliance audit evidence: Manual generation of compliance audit reports to prove security best practices are in place is a time consuming process. SBOMs unlock the power to automate the generation of audit evidence for the software supply chain. This protects organizational resources for higher-value tasks.
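The decoupling works because the expensive step (composition scanning) happens once, while the cheap step (vulnerability matching) can be repeated as often as the databases update. A minimal sketch with Syft and Grype (image and file names are placeholders):

```bash
# Build time: generate the SBOM once and archive it
syft myapp:1.4.2 -o cyclonedx-json > myapp-1.4.2.cdx.json

# Any time afterwards (e.g., a nightly cron job): re-match the stored SBOM
# against fresh vulnerability data, without rebuilding or re-pulling the image
grype sbom:./myapp-1.4.2.cdx.json
```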
Automated OSS license management
Reduced legal risk: An adversarial OSS license accidentally entering a commercial application opens up an enterprise to significant legal risk. SBOMs enable organizations to automate the process of scanning for OSS licenses and assessing the legal risk of the entire software supply chain. Having real-time visibility into the licenses of an organization’s entire supply chain dramatically reduces the risk of legal penalties.
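As a sketch of automated license inspection, the license metadata in a CycloneDX SBOM can be tallied with a one-liner (assuming the SBOM records SPDX license IDs; the output can then be checked against an allowlist):

```bash
# Count occurrences of each license ID observed across the supply chain;
# copyleft entries (e.g., GPL-3.0-only) can be flagged for legal review
jq -r '.components[] | .licenses[]?.license.id // empty' myapp.cdx.json \
  | sort | uniq -c | sort -rn
```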
In essence, SBOMs serve as the heart of a robust ConMon strategy, providing the critical data needed to scale automation and ensure that the software supply chain remains secure and compliant.
Interested to learn about all of the software supply chain use-cases that SBOMs enable? Read our new white paper and start unlocking enterprise value.
SBOMs and ConMon applied to the DevSecOps lifecycle
Integrating SBOMs within the DevSecOps lifecycle enables organizations to realize security outcomes efficiently through a cyclical process of measurement, analysis, design and implementation. We’ll go through each step of the process.
1. Measure
Measurement is the most obvious stage of ConMon given its synonym “monitoring” is in the name. The measurement step is primarily concerned with collecting as much data as possible that will later power the analysis stage. ConMon traditionally focuses on collecting security specific data like:
Observability Telemetry: Application logs, metrics, and traces
Audit Logs: Records of application administration: access and modifications
Supply Chain Metadata: Point-in-time scans of the composition of a software artifact’s supply chain of 3rd-party dependencies
Software composition analysis (SCA) scanners and SBOM generation tools create an additional dimension of information about software that can then be analyzed. The supply chain metadata that is generated powers the insights that feed the ConMon flywheel and increases transparency into software supply chain issues.
External measurements (i.e., data sources)
Additionally, external data sources (e.g., the NVD vulnerability database or VEX documents) can be referenced to correlate valuable information aggregated by public interest organizations like NIST (National Institute of Standards and Technology) and CISA (Cybersecurity and Infrastructure Security Agency). These databases act as comprehensive stores of the collective work of security researchers worldwide.
As software components are analyzed and tested for vulnerabilities and exploits, researchers submit their findings to these public goods databases. The security findings are tied to the specific software components (i.e., the same component identifier stored in an SBOM).
After collecting all of this information, we are ready to move to the next phase of the ConMon lifecycle: analysis.
2. Analyze
The foundational premise of ConMon is to monitor continuously in order to uncover security threats before they can cause damage. The analysis step is what transforms raw data into actionable security insights that reduce the risk of a breach.
Queries over Software Supply Chain Data:
As the central data format for the software supply chain, SBOMs act as the data source against which queries are crafted to extract insights
Actionable Insights:
Highlight known vulnerabilities by cross-referencing the entire 3rd-party software supply chain against vulnerability databases like NVD
Compare SBOMs from one stage of the pipeline to a later stage in order to uncover dependency injections (both unintentional and malicious)
Codify software supply chain security or regulatory compliance policies into policy-as-code that can alert on policy drift in real-time
3. Design
After generating insights from analysis of the raw data, the next step is to design a solution based on the analysis from the previous step. The goal of the designed solution is to reduce the risk of a security incident.
Example Solutions Based on Analysis
Vulnerability Remediation Workflow: After analysis has uncovered known vulnerabilities, designing a system to remediate the vulnerabilities comes into focus. Utilizing the existing engineering task management process is ideal. To live up to the promise of ConMon, the analysis to remediation process should be an automated system that pushes supply chain data from the analysis platform directly into the remediation queue.
Dependency Injection Detection via SBOM Drift Analysis: SBOMs can be scanned at each stage of the DevSecOps pipeline, when anomalous dependencies are flagged as not coming from a known good source this analysis can be used to prompt an investigation. These types of investigations are generally too complex to be automated but the investigation process still requires design.
Automated Compliance Enforcement via Policy-as-Code: By codifying software supply chain best practices or regulatory compliance controls into policy-as-code, organizations can be alerted to policy violations in real-time. The solution design includes a mapping of policies into code, scans against containers in the DevSecOps pipeline and a notification system that can act on the analysis.
4. Implement
The final step of ConMon involves implementing the design from the previous step. This also sets up the entire process to restart from the beginning. After the design is implemented it is ready to be monitored again to assess effectiveness of the implemented design.
Execution of Solution Design
Vulnerability Remediation Workflow: Configure the analysis platform to push a report of identified vulnerabilities to the engineering organization’s existing ticketing system. Prioritize the vulnerabilities based on their severity score to increase signal-to-noise ratio for the assigned developer or gate the analysis platform to only push reports if a vulnerability is classified as high or critical.
Dependency Injection Detection via SBOM Drift Analysis: Integrate SBOM generation into two or more stages of the DevSecOps pipeline to enable diff analysis of software supply chain analysis over time. Allowlists and denylists can fine tune which types of dependency injections are expected and which are anomalous. Investigations into suspected dependency injection can be triggered based on anomaly detection.
Automated Compliance Enforcement via Policy-as-Code: Security policies and/or compliance controls will need to be translated from English into code, but this is a one-time, upfront investment that enables scalable, automated policy scanning. A scanner will need to be integrated into one or more places in the CI/CD build pipeline in order to detect policy violations. As violations are identified, the analysis platform can push notifications to the appropriate team.
5. Repeat
Step 5 isn’t actually a different step of ConMon, it is just a reminder to return to Step 1 for another turn of the ConMon cycle. The ‘continuous’ in ConMon means that we continue to repeat the process indefinitely. As security of a system is measured and evaluated, new security issues are discovered that then require a solution design and design implementation. This is the flywheel cycle of ConMon that endlessly improves the security of the system that is monitored.
Real-world Scenario
SBOMs and ConMon aren’t just a theoretical framework; there are a number of SBOM use-cases in production that are delivering value to enterprises. The most prominent of these is the security incident response use-case. This use-case moved into the limelight in the wake of the string of infamous software supply chain attacks of the past decade: SolarWinds, Log4j, XZ Utils, etc.
The biggest takeaway from these high-profile incidents is that enterprises were caught unprepared and experienced pain as a result of not having the tools to measure, analyze, design and implement solutions to these supply chain attacks.
Google was a notable exception: with a centralized SBOM repository to query, it could rapidly measure its exposure, analyze the impact, and implement a response. This outcome was only possible due to Google’s foresight and willingness to instrument their DevSecOps pipeline with tools like SCAs and SBOM generators and continuously monitor their software supply chain.
Ready to Reap the Rewards of SBOM-powered ConMon?
Integrating SBOMs as a central element of your ConMon strategy transforms how organizations manage software supply chain security. By aligning continuous monitoring with the principles of DevSecOps, security and engineering leaders can ensure that their organizations are well-prepared to face the evolving threat landscape—keeping operations secure, compliant, and efficient.
If you’re interested in embracing the power of SBOMs and ConMon but your team doesn’t want to take on this side quest themselves, Anchore offers a turnkey platform to unlock the benefits without the headache of building and managing a solution from scratch. Anchore Enterprise is a complete SBOM-powered supply chain security solution that extends ConMon into your organization’s software supply chain. Reach out to our team to learn more or start a free trial to kick the tires yourself.
Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.
Security engineers at modern enterprises face an unprecedented challenge: managing software supply chain risk without impeding development velocity, all while threat actors exploit the rapidly expanding attack surface. With over 25,000 new vulnerabilities in 2023 alone and supply chain attacks surging 540% year-over-year from 2019 to 2022, the exploding adoption of open source software has created an untenable security environment. To overcome these challenges, security teams need tools that scale their impact and invert the perception that they are a speed bump for high-velocity software delivery.
If your DevSecOps pipeline utilizes the open source Harbor registry then we have the perfect answer to your needs. Integrating Anchore Enterprise—the SBOM-powered container vulnerability management platform—with Harbor offers the force-multiplier security teams need. This one-two combo delivers:
Proactive vulnerability management: Automatically scan container images before they reach production
Actionable security insights: Generate SBOMs, identify vulnerabilities and alert on actionable insights to streamline remediation efforts
Lightweight implementation: Native Harbor integration requiring minimal configuration while delivering maximum value
Improved cultural dynamics: Reduce security incident risk and, at the same time, burden on development teams while building cross-functional trust
This technical guide walks through the implementation steps for integrating Anchore Enterprise into Harbor, equipping security engineers with the knowledge to secure their software supply chain without sacrificing velocity.
Learn the essential container security best practices to reduce the risk of software supply chain attacks in this white paper.
Anchore Enterprise can integrate with Harbor in two different ways—each has pros and cons:
Pull Integration Model
In this model, Anchore uses registry credentials to pull and analyze images from Harbor:
Anchore accesses Harbor using standard Docker V2 registry integration
Images are analyzed directly within Anchore Enterprise
Results are available in Anchore’s interface and API
Ideal for organizations where direct access to Harbor is restricted but API access is permitted
Push Integration Model
In this model, Harbor uses its native scanner adapter feature to push images to Anchore for analysis:
Harbor initiates scans on-demand through its scanner adapter as images are added
Images are scanned within the Anchore deployment
Vulnerability scan results are stored in Anchore and sent to Harbor’s UI
Better for environments with direct access to Harbor that want immediate scans
Both methods provide strong security benefits but differ in workflow and where results are accessed.
Setting Up the Pull Integration
Let’s walk through how to configure Anchore Enterprise to pull and analyze images from your Harbor registry.
Prerequisites
Anchore Enterprise installed and running
Harbor registry deployed and accessible
Harbor user account with appropriate permissions
Step 1: Configure Registry Credentials in Anchore
In Anchore Enterprise, navigate to the “Registries” section
Select “Add Registry”
Fill in the following details:
Registry Hostname or IP Address: [your Harbor API URL or IP address, e.g., http://harbor.yourdomain.com]
Name: [human-readable name]
Type: docker_v2
Username: [your Harbor username, e.g., admin]
Password: [your Harbor password]
Configure any additional options like SSL validation if necessary
Test the connection
Save the configuration
Step 2: Analyze an Image from Harbor
Once the registry is configured, you can analyze images stored in Harbor:
Navigate to the “Images” section in Anchore Enterprise
Select “Add Image”
Choose your Harbor registry from the dropdown
Specify the repository and tag for the image you want to analyze
Click “Analyze”
Anchore will pull the image from Harbor, decompose it, generate an SBOM, and scan for vulnerabilities. This process typically takes a few minutes depending on image size.
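If you prefer the command line to the UI, the same pull-and-analyze flow can be driven with AnchoreCTL. This is a sketch: the registry host and repository are placeholders, and it assumes the registry credential from Step 1 is already configured:

```bash
# Queue the Harbor-hosted image for analysis and wait for it to complete
anchorectl image add --wait harbor.yourdomain.com/myproject/myapp:latest

# Retrieve the vulnerability findings once analysis finishes
anchorectl image vulnerabilities harbor.yourdomain.com/myproject/myapp:latest
```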
Step 3: Review Analysis Results
After analysis completes:
View the vulnerability report in the Anchore UI
Check the generated SBOM for all dependencies
Review compliance status against configured policies
Export reports or take remediation actions as needed
Setting Up the Push Integration
Now let’s configure Harbor to push images to Anchore for scanning using the Harbor Scanner Adapter.
Review the results in your Harbor UI once scanning completes
Advanced Configuration Features
Now that you have the base configuration working for the Harbor Scanner Adapter, you are ready to consider some additional features to increase your security posture.
Scheduled Scanning
Beyond on-push scanning, you can configure scheduled scanning to catch newly discovered vulnerabilities in existing images:
In Harbor, navigate to “Administration” → “Interrogation Services” → “Vulnerability”
Set the scan schedule (hourly, daily, weekly, etc.)
Save the configuration
This ensures all images are regularly re-scanned as vulnerability databases are updated with newly discovered and documented vulnerabilities.
Security Policy Enforcement
To enforce security at the pipeline level:
In your Harbor project, navigate to “Configuration”
Enable “Prevent vulnerable images from running”
Select the vulnerability severity level threshold (Low, Medium, High, Critical)
Images with vulnerabilities above this threshold will be blocked from being pulled*
*Be careful with this setting for a production environment. If an image is flagged as having a vulnerability and your container orchestrator attempts to pull the image to auto-scale a service it may cause instability for users.
Proxy Image Cache
Harbor’s proxy cache capability provides an additional security layer:
Navigate to “Registries” in Harbor and select “New Endpoint”
Configure a proxy cache to a public registry like Docker Hub
All images pulled from Docker Hub will be cached locally and automatically scanned for vulnerabilities based on your project settings
Security Tips and Best Practices from the Anchore Team
Use Anchore Enterprise for highest fidelity vulnerability data
The Anchore Enterprise dashboard surfaces complete vulnerability details
Full vulnerability data can be configured with downstream integrations like Slack, Jira, ServiceNow, etc.
“Good data empowers good people to make good decisions.”
—Dan Perry, Principal Customer Success Engineer, Anchore
Configuration Best Practices
For optimal security posture:
Configure per Harbor project: Use different vulnerability scanning settings for different risk profiles
Mind your environment topology: Adjust network timeouts and SSL settings based on network topology; make sure Harbor and Anchore Enterprise deployments are able to communicate securely
Secure Access Controls
Adopt least privilege principle: Use different credentials per repository
Utilize API keys: For service accounts and integrations, use API keys rather than user credentials
Conclusion
Integrating Anchore Enterprise with Harbor registry creates a powerful security checkpoint in your DevSecOps pipeline. By implementing either the pull or push model based on your specific needs, you can automate vulnerability scanning, enforce security policies, and maintain compliance requirements.
This integration enables security teams to:
Detect vulnerabilities before images reach production
Generate and maintain accurate SBOMs
Enforce security policies through prevention controls
Maintain continuous security through scheduled scans
With these tools properly integrated, you can significantly reduce the risk of deploying vulnerable containers to production environments, helping to secure your software supply chain.
Save your developers time with Anchore Enterprise. Get instant access with a 15-day free trial.
Software Bill of Materials (SBOMs) are no longer optional—they’re mission-critical.
That’s why we’re excited to announce the release of our new white paper, “Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization.” This comprehensive guide is designed for security and engineering leadership at both commercial enterprises and federal agencies, providing actionable insights into how SBOMs are transforming the way organizations manage software complexity, mitigate risk, and drive business outcomes.
From software supply chain security to DevOps acceleration and regulatory compliance, SBOMs have emerged as a cornerstone of modern software development. They do more than provide a simple inventory of application components; they enable rapid security incident response, automated compliance, reduced legal risk, and accelerated software delivery.
SBOMs are no longer just a checklist item—they’re a strategic asset. They provide an in-depth inventory of every component within your software ecosystem, complete with critical metadata about suppliers, licensing rights, and security postures. This newfound transparency is revolutionizing cross-functional operations across enterprises by:
Accelerating Incident Response: Quickly identify vulnerable components and neutralize threats before they escalate.
Enhancing Vulnerability Management: Prioritize remediation efforts based on risk, ensuring that developer resources are optimally deployed.
Reducing Legal Risk: Manage open source license obligations proactively, ensuring that every component meets your organization’s legal and security standards.
What’s Inside the White Paper?
Our white paper is organized by organizational function, with each section highlighting the relevant SBOM use-cases. Here’s a glimpse of what you can expect:
Security: Rapidly identify and mitigate zero-day vulnerabilities, scale vulnerability management, and detect software drift to prevent breaches.
Engineering & DevOps: Eliminate wasted developer time with real-time feedback, automate dependency management, and accelerate software delivery.
Regulatory Compliance: Automate policy checks, streamline compliance audits, and meet requirements like FedRAMP and SSDF Attestation with ease.
Legal: Reduce legal exposure by automating open source license risk management.
Sales: Instill confidence in customers and accelerate sales cycles by proactively providing SBOMs to quickly build trust.
Also, you’ll find real-world case studies from organizations that have successfully implemented SBOMs to reduce risk, boost efficiency, and gain a competitive edge. Learn how companies like Google and Cisco are leveraging SBOMs to drive business outcomes.
Empower Your Enterprise with SBOM-Centric Strategies
The white paper underscores that SBOMs are not a one-trick pony. They are the cornerstone of modern software supply chain management, driving benefits across security, engineering, compliance, legal, and customer trust. Whether your organization is embarking on its SBOM journey or refining an established process, this guide will help you unlock cross-functional value and future-proof your technology operations.
Download the White Paper Today
SBOMs are more than just compliance checkboxes—they are a strategic enabler for your organization’s security, development, and business operations. Whether your enterprise is just beginning its SBOM journey or operating a mature SBOM initiative, this white paper will help you uncover new ways to maximize value.
Explore SBOM use-cases for almost any department of the enterprise and learn how to unlock enterprise value to make the most of your software supply chain.
SBOM (software bill of materials) generation is becoming increasingly important for software supply chain security and compliance. Several approaches exist for generating SBOMs for Python projects, each with its own strengths. In this post, we’ll explore two popular methods: using pipdeptree with cyclonedx-py and Syft. We’ll examine their differences and see why Syft is better for many use-cases.
Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.
Before diving into the tools, let’s understand why generating an SBOM for your Python packages is increasingly critical in modern software development. Security analysis is a primary driver—SBOMs provide a detailed inventory of your dependencies that security teams can use to identify vulnerabilities in your software supply chain and respond quickly to newly discovered threats. The cybersecurity compliance landscape is also evolving rapidly, with many organizations and regulations (e.g., EO 14028) now requiring SBOMs as part of software delivery to ensure transparency and traceability in an organization’s software supply chain.
From a maintenance perspective, understanding your complete dependency tree is essential for effective project management. SBOMs help development teams track dependencies, plan updates, and understand the potential impact of changes across their applications. They’re particularly valuable when dealing with complex Python applications that may have hundreds of transitive dependencies.
License compliance is another crucial aspect where SBOMs prove invaluable. By tracking software licenses across your entire dependency tree, you can ensure your project complies with various open source licenses and identify potential conflicts before they become legal issues. This is especially important in Python projects, where dependencies might introduce a mix of licenses that need careful consideration.
Generating a Python SBOM with pipdeptree and cyclonedx-py
The first approach we’ll look at combines two specialized Python tools: pipdeptree for dependency analysis and cyclonedx-py for SBOM generation. Here’s how to use them:
# Install the required tools
$ pip install pipdeptree cyclonedx-bom

# Generate requirements with dependencies
$ pipdeptree --freeze > requirements.txt

# Generate SBOM in CycloneDX format
$ cyclonedx-py requirements requirements.txt > cyclonedx-sbom.json
This Python-specific approach leverages pipdeptree’s deep understanding of Python package relationships. pipdeptree excels at:
Detecting circular dependencies
Identifying conflicting dependencies
Providing a clear, hierarchical view of package relationships
Generating a Python SBOM with Syft: A Universal SBOM Generator
Syft takes a different approach. As a universal SBOM generator, it can analyze Python packages and multiple software artifacts. Here’s how to use Syft with Python projects:
# Install Syft (varies by platform)
# See: https://github.com/anchore/syft#installation
$ curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin

# Generate SBOM from requirements.txt
$ syft requirements.txt -o cyclonedx-json

# Or analyze an entire Python project
$ syft path/to/project -o cyclonedx-json
Key Advantages of Syft
Syft’s flexibility in output formats sets it apart from other tools. In addition to the widely used CycloneDX format, it supports SPDX for standardized software definitions and offers its own native JSON format that includes additional metadata. This format flexibility allows teams to generate SBOMs that meet various compliance requirements and tooling needs without switching between multiple generators.
Syft truly shines in its comprehensive analysis capabilities. Rather than limiting itself to a single source of truth, Syft examines your entire Python environment, detecting packages from multiple sources, including requirements.txt files, setup.py configurations, and installed packages. It seamlessly handles virtual environments and can even identify system-level dependencies that might impact your application.
The depth of metadata Syft provides is particularly valuable for security and compliance teams. For each package, Syft captures not just basic version information but also precise package locations within your environment, file hashes for integrity verification, detailed license information, and Common Platform Enumeration (CPE) identifiers. This rich metadata enables more thorough security analysis and helps teams maintain compliance with security policies.
Comparing the Outputs
We see significant differences in detail and scope when examining the outputs from both approaches. The pipdeptree with cyclonedx-py combination produces a focused output that concentrates specifically on Python package relationships. This approach yields a simpler, more streamlined SBOM that’s easy to read but contains limited metadata about each package.
Syft, on the other hand, generates a more comprehensive output that includes extensive metadata for each package. Its SBOM provides rich details about package origins, includes comprehensive CPE identification for better vulnerability matching, and offers built-in license detection. Syft also tracks the specific locations of files within your project and includes additional properties that can be valuable for security analysis and compliance tracking.
To compare the metadata captured for the rich package in both outputs, you can pull its component entry out of each SBOM with jq (assuming both documents are in CycloneDX JSON format):
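# Extract the "rich" component from the pipdeptree + cyclonedx-py output
jq '.components[] | select(.name == "rich")' cyclonedx-sbom.json

# Extract the same component from Syft's CycloneDX output for a side-by-side look
syft requirements.txt -o cyclonedx-json | jq '.components[] | select(.name == "rich")'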
While both approaches are valid, Syft offers several compelling advantages. As a universal tool that works across multiple software ecosystems, Syft eliminates the need to maintain different tools for different parts of your software stack.
Its rich metadata gives you deeper insights into your dependencies, including detailed license information and precise package locations. Syft’s support for multiple output formats, including CycloneDX, SPDX, and its native format, ensures compatibility with your existing toolchain and compliance requirements.
The project’s active development means you benefit from regular updates and security fixes, keeping pace with the evolving software supply chain security landscape. Finally, Syft’s robust CLI and API options make integrating into your existing automation pipelines and CI/CD workflows easy.
How to Generate a Python SBOM with Syft
Ready to generate SBOMs for your Python projects? Here’s how to get started with Syft:
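The short version, collecting the commands from earlier in one place:
# Install Syft (see the project README for platform-specific options)
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin

# Scan a project directory and write a CycloneDX SBOM
syft path/to/project -o cyclonedx-json > sbom.cdx.json

# Or scan a container image and write an SPDX SBOM
syft my-app:latest -o spdx-json > sbom.spdx.json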
While pipdeptree combined with cyclonedx-py provides a solid Python-specific solution, Syft offers a more comprehensive and versatile approach to SBOM generation. Its ability to handle multiple ecosystems, provide rich metadata, and support various output formats makes it an excellent choice for modern software supply chain security needs.
Whether starting with SBOMs or looking to improve your existing process, Syft provides a robust, future-proof solution that grows with your needs. Try it and see how it can enhance your software supply chain security today.
Learn about the role that SBOMs play for the security of your organization in this white paper.
As software supply chain security becomes a top priority, organizations are turning to Software Bill of Materials (SBOM) generation and analysis to gain visibility into the composition of their software and supply chain dependencies in order to reduce risk. However, integrating SBOM analysis tools into existing workflows can be complex, requiring extensive configuration and technical expertise. Anchore Enterprise, a leading SBOM management and container security platform, simplifies this process with seamless integration capabilities that cater to modern DevSecOps pipelines.
This article explores how Anchore makes SBOM analysis effortless by offering automation, compatibility with industry standards, and integration with popular CI/CD tools.
Learn about the role that SBOMs play for the security of your organization in this white paper.
SBOMs play a crucial role in software security, compliance, and vulnerability management. However, organizations often face challenges when adopting SBOM analysis tools:
Complex Tooling: Many SBOM solutions require significant setup and customization.
Scalability Issues: Enterprises managing thousands of dependencies need scalable and automated solutions.
Compatibility Concerns: Ensuring SBOM analysis tools work seamlessly across different DevOps environments can be difficult.
Anchore addresses these challenges by providing a streamlined approach to SBOM analysis with easy-to-use integrations.
How Anchore Simplifies SBOM Analysis Integration
1. Automated SBOM Generation and Analysis
Anchore automates SBOM generation from various sources, including container images, software packages, and application dependencies. This eliminates the need for manual intervention, ensuring continuous security and compliance monitoring.
Automatically scans and analyzes SBOMs for vulnerabilities, licensing issues, and security and compliance policy violations.
Provides real-time insights to security teams.
2. Seamless CI/CD Integration
DevSecOps teams require tools that integrate effortlessly into their existing workflows. Anchore achieves this by offering:
Popular CI/CD platform plugins: Jenkins, GitHub Actions, GitLab CI/CD, Azure DevOps and more.
API-driven architecture: Embed SBOM generation and analysis in any DevOps pipeline.
Policy-as-code support: Enforce security and compliance policies within CI/CD workflows.
AnchoreCTL: A command-line (CLI) tool for developers to generate and analyze SBOMs locally before pushing to production.
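To make that concrete, a pipeline step can be as small as two shell commands. Here is a minimal sketch using the open source Syft and Grype, which mirrors the workflow Anchore Enterprise automates (the image reference is a placeholder):
# Generate an SBOM for the image that was just built
syft registry.example.com/my-app:${BUILD_TAG} -o cyclonedx-json > sbom.cdx.json

# Scan the SBOM and fail the pipeline on high-severity (or worse) findings
grype sbom:./sbom.cdx.json --fail-on high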
3. Cloud Native and On-Premises Deployment
Organizations have diverse infrastructure requirements, and Anchore provides flexibility through:
Cloud native support: Works seamlessly with Kubernetes, OpenShift, AWS, and GCP.
On-premises deployment: For organizations requiring strict control over data security.
Hybrid model: Allows businesses to use cloud-based Anchore Enterprise while maintaining on-premises security scanning.
Bonus: Anchore also offers an air-gapped deployment option for organizations working with customers that provide critical national infrastructure like energy, financial services or defense.
A major challenge in SBOM adoption is developer resistance due to complexity. Anchore makes security analysis developer-friendly by:
Providing CLI and API tools for easy interaction.
Delivering clear, actionable vulnerability reports instead of overwhelming developers with false positives.
Integrating directly with development environments, such as VS Code and JetBrains IDEs.
Providing industry-standard 24/7 customer support through Anchore’s customer success team.
Conclusion
Anchore has positioned itself as a leader in SBOM analysis by making integration effortless, automating security checks, and supporting industry standards. Whether an organization is adopting SBOMs for the first time or looking to enhance its software supply chain security, Anchore provides a scalable and developer-friendly solution.
By integrating automated SBOM generation, CI/CD compatibility, cloud native deployment, and compliance management, Anchore enables businesses (no matter the size) and government institutions to adopt SBOM analysis without disrupting their workflows. As software security becomes increasingly critical, tools like Anchore will play a pivotal role in ensuring a secure and transparent software supply chain.
For organizations seeking a simple-to-deploy SBOM analysis solution, Anchore Enterprise is here to deliver. Request a demo with our team today!
Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.
We’re excited to announce Syft v1.20.0! If you’re new to the community, Syft is Anchore’s open source software composition analysis (SCA) and SBOM generation tool that provides foundational support for software supply chain security for modern DevSecOps workflows.
The latest version is packed with performance improvements, enhanced SBOM accuracy, and several community-driven features that make software composition scanning more comprehensive and efficient than ever.
Scanning projects with numerous DLLs was reported to take unusually long on Windows, sometimes up to 50 minutes. A sharp-eyed community member (@rogueai) discovered that certificate validation was being performed unnecessarily during DLL scanning. A fix was merged into this release, and those lengthy scans have been reduced from as much as 50 minutes to just a few minutes—a massive performance improvement for Windows users!
Bitnami Embedded SBOM Support: Maximum Accuracy
Container images from Bitnami include valuable embedded SBOMs located at /opt/bitnami/. These SBOMs, packaged by the image creators themselves, represent the most authoritative source for package metadata. Thanks to community member @juan131 and maintainer @willmurphyscode, Syft now includes a dedicated cataloger for these embedded SBOMs.
This feature wasn’t simple to implement. It required careful handling of package relationships and sophisticated deduplication logic to merge authoritative vendor data with Syft’s existing scanning capabilities. The result? Scanning Bitnami images gives you the most accurate SBOM possible, combining authoritative vendor data with Syft’s comprehensive analysis.
Smarter License Detection
Handling licenses for non-open source projects can be a bit tricky. We discovered that when license files can’t be matched to a valid SPDX expression, they sometimes get erroneously marked as “unlicensed”—even when valid license text is present. For example, our dpkg cataloger occasionally encountered a license like:
NVIDIA Software License Agreement and CUDA Supplement to Software License Agreement
And categorized the package as unlicensed. Ideally, the cataloger would capture this non-standards-compliant license whether or not the maintainer follows SPDX.
Community member @HeyeOpenSource and maintainer @spiffcs tackled this challenge with an elegant solution: a new configuration option that preserves the original license text when SPDX matching fails. While disabled by default for compatibility, you can enable this feature with license.include-unknown-license-content: true in your configuration. This ensures you never lose essential license information, even for non-standard licenses.
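For example, the option can be set in Syft's configuration file. A minimal sketch, assuming Syft picks up a .syft.yaml in the working directory and that the dotted key nests as shown; check the configuration reference for your Syft version:
# Persist the option in .syft.yaml (key nesting is an assumption; verify
# against your Syft version's configuration docs)
cat > .syft.yaml <<'EOF'
license:
  include-unknown-license-content: true
EOF

# Subsequent scans will retain unmatched license text in the SBOM
syft my-image:latest -o syft-json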
Go 1.24: Better Performance and Versioning
The upgrade to Go 1.24 brings two significant improvements:
Faster Scanning: Thanks to Go 1.24’s optimized map implementations, discussed in this Bytesize Go post—and other performance improvements—we’re seeing scan times reduced by as much as 20% in our testing.
Enhanced Version Detection: Go 1.24’s new version embedding means Syft can now accurately report its version and will increasingly provide more accurate version information for Go applications it scans:
$ go version -m ./syft
syft: go1.24.0
path github.com/anchore/syft/cmd/syft
mod github.com/anchore/syft v1.20.0
This also means that as more applications are built with Go 1.24, the versions Syft reports for the Go binaries it scans will become increasingly accurate over time. Everyone’s a winner!
Join the Conversation
We’re proud of these enhancements and grateful to the community for their contributions. If you’re interested in contributing or have ideas for future improvements, head to our GitHub repo and join the conversation. Your feedback and pull requests help shape the future of Syft and our other projects. Happy scanning!
Stay updated on future community spotlights and events by subscribing to our community newsletter.
Learn how MegaLinter leverages Syft and Grype to generate SBOMs and create vulnerability reports
Syft is an open source CLI tool and Go library that generates a Software Bill of Materials (SBOM) from source code, container images and packaged binaries. It is a foundational building block for various use-cases: from vulnerability scanning with tools like Grype, to OSS license compliance with tools like Grant. SBOMs track software components—and their associated supplier, security, licensing, compliance, etc. metadata—through the software development lifecycle.
At a high level, Syft takes the following approach to generating an SBOM:
Determine the type of input source (container image, directory, archive, etc.)
Orchestrate a pluggable set of catalogers to scan the source or artifact
Each package cataloger looks for package types it knows about (RPMs, Debian packages, NPM modules, Python packages, etc.)
In addition, the file catalogers gather other metadata and generate file hashes
Aggregate all discovered components into an SBOM document
Output the SBOM in the desired format (Syft, SPDX, CycloneDX, etc.)
Let’s dive into each of these steps in more detail.
Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.
Syft accepts a variety of input sources:
Container images (both from registries and local Docker/Podman engines)
Local filesystems and directories
Archives (TAR, ZIP, etc.)
Single files
This flexibility is important as SBOMs are used in a variety of environments, from a developer’s workstation to a CI/CD pipeline.
When you run Syft, it first tries to autodetect the source type from the provided input. For example:
# Scan a container image
syft ubuntu:latest

# Scan a local filesystem
syft ./my-app/
Pluggable Package Catalogers
The heart of Syft is its decoupled architecture for software composition analysis (SCA). Rather than one monolithic scanner, Syft delegates scanning to a collection of catalogers, each focused on a specific software ecosystem.
Some key catalogers include:
apk-db-cataloger for Alpine packages
dpkg-db-cataloger for Debian packages
rpm-db-cataloger for RPM packages (sourced from various databases)
python-package-cataloger for Python packages
java-archive-cataloger for Java archives (JAR, WAR, EAR)
npm-package-cataloger for Node/NPM packages
Syft automatically selects which catalogers to run based on the source type. For a container image, it will run catalogers for the package types installed in containers (RPM, Debian, APK, NPM, etc). For a filesystem, Syft runs a different set of catalogers looking for installed software that is more typical for filesystems and source code.
This pluggable architecture gives Syft broad coverage while keeping the core streamlined. Each cataloger can focus on accurately detecting its specific package type.
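You can also steer cataloger selection yourself when you know what you are scanning. A hedged example; flag names have changed across releases (older versions used --catalogers), so confirm with syft --help:
# Run only the Debian package cataloger against an image
syft ubuntu:latest --override-default-catalogers dpkg-db-cataloger -o syft-json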
If we look at a snippet of the trace output from scanning an Ubuntu image, we can see some catalogers in action:
Here, the dpkg-db-cataloger found 91 Debian packages, while the rpm-db-cataloger and npm-package-cataloger didn’t find any packages of their types—which makes sense for an Ubuntu image.
Aggregating and Outputting Results
Once all catalogers have finished, Syft aggregates the results into a single SBOM document. This normalized representation abstracts away the implementation details of the different package types.
Alongside the package inventory, the SBOM document captures:
Source information (repository, download URL, etc.)
File digests and metadata
It also contains essential metadata, including a copy of the configuration used when generating the SBOM (for reproducibility), as well as detailed package evidence (the files each package was parsed from) under package.Metadata.
Finally, Syft serializes this document into one or more output formats. Supported formats include:
Syft’s native JSON format
SPDX’s tag-value and JSON
CycloneDX’s JSON and XML
Having multiple formats allows integrating Syft into a variety of toolchains and passing data between systems that expect certain standards.
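Recent Syft releases can even emit several formats in one scan by repeating the -o flag with a format=path pair (worth confirming against your version's help output):
# One scan, three SBOM documents
syft ubuntu:latest \
  -o syft-json=sbom.syft.json \
  -o spdx-json=sbom.spdx.json \
  -o cyclonedx-xml=sbom.cdx.xml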
Revisiting the earlier Ubuntu example, we can see a snippet of the final output:
NAME        VERSION          TYPE
apt         2.7.14build2     deb
base-files  13ubuntu10.1     deb
bash        5.2.21-2ubuntu4  deb
Container Image Parsing with Stereoscope
To generate high-quality SBOMs from container images, Syft leverages the stereoscope library for parsing container image formats.
Stereoscope does the heavy lifting of unpacking an image into its constituent layers, understanding the image metadata, and providing a unified filesystem view for Syft to scan.
This encapsulation is quite powerful, as it abstracts the details of different container image specs (Docker, OCI, etc.), allowing Syft to focus on SBOM generation while still supporting a wide range of images.
Cataloging Challenges and Future Work
While Syft can generate quality SBOMs for many source types, there are still challenges and room for improvement.
One challenge is supporting the vast variety of package types and versioning schemes. Each ecosystem has its own conventions, making it difficult to extract metadata consistently. Syft has steadily added support for more ecosystems and evolved its catalogers to handle the edge cases of an expanding array of software tooling.
Another challenge is dynamically generated packages, like those created at runtime or built from source. Capturing these requires more sophisticated analysis that Syft does not yet do. To illustrate, let’s look at two common cases:
Runtime Generated Packages
Imagine a Python application that uses a web framework like Flask or Django. These frameworks allow defining routes and views dynamically at runtime based on configuration or plugin systems.
For example, an application might scan a /plugins directory on startup, importing any Python modules found and registering their routes and models with the framework. These plugins could pull in their own dependencies dynamically using importlib.
From Syft’s perspective, none of this dynamic plugin and dependency discovery happens until the application actually runs. The Python files Syft scans statically won’t reveal those runtime behaviors.
Furthermore, plugins could be loaded from external sources not even present in the codebase Syft analyzes. They might be fetched over HTTP from a plugin registry as the application starts.
To truly capture the full set of packages in use, Syft would need to do complex static analysis to trace these dynamic flows, or instrument the running application to capture what it actually loads. Both are much harder than scanning static files.
Source Built Packages
Another typical case is building packages from source rather than installing them from a registry like PyPI or RubyGems.
Consider a C++ application that bundles several libraries in a /3rdparty directory and builds them from source as part of its build process.
When Syft scans the source code directory or docker image, it won’t find any already built C++ libraries to detect as packages. All it will see are raw source files, which are much harder to map to packages and versions.
One approach is to infer packages from standard build tool configuration files, like CMakeLists.txt or Makefile. However, resolving the declared dependencies to determine the full package versions requires either running the build or deeply understanding the specific semantics of each build tool. Both are fragile compared to scanning already built artifacts.
Some Language Ecosystems are Harder Than Others
It’s worth noting that dynamism and source builds are more or less prevalent in different language ecosystems.
Interpreted languages like Python, Ruby, and JavaScript tend to have more runtime dynamism in their package loading compared to compiled languages like Java or Go. That said, even compiled languages have ways of loading code dynamically; it just tends to be less common.
Likewise, some ecosystems emphasize always building from source, while others have a strong culture of using pre-built packages from central registries.
These differences mean the level of difficulty for Syft in generating a complete SBOM varies across ecosystems. Some will be more amenable to static analysis than others out of the box.
What Could Help?
To be clear, Syft has already done impressive work in generating quality SBOMs across many ecosystems despite these challenges. But to reach the next level of coverage, some additional analysis techniques could help:
Static analysis to trace dynamic code flows and infer possible loaded packages (with soundness tradeoffs to consider)
Dynamic instrumentation/tracing of applications to capture actual package loads (sampling and performance overhead to consider)
Standardized metadata formats for build systems to declare dependencies (adoption curve and migration path to consider)
Heuristic mapping of source files to known packages (ambiguity and false positives to consider)
None are silver bullets, but they illustrate the approaches that could help push SBOM coverage further in complex cases.
Ultimately, there will likely always be a gap between what static tools like Syft can discover versus the actual dynamic reality of applications. But that doesn’t mean we shouldn’t keep pushing the boundary! Even incremental improvements in these areas help make the software ecosystem more transparent and secure.
Syft also has room to grow in terms of programming language support. While it covers major ecosystems like Java and Python well, more work is needed to cover languages like Go, Rust, and Swift completely.
As the SBOM landscape evolves, Syft will continue to adapt to handle more package types, sources, and formats. Its extensible architecture is designed to make this growth possible.
Get Involved
Syft is fully open source and welcomes community contributions. If you’re interested in adding support for a new ecosystem, fixing bugs, or improving SBOM generation, the repo is the place to get started.
There are issues labeled “Good First Issue” for those new to the codebase. For more experienced developers, the code is structured to make adding new catalogers reasonably straightforward.
No matter your experience level, there are ways to get involved and help push the state of the art in SBOM generation. We hope you’ll join us!
Learn about the role that SBOMs play for the security of your organization in this white paper.
Today, we’re excited to announce the launch of “Software Bill of Materials 101: A Guide for Developers, Security Engineers, and the DevSecOps Community”. This eBook is a free, open source resource that provides a comprehensive introduction to all things SBOMs.
Why We Created This Guide
While SBOMs have become increasingly critical for software supply chain security, many developers and security professionals still struggle to understand and implement them effectively. We created this guide to help bridge that knowledge gap, drawing on our experience building popular SBOM tools like Syft.
What’s Inside
The ebook covers essential SBOM topics, including:
Practical guidance for integrating SBOMs into DevSecOps pipelines
We’ve structured the content to be accessible to newcomers while providing enough depth for experienced practitioners looking to expand their knowledge.
Community-Driven Development
This guide is published under an open source license and hosted on GitHub at https://github.com/anchore/sbom-ebook. The collective wisdom of the DevSecOps community will strengthen this resource over time. We welcome contributions whether fixes, new content, or translations.
Getting Started
You can read the guide online, download PDF/ePub versions, or clone the repository to build it locally. The source is in Markdown format, making it easy to contribute improvements.
The software supply chain security challenges we face require community collaboration. We hope this guide advances our collective understanding of SBOMs and their role in securing the software ecosystem.
Learn about the role that SBOMs play for the security of your organization in this white paper.
Software Bill of Materials (SBOM) has emerged as a pivotal technology to scale product innovation while taming the inevitable growth of complexity of modern software development. SBOMs are typically thought of as a comprehensive inventory of all software components—both open source and proprietary—within an application. But they are more than just a simple list of “ingredients”. They offer deeper insights that enable organizations to unlock enterprise-wide value. From automating security and compliance audits to assessing legal risks and scaling continuous regulatory compliance, SBOMs are central to the software development lifecycle.
However, as organizations begin to mature their SBOM initiative, the rapid growth in SBOM documents can quickly become overwhelming—a phenomenon known as SBOM sprawl. Below, we’ll outline how SBOM sprawl happens, why it’s a natural byproduct of a maturing SBOM program, and the best practices for wrangling the complexity to extract real value.
Learn about the role that SBOMs play for the security of your organization in this white paper.
SBOM adoption typically begins ad hoc: a Software Engineer starts self-scanning their source code for vulnerabilities, or a DevOps Engineer integrates an open source SBOM generator into a non-critical CI/CD pipeline as a favor to a friendly Security Engineer. SBOMs begin to pile up in niches across the organization, and adoption continues to grow through casual conversation.
It’s only a matter of time before you accumulate a flood of SBOMs. As ad hoc adoption reaches scale, the volume of SBOM data balloons. We’re not all running at Google’s scale, but their SBOM growth chart still illustrates how quickly a new SBOM program can get out of hand.
We call this SBOM sprawl: an overabundance of unmanaged and unorganized SBOM documents. Left unchecked, SBOM sprawl prevents SBOMs from delivering on their use-cases, like real-time vulnerability discovery, automated compliance checks, or supply chain insights.
So, how do we regain control as our SBOM initiative grows? The answer lies in a centralized data store. Instead of letting SBOMs scatter, we bring them together in a robust SBOM management system that can index, analyze, and make these insights actionable. Think of it like any large data management problem. The same best practices used for enterprise data management—ETL pipelines, centralized storage, robust queries—apply to SBOMs as well.
What is SBOM Management (SBOMOps)?
SBOM management—or SBOMOps—is defined as:
A data pipeline, centralized repository and query engine purpose-built to manage and extract actionable software supply chain insights from software component metadata (i.e., SBOMs).
In practical terms, SBOM management focuses on:
Reliably generating SBOMs at key points in the development lifecycle
Ingesting and converting 3rd-party supplier SBOMs
Storing the SBOM documents in a centralized repository
Enriching SBOMs from additional data sources (e.g., security, compliance, licensing, etc.)
Generating the use-case specific queries and reports that drive results
When done right, SBOMOps reduces chaos, making SBOMs instantly available for everything from zero-day vulnerability checks to open source license tracking—without forcing teams to manually search scattered locations for relevant files.
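In shell terms, the ingestion side of such a pipeline can start out very small. A minimal sketch in which the repository endpoint, path, and token are hypothetical stand-ins for whatever SBOM store you run:
# Generate an SBOM for a release artifact...
syft registry.example.com/app:1.4.2 -o cyclonedx-json > app-1.4.2.sbom.json

# ...and ship it to a central SBOM repository (endpoint and auth are hypothetical)
curl -X POST "https://sbom-repo.internal.example.com/api/v1/sboms" \
     -H "Authorization: Bearer ${SBOM_REPO_TOKEN}" \
     -H "Content-Type: application/json" \
     --data-binary @app-1.4.2.sbom.json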
SBOM Management Best Practices
1. Store and manage SBOMs in a central repository
A single, central repository for SBOMs is the antidote to sprawl. Even if developer teams keep SBOM artifacts near their code repositories for local reference, the security organization should maintain a comprehensive SBOM inventory across all applications.
Why it matters:
Rapid incident response: When a new zero-day hits, you need a quick way to identify which applications are affected. That’s nearly impossible if your SBOM data is scattered or nonexistent. (See the sketch after this list.)
Time savings: Instead of rescanning every application from scratch, you can consult the repository for an instant snapshot of dependencies.
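As a sketch of that zero-day drill: with a directory of stored SBOMs, a sweep for a specific CVE (Log4Shell here) is a short Grype loop rather than a fleet-wide rescan:
# Check every stored SBOM for CVE-2021-44228 without re-pulling a single image
for sbom in sboms/*.json; do
  if grype "sbom:$sbom" -q | grep -q "CVE-2021-44228"; then
    echo "AFFECTED: $sbom"
  fi
done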
2. Support SBOM standards—but don’t stop there
SPDX and CycloneDX are the two primary SBOM standards today, each with their own unique strengths. Any modern SBOM management system should be able to ingest and produce both. However, the story doesn’t end there.
Tooling differences:
Not all SBOM generation tools are created equal. For instance, Syft (OSS) and AnchoreCTL can produce a more comprehensive intermediate SBOM format containing additional metadata beyond what SPDX or CycloneDX alone might capture. This extra data can lead to:
More precise vulnerability detection
More granular policy enforcement
Fewer false positives
3. Require SBOMs from all 3rd-party software suppliers
It’s not just about your own code—any 3rd-party software you include is part of your software supply chain. SBOM best practice demands obtaining an SBOM from your suppliers, whether that software is open source or proprietary.
Proprietary: Commercial vendors may be wary of sharing SBOM data (IP or confidentiality concerns). If they won’t provide the SBOM, at least require them to maintain an SBOM-driven vulnerability management workflow and to notify you of relevant incidents.
4. Generate SBOMs at each step in the development process and for each build
Yes, generating an SBOM with every build inflates your data management and storage needs. However, if you have an SBOM management system in place, storing these extra files is a non-issue—and the benefits are massive.
SBOM drift detection: By generating SBOMs multiple times along the DevSecOps pipeline, you can detect any unexpected changes (drift) in your supply chain—whether accidental or malicious. This approach can thwart adversaries (e.g., APTs) that compromise upstream supplier code. An SBOM audit trail lets you pinpoint exactly when and where an unexpected code injection occurred.
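A crude but effective drift check is to diff the package lists of two consecutive build SBOMs. A minimal sketch assuming Syft's native JSON format (packages live under .artifacts) and jq:
# Compare name@version pairs between builds; any output is drift to investigate
diff \
  <(jq -r '.artifacts[] | "\(.name)@\(.version)"' build-41.sbom.json | sort) \
  <(jq -r '.artifacts[] | "\(.name)@\(.version)"' build-42.sbom.json | sort)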
5. Pay it forward >> Create an SBOM for your releases
If you expect third parties to provide you SBOMs, it’s only fair to provide your own to downstream users. This transparency fosters trust and positions you for future SBOM-related regulatory or compliance requirements. If SBOM compliance becomes mandatory (we think the industry is trending that direction), you’ll already be prepared.
How to Architect an SBOM Management System
The reference architecture below outlines the major components needed for a reliable SBOM management solution:
Integrate SBOM Generation into CI/CD Pipeline: Embed an SBOM generator in every relevant DevSecOps pipeline stage to automate software composition analysis and SBOM generation.
Ingest 3rd-party supplier SBOMs: Ingest and normalize external SBOMs from all 3rd-party software component suppliers.
Send All SBOM Artifacts to Repository: Once generated, ensure each SBOM is automatically shipped to the central repository to serve as your source-of-truth for software supply chain data.
Enrich SBOMs with critical business data: Enrich SBOMs from additional data sources (e.g., security, compliance, licensing, etc.)
Stand Up Data Analysis & Visualization: Build or adopt dashboards and query tooling to generate the software supply chain insights desired from the scoped SBOM use-cases.
Profit! (Figuratively): Gain comprehensive software supply chain visibility, faster incident response, and the potential to unlock advanced use-cases like automated policy-as-code enforcement.
You can roll your own SBOM management platform or opt for a turnkey solution 👇.
SBOM Management Using Anchore SBOM and Anchore Enterprise
Anchore offers a comprehensive SBOM management platform that helps you centralize and operationalize SBOM data—unlocking the full strategic value of your software bills of materials:
Anchore Enterprise
Out-of-the-box SBOM generation: Combined SCA scanning, SBOM generation and SBOM format support for full development environment coverage.
Purpose-built SBOM Inventory: Eliminates SBOM sprawl with a centralized SBOM repository to power SBOM use-cases like rapid incident response, license risk management and automated compliance.
DevSecOps Integrations: Drop-in plugins for standard DevOps tooling; CI/CD pipelines, container registries, ticketing systems, and more.
SBOM Enrichment Data Pipeline: Fully automated data enrichment pipeline for security, compliance, licensing, etc. data.
Pre-built Data Analysis & Visualization: Delivers immediate insights on the health of your software supply chain with visualization dashboards for vulnerabilities, compliance, and policy checks.
Policy-as-Code Enforcement: Customizable policy rules guarantee continuous compliance, preventing high-risk or non-compliant components from entering production and reducing manual intervention.
To close out 2024, we’re going to count down the top 10 hottest hits from the Anchore blog in 2024! The Anchore content team continued our tradition of delivering expert guidance, practical insights, and forward-looking strategies on DevSecOps, cybersecurity compliance, and software supply chain management.
This top ten list spotlights our most impactful blog posts from the year, each tackling a different angle of modern software development and security. Hot topics this year include:
All things SBOM (software bill of materials)
DevSecOps compliance for the Department of Defense (DoD)
Regulatory compliance for federal government vendors (e.g., FedRAMP & SSDF Attestation)
Vulnerability scanning and management—a perennial favorite!
Our selection runs the gamut of knowledge needed to help your team stay ahead of the compliance curve, boost DevSecOps efficiency, and fully embrace SBOMs. So, grab your popcorn and settle in—it’s time to count down the blog posts that made the biggest splash this year!
The Top Ten List
10 | A Guide to Air Gapping
Kicking us off at number 10 is a blog that’s all about staying off the grid—literally. Ever wonder what it really means to keep your network totally offline?
A Guide to Air Gapping: Balancing Security and Efficiency in Classified Environments breaks down the concept of “air gapping”—literally disconnecting a computer from the internet by leaving a “gap of air” between your computer and an ethernet cable. It is generally considered a security practice to protect classified, military-grade data or similar.
Our blog covers the perks, like knocking out a huge range of cyber threats, and the downsides, like having to manually update and transfer data. It also details how Anchore Enforce Federal Edition can slip right into these ultra-secure setups, blending top-notch protection with the convenience of automated, cloud-native software checks.
9 | SBOMs + Vulnerability Management == Open Source Security++
Coming in at number nine on our countdown is a blog that breaks down two of our favorite topics: SBOMs and vulnerability scanners—and how using SBOMs as the foundation for vulnerability management can level up your open source security game.
The post covers how this SBOM-driven approach speeds up the DevSecOps process so you don’t feel the drag of legacy security tools.
By switching to this modern, SBOM-driven approach, you’ll see benefits like faster fixes, smoother compliance checks, and fewer late-stage security surprises—just ask companies like NVIDIA, Infoblox, DreamFactory and ModuleQ, who’ve saved tons of time and hassle by adopting these practices.
8 | Improving Syft’s Binary Detection
Landing at number eight, we’ve got a blog post that’s basically a backstage pass to Syft’s binary detection magic. Improving Syft’s Binary Detection goes deep on how Syft—Anchore’s open source SBOM generation tool—uncovers the details of executable files and how you can lend a hand in making it even better.
We walk you through the process of adding new binary detection rules, from finding the right binaries and testing them out, to fine-tuning the patterns that match their version strings.
The end goal? Helping all open source contributors quickly get started making their first pull request and broadening support for new ecosystems. A thriving, community-driven approach to better securing the global open source ecosystem.
7 | A Guide to FedRAMP in 2024: FAQs & Key Takeaways
Sliding in at lucky number seven, we’ve got the ultimate cheat sheet for FedRAMP in 2024 (and 2025😉)! Ever wonder how Uncle Sam greenlights those fancy cloud services? A Guide to FedRAMP in 2024: FAQs & Key Takeaways spills the beans on all the FedRAMP basics you’ve been struggling to find—fast answers without all the fluff.
It covers what FedRAMP is, how it works, who needs it, and why it matters; detailing the key players and how it connects with other federal security standards like FISMA. The idea is to help you quickly get a handle on why cloud service providers often need FedRAMP certification, what benefits it offers, and what’s involved in earning that gold star of trust from federal agencies. By the end, you’ll know exactly where to start and what to expect if you’re eyeing a spot in the federal cloud marketplace.
6 | Grant: Open Source License Management
By using SBOMs, Grant can quickly show you which licenses are in play—and whether any have changed from something friendly to something more restrictive. With handy list and check commands, Grant makes it easier to spot and manage license risk, ensuring you keep shipping software without getting hit with last-minute legal surprises.
5 | An Overview of SSDF Attestation: Compliance Need-to-Knows
Landing at number five on tonight’s compliance countdown is a big wake-up call for all you software suppliers eyeing Uncle Sam’s checkbook: the SSDF Attestation Form. That’s right—starting now, if you wanna do business with the feds, you gotta show off those DevSecOps chops, no exceptions! In Using the Common Form for SSDF Attestation: What Software Producers Need to Know we break down the new Secure Software Development Attestation Form—released in March 2024—that’s got everyone talking in the federal software space.
In short, if you’re a software vendor wanting to sell to the US government, you now have to “show your work” when it comes to secure software development. The form builds on the SSDF framework, turning it from a nice-to-have into a must-do. It covers which software is included, the timelines you need to know, and what happens if you don’t shape up.
There are real financial risks if you can’t meet the deadlines or if you fudge the details (hello, criminal penalties!). With this new rule, it’s time to get serious about locking down your dev practices or risk losing out on government contracts.
4 | Prep your Containers for STIG
At number four, we’re diving headfirst into the STIG compliance world—the DoD’s ultimate ‘tough crowd’ when it comes to security! If you’re feeling stressed about locking down those container environments—we’ve got you covered. 4 Ways to Prepare your Containers for the STIG Process is all about tackling the often complicated STIG process for containers in DoD projects.
You’ll learn how to level up your team by cross-training cybersecurity pros in container basics and introducing your devs and architects to STIG fundamentals. It also suggests going straight to the official DISA source for current STIG info, making the STIG Viewer a must-have tool on everyone’s workstation, and looking into automation to speed up compliance checks.
Bottom line: stay informed, build internal expertise, and lean on the right tools to keep the STIG process from slowing you down.
3 | Syft Graduates to v1.0!
Give it up for number three on our countdown—Syft’s big graduation announcement! In Syft Reaches v1.0! we celebrate Syft hitting the big 1.0 milestone!
Syft is Anchore’s OSS tool for generating SBOMs, helping you figure out exactly what’s inside your software, from container images to source code. Over the years, it’s grown to support over 40 package types, outputting SBOMs in various formats like SPDX and CycloneDX. With v1.0, Syft’s CLI and API are now stable, so you can rely on it for consistent results and long-term compatibility.
But don’t worry—development doesn’t stop here. The team plans to keep adding support for more ecosystems and formats, and they invite the community to join in, share ideas, and contribute to the future of Syft.
2 | RAISE 2.0 Overview: RMF and ATO for the US Navy
Next up at number two is the lowdown on RAISE 2.0—your backstage pass to lightning-fast software approvals with the US Navy! In RMF and ATO with RAISE 2.0 — Navy’s DevSecOps solution for Rapid Delivery we break down what RAISE 2.0 means for teams working with the Department of the Navy’s containerized software. The key takeaway? By using an approved DevSecOps platform—known as an RPOC—you can skip getting separate ATOs for every new app.
The guidelines encourage a “shift left” approach, focusing on early and continuous security checks throughout development. Tools like Anchore Enforce Federal Edition can help automate the required security gates, vulnerability scans, and policy checks, making it easier to keep everything compliant.
In short, RAISE 2.0 is all about streamlining security, speeding up approvals, and helping you deliver secure code faster.
1 | Introduction to the DoD Software Factory
Taking our top spot at number one, we’ve got the DoD software factory—the true VIP of the DevSecOps world! We’re talking about a full-blown, high-security software pipeline that cranks out code for the defense sector faster than a fighter jet screaming across the sky. In Introduction to the DoD Software Factory we break down what a DoD software factory really is—think of it as a template to build a DoD-approved DevSecOps pipeline.
The blog post details how concepts like shifting security left, using microservices, and leveraging automation all come together to meet the DoD’s sky-high security standards. Whether you choose an existing DoD software factory (like Platform One) or build your own, the goal is to streamline development without sacrificing security.
Tools like Anchore Enforce Federal Edition can help with everything from SBOM generation to continuous vulnerability scanning, so you can meet compliance requirements and keep your mission-critical apps protected at every stage.
Wrap-Up
That wraps up the top ten Anchore blog posts of 2024! We covered it all: next-level software supply chain best practices, military-grade compliance tactics, and all the open-source goodies that keep your DevSecOps pipeline firing on all cylinders.
The common thread throughout them all is the recognition that security and speed can work hand-in-hand. With SBOM-driven approaches, modern vulnerability management, and automated compliance checks, organizations can achieve the rapid, secure, and compliant software delivery required in the DevSecOps era. We hope these posts will serve as a guide and inspiration as you continue to refine your DevSecOps practice, embrace new technologies, and steer your organization toward a more secure and efficient future.
If you enjoyed our greatest hits album of 2024 but need more immediacy in your life, follow along in 2025 by subscribing to the Anchore Newsletter or following Anchore on your favorite social platform:
ModuleQ, an AI-driven enterprise knowledge platform, knows only too well the stakes for a company providing software solutions in the highly regulated financial services sector. In this world where data breaches are cause for termination of a vendor relationship and evolving cyberthreats loom large, proactive vulnerability management is not just a best practice—it’s a necessity.
ModuleQ required a vulnerability management platform that could automatically identify and remediate vulnerabilities, maintain airtight security, and meet stringent compliance requirements—all without slowing down their development velocity.
Learn the essential container security best practices to reduce the risk of software supply chain attacks in this white paper.
The Challenge: Scaling Security in a High-Stakes Environment
ModuleQ found itself drowning in a flood of newly released vulnerabilities—over 25,000 in 2023 alone. Operating in a heavily regulated industry meant any oversight could have severe repercussions. High-profile incidents like the Log4j exploit underscored the importance of supply chain security, yet the manual, resource-intensive nature of ModuleQ’s vulnerability management process made it hard to keep pace.
The mandate that no critical vulnerabilities reached production was a particularly high bar to meet with the existing manual review process. Each time engineers stepped away from their coding environment to check a separate security dashboard, they lost context, productivity, and confidence. The fear of accidentally letting something slip through the cracks was ever present.
The Solution: Anchore Secure for Automated, Integrated Vulnerability Management
ModuleQ chose Anchore Secure to simplify, automate, and fully integrate vulnerability management into their existing DevSecOps workflows. Instead of relying on manual security reviews, Anchore Secure injected security measures seamlessly into ModuleQ’s Azure DevOps pipelines, .NET, and C# environment. Every software build—staged nightly through a multi-step pipeline—was automatically scanned for vulnerabilities. Any critical issues triggered immediate notifications and halted promotions to production, ensuring that potential risks were addressed before they could ever reach customers.
Equally important, Anchore’s platform was built to operate in on-prem or air-gapped environments. This guaranteed that ModuleQ’s clients could maintain the highest security standards without the need for external connectivity. For an organization whose customers demand this level of diligence, Anchore’s design provided peace of mind and strengthened client relationships.
The Results
80% Reduction in Vulnerability Management Time: Automated scanning, triage, and reporting freed the team from manual checks, letting them focus on building new features rather than chasing down low-priority issues.
50% Less Time on Security Tasks During Deployments: Proactive detection of high-severity vulnerabilities streamlined deployment workflows, enabling ModuleQ to deliver software faster—without compromising security.
Unwavering Confidence in Compliance: With every new release automatically vetted for critical vulnerabilities, ModuleQ’s customers in the financial sector gained renewed trust. Anchore’s support for fully on-prem deployments allowed ModuleQ to meet stringent security demands consistently.
Looking Ahead
In an era defined by unrelenting cybersecurity threats, ModuleQ proved that speed and security need not be at odds. Anchore Secure provided a turnkey solution that integrated seamlessly into their workflow, saving time, strengthening compliance, and maintaining the agility to adapt to future challenges. By adopting an automated security backbone, ModuleQ has positioned itself as a trusted and reliable partner in the financial services landscape.
Welcome back to the second installment of our two-part series on “The Evolution of SBOMs in the DevSecOps Lifecycle”. In our first post, we explored how Software Bills of Materials (SBOMs) evolve over the first 4 stages of the DevSecOps pipeline—Plan, Source, Build & Test—and how each type of SBOM serves different purposes. Some of those use-cases include: shift left vulnerability detection, regulatory compliance automation, OSS license risk management and incident root cause analysis.
In this part, we’ll continue our exploration with the final 4 stages of the DevSecOps lifecycle, examining:
Analyzed SBOMs at the Release (Registry) stage
Deployed SBOMs during the Deployment phase
Runtime SBOMs in Production (Operate & Monitor stages)
As applications migrate down the pipeline, design decisions made at the beginning begin to ossify, becoming more difficult to change; this influences the challenges that are experienced and the role that SBOMs play in overcoming these novel problems. Some of the new challenges that come up include pipeline leaks, vulnerabilities in third-party packages, and runtime injection—all of which introduce significant risk. Understanding how SBOMs evolve across these stages helps organizations mitigate these risks effectively.
Whether you’re aiming to enhance your security posture, streamline compliance reporting, or improve incident response times, this comprehensive guide will equip you with the knowledge to leverage SBOMs effectively from Release to Production. Additionally, we’ll offer pro tips to help you maximize the benefits of SBOMs in your DevSecOps practices.
So, let’s continue our journey through the DevSecOps pipeline and discover how SBOMs can transform the latter stages of your software development lifecycle.
Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.
After development is completed and the new release of the software is declared a “golden” image, the build system pushes the release artifact to a registry for storage until it is deployed. At this stage, an SBOM that is generated from these container images, binaries, etc. is named an “Analyzed SBOM” by CISA. The name is a little confusing since all SBOMs should be analyzed regardless of the stage at which they are generated. A more appropriate name might be a Release SBOM but we’ll stick with CISA’s name for now.
At first glance, it would seem that Analyzed SBOMs and the final Build SBOMs should be identical since it is the same software, but that doesn’t hold up in practice. DevSecOps pipelines aren’t hermetically sealed systems; they can be “leaky”. You might be surprised what finds its way into this storage repository and eventually gets deployed, bypassing your carefully constructed build and test setup.
On top of that, the registry holds more than just the first-party applications that are built in-house. It also stores 3rd-party container images such as operating system base images and any other self-contained applications used by the organization.
The additional metadata that is collected for an Analyzed SBOM includes:
Release images that bypass the happy path build and test pipeline
3rd-party container images, binaries and applications
Pros and Cons
Pros:
Comprehensive Artifact Inventory: A more holistic view of all software—both 1st- and 3rd-party—that is utilized in production.
Enhanced Security and Compliance Posture: Catches vulnerabilities and non-compliant images for all software that will be deployed to production. This reduces the risk of security incidents and compliance violations.
Third-Party Supply Chain Risk Management: Provides insights into the vulnerabilities and compliance status of third-party components.
Ease of Implementation: This stage is typically the lowest lift for implementation given that most SBOM generators can be deployed standalone and pointed at the registry to scan all images.
Cons:
High Risk for Release Delays: Scanning images at this stage is akin to traditional waterfall-style development patterns. Most design decisions are baked in and changes typically incur a steep penalty.
Difficult to Push Feedback into Existing Workflows: The registry sits outside of typical developer workflows and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.
Complexity in Management: Managing SBOMs for both internally developed and third-party components adds complexity to the software supply chain.
Use-Cases
Software Supply Chain Security: Organizations can detect vulnerabilities in both their internally developed software and external software to prevent supply chain injections from leading to a security incident.
Compliance Reporting: Reporting on both 1st- and 3rd-party software is necessary for industries with strict regulatory requirements.
Detection of Leaky Pipelines: Identifies release images that have bypassed the standard build and test pipeline, allowing teams to take corrective action.
Third-Party Risk Analysis: Assesses the security and compliance of third-party container images, binaries, and applications before they are deployed.
Example: An organization subject to strict compliance standards like FedRAMP or cATO uses Analyzed SBOMs to verify that all artifacts in their registry, including third-party applications, comply with security policies and licensing requirements. This practice not only enhances their security posture but also streamlines the audit process.
Pro Tip
A registry is an easy and non-invasive way to test and evaluate potential SBOM generators. It won’t give you a full picture of what can be found in your DevSecOps pipeline but it will at least give you an initial idea of its efficacy and help you make the decision on whether to go through the effort of integrating it into your build pipeline where it will produce deeper insights.
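For instance, a quick evaluation along these lines might look like the following sketch, which points Syft (one open source candidate generator) at a couple of registry images. This is a minimal sketch: it assumes Syft is installed and the image names are illustrative stand-ins for your own registry.

```python
import subprocess

# Illustrative image names only—swap in repositories from your own registry.
IMAGES = [
    "registry.example.com/app/api:1.4.2",
    "registry.example.com/vendor/nginx:1.25",
]

for image in IMAGES:
    out_file = image.replace("/", "_").replace(":", "_") + ".spdx.json"
    # The "registry:" prefix tells Syft to pull image metadata directly
    # from the registry rather than from a local Docker daemon.
    subprocess.run(
        ["syft", f"registry:{image}", "-o", f"spdx-json={out_file}"],
        check=True,
    )
    print(f"wrote {out_file}")
```

Reviewing the resulting SBOMs for depth and accuracy gives you a low-effort signal on whether the generator deserves a spot in your build pipeline.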
Deploy => Deployed SBOM
As your container orchestrator deploys an image from your registry into production, it will also orchestrate any production dependencies, such as sidecar containers. At this stage, an SBOM that is generated is named a “Deployed SBOM” by CISA.
The ideal scenario is that your operations team is storing all of these images in the same central registry as your engineering team but—as we’ve noted before—reality diverges from the ideal.
The additional metadata that is collected for a Deployed SBOM includes:
Any additional sidecar containers or production dependencies that are injected or modified through a release controller.
Pros and Cons
Pros:
Enhanced Security Posture: The final gate to prevent vulnerabilities from being deployed into production. This reduces the risk of security incidents and compliance violations.
Leaky Pipeline Detection: Another location to detect when the happy path of the DevSecOps pipeline has been circumvented.
Compliance Enforcement: Some compliance standards require a deployment breaking enforcement gate before any software is deployed to production. A container orchestrator release controller is the ideal location to implement this.
Cons:
Essentially the same issues that come up during the release phase.
High Risk for Release Delays: Scanning images at this stage comes even later than in traditional waterfall-style development patterns and will incur a steep penalty if an issue is uncovered.
Difficult to Push Feedback into Existing Workflows: A deployment release controller sits outside of typical developer workflows and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.
Use-Cases
Strict Software Supply Chain Security: Implementing a pipeline breaking gating mechanism is typically reserved for only the most critical security vulnerabilities (think: an actively exploitable known vulnerability).
High-Stakes Compliance Enforcement: Industries like defense, financial services and critical infrastructure will require vendors to implement a deployment gate for specific risk scenarios beyond actively exploitable vulnerabilities.
Compliance Audit Automation: Most regulatory compliance frameworks mandate audit artifacts at deploy time; these documents can be automatically generated and stored for future audits.
Example: A Deployed SBOM can be used as the source of truth for generating a report that demonstrates that no HIGH or CRITICAL vulnerabilities were deployed to production during an audit period.
Pro Tip
Combine a Deployed SBOM with a container vulnerability scanner that cross-checks all vulnerabilities against CISA’s Known Exploited Vulnerabilities (KEV) catalog. In the scenario where a matching KEV is found for a software component, you can configure your vulnerability scanner to return a FAIL response to your release controller to abort the deployment.
This strategy strikes an ideal balance: it adds no unnecessary delays to software delivery, while halting a deployment only when there is an extremely high probability of a security incident.
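A minimal sketch of such a gate, assuming the scanner output is Grype’s JSON format and using CISA’s public KEV feed (URL current as of this writing), might look like:

```python
import json
import sys
import urllib.request

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def load_kev_cves() -> set:
    # The KEV catalog is a JSON document with a top-level
    # "vulnerabilities" array; each entry carries a "cveID".
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    return {entry["cveID"] for entry in catalog["vulnerabilities"]}

def main(scan_results_path: str) -> int:
    # scan_results_path: scanner output from scanning the Deployed SBOM.
    with open(scan_results_path) as f:
        matches = json.load(f).get("matches", [])
    found_cves = {m["vulnerability"]["id"] for m in matches}
    exploited = sorted(found_cves & load_kev_cves())
    if exploited:
        print(f"FAIL: actively exploited CVEs present: {exploited}")
        return 1  # non-zero exit signals the release controller to abort
    print("PASS: no KEV matches in deployed artifact")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```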
Operate & Monitor (or Production) => Runtime SBOM
After your container orchestrator has deployed an application into your production environment, it is live and serving customer traffic. SBOMs generated at this stage don’t have a name specified by CISA; they are sometimes referred to as “Runtime SBOMs”. SBOMs are still a new-ish standard and the naming conventions will continue to evolve.
The additional metadata that is collected for a Runtime SBOM includes:
Modifications (i.e., intentional hotfixes or malicious injections) made to running applications in your production environment.
Pros and Cons
Pros:
Continuous Security Monitoring: Identifies new vulnerabilities that emerge after deployment.
Active Runtime Inventory: Provides a canonical view into an organization’s active software landscape.
Low Lift Implementation: Deploying SBOM generation into a production environment typically only requires deploying the scanner as another container and giving it permission to access the rest of the production environment.
Cons:
No Shift-Left Security: Runtime scanning is, by definition, excluded from a shift-left security posture.
Potential for Release Rollbacks: Production is the worst possible place for proactive remediation. A vulnerability discovered at this stage may already constitute a security incident and force a release rollback.
Use-Cases
Rapid Incident Management: When new critical vulnerabilities are discovered and announced by the community, the first priority for an organization is to determine exposure. An accurate production inventory, down to the component level, is needed to answer this critical question.
Threat Detection: Continuously monitoring for anomalous activity linked to specific components. Sealing your system off completely from advanced persistent threats (APTs) is an infeasible goal. Instead, quick detection and rapid intervention are the scalable solution to limit the impact of these adversaries.
Patch Management: As new versions of 3rd-party components and applications are released, an inventory of impacted production assets provides helpful insights that can direct the prioritization of engineering efforts.
Example: When the XZ Utils vulnerability was announced in the spring of 2024, organizations that already automatically generated a Runtime SBOM inventory ran a simple search query against their SBOM database and knew within minutes—or even seconds—whether they were impacted.
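As a rough illustration, if Runtime SBOMs were stored as CycloneDX JSON files (a real deployment would query an SBOM database instead), the exposure check could be as simple as this sketch; the directory name and component names to match are assumptions:

```python
import json
from pathlib import Path

# xz/liblzma versions known to contain the backdoor
AFFECTED_VERSIONS = {"5.6.0", "5.6.1"}
SUSPECT_NAMES = {"xz", "xz-utils", "xz-libs", "liblzma"}

hits = []
for sbom_path in Path("runtime-sboms").glob("*.json"):
    components = json.loads(sbom_path.read_text()).get("components", [])
    for c in components:
        if c.get("name") in SUSPECT_NAMES and c.get("version") in AFFECTED_VERSIONS:
            hits.append((sbom_path.name, c["name"], c["version"]))

if hits:
    for sbom, name, version in hits:
        print(f"IMPACTED: {sbom} contains {name} {version}")
else:
    print("No impacted workloads found")
```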
Pro Tip
If you want to learn how Google was able to go from an all-hands-on-deck security incident when the XZ Utils vulnerability was announced to an all-clear in under 10 minutes, watch our webinar with the lead of Google’s SBOM initiative.
As the SBOM standard has evolved, its scope has grown considerably. What started as a structured way to store information about open source licenses has expanded to cover numerous use-cases. A clear understanding of the evolution of SBOMs throughout the DevSecOps lifecycle is essential for organizations aiming to solve problems ranging from software supply chain security to regulatory compliance to legal risk management.
SBOMs are a powerful tool in the arsenal of modern software development. By recognizing their importance and integrating them thoughtfully across the DevSecOps lifecycle, you position your organization at the forefront of secure, efficient, and compliant software delivery.
Ready to secure your software supply chain and automate compliance tasks with SBOMs? Anchore is here to help. We offer SBOM management, vulnerability scanning and compliance automation enforcement solutions. If you still need some more information before looking at solutions, check out our webinar below on scaling a secure software supply chain with Kubernetes. 👇👇👇
Learn how Spectro Cloud secured their Kubernetes-based software supply chain and the pivotal role SBOMs played.
The software industry has wholeheartedly adopted the practice of building new software on the shoulders of the giants that came before. To accomplish this, developers assemble a foundation of pre-built, 3rd-party components and then wrap custom 1st-party code around this structure to create novel applications. It is an extraordinarily innovative and productive practice, but it also introduces challenges ranging from security vulnerabilities to compliance headaches to legal risk nightmares. Software bills of materials (SBOMs) have emerged to provide solutions for these wide-ranging problems.
An SBOM provides a detailed inventory of all the components that make up an application at a point in time. However, it’s important to recognize that not all SBOMs are the same—even for the same piece of software! SBOMs evolve throughout the DevSecOps lifecycle, just as an application evolves from source code to a container image to a running application. The Cybersecurity and Infrastructure Security Agency (CISA) has codified this idea by differentiating between the different types of SBOMs. Each type serves different purposes and captures information about an application through its evolutionary process.
In this 2-part blog series, we’ll deep dive into each stage of the DevSecOps process and the associated SBOM, highlighting the differences, the benefits and disadvantages, and the use-cases that each type of SBOM supports. Whether you’re just beginning your SBOM journey or looking to deepen your understanding of how SBOMs can be integrated into your DevSecOps practices, this comprehensive guide will provide valuable insights and advice from industry experts.
Learn about the role that SBOMs play in the security of your organization in this white paper.
Over the past decade the US government got serious about software supply chain security and began advocating for SBOMs as the standardized approach to the problem. As part of this initiative, CISA created the Types of Software Bill of Material (SBOM) Documents white paper that codified the definitions of the different types of SBOMs and mapped them to each stage of the DevSecOps lifecycle. We will discuss each in turn, but before we do, let’s anchor on some terminology to prevent confusion or misunderstanding.
Below is a diagram that lays out each stage of the DevSecOps lifecycle as well as the naming convention we will use going forward.
With that out of the way, let’s get started!
Plan => Design SBOM
As the DevSecOps paradigm has spread across the software industry, a notable best practice known as the security architecture review has become integral to the development process. This practice embodies the DevSecOps goal of integrating security into every phase of the software lifecycle, aligning perfectly with the concept of Shift-Left Security—addressing security considerations as early as possible.
At this stage, the SBOM documents the planned components of the application. CISA refers to SBOMs generated during this phase as Design SBOMs. These SBOMs are preliminary and outline the intended components and dependencies before any code is written.
The metadata that is collected for a Design SBOM includes:
Component Inventory: Identifying potential OSS libraries and frameworks to be used as well as the dependency relationship between the components.
Licensing Information: Understanding the licenses associated with selected components to ensure compliance.
Risk Assessment Data: Evaluating known vulnerabilities and security risks associated with each component.
This might sound like a lot of extra work, but luckily, if you’re already performing DevSecOps-style planning that incorporates a security and legal review—as is best practice—you’re already surfacing all of this information. The only difference is that this preliminary data is formatted and stored in a standardized data structure, namely an SBOM.
Pros and Cons
Pros:
Maximal Shift-Left Security: Vulnerabilities cannot be found any earlier in the software development process. Design time security decisions are the peak of a proactive security posture and preempt bad design decisions before they become ingrained into the codebase.
Cost Efficiency: Resolving security issues at this stage is generally less expensive and less disruptive than during later stages of development or—worst of all—after deployment.
Legal and Compliance Risk Mitigation: Ensures that all selected components meet necessary compliance standards, avoiding legal complications down the line.
Cons:
Upfront Investment: Gathering detailed information on potential components and maintaining an SBOM at this stage requires a non-trivial commitment of time and effort.
Incomplete Information: Projects are not static; they will adapt as unplanned challenges surface. A Design SBOM likely won’t stay relevant for long.
Use-Cases
There are a number of use-cases that are enabled by a Design SBOM:
Security Policy Enforcement: Automatically checking proposed components against organizational security policies to prevent the inclusion of disallowed libraries or frameworks (see the sketch after the example below).
License Compliance Verification: Ensuring that all components comply with the project’s licensing requirements, avoiding potential legal issues.
Vendor and Third-Party Risk Management: Assessing the security posture of third-party components before they are integrated into the application.
Enhance Transparency and Collaboration: A well-documented SBOM provides a clear record of the software’s components and, more importantly, evidence that the project aligns with the goals of all of the stakeholders (engineering, security, legal, etc). This builds trust and creates a collaborative environment that increases the chances that each stakeholder’s desired outcome will be achieved.
Example:
A financial services company operating within a strict regulatory environment uses SBOMs during planning to ensure that all components meet compliance standards like PCI DSS. By doing so, they prevent the incorporation of insecure components that won’t pass PCI compliance. This reduces the risk of the financial penalties associated with security breaches and regulatory non-compliance.
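To make the policy-enforcement use-case above concrete, here is a minimal design-time check. Everything in it is a hypothetical stand-in: the disallow-lists and the proposed component list would come from your organization’s actual policy and Design SBOM.

```python
# Hypothetical organizational policy—replace with your real rules.
DISALLOWED_COMPONENTS = {("log4j-core", "2.14.1")}  # known-vulnerable pin
DISALLOWED_LICENSES = {"AGPL-3.0-only"}             # example policy only

# Proposed components from the Design SBOM (illustrative data).
proposed = [
    {"name": "log4j-core", "version": "2.14.1", "license": "Apache-2.0"},
    {"name": "requests", "version": "2.31.0", "license": "Apache-2.0"},
]

violations = []
for comp in proposed:
    if (comp["name"], comp["version"]) in DISALLOWED_COMPONENTS:
        violations.append(f"{comp['name']}@{comp['version']}: disallowed component")
    if comp["license"] in DISALLOWED_LICENSES:
        violations.append(f"{comp['name']}: disallowed license {comp['license']}")

for v in violations:
    print("POLICY VIOLATION:", v)
```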
Pro Tip
If your organization is still early in the maturity of its SBOM initiative, then we generally recommend moving the integration of design-time SBOMs to the back of the queue. As we mentioned at the beginning of this section, the information stored in a Design SBOM is naturally surfaced during the DevSecOps planning process; as long as that information is being recorded and stored, much of its value is already being captured. This level of SBOM integration is best saved for later maturity stages when your organization is ready to begin exploring deeper levels of insight that have a higher risk-to-reward ratio.
Alternatively, if your organization is having difficulty getting your teams to adopt a collaborative DevSecOps planning process, mandating an SBOM as a requirement can act as a forcing function to catalyze a cultural shift.
Source => Source SBOM
During the development stage, engineers implement the selected 3rd-party components into the codebase. CISA refers to SBOMs generated during this phase as Source SBOMs. The SBOMs generated here capture the actual implemented components and additional information that is specific to the developer who is doing the work.
The additional metadata that is collected for a Source SBOM includes:
Dependency Mapping: Documenting direct and transitive dependencies.
Identity Metadata: Adding contributor and commit information.
Developer Environment: Captures information about the development environment.
Unlike Design SBOMs which are typically done manually, these SBOMs can be generated programmatically with a software composition analysis (SCA) tool—like Syft. They are usually packaged as command line interfaces (CLIs) since this is the preferred interface for developers.
Pros and Cons
Pros:
Accurate and Timely Component Inventory: Reflects the actual components used in the codebase and tracks changes as the codebase is actively being developed.
Shift-Left Vulnerability Detection: Identifies vulnerabilities as components are integrated but requires commit level automation and feedback mechanisms to be effective.
Facilitates Collaboration and Visibility: Keeps all stakeholders informed about divergence from the original plan and provokes conversations as needed. This is also dependent on automation to record changes during development and the notification systems to broadcast the updates.
Example: A developer adds a new logging library to the project, such as an outdated version of Log4j. The SBOM, paired with a vulnerability scanner, immediately flags the Log4Shell vulnerability, prompting the engineer to update to a patched version.
Cons:
Noise from Developer Toolchains: Developer environments are often bespoke; recording development-only dependencies creates noise for security teams.
Potential Overhead: Continuous updates to the SBOM can be resource-intensive when done manually; the only resource efficient method is by using an SBOM generation tool that automates the process.
Possibility of Missing Early Risks: Issues not identified during planning may surface here, requiring code changes.
Use-Cases
Faster Root Cause Analysis: During service incident retrospectives, questions arise about where, when, and by whom a specific component was introduced into an application. Source SBOMs are the programmatic record that can provide answers and decrease manual root cause analysis.
Real-Time Security Alerts: Immediate notification of vulnerabilities upon adding new components, decreasing time to remediation and keeping security teams informed.
Automated Compliance Checks: Ensuring added components comply with security or license policies to manage compliance risk.
Effortless Collaboration: Stakeholders can subscribe to a live feed of changes and immediately know when implementation diverges from the plan.
Pro Tip
Some SBOM generators allow developers to specify development dependencies that should be ignored, similar to a .gitignore file. This can help cut down on the noise created by unique developer setups.
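If your generator lacks this feature, a post-processing step can approximate it. The sketch below filters a CycloneDX SBOM using illustrative glob patterns; the field names follow CycloneDX’s evidence/occurrences schema (1.5+), so adjust them to match your generator’s actual output:

```python
import fnmatch
import json

# Illustrative ignore patterns, analogous to a .gitignore file.
IGNORE_GLOBS = ["**/node_modules/.bin/*", "**/test/**", "**/.venv/**"]

with open("source-sbom.json") as f:
    sbom = json.load(f)

def is_dev_noise(component: dict) -> bool:
    # CycloneDX components may record where they were found under
    # evidence.occurrences[].location; fall back to no match if absent.
    locations = [
        occ.get("location", "")
        for occ in component.get("evidence", {}).get("occurrences", [])
    ]
    return any(
        fnmatch.fnmatch(loc, glob)
        for loc in locations
        for glob in IGNORE_GLOBS
    )

sbom["components"] = [
    c for c in sbom.get("components", []) if not is_dev_noise(c)
]

with open("source-sbom.filtered.json", "w") as f:
    json.dump(sbom, f, indent=2)
```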
Build & Test => Build SBOM
When a developer pushes a commit to the CI/CD build system, an automated process kicks off that converts the application source code into an artifact that can then be deployed. CISA refers to SBOMs generated during this phase as Build SBOMs. These SBOMs capture both source code dependencies and build tooling dependencies.
The additional metadata that is collected includes:
Build Dependencies: Build tooling such as the language compilers, testing frameworks, package managers, etc.
Binary Analysis Data: Metadata for compiled binaries that don’t utilize traditional container formats.
Configuration Parameters: Details on build configuration files that might impact security or compliance.
Pros and Cons
Pros:
Build Infrastructure Analysis: Captures build-specific components which may have their own vulnerability or compliance issues.
Reuses Existing Automation Tooling: Enables programmatic security and compliance scanning as well as policy enforcement without introducing any additional build tooling.
Integrates with Developer Workflows: Engineers receive security, compliance, and other feedback directly in their existing workflow without the need to reference a new tool.
Reproducibility: Facilitates reproducing builds for debugging and auditing.
Cons:
SBOM Sprawl: Build processes run frequently; if each run generates an SBOM, you will find yourself with a glut of files to manage.
Delayed Detection: Vulnerabilities or non-compliance issues found at this stage may require rework.
Use-Cases
SBOM Drift Detection: By comparing SBOMs from two or more stages, unexpected dependency injection can be detected. This might take the form of a benign, leaky build pipeline that requires manual workarounds or a malicious actor attempting to covertly introduce malware. Either way, this provides actionable and valuable information (see the sketch after the example below).
Policy Enforcement: Enables the creation of build breaking gates to enforce security or compliance. For high-stakes operating environments like defense, financial services or critical infrastructure, automating security and compliance at the expense of some developer friction is a net-positive strategy.
Automated Compliance Artifacts: Compliance requires proof in the form of reports and artifacts. Re-utilizing existing build tooling automation to automate this task significantly reduces the manual work required by security teams to meet compliance requirements.
Example: A security scan during testing uses the Build SBOM to identify a critical vulnerability and alerts the responsible engineer. The remediation process is initiated and a patch is applied before deployment.
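Returning to the drift-detection use-case above, a bare-bones comparison of two CycloneDX SBOMs, say a Build SBOM against an Analyzed SBOM, could look like this sketch (file names are illustrative):

```python
import json

def component_set(path: str) -> set:
    # Reduce a CycloneDX SBOM to a set of (name, version) pairs.
    with open(path) as f:
        sbom = json.load(f)
    return {
        (c.get("name", "?"), c.get("version", "?"))
        for c in sbom.get("components", [])
    }

build = component_set("build-sbom.json")
analyzed = component_set("analyzed-sbom.json")

# Anything added after the build, or missing from the registry image, is drift.
for name, version in sorted(analyzed - build):
    print(f"DRIFT (added after build): {name} {version}")
for name, version in sorted(build - analyzed):
    print(f"DRIFT (missing from registry image): {name} {version}")
```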
Pro Tip
If your organization is just beginning its SBOM journey, this is the recommended phase of the DevSecOps lifecycle to implement SBOMs first. The two primary cons of this phase are the easiest to mitigate. For SBOM sprawl, you can procure a turnkey SBOM management solution like Anchore SBOM.
As for the delay in feedback created by waiting until the build phase: if your team is utilizing DevOps best practices and breaking features up into smaller components that fit into 2-week sprints, then this tight scoping will limit the impact of any significant vulnerabilities or non-compliance discovered.
Intermission
So far we’ve covered the first half of the DevSecOps lifecycle. Next week we will publish the second part of this blog series where we’ll cover the remainder of the pipeline. Watch our socials to be sure you get notified when part 2 is published.
If you’re looking for some additional reading in the meantime, check out our container security white paper below.
Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.
Choosing the right SBOM (software bill of materials) generator is trickier than it looks at first glance. SBOMs are the foundation for a number of different uses ranging from software supply chain security to continuous regulatory compliance. Due to its cornerstone nature, the SBOM generator that you choose will either pave the way for achieving your organization’s goals or become a roadblock that delays critical initiatives.
But how do you navigate the crowded market of SBOM generation tools to find the one that aligns with your organization’s unique needs? It’s not merely about selecting a tool with the most features or the nicest CLI. It’s about identifying a solution that maps directly to your desired outcomes and use-cases, whether that’s rapid incident response, proactive vulnerability management, or compliance reporting.
We at Anchore have been helping organizations achieve their SBOM-related outcomes with the least amount of frustration and setbacks. We’ve compiled our learnings on choosing the right SBOM generation tool into a framework to help the wider community make decisions that set them up for success.
Below is a quick TL;DR of the high-level evaluation criteria that we cover in this blog post:
Understanding Your Use-Cases: Aligning the tool with your specific goals.
Ecosystem Compatibility: Ensuring support for your programming languages, operating systems, and build artifacts.
Data Accuracy: Evaluating the tool’s ability to provide comprehensive and precise SBOMs.
DevSecOps Integration: Assessing how well the tool fits into your existing DevSecOps tooling.
Proprietary vs. Open Source: Weighing the long-term implications of your choice.
By focusing on these key areas, you’ll be better equipped to select an SBOM generator that not only meets your current requirements but also positions your organization for future success.
Learn about the role that SBOMs play in the security of your organization in this white paper.
When choosing from the array of SBOM generation tools in the market, it is important to frame your decision with the outcome(s) that you are trying to achieve. If your goal is to improve the response time/mean time to remediation when the next Log4j-style incident occurs—and be sure that there will be a next time—an SBOM tool that excels at correctly identifying open source licenses in a code base won’t be the best solution for your use-case (even if you prefer its CLI ;-D).
What to Do:
Identify and prioritize the outcomes that your organization is attempting to achieve
Map the outcomes to the relevant SBOM use-cases
Review each SBOM generation tool to determine whether they are best suited to your use-cases
It can be tempting to prioritize an SBOM generator that is best suited to our preferences and workflows; we are the ones that will be using the tool regularly—shouldn’t we prioritize what makes our lives easier? If we prioritize our needs above the goal of the initiative we might end up putting ourselves into a position where our choice in tools impedes our ability to recognize the desired outcome. Using the correct framing, in this case by focusing on the use-cases, will keep us focused on delivering the best possible outcome.
SBOMs can be utilized for numerous purposes: security incident response, open source license compliance, proactive vulnerability management, compliance reporting, and software supply chain risk management. We won’t address every use-case/outcome in this blog post; a more comprehensive treatment of the potential SBOM use-cases can be found on our website.
Example SBOM Use-Cases:
Security incident response: an inventory of all applications and their dependencies that can be queried quickly and easily to identify whether a newly announced zero-day impacts the organization.
Proactive vulnerability management: all software and dependencies are scanned for vulnerabilities as part of the DevSecOps lifecycle and remediated based on organizational priority.
Regulatory compliance reporting: compliance artifacts and reports are automatically generated by the DevSecOps pipeline to enable continuous compliance and prevent manual compliance work.
Software supply chain risk management: an inventory of software components with identified vulnerabilities used to inform organizational decision making when deciding between remediating risk versus building new features.
Open source license compliance: an inventory of all software components and the associated OSS license to measure potential legal exposure.
Pro tip: While you will inevitably leave many SBOM use-cases out of scope for your current project, keeping secondary use-cases in the back of your mind while making a decision on the right SBOM tool will set you up for success when those secondary use-cases eventually become a priority in the future.
Does the SBOM generator support your organization’s ecosystem of programming languages, etc?
SBOM generators aren’t just tools to ingest data and re-format it into a standardized format. They are typically paired with a software composition analysis (SCA) tool that scans an application/software artifact for metadata that will populate the final SBOM.
Support for the complete array of programming languages, build artifacts and operating system ecosystems is essentially an impossible task. This means that support varies significantly depending on the SBOM generator that you select. An SBOM generator’s ability to help you reach your organizational goals is directly related to its support for your organization’s software tooling preferences. This will likely be one of the most important qualifications when choosing between different options and will rule out many that don’t meet the needs of your organization.
Considerations:
Programming Languages: Does the tool support all languages used by your team?
Operating Systems: Can it scan the different OS environments your applications run on top of?
Build Artifacts: Does the tool scan containers? Binaries? Source code repositories?
Frameworks and Libraries: Does it recognize the frameworks and libraries your applications depend on?
Data accuracy
This is one of the most important criteria when evaluating different SBOM tools. An SBOM generator may claim support for a particular programming language but after testing the scanner you may discover that it returns an SBOM with only direct dependencies—honestly not much better than a package.json or go.mod file that your build process spits out.
Two different tools might both generate a valid SPDX SBOM document when run against the same source artifact, but the content of those documents can vary greatly. This variation depends on what the tool can inspect, understand, and translate. Being able to fully scan an application for both direct and transitive dependencies, as well as navigate non-idiomatic patterns for how software can be structured, ends up being the true differentiator between the field of SBOM generation contenders.
Imagine using two SBOM tools on a Debian package. One tool recognizes Debian packages and includes detailed information about them in the SBOM. The other can’t fully parse the Debian .deb format and omits critical information. Both produce an SBOM, but only one provides the data you need to power use-case based outcomes like security incident response or proactive vulnerability management.
In this example, we first generate an SBOM using Syft, then run it through Grype—our vulnerability scanning tool. Syft + Grype uncover 4 critical vulnerabilities.
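The original runs were captured as terminal output; a minimal reproduction of the workflow, assuming Syft and Grype are installed and using an illustrative image name, looks like:

```python
import json
import subprocess

# 1. Generate an SBOM for the artifact with Syft (SPDX JSON output).
subprocess.run(
    ["syft", "debian:11", "-o", "spdx-json=sbom.spdx.json"],
    check=True,
)

# 2. Scan the SBOM with Grype and parse its JSON report.
scan = subprocess.run(
    ["grype", "sbom:sbom.spdx.json", "-o", "json"],
    capture_output=True, text=True, check=True,
)
matches = json.loads(scan.stdout).get("matches", [])

# 3. Count the critical findings surfaced by the SBOM.
criticals = [
    m for m in matches
    if m["vulnerability"].get("severity", "").lower() == "critical"
]
print(f"critical vulnerabilities: {len(criticals)}")
```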
Now let’s try the same thing but “simulate” an SBOM generator that can’t fully parse the structure of the software artifact in question:
In this case, none of the critical vulnerabilities found by the former tool are returned.
This highlights the importance of careful evaluation of the SBOM generator that you decide on. It could mean the difference between effective vulnerability risk management and a security incident.
Can the SBOM tool integrate into your DevSecOps pipeline?
If the SBOM generator is packaged as a self-contained binary with a command line interface (CLI) then it should tick this box. CI/CD build tools are most amenable to this deployment model. If the SBOM generation tool in question isn’t a CLI then it should at least run as a server with an API that can be called as part of the build process.
Integrating with an organization’s DevSecOps pipeline is key to enable a scalable SBOM generation process. By implementing SBOM creation directly into the existing build tooling, organizations can leverage existing automation tools to ensure consistency and efficiency which are necessary for achieving the desired outcomes.
Proprietary vs. open source SBOM generator?
Using an open source SBOM tool is considered an industry best practice because it guards against the risks associated with vendor lock-in. As a bonus, the ecosystem for open source SBOM generation tooling is very healthy. OSS will always have an advantage over proprietary tooling with regard to ecosystem coverage and data quality because it gets into the hands of more users, creating a feedback loop that closes gaps in coverage and quality.
Finally, even if your organization decides to utilize a software supply chain security product that has its own proprietary SBOM generator, it is still better to create your SBOMs with an open source SBOM generator, export them to a standardized format (e.g., SPDX or CycloneDX), then have your software supply chain security platform ingest these non-proprietary data structures. All platforms will be able to ingest SBOMs in one or both of these standards-based formats.
Wrap-Up
In a landscape where the next security/compliance/legal challenge is always just around the corner, equipping your team with the right SBOM generator empowers you to act swiftly and confidently. It’s an investment not just in a tool, but in the resilience and security of your entire software supply chain. By making a thoughtful, informed choice now, you’re laying the groundwork for a more secure and efficient future.
Save your developers time with Anchore Enterprise. Get instant access with a 15-day free trial.
The goal for any company that aims to provide software services to the Department of Defense (DoD) is an ATO (Authority to Operate). Without this stamp of approval your software will never get into the hands of the warfighters that need it most. STIG compliance is a necessary needle that must be threaded on the path to ATO. Luckily, MITRE has developed and open-sourced the Security Automation Framework (SAF) to smooth the often complex and time-consuming STIG compliance process.
We’ll get you up to speed on MITRE SAF and how it helps you achieve STIG compliance in this blog post, but before we jump straight into the content, be sure to bookmark our webinar with the Chief Architect of the MITRE Security Automation Framework (SAF), Aaron Lippold. Josh Bressers, VP of Security at Anchore, and Lippold provide a behind-the-scenes look at SAF and how it dramatically reduces the friction of the STIG compliance process.
What is the MITRE Security Automation Framework (SAF)?
The MITRE SAF is both a high-level cybersecurity framework and an umbrella that encompasses a set of security/compliance tools. It is designed to simplify STIG compliance by translating DISA (Defense Information Systems Agency) SRG (Security Requirements Guide) guidance into actionable steps.
By following the Security Automation Framework, organizations can streamline and automate the hardened configuration of their DevSecOps pipeline to achieve an ATO (Authority to Operate).
The SAF offers four primary benefits:
Accelerate Path to ATO: By streamlining STIG compliance, SAF enables organizations to get their applications into the hands of DoD operators faster. This acceleration is crucial for meeting operational demands without compromising on security standards.
Establish Security Requirements: SAF translates SRGs and STIGs into actionable steps tailored to an organization’s specific DevSecOps pipeline. This eliminates ambiguity and ensures security controls are implemented correctly.
Build Security In: The framework provides tooling that can be directly embedded into the software development pipeline. By automating STIG configurations and policy checks, it ensures that security measures are consistently applied, leaving no room for false steps.
Assess and Monitor Vulnerabilities: SAF offers visualization and analysis tools that assist organizations in making informed decisions about their current vulnerability inventory. It helps chart a path toward achieving STIG compliance and ultimately an ATO.
The overarching vision of the MITRE SAF is to “implement evolving security requirements while deploying apps at speed.” In essence, it allows organizations to have their cake and eat it too—gaining the benefits of accelerated software delivery without letting cybersecurity risks grow unchecked.
How does MITRE SAF work?
MITRE SAF is segmented into 5 capabilities that map to specific stages of the DevSecOps pipeline or STIG compliance process:
Plan
Harden
Validate
Normalize
Visualize
Let’s break down each of these capabilities.
Plan
There are hundreds of existing STIGs for products ranging from Microsoft Windows to Cisco routers to MySQL databases. On the off chance that a product your team wants to use doesn’t have a pre-existing STIG, SAF’s Vulcan tool helps translate the application SRG into a tailored STIG that can then be used to achieve compliance.
Vulcan helps streamline the process of creating STIG-ready security guidance and the accompanying InSpec automated policy that confirms a specific instance of software is configured in a compliant manner.
Vulcan does this by modeling the STIG intent form and tailoring the applicable SRG controls into a finished STIG for an application. The finished STIG is then sent to DISA for peer review and formal publishing as a STIG. Vulcan allows the author to develop both human-readable instructions and machine-readable InSpec automated validation code at the same time.
Harden
The hardening capability focuses on automating STIG compliance through the use of pre-built infrastructure configuration scripts. SAF hardening content allows organizations to:
Use their preferred configuration management tools: Chef Cookbooks, Ansible Playbooks, Terraform Modules, etc. are available as open source templates on MITRE’s GitHub page.
Share and collaborate: All hardening content is open source, encouraging community involvement and shared learning.
Coverage for the full development stack: Ensuring that every layer, from cloud infrastructure to applications, adheres to security standards.
Validate
The validation capability focuses on verifying the hardening meets the applicable STIG compliance standard. These validation checks are automated via the SAF CLI tool that incorporates the InSpec policies for a STIG. With SAF CLI, organizations can:
Automatically validate STIG compliance: By integrating SAF CLI directly into your CI/CD pipeline and invoking InSpec policy checks at every build, shifting security left by surfacing policy violations early.
Promote community collaboration: Like the hardening content, validation scripts are open source and accessible by the community for collaborative efforts.
Span the entire development stack: Validation—similar to hardening—isn’t limited to a single layer; it encompasses cloud infrastructure, platforms, operating systems, databases, web servers, and applications.
Incorporate manual attestation: To achieve comprehensive coverage of policy requirements that automated tools might not fully address.
Normalize
Normalization addresses the challenge of interoperability between different security tools and data formats. SAF CLI performs double-duty by taking on the normalization function as well as validation. It is able to:
Translate data into OHDF: The OASIS Heimdall Data Format (OHDF) is an open standard that structures countless proprietary security metadata formats into a single universal format.
Leverage open source OHDF libraries: Organizations can use OHDF converters as libraries within their custom applications.
Automate data conversion: By incorporating SAF CLI into the DevSecOps pipeline, data is automatically standardized with each run (see the sketch after this list).
Increased compliance efficiency: A single data format for all security data allows interoperability and facilitates efficient and automated STIG compliance.
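As a sketch of the conversion step referenced in this list, the SAF CLI ships converters for common scanners. The subcommand name and flags below are assumptions, so verify them with “saf convert --help” for your version:

```python
import subprocess

# Convert a proprietary Burp Suite export into OHDF JSON.
# Subcommand name and flags are assumptions—check your SAF CLI version.
subprocess.run(
    [
        "saf", "convert", "burpsuite2hdf",
        "-i", "burp_scan.xml",        # Burp Suite export (assumed file name)
        "-o", "burp_scan.ohdf.json",  # normalized OHDF output
    ],
    check=True,
)
```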
Example: Below is an example of Burp Suite’s proprietary data format normalized to the OHDF JSON format:
Visualize
Visualization is critical for understanding security posture and making informed decisions. SAF provides an open source, self-hosted visualization tool named Heimdall. It ingests OHDF normalized security data and provides the data analysis tools to enable organizations to:
Aggregate security and compliance results: Compiling data into comprehensive rollups, charts, and timelines for a holistic view of security and compliance status.
Perform deep dives: Allowing teams to explore detailed vulnerability information to facilitate investigation and remediation, ultimately speeding up time to STIG compliance.
Guide risk reduction efforts: Visualized insights help prioritize security and compliance tasks, reducing risk in the most efficient manner.
How is SAF related to a DoD Software Factory?
A DoD Software Factory is the common term for a DevSecOps pipeline that meets the definition laid out in DoD Enterprise DevSecOps Reference Design. All software that ultimately achieves an ATO has to be built on a fully implemented DoD Software Factory. You can either build your own or use a pre-existing DoD Software Factory like the US Air Force’s Platform One or the US Navy’s Black Pearl.
As we saw earlier, MITRE SAF is a framework meant to help you achieve STIG compliance and is a portion of your journey towards an ATO. STIG compliance applies to both the software that you write as well as the DevSecOps platform that your software is built on. Building your own DoD Software Factory means committing to going through the ATO process and STIG compliance for the DevSecOps platform first then a second time for the end-user application.
Wrap-Up
The MITRE SAF is a huge leg up for modern, cloud-native DevSecOps software vendors that are currently navigating the labyrinth towards ATO. By providing actionable guidance, automation tooling, and a community-driven approach, SAF dramatically reduces the time to ATO. It bridges the gap between the speed of DevOps software delivery and secure, compliant applications ready for critical DoD missions with national security implications.
Embracing SAF means more than just meeting regulatory requirements; it’s about building a resilient, efficient, and secure development pipeline that can adapt to evolving security demands. In an era where cybersecurity threats are evolving just as rapidly as software, leveraging frameworks like MITRE SAF is not just an efficient path to compliance—it’s essential for sustained success.
We are thrilled to announce the release of Anchore Enterprise 5.10, our tenth release of 2024. This update brings two major enhancements that will elevate your experience and bolster your security posture: the new Anchore Data Service (ADS) and expanded AnchoreCTL ecosystem coverage.
With ADS, we’ve built a fast and reliable solution that reduces time spent by DevSecOps teams debugging intermittent network issues from flaky services that are vital to software supply chain security.
It’s been a fall of big releases at Anchore and we’re excited to continue delivering value to our loyal customers. Read on to get all of the gory details >>
Announcing the Anchore Data Service
Previously, customers ran the Anchore Feed Service within their own environment to pull data feeds into their Anchore Enterprise deployment. To get an idea of what this looked like, see the architecture diagram of Anchore Enterprise prior to version 5.10:
Originally we did this to give customers more control over their environment. Unfortunately, this approach wasn’t without its issues. The data feeds are provided by the community, which means the services were designed to be accessible but cost-efficient; as a result, they were unreliable and frequently had accessibility issues.
We only have to stretch our memory back to the spring to recall an example that made national headlines. The National Vulnerability Database (NVD) ran into funding issues. This impacted both the processing of new vulnerability records AND the availability of their API, creating significant friction for Anchore customers—not to mention the entirety of the software industry.
Now, Anchore is running its own enterprise-grade service, named Anchore Data Service (ADS). It is a replacement for the former feed service. ADS aggregates all of the community data feeds, enriches the data (with proprietary threat data) and packages it for customers, all with the service availability guarantee expected of an enterprise service.
The new architecture with ADS as the intermediary is illustrated below:
As a bonus for our customers running air-gapped deployments of Anchore Enterprise, there is no more need to run a second deployment of Anchore Enterprise in a DMZ to pull down the data feeds. Instead, a single file is pulled from ADS and transferred to a USB thumb drive. From there, a single CLI command is run to update your air-gapped deployment of Anchore Enterprise.
Increased AnchoreCTL Ecosystem Coverage
We have increased the number of supported ecosystems (e.g., C++, Swift, Elixir, R, etc.) in Anchore Enterprise. This improves coverage and increases the likelihood that all of your organization’s applications can be scanned and protected by Anchore Enterprise.
More importantly, we have completely re-architected the process for how Anchore Enterprise supports new ecosystems. By integrating Syft—Anchore’s open source SBOM generation tool—directly into AnchoreCTL, Anchore’s customers will now get access to new ecosystem support as they are merged into Syft’s codebase.
Previously, Syft and AnchoreCTL were somewhat separate, which caused AnchoreCTL’s support for new ecosystems to lag Syft’s. Now, they are fully integrated. This enables all of Anchore’s enterprise and public sector customers to take full advantage of the open source community’s development velocity.
Full list of supported ecosystems
Below is a complete list of all supported ecosystems by both Syft and AnchoreCTL (as of Anchore Enterprise 5.10; see our docs for most current info):
Alpine (apk)
C (conan)
C++ (conan)
Dart (pubs)
Debian (dpkg)
Dotnet (deps.json)
Objective-C (cocoapods)
Elixir (mix)
Erlang (rebar3)
Go (go.mod, Go binaries)
Haskell (cabal, stack)
Java (jar, ear, war, par, sar, nar, native-image)
JavaScript (npm, yarn)
Jenkins Plugins (jpi, hpi)
Linux kernel archives (vmlinuz)
Linux kernel modules (ko)
Nix (outputs in /nix/store)
PHP (composer)
Python (wheel, egg, poetry, requirements.txt)
Red Hat (rpm)
Ruby (gem)
Rust (cargo.lock)
Swift (cocoapods, swift-package-manager)
WordPress plugins
After you update to Anchore Enterprise 5.10, the SBOM inventory will display all of the new ecosystems. Any SBOMs that have been generated for a particular ecosystem will show up at the top. The screenshot below gives you an idea of what this will look like:
Wrap-Up
Anchore Enterprise 5.10 marks a new chapter in providing reliable, enterprise-ready security tooling for modern software development. The introduction of the Anchore Data Service ensures that you have consistent and dependable access to critical vulnerability and exploit data, while the expanded ecosystem support means that no part of your tech stack is left unscrutinized for latent risk. Upgrade to the latest version and experience these new features for yourself.
To update and leverage these new features, check out our docs, reach out to your Customer Success Engineer, or contact our support team. Your feedback is invaluable to us, and we look forward to continuing to support your organization’s security needs. We are offering all of our product updates as a new quarterly product update webinar series. Watch the fall webinar update in the player below to get all of the juicy tidbits from our product team.
Save your developers time with Anchore Enterprise. Get instant access with a 15-day free trial.
In the rapidly modernizing landscape of cybersecurity compliance, evolving to a continuous compliance posture is more critical than ever—particularly for organizations involved with the Department of Defense (DoD) and other government agencies. At the heart of the DoD’s modern approach to software development is the DoD Enterprise DevSecOps Reference Design, commonly implemented as a DoD Software Factory.
A key component of this framework is adhering to the Security Technical Implementation Guides (STIGs) developed by the Defense Information Systems Agency (DISA). STIG compliance within the DevSecOps pipeline not only accelerates the delivery of secure software but also embeds robust security practices directly into the development process, safeguarding sensitive data and reinforcing national security.
This comprehensive guide will walk you through what STIGs are, who should care about them, the levels of STIG compliance, key categories of STIG requirements, how to prepare for the STIG compliance process, and the tools available to automate STIG implementation and maintenance.
The Defense Information Systems Agency (DISA) is the DoD agency responsible for delivering information technology (IT) support to ensure the security of U.S. national defense systems. To help organizations meet the DoD’s rigorous security controls, DISA develops Security Technical Implementation Guides (STIGs).
STIGs are configuration standards that provide prescriptive guidance on how to secure operating systems, network devices, software, and other IT systems. They serve as a secure configuration standard to harden systems against cyber threats.
For example, a STIG for the open source Apache web server would specify that encryption is enabled for all traffic (incoming or outgoing). This would require the generation of SSL/TLS certificates on the server in the correct location, updating the server’s configuration file to reference this certificate and re-configuration of the server to serve traffic from a secure port rather than the default insecure port.
Who should care about STIG compliance?
STIG compliance is mandatory for any organization that operates within the DoD network or handles DoD information. This includes:
DoD Contractors and Vendors: Companies providing products or services to the DoD—a.k.a. the defense industrial base (DIB)—must ensure their systems comply with STIG requirements.
Government Agencies: Federal agencies interfacing with the DoD need to adhere to applicable STIGs.
DoD Information Technology Teams: IT professionals within the DoD responsible for system security must implement STIGs.
Connection to the RMF and NIST SP 800-53
The Risk Management Framework (RMF)—more formally NIST 800-37—is a framework that integrates security and risk management into IT systems as they are being developed. The STIG compliance process outlined below is directly integrated into the higher-level RMF process. As you follow the RMF, the individual steps of STIG compliance will be completed in turn.
STIGs are also closely connected to the NIST 800-53, colloquially known as the “Control Catalog”. NIST 800-53 outlines security and privacy controls for all federal information systems; the controls are not prescriptive about the implementation, only the best practices and outcomes that need to be achieved.
As DISA developed the STIG compliance standard, they started with the NIST 800-53 controls and “tailored” them to meet the needs of the DoD; these customized security best practices are known as Security Requirements Guides (SRGs). To remove all ambiguity around how to meet these higher-level best practices, STIGs were created with implementation-specific instructions.
For example, an SRG will mandate that all systems utilize a cybersecurity best practice, such as, role-based access control (RBAC) to prevent users without the correct privileges from accessing certain systems. A STIG, on the other hand, will detail exactly how to configure an RBAC system to meet the highest security standards.
Levels of STIG Compliance
The DISA STIG compliance standard uses Severity Category Codes to classify vulnerabilities based on their potential impact on system security. These codes help organizations prioritize remediation efforts. The three Severity Category Codes are:
Category I (Cat I): These are the highest risk vulnerabilities, allowing an attacker immediate access to a system or network or allowing superuser access. Due to their high-risk nature, these vulnerabilities must be addressed immediately.
Category II (Cat II): These vulnerabilities provide information with a high potential of giving access to intruders. These findings are considered a medium risk and should be remediated promptly.
Category III (Cat III): These vulnerabilities constitute the lowest risk, providing information that could potentially lead to compromise. Although not as pressing as Cat I and Cat II issues, it is still important to address these vulnerabilities to minimize risk and enhance overall security.
Understanding these categories is crucial in the STIG process, as they guide organizations in prioritizing remediation of vulnerabilities.
Key categories of STIG requirements
Given the extensive range of technologies used in DoD environments, there are hundreds of STIGs applicable to different systems, devices, applications, and more. While we won’t list all STIG requirements here, it’s important to understand the key categories and who they apply to.
1. Operating System STIGs
Applies to: System Administrators and IT Teams managing servers and workstations
Examples:
Microsoft Windows STIGs: Provides guidelines for securing Windows operating systems.
Linux STIGs: Offers secure configuration requirements for various Linux distributions.
2. Network Device STIGs
Applies to: Network Engineers and Administrators
Examples:
Network Router STIGs: Outlines security configurations for routers to protect network traffic.
Network Firewall STIGs: Details how to secure firewall settings to control access to networks.
3. Application STIGs
Applies to: Software Developers and Application Managers
Examples:
Generic Application Security STIG: Outlines the security best practices an application must follow to be STIG compliant.
Web Server STIG: Provides security requirements for web servers.
Database STIG: Specifies how to secure database management systems (DBMS).
4. Mobile Device STIGs
Applies to: Mobile Device Administrators and Security Teams
Examples:
Apple iOS STIG: Provides guidance for securing Apple mobile devices used within the DoD.
Android OS STIG: Details security configurations for Android devices.
5. Cloud Computing STIGs
Applies to: Cloud Service Providers and Cloud Infrastructure Teams
Examples:
Microsoft Azure SQL Database STIG: Offers security requirements for Azure SQL Database cloud service.
Cloud Computing OS STIG: Details secure configurations for any operating system offered by a cloud provider that doesn’t have a specific STIG.
Each category addresses specific technologies and includes a STIG checklist to ensure all necessary configurations are applied.
How to achieve STIG compliance
Achieving DISA STIG compliance involves a structured approach. Here are the stages of the STIG process, along with tips to prepare:
Stage 1: Identifying Applicable STIGs
With hundreds of STIGs relevant to different organizations and technology stacks, this step should not be underestimated. First, conduct an inventory of all systems, devices, applications, and technologies in use. Then, review the complete list of STIGs to match each to your inventory to ensure that all critical areas requiring secure configuration are addressed. This step is essential to avoiding gaps in compliance.
Tip: Use automated tools to scan your environment, then match assets to relevant STIGs, as in the sketch below.
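One lightweight way to implement that tip is a lookup table that maps inventoried assets to candidate STIGs. The sketch below assumes a hand-maintained mapping with made-up entries; in practice the catalog would be built from DISA’s published STIG list.

```python
# Illustrative asset-to-STIG matching; catalog entries are examples only.
STIG_CATALOG = {
    "windows-server-2022": "Microsoft Windows Server 2022 STIG",
    "rhel-9": "Red Hat Enterprise Linux 9 STIG",
    "postgresql": "PostgreSQL STIG",
}

inventory = ["rhel-9", "postgresql", "legacy-app"]  # assumed scanner output

for asset in inventory:
    stig = STIG_CATALOG.get(asset)
    if stig:
        print(f"{asset}: apply '{stig}'")
    else:
        print(f"{asset}: no direct STIG match -- review the applicable SRGs")
```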
Stage 2: Implementation
After you’ve mapped your technology to the corresponding STIGs, the process of implementing the security configurations outlined in the guides begins. This step may require collaboration between IT, security, and development teams to ensure that the configurations are compatible with the organization’s infrastructure while enforcing strict security standards. Be sure to keep detailed records of changes made.
Tip: Prioritize fixes for Cat I vulnerabilities first, followed by Cat II and Cat III. Depending on the urgency and needs of the mission, an Authorization to Operate (ATO) can still be granted with partial STIG compliance, and prioritizing efforts this way increases the chances that partial compliance is permitted.
Stage 3: Auditing & Maintenance
After the STIGs have been implemented, regular auditing and maintenance are critical to ensure ongoing compliance, verifying that no deviations have occurred over time due to system updates, patches, or other changes. This stage includes periodic scans, manual reviews, and remediation of any identified gaps. Additionally, organizations should develop a plan to stay informed about new STIG releases and updates from DISA.
Tip: Establish a maintenance schedule and assign responsibilities to team members. Alternatively, adopt a policy-as-code approach to continuous compliance: by embedding STIG requirements as code directly into your DevSecOps pipeline, you can automate this process.
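As a sketch of what “STIG compliance as code” can look like in a pipeline, the snippet below reads a scan-results file and fails the CI job while any Cat I finding remains open. The JSON layout, file name, and severity threshold are assumptions for illustration, not the output format of any specific tool.

```python
# Minimal policy-as-code gate over assumed scan results (illustrative only).
import json
import sys

def gate(results_path: str, blocking_severity: str = "CAT I") -> int:
    """Return a nonzero exit code if any blocking finding is still open."""
    with open(results_path) as fh:
        findings = json.load(fh)  # assumed: list of {"id", "severity", "status"}
    open_blockers = [
        f for f in findings
        if f["severity"] == blocking_severity and f["status"] != "closed"
    ]
    for f in open_blockers:
        print(f'BLOCKING: {f["id"]} ({f["severity"]}) is still open')
    return 1 if open_blockers else 0  # a nonzero exit fails the pipeline stage

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "stig-results.json"))
```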
General Preparation Tips
Training: Ensure your team is familiar with STIG requirements and the compliance process.
Collaboration: Work cross-functionally with all relevant departments, including IT, security, and compliance teams.
Resource Allocation: Dedicate sufficient resources, including time and personnel, to the compliance effort.
Continuous Improvement: Treat STIG compliance as an ongoing process rather than a one-time project.
Tools to automate STIG implementation and maintenance
Automation can significantly streamline the STIG compliance process. Here are some tools that can help:
1. Anchore STIG (Static and Runtime)
Purpose: Automates the process of checking container images against STIG requirements.
Benefits:
Simplifies compliance for containerized applications.
Integrates into CI/CD pipelines for continuous compliance.
Use Case: Ideal for DevSecOps teams utilizing containers in their deployments.
2. Anchore Secure
Purpose: Identifies vulnerabilities and compliance issues within your network.
Benefits:
Provides actionable insights to remediate security gaps.
Offers continuous monitoring capabilities.
Use Case: Critical for security teams focused on proactive risk management.
Wrap-Up
Achieving DISA STIG compliance is mandatory for organizations working with the DoD. By understanding what STIGs are, who they apply to, and how to navigate the compliance process, your organization can meet the stringent compliance requirements set forth by DISA. As a bonus, you will enhance your security posture and reduce the potential for a security breach.
Remember, compliance is not a one-time event but an ongoing effort that requires regular updates, audits, and maintenance. Leveraging automation tools like Anchore STIG and Anchore Secure can significantly ease this burden, allowing your team to focus on strategic initiatives rather than manual compliance tasks.
Stay proactive, keep your team informed, and make use of the resources available to ensure that your IT systems remain secure and compliant.
For Black Pearl, the premier DevSecOps platform for the U.S. Navy, and Sigma Defense, a leading DoD technology contractor, the challenge was not just to meet stringent security requirements but to empower the warfighter.
Challenge: Navigating Complex Security and Compliance Requirements
Black Pearl and Sigma Defense faced several critical hurdles in meeting the stringent security and compliance standards of the DoD Enterprise DevSecOps Reference Design:
Achieving RMF Security and Compliance: Black Pearl needed to secure its own platform and help its customers achieve ATO under the Risk Management Framework (RMF). This involved meeting stringent security controls like RA-5 (Vulnerability Management), SI-3 (Malware Protection), and IA-5 (Credential Management) for both the platform and the applications built on it.
Maintaining Continuous Compliance: With the RAISE 2.0 memo emphasizing continuous ATO compliance, manual processes were no longer sufficient. The teams needed to automate compliance tasks to avoid the time-consuming procedures traditionally associated with maintaining ATO status.
Managing Open-Source Software (OSS) Risks: Open-source components are integral to modern software development but come with inherent risks. Black Pearl had to manage OSS risks for both its platform and its customers’ applications, ensuring vulnerabilities didn’t compromise security or compliance.
Vulnerability Overload for Developers: Developers often face an overwhelming number of vulnerabilities, many of which may not pose significant risks. Prioritizing actionable items without draining resources or slowing down development was a significant challenge.
“By using Anchore and the Black Pearl platform, applications inherit 80% of the RMF’s security controls. You can avoid all of the boring stuff and just get down to what everyone does well, which is write code.”
— Christopher Rennie, Product Lead/Solutions Architect
Solution: Automating Compliance and Security with Anchore
To address these challenges, Black Pearl and Sigma Defense implemented Anchore, which provided:
Policy Packs to Meet RMF Security Controls: Black Pearl used Anchore Enterprise’s DoD policy pack to identify, evaluate, prioritize, enforce, and report on security controls necessary for RMF compliance. This ensured that both the platform and customer applications met all required standards.
“Working alongside Anchore, we have customized the compliance artifacts that come from the Anchore API to look exactly how the AOs are expecting them to. This has created a good foundation for us to start building the POA&Ms that they’re expecting.”
— Josiah Ritchie, DevSecOps Staff Engineer
Managing OSS Risks with Continuous Monitoring: Anchore’s integrated vulnerability scanner, policy enforcer, and reporting system provided continuous monitoring of open-source software components. This proactive approach ensured vulnerabilities were detected and addressed promptly, effectively mitigating security risks.
Automated Prioritization of Vulnerabilities: By integrating the Anchore Developer Bundle, Black Pearl enabled automatic prioritization of actionable vulnerabilities. Developers received immediate alerts on critical issues, reducing noise and allowing them to focus on what truly matters.
Results: Accelerated ATO and Enhanced Security
The implementation of Anchore transformed Black Pearl’s compliance process and security posture:
Platform ATO in 3-5 days: With Anchore’s integration, Black Pearl users accessed a fully operational DevSecOps platform within days, a significant reduction from the typical six months for DIY builds.
“The DoD has four different layers of authorizing officials in order to achieve ATO. You have to figure out how to make all of them happy. We want to innovate by automating the compliance process. Anchore helps us achieve this, so that we can build a full ATO package in an afternoon rather than taking a month or more.”
— Josiah Ritchie, DevSecOps Staff Engineer
Significantly reduced time spent on compliance reporting: Anchore automated compliance checks and artifact generation, cutting down hours spent on manual reviews and ensuring consistency in reports submitted to authorizing officials.
Proactive OSS risk management: By shifting security and compliance to the left, developers identified and remediated open-source vulnerabilities early in the development lifecycle, mitigating risks and streamlining the compliance process.
Reduced vulnerability overload with prioritized vulnerability reporting: Anchore’s prioritization of vulnerabilities prevented developer overwhelm, allowing teams to focus on critical issues without hindering development speed.
Conclusion: Empowering the Warfighter Through Efficient Compliance and Security
Black Pearl and Sigma Defense’s partnership with Anchore demonstrates how automating security and compliance processes leads to significant efficiencies. This empowers Navy divisions to focus on developing software that supports the warfighter.
Achieving ATO in days rather than months is a game-changer in an environment where every second counts, setting a new standard for efficiency through the combination of Black Pearl’s robust DevSecOps platform and Anchore’s comprehensive security solutions.
If you’re facing similar challenges in securing your software supply chain and accelerating compliance, it’s time to explore how Anchore can help your organization achieve its mission-critical objectives.
As the popularity of APIs has swept the software industry, API security has become paramount, especially for organizations in highly regulated industries. DreamFactory, an API generation platform serving the defense industry and critical national infrastructure, required an air-gapped vulnerability scanning and management solution that didn’t slow down their productivity. Avoiding security breaches and compliance failures is non-negotiable for the team to maintain customer trust.
Challenge: Security Across the Gap
DreamFactory encountered several critical hurdles in meeting the needs of its high-profile clients, particularly those in the defense community and other highly regulated sectors:
Secure deployments without cloud connectivity: Many clients, including the Department of Defense (DoD), required on-premises deployments with air-gapping, breaking the assumptions of modern cloud-based security strategies.
Air-gapped vulnerability scans: Despite air-gapping, these organizations still demanded comprehensive vulnerability reporting to protect their sensitive data.
Building high-trust partnerships: In industries where security breaches could have catastrophic consequences, establishing trust rapidly was crucial.
As Terence Bennett, CEO of DreamFactory, explains, “The data processed by these organizations have the highest national security implications. We needed a solution that could deliver bulletproof security without cloud connectivity.”
Solution: Anchore Enterprise On-Prem and Air-Gapped
To address these challenges, DreamFactory implemented Anchore Enterprise, which provided:
Comprehensive vulnerability scanning: DreamFactory integrated Anchore Enterprise into its build pipeline, running daily vulnerability scans on all deployment versions.
Automated SBOM generation and management: Every build is now cataloged and stored as an SBOM, providing immediate transparency into the software’s components (a minimal sketch of this pattern appears after the quote below).
“By catching vulnerabilities in our build pipeline, we can inform our customers and prevent any of the APIs created by a DreamFactory install from being leveraged to exploit our customer’s network,” Bennett notes. “Anchore has helped us achieve this massive value-add for our customers.”
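As a rough sketch of that per-build SBOM cataloging pattern (not DreamFactory’s actual pipeline), the snippet below shells out to Syft, Anchore’s open-source SBOM generator, to produce a CycloneDX SBOM for a container image and file it under a build ID. The image name, build ID, and directory layout are illustrative assumptions; `-o cyclonedx-json` is one of Syft’s documented output formats.

```python
# Sketch: generate and catalog an SBOM per build with Syft (assumed on PATH).
import subprocess
from pathlib import Path

def catalog_sbom(image: str, build_id: str, catalog_dir: str = "sbom-catalog") -> Path:
    """Generate a CycloneDX SBOM for `image` and store it keyed by build ID."""
    out_dir = Path(catalog_dir)
    out_dir.mkdir(exist_ok=True)
    sbom_path = out_dir / f"{build_id}.cdx.json"
    result = subprocess.run(  # "-o cyclonedx-json" is a documented Syft format
        ["syft", image, "-o", "cyclonedx-json"],
        capture_output=True, text=True, check=True,
    )
    sbom_path.write_text(result.stdout)
    return sbom_path

if __name__ == "__main__":
    # Hypothetical image and build ID for demonstration.
    print(catalog_sbom("registry.example.com/api:build-1234", "build-1234"))
```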
Results: Developer Time Savings and Enhanced Trust
The implementation of Anchore Enterprise transformed DreamFactory’s security posture and business operations:
75% reduction in time spent on vulnerability management and compliance requirements
70% faster production deployments with integrated security checks
Rapid trust development through transparency
“We’re seeing a lot of traction with data warehousing use-cases,” says Bennett. “Being able to bring an SBOM to the conversation at the very beginning completely changes the conversation and allows CISOs to say, ‘let’s give this a go’.”
Conclusion: A Competitive Edge in High-Stakes Environments
By leveraging Anchore Enterprise, DreamFactory has positioned itself as a trusted partner for organizations requiring the highest levels of security and compliance in their API generation solutions. In an era where API security is more critical than ever, DreamFactory’s success story demonstrates that with the right tools and approach, it’s possible to achieve both ironclad security and operational efficiency.
Are you facing similar challenges hardening your software supply chain in order to meet the requirements of the DoD? By designing your DevSecOps pipeline to the DoD software factory standard, your organization can meet these stringent security and compliance requirements. Learn more about the DoD software factory standard by downloading our white paper below.