Software Supply Chain Security in 2025: SBOMs Take Center Stage

In recent years, we’ve witnessed software supply chain security transition from a quiet corner of cybersecurity into a primary battlefield. The driver is the increasing complexity of modern software: applications are built as towers of components of often unknown origin, and that hidden complexity obscures what is actually running. Cybercriminals have fully embraced this hidden complexity as a ripe vector to exploit.

Software Bills of Materials (SBOMs) have emerged as the focal point for achieving visibility and accountability in a software ecosystem that will only grow more complex. An SBOM is an inventory of the complex web of dependencies that make up a modern application. SBOMs help organizations scale vulnerability management and automate compliance enforcement. The end goal is transparency into an organization’s supply chain, where open source software (OSS) dependencies typically account for 70-90% of a modern application. This significant source of risk demands a proactive, data-driven solution.

Looking ahead to 2025, we at Anchore see two trends for SBOMs that foreshadow their growing importance in software supply chain security:

  1. Global regulatory bodies continue to steadily drive SBOM adoption
  2. Foundational software ecosystems begin to implement build-native SBOM support

In this blog, we’ll walk you through the contextual landscape that leads us to these conclusions; keep reading if you want more background.

Global Regulatory Bodies Continue to Drive SBOM Adoption

As supply chain attacks surged, policymakers and standards bodies recognized this new attack vector as a critical threat with national security implications. To stem the rising tide of supply chain threats, global regulatory bodies have identified SBOMs as part of the solution.

Over the past decade, we’ve witnessed a global legislative and regulatory awakening to the utility of SBOMs. Early attempts like the US Cyber Supply Chain Management and Transparency Act of 2014 may have failed to pass, but they paved the way for more significant milestones to come. Things began to change in 2021, when the US Executive Order (EO) 14028 explicitly named SBOMs as the foundation for a secure software supply chain. The following year the European Union’s Cyber Resilience Act (CRA) pushed SBOMs from “suggested best practice” to “expected norm.”

The one-two punch of the US’s EO 14028 and the EU’s CRA has already prompted action among regulators worldwide. In the years following these mandates, numerous global bodies issued or updated their guidance on software supply chain security practices—specifically highlighting SBOMs. Cybersecurity offices in Germany, India, Britain, Australia, and Canada, along with the broader European Union Agency for Cybersecurity (ENISA), have each underscored the importance of transparent software component inventories. At the same time, industry consortiums in the US automotive (Auto-ISAC) and medical device (IMDRF) sectors recognized that SBOMs can help safeguard their own complex supply chains, as have federal agencies such as the FDA, NSA, and the Department of Commerce.

By the close of 2024, the pressure mounted further. In the US, the Office of Management and Budget (OMB) set a due date requiring all federal agencies to comply with the Secure Software Development Framework (SSDF), effectively reinforcing SBOM usage as part of secure software development. Meanwhile, across the Atlantic, the EU CRA officially became law, cementing SBOMs as a cornerstone of modern software security. This constant pressure ensures that SBOM adoption will only continue to grow. It won’t be long until SBOMs become table stakes for anyone operating an online business. We expect the steady march forward of SBOMs to continue in 2025.

In fact, this regulatory push has been noticed by the foundational ecosystems of the software industry and they are reacting accordingly.

Software Ecosystems Trial Build-Native SBOM Support

Until now, SBOM generation has been relegated to an afterthought in software ecosystems. Businesses scan their internal supply chains with software composition analysis (SCA) tools, trying to piece together a picture of their dependencies. But as SBOM adoption continues its upward momentum, this model is evolving. In 2025, we expect leading software ecosystems to promote SBOMs to first-class citizens and integrate them natively into their build tools.

Industry experts have recently begun advocating for this change. Brandon Lum, the SBOM Lead at Google, notes, “The software industry needs to improve build tools propagating software metadata.” Rather than forcing downstream consumers to infer the software’s composition after the fact, producers will generate SBOMs as part of standard build pipelines. This approach reduces friction, makes application composition discoverable, and ensures that software supply chain security is not left behind.

We are already seeing early examples:

  • Linux Ecosystem (Yocto): The Yocto Project’s OpenEmbedded build system now includes native SBOM generation. This demonstrates the feasibility of integrating SBOM creation directly into the developer toolchain, establishing a blueprint for other ecosystems to follow.
  • Python Ecosystem: In 2024, Python maintainers explored proposals for build-native SBOM support, motivated by requirements such as the Secure Software Development Framework (SSDF) and the EU’s CRA. They envision a future where projects, package maintainers, and contributors can easily annotate their code with software dependency metadata that automatically propagates at build time.
  • Perl Ecosystem: The Perl Security Working Group has also begun exploring internal proposals for SBOM generation, again driven by the CRA’s regulatory changes. Their goal: ensure that Perl packages have SBOM data baked into the ecosystem so that compliance and security requirements can be met with less effort.
  • Java Ecosystem: The Eclipse Foundation and VMware’s Spring Boot team have introduced plug-ins for Java build tools like Maven and Gradle that streamline SBOM generation; a hedged example follows this list. While not fully native to the compiler or interpreter, these integrations lower the barrier to SBOM adoption within mainstream Java development workflows.
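
To make the Java example concrete, the commands below sketch how an SBOM can be emitted as part of a standard build using the CycloneDX plug-ins mentioned above. Treat the plug-in coordinates, goal, and task names as assumptions that may vary by plug-in version; consult the plug-in documentation for your setup.

# Maven: generate a CycloneDX SBOM during the build (goal name assumed)
$ mvn org.cyclonedx:cyclonedx-maven-plugin:makeAggregateBom

# Gradle: generate a CycloneDX SBOM (assumes the cyclonedx-gradle-plugin is applied; task name assumed)
$ ./gradlew cyclonedxBom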

In 2025 we won’t just be talking about build-native SBOMs in abstract terms—we’ll have experimental support for them from the most forward-thinking ecosystems. This shift is still in its infancy, but it foreshadows the central role that SBOMs will play in the future of cybersecurity and software development as a whole.

Closing Remarks

The writing on the wall is clear: supply chain attacks aren’t slowing down—they are accelerating. In a world of complex, interconnected dependencies, every organization must know what’s inside its software to quickly spot and respond to risk. As SBOMs move from a nice-to-have to a fundamental part of building secure software, teams can finally gain the transparency they need over every component they use, whether open source or proprietary. This clarity is what will help them respond to the next Log4j or XZ Utils issue before it spreads, putting security teams back in the driver’s seat and ensuring that software innovation doesn’t come at the cost of increased vulnerability.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

All Things SBOM in 2025: a Weekly Webinar Series

Software Bills of Materials (SBOMs) have quickly become a critical component in modern software supply chain security. By offering a transparent view of all the components that make up your applications, SBOMs enable you to pinpoint vulnerabilities before they escalate into costly incidents.

As we enter 2025, software supply chain security and risk management for 3rd-party software dependencies are top of mind for organizations. The 2024 Anchore Software Supply Chain Security Survey notes that 76% of organizations consider these challenges top priorities. Given this, it is easy to see why understanding what SBOMs are—and how to implement them—is key to a secure software supply chain.

To help organizations achieve these top priorities Anchore is hosting a weekly webinar series dedicated entirely to SBOMs. Beginning January 14 and running throughout Q1, our webinar line-up will explore a wide range of topics (see below). Industry luminaries like Kate Stewart (co-founder of the SPDX project) and Steve Springett (Chair of the OWASP CycloneDX Core Working Group) will be dropping in to provide unique insights and their special blend of expertise on all things SBOMs.

The series will cover:

  • SBOM basics and best practices
  • SPDX and SBOMs in-depth with Kate Stewart
  • Getting started: How to generate an SBOM
  • Software supply chain security and CycloneDX with Steve Springett
  • Scaling SBOMs for the enterprise
  • Real-world insights on applying SBOMs in high-stakes or regulated sectors
  • A look ahead at the future of SBOMs and software supply chain security with Kate Stewart
  • And more!

We invite you to learn from experts, gain practical skills, and stay ahead of the rapidly evolving world of software supply chain security. Visit our events page to register for the webinars now or keep reading to get a sneak peek at the content.

#1 Understanding SBOMs: An Intro to Modern Development

Date/Time: Tuesday, January 14, 2025 – 10am PST / 1pm EST
Featuring: 

  • Lead Developer of Syft
  • Anchore VP of Security
  • Anchore Director of Developer Relations

We are kicking off the series with an introduction to the essentials of SBOMs. This session will cover the basics of SBOMs—what they are, why they matter, and how to get started generating and managing them. Our experts will walk you through real-world examples (including Log4j) to show just how vital it is to know what’s in your software.

This webinar is perfect for both technical practitioners and business leaders looking to establish a strong SBOM foundation.

#2 Understanding SBOMs: Deep Dive with Kate Stewart

Date/Time: Wednesday, January 22, 2025 – 10am PST / 1pm EST
Featured Guest: Kate Stewart (co-founder of SPDX)

Our second session brings you a front-row seat to an in-depth conversation with Kate Stewart, co-founder of the SPDX project. Kate is a leading voice in software supply chain security and the SBOM standard. From the origins of the SPDX standard to the latest challenges in license compliance, Kate will provide an extensive behind-the-scenes look into the world of SBOMs.

Key Topics:

  • The history and evolution of SBOMs, including the creation of SPDX
  • Balancing license compliance with security requirements
  • How SBOMs support critical infrastructure with national security concerns
  • The impact of emerging technology—like open source LLMs—on SBOM generation and analysis

If you’re ready for a more advanced look at SBOMs and their strategic impact, you won’t want to miss this conversation.

#3 How to Automate, Generate, and Manage SBOMs

Date/Time: Wednesday, January 29, 2025 – 12pm EST / 9am PST
Featuring: 

  • Anchore Director of Developer Relations
  • Anchore Principal Solutions Engineer

For those seeking a hands-on approach, this webinar dives into the specifics of automating SBOM generation and management within your CI/CD pipeline. Anchore’s very own Alan Pope (Developer Relations) and Sean Fazenbaker (Solutions) will walk you through proven techniques for integrating SBOMs to reveal early vulnerabilities, minimize manual interventions, and improve overall security posture.

This is the perfect session for teams focused on shifting security left and preserving developer velocity.

What’s Next?

Beyond our January line-up, we have more exciting sessions planned throughout Q1. Each webinar will feature industry experts and dive deeper into specialized use-cases and future technologies:

  • CycloneDX & OWASP with Steve Springett – A closer look at this popular SBOM format, its technical architecture, and VEX integration.
  • SBOM at Scale: Enterprise SBOM Management – Learn from large organizations that have successfully rolled out SBOM practices across hundreds of applications.
  • SBOMs in High-Stakes Environments – Explore how regulated industries like healthcare, finance, and government handle unique compliance challenges and risk management.
  • The Future of Software Supply Chain Security – Join us in March as we look ahead at emerging standards, tools, and best practices with Kate Stewart returning as the featured guest.

Stay tuned for dates and registration details for each upcoming session. Follow us on your favorite social network (Twitter, LinkedIn, Bluesky) or visit our events page to stay up-to-date.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

The Top Ten List: The 2024 Anchore Blog

To close out 2024, we’re going to count down the top 10 hottest hits from the Anchore blog in 2024! The Anchore content team continued our tradition of delivering expert guidance, practical insights, and forward-looking strategies on DevSecOps, cybersecurity compliance, and software supply chain management.

This top ten list spotlights our most impactful blog posts from the year, each tackling a different angle of modern software development and security. Hot topics this year include: 

  • All things SBOM (software bill of materials)
  • DevSecOps compliance for the Department of Defense (DoD)
  • Regulatory compliance for federal government vendors (e.g., FedRAMP & SSDF Attestation)
  • Vulnerability scanning and management—a perennial favorite!

Our selection runs the gamut of knowledge needed to help your team stay ahead of the compliance curve, boost DevSecOps efficiency, and fully embrace SBOMs. So, grab your popcorn and settle in—it’s time to count down the blog posts that made the biggest splash this year!

The Top Ten List

10 | A Guide to Air Gapping

Kicking us off at number 10 is a blog that’s all about staying off the grid—literally. Ever wonder what it really means to keep your network totally offline? 

A Guide to Air Gapping: Balancing Security and Efficiency in Classified Environments breaks down the concept of “air gapping”—literally disconnecting a computer from the internet, leaving a “gap of air” between the machine and any network cable. It is a security practice generally used to protect classified, military-grade data and similarly sensitive systems.

Our blog covers the perks, like knocking out a huge range of cyber threats, and the downsides, like having to manually update and transfer data. It also details how Anchore Enforce Federal Edition can slip right into these ultra-secure setups, blending top-notch protection with the convenience of automated, cloud-native software checks.

9 | SBOMs + Vulnerability Management == Open Source Security++

Coming in at number nine on our countdown is a blog that breaks down two of our favorite topics, SBOMs and vulnerability scanners, and how using SBOMs as your foundation for vulnerability management can level up your open source security game.

SBOMs and Vulnerability Management: OSS Security in the DevSecOps Era is all about getting a handle on: 

  • every dependency in your code, 
  • scanning for issues early and often, and 
  • speeding up the DevSecOps process so you don’t feel the drag of legacy security tools. 

By switching to this modern, SBOM-driven approach, you’ll see benefits like faster fixes, smoother compliance checks, and fewer late-stage security surprises—just ask companies like NVIDIA, Infoblox, DreamFactory and ModuleQ, who’ve saved tons of time and hassle by adopting these practices.

8 | Improving Syft’s Binary Detection

Landing at number eight, we’ve got a blog post that’s basically a backstage pass to Syft’s binary detection magic. Improving Syft’s Binary Detection goes deep on how Syft—Anchore’s open source SBOM generation tool—uncovers the details of executable files and how you can lend a hand in making it even better.

We walk you through the process of adding new binary detection rules, from finding the right binaries and testing them out, to fine-tuning the patterns that match their version strings. 

The end goal? Helping all open source contributors quickly get started making their first pull request and broadening support for new ecosystems. A thriving, community-driven approach to better securing the global open source ecosystem.

7 | A Guide to FedRAMP in 2024: FAQs & Key Takeaways

Sliding in at lucky number seven, we’ve got the ultimate cheat sheet for FedRAMP in 2024 (and 2025😉)! Ever wonder how Uncle Sam greenlights those fancy cloud services? A Guide to FedRAMP in 2024: FAQs & Key Takeaways spills the beans on all the FedRAMP basics you’ve been struggling to find—fast answers without all the fluff. 

It covers what FedRAMP is, how it works, who needs it, and why it matters; detailing the key players and how it connects with other federal security standards like FISMA. The idea is to help you quickly get a handle on why cloud service providers often need FedRAMP certification, what benefits it offers, and what’s involved in earning that gold star of trust from federal agencies. By the end, you’ll know exactly where to start and what to expect if you’re eyeing a spot in the federal cloud marketplace.

6 | Introducing Grant: OSS License Management

At number six on tonight’s countdown, we’re rolling out the red carpet for Grant—Anchore’s snazzy new software license-wrangling sidekick! Introducing Grant: An OSS project for inspecting and checking license compliance using SBOMs covers how Grant helps you keep track of software licenses in your projects. 

By using SBOMs, Grant can quickly show you which licenses are in play—and whether any have changed from something friendly to something more restrictive. With handy list and check commands, Grant makes it easier to spot and manage license risk, ensuring you keep shipping software without getting hit with last-minute legal surprises.
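
A hedged sketch of what those commands look like on the command line (the argument forms below are assumptions; check Grant’s documentation for exact usage):

# List the licenses Grant discovers in an image or SBOM
$ grant list alpine:latest

# Check those licenses against your policy and flag risky ones
$ grant check alpine:latest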

5 | An Overview of SSDF Attestation: Compliance Need-to-Knows

Landing at number five on tonight’s compliance countdown is a big wake-up call for all you software suppliers eyeing Uncle Sam’s checkbook: the SSDF Attestation Form. That’s right—starting now, if you wanna do business with the feds, you gotta show off those DevSecOps chops, no exceptions! In Using the Common Form for SSDF Attestation: What Software Producers Need to Know we break down the new Secure Software Development Attestation Form—released in March 2024—that’s got everyone talking in the federal software space. 

In short, if you’re a software vendor wanting to sell to the US government, you now have to “show your work” when it comes to secure software development. The form builds on the SSDF framework, turning it from a nice-to-have into a must-do. It covers which software is included, the timelines you need to know, and what happens if you don’t shape up.

There are real financial risks if you can’t meet the deadlines or if you fudge the details (hello, criminal penalties!). With this new rule, it’s time to get serious about locking down your dev practices or risk losing out on government contracts.

4 | Prep your Containers for STIG

At number four, we’re diving headfirst into the STIG compliance world—the DoD’s ultimate ‘tough crowd’ when it comes to security! If you’re feeling stressed about locking down those container environments—we’ve got you covered. 4 Ways to Prepare your Containers for the STIG Process is all about tackling the often complicated STIG process for containers in DoD projects. 

You’ll learn how to level up your team by cross-training cybersecurity pros in container basics and introducing your devs and architects to STIG fundamentals. It also suggests going straight to the official DISA source for current STIG info, making the STIG Viewer a must-have tool on everyone’s workstation, and looking into automation to speed up compliance checks. 

Bottom line: stay informed, build internal expertise, and lean on the right tools to keep the STIG process from slowing you down.

3 | Syft Graduates to v1.0!

Give it up for number three on our countdown—Syft’s big graduation announcement! In Syft Reaches v1.0! we celebrate Syft hitting the big 1.0 milestone.

Syft is Anchore’s OSS tool for generating SBOMs, helping you figure out exactly what’s inside your software, from container images to source code. Over the years, it’s grown to support over 40 package types, outputting SBOMs in various formats like SPDX and CycloneDX. With v1.0, Syft’s CLI and API are now stable, so you can rely on it for consistent results and long-term compatibility. 
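
For example, a single Syft invocation can emit the SBOM in more than one format at once (the -o format=path syntax and the image tag below are assumptions for illustration; check the Syft documentation for your version):

$ syft alpine:3.19 -o spdx-json=alpine.spdx.json -o cyclonedx-json=alpine.cdx.json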

But don’t worry—development doesn’t stop here. The team plans to keep adding support for more ecosystems and formats, and they invite the community to join in, share ideas, and contribute to the future of Syft.

2 | RAISE 2.0 Overview: RMF and ATO for the US Navy

Next up at number two is the lowdown on RAISE 2.0—your backstage pass to lightning-fast software approvals with the US Navy! In RMF and ATO with RAISE 2.0 — Navy’s DevSecOps solution for Rapid Delivery we break down what RAISE 2.0 means for teams working with the Department of the Navy’s containerized software.  The key takeaway? By using an approved DevSecOps platform—known as an RPOC—you can skip getting separate ATOs for every new app. 

The guidelines encourage a “shift left” approach, focusing on early and continuous security checks throughout development. Tools like Anchore Enforce Federal Edition can help automate the required security gates, vulnerability scans, and policy checks, making it easier to keep everything compliant. 

In short, RAISE 2.0 is all about streamlining security, speeding up approvals, and helping you deliver secure code faster.

1 | Introduction to the DoD Software Factory

Taking our top spot at number one, we’ve got the DoD software factory—the true VIP of the DevSecOps world! We’re talking about a full-blown, high-security software pipeline that cranks out code for the defense sector faster than a fighter jet screaming across the sky. In Introduction to the DoD Software Factory we break down what a DoD software factory really is—think of it as a template to build a DoD-approved DevSecOps pipeline. 

The blog post details how concepts like shifting security left, using microservices, and leveraging automation all come together to meet the DoD’s sky-high security standards. Whether you choose an existing DoD software factory (like Platform One) or build your own, the goal is to streamline development without sacrificing security. 

Tools like Anchore Enforce Federal Edition can help with everything from SBOM generation to continuous vulnerability scanning, so you can meet compliance requirements and keep your mission-critical apps protected at every stage.

Wrap-Up

That wraps up the top ten Anchore blog posts of 2024! We covered it all: next-level software supply chain best practices, military-grade compliance tactics, and all the open-source goodies that keep your DevSecOps pipeline firing on all cylinders. 

The common thread throughout them all is the recognition that security and speed can work hand-in-hand. With SBOM-driven approaches, modern vulnerability management, and automated compliance checks, organizations can achieve the rapid, secure, and compliant software delivery required in the DevSecOps era. We hope these posts will serve as a guide and inspiration as you continue to refine your DevSecOps practice, embrace new technologies, and steer your organization toward a more secure and efficient future.

If you enjoyed our greatest hits album of 2024 but need more immediacy in your life, follow along in 2025 by subscribing to the Anchore Newsletter or following Anchore on your favorite social platform.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

Going All In: Anchore at SBOM Plugfest 2024

When we were invited to participate in Carnegie Mellon University’s Software Engineering Institute (SEI) SBOM Harmonization Plugfest 2024, we saw an opportunity to contribute to SBOM generation standardization efforts and thoroughly exercise our open-source SBOM generator, Syft.

While the Plugfest only required two SBOM submissions, we decided to go all in – and learned some valuable lessons along the way.

The Plugfest Challenge

The SBOM Harmonization Plugfest aims to understand why different tools generate different SBOMs for the same software. It’s not a competition but a collaborative study to improve SBOM implementation harmonization. The organizers selected eight diverse software projects, ranging from Node.js applications to C++ libraries, and asked participants to generate SBOMs in standard formats like SPDX and CycloneDX.

Going Beyond the Minimum

Instead of just submitting two SBOMs, we decided to:

  1. Generate SBOMs for all eight target projects
  2. Create both source and binary analysis SBOMs where possible
  3. Output in every format Syft supports
  4. Test both enriched and non-enriched versions
  5. Validate everything thoroughly

This comprehensive approach would give us (and the broader community) much more data to work with.

Automation: The Key to Scale

To handle this expanded scope, we created a suite of scripts to automate the entire process:

  1. Target acquisition
  2. Source SBOM generation
  3. Binary building
  4. Binary SBOM generation
  5. SBOM validation

The entire pipeline runs in about 38 minutes on a well-connected server, generating nearly three hundred SBOMs across different formats and configurations.
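
As a rough illustration, the heart of such a pipeline can be sketched in a few lines of shell. This is a simplified stand-in for our scripts rather than the actual Plugfest tooling; the target list, directory layout, and format choices below are assumptions for the example.

#!/usr/bin/env bash
set -euo pipefail

# Hypothetical subset of the Plugfest targets, checked out under ./src/
targets="dependency-track httpie jq hexyl nodejs-goof"
formats="spdx-json cyclonedx-json syft-json"

for t in $targets; do
  mkdir -p "sboms/$t"
  for f in $formats; do
    # Source analysis of the local checkout, one SBOM per output format
    syft "dir:src/$t" -o "$f" > "sboms/$t/${f}.json"
  done
done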

The Power of Enrichment

One of Syft’s interesting features is its --enrich option, which can enhance SBOMs with additional metadata from online sources. Here’s a real example showing the difference in a CycloneDX SBOM for Dependency-Track:

$ wc -l dependency-track/cyclonedx-json.json dependency-track/cyclonedx-json_enriched.json
  5494 dependency-track/cyclonedx-json.json
  6117 dependency-track/cyclonedx-json_enriched.json
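
For reference, the two files compared above can be produced with invocations along these lines (the --enrich flag value is an assumption based on current Syft behavior, and the source directory is a placeholder):

$ syft dir:./src/dependency-track -o cyclonedx-json > dependency-track/cyclonedx-json.json
$ syft dir:./src/dependency-track -o cyclonedx-json --enrich all > dependency-track/cyclonedx-json_enriched.json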

The enriched version contains additional information like license URLs and CPE identifiers:

{
  "license": {
    "name": "Apache 2",
    "url": "http://www.apache.org/licenses/LICENSE-2.0"
  },
  "cpe": "cpe:2.3:a:org.sonatype.oss:JUnitParams:1.1.1:*:*:*:*:*:*:*"
}

These additional identifiers are crucial for security and compliance teams – license URLs help automate legal compliance checks, while CPE identifiers enable consistent vulnerability matching across security tools.

SBOM Generation of Binaries

While source code analysis is valuable, many Syft users analyze built artifacts and containers. This reflects real-world usage where organizations must understand what’s being deployed, not just what’s in the source code. We built and analyzed binaries for most target projects:

  • Dependency-Track (Docker): The container SBOMs included ~1000 more items than source analysis, including base image components like Debian packages
  • HTTPie (pip install): Binary analysis caught runtime Python dependencies not visible in source
  • jq (Docker): Python dependencies contributed significant additional packages
  • Minecolonies (Gradle): Java runtime archives (JARs) appeared in binary analysis, but not in the source
  • OpenCV (CMake): Binary and source SBOMs were largely the same
  • hexyl (Cargo build): Rust static linking meant minimal difference from source
  • nodejs-goof (Docker): Node.js runtime and base image packages significantly increased the component count

Some projects, like gin-gonic (a library) and PHPMailer, weren’t built as they’re not typically used as standalone binaries.

The differences between source and binary SBOMs were striking. For example, the Dependency-Track container SBOM revealed:

  • Base image operating system packages
  • Runtime dependencies not visible in source analysis
  • Additional layers of dependencies from the build process
  • System libraries and tools included in the container

This perfectly illustrates why both source and binary analysis are important:

  • Source SBOMs show some direct development dependencies
  • Binary/container SBOMs show the complete runtime environment
  • Together, they provide a full picture of the software supply chain

Organizations can leverage these differences in their CI/CD pipelines – using source SBOMs for early development security checks and binary/container SBOMs for final deployment validation and runtime security monitoring.
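
In practice, that can be as simple as generating two SBOMs per release, one from the source checkout and one from the built container image (the path and image name below are placeholders):

$ syft dir:. -o cyclonedx-json > source-sbom.json
$ syft myorg/myapp:1.4.2 -o cyclonedx-json > image-sbom.json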

Unexpected Discovery: SBOM Generation Bug

One of the most valuable outcomes wasn’t planned at all. During our comprehensive testing, we discovered a bug in Syft’s SPDX document generation. The SPDX validators were flagging our documents as invalid due to absolute file paths:

file name must not be an absolute path starting with "/", but is: 
/.github/actions/bootstrap/action.yaml
file name must not be an absolute path starting with "/", but is: 
/.github/workflows/benchmark-testing.yaml
file name must not be an absolute path starting with "/", but is: 
/.github/workflows/dependabot-automation.yaml
file name must not be an absolute path starting with "/", but is: 
/.github/workflows/oss-project-board-add.yaml

The SPDX specification requires relative file paths in the SBOM, but Syft used absolute paths. Our team quickly developed a fix, which involved converting absolute paths to relative ones in the format model logic:

// spdx requires that the file name field is a relative filename
// with the root of the package archive or directory
func convertAbsoluteToRelative(absPath string) (string, error) {
    // Ensure the absolute path is absolute (although it should already be)
    if !path.IsAbs(absPath) {
        // already relative
        log.Debugf("%s is already relative", absPath)
        return absPath, nil
    }
    // we use "/" here given that we're converting absolute paths from root to relative
    relPath, found := strings.CutPrefix(absPath, "/")
    if !found {
        return "", fmt.Errorf("error calculating relative path: %s", absPath)
    }
    return relPath, nil
}

The fix was simple but effective – stripping the leading “/” from absolute paths while maintaining proper error handling and logging. This change was incorporated into Syft v1.18.0, which we used for our final Plugfest submissions.

This discovery highlights the value of comprehensive testing and community engagement. What started as participation in the Plugfest ended up improving Syft for all users, ensuring more standards-compliant SPDX documents. It’s a perfect example of how collaborative efforts like the Plugfest can benefit the entire SBOM ecosystem.

SBOM Validation

We used multiple validation tools to verify our SBOMs, including sbom-utility, pyspdxtools, and the NTIA online validator.

Interestingly, we found some disparities between validators. For example, some enriched SBOMs that passed sbom-utility validation failed with pyspdxtools, and the NTIA online validator gave us yet another result in many cases. This highlights the ongoing challenges in SBOM standardization – even the tools that check SBOM validity don’t always agree!

Key Takeaways

  • Automation is crucial: Our scripted approach allowed us to efficiently generate and validate hundreds of SBOMs.
  • Real-world testing matters: Building and analyzing binaries revealed insights (and bugs!) that source-only analysis might have missed.
  • Enrichment adds value: Additional metadata can significantly enhance SBOM utility, though support varies by ecosystem.
  • Validation is complex: Different validators can give different results, showing the need for further standardization.

Looking Forward

The SBOM Harmonization Plugfest results will be analyzed in early 2025, and we’re eager to see how different tools handled the same targets. Our comprehensive submission will help identify areas where SBOM generation can be improved and standardized.

More importantly, this exercise has already improved Syft for our users through the bug fix and given us valuable insights for future development. We’re committed to continuing this thorough testing and community participation to make SBOM generation more reliable and consistent for everyone.

The final SBOMs are published in the plugfest-sboms repo, with the scripts in the plugfest-scripts repository. Consider using Syft for SBOM generation against your code and containers, and let us know how you get on in our community discourse.

ModuleQ reduces vulnerability management time by 80% with Anchore Secure

ModuleQ, an AI-driven enterprise knowledge platform, knows only too well the stakes for a company providing software solutions in the highly regulated financial services sector. In this world where data breaches are cause for termination of a vendor relationship and evolving cyberthreats loom large, proactive vulnerability management is not just a best practice—it’s a necessity. 

ModuleQ required a vulnerability management platform that could automatically identify and remediate vulnerabilities, maintain airtight security, and meet stringent compliance requirements—all without slowing down their development velocity.

Learn the essential container security best practices to reduce the risk of software supply chain attacks in this white paper.

The Challenge: Scaling Security in a High-Stakes Environment

ModuleQ found itself drowning in a flood of newly released vulnerabilities—over 25,000 in 2023 alone. Operating in a heavily regulated industry meant any oversight could have severe repercussions. High-profile incidents like the Log4j exploit underscored the importance of supply chain security, yet the manual, resource-intensive nature of ModuleQ’s vulnerability management process made it hard to keep pace.

The mandate that no critical vulnerabilities reached production was a particularly high bar to meet with the existing manual review process. Each time engineers stepped away from their coding environment to check a separate security dashboard, they lost context, productivity, and confidence. The fear of accidentally letting something slip through the cracks was ever present.

The Solution: Anchore Secure for Automated, Integrated Vulnerability Management

ModuleQ chose Anchore Secure to simplify, automate, and fully integrate vulnerability management into their existing DevSecOps workflows. Instead of relying on manual security reviews, Anchore Secure injected security measures seamlessly into ModuleQ’s Azure DevOps pipelines, .NET, and C# environment. Every software build—staged nightly through a multi-step pipeline—was automatically scanned for vulnerabilities. Any critical issues triggered immediate notifications and halted promotions to production, ensuring that potential risks were addressed before they could ever reach customers.

Equally important, Anchore’s platform was built to operate in on-prem or air-gapped environments. This guaranteed that ModuleQ’s clients could maintain the highest security standards without the need for external connectivity. For an organization whose customers demand this level of diligence, Anchore’s design provided peace of mind and strengthened client relationships.

Results: Faster, More Secure Deployments

By adopting Anchore Secure, ModuleQ dramatically accelerated and enhanced its vulnerability management approach:

  • 80% Reduction in Vulnerability Management Time: Automated scanning, triage, and reporting freed the team from manual checks, letting them focus on building new features rather than chasing down low-priority issues.
  • 50% Less Time on Security Tasks During Deployments: Proactive detection of high-severity vulnerabilities streamlined deployment workflows, enabling ModuleQ to deliver software faster—without compromising security.
  • Unwavering Confidence in Compliance: With every new release automatically vetted for critical vulnerabilities, ModuleQ’s customers in the financial sector gained renewed trust. Anchore’s support for fully on-prem deployments allowed ModuleQ to meet stringent security demands consistently.

Looking Ahead

In an era defined by unrelenting cybersecurity threats, ModuleQ proved that speed and security need not be at odds. Anchore Secure provided a turnkey solution that integrated seamlessly into their workflow, saving time, strengthening compliance, and maintaining the agility to adapt to future challenges. By adopting an automated security backbone, ModuleQ has positioned itself as a trusted and reliable partner in the financial services landscape.

Looking for more details? Read the ModuleQ case study in full. If you’re ready to move forward see all of the features on Anchore Secure’s product page or reach out to our team to schedule a demo.

Enhancing Container Security with NVIDIA’s AI Blueprint and Anchore’s Syft

Container security is critical – one breach can lead to devastating data losses and business disruption. NVIDIA’s new AI Blueprint for Vulnerability Analysis transforms how organizations handle these risks by automating vulnerability detection and analysis. For enhanced container security, this AI-powered solution is a potential game-changer.

At its core, the Blueprint combines AI-driven scanning with NVIDIA’s Morpheus Cybersecurity SDK to identify vulnerabilities in seconds rather than hours or days. The system works through a straightforward process:

First, it generates a Software Bill of Materials (SBOM) using Syft, Anchore’s open source SBOM generator, which creates a detailed inventory of all software components in a container. This SBOM feeds into an AI pipeline that leverages large language models (LLMs) and retrieval-augmented generation (RAG) to analyze potential vulnerabilities.
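
For the SBOM generation step, a Syft invocation along these lines produces a machine-readable inventory of a container image (the image name is a placeholder, and the exact format the Blueprint consumes may differ):

$ syft example/app:latest -o cyclonedx-json > app-sbom.json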

The AI examines multiple data sources – from code repositories to vulnerability databases – and produces a detailed analysis of each potential threat. Most importantly, it distinguishes between genuine security risks and false positives by considering environmental factors and dependency requirements.

The system then provides clear recommendations for each finding through a standardized Vulnerability Exploitability eXchange (VEX) status.

This Blueprint is particularly valuable because it automates traditionally manual security analysis. Security teams can stop spending days investigating potential vulnerabilities and focus on addressing confirmed threats. This efficiency is invaluable for organizations managing container security at scale.

Want to try it yourself? Check out the Blueprint, read more in the NVIDIA blog post, and explore the vulnerability-analysis git repo. Let us know if you’ve tried this out with Syft, over on the Anchore Community Discourse.

Survey Data Shows 200% Increase in Software Supply Chain Focus

Data from the recent Anchore 2024 Software Supply Chain Security Report shows a 200% increase in the priority of software supply chain security. As attacks continue to increase, organizations are doubling their focus in this area, and there is much for the industry to understand about the nuances and intensity of software supply chain attacks over the past twelve months.

Below we’ve compiled a graphical representation of the insights gathered in the Anchore 2024 Software Supply Chain Security Report, providing a visual summary of the unique experiences and practices of over 100 organizations that are the targets of software supply chain attacks.

The Anchore 2024 Software Supply Chain Security Report is now available.

The Evolution of SBOMs in the DevSecOps Lifecycle: Part 2

Welcome back to the second installment of our two-part series on “The Evolution of SBOMs in the DevSecOps Lifecycle”. In our first post, we explored how Software Bills of Materials (SBOMs) evolve over the first 4 stages of the DevSecOps pipeline—Plan, Source, Build & Test—and how each type of SBOM serves different purposes. Some of those use-cases include: shift left vulnerability detection, regulatory compliance automation, OSS license risk management and incident root cause analysis.

In this part, we’ll continue our exploration with the final 4 stages of the DevSecOps lifecycle, examining:

  • Analyzed SBOMs at the Release (Registry) stage
  • Deployed SBOMs during the Deployment phase
  • Runtime SBOMs in Production (Operate & Monitor stages)

As applications migrate down the pipeline, design decisions made at the beginning begin to ossify, becoming more difficult to change; this influences the challenges that are experienced and the role that SBOMs play in overcoming these novel problems. Some of the new challenges that come up include pipeline leaks, vulnerabilities in third-party packages, and runtime injection, all of which introduce significant risk. Understanding how SBOMs evolve across these stages helps organizations mitigate these risks effectively.

Whether you’re aiming to enhance your security posture, streamline compliance reporting, or improve incident response times, this comprehensive guide will equip you with the knowledge to leverage SBOMs effectively from Release to Production. Additionally, we’ll offer pro tips to help you maximize the benefits of SBOMs in your DevSecOps practices.

So, let’s continue our journey through the DevSecOps pipeline and discover how SBOMs can transform the latter stages of your software development lifecycle.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Release (or Registry) => Analyzed SBOM

After development is completed and the new release of the software is declared a “golden” image, the build system pushes the release artifact to a registry for storage until it is deployed. At this stage, an SBOM that is generated from these container images, binaries, etc. is named an “Analyzed SBOM” by CISA. The name is a little confusing since all SBOMs should be analyzed regardless of the stage at which they are generated. A more appropriate name might be a Release SBOM, but we’ll stick with CISA’s name for now.

At first glance, it would seem that Analyzed SBOMs and the final Build SBOMs should be identical since they describe the same software, but that doesn’t hold up in practice. DevSecOps pipelines aren’t hermetically sealed systems; they can be “leaky”. You might be surprised what finds its way into this storage repository and eventually gets deployed, bypassing your carefully constructed build and test setup.

On top of that, the registry holds more than just first-party applications that are built in-house. It also stores 3rd-party container images like operating systems and any other self-contained applications used by the organization.

The additional metadata that is collected for an Analyzed SBOM includes:

  • Release images that bypass the happy path build and test pipeline
  • 3rd-party container images, binaries and applications

Pros and Cons

Pros:

  • Comprehensive Artifact Inventory: A more holistic view of all software—both 1st- and 3rd-party—that is utilized in production.
  • Enhanced Security and Compliance Posture: Catches vulnerabilities and non-compliant images for all software that will be deployed to production. This reduces the risk of security incidents and compliance violations.
  • Third-Party Supply Chain Risk Management: Provides insights into the vulnerabilities and compliance status of third-party components.
  • Ease of implementation: This stage is typically the lowest lift for implementation given that most SBOM generators can be deployed standalone and pointed at the registry to scan all images.

Cons:

  • High Risk for Release Delays: Scanning images at this stage is akin to traditional waterfall-style development patterns. Most design decisions are baked in and changes typically incur a steep penalty.
  • Difficult to Push Feedback into Existing Workflows: The registry sits outside of typical developer workflows, and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.
  • Complexity in Management: Managing SBOMs for both internally developed and third-party components adds complexity to the software supply chain.

Use-Cases

  • Software Supply Chain Security: Organizations can detect vulnerabilities in both their internally developed software and external software to prevent supply chain injections from leading to a security incident.
  • Compliance Reporting: Reporting on both 1st- and 3rd-party software is necessary for industries with strict regulatory requirements.
  • Detection of Leaky Pipelines: Identifies release images that have bypassed the standard build and test pipeline, allowing teams to take corrective action.
  • Third-Party Risk Analysis: Assesses the security and compliance of third-party container images, binaries, and applications before they are deployed.

Example: An organization subject to strict compliance standards like FedRAMP or cATO uses Analyzed SBOMs to verify that all artifacts in their registry, including third-party applications, comply with security policies and licensing requirements. This practice not only enhances their security posture but also streamlines the audit process.

Pro Tip

A registry is an easy and non-invasive way to test and evaluate potential SBOM generators. It won’t give you a full picture of what can be found in your DevSecOps pipeline but it will at least give you an initial idea of its efficacy and help you make the decision on whether to go through the effort of integrating it into your build pipeline where it will produce deeper insights.
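
As a minimal sketch of that evaluation, most generators can scan an image straight out of a registry; with Syft it looks roughly like this (the registry and image names are placeholders):

$ syft registry:registry.example.com/myapp:1.2.3 -o spdx-json > myapp-1.2.3.spdx.json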

Deploy => Deployed SBOM

As your container orchestrator deploys an image from your registry into production, it will also orchestrate any production dependencies such as sidecar containers. At this stage, the SBOM that is generated is named a “Deployed SBOM” by CISA.

The ideal scenario is that your operations team is storing all of these images in the same central registry as your engineering team but—as we’ve noted before—reality diverges from the ideal.

The additional metadata that is collected for a Deployed SBOM includes:

  • Any additional sidecar containers or production dependencies that are injected or modified through a release controller.

Pros and Cons

Pros:

  • Enhanced Security Posture: The final gate to prevent vulnerabilities from being deployed into production. This reduces the risk of security incidents and compliance violations.
  • Leaky Pipeline Detection: Another location to increase visibility into the happy path of the DevSecOps pipeline being circumvented.
  • Compliance Enforcement: Some compliance standards require a deployment breaking enforcement gate before any software is deployed to production. A container orchestrator release controller is the ideal location to implement this.

Cons:

Essentially the same issues that come up during the release phase.

  • High Risk for Release Delays: Scanning images at this stage is even later than traditional waterfall-style development patterns and will incur a steep penalty if an issue is uncovered.
  • Difficult to Push Feedback into Existing Workflows: A deployment release controller sits outside of typical developer workflows, and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.

Use-Cases

  • Strict Software Supply Chain Security: Implementing a pipeline breaking gating mechanism is typically reserved for only the most critical security vulnerabilities (think: an actively exploitable known vulnerability).
  • High-Stakes Compliance Enforcement: Industries like defense, financial services and critical infrastructure will require vendors to implement a deployment gate for specific risk scenarios beyond actively exploitable vulnerabilities.
  • Compliance Audit Automation: Most regulatory compliance frameworks mandate audit artifacts at deploy time, these documents can be automatically generated and stored for future audits.

Example: A Deployed SBOM can be used as the source of truth for generating a report that demonstrates that no HIGH or CRITICAL vulnerabilities were deployed to production during an audit period.

Pro Tip

Combine a Deployed SBOM with a container vulnerability scanner that cross-checks all vulnerabilities against CISA’s Known Exploited Vulnerabilities (KEV) catalog. In the scenario where a matching KEV entry is found for a software component, you can configure your vulnerability scanner to return a FAIL response to your release controller to abort the deployment.

This strategy strikes an ideal balance: it avoids adding delays to software delivery while still blocking the vulnerabilities that carry an extremely high probability of causing a security incident.
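
Below is a rough sketch of such a gate using Syft and Grype as stand-in tools. The KEV cross-check is modeled here simply as failing on Critical findings; a real implementation would additionally filter matches against the KEV catalog, and the image name is a placeholder.

$ syft registry:registry.example.com/myapp:1.2.3 -o syft-json > myapp.sbom.json

# Grype can scan a previously generated SBOM; a non-zero exit code
# signals the release controller to abort the deployment
$ grype sbom:./myapp.sbom.json --fail-on critical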

Operate & Monitor (or Production) => Runtime SBOM

After your container orchestrator has deployed an application into your production environment, it is live and serving customer traffic. SBOMs generated at this stage don’t have a name specified by CISA; they are sometimes referred to as “Runtime SBOMs”. SBOMs are still a new-ish standard and will continue to evolve.

The additional metadata that is collected for a Runtime SBOM includes:

  • Modifications (i.e., intentional hotfixes or malicious malware injection) made to running applications in your production environment. 

Pros and Cons

Pros:

  • Continuous Security Monitoring: Identifies new vulnerabilities that emerge after deployment.
  • Active Runtime Inventory: Provides a canonical view into an organization’s active software landscape.
  • Low Lift Implementation: Deploying SBOM generation into a production environment typically only requires deploying the scanner as another container and giving it permission to access the rest of the production environment.

Cons:

  • No Shift Left Security: Runtime scanning is, by definition, excluded from a shift-left security posture.
  • Potential for Release Rollbacks: Scanning images at this stage is the worst possible place for proactive remediation. Discovering a vulnerability could potentially cause a security incident and force a release rollback.

Use-Cases

  • Rapid Incident Management: When new critical vulnerabilities are discovered and announced by the community the first priority for an organization is to determine exposure. An accurate production inventory, down to the component-level, is needed to answer this critical question.
  • Threat Detection: Continuously monitoring for anomalous activity linked to specific components. Sealing your system off completely from advanced persistent threats (APTs) is an unfeasible goal. Instead, quick detection and rapid intervention is the scalable solution to limit the impact of these adversaries.
  • Patch Management: As new releases of 3rd-party components and applications are released an inventory of impacted production assets provides helpful insights that can direct the prioritization of engineering efforts.

Example: When the XZ Utils vulnerability was announced in the spring of 2024, organizations that already automatically generated a Runtime SBOM inventory ran a simple search query against their SBOM database and knew within minutes—or even seconds—whether they were impacted.
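
As a rough illustration of that kind of query, assuming runtime SBOMs are stored as JSON files in a directory (a deliberately simple stand-in for a real SBOM database):

# First-pass exposure check for the xz backdoor (CVE-2024-3094):
# find every runtime SBOM that contains the xz-utils package, then
# inspect the matches for the affected 5.6.0/5.6.1 versions
$ grep -rl '"xz-utils"' ./runtime-sboms/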

Pro Tip

If you want to learn about how Google was able to go from an all-hands on deck security incident when the XZ Utils vulnerability was announced to an all clear under 10 minutes, watch our webinar with the lead of Google’s SBOM initiative.

Wrap-Up

As the SBOM standard has evolved the subject has grown considerably. What started as a structured way to store information about open source licenses has expanded to include numerous use-cases. A clear understanding of the evolution of SBOMs throughout the DevSecOps lifecycle is essential for organizations aiming to solve problems ranging from software supply chain security to regulatory compliance to legal risk management.

SBOMs are a powerful tool in the arsenal of modern software development. By recognizing their importance and integrating them thoughtfully across the DevSecOps lifecycle, you position your organization at the forefront of secure, efficient, and compliant software delivery.

Ready to secure your software supply chain and automate compliance tasks with SBOMs? Anchore is here to help. We offer SBOM management, vulnerability scanning and compliance automation enforcement solutions. If you still need some more information before looking at solutions, check out our webinar below on scaling a secure software supply chain with Kubernetes. 👇👇👇

Learn how Spectro Cloud secured their Kubernetes-based software supply chain and the pivotal role SBOMs played.

The Evolution of SBOMs in the DevSecOps Lifecycle: From Planning to Production

The software industry has wholeheartedly adopted the practice of building new software on the shoulders of the giants that came before. To accomplish this, developers assemble a foundation of pre-built, 3rd-party components and then wrap custom 1st-party code around this structure to create novel applications. It is an extraordinarily innovative and productive practice, but it also introduces challenges ranging from security vulnerabilities to compliance headaches to legal risk nightmares. Software bills of materials (SBOMs) have emerged to provide solutions for these wide-ranging problems.

An SBOM provides a detailed inventory of all the components that make up an application at a point in time. However, it’s important to recognize that not all SBOMs are the same—even for the same piece of software! SBOMs evolve throughout the DevSecOps lifecycle, just as an application evolves from source code to a container image to a running application. The Cybersecurity and Infrastructure Security Agency (CISA) has codified this idea by differentiating between the different types of SBOMs. Each type serves different purposes and captures information about an application through its evolutionary process.

In this 2-part blog series, we’ll deep dive into each stage of the DevSecOps process and the associated SBOM, highlighting the differences, the benefits and disadvantages, and the use-cases that each type of SBOM supports. Whether you’re just beginning your SBOM journey or looking to deepen your understanding of how SBOMs can be integrated into your DevSecOps practices, this comprehensive guide will provide valuable insights and advice from industry experts.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

Types of SBOMs and the DevSecOps Pipeline

Over the past decade the US government got serious about software supply chain security and began advocating for SBOMs as the standardized approach to the problem. As part of this initiative CISA created the Types of Software Bill of Material (SBOM) Documents white paper that codified the definitions of the different types of SBOMs and mapped them to each stage of the DevSecOps lifecycle. We will discuss each in turn but before we do, let’s anchor on some terminology to prevent confusion or misunderstanding.

Below is a diagram that lays out each stage of the DevSecOps lifecycle as well as the naming convention we will use going forward.

With that out of the way, let’s get started!

Plan => Design SBOM

As the DevSecOps paradigm has spread across the software industry, a notable best practice known as the security architecture review has become integral to the development process. This practice embodies the DevSecOps goal of integrating security into every phase of the software lifecycle, aligning perfectly with the concept of Shift-Left Security—addressing security considerations as early as possible.

At this stage, the SBOM documents the planned components of the application. The CISA refers to SBOMs generated during this phase as Design SBOMs. These SBOMs are preliminary and outline the intended components and dependencies before any code is written.

The metadata that is collected for a Design SBOM includes:

  • Component Inventory: Identifying potential OSS libraries and frameworks to be used as well as the dependency relationship between the components.
  • Licensing Information: Understanding the licenses associated with selected components to ensure compliance.
  • Risk Assessment Data: Evaluating known vulnerabilities and security risks associated with each component.

This might sound like a lot of extra work, but luckily, if you’re already performing DevSecOps-style planning that incorporates a security and legal review—as is best practice—you’re already surfacing all of this information. The only difference is that this preliminary data is formatted and stored in a standardized data structure, namely an SBOM.
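
To make that concrete, below is a minimal sketch of what a hand-authored Design SBOM could look like in CycloneDX JSON. The application name, component, version, and license are purely illustrative placeholders; a real Design SBOM would list every planned dependency surfaced during the review.

$ cat design-sbom.cdx.json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "metadata": {
    "component": { "type": "application", "name": "payments-api", "version": "0.1.0-design" }
  },
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.24.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.24.1",
      "licenses": [ { "license": { "id": "Apache-2.0" } } ]
    }
  ]
}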

Pros and Cons

Pros:

  • Maximal Shift-Left Security: Vulnerabilities cannot be found any earlier in the software development process. Design time security decisions are the peak of a proactive security posture and preempt bad design decisions before they become ingrained into the codebase.
  • Cost Efficiency: Resolving security issues at this stage is generally less expensive and less disruptive than during later stages of development or—worst of all—after deployment.
  • Legal and Compliance Risk Mitigation: Ensures that all selected components meet necessary compliance standards, avoiding legal complications down the line.

Cons:

  • Upfront Investment: Gathering detailed information on potential components and maintaining an SBOM at this stage requires a non-trivial commitment of time and effort.
  • Incomplete Information: Projects are not static; they adapt as unplanned challenges surface. A design SBOM likely won’t stay relevant for long.

Use-Cases

There are a number of use-cases that are enabled by a Design SBOM:

  • Security Policy Enforcement: Automatically checking proposed components against organizational security policies to prevent the inclusion of disallowed libraries or frameworks.
  • License Compliance Verification: Ensuring that all components comply with the project’s licensing requirements, avoiding potential legal issues.
  • Vendor and Third-Party Risk Management: Assessing the security posture of third-party components before they are integrated into the application.
  • Enhance Transparency and Collaboration: A well-documented SBOM provides a clear record of the software’s components and, more importantly, demonstrates that the project aligns with the goals of all of the stakeholders (engineering, security, legal, etc.). This builds trust and creates a collaborative environment that increases the chances that each stakeholder’s desired outcome will be achieved.

Example:

A financial services company operating within a strict regulatory environment uses SBOMs during planning to ensure that all components meet standards like PCI DSS. By doing so, they prevent the incorporation of insecure components that won’t meet PCI compliance. This reduces the risk of the financial penalties associated with security breaches and regulatory non-compliance.

Pro Tip

If your organization is still early in the maturity of its SBOM initiative, we generally recommend moving the integration of design-time SBOMs to the back of the queue. As we mentioned at the beginning of this section, the information stored in a Design SBOM is naturally surfaced during the DevSecOps planning process; as long as that information is being recorded and stored, much of the value of a Design SBOM will be captured in the artifact. This level of SBOM integration is best saved for later maturity stages, when your organization is ready to explore deeper insights that carry a higher risk-to-reward ratio.

Alternatively, if your organization is having difficulty getting your teams to adopt a collaborative DevSecOps planning process, mandating an SBOM as a requirement can act as a forcing function to catalyze a cultural shift.

Source => Source SBOM

During the development stage, engineers integrate the selected 3rd-party components into the codebase. CISA refers to SBOMs generated during this phase as Source SBOMs. The SBOMs generated here capture the actual implemented components as well as additional information that is specific to the developer doing the work.

The additional metadata that is collected for a Source SBOM includes:

  • Dependency Mapping: Documenting direct and transitive dependencies.
  • Identity Metadata: Adding contributor and commit information.
  • Developer Environment: Capturing information about the development environment.

Unlike Design SBOMs, which are typically created manually, Source SBOMs can be generated programmatically with a software composition analysis (SCA) tool—like Syft. These tools are usually packaged as command line interfaces (CLIs), since this is the preferred interface for developers.

If you’re looking for an SBOM generation tool (SCA embedded), we have a comprehensive list of options to make this decision easier.
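
As a rough sketch of what this looks like in practice with Syft, a Source SBOM can be generated straight from a local checkout of the project; the output filename is just an example:

$ syft dir:. -o cyclonedx-json > source-sbom.cdx.json    # scan the checked-out source tree, including lockfiles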

Pros and Cons

Pros:

  • Accurate and Timely Component Inventory: Reflects the actual components used in the codebase and tracks changes as the codebase is actively developed.
  • Shift-Left Vulnerability Detection: Identifies vulnerabilities as components are integrated but requires commit level automation and feedback mechanisms to be effective.
  • Facilitates Collaboration and Visibility: Keeps all stakeholders informed about divergence from the original plan and provokes conversations as needed. This is also dependent on automation to record changes during development and notification systems to broadcast the updates.

Example: A developer adds a new logging library to the project, such as an outdated version of Log4j. The SBOM, paired with a vulnerability scanner, immediately flags the Log4Shell vulnerability, prompting the engineer to update to a patched version.

Cons:

  • Noise from Developer Toolchains: Developer environments are often bespoke, which creates noise for security teams when development-only dependencies are recorded.
  • Potential Overhead: Continuous updates to the SBOM can be resource-intensive when done manually; the only resource-efficient method is to use an SBOM generation tool that automates the process.
  • Possibility of Missing Early Risks: Issues not identified during planning may surface here, requiring code changes.

Use-Cases

  • Faster Root Cause Analysis: During service incident retrospectives, questions arise about where, when, and by whom a specific component was introduced into an application. Source SBOMs are the programmatic record that can provide answers and decrease manual root cause analysis.
  • Real-Time Security Alerts: Immediate notification of vulnerabilities upon adding new components, decreasing time to remediation and keeping security teams informed.
  • Automated Compliance Checks: Ensuring added components comply with security or license policies to manage compliance risk.
  • Effortless Collaboration: Stakeholders can subscribe to a live feed of changes and immediately know when implementation diverges from the plan.

Pro Tip

Some SBOM generators allow developers to specify development dependencies that should be ignored, similar to a .gitignore file. This can help cut down on the noise created by unique developer setups.
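
For example, Syft supports excluding paths from a scan via its configuration file. A minimal sketch along these lines (the glob patterns are illustrative and worth checking against Syft’s documentation) keeps test fixtures and local virtual environments out of the Source SBOM:

$ cat .syft.yaml
exclude:
  - ./test/fixtures/**
  - ./.venv/**
  - ./node_modules/.cache/**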

Build & Test => Build SBOM

When a developer pushes a commit to the CI/CD build system, an automated process converts the application source code into an artifact that can then be deployed. CISA refers to SBOMs generated during this phase as Build SBOMs. These SBOMs capture both source code dependencies and build tooling dependencies.

The additional metadata that is collected includes:

  • Build Dependencies: Build tooling such as the language compilers, testing frameworks, package managers, etc.
  • Binary Analysis Data: Metadata for compiled binaries that don’t utilize traditional container formats.
  • Configuration Parameters: Details on build configuration files that might impact security or compliance.
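
To make the build-phase flow concrete, here is a minimal sketch of a CI step that generates a Build SBOM from the image produced by the pipeline; the registry, image name, and ${GIT_SHA} variable are hypothetical placeholders:

$ docker build -t registry.example.com/payments-api:${GIT_SHA} .
$ syft registry.example.com/payments-api:${GIT_SHA} -o spdx-json > payments-api-${GIT_SHA}.spdx.json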

Pros and Cons

Pros:

  • Build Infrastructure Analysis: Captures build-specific components which may have their own vulnerability or compliance issues.
  • Reuses Existing Automation Tooling: Enables programmatic security and compliance scanning as well as policy enforcement without introducing any additional build tooling.
  • Integrates with the Developer Workflow: Engineers receive security, compliance, and other feedback directly in the tools they already use, without the need to reference a new tool.
  • Reproducibility: Facilitates reproducing builds for debugging and auditing.

Cons:

  • SBOM Sprawl: Build processes run frequently; if each run generates an SBOM, you will find yourself with a glut of files to manage.
  • Delayed Detection: Vulnerabilities or non-compliance issues found at this stage may require rework.

Use-Cases

  • SBOM Drift Detection: By comparing SBOMs from two or more stages, unexpected dependency injection can be detected. This might take the form of a benign but leaky build pipeline that requires manual workarounds, or a malicious actor attempting to covertly introduce malware. Either way, this provides actionable and valuable information (see the sketch after this list).
  • Policy Enforcement: Enables the creation of build breaking gates to enforce security or compliance. For high-stakes operating environments like defense, financial services or critical infrastructure, automating security and compliance at the expense of some developer friction is a net-positive strategy.
  • Automated Compliance Artifacts: Compliance requires proof in the form of reports and artifacts. Re-utilizing existing build tooling automation to automate this task significantly reduces the manual work required by security teams to meet compliance requirements.
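
Here is a quick-and-dirty sketch of drift detection, assuming both SBOMs were generated in Syft’s native JSON format and with hypothetical filenames; it lists packages that appear in the Build SBOM but not in the Source SBOM:

$ jq -r '.artifacts[].name' source-sbom.syft.json | sort -u > source-packages.txt
$ jq -r '.artifacts[].name' build-sbom.syft.json | sort -u > build-packages.txt
$ comm -13 source-packages.txt build-packages.txt    # packages introduced between source and build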

Example: A security scan during testing uses the Build SBOM to identify a critical vulnerability and alerts the responsible engineer. The remediation process is initiated and a patch is applied before deployment.

Pro Tip

If your organization is just beginning its SBOM journey, this is the recommended phase of the DevSecOps lifecycle to implement SBOMs first. The two primary cons of this phase are the easiest to mitigate. For SBOM sprawl, you can procure a turnkey SBOM management solution like Anchore SBOM.

As for the delay in feedback created by waiting until the build phase: if your team is utilizing DevOps best practices and breaking features up into smaller components that fit into 2-week sprints, then this tight scoping will limit the impact of any significant vulnerabilities or non-compliance discovered.

Intermission

So far we’ve covered the first half of the DevSecOps lifecycle. Next week we will publish the second part of this blog series where we’ll cover the remainder of the pipeline. Watch our socials to be sure you get notified when part 2 is published.

If you’re looking for some additional reading in the meantime, check out our container security white paper below.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Choosing the Right SBOM Generator: A Framework for Success

Choosing the right SBOM (software bill of materials) generator is trickier than it looks at first glance. SBOMs are the foundation for a number of different uses, ranging from software supply chain security to continuous regulatory compliance. Due to their cornerstone nature, the SBOM generator that you choose will either pave the way for achieving your organization’s goals or become a roadblock that delays critical initiatives.

But how do you navigate the crowded market of SBOM generation tools to find the one that aligns with your organization’s unique needs? It’s not merely about selecting a tool with the most features or the nicest CLI. It’s about identifying a solution that maps directly to your desired outcomes and use-cases, whether that’s rapid incident response, proactive vulnerability management, or compliance reporting.

We at Anchore have been enabling organizations to achieve their SBOM-related outcomes and do it with the least amount of frustration and setbacks. We’ve compiled our learnings on choosing the right SBOM generation tool into a framework to help the wider community make decisions that set them up for success.

Below is a quick TL;DR of the high-level evaluation criteria that we cover in this blog post:

  • Understanding Your Use-Cases: Aligning the tool with your specific goals.
  • Ecosystem Compatibility: Ensuring support for your programming languages, operating systems, and build artifacts.
  • Data Accuracy: Evaluating the tool’s ability to provide comprehensive and precise SBOMs.
  • DevSecOps Integration: Assessing how well the tool fits into your existing DevSecOps tooling.
  • Proprietary vs. Open Source: Weighing the long-term implications of your choice.

By focusing on these key areas, you’ll be better equipped to select an SBOM generator that not only meets your current requirements but also positions your organization for future success.

Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

Know your use-cases

When choosing from the array of SBOM generation tools in the market, it is important to frame your decision with the outcome(s) that you are trying to achieve. If your goal is to improve the response time/mean time to remediation when the next Log4j-style incident occurs—and be sure that there will be a next time—an SBOM tool that excels at correctly identifying open source licenses in a code base won’t be the best solution for your use-case (even if you prefer its CLI ;-D).

What to Do:

  • Identify and prioritize the outcomes that your organization is attempting to achieve
  • Map the outcomes to the relevant SBOM use-cases
  • Review each SBOM generation tool to determine whether they are best suited to your use-cases

It can be tempting to prioritize an SBOM generator that is best suited to our preferences and workflows; we are the ones that will be using the tool regularly—shouldn’t we prioritize what makes our lives easier? If we prioritize our needs above the goal of the initiative, we might end up in a position where our choice of tools impedes our ability to realize the desired outcome. Using the correct framing, in this case by focusing on the use-cases, will keep us focused on delivering the best possible outcome.

SBOMs can be utilized for numerous purposes: security incident response, open source license compliance, proactive vulnerability management, compliance reporting, or software supply chain risk management. We won’t address every use-case and outcome in this blog post; a more comprehensive treatment of the potential SBOM use-cases can be found on our website.

Example SBOM Use-Cases:

  • Security incident response: an inventory of all applications and their dependencies that can be queried quickly and easily to identify whether a newly announced zero-day impacts the organization.
  • Proactive vulnerability management: all software and dependencies are scanned for vulnerabilities as part of the DevSecOps lifecycle and remediated based on organizational priority.
  • Regulatory compliance reporting: compliance artifacts and reports are automatically generated by the DevSecOps pipeline to enable continuous compliance and prevent manual compliance work.
  • Software supply chain risk management: an inventory of software components with identified vulnerabilities used to inform organizational decision making when deciding between remediating risk versus building new features.
  • Open source license compliance: an inventory of all software components and the associated OSS license to measure potential legal exposure.

Pro tip: While you will inevitably leave many SBOM use-cases out of scope for your current project, keeping secondary use-cases in the back of your mind while making a decision on the right SBOM tool will set you up for success when those secondary use-cases eventually become a priority in the future.

Does the SBOM generator support your organization’s ecosystem of programming languages, etc?

SBOM generators aren’t just tools to ingest data and re-format it into a standardized format. They are typically paired with a software composition analysis (SCA) tool that scans an application/software artifact for metadata that will populate the final SBOM.

Supporting the complete array of programming languages, build artifacts, and operating system ecosystems is essentially an impossible task. This means that support varies significantly depending on the SBOM generator that you select. An SBOM generator’s ability to help you reach your organizational goals is directly related to its support for your organization’s software tooling preferences. This will likely be one of the most important qualifications when choosing between different options and will rule out many that don’t meet the needs of your organization.

Considerations:

  • Programming Languages: Does the tool support all languages used by your team?
  • Operating Systems: Can it scan the different OS environments your applications run on top of?
  • Build Artifacts: Does the tool scan containers? Binaries? Source code repositories? 
  • Frameworks and Libraries: Does it recognize the frameworks and libraries your applications depend on?

Data accuracy

This is one of the most important criteria when evaluating different SBOM tools. An SBOM generator may claim support for a particular programming language but after testing the scanner you may discover that it returns an SBOM with only direct dependencies—honestly not much better than a package.json or go.mod file that your build process spits out.

Two different tools might both generate a valid SPDX SBOM document when run against the same source artifact, but the content of those documents can vary greatly. This variation depends on what the tool can inspect, understand, and translate. Being able to fully scan an application for both direct and transitive dependencies, as well as navigate non-idiomatic patterns in how software can be structured, ends up being the true differentiator between the field of SBOM generation contenders.

Imagine using two SBOM tools on a Debian package. One tool recognizes Debian packages and includes detailed information about them in the SBOM. The other can’t fully parse the Debian .deb format and omits critical information. Both produce an SBOM, but only one provides the data you need to power use-case based outcomes like security incident response or proactive vulnerability management.

Let’s make this example more concrete by simulating this difference with Syft, Anchore’s open source SBOM generation tool:

$ syft -q -o spdx-json nginx:latest > nginx_a.spdx.json
$ grype -q nginx_a.spdx.json | grep Critical
libaom3             3.6.0-1+deb12u1          (won't fix)       deb   CVE-2023-6879     Critical    
libssl3             3.0.14-1~deb12u2         (won't fix)       deb   CVE-2024-5535     Critical    
openssl             3.0.14-1~deb12u2         (won't fix)       deb   CVE-2024-5535     Critical    
zlib1g              1:1.2.13.dfsg-1          (won't fix)       deb   CVE-2023-45853    Critical

In this example, we first generate an SBOM using Syft then run it through Grype—our vulnerability scanning tool. Syft + Grype uncover 4 critical vulnerabilities.

Now let’s try the same thing but “simulate” an SBOM generator that can’t fully parse the structure of the software artifact in question:

$ syft -q -o spdx-json --select-catalogers "-dpkg-db-cataloger,-binary-classifier-cataloger" nginx:latest > nginx_b.spdx.json 
$ grype -q nginx_b.spdx.json | grep Critical
$

In this case, none of the critical vulnerabilities found with the former tool are returned.

This highlights the importance of careful evaluation of the SBOM generator that you decide on. It could mean the difference between effective vulnerability risk management and a security incident.

Can the SBOM tool integrate into your DevSecOps pipeline?

If the SBOM generator is packaged as a self-contained binary with a command line interface (CLI) then it should tick this box. CI/CD build tools are most amenable to this deployment model. If the SBOM generation tool in question isn’t a CLI then it should at least run as a server with an API that can be called as part of the build process.

Integrating with an organization’s DevSecOps pipeline is key to enable a scalable SBOM generation process. By implementing SBOM creation directly into the existing build tooling, organizations can leverage existing automation tools to ensure consistency and efficiency which are necessary for achieving the desired outcomes.
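
As a sketch of what that integration can look like (the image name is a placeholder), a single pipeline step can generate the SBOM and gate the build on the scan results:

$ syft myorg/webapp:latest -o syft-json > webapp.syft.json
$ grype sbom:./webapp.syft.json --fail-on high    # non-zero exit code fails the pipeline on High or Critical findings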

Proprietary vs. open source SBOM generator?

Using an open source SBOM tool is considered an industry best practice because it guards against the risks associated with vendor lock-in. As a bonus, the ecosystem for open source SBOM generation tooling is very healthy. OSS will always have an advantage over proprietary tools in regard to ecosystem coverage and data quality, because it gets into the hands of more users, which creates a feedback loop that closes gaps in coverage and quality.

Finally, even if your organization decides to utilize a software supply chain security product that has its own proprietary SBOM generator, it is still better to create your SBOMs with an open source SBOM generator, export to a standardized format (e.g., SPDX or CycloneDX) then have your software supply chain security platform ingest these non-proprietary data structures. All platforms will be able to ingest SBOMs from one or both of these standards-based formats.
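
For instance, an open source generator like Syft can emit the same scan in either standard format, which downstream platforms should then be able to ingest; the filenames here are just examples:

$ syft nginx:latest -o spdx-json > nginx.spdx.json
$ syft nginx:latest -o cyclonedx-json > nginx.cdx.json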

Wrap-Up

In a landscape where the next security/compliance/legal challenge is always just around the corner, equipping your team with the right SBOM generator empowers you to act swiftly and confidently. It’s an investment not just in a tool, but in the resilience and security of your entire software supply chain. By making a thoughtful, informed choice now, you’re laying the groundwork for a more secure and efficient future.

Anchore Now Available on AWS Marketplace and Joins ISV Accelerate

We are excited to announce two significant milestones in our partnership with Amazon Web Services (AWS) today:  

  • Anchore Enterprise can now be purchased through the AWS marketplace and 
  • Anchore has joined the Amazon Partner Network (APN) ISV Accelerate Program

Organizations like Nvidia, Cisco Umbrella and Infoblox validate our commitment to delivering trusted solutions for SBOM management, secure software supply chains, and automated compliance enforcement, and they can now benefit from a stronger partnership between AWS and Anchore.

Anchore’s best-in-breed container security solution was chosen by Cisco Umbrella as it seamlessly integrated into their AWS infrastructure and accelerated meeting all six FedRAMP requirements. They deployed Anchore into an environment that had to meet a number of high-security and compliance standards. Chief among those was STIG compliance for the Amazon EC2 nodes that backed the Amazon EKS deployment.

In addition, Anchore Enterprise supports high-security requirements such as IL4/IL6, FIPS, SSDF attestation and EO 14028 compliance.

Contact Anchore’s sales team today for a pricing quote or demo that suits your unique needs.

Anchore Enterprise is now available on AWS Marketplace

The AWS Marketplace offers a convenient and efficient way for AWS customers to procure Anchore. It simplifies the procurement process, provides greater control and governance, and fosters innovation by offering a rich selection of tools and services that seamlessly integrate with their existing AWS infrastructure. 

Anchore Enterprise on AWS Marketplace benefits DevSecOps teams by enabling:

  • Self-procurement via the AWS console
  • Faster procurement with applicable legal terms provided and standardized security review
  • Easier budget management with a single consolidated AWS bill for all infrastructure spend
  • Spend on Anchore Enterprise partially counts towards EDP (Enterprise Discount Program) committed spend

By strengthening our collaboration with AWS, customers can now feel at ease that Anchore Enterprise integrates and operates seamlessly on AWS infrastructure. Joining the ISV Accelerate Program allows us to work closely with AWS account teams to ensure seamless support and exceptional service for our joint clients. 

Purchase Anchore Enterprise on the AWS Marketplace or contact our sales team for a pricing quote that meets your organization’s needs.

Anchore Survey 2024: Only 1 in 5 organizations have full visibility of open source

The Anchore 2024 Software Supply Chain Security Report is now available. This report provides a unique set of insights into the experiences and practices of over 100 organizations that are the targets of software supply chain attacks.

Survey Highlights

The survey shows that amid growing software supply chain risks:

  • The intensity of software supply chain attacks is increasing.
  • 200% increase in the priority of software supply chain security.
  • Only 1 in 5 have full visibility of open source.
  • Third-party software joins open source as a top security challenge.
  • Organizations must comply with an average of 4.9 standards. 
  • 78% plan to increase SBOM usage.
  • Respondents worry about AI’s impact on software supply chain security.

The intensity of software supply chain attacks is increasing.

The survey shows that the intensity of software supply chain attacks is increasing, with 21% of successful supply chain attacks having a significant impact, more than doubling from 10% in 2022. 

200% increase in the priority of software supply chain security.

As a result of increased attacks, organizations are increasing their focus on software supply chain security, with a 200% increase in organizations making it a top priority. 

Only 1 in 5 have full visibility of open source.

Amid growing software supply chain risks, only 21% of respondents are very confident that they have complete visibility into all the dependencies of the applications their organization builds. Without this critical foundation, organizations are unaware of vulnerabilities that leave them open to supply chain attacks.

Third-party software joins open source as a top security challenge.

Organizations are looking to secure all elements of their software supply chain, including open source software and 3rd party libraries. While the security of open source software continues to be identified as a significant challenge, in this year’s report, 46% of respondents chose the security of 3rd party software as a significant challenge.

Organizations must comply with an average of 4.9 different standards.

Compliance is a significant driver in supply chain security. As software supply chain risks grow, governments and industry groups are responding with new guidelines and regulations. Respondents reported the need to comply with an average of almost five separate standards per organization. Many must comply with new regulatory requirements including the CISA Known Exploited Vulnerabilities directive, the Secure Software Development Framework (SSDF), and the EU Cyber Resilience Act.

78% plan to increase SBOM usage.

The software bill-of-materials (SBOM) is now a critical component of software supply chain security. An SBOM provides visibility into software ingredients and is a foundation for understanding software vulnerabilities and risks. While just under half of respondents currently leverage SBOMs, a large majority plan to increase SBOM use over the next 18 months.

Respondents worry about AI’s impact on software supply chain security.

A large majority of respondents are concerned about AI’s impact on software supply chain security, and as many as a third are very concerned. The highest concerns are with code tested with AI and code generated with AI or with Copilot tools. 

Let’s design an action plan

Join us on December 10, 2024 for a live discussion with VP of Security Josh Bressers on the latest trends. Hear practical steps for building a more resilient software supply chain. Register Now.

To minimize risk, avoid reputational damage, and protect downstream users and customers, software supply chain security must become a new practice for every organization that uses or builds software. SBOMs are a critical foundation of this new practice, providing visibility into the dependencies and risks of the software you use.  

Here are seven steps to take your software supply chain security to the next level:

  1. Assess your software supply chain maturity against best practices
  2. Identify key challenges and create a plan to make tangible improvements over the coming months.
  3. Develop a methodology to document and assess the impact of supply chain attacks on your organization, along with improvements to be made.
  4. Create a plan to generate, manage, and share SBOMs as a key pillar of your supply chain security initiative. Learn more with the Expert Guide on SBOMs in Cybersecurity and 6 Ways to Prevent SBOM sprawl
  5. Delve into existing and emerging compliance requirements and create a plan to automate compliance checks. Learn how to meet compliance standards like NIST, SSDF, and FedRAMP.
  6. Identify gaps in tooling and create plans to address the gaps. See how Anchore can help. Try open source tools like Syft for SBOM generation and Grype for vulnerability scanning as a good way to get started (see the example after this list).
  7. Create an organizational structure and define responsibilities to address software supply chain security and risk.
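
If you want to try step 6 right away, a minimal sketch of getting started with the open source tools looks like the following; the install location and image name are just examples:

$ curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
$ curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
$ syft ubuntu:latest -o spdx-json > ubuntu.spdx.json    # generate an SBOM for an image
$ grype sbom:./ubuntu.spdx.json                         # scan that SBOM for known vulnerabilities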

Tonight’s Movie: The Terminal (of your laptop)

A picture paints a thousand words, but a GIF shows every typo in motion. But it doesn’t have to! GIFs have long been the go-to in technical docs, capturing real-time terminal output and letting readers watch workflows unfold as if sitting beside you.

I recently needed to make some terminal GIFs, so I tried three of the best available tools, and here are my findings.

Requirements

We recently attended All Things Open, where a TV on our stand needed a rolling demo video. I wanted to add a few terminal usage examples for Syft, Grype, and Grant – our Open-Source, best-in-class container security tools. I tried a few tools to generate the GIFs, which I embedded in a set of Google Slides (for ease) and then captured and rendered as a video that played in a loop on a laptop running VLC.

To summarise, this was the intended flow:

Typing in a terminal → 
↳ Recording
↳ GIF
↳ Google Slides
↳ Video Capture
↳ VLC playlist
↳ Success 🎉

We decided to render it as a video to mitigate conference WiFi issues. Nobody wants to walk past your exhibitor stand and see a 404 or “Network Connectivity Problems” on the Jumbotron®️!

The goal was for attendees passing our stand to see the command-line utilities in action. It also allowed us to discuss the demos with interested conferencegoers without busting out a laptop and crouching around it. We just pointed to the screen as a terminal appeared and talked through it.

Below is an early iteration of what I was aiming for, taken as a frame grab from a video – hence the slight blur.

My requirements were for a utility which:

  • Records a terminal running commands
  • Runs on Linux and macOS because I use both
  • Reliably captures output from the commands being run
  • Renders out a high-quality GIF
  • Is preferably open source
  • Is actively maintained

The reason for requiring a GIF rather than a traditional video, such as MP4, is to embed the GIF easily in a Google Slides presentation. While I could create an MP4 and then use a video editor to cut together the videos, I wanted something simple and easily reproducible. I may use MP4s in other situations – such as posting to social media – so if a tool can export to that format easily, I consider that a bonus.

It is worth noting that Google Slides supports GIFs up to 1000 frames in length. So, if you have a long-running command captured at a high frame rate, this limit is easy to hit. If that is the case, perhaps render an MP4 and use the right tool for the job, a video editor.

“High quality” GIF is a subjective term, but I’m after something that looks pleasing (to me), doesn’t distract from the tool being demonstrated, and doesn’t visibly stutter.

Feature Summary

I’ve put the full summary up here near the top of the article to save wear & tear on your mouse wheel or while your magic mouse is upside down, on charge. The details are underneath the conclusion for those interested and equipped with a fully-charged mouse.

† asciinema requires an additional tool such as agg to convert the recorded output to a GIF.
◊ t-rec supports X11 on Linux, but currently does not support Wayland sessions.
* t-rec development appears to have stalled.

Conclusion

All three tools are widely used and work fine in many cases. Asciinema is often recommended because it’s straightforward to install, and almost no configuration is required. The resulting recordings can be published online and rendered on a web page.

While t-rec is interesting, as it records the actual terminal window, not just the session text (as asciinema does), it is a touch heavyweight. As such, with a 4fps frame rate, videos made with t-rec look jerky.

I selected vhs for a few reasons.

It runs easily on macOS and Linux, so I can create GIFs on my work or personal computer with the same tool. vhs is very configurable, supports higher frame rates than other tools, and is scriptable, making it ideal for creating GIFs for documentation in CI pipelines.

vhs being scriptable is, I think, the real superpower here. For example, vhs can be part of a documentation site build system. One configuration file can specify a particular font family, size and color scheme to generate a GIF suitable for embedding in documentation.

Another almost identical configuration file might use a different font size or color, which is more suitable for a social media post. The same commands will be run, but the color, font family, font size, and even GIF resolution can be different, making for a very flexible and reliable way to create a terminal GIF for any occasion!

vhs ships with a broad default theme set that matches typical desktop color schemes, such as the familiar purple-hue terminal on Ubuntu, as seen below. This GIF uses the “BlexMono Nerd Font Mono” font (a modified version of IBM Plex font), part of the nerd-fonts project.

If this GIF seems slow, that’s intentional. The vhs configuration can “type” at a configurable speed and slow the resulting captured output down (or speed it up).

There are also popular Catppuccin themes that are pretty appealing. The following GIF uses the “catppuccin-macchiato” theme with “Iosevka Term” font, which is part of the Iosevka project. I also added a PS1 environment variable to the configuration to simulate a typical console prompt.

vhs can also take a still screenshot during the recording, which can be helpful as a thumbnail image, or to capture a particular frame from the middle of the recording. Below is the final frame from the previous GIF.

Here is one of the final (non-animated) slides from the video. I tried to put as little as possible on screen simultaneously, just the title, video, and a QR code for more information. It worked well, with someone even asking how the terminal videos were made. This blog is for them.

I am very happy with the results from vhs, and will likely continue using it in documentation, and perhaps social posts – if I can get the font to a readable size on mobile devices.

Alternatives

I’m aware of OBS Studio and other screen (and window) recording tools that could be used to create an initial video, which could be converted into a GIF.

Are there other, better ways to do this?

Let me know on our community discourse, or leave a comment wherever you read this blog post.

Below are the details about each of the three tools I tested.


t-rec

t-rec is a “Blazingly fast terminal recorder that generates animated gif images for the web written in rust.” This was my first choice, as I had played with it before my current task came up.

I initially quite liked that t-rec recorded the entire terminal window, so when running on Linux, I could use a familiar desktop theme indicating to the viewer that the command is running on a Linux host. On a macOS host, I could use a native terminal (such as iTerm2) to hint that the command is run on an Apple computer.

However, I eventually decided this wasn’t that important at all. Especially given that vhs can be used to theme the terminal so it looks close to a particular host OS. Plus, most of the commands I’m recording are platform agnostic, producing the same output no matter what they’re running on.

t-rec Usage

  • Configure the terminal to be the size you require with the desired font and any other settings before you start t-rec.
  • Run t-rec.
$ t-rec --quiet --output grant

The terminal will clear, and recording will begin.

  • Type each command as you normally would.
  • Press CTRL+D to end recording.
  • t-rec will then generate the GIF using the specified name.
🎆 Applying effects to 118 frames (might take a bit)
💡 Tip: To add a pause at the end of the gif loop, use e.g. option `-e 3s`
🎉 🚀 Generating grant.gif
Time: ~9s
 alan@Alans-MacBook-Pro  ~ 

The output GIF will be written in the current directory by stitching together all the bitmap images taken during the recording. Note the recording below contains the entire terminal user interface and the content.

t-rec Benefits

t-rec records the video by taking actual bitmap screenshots of the entire terminal on every frame. So, if you’re keen on having a GIF that includes the terminal UI, including the top bar and other window chrome, then this may be for you.

t-rec Limitations

t-rec records at 4 frames per second, which may be sufficient but can look jerky with long commands. There is an unmerged draft PR to allow user-configurable recording frame rates, but it hasn’t been touched for a couple of years.

I found t-rec would frequently just stop adding frames to a GIF. So the resulting GIF would start okay, then randomly miss out most of the frames, abruptly end, and loop back to the start. I didn’t have time to debug why this happened, which got me looking for a different tool.

asciinema

“Did you try asciinema?” was a common question asked of me when I mentioned to fellow nerds what I was trying to achieve. Yes.

asciinema is the venerable Grand-daddy of terminal recording. It’s straightforward to install and set up, and has a very simple recording and publishing pipeline. Perhaps too simple.

When I wandered around the various exhibitor stands at All Things Open last week, it was obvious who spent far too long fiddling with these tools (me), and which vendors recorded a window, or published an asciinema, with some content blurred out.

One even had an ugly demo of our favorite child, grype (don’t tell syft I said that), in such a video! Horror of horrors!

asciinema doesn’t create GIFs directly but instead creates “cast” files, JSON formatted text representations of the session, containing both the user-entered text and the program output. A separate utility, agg (asciinema gif generator), converts the “cast” to a GIF. In addition, another tool, asciinema-edit, can be used to edit the cast file post-recording.

asciinema Usage

  • Start asciinema rec, and optionally specify a target file to save as.
asciinema rec ./grype.cast
  • Run commands.
  • Type exit when finished.
  • Play back the cast file

asciinema play ./grype.cast

  • Convert asciinema recording to GIF.
agg --font-family "BlexMono Nerd Font Mono" grype.cast grype.gif

Here’s the resulting GIF, using the above options. Overall, it looks fine, very much like my terminal appears. Some of the characters are missing or incorrectly displayed, however; for example, the animated braille characters that are used while grype is parsing the container image.

asciinema – or rather agg (the cast-to-GIF converter) has a few options for customizing the resulting video. There are a small number of themes, the ability to configure the window size (in rows/columns), font family, and size, and set various speed and delay-related options.

Overall, asciinema is very capable, fast, and easy to use. The upstream developers are currently porting it from Python to Rust, so I’d consider this an active project. But it wasn’t entirely giving me all the options I wanted. It’s still a useful utility to keep in your toolbelt.

vhs

vhs has a novel approach using ‘tape’ files which describe the recording as a sequence of Type, Enter and Sleep statements.

The initial tape file can be created with vhs record and then edited in any standard text editor to modify commands, choice of shell, sleep durations, and other configuration settings. The vhs cassette.tape command will configure the session, then run the commands in a virtual (hidden) terminal.

Once the end of the ‘tape’ is reached, vhs generates the GIF, and optionally, an MP4 video. The tape file can be iterated on to change the theme, font family, size, and other settings, then re-running vhs cassette.tape creates a whole new GIF.

vhs Usage

  • Create a .tape file with vhs record --shell bash > cassette.tape.
  • Run commands.
  • Type exit when finished.

vhs will write the commands and timings to the cassette.tape file, for example:

$ cat cassette.tape
Sleep 1.5s
Type "./grype ubuntu:latest"
Enter
Sleep 3s
  • Optionally edit the tape file
  • Generate the GIF
$ vhs cassette.tape
File: ./cassette.tape
Sleep 1.5s
Type ./grype ubuntu:latest
Enter 1
Sleep 3s
Creating ...
Host your GIF on vhs.charm.sh: vhs publish <file>.gif

Below is the resulting default GIF, which looks fantastic out of the box, even before playing with themes, fonts and prompts.

vhs Benefits

vhs is very configurable, with some useful supported commands in the .tape file. The support for themes, fonts, resolution and ‘special’ key presses makes it very flexible for scripting a terminal-based application recording.

vhs Limitations

vhs requires the tape author to specify how long to Sleep after each command – or assume the initial values created with vhs record are correct. vhs does not (yet) auto-advance when a command finishes. This may not be a problem if the command you’re recording has a reliable runtime. Still, it might be a problem if the duration of a command is dependent on prevailing conditions such as the network or disk performance.


What do you think? Do you like animated terminal output, or would you prefer a video, interactive tool, or just a plain README.md? Let me know on our community discourse, or leave a comment wherever you read this blog post.

Automate STIG Compliance with MITRE SAF: the Fastest Path to ATO

Trying to get your head around STIG (Security Technical Implementation Guides) compliance? Anchore is here to help. With the help of the MITRE Security Automation Framework (SAF), we’ll walk you through the quickest path to STIG compliance and, ultimately, the coveted Authority to Operate (ATO).

The goal for any company that aims to provide software services to the Department of Defense (DoD) is an ATO. Without this stamp of approval, your software will never get into the hands of the warfighters that need it most. STIG compliance is a needle that must be threaded on the path to ATO. Luckily, MITRE has developed and open-sourced SAF to smooth the often complex and time-consuming STIG compliance process.

We’ll get you up to speed on MITRE SAF and how it helps you achieve STIG compliance in this blog post, but before we jump straight into the content, be sure to bookmark our webinar with the Chief Architect of the MITRE Security Automation Framework (SAF), Aaron Lippold. Josh Bressers, VP of Security at Anchore, and Lippold provide a behind-the-scenes look at SAF and how it dramatically reduces the friction of the STIG compliance process.

What is the MITRE Security Automation Framework (SAF)?

The MITRE SAF is both a high-level cybersecurity framework and an umbrella that encompasses a set of security/compliance tools. It is designed to simplify STIG compliance by translating DISA (Defense Information Systems Agency) SRG (Security Requirements Guide) guidance into actionable steps. 

By following the Security Automation Framework, organizations can streamline and automate the hardened configuration of their DevSecOps pipeline to achieve an ATO (Authority to Operate).

The SAF offers four primary benefits:

  1. Accelerate Path to ATO: By streamlining STIG compliance, SAF enables organizations to get their applications into the hands of DoD operators faster. This acceleration is crucial for meeting operational demands without compromising on security standards.
  2. Establish Security Requirements: SAF translates SRGs and STIGs into actionable steps tailored to an organization’s specific DevSecOps pipeline. This eliminates ambiguity and ensures security controls are implemented correctly.
  3. Build Security In: The framework provides tooling that can be directly embedded into the software development pipeline. By automating STIG configurations and policy checks, it ensures that security measures are consistently applied, leaving no room for false steps.
  4. Assess and Monitor Vulnerabilities: SAF offers visualization and analysis tools that assist organizations in making informed decisions about their current vulnerability inventory. It helps chart a path toward achieving STIG compliance and ultimately an ATO.

The overarching vision of the MITRE SAF is to “implement evolving security requirements while deploying apps at speed.” In essence, it allows organizations to have their cake and eat it too—gaining the benefits of accelerated software delivery without letting cybersecurity risks grow unchecked.

How does MITRE SAF work?

MITRE SAF is segmented into 5 capabilities that map to specific stages of the DevSecOps pipeline or STIG compliance process:

  1. Plan
  2. Harden
  3. Validate
  4. Normalize
  5. Visualize

Let’s break down each of these capabilities.

Plan

There are hundreds of existing STIGs for products ranging from Microsoft Windows to Cisco Routers to MySQL databases. On the off chance that a product your team wants to use doesn’t have a pre-existing STIG, SAF’s Vulcan tool helps translate the application SRG into a tailored STIG that can then be used to achieve compliance.

Vulcan helps streamline the process of creating STIG-ready security guidance and the accompanying InSpec automated policy that confirms a specific instance of software is configured in a compliant manner.

Vulcan does this by modeling the STIG intent form and tailoring the applicable SRG controls into a finished STIG for an application. The finished STIG is then sent to DISA for peer review and formal publishing as a STIG. Vulcan allows the author to develop both human-readable instructions and machine-readable InSpec automated validation code at the same time.

Diagram of the process to map SRG controls to STIG guidelines via the MITRE SAF Vulcan tool; an automated conversion tool to speed up the STIG compliance process.

Harden

The hardening capability focuses on automating STIG compliance through the use of pre-built infrastructure configuration scripts. SAF hardening content allows organizations to:

  • Use their preferred configuration management tools: Chef Cookbooks, Ansible Playbooks, Terraform Modules, etc. are available as open source templates on MITRE’s GitHub page.
  • Share and collaborate: All hardening content is open source, encouraging community involvement and shared learning.
  • Coverage for the full development stack: Ensuring that every layer, from cloud infrastructure to applications, adheres to security standards.

Validate

The validation capability focuses on verifying that the hardening meets the applicable STIG compliance standard. These validation checks are automated via the SAF CLI tool, which incorporates the InSpec policies for a STIG. With SAF CLI, organizations can:

  • Automatically validate STIG compliance: By integrating SAF CLI directly into your CI/CD pipeline and invoking InSpec policy checks at every build; shifting security left by surfacing policy violations early.
  • Promote community collaboration: Like the hardening content, validation scripts are open source and accessible by the community for collaborative efforts.
  • Span the entire development stack: Validation—similar to hardening—isn’t limited to a single layer; it encompasses cloud infrastructure, platforms, operating systems, databases, web servers, and applications.
  • Incorporate manual attestation: To achieve comprehensive coverage of policy requirements that automated tools might not fully address.
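
As a small sketch of the validation step described above, an InSpec-based STIG baseline can be run in a pipeline and its results written out for later normalization; the profile name and target container are hypothetical:

$ inspec exec ./my-app-stig-baseline -t docker://payments-api-ci --reporter cli json:stig-results.json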

Normalize

Normalization addresses the challenge of interoperability between different security tools and data formats. SAF CLI performs double-duty by taking on the normalization function as well as validation. It is able to:

  • Translate data into OHDF: The OASIS Heimdall Data Format (OHDF) is an open standard that structures countless proprietary security metadata formats into a single universal format.
  • Leverage open source OHDF libraries: Organizations can use OHDF converters as libraries within their custom applications.
  • Automate data conversion: By incorporating SAF CLI into the DevSecOps pipeline, data is automatically standardized with each run.
  • Increase compliance efficiency: A single data format for all security data allows interoperability and facilitates efficient, automated STIG compliance.

Example: Below is an example of Burp Suite’s proprietary data format normalized to the OHDF JSON format:

Image of Burp Suite data format being mapped to MITRE SAF's OHDF to reduce manual data mapping and reduce time to STIG compliance.
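
If you are using SAF CLI for this, the normalization step for the Burp Suite example above looks roughly like the following; treat the exact converter name and flags as an assumption to verify against the SAF CLI documentation:

$ saf convert burpsuite2hdf -i burp_scan_results.xml -o burp_scan_results.ohdf.json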

Visualize

Visualization is critical for understanding security posture and making informed decisions. SAF provides an open source, self-hosted visualization tool named Heimdall. It ingests OHDF normalized security data and provides the data analysis tools to enable organizations to:

  • Aggregate security and compliance results: Compiling data into comprehensive rollups, charts, and timelines for a holistic view of security and compliance status.
  • Perform deep dives: Allowing teams to explore detailed vulnerability information to facilitate investigation and remediation, ultimately speeding up time to STIG compliance.
  • Guide risk reduction efforts: Visualization of insights help with prioritization of security and compliance tasks reducing risk in the most efficient manner.

How is SAF related to a DoD Software Factory?

A DoD Software Factory is the common term for a DevSecOps pipeline that meets the definition laid out in DoD Enterprise DevSecOps Reference Design. All software that ultimately achieves an ATO has to be built on a fully implemented DoD Software Factory. You can either build your own or use a pre-existing DoD Software Factory like the US Air Force’s Platform One or the US Navy’s Black Pearl.

As we saw earlier, MITRE SAF is a framework meant to help you achieve STIG compliance and is a portion of your journey towards an ATO. STIG compliance applies to both the software that you write as well as the DevSecOps platform that your software is built on. Building your own DoD Software Factory means committing to going through the ATO process and STIG compliance for the DevSecOps platform first then a second time for the end-user application.

Wrap-Up

The MITRE SAF is a huge leg up for modern, cloud-native DevSecOps software vendors that are currently navigating the labyrinth towards ATO. By providing actionable guidance, automation tooling, and a community-driven approach, SAF dramatically reduces the time to ATO. It bridges the gap between the speed of DevOps software delivery and secure, compliant applications ready for critical DoD missions with national security implications. 

Embracing SAF means more than just meeting regulatory requirements; it’s about building a resilient, efficient, and secure development pipeline that can adapt to evolving security demands. In an era where cybersecurity threats are evolving just as rapidly as software, leveraging frameworks like MITRE SAF is not just an efficient path to compliance—it’s essential for sustained success.

Grype Support for Azure Linux 3 released

On September 26, 2024 the OSS team at Anchore released general support for Azure Linux 3, Microsoft’s new cloud-focused Linux distribution. This blog post will share some of the technical details of what goes into supporting a new Linux distribution in Grype.

Step 1: Make sure Syft identifies the distro correctly

In this case, this step happened automatically. Syft is pretty smart about parsing /etc/os-release in an image, and Microsoft has labeled Azure Linux in a standard way. Even before this release, if you’d run the following command, you would see Azure Linux 3 correctly identified.

syft -q -o json mcr.microsoft.com/azurelinux/base/core:3.0 | jq .distro
{
  "prettyName": "Microsoft Azure Linux 3.0",
  "name": "Microsoft Azure Linux",
  "id": "azurelinux",
  "version": "3.0.20241005",
  "versionID": "3.0",
  "homeURL": "https://aka.ms/azurelinux",
  "supportURL": "https://aka.ms/azurelinux",
  "bugReportURL": "https://aka.ms/azurelinux"
}

Step 2: Build a vulnerable image

You can’t test a vulnerability scanner without an image that has known vulnerabilities in it. So just about the first thing to do is make a test image that is known to have some problems.

In this case, we started with Azure’s base image and intentionally installed an old version of the golang RPM:

FROM mcr.microsoft.com/azurelinux/base/core:3.0@sha256:9c1df3923b29a197dc5e6947e9c283ac71f33ef051110e3980c12e87a2de91f1

RUN tdnf install -y golang-1.22.5-1.azl3

This has a couple of CVEs against it, so we can use it to test whether Grype is working end to end.

$ docker build -t azuretest:latest .
$ docker image save azuretest:latest > azuretest.tar
$ grype ./azuretest.tar
  Parsed image sha256:49edd6d1eff19d2b34c27a6ad11a4a8185d2764ae1182c17c563a597d173b8
  Cataloged contents e649de5ff4361e49e52ecdb8fe8acb854cf064247e377ba92669e7a33a228a00
   ├──  Packages                        [122 packages]
   ├──  File digests                    [11,141 files]
   ├──  File metadata                   [11,141 locations]
   └──  Executables                     [426 executables]
  Scanned for vulnerabilities     [84 vulnerability matches]
   ├── by severity: 3 critical, 57 high, 3 medium, 0 low, 0 negligible (21 unknown)
   └── by status:   84 fixed, 0 not-fixed, 0 ignored
NAME          INSTALLED      FIXED-IN         TYPE       VULNERABILITY   SEVERITY
coreutils     9.4-3.azl3     0:9.4-5.azl3     rpm        CVE-2024-0684   Medium
curl          8.8.0-1.azl3   0:8.8.0-2.azl3   rpm        CVE-2024-6197   High
curl-libs     8.8.0-1.azl3   0:8.8.0-2.azl3   rpm        CVE-2024-6197   High
expat         2.6.2-1.azl3   0:2.6.3-1.azl3   rpm        CVE-2024-45492  High
expat         2.6.2-1.azl3   0:2.6.3-1.azl3   rpm        CVE-2024-45491  High
expat         2.6.2-1.azl3   0:2.6.3-1.azl3   rpm        CVE-2024-45490  High
expat-libs    2.6.2-1.azl3   0:2.6.3-1.azl3   rpm        CVE-2024-45492  High
expat-libs    2.6.2-1.azl3   0:2.6.3-1.azl3   rpm        CVE-2024-45491  High
expat-libs    2.6.2-1.azl3   0:2.6.3-1.azl3   rpm        CVE-2024-45490  High
golang        1.22.5-1.azl3  0:1.22.7-2.azl3  rpm        CVE-2023-29404  Critical
golang        1.22.5-1.azl3  0:1.22.7-2.azl3  rpm        CVE-2023-29402  Critical
golang        1.22.5-1.azl3  0:1.22.7-2.azl3  rpm        CVE-2022-41722  High
krb5          1.21.2-1.azl3  0:1.21.3-1.azl3  rpm        CVE-2024-37371  Critical

Normally, we like to build test images with CVEs from 2021 or earlier against them because this set of vulnerabilities changes slowly. But hats off to the team at Microsoft. We could not find an easy way to get a three-year-old vulnerability into their distro. So, in this case, the team did some behind-the-scenes work to make it easier to add test images that only have newer vulnerabilities as part of this release.

Step 3: Write the vunnel provider

Vunnel is Anchore’s “vulnerability funnel,” the open-source project that downloads vulnerability data from many different sources, then collects and normalizes it so that Grype can match against it. This step was pretty straightforward because Microsoft publishes complete and up-to-date OVAL XML, so the Vunnel provider can just download it, parse it into our own format, and pass it along.
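
To make the download-parse-normalize flow concrete, here is a minimal Python sketch using only the standard library. It is purely illustrative: the feed URL, the element names it looks for, and the output shape are assumptions for the example, not Vunnel’s actual provider interface or Microsoft’s exact OVAL schema.

# Illustrative sketch of a "funnel" step: download an OVAL XML feed, pull a few
# fields out of each definition, and emit normalized records.
# NOTE: the URL and field names below are placeholders/assumptions, not
# Vunnel's real provider API or Microsoft's exact schema.
import json
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/azurelinux-3.0-oval.xml"  # hypothetical URL

def fetch_oval(url: str) -> ET.Element:
    with urllib.request.urlopen(url) as resp:
        return ET.fromstring(resp.read())

def local_name(tag: str) -> str:
    # OVAL documents are heavily namespaced; compare on the local tag name only.
    return tag.rsplit("}", 1)[-1]

def normalize(root: ET.Element) -> list:
    records = []
    for definition in root.iter():
        if local_name(definition.tag) != "definition":
            continue
        title = severity = None
        cves = []
        for child in definition.iter():
            name = local_name(child.tag)
            if name == "title":
                title = (child.text or "").strip()
            elif name == "severity":
                severity = (child.text or "").strip()
            elif name == "reference":
                cves.append(child.get("ref_id"))
        records.append({"title": title, "severity": severity, "cves": cves})
    return records

if __name__ == "__main__":
    print(json.dumps(normalize(fetch_oval(FEED_URL)), indent=2))

A real provider does considerably more (caching, schema handling, and emitting records in Vunnel’s normalized format), but the shape of the work is the same: fetch the OVAL document, walk its definitions, and emit normalized records.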

Step 4: Wire it up in Grype, and scan away

Now that Syft identifies the distro, we have test images in our CI/CD pipelines to make sure we don’t regress, and Vunnel is downloading the Azure Linux 3 vulnerability data from Microsoft, we’re ready to release the Grype change. In this case, it was a simple change telling Grype where to look in its database for vulnerabilities affecting the new distro.

Conclusion

There are two big upshots of this post:

First, anyone running Grype v0.81.0 or later can scan images built from Azure Linux 3 and get accurate vulnerability information today, for free.

Second, Anchore’s tools make it possible to add a new Linux distro to Syft and Grype in just a few pull requests. All the work we did for this support was open source – you can go read the pull requests on GitHub if you’d like (vunnel, grype-db, grype, test-images). And that means that if your favorite Linux distribution isn’t covered yet, you can let us know or send us a PR.

If you’d like to discuss any topics this post raises, join us on discourse.

Introducing Anchore Data Service and Anchore Enterprise 5.10

We are thrilled to announce the release of Anchore Enterprise 5.10, our tenth release of 2024. This update brings two major enhancements that will elevate your experience and bolster your security posture: the new Anchore Data Service (ADS) and expanded AnchoreCTL ecosystem coverage. 

With ADS, we’ve built a fast and reliable solution that reduces time spent by DevSecOps teams debugging intermittent network issues from flaky services that are vital to software supply chain security.

On top of that, we have buffed our software composition analysis (SCA) scanner’s ecosystem coverage (e.g., C++, Swift, Elixir, R, etc.) for all Anchore customers. To do this we embedded Syft, our popular open source SCA/SBOM (software bill of materials) generator, directly into Anchore Enterprise.

It’s been a fall of big releases at Anchore and we’re excited to continue delivering value to our loyal customers. Read on to get all of the gory details >>

Announcing the Anchore Data Service

Before, customers ran the Anchore Feed Service within their environment to pull data feeds into their Anchore Enterprise deployment. To get an idea of what this looked like, see the architecture diagram of Anchore Enterprise prior to version 5.10:

Originally we did this to give customers more control over their environment. Unfortunately, this wasn’t without its issues. The data feeds are provided by the community, which means the services were designed to be accessible but cost-efficient. As a result, they were unreliable and frequently suffered availability issues.

We only have to stretch our memory back to the spring to recall an example that made national headlines. The National Vulnerability Database (NVD) ran into funding issues. This impacted both the analysis of newly published vulnerabilities AND the availability of their API. This created significant friction for Anchore customers—not to mention the entirety of the software industry.

Now, Anchore is running its own enterprise-grade service, named Anchore Data Service (ADS). It is a replacement for the former feed service. ADS aggregates all of the community data feeds, enriches the data (with proprietary threat data) and packages it for customers, all with the service availability guarantee expected of an enterprise service.

The new architecture with ADS as the intermediary is illustrated below:

As a bonus for our customers running air-gapped deployments of Anchore Enterprise, there is no more need to run a second deployment of Anchore Enterprise in a DMZ to pull down the data feeds. Instead, a single file is pulled from ADS and transferred to a USB thumb drive. From there, a single CLI command updates your air-gapped deployment of Anchore Enterprise.

Increased AnchoreCTL Ecosystem Coverage

We have increased the number of supported ecosystems (e.g., C++, Swift, Elixir, R, etc.) in Anchore Enterprise. This improves coverage and increases the likelihood that all of your organization’s applications can be scanned and protected by Anchore Enterprise.

More importantly, we have completely re-architected the process for how Anchore Enterprise supports new ecosystems. By integrating Syft—Anchore’s open source SBOM generation tool—directly into AnchoreCTL, Anchore’s customers will now get access to new ecosystem support as they are merged into Syft’s codebase.

Before, Syft and AnchoreCTL were somewhat separate, which caused AnchoreCTL’s support for new ecosystems to lag behind Syft’s. Now, they are fully integrated. This enables all of Anchore’s enterprise and public sector customers to take full advantage of the open source community’s development velocity.

Full list of supported ecosystems

Below is a complete list of all supported ecosystems by both Syft and AnchoreCTL (as of Anchore Enterprise 5.10; see our docs for most current info):

  • Alpine (apk)
  • C (conan)
  • C++ (conan)
  • Dart (pubs)
  • Debian (dpkg)
  • Dotnet (deps.json)
  • Objective-C (cocoapods)
  • Elixir (mix)
  • Erlang (rebar3)
  • Go (go.mod, Go binaries)
  • Haskell (cabal, stack)
  • Java (jar, ear, war, par, sar, nar, native-image)
  • JavaScript (npm, yarn)
  • Jenkins Plugins (jpi, hpi)
  • Linux kernel archives (vmlinuz)
  • Linux kernel modules (ko)
  • Nix (outputs in /nix/store)
  • PHP (composer)
  • Python (wheel, egg, poetry, requirements.txt)
  • Red Hat (rpm)
  • Ruby (gem)
  • Rust (cargo.lock)
  • Swift (cocoapods, swift-package-manager)
  • WordPress plugins

After you update to Anchore Enterprise 5.10, the SBOM inventory will now display all of the new ecosystems. Any SBOMs that have been generated for a particular ecosystem will show up at the top. The screenshot below gives you an idea of what this will look like:

Wrap-Up

Anchore Enterprise 5.10 marks a new chapter in providing reliable, enterprise-ready security tooling for modern software development. The introduction of the Anchore Data Service ensures that you have consistent and dependable access to critical vulnerability and exploit data, while the expanded ecosystem support means that no part of your tech stack is left unscrutinized for latent risk. Upgrade to the latest version and experience these new features for yourself.

To update and leverage these new features, check out our docs, reach out to your Customer Success Engineer, or contact our support team. Your feedback is invaluable to us, and we look forward to continuing to support your organization’s security needs.

We are offering all of our product updates as a new quarterly product update webinar series. Watch the fall webinar update in the player below to get all of the juicy tidbits from our product team.

Who watches the watchmen? Introducing yardstick validate

Grype scans images for vulnerabilities, but who tests Grype? If Grype does or doesn’t find a given vulnerability in a given artifact, is it right? In this blog post, we’ll dive into yardstick, an open-source tool by Anchore for comparing the results of different vulnerability scans, both against each other and against data hand-labeled by security researchers.

Quality Gates

In Anchore’s CI pipelines, we have a concept we call a “quality gate.” A quality gate’s job is to ensure that each change to each of our tools results in matching that is at least as good as it was before the change. To talk about quality gates, we need a couple of terms:

  • Reference tool configuration, or just “reference tool” for short – this is an invocation of the tool (Grype, in this case) as it works today, without the change we are considering making
  • Candidate tool configuration, or just “candidate tool” for short – this is an invocation of the tool with the change we’re trying to verify. Maybe we changed Vunnel, or the Grype source code itself, for example.
  • Test images are images that Anchore has built that are known to have vulnerabilities
  • Label data is data our researchers have labeled, essentially writing down, “for image A, for package B at version C, vulnerability X is really present (or is not really present)”

The important thing about the quality gate is that it’s an experiment – it changes only one thing to test the hypothesis. The hypothesis is always, “the candidate tool is as good or better than the reference tool,” and the one thing that’s different is the difference between the candidate tool and the reference tool. For example, if we’re testing a code change in Grype itself, the only difference between reference tool and candidate tool is the code change; the database of vulnerabilities will be the same for both runs. On the other hand, if we’re testing a change to how we build the database, the code for both Grypes will be the same, but the database used by the candidate tool will be built by the new code.

Now let’s talk through the logic of a quality gate:

  1. Run the reference tool and the candidate tool to get reference matches and candidate matches
  2. If both tools found nothing, the test is invalid. (Remember we’re scanning images that intentionally have vulnerabilities to test a scanner.)
  3. If both tools find exactly the same vulnerabilities, the test passes, because the candidate tool can’t be worse than the reference tool if they find the same things
  4. Finally, if both the reference tool and the candidate tool find at least one vulnerability, but not the same set of vulnerabilities, then we need to do some matching math

Matching Math: Precision, Recall, and F1

The first math problem we do is easy: Did we add too many False Negatives? (Usually one is too many.) For example, if the reference tool found a vulnerability, and the candidate tool didn’t, and the label data says it’s really there, then the gate should fail – we can’t have a vulnerability matcher that misses things we know about!

The second math problem is also pretty easy: did we leave too many matches unlabeled? If we left too many matches unlabeled, we can’t do a comparison: if the reference tool and the candidate tool both found a lot of vulnerabilities, but we don’t know whether those matches are really present or not, we can’t say which set of results is better. So the gate should fail, and the engineer making the change will go and label more vulnerabilities.

Now, we get to the harder math. Let’s say the reference tool and the candidate tool both find vulnerabilities, but not exactly the same ones, and the candidate tool doesn’t introduce any false negatives. Let’s also say the candidate tool introduces a false positive or two, but fixes some false positives and false negatives that the reference tool was wrong about. Is it better? Now we have to borrow some math from science class:

  • Precision is the fraction of matches that are true positives. So if one of the tools found 10 vulnerabilities, and 8 of them are true positives, the precision is 0.8.
  • Recall is the fraction of vulnerabilities that the tool found. So if there were 10 vulnerabilities present in the image and Grype found 9 of them, the recall is 0.9.
  • F1 score is a calculation based on precision and recall that rewards both high precision and high recall, while penalizing low precision and low recall. I won’t type out the formula here, but you can read about it on Wikipedia, see it calculated in yardstick’s source code, or skim the short sketch after this list.
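
For readers who want the gist of the arithmetic without leaving the page, here is a minimal, self-contained Python sketch. It is illustrative only, not yardstick’s actual implementation; the CVE IDs are made up to mirror the example numbers above.

# Minimal illustration of precision, recall, and F1 for a vulnerability scan.
# "found" is what the tool reported; "truth" is what the label data says is
# really present. This is not yardstick's code, just the underlying arithmetic.

def precision_recall_f1(found: set, truth: set):
    true_positives = len(found & truth)  # matches that are really present
    precision = true_positives / len(found) if found else 0.0
    recall = true_positives / len(truth) if truth else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: the tool reported 10 matches, 8 of which are real (2 false positives),
# and missed 2 of the 10 vulnerabilities that are truly present.
found = {f"CVE-2024-{i}" for i in range(10)}     # CVE-2024-0 .. CVE-2024-9
truth = {f"CVE-2024-{i}" for i in range(2, 12)}  # CVE-2024-2 .. CVE-2024-11
print(precision_recall_f1(found, truth))         # (0.8, 0.8, ~0.8)

In a quality gate, the candidate tool’s F1 score is compared against the reference tool’s, and the gate fails if it regresses by more than an allowed threshold (the max-f1-regression value in the YAML below).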

So what’s new in yardstick

Recently, the Anchore OSS team released the yardstick validate subcommand. This subcommand encapsulates the above work in a single command, centralizing a bunch of Python test code that had been spread out over the different OSS repositories.

Now, to add a quality gate with a set of images, we just need to add some YAML like this:

pr_vs_latest_via_sbom_2022:
    description: "same as 'pr_vs_latest_via_sbom', but includes vulnerabilities from 2022 and before, instead of 2021 and before"
    max_year: 2022
    validations:
      - max-f1-regression: 0.1 # allowed to regress 0.1 on f1 score
        max-new-false-negatives: 10
        max-unlabeled-percent: 0
        max_year: 2022
        fail_on_empty_match_set: false
    matrix:
      images:
        - docker.io/anchore/test_images:azurelinux3-63671fe@sha256:2d761ba36575ddd4e07d446f4f2a05448298c20e5bdcd3dedfbbc00f9865240d
      tools:
        - name: syft
          # note: we want to use a fixed version of syft for capturing all results (NOT "latest")
          version: v0.98.0
          produces: SBOM
          refresh: false
        - name: grype
          version: path:../../+import-db=db.tar.gz
          takes: SBOM
          label: candidate # is candidate better than the current baseline?
        - name: grype
          version: latest+import-db=db.tar.gz
          takes: SBOM
          label: reference # this run is the current baseline

We think this change will make it easier to contribute to Grype and Vunnel. We know it helped out in the recent work to release Azure Linux 3 support.

If you’d like to discuss any topics this post raises, join us on discourse.

Preparing for a critical vulnerability

One morning, you wake up and see a tweet like the one above. The immediate response is often panic. This sounds bad; it probably affects everyone, and nobody knows for certain what to do next. Eventually, the panic subsides, but we still have a problem that needs to be dealt with. So the question to ask is: What can we do?

Don’t panic

A vague statement about a situation that will apparently affect everyone sounds like a problem we can’t possibly prepare for. But waiting around is generally the worst option in situations like this, and there are some things we can do to help ourselves out.

One of the biggest challenges in modern infrastructure is just understanding what you have. This sounds silly, but it’s a tough problem because of how most software is built now. You depend on a few open-source projects, and those projects depend on a few other open-source projects, which rely on even more open-source projects. And before you know it, you have 300 open-source projects instead of the 6 you think you installed.

Our goal is to create an inventory of all our software. With an accurate, up-to-date inventory, you don’t have to wonder whether you’re running some random application, like CUPS; you will know beyond a reasonable doubt. Knowing what software you do (or don’t) have deployed can bring an amazing amount of peace of mind.

This is not new

This was the same story during the Log4j and xz emergencies. What induced panic in everyone wasn’t the vulnerabilities themselves but the scramble to find where those libraries were deployed. In many instances, we observed engineers manually connecting to servers with SSH and dissecting container images.

These security emergencies will never end, but they all play out similarly. There is a gap between when something is announced and when good, actionable guidance appears. Once the security community comes to a better understanding of the issue, we can figure out the best way to deal with the problem. That could mean updating packages, adjusting firewall rules, or changing a configuration option.

While we wait for the best guidance, what if we spent that time going through our software inventory? When Log4Shell happened, almost everyone spent the first few days or weeks (or months) just figuring out whether they had Log4j anywhere. If you have an inventory, those first few days can be spent putting together a plan for applying the guidance you know is coming. It’s a much nicer way to spend the time than frantically searching!

The inventory

Creating an inventory sounds like it shouldn’t be that hard. How much software do you REALLY have? In practice it is hard, because software poses a unique challenge: there are nearly infinite ways to build and assemble any given application. Do you have OpenSSL as an operating system package? Or is it a library in a random directory? Maybe it’s statically compiled into the binary. Maybe a copy is downloaded off the internet when the application starts. Maybe it’s all of these … at the same time.

This complexity is taken to a new level when you consider how many computers, servers, containers, and apps are deployed. The scale means automation is the only way we can do this. Humans cannot handcraft an inventory. They are too slow and make too many mistakes for this work, but robots are great at it!

But the automation we have today isn’t perfect. It’s early days for many of these scanners and inventory formats (such as a Software Bill of Materials or SBOM). We must grasp what possible blind spots our inventories may have. For example, some scanners do a great job finding operating system packages but aren’t as good at finding Java archives (jar files). This is part of what makes the current inventory process difficult. The tooling is improving at an impressive rate; don’t write anything off as too incomplete. It will change and get better in the future.

Enter the SBOM

Now that we have mentioned SBOMs, we should briefly explain how they fit into this inventory universe. An SBOM does nothing by itself; it’s just a file format for capturing information, such as a software inventory.

Anchore developers have written plenty over the years about what an SBOM is, but here is the tl;dr:

An SBOM is a detailed list of all software project components, libraries, and dependencies. It serves as a comprehensive inventory that helps understand the software’s structure and the origins of its components.

An SBOM in your project enhances security by quickly identifying and mitigating vulnerabilities in third-party components. Additionally, it ensures compliance with regulatory standards and provides transparency, which is essential for maintaining trust with stakeholders and users.

An example

To explain what all this looks like and some of the difficulties, let’s go over an example using the eclipse-temurin Java runtime container image. It would be very common for a developer to build on top of this image. It also shows many of the challenges in trying to pin down a software inventory.

The Dockerfile we’re going to reference can be found on GitHub, and the container image can be found on Docker Hub.

The first observation is that this container uses Ubuntu as the underlying container image.

This is great: Ubuntu has a very nice packaging system, and it’s no trouble to see what’s installed. We can do this easily with Syft.

bress@anchore   ~ syft ubuntu:24.04
  Parsed image sha256:61b2756d6fa9d6242fafd5b29f674404779be561db2d0bd932aa3640ae67b9e1
  Cataloged contents 74f92a6b3589aa5cac6028719aaac83de4037bad4371ae79ba362834389035aa
   ├──  Packages                        [91 packages]
   ├──  File digests                    [2,259 files]
   ├──  File metadata                   [2,259 locations]
   └──  Executables                     [722 executables]
NAME                 VERSION                      TYPE
apt                  2.7.14build2                 deb
base-files           13ubuntu10.1                 deb
base-passwd          3.6.3build1                  deb
bash                 5.2.21-2ubuntu4              deb
bsdutils             1:2.39.3-9ubuntu6.1          deb
coreutils            9.4-3ubuntu6                 deb
dash                 0.5.12-6ubuntu5              deb
debconf              1.5.86ubuntu1                deb
debianutils          5.17build1                   deb
diffutils            1:3.10-1build1               deb
dpkg                 1.22.6ubuntu6.1              deb
e2fsprogs            1.47.0-2.4~exp1ubuntu4.1     deb
findutils            4.9.0-5build1                deb
gcc-14-base          14-20240412-0ubuntu1         deb
gpgv                 2.4.4-2ubuntu17              deb
grep                 3.11-4build1                 deb
gzip                 1.12-1ubuntu3                deb
hostname             3.23+nmu2ubuntu2             deb
init-system-helpers  1.66ubuntu1                  deb

There has been nothing exciting so far. But if we look a little deeper at the eclipse temurin Dockerfile, we see it’s installing the Java JDK using wget. That’s not something we’ll find just by looking at Ubuntu packages.

If we scan the eclipse-temurin image with Syft, we can see a few different types of packages installed.

bress@anchore ~ syft eclipse-temurin:8u422-b05-jre-noble
  Parsed image sha256:d2c2442dea2a2b1164bd6dd39af673db2215ff680910aff7417432b00a3c8e4d
  Cataloged contents 805b45dee2c503f1cca36e1ecc6e8625538592e2db32cc04e317a246fb86d0fc
   ├──  Packages                        [142 packages]
   ├──  File digests                    [3,856 files]
   ├──  File metadata                   [3,856 locations]
   └──  Executables                     [809 executables]
NAME                 VERSION                            TYPE

hostname             3.23+nmu2ubuntu2                   deb
init-system-helpers  1.66ubuntu1                        deb
jaccess              UNKNOWN                            java-archive
jce                  1.8.0_422                          java-archive
jfr                  1.8.0_422                          java-archive
jsse                 1.8.0_422                          java-archive
libacl1              2.3.2-1build1                      deb
libapt-pkg6.0t64     2.7.14build2                       deb
libassuan0           2.5.6-1build1                      deb
libattr1             1:2.5.2-1build1                    deb
libaudit-common      1:3.1.2-2.1build1                  deb

The jdk and jre are binaries in the image, as are some Java archives. This is a gotcha to watch for when you’re building an inventory: many inventories and scanners only look for packages installed by a package manager, not binaries and other files dropped onto the system. In a perfect world, our SBOM tells us details about everything in the image, not just one package type.

At this point, you can imagine a developer adding more things to the container: code they wrote, Java Archives, data files, and maybe even a few more binary files, probably installed with wget or curl.

What next

This sounds pretty daunting, but it’s not that hard to start building an inventory. You don’t need a fancy system. The easiest way is to find an open source SBOM generator, like Syft, and put the SBOMs in a directory. It’s not perfect, but even searching through those files is faster than manually finding every version of CUPS in your infrastructure.
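
To make that concrete, here is a short Python sketch that searches a directory of Syft JSON SBOMs for a package by name. It assumes each file was produced with syft <image> -o json and therefore has Syft’s top-level artifacts list; if you store SPDX or CycloneDX documents instead, the field names will differ.

# Search a directory of Syft JSON SBOMs for a package by name.
# Assumes each file was generated with `syft <image> -o json`; the
# "artifacts" / "name" / "version" fields below reflect that format and
# will differ if you store SPDX or CycloneDX documents instead.
import json
import sys
from pathlib import Path

def find_package(sbom_dir: str, needle: str) -> None:
    for path in sorted(Path(sbom_dir).glob("*.json")):
        doc = json.loads(path.read_text())
        for artifact in doc.get("artifacts", []):
            name = artifact.get("name", "")
            if needle.lower() in name.lower():
                print(f"{path.name}: {name} {artifact.get('version', '?')}")

if __name__ == "__main__":
    # e.g. `python find_package.py ./sboms cups`
    find_package(sys.argv[1], sys.argv[2])

Even this much, a directory of JSON files and a ten-line script, turns “do we run CUPS anywhere?” into a one-minute query instead of a multi-day scramble.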

Once you have an initial inventory, you can investigate more complete solutions. There are countless open source projects, products (such as Anchore Enterprise), and services that can help here. Whatever you choose, don’t expect to go from zero to complete overnight. Big projects need immense patience.

It’s like the old proverb that the best time to plant a tree was twenty years ago; the second best time is now. The best time to start an inventory system was a decade ago; the second best time is now.

If you’d like to discuss any topics raised in this post, join us on this discourse thread.

Compliance Requirements for DISA’s Security Technical Implementation Guides (STIGs)

In the rapidly modernizing landscape of cybersecurity compliance, evolving to a continuous compliance posture is more critical than ever—particularly for organizations involved with the Department of Defense (DoD) and other government agencies. At the heart of the DoD’s modern approach to software development is the DoD Enterprise DevSecOps Reference Design, commonly implemented as a DoD Software Factory.

A key component of this framework is adhering to the Security Technical Implementation Guides (STIGs) developed by the Defense Information Systems Agency (DISA). STIG compliance within the DevSecOps pipeline not only accelerates the delivery of secure software but also embeds robust security practices directly into the development process, safeguarding sensitive data and reinforcing national security.

This comprehensive guide will walk you through what STIGs are, who should care about them, the levels of STIG compliance, key categories of STIG requirements, how to prepare for the STIG compliance process, and the tools available to automate STIG implementation and maintenance.

What are STIGs and who should care?

Understanding DISA and STIGs

The Defense Information Systems Agency (DISA) is the DoD agency responsible for delivering information technology (IT) support to ensure the security of U.S. national defense systems. To help organizations meet the DoD’s rigorous security controls, DISA develops Security Technical Implementation Guides (STIGs).

STIGs are configuration standards that provide prescriptive guidance on how to secure operating systems, network devices, software, and other IT systems. They serve as a secure configuration standard to harden systems against cyber threats.

For example, a STIG for the open source Apache web server would specify that encryption is enabled for all traffic (incoming or outgoing). This would require generating SSL/TLS certificates in the correct location on the server, updating the server’s configuration file to reference this certificate, and re-configuring the server to serve traffic from a secure port rather than the default insecure port.

Who should care about STIG compliance?

STIG compliance is mandatory for any organization that operates within the DoD network or handles DoD information. This includes:

  • DoD Contractors and Vendors: Companies providing products or services to the DoD—a.k.a. the defense industrial base (DIB)—must ensure their systems comply with STIG requirements.
  • Government Agencies: Federal agencies interfacing with the DoD need to adhere to applicable STIGs.
  • DoD Information Technology Teams: IT professionals within the DoD responsible for system security must implement STIGs.

Connection to the RMF and NIST SP 800-53

The Risk Management Framework (RMF)—more formally NIST 800-37—is a framework that integrates security and risk management into IT systems as they are being developed. The STIG compliance process outlined below is directly integrated into the higher-level RMF process. As you follow the RMF, the individual steps of STIG compliance will be completed in turn.

STIGs are also closely connected to the NIST 800-53, colloquially known as the “Control Catalog”. NIST 800-53 outlines security and privacy controls for all federal information systems; the controls are not prescriptive about the implementation, only the best practices and outcomes that need to be achieved. 

As DISA developed the STIG compliance standard, they started with the NIST 800-53 controls and then “tailored” them to meet the needs of the DoD; these customized security best practices are known as Security Requirements Guides (SRGs). In order to remove all ambiguity around how to meet these higher-level best practices, STIGs were created with implementation-specific instructions.

For example, an SRG will mandate that all systems utilize a cybersecurity best practice such as role-based access control (RBAC) to prevent users without the correct privileges from accessing certain systems. A STIG, on the other hand, will detail exactly how to configure an RBAC system to meet the highest security standards.

Levels of STIG Compliance

The DISA STIG compliance standard uses Severity Category Codes to classify vulnerabilities based on their potential impact on system security. These codes help organizations prioritize remediation efforts. The three Severity Category Codes are:

  1. Category I (Cat I): These are the highest risk vulnerabilities, allowing an attacker immediate access to a system or network or allowing superuser access. Due to their high-risk nature, these vulnerabilities must be addressed immediately.
  2. Category II (Cat II): These vulnerabilities provide information with a high potential of giving access to intruders. These findings are considered a medium risk and should be remediated promptly.
  3. Category III (Cat III): These vulnerabilities constitute the lowest risk, providing information that could potentially lead to compromise. Although not as pressing as Cat I and Cat II issues, it is still important to address these vulnerabilities to minimize risk and enhance overall security.

Understanding these categories is crucial in the STIG process, as they guide organizations in prioritizing remediation of vulnerabilities.

Key categories of STIG requirements

Given the extensive range of technologies used in DoD environments, there are hundreds of STIGs applicable to different systems, devices, applications, and more. While we won’t list all STIG requirements here, it’s important to understand the key categories and who they apply to.

1. Operating System STIGs

Applies to: System Administrators and IT Teams managing servers and workstations

Examples:

  • Microsoft Windows STIGs: Provides guidelines for securing Windows operating systems.
  • Linux STIGs: Offers secure configuration requirements for various Linux distributions.

2. Network Device STIGs

Applies to: Network Engineers and Administrators

Examples:

  • Network Router STIGs: Outlines security configurations for routers to protect network traffic.
  • Network Firewall STIGs: Details how to secure firewall settings to control access to networks.

3. Application STIGs

Applies to: Software Developers and Application Managers

Examples:

  • Generic Application Security STIG: Outlines the security best practices required to be STIG compliant.
  • Web Server STIG: Provides security requirements for web servers.
  • Database STIG: Specifies how to secure database management systems (DBMS).

4. Mobile Device STIGs

Applies to: Mobile Device Administrators and Security Teams

Examples:

  • Apple iOS STIG: Guides securing of Apple mobile devices used within the DoD.
  • Android OS STIG: Details security configurations for Android devices.

5. Cloud Computing STIGs

Applies to: Cloud Service Providers and Cloud Infrastructure Teams

Examples:

  • Microsoft Azure SQL Database STIG: Offers security requirements for Azure SQL Database cloud service.
  • Cloud Computing OS STIG: Details secure configurations for any operating system offered by a cloud provider that doesn’t have a specific STIG.

Each category addresses specific technologies and includes a STIG checklist to ensure all necessary configurations are applied. 

You can view an example of a STIG checklist for “Application Security and Development” by following this link.

How to Prepare for the STIG Compliance Process

Achieving DISA STIG compliance involves a structured approach. Here are the stages of the STIG process and tips to prepare:

Stage 1: Identifying Applicable STIGs

With hundreds of STIGs relevant to different organizations and technology stacks, this step should not be underestimated. First, conduct an inventory of all systems, devices, applications, and technologies in use. Then, review the complete list of STIGs to match each to your inventory to ensure that all critical areas requiring secure configuration are addressed. This step is essential to avoiding gaps in compliance.

Tip: Use automated tools to scan your environment then match assets to relevant STIGs.

Stage 2: Implementation

After you’ve mapped your technology to the corresponding STIGs, the process of implementing the security configurations outlined in the guides begins. This step may require collaboration between IT, security, and development teams to ensure that the configurations are compatible with the organization’s infrastructure while enforcing strict security standards. Be sure to keep detailed records of changes made.

Tip: Prioritize implementing fixes for Cat I vulnerabilities first, followed by Cat II and Cat III. Depending on the urgency and needs of the mission, ATO can still be achieved with partial STIG compliance. Prioritizing efforts increases the chances that partial compliance is permitted.

Stage 3: Auditing & Maintenance

After the STIGs have been implemented, regular auditing and maintenance are critical to ensure ongoing compliance, verifying that no deviations have occurred over time due to system updates, patches, or other changes. This stage includes periodic scans, manual reviews, and remediation of any identified gaps. Additionally, organizations should develop a plan to stay informed about new STIG releases and updates from DISA.

Tip: Establish a maintenance schedule and assign responsibilities to team members. Alternatively, you can automate this process by adopting a policy-as-code approach to continuous compliance, embedding STIG compliance requirements “-as-code” directly into your DevSecOps pipeline.

General Preparation Tips

  • Training: Ensure your team is familiar with STIG requirements and the compliance process.
  • Collaboration: Work cross-functionally with all relevant departments, including IT, security, and compliance teams.
  • Resource Allocation: Dedicate sufficient resources, including time and personnel, to the compliance effort.
  • Continuous Improvement: Treat STIG compliance as an ongoing process rather than a one-time project.

Tools to automate STIG implementation and maintenance

Automation can significantly streamline the STIG compliance process. Here are some tools that can help:

1. Anchore STIG (Static and Runtime)

  • Purpose: Automates the process of checking container images against STIG requirements.
  • Benefits:
    • Simplifies compliance for containerized applications.
    • Integrates into CI/CD pipelines for continuous compliance.
  • Use Case: Ideal for DevSecOps teams utilizing containers in their deployments.

2. SCAP Compliance Checker

  • Purpose: Provides automated compliance scanning using the Security Content Automation Protocol (SCAP).
  • Benefits:
    • Validates system configurations against STIGs.
    • Generates detailed compliance reports.
  • Use Case: Useful for system administrators needing to audit various operating systems.

3. DISA STIG Viewer

  • Purpose: Helps in viewing and managing STIG checklists.
  • Benefits:
    • Allows for easy navigation of STIG requirements.
    • Facilitates documentation and reporting.
  • Use Case: Assists compliance officers in tracking compliance status.

4. DevOps Automation Tools

  • Infrastructure Automation Examples: Red Hat Ansible, Perforce Puppet, Hashicorp Terraform
  • Software Build Automation Examples: CloudBees CI, GitLab
  • Purpose: Automate the deployment of secure configurations that meet STIG compliance across multiple systems.
  • Benefits:
    • Ensures consistent application of secure configuration standards.
    • Reduces manual effort and the potential for errors.
  • Use Case: Suitable for large-scale environments where manual configuration is impractical.

5. Vulnerability Management Tools

  • Examples: Anchore Secure
  • Purpose: Identify vulnerabilities and compliance issues within your network.
  • Benefits:
    • Provides actionable insights to remediate security gaps.
    • Offers continuous monitoring capabilities.
  • Use Case: Critical for security teams focused on proactive risk management.

Wrap-Up

Achieving DISA STIG compliance is mandatory for organizations working with the DoD. By understanding what STIGs are, who they apply to, and how to navigate the compliance process, your organization can meet the stringent compliance requirements set forth by DISA. As a bonus, you will enhance your security posture and reduce the potential for a security breach.

Remember, compliance is not a one-time event but an ongoing effort that requires regular updates, audits, and maintenance. Leveraging automation tools like Anchore STIG and Anchore Secure can significantly ease this burden, allowing your team to focus on strategic initiatives rather than manual compliance tasks.

Stay proactive, keep your team informed, and make use of the resources available to ensure that your IT systems remain secure and compliant.

Navigating Open Source Software Compliance in Regulated Industries

Open source software (OSS) brings a wealth of benefits; speed, innovation, cost savings. But when serving customers in highly regulated industries like defense, energy, or finance, a new complication enters the picture—compliance.

Imagine your DevOps-fluent engineering team has been leveraging OSS to accelerate product delivery, and suddenly, a major customer hits you with a security compliance questionnaire. What now? 

Regulatory compliance isn’t just about managing the risks of OSS for your business anymore; it’s about providing concrete evidence that you meet standards like FedRAMP and the Secure Software Development Framework (SSDF).

The tricky part is that the OSS “suppliers” making up 70-90% of your software supply chain aren’t traditional vendors—they don’t have the same obligations or accountability, and they’re not necessarily aligned with your compliance needs. 

So, who bears the responsibility? You do.

The OSS your engineering team consumes is your resource and your responsibility. This means you’re not only tasked with managing the security risks of using OSS but also with proving that both your applications and your OSS supply chain meet compliance standards. 

In this post, we’ll explore why you’re ultimately responsible for the OSS you consume and outline practical steps to help you use OSS while staying compliant.

Learn about CISA’s SSDF attestation form and how to meet compliance.

What does it mean to use open source software in a regulated environment?

Highly regulated environments add a new wrinkle to the OSS security narrative. The OSS developers who author the software dependencies that make up the vast majority of modern software supply chains aren’t vendors in the traditional sense. They are more of a volunteer force that allows you to re-use their work, but it is a take-it-or-leave-it agreement. You have no recourse if the software doesn’t work as expected or, worse, has vulnerabilities in it.

So, how do you meet compliance standards when your software supply chain is built on top of a foundation of OSS?

Who is the vendor? You are!

Whether you have internalized this or not, the open source software that your developers consume is your resource and thus your responsibility.

This means that you shoulder the burden not only of managing the security risk of consuming OSS but also of proving that both your applications and your OSS supply chain meet compliance standards.

Open source software is a natural resource

Before we jump into how to accomplish the task set forth in the previous section, let’s take some time to understand why you are the vendor when it comes to open source software.

The common idea is that OSS is produced by a 3rd-party that isn’t part of your organization, so they are the software supplier. Shouldn’t they be the ones required to secure their code? They control and maintain what goes in, right? How are they not responsible?

To answer that question, let’s think about OSS as a natural resource that is shared by the public at large, for instance the public water supply.

This shouldn’t be too much of a stretch. We already use terms like upstream and downstream to think about the relationship between software dependencies and the global software supply chain.

Using this mental model, it becomes easier to understand that a public good isn’t a supplier. You can’t ask a river or a lake for an audit report proving that it is contaminant-free and safe to drink.

Instead the organization that processes and provides the water to the community is responsible for testing the water and guaranteeing its safety. In this metaphor, your company is the one processing the water and selling it as pristine bottled water. 

How do you pass the buck to your “supplier”? You can’t. That’s the point.

This probably has you asking yourself: if I am responsible for my own OSS supply chain, then how do I meet a compliance standard for something I don’t control? Keep reading and you’ll find out.

How do I use OSS and stay compliant?

While compliance standards are often thought of as rigid, the reality is much more nuanced. Just because your organization doesn’t own/control the open source projects that you consume doesn’t mean that you can’t use OSS and meet compliance requirements.

There are a few different steps that you need to take in order to build a “reasonably secure” OSS supply chain that will pass a compliance audit. We’ll walk you through the steps below:

Step 1 — Know what you have (i.e., an SBOM inventory)

The foundation of the global software supply chain is the SBOM (software bill of materials) standard. Each of the security and compliance functions outlined in the steps below use or manipulate an SBOM.

SBOMs are the foundational component of the global software supply chain because they record the ingredients that were used to produce the application an end-user will consume. If you don’t have a good grasp of the ingredients in your applications, there isn’t much hope of producing any meaningful security or compliance guarantees.

The best way to create observability into your software supply chain is to generate an SBOM for every single application in your DevSecOps build pipeline—at each stage of the pipeline!
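
As a minimal sketch of what this can look like, the snippet below generates one Syft SBOM per pipeline stage and files it away by stage name. The image names and output directory are hypothetical placeholders; the syft <image> -o json invocation is the same one used elsewhere on this blog, and a real pipeline would typically run this as a CI step rather than a standalone script.

# Generate one Syft SBOM per pipeline stage and keep them on disk, named by
# stage, so later audits can answer "what was in the build at stage X?".
# The image names and output directory are hypothetical; `syft <image> -o json`
# is the same invocation shown elsewhere in this post.
import subprocess
from pathlib import Path

STAGES = {
    "build": "registry.example.com/myapp:candidate",  # hypothetical image tags
    "test": "registry.example.com/myapp:tested",
    "release": "registry.example.com/myapp:latest",
}

def generate_sboms(out_dir: str = "sboms") -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for stage, image in STAGES.items():
        result = subprocess.run(
            ["syft", image, "-o", "json"],
            check=True, capture_output=True, text=True,
        )
        (out / f"{stage}.json").write_text(result.stdout)
        print(f"wrote {out / (stage + '.json')} for {image}")

if __name__ == "__main__":
    generate_sboms()

Keeping the stage name alongside each SBOM is what lets you answer the historical-record questions in Step 2 later without re-running any builds.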

Step 2 — Maintain a historical record of application source code

To meet compliance standards like FedRAMP and SSDF, you need to be able to maintain a historical record of the source code of your applications, including: 

  • Where it comes from, 
  • Who created it, and 
  • Any modifications made to it over time.

SBOMs were designed to meet these requirements. They act as a record of how applications were built and when/where OSS dependencies were introduced. They also double as compliance artifacts that prove you are compliant with regulatory standards.

Governments aren’t content with self-attestation (at least not for long); they need hard evidence to verify that you are trustworthy. Even though SSDF is currently self-attestation only, the federal government is known for rolling out compliance frameworks in stages: first advising on best practices, then requiring self-attestation, and finally demanding external validation via a certification process.

The Cybersecurity Maturity Model Certification (CMMC) is a good example of this dynamic process. It recently transitioned from self-attestation to external validation with the introduction of the 2.0 release of the framework.

Step 3 — Manage your OSS vulnerabilities

Not only do you need to keep a record of applications as they evolve over time, you have to track the known vulnerabilities of your OSS dependencies to achieve compliance. Just as SBOMs prove provenance, vulnerability scans are proof that your application and its dependencies aren’t vulnerable. These scans are a crucial piece of the evidence that you will need to provide to your compliance officer as you go through the certification process. 

Remember, the buck stops with you! If the OSS that your application consumes doesn’t supply an SBOM and vulnerability scan (which is the case for essentially all OSS projects), then you are responsible for creating them. There is no vendor to pass the blame to when proving that your supply chain is reasonably secure and thus compliant.

Step 4 — Continuous compliance of open source software supply chain

It is important to recognize that modern compliance standards are no longer sprints but marathons. Not only do you have to prove that your applications are compliant at the time of the audit, but you also have to demonstrate that they remain secure continuously in order to maintain your certification.

This can be challenging to scale but it is made easier by integrating SBOM generation, vulnerability scanning and policy checks directly into the DevSecOps pipeline. This is the approach that modern, SBOM-powered SCAs advocate for.

By embedding the compliance policy-as-code into your DevSecOps pipeline as policy gates, compliance can be maintained over time. Developers are alerted when their code doesn’t meet a compliance standard and are directed to take the corrective action. Also, these compliance checks can be used to automatically generate the compliance artifacts needed. 
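
As a small illustration of what a policy gate can look like, here is a hedged Python sketch that fails a CI job when a Grype JSON report contains Critical findings. The matches, vulnerability, and artifact field names reflect Grype’s JSON output format; the zero-Critical threshold is an example policy, not a prescribed compliance requirement.

# Example policy gate: fail the pipeline if a Grype JSON report contains any
# Critical vulnerabilities. Produce the report earlier in the pipeline with
# something like `grype myimage:tag -o json > report.json`.
# The "matches" / "vulnerability" / "artifact" field names assume Grype's JSON
# output; the zero-Critical threshold is an illustrative policy, not a standard.
import json
import sys

def gate(report_path: str, blocked_severities=("Critical",)) -> int:
    with open(report_path) as f:
        report = json.load(f)
    blocked = [
        m for m in report.get("matches", [])
        if m.get("vulnerability", {}).get("severity") in blocked_severities
    ]
    for m in blocked:
        vuln = m["vulnerability"]
        art = m.get("artifact", {})
        print(f"BLOCKED: {vuln.get('id')} ({vuln.get('severity')}) in "
              f"{art.get('name')} {art.get('version')}")
    return 1 if blocked else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "report.json"))

The non-zero exit code is what makes it a gate: the CI system stops the pipeline, the developer sees exactly which finding blocked it, and the report itself doubles as a compliance artifact.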

You already have an automated DevSecOps pipeline that produces and delivers applications with minimal human intervention; why not take advantage of this existing tooling to automate open source software compliance in the same way that security was integrated directly into DevOps?

Real-world Examples

To help bring these concepts to life, we’ve outlined some real-world examples of how open source software and compliance intersect:

Open source project has unfixed vulnerabilities

This is far and away the most common issue that comes up during compliance audits. One of your application’s OSS dependencies has a known vulnerability that has been sitting in the backlog for months or even years!

There are several reasons why an open source software developer might leave a known vulnerability unresolved:

  • They prioritize a feature over fixing a vulnerability
  • The vulnerability is from a third-party dependency they don’t control and can’t fix
  • They don’t like fixing vulnerabilities and choose to ignore it
  • They reviewed the vulnerability and decided it’s not likely to be exploited, so it’s not worth their time
  • They’re planning a codebase refactor that will address the vulnerability in the future

These are all rational reasons for vulnerabilities to persist in a codebase. Remember, OSS projects are owned and maintained by 3rd-party developers who control the repository; they make no guarantees about its quality. They are not vendors.

You, on the other hand, are a vendor and must meet compliance requirements. The responsibility falls on you. An OSS vulnerability management program is how you meet your compliance requirements while enjoying the benefits of OSS.

Need to fill out a supplier questionnaire

Imagine you’re a cloud service provider or software vendor. Your sales team is trying to close a deal with a significant customer. As the contract nears signing, the customer’s legal team requests a security questionnaire. They’re in the business of protecting their organization from financial risk stemming from their supply chain, and your company is about to become part of that supply chain.

These forms are usually from lawyers, very formal, and not focused on technical attacks. They just want to know what you’re using. The quick answer? “Here’s our SBOM.” 

Compliance comes in the form of public standards like FedRAMP, SSDF, NIST, etc., and these less formal security questionnaires. Either way, being unable to provide a full accounting of the risks in your software supply chain can be a speed bump to your organization’s revenue growth and success.

Integrating SBOM scanning, generation, and management deeply into your DevSecOps pipeline is key to accelerating the sales process and your organization’s overall success.

Prove provenance

CISA’s SSDF Attestation form requires that enterprises selling software to the federal government can produce a historical record of their applications. Quoting directly: “The software producer [must] maintain provenance for internal code and third-party components incorporated into the software to the greatest extent feasible.”

If you want access to the revenue opportunities the U.S. federal government offers, SSDF attestation is the needle you have to thread. Meeting this requirement without hiring an army of compliance engineers to manually review your entire DevSecOps pipeline demands an automated OSS component observability and management system.

Often, we jump to cryptographic signatures, encryption keys, trust roots—this quickly becomes a mess. Really, just a hash of the files in a database (read: SBOM inventory) satisfies the requirement. Sometimes, simpler is better. 

Discover the “easy button” to SSDF Attestation and OSS supply chain compliance in our previous blog post.

Takeaways

OSS Is Not a Vendor—But You Are! The best way to have your OSS cake and eat it too (without the indigestion) is to:

  1. Know Your Ingredients: Maintain an SBOM inventory of your OSS supply chain.
  2. Maintain a Complete Historical Record: Keep track of your application’s source code and build process.
  3. Scan for Known Vulnerabilities: Regularly check your OSS dependencies.
  4. Continuous Compliance through Automation: Generate compliance records automatically to scale your compliance process.

There are numerous reasons to aim for open source software compliance, especially for your software supply chain:

  • Balance Gains Against Risks: Leverage OSS benefits while managing associated risks.
  • Reduce Financial Risk: Protect your organization’s existing revenue.
  • Increase Revenue Opportunities: Access new markets that mandate specific compliance standards.
  • Avoid Becoming a Cautionary Tale: Stay ahead of potential security incidents.

Regardless of your motivation for wanting to use OSS and use it responsibly (i.e., securely and compliantly), Anchore is here to help. Reach out to our team to learn more about how to build and manage a secure and compliant OSS supply chain.

Learn the container security best practices, including open source software (OSS) security, to reduce the risk of software supply chain attacks.

US Navy achieves ATO in days with continuous compliance and OSS risk management

Implementing secure and compliant software solutions within the Department of Defense’s (DoD) software factory framework is no small feat. 

For Black Pearl, the premier DevSecOps platform for the U.S. Navy, and Sigma Defense, a leading DoD technology contractor, the challenge was not just about meeting stringent security requirements but about empowering the warfighter.

We’ll cover how they streamlined compliance, managed open source software (OSS) risk, and reduced vulnerability overload—all while accelerating their Authority to Operate (ATO) process.

Challenge: Navigating Complex Security and Compliance Requirements

Black Pearl and Sigma Defense faced several critical hurdles in meeting the stringent security and compliance standards of the DoD Enterprise DevSecOps Reference Design:

  • Achieving RMF Security and Compliance: Black Pearl needed to secure its own platform and help its customers achieve ATO under the Risk Management Framework (RMF). This involved meeting stringent security controls like RA-5 (Vulnerability Management), SI-3 (Malware Protection), and IA-5 (Credential Management) for both the platform and the applications built on it.
  • Maintaining Continuous Compliance: With the RAISE 2.0 memo emphasizing continuous ATO compliance, manual processes were no longer sufficient. The teams needed to automate compliance tasks to avoid the time-consuming procedures traditionally associated with maintaining ATO status.
  • Managing Open-Source Software (OSS) Risks: Open-source components are integral to modern software development but come with inherent risks. Black Pearl had to manage OSS risks for both its platform and its customers’ applications, ensuring vulnerabilities didn’t compromise security or compliance.
  • Vulnerability Overload for Developers: Developers often face an overwhelming number of vulnerabilities, many of which may not pose significant risks. Prioritizing actionable items without draining resources or slowing down development was a significant challenge.

“By using Anchore and the Black Pearl platform, applications inherit 80% of the RMF’s security controls. You can avoid all of the boring stuff and just get down to what everyone does well, which is write code.”

Christopher Rennie, Product Lead/Solutions Architect

Solution: Automating Compliance and Security with Anchore

To address these challenges, Black Pearl and Sigma Defense implemented Anchore, which provided:

“Working alongside Anchore, we have customized the compliance artifacts that come from the Anchore API to look exactly how the AOs are expecting them to. This has created a good foundation for us to start building the POA&Ms that they’re expecting.”

Josiah Ritchie, DevSecOps Staff Engineer

  • Managing OSS Risks with Continuous Monitoring: Anchore’s integrated vulnerability scanner, policy enforcer, and reporting system provided continuous monitoring of open-source software components. This proactive approach ensured vulnerabilities were detected and addressed promptly, effectively mitigating security risks.
  • Automated Prioritization of Vulnerabilities: By integrating the Anchore Developer Bundle, Black Pearl enabled automatic prioritization of actionable vulnerabilities. Developers received immediate alerts on critical issues, reducing noise and allowing them to focus on what truly matters.

Results: Accelerated ATO and Enhanced Security

The implementation of Anchore transformed Black Pearl’s compliance process and security posture:

  • Platform ATO in 3-5 days: With Anchore’s integration, Black Pearl users accessed a fully operational DevSecOps platform within days, a significant reduction from the typical six months for DIY builds.

“The DoD has four different layers of authorizing officials in order to achieve ATO. You have to figure out how to make all of them happy. We want to innovate by automating the compliance process. Anchore helps us achieve this, so that we can build a full ATO package in an afternoon rather than taking a month or more.”

Josiah Ritchie, DevSecOps Staff Engineer

  • Significantly reduced time spent on compliance reporting: Anchore automated compliance checks and artifact generation, cutting down hours spent on manual reviews and ensuring consistency in reports submitted to authorizing officials.
  • Proactive OSS risk management: By shifting security and compliance to the left, developers identified and remediated open-source vulnerabilities early in the development lifecycle, mitigating risks and streamlining the compliance process.
  • Reduced vulnerability overload with prioritized vulnerability reporting: Anchore’s prioritization of vulnerabilities prevented developer overwhelm, allowing teams to focus on critical issues without hindering development speed.

Conclusion: Empowering the Warfighter Through Efficient Compliance and Security

Black Pearl and Sigma Defense’s partnership with Anchore demonstrates how automating security and compliance processes leads to significant efficiencies. This empowers Navy divisions to focus on developing software that supports the warfighter. 

Achieving ATO in days rather than months is a game-changer in an environment where every second counts, setting a new standard for efficiency through the combination of Black Pearl’s robust DevSecOps platform and Anchore’s comprehensive security solutions.

If you’re facing similar challenges in securing your software supply chain and accelerating compliance, it’s time to explore how Anchore can help your organization achieve its mission-critical objectives.

Download the full case study below👇

Mark Your Calendars: Anchore’s Must-Attend Events and Webinars in October

Are you ready for cooler temperatures and the changing of the leaves? Anchore is! We are excited to announce a series of events and webinars next month. From in-person conferences to insightful webinars, we have a lineup designed to keep you informed about the latest developments in software supply chain security, DevSecOps, and compliance. Join us to learn, connect, and explore how Anchore can help your organization navigate the evolving landscape of software security.

EVENT: TD Synnex Inspire

Date: October 9-12, 2024

Location: Booth T84 | Greenville Convention Center in Greenville, SC

Anchore is thrilled to be exhibiting at the 2024 TD SYNNEX Inspire. Visit us at Booth T84 in the Pavilion to discover how Anchore secures containers for AI and machine learning applications, with a special emphasis on high-performance computing (HPC).

Anchore has helped many Fortune 50 enterprises scale their container security and vulnerability management programs for their entire software supply chain including luminaries like NVIDIA. If you’d like to book dedicated time to speak with our team, drop by our booth or email us at [email protected].

WEBINAR: Introducing the Anchore Data Service

Date: October 15, 2024 at 10am PT

We will showcase the exciting new features introduced in Anchore Enterprise 5.8, 5.9, and 5.10, all designed to secure your software supply chain and reduce risk for your organization. Highlights include:

  • Version 5.10: New Anchore Data Service which automatically updates your vulnerability feeds—even in air-gapped environments!
  • Version 5.9: Improved SBOM generation with native integration of Syft, etc.
  • Version 5.8: CISA Known Exploited Vulnerabilities (KEV) feed, etc.

We will be demoing all of the new features, sharing pro tips, and providing takeaways on how to best utilize the new releases. Don’t miss out!

EVENT: All Things Open Conference

Date: October 27-29, 2024

Location: Booth #95 | Raleigh Convention Center in Raleigh, NC

Anchore is excited to participate in the 2024 All Things Open Conference—one of the largest open source software events in the U.S. Drop by and visit us at Booth #95 to learn how our open source tools, Syft and Grype, can help you start your journey to a more secure DevSecOps pipeline. 

Anchore employees will be on hand to answer your questions about Syft, Grype, and software supply chain security.

WEBINAR: Accelerate FedRAMP Compliance on Amazon EKS with Anchore

Date: October 29, 2024 at 10am PT

Navigating FedRAMP compliance can be challenging, but Anchore and AWS are here to simplify the process. Join Luis Morales, Solutions Architect at AWS, and Brian Thomason, Manager of Partner and Solutions Engineering at Anchore, as they explain how Cisco achieved FedRAMP compliance in weeks rather than months.

In this live session, we’ll share actionable guidance and insights that address:

  • How to meet six FedRAMP vulnerability scanning requirements
  • Automating STIG and FIPS compliance for Amazon EC2 virtual machines
  • Securing containers end-to-end across CI/CD, Amazon EKS, and ECS

We’ll also discuss the architecture of Anchore running in an AWS customer environment, demonstrating how to leverage AWS tools and services to enhance your cloud security posture.

WEBINAR: Expert Series: Solving Real-World Challenges in FedRAMP Compliance

Date: October 30, 2024 at 10am PT

Navigating the path to FedRAMP authorization can be daunting, especially with the evolving landscape of federal security requirements. In this Expert Series webinar, Neil Levine, SVP of Product at Anchore, and Mike Strohecker, Director of Cloud Operations at InfusionPoints, will share real-world stories of how we’ve helped our FedRAMP customers overcome key challenges—from achieving compliance faster to meeting the latest FedRAMP Rev 5 requirements.

We’ll dive into practical solutions, including:

  • Overcoming common FedRAMP compliance hurdles
  • Meeting Rev 5 security hardening standards like STIG and CIS (CM-6)
  • Effectively shifting security left in the CI/CD pipeline
  • Automating policy enforcement and continuous monitoring

We’ll also explore the future impact of the July 2024 FedRAMP modernization memo, highlighting how increased automation with OSCAL is transforming the compliance process.

Wrap-Up

With a brimming schedule of events, October promises to be a jam-packed month for Anchore and our community. Whether you’re interested in our latest product updates, exploring strategies for FedRAMP compliance, or connecting at industry-leading events, there’s something for everyone. Mark your calendars and join us to stay ahead in the evolving world of software supply chain security.

Stay informed about upcoming events and developments at Anchore by bookmarking our Events Page and checking back regularly!

We migrated from S3 to R2. Thankfully nobody noticed

Sometimes the best changes are the ones you don’t notice. Some of you reading this may not have noticed anything at all, but there’s a good chance that many of you did notice that the occasional hiccup in Grype database availability suddenly went away.

One of the greatest things about Anchore is that we are empowered to make changes quickly when needed. This is the story of doing just that: identifying issues in our database distribution mechanism and making a change to improve the experience for all our users.

A Heisenbug is born

It all started some time ago, in a galaxy far away. As early as 2022, we received reports that some users had issues downloading the Grype database. These issues included general slowness and timeouts, with users receiving the dreaded “context deadline exceeded” error; manually downloading the database from a browser could show similar behavior.

Debugging these transient issues among thousands of legitimate, successful downloads was problematic for the team. No one could reproduce them reliably, so the cause remained unclear. A few more reports trickled in here and there, but everything seemed to work well whenever we tested this ourselves. Without further information, we had to chalk it up to something like unreliable network transfers in specific regions or under certain conditions, exacerbated by the moderately large size of the database: about 200 MB, compressed.

To identify any patterns and to give our CDN provider concrete feedback that users were having trouble downloading the files, we set up a job to download the database periodically and added DataDog monitoring across many regions doing the same thing. We noticed a few things: there were periodic, recurring issues downloading the database, and the failures seemed to correlate with high-volume periods – just after a new database was built, for example. We continued monitoring, but the intermittent failures didn’t seem frequent enough to cause great concern.

Small things matter

At some point leading up to August, we also began to get reports of users experiencing issues downloading the Grype database listing file. When Grype downloads the database, it first downloads a listing file to determine if a newer database exists. At the time, this file contained a historical record of 450 databases’ worth of metadata (90 days × each of the 5 Grype database versions), so the listing file clocked in at around 200 KB.

Grype only really needs the latest database, so the first thing we did was trim this file down to only the last few days; once we shrank this file to under 5 KB, the issues downloading the listing file itself went away. This was our first clue about the problem: smaller files worked fine.

Fast forward to August 16, 2024: we awoke to multiple reports from people worldwide indicating they had the same issues downloading the database. After many months of being unable to reproduce the failures meaningfully, we finally started to see the same thing ourselves. What happened? We had reached an inflection point where traffic volume was causing issues with the CDN’s ability to deliver these files reliably to end users. Interestingly, the traffic was not from Grype but rather from Syft invocations checking for application updates: 1 million times per hour – approximately double what we saw previously. Because Grype databases were served from the same endpoint, this volume began to adversely affect Grype users as well, possibly due to throttling by the CDN provider.

The right tool for the job

Individually, members of the team had investigated these database failures, but we decided it was time for all of us to strap on our boots and solve this together. The clue we had from decreasing the size of the listing file was crucial to understanding what was going on. We were using a standard CDN offering backed by AWS S3 storage.

The documentation we could find about the CDN offering was vague and didn’t help us understand whether we were doing something wrong. However, much of it clearly focused on web traffic, and based on our experience with the more web-friendly sized listing file, we could assume that this is what the service is optimized for. After much reading, it started to sound like larger files should be served using the Cloudflare R2 Object Storage offering instead…

So that’s what we did: the team collaborated via a long, caffeine-fueled Zoom call over an entire day. We updated our database publishing jobs to additionally publish databases and updated listing files to a second location backed by the Cloudflare R2 Object Storage service, served from grype.anchore.io instead of toolbox-data.anchore.io/grype.

We verified this was working as expected with Grype and finally updated the main listing file to point to this new location. The traffic load moved to the new service precisely as expected. This was completely transparent for Grype end-users, and our monitoring jobs have been green since!

While this wasn’t fun to scramble to fix, it’s great to know that our tools are popular enough to cause problems with a really good CDN service. Because of all the automated testing we have in place, our autonomy to operate independently, and robust publishing jobs, we were able to move quickly to address these issues. After letting this change operate over the weekend, we composed a short announcement for our community discourse to keep everyone informed. 

Many projects experience growing pains as they see increased usage; our tools are no exception. Still, we were able to give everyone a more reliable experience quickly and almost seamlessly, and we have had reports that the change has resolved the issues users were seeing. Hopefully, we won’t have to make any more changes even when usage grows another 100x…

If you have any feedback for the Syft & Grype developers, head over to our community discourse.

How to build an OSS risk management program

In previous blog posts we have covered the risks of open source software (OSS) and the security best practices to manage that risk. From there we zoomed in on the benefits of tightly coupling two of those best practices: SBOMs and vulnerability scanning.

Now, we’ll dig deeper into the practical considerations of integrating this paired solution into a DevSecOps pipeline. By examining the design and implementation of SBOMs and vulnerability scanning, we’ll illuminate the path to creating a holistic open source software (OSS) risk management program.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

How do I integrate SBOM management and vulnerability scanning into my development process?

Ideally, you want to generate an SBOM at each stage of the software development process (see image below). By generating an SBOM and scanning for vulnerabilities at each stage, you unlock a number of novel use-cases and benefits that we covered previously.

DevSecOps lifecycle diagram with all stages to integrate SBOM generation and vulnerability scanning.

Let’s break down how to integrate SBOM generation and vulnerability scanning into each stage of the development pipeline:

Source (PLAN & CODE)

The easiest way to integrate SBOM generation and vulnerability scanning into the design and coding phases is to provide CLI (command-line interface) tools to your developers. Engineers are already used to these tools—and have a preference for them!

If you’re going the open source route, we recommend both Syft (SBOM generation) and Grype (vulnerability scanner) as easy options to get started. If you’re interested in an integrated enterprise tool then you’ll want to look at AnchoreCTL.

Developers can generate SBOMs and run vulnerability scans right from their workstation. By doing this at design or commit time, developers can shift security left and immediately understand the security implications of their design decisions.

If existing vulnerabilities are found, developers can immediately pivot to OSS dependencies that are clean or start a conversation with their security team to understand if their preferred framework will be a deal breaker. Either way, security risk is addressed early before any design decisions are made that will be difficult to roll back.
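
To make this concrete, here is a minimal sketch of what a workstation-level check might look like, wrapping the Syft and Grype CLIs from Python. It assumes both tools are installed and on your PATH; the flags shown reflect their commonly documented usage, so verify them against the versions you have installed.

```python
# Minimal workstation sketch: generate an SBOM with Syft, then scan it with Grype.
# Assumes syft and grype are installed and on PATH.
import subprocess
import sys

def sbom_and_scan(target: str, sbom_path: str = "sbom.spdx.json") -> int:
    # Generate an SPDX JSON SBOM for a directory or container image.
    with open(sbom_path, "w") as f:
        subprocess.run(["syft", target, "-o", "spdx-json"], stdout=f, check=True)

    # Scan the SBOM for known vulnerabilities and return Grype's exit code.
    return subprocess.run(["grype", f"sbom:{sbom_path}"]).returncode

if __name__ == "__main__":
    # e.g. `python scan.py .` for the current project directory,
    # or `python scan.py myapp:latest` for a locally built image.
    sys.exit(sbom_and_scan(sys.argv[1] if len(sys.argv) > 1 else "."))
```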

Build (BUILD + TEST)

The ideal location to integrate SBOM generation and vulnerability scanning during the build and test phases is directly in the organization’s continuous integration (CI) pipeline.

The same self-contained CLI tools used during the source stage are integrated as additional steps into CI scripts/runbooks. When a developer pushes a commit that triggers the build process, the new steps are executed and both an SBOM and vulnerability scan are created as outputs. 

Check out our docs site to see how AnchoreCTL (running in distributed mode) makes this integration a breeze.

If you’re having trouble convincing your developers to jump on the SBOM train, we recommend that developers think about all security scans as just another unit test that is part of their testing suite.

Running these steps in the CI pipeline delays feedback a little compared to checking each incremental commit as an application is being written, but it is still light years better than waiting until a release is code complete.

If you are unable to get your engineering team to adopt vulnerability scanning of OSS dependencies at their workstations, a CI-based strategy can be a happy medium. It is much easier to ensure every build runs exactly the same way each time than it is to enforce the same consistency across developers.
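
As a rough illustration, a CI step along these lines can generate the SBOM as a build artifact and fail the build when the scan crosses a severity threshold. The image name is a placeholder, and Grype’s --fail-on severity gate should be verified against the version you pin in your pipeline.

```python
# Sketch of a CI step: produce an SBOM artifact, then gate the build on severity.
# Assumes syft and grype are installed on the CI runner.
import subprocess
import sys

SBOM_FILE = "sbom.spdx.json"  # archive this as a CI artifact alongside the build

def main(image: str) -> int:
    # 1. Generate the SBOM and keep it as a build artifact.
    with open(SBOM_FILE, "w") as f:
        subprocess.run(["syft", image, "-o", "spdx-json"], stdout=f, check=True)

    # 2. Scan the SBOM; grype exits non-zero when a match meets the threshold.
    return subprocess.run(["grype", f"sbom:{SBOM_FILE}", "--fail-on", "high"]).returncode

if __name__ == "__main__":
    # The CI pipeline passes the freshly built image tag, e.g. "myapp:sha-abc123".
    sys.exit(main(sys.argv[1]))
```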

Release (aka Registry)

Another integration option is the container registry. This option will require you to either roll your own service that will regularly call the registry and scan new containers or use a service that does this for you.

See how Anchore Enterprise can automate this entire process by reviewing our integration docs.

Regardless of the path you choose, you will end up creating an IAM service account within your CI application that gives your SBOM and vulnerability scanning solution access to your registries.

The release stage tends to be fairly far along in the development process and is not an ideal location for these functions to run. Most of the benefits of a shift left security posture won’t be available anymore.

If this is an additional vulnerability scanning stage—rather than the sole stage—then this is a fantastic environment to integrate into. Software supply chain attacks that target registries are popular and can be prevented with a continuous scanning strategy.
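
If you do roll your own registry watcher, a stripped-down sketch might look like the following. It assumes a registry that exposes the Docker Registry HTTP API v2 and omits authentication, pagination, and tracking of which tags were already scanned; the registry URL is hypothetical.

```python
# Rough sketch of a DIY registry-polling scanner (Docker Registry HTTP API v2).
# Auth, pagination, and "already scanned" bookkeeping are intentionally omitted.
import subprocess
import requests

REGISTRY = "https://registry.example.com"  # hypothetical registry

def list_images():
    repos = requests.get(f"{REGISTRY}/v2/_catalog").json().get("repositories", [])
    for repo in repos:
        tags = requests.get(f"{REGISTRY}/v2/{repo}/tags/list").json().get("tags") or []
        for tag in tags:
            yield f"{REGISTRY.removeprefix('https://')}/{repo}:{tag}"

def scan(image: str) -> None:
    # Grype can scan an image straight from a registry without a local pull.
    subprocess.run(["grype", f"registry:{image}", "-o", "json"], check=False)

if __name__ == "__main__":
    for image in list_images():
        scan(image)
```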

Deploy

This is the traditional stage of the SDLC (software development lifecycle) to run vulnerability scans. SBOM generation can be added on as another step in an organization’s continuous deployment (CD) runbook.

Similar to the build stage, the best integration method is by calling CLI tools directly in the deploy script to generate the SBOM and then scan it for vulnerabilities.

Alternatively, if you utilize a container orchestrator like Kubernetes, you can configure an admission controller to act as a deployment gate. The admission controller should be configured to call out to a standalone SBOM generator and vulnerability scanner.

If you’d like to understand how this is implemented with Anchore Enterprise, see our docs.

While this is the traditional location for running vulnerability scans, it is not recommended that this be the only stage where you scan for vulnerabilities. Feedback about security issues would arrive very late in the development process, and prior design decisions may prevent vulnerabilities from being easily remediated. Don’t do this unless you have no other option.
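
For a sense of what a DIY deployment gate could look like, here is a heavily stripped-down sketch of a Kubernetes validating admission webhook that shells out to Grype before allowing a Pod. TLS termination, the ValidatingWebhookConfiguration registration, timeouts, and caching of scan results are all omitted, and the severity threshold is an arbitrary choice.

```python
# Stripped-down sketch of a validating admission webhook used as a deploy gate.
# Assumes the webhook is registered for Pod CREATE operations; TLS and
# ValidatingWebhookConfiguration setup are left out for brevity.
import subprocess
from flask import Flask, request, jsonify

app = Flask(__name__)

def image_is_clean(image: str) -> bool:
    # True when grype finds nothing at or above the "high" threshold.
    result = subprocess.run(["grype", image, "--fail-on", "high"], capture_output=True)
    return result.returncode == 0

@app.route("/validate", methods=["POST"])
def validate():
    review = request.get_json()
    req = review["request"]
    images = [c["image"] for c in req["object"]["spec"]["containers"]]
    denied = [img for img in images if not image_is_clean(img)]

    response = {"uid": req["uid"], "allowed": not denied}
    if denied:
        response["status"] = {"message": f"images failed vulnerability gate: {denied}"}

    return jsonify({"apiVersion": "admission.k8s.io/v1",
                    "kind": "AdmissionReview",
                    "response": response})

if __name__ == "__main__":
    app.run(port=8443)  # a real webhook must serve HTTPS
```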

Production (OPERATE + MONITOR)

This is not a traditional stage to run vulnerability scans since the goal is to prevent vulnerabilities from getting to production. Regardless, this is still an important environment to scan. Production containers have a tendency to drift from their pristine build states (DevSecOps pipelines are leaky!).

Also, new vulnerabilities are discovered all of the time and being able to prioritize remediation efforts to the most vulnerable applications (i.e., runtime containers) considerably reduces the risk of exploitation.

The recommended way to run SBOM generation and vulnerability scans in production is to run an independent container with the SBOM generator and vulnerability scanner installed. Most container orchestrators have SDKs that will allow you to integrate an SBOM generator and vulnerability scanner with the preferred administration CLI (e.g., kubectl for k8s clusters).

Read how Anchore Enterprise integrates these components together into a single container for both Kubernetes and Amazon ECS.
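
As a rough sketch of the DIY route, the snippet below uses the official Kubernetes Python client to inventory the images currently running in a cluster and then scans each one with Grype. It assumes the job runs in-cluster with RBAC permission to list pods; use load_kube_config() instead when running from a workstation.

```python
# Sketch of a runtime inventory job: list running images, then scan each with Grype.
# Assumes RBAC permission to list pods across namespaces.
import subprocess
from kubernetes import client, config

def running_images() -> set:
    config.load_incluster_config()  # swap for config.load_kube_config() off-cluster
    pods = client.CoreV1Api().list_pod_for_all_namespaces()
    return {c.image for pod in pods.items for c in pod.spec.containers}

if __name__ == "__main__":
    for image in sorted(running_images()):
        print(f"scanning {image}")
        subprocess.run(["grype", image, "-o", "json"], check=False)
```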

How do I manage all of the SBOMs and vulnerability scans?

Tightly coupling SBOM generation and vulnerability scanning creates a number of benefits, but it also creates one problem: a firehose of data. This unintended side effect is known as SBOM sprawl, and it inevitably becomes a headache in and of itself.

The concise solution to this problem is to create a centralized SBOM repository. The brevity of this answer downplays the challenges that go along with building and managing a new data pipeline.

We’ll walk you through the high-level steps below but if you’re looking to understand the challenges and solutions of SBOM sprawl in more detail, we have a separate article that covers that.

Integrating SBOMs and vulnerability scanning for better OSS risk management

Assuming you’ve deployed an SBOM generator and vulnerability scanner into at least one of your development stages (as detailed above in “How do I integrate SBOM management and vulnerability scanning into my development process?”) and have an SBOM repository for storing your SBOMs and/or vulnerability scans, we can now walk through how to tie these systems together.

  1. Create a system to pull vulnerability feeds from reputable sources. If you’re looking for a way to get started here, read our post on how to get started.
  2. Regularly scan your catalog of SBOMs for vulnerabilities, storing the results alongside the SBOMs (a minimal sketch of this step follows this list).
  3. Implement a query system to extract insights from your inventory of SBOMs.
  4. Create a dashboard to visualize your software supply chain’s health.
  5. Build alerting automation to ping your team as newly discovered vulnerabilities are announced.
  6. Maintain all of these DIY security applications and tools. 
  7. Continue to incrementally improve on these tools as new threats emerge, technologies evolve and development processes change.
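
To make steps 2 and 3 a little more concrete, here is a minimal sketch that re-scans a directory of SBOMs with Grype and records the findings in SQLite for later querying. The directory layout, table schema, and JSON field names are illustrative assumptions; check the field names against the Grype version you run.

```python
# Minimal sketch of steps 2 and 3: re-scan a catalog of SBOMs and store findings.
# Directory name and schema are illustrative, not a real Anchore layout.
import json
import sqlite3
import subprocess
from pathlib import Path

db = sqlite3.connect("sbom_inventory.db")
db.execute("""CREATE TABLE IF NOT EXISTS findings (
    sbom TEXT, package TEXT, vuln_id TEXT, severity TEXT)""")

for sbom in Path("sbom-catalog").glob("*.json"):  # hypothetical catalog directory
    scan = subprocess.run(["grype", f"sbom:{sbom}", "-o", "json"],
                          capture_output=True, text=True)
    for match in json.loads(scan.stdout).get("matches", []):
        db.execute("INSERT INTO findings VALUES (?, ?, ?, ?)",
                   (sbom.name,
                    match["artifact"]["name"],
                    match["vulnerability"]["id"],
                    match["vulnerability"]["severity"]))

db.commit()
```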

If this feels like more work than you’re willing to take on, this is why security vendors exist. See the benefits of a managed SBOM-powered SCA below.

Prefer not to DIY? Evaluate Anchore Enterprise

Anchore Enterprise was designed from the ground up to provide a reliable software supply chain security platform that requires the least amount of work to integrate and maintain. Included in the product are:

  • Out-of-the-box integrations for popular CI/CD software (e.g., GitHub, Jenkins, GitLab, etc.)
  • End-to-end SBOM management
  • Enterprise-grade vulnerability scanning with a best-in-class false positive rate
  • Built-in SBOM drift detection
  • Remediation recommendations
  • Continuous visibility and monitoring of software supply chain health

Enterprises like NVIDIA, Cisco, and Infoblox have chosen Anchore Enterprise as their “easy button” to achieve open source software security with the least amount of lift.

If you’re interested to learn more about how to roll out a complete OSS security solution without the blood, sweat and tears that come with the DIY route—reach out to our team to get a demo or try Anchore Enterprise yourself with a 15-day free trial.

Learn the container security best practices, including open source software (OSS) security, to reduce the risk of software supply chain attacks.

SBOMs and Vulnerability Management: OSS Security in the DevSecOps Era

The rise of open-source software (OSS) development and DevOps practices has unleashed a paradigm shift in OSS security. As traditional approaches to OSS security have proven inadequate in the face of rapid development cycles, the Software Bill of Materials (SBOM) has re-made OSS vulnerability management in the era of DevSecOps.

This blog post zooms in on two best practices from our introductory article on OSS security and the software supply chain:

  1. Maintain a Software Dependency Inventory
  2. Implement Vulnerability Scanning

These two best practices are set apart from the rest because they are a natural pair. We’ll cover how this novel approach:

  • Scales OSS vulnerability management under the pressure of rapid software delivery
  • Is set apart from legacy SCAs
  • Unlocks new use-cases in software supply chain security, OSS risk management, etc.
  • Benefits software engineering orgs
  • Benefits an organization’s overall security posture
  • Has measurably impacted modern enterprises such as NVIDIA and Infoblox

Whether you’re a seasoned DevSecOps professional or just beginning to tackle the challenges of securing your software supply chain, this blog post offers insights into how SBOMs and vulnerability management can transform your approach to OSS security.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

Why do I need SBOMs for OSS vulnerability management?

The TL;DR is that SBOMs enable DevSecOps teams to scale OSS vulnerability management programs in a modern, cloud native environment. Legacy security tools (i.e., SCA platforms) weren’t built to handle the pace of software delivery after a DevOps face lift.

Answering this question in full requires some historical context. Below is a speed-run of how we got to a place where SBOMs became the clear solution for vulnerability management after the rise of DevOps and OSS; the original longform is found on our blog.

If you’re not interested in a history lesson, skip to the next section, “What new use-cases are unlocked with a software dependency inventory?” to get straight to the impact of this evolution on software supply chain security (SSCS).

A short history on software composition analysis (SCA)

  • SCAs were originally designed to solve the problem of OSS licensing risk
  • Remember that Microsoft made a big fuss about the dangers of OSS at the turn of the millennium
  • Vulnerability scanning and management was tacked-on later
  • These legacy SCAs worked well enough until DevOps and OSS popularity hit critical mass

How the rise of OSS and DevOps principles broke legacy SCAs

  • DevOps and OSS movements hit traction in the 2010s
  • Software development and delivery transitioned from major updates with long development times to incremental updates with frequent releases
  • Modern engineering organizations are measured and optimized for delivery speed
  • Legacy SCAs were designed to scan a golden image once and take as much time as needed to do it; upwards of weeks in some cases
  • This wasn’t compatible with the DevOps promise and created friction between engineering and security
  • This meant not all software could be scanned, and much was scanned after release, increasing the risk of a security breach

SBOMs as the solution

  • SBOMs were introduced as a standardized data structure that comprised a complete list of all software dependencies (OSS or otherwise)
  • These lightweight files created a reliable way to scan software for vulnerabilities without the slow performance of scanning the entire application—soup to nuts
  • Modern SCAs utilize SBOMs as the foundational layer to power vulnerability scanning in DevSecOps pipelines
  • SBOMs + SCAs deliver on the performance of DevOps without compromising security

What is the difference between SBOMs and legacy SCA scanning?

SBOMs offer two functional innovations over the legacy model: 

  1. Deeper visibility into an organization’s application inventory; and
  2. A record of changes to applications over time.

The deeper visibility comes from the fact that modern SCA scanners identify software dependencies recursively and build a complete software dependency tree (both direct and transitive). The record of changes comes from the fact that the OSS ecosystem has begun to standardize the contents of SBOMs to allow interoperability between OSS consumers and producers.

Legacy SCAs typically only scan for direct software dependencies and don’t recursively scan for dependencies of dependencies. Also, legacy SCAs don’t generate standardized scans that can then be used to track changes over time.
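
A toy example of why the recursive view matters: given only a map of direct dependencies, walking it recursively surfaces the transitive dependencies that a direct-only scan never sees. The package names below are made up purely for illustration.

```python
# Direct vs. transitive visibility, with made-up package names.
direct_deps = {
    "my-app": ["web-framework"],
    "web-framework": ["http-parser", "logging-lib"],
    "logging-lib": ["string-utils"],
}

def transitive(pkg, seen=None):
    # Recursively collect everything my-app pulls in, not just its direct deps.
    seen = set() if seen is None else seen
    for dep in direct_deps.get(pkg, []):
        if dep not in seen:
            seen.add(dep)
            transitive(dep, seen)
    return seen

print(sorted(direct_deps["my-app"]))  # direct-only view: ['web-framework']
print(sorted(transitive("my-app")))   # full tree adds http-parser, logging-lib, string-utils
```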

What new use-cases are unlocked with an SBOM inventory?

The innovations brought by SBOMs (see above) have unlocked new use-cases that benefit both the software supply chain security niche and the greater DevSecOps world. See the list below:

OSS Dependency Drift Detection

Ideally, software dependencies are only injected in source code, but the reality is that CI/CD pipelines are leaky and both automated and one-off modifications are made at all stages of development. Plugging 100% of the leaks is a strategy with diminishing returns. Application drift detection is a scalable solution to this challenge.

SBOMs unlock drift detection by creating a point-in-time record of the composition of an application at each stage of the development process. This creates an auditable record of when software builds are modified, how they are changed, and who changed them.
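
A toy sketch of the idea: compare the package set recorded in the SBOM produced at build time against one regenerated later in the pipeline (or from the running container) and report anything added or removed. The accessor below assumes Syft’s native JSON layout (an "artifacts" list with name and version fields); adapt it if you store SPDX or CycloneDX documents instead.

```python
# Toy drift detection: diff the package sets of two Syft JSON SBOMs.
import json

def packages(sbom_path: str) -> set:
    with open(sbom_path) as f:
        doc = json.load(f)
    # Syft's native JSON keeps packages under "artifacts"; other formats differ.
    return {(a["name"], a["version"]) for a in doc.get("artifacts", [])}

built = packages("sbom-build.json")      # point-in-time record from the build
running = packages("sbom-runtime.json")  # regenerated from the running container

print("added at runtime:", sorted(running - built))
print("missing at runtime:", sorted(built - running))
```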

Software Supply Chain Attack Detection

Not all dependency injections are performed by benevolent 1st-party developers. Malicious threat actors who gain access to your organization’s DevSecOps pipeline or the pipeline of one of your OSS suppliers can inject malicious code into your applications.

An SBOM inventory creates the historical record that can identify anomalous behavior and catch these security breaches before organizational damage is done. This is a particularly important strategy for dealing with advanced persistent threats (APTs) that are expert at infiltration and stealth. For a real-world example, see our blog on the recent XZ supply chain attack.

OSS Licensing Risk Management

OSS licenses are currently at the beginning of a new transformation. The highly permissive licenses that came into fashion over the last 20 years are proving to be unsustainable. As prominent open source startups amend their licenses (e.g., HashiCorp, Elastic, Redis, etc.), organizations need to evaluate these changes and how they impact their OSS supply chain strategy.

Similar to the benefits during a security incident, an SBOM inventory acts as the source of truth for OSS licensing risk. As licenses are amended, an organization can quickly evaluate their risk by querying their inventory and identifying who their “critical” OSS suppliers are. 

Domain Expertise Risk Management

Another emerging use-case of software dependency inventories is the management of domain expertise of developers in your organization. A comprehensive inventory of software dependencies allows organizations to map critical software to individual employees’ domain knowledge. This creates a measure of how well resourced your engineering organization is and who owns the knowledge that could impact business operations.

While losing an employee with a particular set of skills might not have the same urgency as a security incident, over time this gap can create instability. An SBOM inventory allows organizations to maintain a list of critical OSS suppliers and get ahead of any structural risks in their organization.

What are the benefits of a software dependency inventory?

SBOM inventories create a number of benefits for tangential domains such as software supply chain security and risk management, but there is one big benefit for the core practices of software development.

Reduced engineering and QA time for debugging

A software dependency inventory stores metadata about applications and their OSS dependencies over time in a centralized repository. This datastore is a simple and efficient way to search and answer critical questions about the state of an organization’s software development pipeline.

Previously, engineering and QA teams had to manually search codebases and commits in order to determine the source of a rogue dependency being added to an application. A software dependency inventory combines a centralized repository of SBOMs with an intuitive search interface. Now, these time-consuming investigations can be accomplished in minutes rather than hours.

What are the benefits of scanning SBOMs for vulnerabilities?

There are a number of security benefits that can be achieved by integrating SBOMs and vulnerability scanning. We’ve highlighted the most important below:

Reduce risk by scaling vulnerability scanning for complete coverage

One of the side effects of transitioning to DevOps practices was that legacy SCAs couldn’t keep up with the software output of modern engineering orgs. This meant that not all applications were scanned before being deployed to production—a risky security practice!

Modern SCAs solved this problem by scanning SBOMs rather than applications or codebases. These lightweight SBOM scans are so efficient that they can keep up with the pace of DevOps output. Scanning 100% of applications reduces risk by preventing unscanned software from being deployed into vulnerable environments.

Prevent delays in software delivery

Overall organizational productivity can be increased by adopting modern, SBOM-powered SCAs that allow organizations to shift security left. When vulnerabilities are uncovered during application design, developers can make informed decisions about the OSS dependencies that they choose. 

This prevents the situation where engineering creates a new application or feature but right before it is deployed into production the security team scans the dependencies and finds a critical vulnerability. These last minute security scans can delay a release and create frustration across the organization. Scanning early and often prevents this productivity drain from occurring at the worst possible time.

Reduced financial risk during a security incident

The faster a security incident is resolved, the less risk an organization is exposed to. The primary metric that organizations track here is mean-time-to-recovery (MTTR). SBOM inventories can be used to significantly reduce this metric and improve incident outcomes.

An application inventory with full details on the software dependencies is a prerequisite for rapid security response in the event of an incident. A single SQL query to an SBOM inventory will return a list of all applications that have exploitable dependencies installed. Recent examples include Log4j and XZ. This prevents the need for manual scanning of codebases or production containers. This is the difference between a zero-day incident lasting a few hours versus weeks.
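
To illustrate the point, the query below runs against a hypothetical SBOM inventory schema (a simple applications-to-packages table); it is not an Anchore Enterprise API, just a sketch of how an inventory turns “which applications ship log4j-core?” into a single lookup.

```python
# Illustrative only: hypothetical SBOM inventory schema, queried during an incident.
import sqlite3

conn = sqlite3.connect("sbom_inventory.db")
rows = conn.execute(
    """
    SELECT DISTINCT application, package_version
    FROM sbom_packages
    WHERE package_name = ?
    """,
    ("log4j-core",),
).fetchall()

for app, version in rows:
    print(f"{app} ships log4j-core {version}")
```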

Reduce hours spent on compliance with automation

Compliance certifications are powerful growth levers for organizations; they open up new market opportunities. The downside is that they create a lot of work for organizations. Manually confirming that each compliance control is met and providing evidence for the compliance officer to review discourages organizations from pursuing these certifications.

Providing automated vulnerability scans from DevSecOps pipelines that integrate SBOM inventories and vulnerability scanners significantly reduces the hours needed to generate and collect evidence for compliance audits.

How impactful are these benefits?

Many modern enterprises are adopting SBOM-powered SCAs and reaping the benefits outlined above. The quantifiable benefits are unique to each enterprise, but anecdotal evidence is still helpful when weighing how to prioritize a software supply chain security initiative, like the adoption of an SBOM-powered SCA, against other organizational priorities.

As a leading SBOM-powered SCA, Anchore has helped numerous organizations achieve the benefits of this evolution in the software industry. To get an estimate of what your organization can expect, see the case studies below:

NVIDIA

  • Reduced time to production by scanning SBOMs instead of full applications
  • Scaled vulnerability scanning and management program to 100% coverage across 1000s of containerized applications and 100,000s of containers

Read the full NVIDIA case study here >>

Infoblox

  • 75% reduction in engineering hours spent performing manual vulnerability detection
  • 55% reduction in hours allocated to retroactive remediation of vulnerabilities
  • 60% reduction in hours spent on manual compliance discovery and documentation

Read the full Infoblox case study here >>

DreamFactory

  • 75% reduction in engineering hours spent on vulnerability management and compliance
  • 70% faster production deployments with automated vulnerability scanning and management

Read the full DreamFactory case study here >>

Next Steps

Hopefully you now have a better understanding of the power of integrating an SBOM inventory into OSS vulnerability management. This “one-two” combo has unlocked novel use-cases, numerous benefits and measurable results for modern enterprises.

If you’re interested in learning more about how Anchore can help your organization achieve similar results, reach out to our team.

Learn the container security best practices, including open source software (OSS) security, to reduce the risk of software supply chain attacks.

DreamFactory Achieves 75% Time Savings with Anchore: A Case Study in Secure API Generation

As the popularity of APIs has swept the software industry, API security has become paramount, especially for organizations in highly regulated industries. DreamFactory, an API generation platform serving the defense industry and critical national infrastructure, required an air-gapped vulnerability scanning and management solution that didn’t slow down their productivity. Avoiding security breaches and compliance failures are non-negotiables for the team to maintain customer trust.

Challenge: Security Across the Gap

DreamFactory encountered several critical hurdles in meeting the needs of its high-profile clients, particularly those in the defense community and other highly regulated sectors:

  1. Secure deployments without cloud connectivity: Many clients, including the Department of Defense (DoD), required on-premises deployments with air-gapping, breaking the assumptions of modern cloud-based security strategies.
  2. Air-gapped vulnerability scans: Despite air-gapping, these organizations still demanded comprehensive vulnerability reporting to protect their sensitive data.
  3. Building high-trust partnerships: In industries where security breaches could have catastrophic consequences, establishing trust rapidly was crucial.

As Terence Bennett, CEO of DreamFactory, explains, “The data processed by these organizations have the highest national security implications. We needed a solution that could deliver bulletproof security without cloud connectivity.”

Solution: Anchore Enterprise On-Prem and Air-Gapped 

To address these challenges, DreamFactory implemented Anchore Enterprise, which provided:

  1. Support for on-prem and air-gapped deployments: Anchore Enterprise was designed to operate in air-gapped environments, aligning perfectly with DreamFactory’s needs.
  2. Comprehensive vulnerability scanning: DreamFactory integrated Anchore Enterprise into its build pipeline, running daily vulnerability scans on all deployment versions.
  3. Automated SBOM generation and management: Every build is now cataloged and stored (as an SBOM), providing immediate transparency into the software’s components.

“By catching vulnerabilities in our build pipeline, we can inform our customers and prevent any of the APIs created by a DreamFactory install from being leveraged to exploit our customer’s network,” Bennett notes. “Anchore has helped us achieve this massive value-add for our customers.”

Results: Developer Time Savings and Enhanced Trust

The implementation of Anchore Enterprise transformed DreamFactory’s security posture and business operations:

  • 75% reduction in time spent on vulnerability management and compliance requirements
  • 70% faster production deployments with integrated security checks
  • Rapid trust development through transparency

“We’re seeing a lot of traction with data warehousing use-cases,” says Bennett. “Being able to bring an SBOM to the conversation at the very beginning completely changes the conversation and allows CISOs to say, ‘let’s give this a go’.”

Conclusion: A Competitive Edge in High-Stakes Environments

By leveraging Anchore Enterprise, DreamFactory has positioned itself as a trusted partner for organizations requiring the highest levels of security and compliance in their API generation solutions. In an era where API security is more critical than ever, DreamFactory’s success story demonstrates that with the right tools and approach, it’s possible to achieve both ironclad security and operational efficiency.


Are you facing similar challenges hardening your software supply chain in order to meet the requirements of the DoD? By designing your DevSecOps pipeline to the DoD software factory standard, your organization can meet these sky-high security and compliance requirements. Learn more about the DoD software factory standard by downloading our white paper below.

How is Open Source Software Security Managed in the Software Supply Chain?

Open source software has revolutionized the way developers build applications, offering a treasure trove of pre-built software “legos” that dramatically boost productivity and accelerate innovation. By leveraging the collective expertise of a global community, developers can create complex, feature-rich applications in a fraction of the time it would take to build everything from scratch. However, this incredible power comes with a significant caveat: the open source model introduces risk.

Organizations inherit both the good and bad parts of the OSS source code they don’t own. This double-edged sword of open source software necessitates a careful balance between harnessing its productivity benefits and managing the risks. A comprehensive OSS security program is the industry standard best practice for managing the risk of open source software within an organization’s software supply chain.

Learn the container security best practices, including open source software security, to reduce the risk of software supply chain attacks.

What is open source software security?

Open source software security is the ecosystem of security tools (some of them OSS themselves!) that has developed to compensate for the inherent risk of OSS development. The security of the OSS environment was founded on the idea that “given enough eyeballs, all bugs are shallow”. The reality of OSS is that the majority of it is written and maintained by single contributors. The percentage of open source software that passes the qualifier of “enough eyeballs” is minuscule.

Does that mean open source software isn’t secure? Fortunately, no. The OSS community still produces secure software, but an entire ecosystem of tools ensures that this security is verified rather than trusted implicitly.

What is the difference between closed source and open source software security?

The primary difference between open source software security and closed source software security is how much control you have over the source code. Open source code is public and can have many contributors that are not employees of your organization while proprietary source code is written exclusively by employees of your organization. The threat models required to manage risk for each of these software development methods are informed by these differences.

Due to the fact that open source software is publicly accessible and can be contributed to by a diverse, often anonymous community, its threat model must account for the possibility of malicious code contributions, unintentional vulnerabilities introduced by inexperienced developers, and potential exploitation of disclosed vulnerabilities before patches are applied. This model emphasizes continuous monitoring, rigorous code review processes, and active community engagement to mitigate risks. 

In contrast, proprietary software’s threat model centers around insider threats, such as disgruntled employees or lapses in secure coding practices, and focuses heavily on internal access controls, security audits, and maintaining strict development protocols. 

The need for external threat intelligence is also greater in OSS, as the public nature of the code makes it a target for attackers seeking to exploit weaknesses, while proprietary software relies on obscurity and controlled access as a first line of defense against potential breaches.

What are the risks of using open source software?

  1. Vulnerability exploitation of your application
    • The bargain that is struck when utilizing OSS is that your organization gives up significant control over the quality of the software. When you use OSS you inherit both good AND bad (read: insecure) code. Any known or latent vulnerabilities in the software become your problem.
  2. Access to source code increases the risk of vulnerabilities being discovered by threat actors
    • OSS development is unique in that both the defenders and the attackers have direct access to the source code. This gives the threat actors a leg up. They don’t have to break through perimeter defenses before they get access to source code that they can then analyze for vulnerabilities.
  3. Increased maintenance costs for DevSecOps function
    • Adopting OSS into an engineering organization is another function that requires management. Data has to be collected about the OSS that is embedded in your applications. That data has to be stored and made available in the event of a security incident. These maintenance costs are typically incurred by the DevOps and Security teams.
  4. OSS license legal exposure
    • OSS licenses are mostly permissive for use within commercial applications, but a non-trivial subset are not, or, worse, are highly adversarial when used by a commercial enterprise. Organizations that don’t manage this risk increase the potential for legal action to be taken against them.

How serious are the risks associated with the use of open source software?

Current estimates are that 70-90% of modern applications are composed of open source software. This means that only 10-30% of the code in applications developed by organizations is written by developers employed by those organizations. Without significant visibility into the security of OSS, organizations are handing over the keys to the castle to the community and hoping for the best.

Not only is OSS a significant footprint in modern application composition but its growth is accelerating. This means the associated risks are growing just as fast. This is part of the reason we see an acceleration in the frequency of software supply chain attacks. Organizations that aren’t addressing these realities are getting caught on their back foot when zero-days are announced like the recent XZ utils backdoor.

Why are SBOMs important to open source software security?

Software Bills of Materials (SBOMs) serve as the foundation of software supply chain security by providing a comprehensive “ingredient list” of all components within an application. This transparency is crucial in today’s software landscape, where modern applications are a complex web of mostly open source software dependencies that can harbor hidden vulnerabilities. 

SBOMs enable organizations to quickly identify and respond to security threats, as demonstrated during incidents like Log4Shell, where companies with centralized SBOM repositories were able to locate vulnerable components in hours rather than days. By offering a clear view of an application’s composition, SBOMs form the bedrock upon which other software supply chain security measures can be effectively built and validated.

The importance of SBOMs in open source software security cannot be overstated. Open source projects often involve numerous contributors and dependencies, making it challenging to maintain a clear picture of all components and their potential vulnerabilities. By implementing SBOMs, organizations can proactively manage risks associated with open source software, ensure regulatory compliance, and build trust with customers and partners. 

SBOMs enable quick responses to newly discovered vulnerabilities, facilitate automated vulnerability management, and support higher-level security abstractions like cryptographically signed images or source code. In essence, SBOMs provide the critical knowledge needed to navigate the complex world of open source dependencies by enabling us to channel our inner GI Joe—”knowing is half the battle” in software supply chain security.

What are the best practices for securing open source software?

Open source software has become an integral part of modern development practices, offering numerous benefits such as cost-effectiveness, flexibility, and community-driven innovation. However, with these advantages come unique security challenges. To mitigate risks and ensure the safety of your open source components, consider implementing the following best practices:

1. Model Security Scans as Unit Tests

Re-branding security checks as another type of unit test helps developers orient to DevSecOps principles and re-imagine security as an integral part of their workflow rather than a separate, post-development concern. By modeling security checks as unit tests, you can:

  • Catch vulnerabilities earlier in the development process
  • Reduce the time between vulnerability detection and remediation
  • Empower developers to take ownership of security issues
  • Create a more seamless integration between development and security teams

Remember, the goal is to make security an integral part of the development process, not a bottleneck. By treating security checks as unit tests, you can achieve a balance between rapid development and robust security practices.
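
As a concrete example of this framing, a security check can literally live in the test suite. The sketch below is a pytest test that runs Grype against the project directory and fails when anything rated Critical is reported; the JSON field names and severity strings mirror Grype’s documented output and should be verified against the version you pin.

```python
# Sketch of a "security scan as unit test": fail the suite on Critical findings.
# Assumes grype is installed; run with pytest.
import json
import subprocess

def test_no_critical_vulnerabilities():
    scan = subprocess.run(
        ["grype", "dir:.", "-o", "json"],
        capture_output=True, text=True, check=True,
    )
    critical = [
        m["vulnerability"]["id"]
        for m in json.loads(scan.stdout).get("matches", [])
        if m["vulnerability"]["severity"] == "Critical"
    ]
    assert not critical, f"critical vulnerabilities found: {critical}"
```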

2. Review Code Quality

Assessing the quality of open source code is crucial for identifying potential vulnerabilities and ensuring overall software reliability. Consider the following steps:

  • Conduct thorough code reviews, either manually or using automated tools
  • Look for adherence to coding standards and best practices
  • Look for projects developed with secure-by-default principles
  • Evaluate the overall architecture and design patterns used

Remember, high-quality code is generally more secure and easier to maintain.

3. Assess Overall Project Health

A vibrant, active community and committed maintainers are crucial indicators of a well-maintained open source project. When evaluating a project’s health and security:

  • Examine community involvement:
    • Check the number of contributors and frequency of contributions
    • Review the project’s popularity metrics (e.g., GitHub stars, forks, watchers)
    • Assess the quality and frequency of discussions in forums or mailing lists
  • Evaluate maintainer(s) commitment:
    • Check the frequency of commits, releases, and security updates
    • Check for active engagement between maintainers and contributors
    • Review the time taken to address reported bugs and vulnerabilities
    • Look for a clear roadmap or future development plans

4. Maintain a Software Dependency Inventory

Keeping track of your open source dependencies is crucial for managing security risks. To create and maintain an effective inventory:

  • Use tools like Syft or Anchore SBOM to automatically scan your application source code for OSS dependencies
    • Include both direct and transitive dependencies in your scans
  • Generate a Software Bill of Materials (SBOM) from the dependency scan
    • Your dependency scanner should also do this for you
  • Store your SBOMs in a central location that can be searched and analyzed
  • Scan your entire DevSecOps pipeline regularly (ideally every build and deploy)

An up-to-date inventory allows for quicker responses to newly discovered vulnerabilities.

5. Implement Vulnerability Scanning

Regular vulnerability scanning helps identify known security issues in your open source components. To effectively scan for vulnerabilities:

  • Use tools like Grype or Anchore Secure to automatically scan your SBOMs for vulnerabilities
  • Automate vulnerability scanning tools directly into your CI/CD pipeline
    • At minimum implement vulnerability scanning as containers are built
    • Ideally scan container registries, container orchestrators and even each time a new dependency is added during design
  • Set up alerts for newly discovered vulnerabilities in your dependencies
  • Establish a process for addressing identified vulnerabilities promptly

6. Implement Version Control Best Practices

Version control practices are crucial for securing all DevSecOps pipelines that utilize open source software:

  • Implement branch protection rules to prevent unauthorized changes
  • Require code reviews and approvals before merging changes
  • Use signed commits to verify the authenticity of contributions

By implementing these best practices, you can significantly enhance the security of your software development pipeline and reduce the risk intrinsic to open source software. This way, you can have your cake (the productivity boost of OSS) and eat it too (without the inherent risk).

How do I integrate open source software security into my development process?

DIY a comprehensive OSS security system

We’ve written about the steps to build an OSS security system from scratch in a previous blog post—below is the TL;DR:

  • Integrate dependency scanning, SBOM generation and vulnerability scanning into your DevSecOps pipeline
  • Implement a data pipeline to manage the influx of security metadata
  • Use automated policy-as-code “security tests” to provide rapid feedback to developers
  • Automate remediation recommendations to reduce cognitive load on developers

Outsource OSS security to a turnkey vendor

Modern software composition analysis (SCA) tools, like Anchore Enterprise, are purpose built to provide you with a comprehensive OSS security system out-of-the-box: all of the same features as the DIY approach, without the hassle of building a new system while still maintaining your current manual process.

  • Anchore SBOM: comprehensive dependency scanning, SBOM generation and management
  • Anchore Secure: vulnerability scanning and management
  • Anchore Enforce: automated security enforcement and compliance

Whether you want to scale an understaffed security team to increase its reach across your organization or free your team up to focus on different priorities, the buy versus build decision is a straightforward one.

Next Steps

Hopefully, you now have a strong understanding of the risks associated with adopting open source software. If you’re looking to continue your exploration into the intricacies of software supply chain security, Anchore has a catalog of deep dive content on our website. If you’d prefer to get your hands dirty, we also offer a 15-day free trial of Anchore Enterprise.

Learn about the role that SBOMs play in the security of your organization, including open source software security, in this white paper.

SSDF Attestation Template: Battle-tested Compliance Guidance

The CISA Secure Software Development Attestation form, commonly referred to as SSDF attestation, was released earlier this year, and as with any new compliance framework, knowing the exact wording and details to provide in order to meet the compliance requirements can be difficult.

We feel you here. Anchore is heavily invested in the public sector and had to generate our own SSDF attestation for our platform, Anchore Enterprise. Having gone through the process ourselves and working with a number of customers that requested our expertise on this matter, we developed a document that helps you put together an SSDF attestation that will make a compliance officer’s heart sing.

Our goal with this document is to make SSDF attestation as easy as possible and demonstrate how Anchore Enterprise is an “easy button” that you can utilize to satisfy the majority of evidence needed to achieve compliance. We have already submitted our own SSDF attestation and been approved, so we are confident these answers will help get you over the line. You can find our SSDF attestation guide on our docs site.

Explore SSDF attestation in-depth with this eBook. Learn the benefits of the framework and how your organization can take advantage of it.

How do I fill out the SSDF attestation form?

This is the difficult part, isn’t it? The SSDF attestation form looks very simple at a glance, but it has a number of sections that expect evidence to be attached that details how your organization secures both your development environments and production systems. Like all compliance standards, it doesn’t specify what will or won’t meet compliance for your organization, hence the importance of the evidence.

At Anchore, we both experienced this ourselves and helped our customers navigate this ambiguity. Out of these experiences we created a document that breaks down each item and what evidence was able to achieve compliance without being rejected by a compliance officer.

We have published this document on our Docs site for all other organizations to use as a template when attempting to meet SSDF attestation compliance.

Structure of the SSDF attestation form

The SSDF attestation is divided into 3 sections:

Section I

The first section is very short: it is where you list the type of attestation you are submitting and information about the product that you are attesting meets compliance.

Section II

This section is also short; it collects contact information. CISA wants to know how to get in contact with your organization and who is responsible for any questions or concerns that need to be addressed.

Section III

For all intents and purposes, Section III is the SSDF attestation form. This is where you will provide all of the technical supporting information to demonstrate that your organization complies with the requirements set out in the SSDF attestation form. 

The guide that Anchore has developed is focused specifically on how to fill out this section in a way that will meet the expectations of CISA compliance officers.

Where do I submit the SSDF attestation form?

If you are a US government vendor, you can submit your organization’s completed form on the Repository for Software Attestations and Artifacts. You will need an account, which can be requested on the login page. It normally takes a few days for the account to be created, so give yourself at least a week of lead time. This can be done ahead of time while you’re gathering the information to fill out your form.

It’s also possible you will receive requests to pass along the form directly; not every agency will use the repository. You may even have non-government customers asking for the form. While it is mandated by the government, the document contains a lot of good evidence that other customers will find valuable as well.

What tooling do I need to meet SSDF attestation compliance?

There are many ways to meet the technical requirements of SSDF attestation, but there is also a well-worn path. Anchore utilizes modern DevSecOps practices and assumes that the majority of our customers do as well. Below is a list of common DevSecOps tools that are typically used to help meet SSDF compliance.

Endpoint Protection

Description: Endpoint protection tools secure individual devices (endpoints) that connect to a network. They protect against malware, detect and prevent intrusions, and provide real-time monitoring and response capabilities.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: Jamf, Elastic, SentinelOne, etc.

Source Control

Description: Source control systems manage changes to source code over time. They help track modifications, facilitate collaboration among developers, and maintain different versions of code.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: GitHub, GitLab, etc.

CI/CD Build Pipeline

Description: Continuous Integration/Continuous Deployment (CI/CD) pipelines automate the process of building, testing, and deploying software. They help ensure consistent and reliable software delivery.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: Jenkins, GitLab, GitHub Actions, etc.

Single Sign-on (SSO)

Description: SSO allows users to access multiple applications with one set of login credentials. It enhances security by centralizing authentication and reducing the number of attack vectors.

SSDF Requirement: [3.1] — “Enforcing multi-factor authentication and conditional access across the environments relevant to developing and building software in a manner that minimizes security risk;”

Examples: Okta, Google Workspace, etc.

Security Information and Event Management (SIEM)

Description: SIEM tools provide real-time visibility into the performance and security of systems and applications. They can detect anomalies, track resource usage, and alert on potential issues.

SSDF Requirement: [3.1] — “Implementing defensive cybersecurity practices, including continuous monitoring of operations and alerts and, as necessary, responding to suspected and confirmed cyber incidents;”

Examples: Elasticsearch, Splunk, Panther, RunReveal, etc.

Audit Logging

Description: Audit logging captures a record of system activities, providing a trail of actions performed within the software development and build environments.

SSDF Requirement: [3.1] — “Regularly logging, monitoring, and auditing trust relationships used for authorization and access: i) to any software development and build environments; and ii) among components within each environment;”

Examples: Typically a built-in feature of CI/CD, SCM, SSO, etc.

Secrets Encryption

Description: Secrets encryption tools secure sensitive information such as passwords, API keys, and certificates used in the development and build processes.

SSDF Requirement: [3.1] — “Encrypting sensitive data, such as credentials, to the extent practicable and based on risk;”

Examples: Typically a built-in feature of CI/CD and SCM

Secrets Scanning

Description: Secrets scanning tools automatically detect and alert on exposed secrets in code repositories, preventing accidental leakage of sensitive information.

SSDF Requirement: [3.1] — “Encrypting sensitive data, such as credentials, to the extent practicable and based on risk;”

Examples: Anchore Secure or other container security platforms

OSS Component Inventory (+ Provenance)

Description: These tools maintain an inventory of open-source software components used in a project, including their origins and lineage (provenance).

SSDF Requirement: [3.3] — “The software producer maintains provenance for internal code and third-party components incorporated into the software to the greatest extent feasible;”

Examples: Anchore SBOM or other SBOM generation and management platform

Vulnerability Scanning

Description: Vulnerability scanning tools automatically detect security weaknesses in code, dependencies, and infrastructure.

SSDF Requirement: [3.4] — “The software producer employs automated tools or comparable processes that check for security vulnerabilities. In addition: a) The software producer operates these processes on an ongoing basis and prior to product, version, or update releases;”

Examples: Anchore Secure or other software composition analysis (SCA) platform

Vulnerability Management and Remediation Runbook

Description: This is a process and set of guidelines for addressing discovered vulnerabilities, including prioritization and remediation steps.

SSDF Requirement: [3.4] — “The software producer has a policy or process to address discovered security vulnerabilities prior to product release; and The software producer operates a vulnerability disclosure program and accepts, reviews, and addresses disclosed software vulnerabilities in a timely fashion and according to the timelines specified in the vulnerability disclosure program or applicable policies.”

Examples: This is not necessarily a tool but an organizational SLA on security operations. For reference, Anchore has included a screenshot from our vulnerability management guide.

Next Steps

If your organization currently provides software services to a federal agency or is looking to in the future, Anchore is here to help you in your journey. Reach out to our team and learn how you can integrate continuous and automated compliance directly into your CI/CD build pipeline with Anchore Enterprise.

Learn about the importance of both FedRAMP and SSDF compliance for selling to the federal government.



Anchore at Billington CyberSecurity Summit: Automating Defense in the AI Era

Are you gearing up for the 15th Annual Billington CyberSecurity Summit? So are we! The Anchore team will be front and center in the exhibition hall throughout the event, ready to showcase how we’re revolutionizing cybersecurity in the age of AI.

This year’s summit promises to be a banger, highlighting the evolution in cybersecurity as the latest iteration of AI takes center stage. While large language models (LLMs) like ChatGPT have been making waves across industries, the cybersecurity realm is still charting its course in this new AI-driven landscape. But make no mistake – this is no time to rest on our laurels.

As blue teams explore innovative ways to harness LLMs, cybercriminals are working overtime to weaponize the same technology. If there’s one lesson we’ve learned from every software and AI hype cycle, it’s that automation is key. As adversaries incorporate novel automations into their tactics, defenders must not just keep pace; they need to get ahead.

At Anchore, we’re all-in with this strategy. The Anchore Enterprise platform is purpose-built to automate and scale cybersecurity across your entire software development lifecycle. By automating continuous vulnerability scanning and compliance in your DevSecOps pipeline, we’re equipping warfighters with the tools they need to outpace adversaries that never sleep.

Ready to see how Anchore can transform your cybersecurity posture in the AI era? Stop by our booth for a live demo. Don’t miss this opportunity to stay ahead of the curve—book a meeting (below) with our team and take the first step towards a more secure tomorrow.

Anchore at the Billington CyberSecurity Summit

Date: September 3–6, 2024

Location: The Ronald Reagan Building and International Trade Center in Washington, DC

Our team is looking forward to meeting you! Book a demo session in advance to ensure a preferred slot.

Anchore’s Showcase: DevSecOps and Automated Compliance

We will be demonstrating the Anchore Enterprise platform at the event. Our showcase will focus on:

  1. Software Composition Analysis (SCA) for Cloud-Native Environments: Learn how our tools can help you gain visibility into your software supply chain and manage risk effectively.
  2. Automated SBOM Generation and Management: Discover how Anchore simplifies the creation and maintenance of Software Bills of Materials (SBOMs), the foundational component in software supply chain security.
  3. Continuous Scanning for Vulnerabilities, Secrets, and Malware: See our advanced scanning capabilities in action, designed to protect your applications across the DevSecOps pipeline or DoD software factory.
  4. Automated Compliance Enforcement: Experience how Anchore can streamline compliance with key standards such as cATO, RAISE 2.0, NIST, CISA, and FedRAMP, saving time and reducing human error.

We invite all attendees to visit our booth to learn more about how Anchore’s DevSecOps and automated compliance solutions can enhance your organization’s security posture in the age of AI and cloud computing.

Event Highlights

Still on the fence about whether to attend? Here is a quick run-down to help you decide. This year’s summit, themed “Advancing Cybersecurity in the AI Age,” will feature more than 40 sessions and breakouts, covering critical topics such as:

  • The increasing impact of artificial intelligence on cybersecurity
  • Cloud security challenges and solutions
  • Proactive approaches to technical risk management
  • Emerging cyber risks and defense strategies
  • Data protection against breaches and insider threats
  • The intersection of cybersecurity and critical infrastructure

The event will showcase fireside chats with top government officials, including FBI Deputy Director Paul Abbate, Chairman of the Joint Chiefs of Staff General CQ Brown, Jr., and U.S. Cyber Command Commander General Timothy D. Haugh, among others.

Next Steps and Additional Resources

Join us at the Billington Cybersecurity Summit to network with industry leaders, gain valuable insights, and explore innovative technologies that are shaping the future of cybersecurity. We look forward to seeing you there!

If you are interested in the Anchore Enterprise platform and can’t wait till the show, here are some resources to help get you started:

Learn about best practices that are setting new standards for security in DoD software factories.

Enhancing Software Security: August Webinars on DevSecOps, DoD Software Factories, and CMMC Compliance

This August, Anchore’s webinar series is coming in hot with topics on software development and cybersecurity. Stay informed on the latest trends and best practices with our full docket of live webinars that address critical aspects of software supply chain security, DevSecOps, and compliance. Whether you’re interested in adopting the Department of Defense (DoD) software factory model, automating CMMC compliance, or optimizing DevSecOps practices, these webinars offer valuable insights from engineers in the field.

WEBINAR: Adopting the DoD Software Factory Model: Insights & How Tos

Date: August 13, 2024, 10 am PT | 1 pm ET

The DoD software factory model has become a cornerstone of innovation and security in national defense and cybersecurity. This webinar will explore the building blocks needed to successfully adopt a software factory model, drawing insights from both Platform One and Black Pearl.

This session is perfect for those looking to enhance their understanding of DoD-compliant software development practices and learn how to implement them effectively. Topics covered will include how to standardize secure software development and deployment along with a demo of Anchore Enterprise’s capabilities in automating policy enforcement, security checks, and vulnerability scans.

WEBINAR: Automated Policy Enforcement for CMMC with Anchore Enterprise

Date: August 27, 2024, 2–2:30 pm ET

For organizations required to comply with the Cybersecurity Maturity Model Certification (CMMC), this webinar will offer crucial insights into automating compliance measures. With CMMC’s importance in hardening the cybersecurity posture of the defense industrial base, timely compliance is critical for software vendors that work with the federal government.

This webinar is invaluable for teams working to achieve CMMC compliance efficiently and effectively. Attendees will learn about the implementation and automation of compliance controls, how to leverage automation for vulnerability scans, and the specific controls that Anchore Enterprise automates for CMMC and NIST.

WEBINAR: DevSecOps Editorial Roundtable

Date: August 12, 2024, 1–2 pm ET

As the software industry increasingly adopts and refines practices for secure software development, optimizing DevSecOps processes has become crucial. This roundtable discussion will bring together application development and cybersecurity experts to explore strategies for shifting application security left in the development process.

This webinar is ideal for those looking to enhance their DevSecOps practices and create more robust and efficient software supply chains. Those that attend will gain insights on effective DevSecOps integration without slowing down application deployment and how to optimize security measures that developers will embrace.

Wrap-Up

Don’t miss these opportunities to deepen your understanding of modern software security topics and learn from industry experts. Each webinar offers unique perspectives and practical strategies that can be applied to improve your organization’s approach to software security.

Also, if you want to stay up-to-date on all of the events that Anchore hosts or participates in be sure to bookmark our events page and check back often!

Anchore Awarded DoD ESI DevSecOps Phase II Agreement

The Department of Defense (DoD) Enterprise Software Initiative (ESI) has awarded Anchore inclusion in its DevSecOps program, which is part of the ESI’s DevSecOps Phase II enterprise agreements.

The DoD ESI’s main objective is to streamline the acquisition process for software and services across the DoD, in order to gain significant cost savings and improve efficiency. Admittance into the ESI program validates Anchore’s commitment to be a trusted partner to the DoD, delivering advanced container vulnerability scanning as well as SBOM management solutions that meet the most stringent compliance and security requirements.

Anchore’s inclusion in the DoD ESI DevSecOps Phase II agreement is a testament to our commitment to delivering cutting-edge software supply chain security solutions. This milestone enables us to more efficiently support the DoD’s critical missions by providing them with the tools they need to secure their software development pipelines. Our continued partnership with the DoD reinforces Anchore’s position as a trusted leader in SBOM-powered DevSecOps and container security.

—Tim Zeller, EVP Sales & Marketing

The agreements also include DevSecOps luminaries HashiCorp and Rancher Government, as well as CloudBees, Infoblox, GitLab, CrowdStrike, and F5 Networks; all are now part of the preferred vendor list for DoD missions that require cybersecurity solutions generally and software supply chain security specifically.

Anchore is steadily growing its presence on federal contracts and catalogs such as Iron Patriot & Minerva, GSA, 2GIT, NASA SEWP, ITES, and most recently JFAC (Joint Federated Assurance Center).

What does this mean?

Similar to the GSA Advantage marketplace, DoD missions can now procure Anchore through the fully negotiated and approved ESI Agreements on the Solutions for Enterprise-Wide Procurement (SEWP) Marketplace. 

Anchore’s History with DoD

This award continues Anchore’s deepening relationship with the DoD. Since 2020, the DoD has vetted and approved Anchore’s container vulnerability scanning tools. Anchore is named in both the DoD Container Image Creation and Deployment Guide and the DoD Container Hardening Process Guide as a recommended solution.

The same year, Anchore was selected by the US Air Force’s Platform One to become the software supply chain vendor to implement the best practices in the above guides for all software built on the platform. Read our case study on how Anchore partnered with Platform One to build the premier DevSecOps platform for the DoD.

The following year, Anchore won the Small Business Innovation Research (SBIR) Phase III contract with Platform One to integrate directly into the Iron Bank container image process. If your image has achieved Iron Bank certification it is because Anchore’s solution has given it a passing grade. Read more about this DevSecOps success story in our case study with the Iron Bank.

Due to the success of Platform One within the US Air Force, in 2022 Anchore partnered with the US Navy to secure the Black Pearl DevSecOps platform. Similar to Platform One, Black Pearl is the go-to standard for modern software development within the Department of the Navy (DON).

As Anchore continued to expand its relationship with the DoD and federal agencies, its offerings became available for purchase through the online government marketplaces and contracts such as GSA Advantage and Second Generation IT Blanket Purchase Agreements (2GIT), NASA SEWP, Iron Patriot/Minerva, ITES and JFAC. The ESI’s DevSecOps Phase II award was built on the back of all of the previous success stories that came before it. 

Achieving ATO is now easier with the inclusion of Anchore into the DoD ESI. Read our white paper on DoD software factory best practices to reach cATO or RAISE 2.0 compliance in days versus months.

We advise on best practices that are setting new standards for security and efficiency in DoD software factories, such as hardening container images, automating policy enforcement, and continuously monitoring for vulnerabilities.

Anchore Previews Grype Support for Azure Linux 3.0

The Anchore OSS team was on the Microsoft community call for Mariner users last week. At this meeting, we got a chance to demo some new grype capabilities ahead of Azure Linux 3.0 becoming generally available.

The Anchore OSS team builds its vulnerability feeds and data sourcing out in the open. It’s important to note that an update to support a new distro release (or naming migration for past releases) can require pull requests in up to three different repositories. Let’s look at the pull requests supporting this new release of Azure Linux and walk through how we can build a local copy of the demo on our machines.

Grype ecosystem changes that support new Linux distributions

Here are the three pull requests required to get Azure Linux 3.0 working with grype.

  • Grype-db: this change asserts that the new data shape and data mapping is being done correctly when processing the new Azure Linux 3.0 vulnerability data
  • Vunnel: this change sources the vulnerability data from Microsoft and transforms it into a common schema that grype-db can distribute
  • Grype: this change adds the base distro types used by grype-db, vunnel, and grype so that matching can be correctly associated with both the old mariner and new Azure Linux 3.0 data

For this preview, let’s do a quick walkthrough of how a user could test this new functionality locally and set up a grype db for just the Azure Linux 3.0 data. When Azure Linux 3.0 is released as generally available, readers can look forward to a more technical post on how the vunnel and grype-db data pipeline works in GitHub Actions, what matching looks like, and how syft/grype discern the different distribution versions.

Let’s get our demo working locally in anticipation of the coming release!

Setting up the Demo

To get the demo set up, readers will want to make sure they have the following installed:

  • Git to clone and interact with the repositories
  • The latest version of Golang
  • A managed version of Python at 3.12.x. If you need help getting a managed version of Python set up, we recommend mise (see the sketch after this list).
  • The Poetry Python dependency manager
  • Make, which is required by the development and bootstrap commands in the three repositories.
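
Here is a minimal sketch of that environment setup, assuming you use mise to manage Python; the commands and versions are illustrative and should be adjusted to your own machine:

# Install a managed Python 3.12 with mise and activate it for the current directory
mise use python@3.12

# Install the Poetry dependency manager into that Python
pip install poetry

# Sanity-check the toolchain before running the bootstrap script below
python --version   # should report 3.12.x
go version
git --version
make --version
poetry --version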

After the dev dependencies are installed, clone down the three repositories listed above (grype, grype-db, and vunnel) into a local development folder and checkout the branches listed in the above pull requests. I have included a script to do all this for you below.

#!/bin/bash

# Define the repositories and the branch
REPOS=(
    "https://github.com/anchore/grype.git"
    "https://github.com/anchore/grype-db.git"
    "https://github.com/anchore/vunnel.git"
)
BRANCH="feat-azure-linux-3-support"
FOLDER="demo"

# Create the folder if it doesn't exist
mkdir -p "$FOLDER"

# Change to the folder
cd "$FOLDER" || exit

# Clone each repository, checkout the branch, and run make bootstrap
for REPO in "${REPOS[@]}"; do
    # Extract the repo name from the URL
    REPO_NAME=$(basename -s .git "$REPO")

    # Clone the repository
    git clone "$REPO"

    # Change to the repository directory
    cd "$REPO_NAME" || exit

    # Checkout the branch
    git checkout "$BRANCH"

    # Run make bootstrap
    make bootstrap

    # Special handling for grype-db repository
    if [ "$REPO_NAME" == "grype-db" ]; then
        # Add the replace directive to go.mod
        echo 'replace github.com/anchore/grype v0.78.0 => ../grype' >> go.mod

        # Run go mod tidy
        go mod tidy
    fi

    # Special handling for grype repository
    if [ "$REPO_NAME" == "grype" ]; then
        # Run go mod tidy
        go mod tidy
    fi

    # Change back to the parent folder
    cd ..

done

echo "All repositories have been cloned, checked out, and built."

Pulling the new Azure Linux 3.0 vulnerability data

We will be doing all of our work in the vunnel repository. We needed to pull the other two repositories because vunnel orchestrates and builds those binaries to accomplish its data aggregation goals.

To get all the repositories built and usable in vunnel, run the following commands:

cd demo/vunnel
poetry shell
make dev provider="mariner"
make update-db

That should produce output similar to the following:

Entering vunnel development shell...
• Configuring with providers: mariner ...
• Writing grype config: ./demo/vunnel/.grype.yaml ...
• Writing grype-db config: ./demo/vunnel/.grype-db.yaml ...
• Activating poetry virtual env: /Library/Caches/pypoetry/virtualenvs/vunnel-0PTQ8JOw-py3.12 ...
• Installing editable version of vunnel ...
• Building grype ...
• Building grype-db ...
mkdir -p ./bin

Note: development builds grype and grype-db are now available in your path.
To update these builds run 'make build-grype' and 'make build-grype-db' respectively.
To run your provider and update the grype database run 'make update-db'.
Type 'exit' to exit the development shell.

.....Records being processed

This builds a local vulnerability db containing just the Azure Linux 3.0 data. You can interact with this data and use the locally built grype against an older preview image of Azure Linux 3.0.

Let’s run the following command to interact with the new Azure Linux 3.0 data and preview grype against an older dev build of the container image to make sure everything is working correctly:

./bin/grype azurelinuxpreview.azurecr.io/public/azurelinux/base/core:3.0.20240401-amd64

  Loaded image azurelinuxpreview.azurecr.io/public/azurelinux/base/core:3.0.20240401-amd64
  Parsed image sha256:3017b52132fb240b9c714bd09e88c4bc1f8e55860de23c74fe2431b8f75981dd
  Cataloged contents 9b4fcfdd3a247b97e02cda6011cd6d6858dcdf98d1f95fb8af54d57d2da89d5f
   ├──  Packages                        [75 packages]
   ├──  File digests                    [1,495 files]
   ├──  File metadata                   [1,495 locations]
   └──  Executables                     [380 executables]
  Scanned for vulnerabilities     [17 vulnerability matches]
   ├── by severity: 0 critical, 8 high, 7 medium, 2 low, 0 negligible
   └── by status:   17 fixed, 0 not-fixed, 0 ignored
NAME          INSTALLED      FIXED-IN         TYPE  VULNERABILITY   SEVERITY
expat         2.5.0-1.azl3   0:2.6.2-1.azl3   rpm   CVE-2024-28757  High
expat         2.5.0-1.azl3   0:2.6.2-1.azl3   rpm   CVE-2023-52425  High
expat         2.5.0-1.azl3   0:2.6.2-1.azl3   rpm   CVE-2023-52426  Medium
expat-libs    2.5.0-1.azl3   0:2.6.2-1.azl3   rpm   CVE-2024-28757  High
expat-libs    2.5.0-1.azl3   0:2.6.2-1.azl3   rpm   CVE-2023-52425  High
expat-libs    2.5.0-1.azl3   0:2.6.2-1.azl3   rpm   CVE-2023-52426  Medium
glibc         2.38-3.azl3    0:2.38-6.azl3    rpm   CVE-2023-6779   High
glibc         2.38-3.azl3    0:2.38-6.azl3    rpm   CVE-2023-6246   High
glibc         2.38-3.azl3    0:2.38-6.azl3    rpm   CVE-2023-5156   High
glibc         2.38-3.azl3    0:2.38-6.azl3    rpm   CVE-2023-4911   High
glibc         2.38-3.azl3    0:2.38-6.azl3    rpm   CVE-2023-6780   Medium
libgcc        13.2.0-3.azl3  0:13.2.0-7.azl3  rpm   CVE-2023-4039   Medium
libstdc++     13.2.0-3.azl3  0:13.2.0-7.azl3  rpm   CVE-2023-4039   Medium
openssl       3.1.4-3.azl3   0:3.3.0-1.azl3   rpm   CVE-2023-6237   Medium
openssl       3.1.4-3.azl3   0:3.3.0-1.azl3   rpm   CVE-2024-2511   Low
openssl-libs  3.1.4-3.azl3   0:3.3.0-1.azl3   rpm   CVE-2023-6237   Medium
openssl-libs  3.1.4-3.azl3   0:3.3.0-1.azl3   rpm   CVE-2024-2511   Low

Updating the image

Many vulnerable container images can be remediated by consuming the upstream security team’s fixes. Let’s run the same command against the latest preview version released from Microsoft:

./bin/grype azurelinuxpreview.azurecr.io/public/azurelinux/base/core:3.0

  Loaded image azurelinuxpreview.azurecr.io/public/azurelinux/base/core:3.0
  Parsed image sha256:234cac9f296dd1d336eecde7a97074bec0d691c6fd87bd4ff098b5968e579ce1
  Cataloged contents 9964aca715152fb6b14bfb57be5e27c655fb7d733a33dd995a5ba72157c54ee7
   ├──  Packages                        [76 packages]
   ├──  File digests                    [1,521 files]
   ├──  File metadata                   [1,521 locations]
   └──  Executables                     [380 executables]
  Scanned for vulnerabilities     [0 vulnerability matches]
   ├── by severity: 0 critical, 0 high, 0 medium, 0 low, 0 negligible
   └── by status:   0 fixed, 0 not-fixed, 0 ignored
No vulnerabilities found

Awesome! The Microsoft security team has been highly responsive in publishing up-to-date Azure Linux 3 preview images that contain fixes or remediations for any security findings.

We’re excited to see the new Azure Linux 3 release when it’s ready! In the meantime, you can grab our latest Grype release and try it on all your other containers. If you have questions or problems, join the Anchore Open Source Team on Discourse or check out one of our weekly Live Streams on YouTube.

Automate your SBOM management with Anchore Enterprise. Get instant access with a 15-day free trial.

Anchore Enterprise 5.8 Adds KEV Enrichment Feed

Today we have released Anchore Enterprise 5.8, featuring the integration of the U.S. Cybersecurity and Infrastructure Security Agency’s (CISA) Known Exploited Vulnerabilities (KEV) catalog as a new vulnerability feed.

Previously, Anchore Enterprise matched software libraries and frameworks inside applications against vulnerability databases such as the National Vulnerability Database (NVD), the GitHub Advisory Database, or individual vendor feeds. With Anchore Enterprise 5.8, customers can augment their vulnerability feeds with the KEV catalog without having to leave the dashboard. In addition, teams can automatically flag exploitable vulnerabilities as software is being developed or gate build artifacts from being released into production.

Before we jump into what all of this means, let’s take a step back and get some context to KEV and its impact on DevSecOps pipelines.

What is CISA KEV?

The KEV (Known Exploited Vulnerabilities) catalog is a critical cybersecurity resource maintained by the U.S. Cybersecurity and Infrastructure Security Agency (CISA). It is a curated database of vulnerabilities that are currently being actively exploited in the wild. While addressing these vulnerabilities is mandatory for U.S. federal agencies under Binding Operational Directive 22-01, the KEV catalog serves as an essential public resource for improving cybersecurity for any organization.

The primary difference between CISA KEV and a standard vulnerability feed (e.g., the CVE program) is the qualifier “actively exploited”. Actively exploited vulnerabilities are being used by attackers to compromise systems right now. They are real, and your organization may be standing in the line of fire, whereas the CVE program lists vulnerabilities that may or may not currently have available exploits. Due to the imminent threat they pose, actively exploited vulnerabilities are considered the highest risk outside of an active security incident.
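
For a concrete sense of what the catalog contains, here is a minimal sketch that pulls the KEV JSON feed CISA publishes and checks whether a given CVE appears in it. The feed URL and field names reflect the catalog at the time of writing and may change; the CVE ID is just an example:

#!/bin/bash
# Download the CISA KEV catalog as JSON (URL may change over time)
KEV_URL="https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
curl -sSfL "$KEV_URL" -o kev.json

# Count how many known exploited vulnerabilities are currently listed
jq '.vulnerabilities | length' kev.json

# Check whether a particular CVE is on the KEV list
CVE="CVE-2023-4911"
jq --arg cve "$CVE" '.vulnerabilities[] | select(.cveID == $cve)' kev.json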

The benefits of KEV enrichment

The KEV catalog offers significant benefits to organizations striving to improve their cybersecurity posture. One of its primary advantages is its high signal-to-noise ratio. By focusing exclusively on vulnerabilities that are actively being exploited in the wild, the KEV cuts through the noise of countless potential vulnerabilities, allowing developers and security teams to prioritize their efforts on the most critical and immediate threats. This focused approach ensures that limited resources are allocated to addressing the vulnerabilities that pose the greatest risk, significantly enhancing an organization’s security efficiency.

Moreover, the KEV can be leveraged as a powerful tool in an organization’s development and deployment processes. By using the KEV as a trigger for build pipeline gates, companies can prevent exploitable vulnerabilities from being promoted to production environments. This proactive measure adds an extra layer of security to the software development lifecycle, reducing the risk of deploying vulnerable code. 

Additionally, while adherence to the KEV is not yet a universal compliance requirement, it represents a security best practice that forward-thinking organizations are adopting. Given the trend of such practices evolving into compliance mandates, integrating the KEV into security protocols can be seen as a form of future-proofing, potentially easing the transition if and when such practices inevitably become compliance requirements.

How Anchore Enterprise delivers KEV enrichment

With Anchore Enterprise, CISA KEV is now a vulnerability feed similar to any other data feed that gets imported into the system. Anchore Enterprise can be configured to pull this directly from the source as part of the deployment feed service.

To make use of the new KEV data, we have added a rule option to the Anchore Policy Engine that can be configured to return STOP or WARN when a vulnerability on the KEV list is detected. When any application build, registry store, or runtime deploy occurs, Anchore Enterprise evaluates the artifact’s SBOM against the security policy; if the SBOM has been annotated with a KEV entry, the policy engine can return a STOP value to tell the build pipeline to fail the step and report the KEV entry as the source of the violation.

To configure the KEV feed as a trigger in the policy engine, first select vulnerabilities as the gate, then kev list as the trigger. Finally, choose an action.

Anchore Enterprise dashboard policy engine rule set configuration showing vulnerabilities as the gate value and the CISA KEV catalog as the trigger value.

After you save the new rule, you will see the kev list rule as part of the entire policy.

Anchore Enterprise 5.8 policy engine dashboard showing all rules for the default policy including the CISA KEV catalog rule at the top (highlighted in the red square).

After scanning a container with the policy that has the kev list rule in it, you can view all dependencies that match the kev list vulnerability feed.

Anchore Enterprise 5.8 vulnerability scan report with policy enrichment and policy actions. All software dependencies are matched against the CISA KEV catalog of known exploitable vulnerabilities and the assigned action is reported in the dashboard.
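
As a rough sketch of how this gate can be enforced in a pipeline, assuming a policy that contains the kev list rule and the anchorectl workflow described elsewhere on this blog, a CI step might boil down to:

# Submit the image SBOM to Anchore Enterprise and wait for analysis to complete
anchorectl image add --wait ${IMAGE}

# Evaluate the image against the active policy; -f makes the command exit non-zero
# on a STOP result, so a KEV match fails this pipeline step
anchorectl image check --detail -f ${IMAGE}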

Next Steps

To stay on top of our releases, sign up for our monthly newsletter or follow our LinkedIn account. If you are already an Anchore customer, please reach out to your account manager to upgrade to 5.8 and gain access to KEV support. We also offer a 15-day free trial to get hands-on with Anchore Enterprise, or you can reach out to us for a guided tour.


DevSecOps Evolution: How DoD Software Factories Are Reshaping Federal Compliance

Anchore’s Vice President of Security, Josh Bressers, recently did an interview with Fed Gov Today about the role of automation in DevSecOps and how it is impacting the US federal government. We’ve condensed the highlights of the interview into a snackable blog post below.

Automation is the foundation of DevSecOps

Automation isn’t just a buzzword; it is the foundation of DevSecOps. It is what gives meaning to marketing taglines like “shift left”. The point of DevSecOps is to create automated workflows that provide feedback to software developers as they are writing the application. This unwinds the previous practice of artificially grouping all of the “compliance” or “security” tasks into large blocks at the end of the development process. The challenge with this pattern is that feedback is delayed, and design decisions get made that become difficult to undo after development has completed. By inverting the narrative and automating feedback as design decisions are made, developers are able to prevent compliance or security issues before they become deeply embedded in the software.

DoD Software Factories are leading the way in DevSecOps adoption

The US Department of Defense (DoD) is at the forefront of implementing DevSecOps through its DoD software factory model. The US Navy’s Black Pearl and the Air Force’s Platform One are perfect examples of this program. These organizations are leveraging automation to streamline compliance work. Instead of relying on manual documentation ahead of Authority to Operate (ATO) reviews, automated workflows built directly into the software development pipeline provide direct feedback to developers. This approach has proven highly effective, as Bressers emphasizes in his interview:

It’s obvious why the DoD software factory model is catching on. It’s because they work. It’s not just talk, it’s actually working. There’s many organizations that have been talking about DevSecOps for a long time. There’s a difference between talking and doing. Software factories are doing and it’s amazing.

—Josh Bressers, VP of Security, Anchore

Benefits of compliance automation

By giving compliance the same treatment as security (i.e., automate all the things), tasks that once took weeks or even months can now be completed in minutes or hours. This dramatic reduction in time-to-compliance not only accelerates development cycles but also allows teams to focus on collaboration and solution delivery rather than getting bogged down in procedural details. The result is a “shift left” approach that extends beyond security to compliance as well. When compliance is integrated early in the development process, the benefits cascade down the entire development waterfall.

Compliance automation is shifting the policy checks left into the software development process. What this means is that once your application is finished; instead of the compliance taking weeks or months, we’re talking hours or minutes.

—Josh Bressers, VP of Security, Anchore

Areas for improvement

While automation is crucial, there are still several areas for improvement in DevSecOps environments. Key focus areas include ensuring developers fully understand the automated processes, improving communication between team members and agencies, and striking the right balance between automation and human oversight. Bressers emphasizes the importance of letting “people do people things” while leveraging computers for tasks they excel at. This approach fosters genuine collaboration and allows teams to focus on meaningful work rather than simply checking boxes to meet compliance requirements.

Standardizing communication workflows with integrated developer tools

Software development pipelines are primarily platforms to coordinate the work of distributed teams of developers. They act like old-fashioned switchboard operators that connect one member of the development team to the next as they hand off work in the development production line. Leveraging developer tooling like GitLab or GitHub standardizes communication workflows. These platforms provide mechanisms for different team members to interact across various stages of the development pipeline. Teams can easily file and track issues, automatically pass or fail tests (e.g., compliance tests), and maintain a searchable record of discussions. This approach facilitates better understanding between developers and those identifying issues, leading to more efficient problem-solving and collaboration.

The government getting ahead of the private sector: an unexpected narrative inversion

In a surprising turn of events, Bressers points out that government agencies are now leading the way in DevSecOps implementation by integrating automated compliance. Historically often seen as technologically behind, federal agencies, through the DoD software factory model, are setting new standards that are likely to influence the private sector. As these practices become more widespread, contractors and private companies working with the government will need to adapt to these new requirements. This shift is evident in recent initiatives like the SSDF attestation questionnaire and White House Executive Order (EO) 14028. These initiatives are setting new expectations for federal contractors, signaling a broader move towards making compliance a native pillar of DevSecOps.

This is one of the few instances in recent memory where the government is truly leading the way. Historically the government has often been the butt of jokes about being behind in technology but these DoD software factories are absolutely amazing. The next thing that we’re going to see is the compliance expectations that are being built into these DoD software factories will seep out into the private sector. The SSDF attestation and the White House Executive Order are only the beginning. Ironically my expectation is everyone is going to have to start paying attention to this, not just federal agencies.

—Josh Bressers, VP of Security, Anchore

Next Steps

If you’re interested to learn more about how to future-proof your software supply chain with compliance automation via the DoD software factory model, be sure to read our white paper.

If you’d like to hear the interview in full, be sure to watch it on Fed Gov Today’s Youtube channel.

Automate Container Vulnerability Scanning in CI with Anchore

Achieve container vulnerability scanning nirvana in your CI pipeline with Anchore Enterprise and your preferred CI platform, whether it’s GitHub, GitLab, or Jenkins. Identifying vulnerabilities, security issues, and compliance policy failures early in the software development process is crucial. It’s certainly preferable to uncover these issues during development rather than having them discovered by a customer or during an external audit.

Early detection of vulnerabilities ensures that security and compliance are integrated into your development workflow, reducing the risk of breaches and compliance violations. This proactive approach not only protects your software but also saves time and resources by addressing issues before they escalate.

Enabling CI Integration

At a high level, the steps to connect any CI platform to Enterprise are broadly the same, with implementation details differing between each vendor.

  • Enable network connectivity between CI and Enterprise
  • Capture Enterprise configuration for AnchoreCTL
  • Craft an automation script to operate after the build process
    • Install AnchoreCTL
    • Capture built container details
    • Use AnchoreCTL to submit container details to Enterprise

Once SBOM generation is integrated into the CI pipeline and the SBOMs are submitted to Anchore Enterprise, the following features can quickly be leveraged:

  • Known vulnerabilities with severity, and fix availability
  • Search for accidental ‘secrets’ sharing such as private API keys
  • Scan for malware like trojans and viruses
  • Policy enforcement to comply with standards like FedRAMP, CISA and DISA
  • Remediation by notifying developers and other agents via standard tools like GitHub issues, JIRA, and Slack
  • Scheduled reporting on container insights

CI Integration by Example

Taking GitHub Actions as an example, we can outline the requirements and settings to get up and running with automated SBOM generation and vulnerability management.

Network connectivity

AnchoreCTL uses port 8228 for communication with the Anchore Enterprise SBOM ingest and management API. Ensure the Anchore Enterprise host, where this is configured, is accessible on that port from GitHub. This is site specific and may require firewall, VLAN, or other network changes.
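
A quick way to confirm that the CI runner can actually reach the API before wiring up the full workflow is a simple connection check; the hostname below is a placeholder:

# Verify the runner can reach Anchore Enterprise on port 8228
nc -zv anchore-enterprise.example.com 8228

# Alternatively, attempt a TCP connection with curl (we only care that the connection succeeds)
curl -sv --connect-timeout 5 http://anchore-enterprise.example.com:8228/ -o /dev/null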

Required configuration

AnchoreCTL requires only three environment variables, typically set as GitHub secrets.

  • ANCHORECTL_URL – the URL of the Anchore Enterprise API endpoint. e.g. http://anchore-enterprise.example.com:8228
  • ANCHORECTL_USERNAME – the user account in Anchore Enterprise that anchorectl will authenticate as
  • ANCHORECTL_PASSWORD – the password for the account, set on the Anchore Enterprise instance

On the GitHub repository go to Settings -> Secrets and Variables -> Actions.

Under the ‘Variables’ tab, add ANCHORECTL_URL & ANCHORECTL_USERNAME, and set their values. In the ‘Secrets’ tab, add ANCHORECTL_PASSWORD and set the value.
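
If you prefer the command line over the web UI, the same values can be set with the GitHub CLI; this is a sketch that assumes gh is authenticated against your repository and uses placeholder values:

# Repository variables (non-sensitive)
gh variable set ANCHORECTL_URL --body "http://anchore-enterprise.example.com:8228"
gh variable set ANCHORECTL_USERNAME --body "ci-scanner"

# Repository secret (sensitive); gh prompts for the value when --body is omitted
gh secret set ANCHORECTL_PASSWORD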

Automation script

Below are the sample snippets from a GitHub action that should be placed in the repository under .github/workflows to enable SBOM generation in Anchore Enterprise. In this example, we build a container image, push it to a registry, and then submit it to Anchore Enterprise for scanning.

First, our action needs a name:

name: Anchore Enterprise Centralized Scan

Pick one or more from this next section, depending on when you require the action to be triggered. It could be based on pushes to the main or other named branches, on a timed schedule, or manually.

Commonly when configuring an action for the first time, manual triggering is used until proven working, then timed or branch automation is enabled later.

on:
  ## Action runs on a push to the branches listed
  push:
    branches:
      - main
  ## Action runs on a regular schedule
  schedule: 
      ## Run at midnight every day
    - cron: '0 0 * * *'
  ## Action runs on demand build
  workflow_dispatch:
    inputs:
      mode:
        description: 'On-Demand Build'  

In the env section we pass in the settings gathered and configured inside the GitHub web UI earlier. Additionally, the optional ANCHORECTL_FAIL_BASED_ON_RESULTS boolean defines (if true) whether we want the entire action to fail based on the scan results. This may be desirable to block further processing if any vulnerabilities, secrets, or malware are identified.

env:
  REGISTRY: ghcr.io # the GitHub Container Registry, as used in the build and scan jobs below
  ANCHORECTL_URL: ${{ vars.ANCHORECTL_URL }}
  ANCHORECTL_USERNAME: ${{ vars.ANCHORECTL_USERNAME }}
  ANCHORECTL_PASSWORD: ${{ secrets.ANCHORECTL_PASSWORD }}
  ANCHORECTL_FAIL_BASED_ON_RESULTS: false

Now we start the actual body of the action, which comprises two jobs, ‘Build’ and ‘Anchore’. The ‘Build’ example here will use externally defined steps to checkout the code in the repo and build a container using docker, then push the resulting image to the container registry. In this case we build and publish to the GitHub Container Registry (ghcr), however, we could publish elsewhere.

jobs:

  Build:
    runs-on: ubuntu-latest
    steps:

    - name: "Set IMAGE environmental variables"
      run: |
        echo "IMAGE=${REGISTRY}/${GITHUB_REPOSITORY}:${GITHUB_REF_NAME}" >> $GITHUB_ENV

    - name: Checkout Code
      uses: actions/checkout@v3

    - name: Log in to the Container registry
      uses: docker/login-action@v2
      with:
        registry: ${{ env.REGISTRY }}
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}      

    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2

    - name: build local container
      uses: docker/build-push-action@v3
      with:
        tags: ${{ env.IMAGE }}
        push: true
        load: false

The next job actually generates the SBOM; let’s break it down. First, the usual boilerplate. Note that this job depends on the previous ‘Build’ job having already run.

  Anchore:
    runs-on: ubuntu-latest
    needs: Build

    steps:

The same registry settings are used here as in the ‘Build’ job above, then we check out the code onto the action runner. The IMAGE variable will be used by the anchorectl command later to submit into Anchore Enterprise.

    - name: "Set IMAGE environment variables"
      run: |
        echo "IMAGE=${REGISTRY}/${GITHUB_REPOSITORY}:${GITHUB_REF_NAME}" >> $GITHUB_ENV

    - name: Checkout Code
      uses: actions/checkout@v3

Installing the AnchoreCTL binary inside the action runner is required to send the request to the Anchore Enterprise API. Note that the version number specified as the last parameter should match the version of your Anchore Enterprise deployment.

    - name: Install Latest anchorectl Binary
      run: |
        curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b ${HOME}/.local/bin v5.7.0
        export PATH="${HOME}/.local/bin/:${PATH}"

The Connectivity check is a good way to ensure anchorectl is installed correctly, and configured to connect to the right Anchore Enterprise instance.

    - name: Connectivity Check
      run: |
        anchorectl version
        anchorectl system status
        anchorectl feed list

Now we actually queue the image up for scanning by our Enterprise instance. Note the use of --wait to ensure the GitHub Action pauses until the backend Enterprise instance completes the scan. Otherwise the next steps would likely fail, as the scan would not yet be complete.

    - name: Queue Image for Scanning by Anchore Enterprise
      run: |
        anchorectl image add --no-auto-subscribe --wait --dockerfile ./Dockerfile --force ${IMAGE} 

Once the backend Anchore Enterprise has completed the vulnerability, malware, and secrets scan, we use anchorectl to pull the list of vulnerabilities and display them as a table. This can be viewed in the GitHub Action log, if required.

    - name: Pull Vulnerability List
      run: |
        anchorectl image vulnerabilities ${IMAGE} 

Finally, the image check will pull down the results of the policy evaluation as defined in your Anchore Enterprise. This will likely be a significantly shorter output than the full vulnerability list, depending on your policy bundle.

If the environment variable ANCHORECTL_FAIL_BASED_ON_RESULTS was set true earlier in the action, or -f is added to the command below, the action will return as a ‘failed’ run.

    - name: Pull Policy Evaluation
      run: |
        anchorectl image check --detail ${IMAGE}

That’s everything. If configured correctly, the action will run as required, and directly leverage the vulnerability, malware and secrets scanning of Anchore Enterprise.

Not just GitHub

While the example above is clearly GitHub specific, a similar configuration can be used in GitLab pipelines, Jenkins, or indeed any CI system that supports arbitrary shell scripts in automation.
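
As a rough, CI-agnostic sketch, the core of the integration boils down to a short shell script; it assumes the three ANCHORECTL_* environment variables are exported by your CI system and that IMAGE holds the fully qualified tag of the image that was just built:

#!/bin/bash
set -euo pipefail

# Install anchorectl; pin the version to match your Anchore Enterprise deployment
curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b "${HOME}/.local/bin" v5.7.0
export PATH="${HOME}/.local/bin:${PATH}"

# Confirm connectivity to Anchore Enterprise
anchorectl system status

# Submit the image and wait for the backend scan to complete
anchorectl image add --no-auto-subscribe --wait --force "${IMAGE}"

# Print the vulnerability list, then evaluate policy; -f fails the script on a STOP result
anchorectl image vulnerabilities "${IMAGE}"
anchorectl image check --detail -f "${IMAGE}"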

Conclusion

By integrating Anchore Enterprise into your CI pipeline, you can achieve a higher level of security and compliance for your software development process. Automating vulnerability scanning and SBOM management ensures that your software is secure, compliant, and ready for deployment.

Automate your SBOM management with Anchore Enterprise. Get instant access with a 15-day free trial.

High volume image scanning and vulnerability management at the Iron Bank (Platform One)

The Iron Bank provides Platform One and any US Department of Defense (DoD) agency with a hardened and centralized container image repository that supports the end-to-end lifecycle needed for secure software development. Anchore and the Iron Bank have been collaborating since 2020 to balance deployment velocity and policy compliance while maintaining rigorous security standards and adapting to new security threats.

The Challenge

The Iron Bank development team is responsible for the integrity and security of 1,800 base images that are provided to build and create software applications across the DoD. They face difficult tasks such as:

  • Providing hardened components for downstream applications across the DoD
  • Meeting rigorous security standards crucial for military systems
  • Improving deployment frequency while maintaining policy compliance
  • Reducing the burden of false positives on the development team

Camdon Cady, Chief Technology Officer at Platform One:

People want to be security minded, and they want to do the right thing. But what they really want is tooling that helps them to do that with all the necessary information in one place. That’s why we looked to Anchore for help.

The Solution

Anchore’s engineering team is deeply embedded with the Iron Bank infrastructure and development team to improve and maintain DevSecOps standards. Anchore Enterprise is the software supply chain security tool of choice for this work.

The Results: Sustainable security at DevOps speed

The partnership between Iron Bank and Anchore has yielded impressive results:

  • Reduced False Positives: The introduction of an exclusion feed captured over 12,000 known false positives, significantly reducing the security assessment load.
  • Improved SBOM Accuracy: Custom capabilities like SBOM Hints and SBOM Corrections allow for more precise component identification and vulnerability mapping.
  • Standardized Compliance: A jointly developed custom policy enforces the DoD Container Hardening requirements consistently across all images.
  • Enhanced Scanning Capabilities: Additions like time-based allowlisting, content hints, and image scanning have expanded Iron Bank’s security coverage.
  • Streamlined Processes: The standardized scanning process adheres to the DoD’s Container Hardening Guide while delivering high-quality vulnerability and compliance findings.

Even though security is important for all organizations, the stakes are higher for the DoD. What we need is a repeatable development process. It’s imperative that we have a standardized way of building secure software across our military agencies.

Camdon Cady, Chief Technology Officer at Platform One

Download the full case study to learn more about how Anchore Enterprise can help your organization achieve a proactive security stance while maintaining development velocity.

How Infoblox Scaled Product Security and Compliance with Anchore Enterprise

In today’s fast-paced software development world, maintaining the highest levels of security and compliance is a daunting challenge. Our new case study highlights how Infoblox, a leader in Enterprise DDI (DNS, DHCP, IPAM), successfully scaled their product security and compliance efforts using Anchore Enterprise. Let’s dive into their journey and the impressive results they achieved.

The Challenge: Scaling security in high-velocity environments

Infoblox faced several critical challenges in their product security efforts:

  • Implementing “shift-left” security at scale for 150 applications developed by over 600 engineers with a security team of 15 (a 40:1 ratio!)
  • Managing vulnerabilities across thousands of containers produced monthly
  • Maintaining multiple compliance certifications (FedRAMP, SOC 2, StateRAMP, ISO 27001)
  • Integrating seamlessly into existing DevOps workflows

“When I first started, I was manually searching GitHub repos for references to vulnerable libraries,” recalls Sukhmani Sandhu, Product Security Engineer at Infoblox. This manual approach was unsustainable and prone to errors.

The Solution: Anchore Enterprise

To address these challenges, Infoblox turned to Anchore Enterprise to provide:

  • Container image scanning with low false positives
  • Comprehensive vulnerability and CVE management
  • Native integrations with Amazon EKS, Harbor, and Jenkins CI
  • A FedRAMP, SOC 2, StateRAMP, and ISO compliant platform

Chris Wallace, Product Security Engineering Manager at Infoblox, emphasizes the importance of accuracy: “We’re not trying to waste our team or other team’s time. We don’t want to report vulnerabilities that don’t exist. A low false-positive rate is paramount.”

Impressive Results

The implementation of Anchore Enterprise transformed Infoblox’s product security program:

  • 75% reduction in time for manual vulnerability detection tasks
  • 55% reduction in hours allocated to retroactive vulnerability remediation
  • 60% reduction in hours spent on compliance tasks
  • Empowered the product security team to adopt a proactive, “shift-left” security posture

These improvements allowed the Infoblox team to focus on higher-value initiatives like automating policy and remediation. Developers even began self-adopting scanning tools during development, catching vulnerabilities before they entered the build pipeline.

“We effectively had no tooling before Anchore. Everything was manual. We reduced the amount of time on vulnerability detection tasks by 75%,” says Chris Wallace.

Conclusion: Scaling security without compromise

Infoblox’s success story demonstrates that it’s possible to scale product security and compliance efforts without compromising on development speed or accuracy. By leveraging Anchore Enterprise, they transformed their security posture from reactive to proactive, significantly reduced manual efforts, and maintained critical compliance certifications.

Are you facing similar challenges in your organization? Download the full case study and take the first step towards a secure, compliant, and efficient development environment. Or learn more about how Anchore’s container security platform can help your organization.

Introduction to the DoD Software Factory

In the rapidly evolving landscape of national defense and cybersecurity, the concept of a Department of Defense (DoD) software factory has emerged as a cornerstone of innovation and security. These software factories represent an integration of the principles and practices found within the DevSecOps movement, tailored to meet the unique security requirements of the DoD and Defense Industrial Base (DIB). 

By fostering an environment that emphasizes continuous monitoring, automation, and cyber resilience, DoD Software Factories are at the forefront of the United States Government’s push towards modernizing its software and cybersecurity capabilities. This initiative not only aims to enhance the velocity of software development but also ensures that these advancements are achieved without compromising on security, even against the backdrop of an increasingly sophisticated threat landscape.

Building and running a DoD software factory is so central to the future of software development that “Establish a Software Factory” is one of the explicitly named plays from the DoD DevSecOps Playbook. On top of that, the compliance capstone of the authorization to operate (ATO), or its DevSecOps-infused cousin the continuous ATO (cATO), effectively requires a software factory in order to meet the requirements of the standard. In this blog post, we’ll break down the concept of a DoD software factory and give a high-level overview of the components that make one up.

What is a DoD software factory?

A Department of Defense (DoD) Software Factory is a software development pipeline that embodies the principles and tools of the larger DevSecOps movement with a few choice modifications that conform to the extremely high threat profile of the DoD and DIB. It is part of the larger software and cybersecurity modernization trend that has been a central focus for the United States Government in the last two decades.

The goal of a DoD Software Factory is to create an ecosystem that enables continuous delivery of secure software that meets the needs of end-users while ensuring cyber resilience (a DoD catchphrase that emphasizes the transition from point-in-time security compliance to continuous security compliance). In other words, the goal is to leverage automation of software security tasks in order to fulfill the DevSecOps movement’s promise of increasing the velocity of software development.

What is an example of a DoD software factory?

Platform One is the canonical example of a DoD software factory. Run by the US Air Force, it offers a comprehensive portfolio of software development tools and services. It has come to prominence due to its hosted services: Repo One for source code hosting and collaborative development, Big Bang for an end-to-end DevSecOps CI/CD platform, and Iron Bank for centralized container storage (i.e., a container registry). These services have demonstrated that the principles of DevSecOps can be integrated into mission-critical systems while still preserving the highest levels of security to protect the most classified information.

If you’re interested in learning more about how Platform One has unlocked the productivity bonus of DevSecOps while still maintaining DoD levels of security, watch our webinar with Camdon Cady, Chief of Operations and Chief Technology Officer of Platform One.

Who does it apply to?

Federal Service Integrators (FSI)

Any organization that works with the DoD as a federal service integrator will want to be intimately familiar with DoD software factories, as they will either have to build on top of an existing software factory or, if the mission or program wants full control over its software factory, build their own for the agency.

Department of Defense (DoD) Mission

Any Department of Defense (DoD) mission will need to be well-versed in DoD software factories, as all of their software and systems will be required to run on a software factory and to both reach and maintain a cATO.

What are the components of a DoD Software Factory?

A DoD software factory is composed of both high-level principles and specific technologies that meet these principles. Below is a list of some of the most significant principles of a DoD software factory:

Principles of DevSecOps embedded into a DoD software factory

  1. Break down organizational silos
    • This principle is borrowed directly from the DevSecOps movement; specifically, the DoD aims to integrate software development, test, deployment, security, and operations into a single culture within the organization.
  2. Open source and reusable code
    • Composable software building blocks are another principle of the DevSecOps movement; they increase productivity and reduce the security implementation errors that come from developers writing security-critical packages they are not experts in.
  3. Immutable Infrastructure-as-Code (IaC)
    • This principle focuses on treating the infrastructure that software runs on as ephemeral and managed via configuration rather than manual systems operations. Enabled by cloud computing (i.e., hardware virtualization), this principle increases the security of the underlying infrastructure through templated, secure-by-design defaults and improves reliability, because all infrastructure has to be designed to tolerate failure at any moment.
  4. Microservices architecture (via containers)
    • Microservices are a design pattern that creates smaller software services that can be built and scaled independently of each other. This principle allows for less complex software that only performs a limited set of behaviors.
  5. Shift Left
    • Shift left is the DevSecOps principle that re-frames when and how security testing is done in the software development lifecycle. The goal is to begin security testing while software is being written and tested rather than after the software is “complete”. This prevents insecure practices from cascading into significant issues right as software is ready to be deployed.
  6. Continuous improvement through key capabilities
    • The principle of continuous improvement is a primary characteristic of the DevSecOps ethos but the specific key capabilities that are defined in the DoD DevSecOps playbook are what make this unique to the DoD.
  7. Define a DevSecOps pipeline
    • A DevSecOps pipeline is the system that utilizes all of the preceding principles in order to create the continuously improving security outcomes that are the goal of the DoD software factory program.
  8. Cyber resilience
    • Cyber resiliency is the goal of a DoD software factory; it is defined as “the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on the systems that include cyber resources.”

Common tools and systems of a DoD software factory

  1. Code Repository (e.g., Repo One)
    • Where software source code is stored, managed and collaborated on.
  2. CI/CD Build Pipeline (e.g., Big Bang)
    • The system that automates the creation of software build artifacts, tests the software and packages the software for deployment.
  3. Artifact Repository (e.g., Iron Bank)
    • The storage system for software components used in development and the finished software artifacts that are produced from the build process.
  4. Runtime Orchestrator and Platform (e.g., Big Bang)
    • The deployment system that hosts the software artifacts pulled from the registry and keeps the software running so that users can access it.

How do I meet the security requirements for a DoD Software Factory? (Best Practices)

Use a pre-existing software factory

The benefit of using a pre-existing DoD software factory is the same as using a public cloud provider; someone else manages the infrastructure and systems. What you lose is the ability to highly customize your infrastructure to your specific needs. What you gain is the simplicity of only having to write software and allow others with specialized skill sets to deal with the work of building and maintaining the software infrastructure. When you are a car manufacturer, you don’t also want to be a civil engineering firm that designs roads.

To view existing DoD software factories, visit the Software Factory Ecosystem Coalition website.

Map of all DoD software factories in the US.

Roll your own by following DoD best practices 

If you need the flexibility and customization of managing your own software factory then we’d recommend following the DoD Enterprise DevSecOps Reference Design as the base framework. There are a few software supply chain security recommendations that we would make in order to ensure that things go smoothly during the authorization to operate (ATO) process:

  1. Continuous vulnerability scanning across all stages of CI/CD pipeline
    • Use a cloud-native vulnerability scanner that can be directly integrated into your CI/CD pipeline and called automatically during each phase of the SDLC (see the sketch after this list)
  2. Automated policy checks to enforce requirements and achieve ATO
    • Use a cloud-native policy engine in tandem with your vulnerability scanner in order to automate the reporting and blocking of software that is a security threat and a compliance risk
  3. Remediation feedback
    • Use a cloud-native policy engine that can provide automated remediation feedback to developers in order to maintain a high velocity of software development
  4. Compliance (Trust but Verify)
    • Use a reporting system that can be directly integrated with your CI/CD pipeline to create and collect the compliance artifacts that can prove compliance with DoD frameworks (e.g., CMMC and cATO)
  5. Air-gapped system
    • Use tooling that can be deployed and kept up to date in a fully disconnected (air-gapped) environment, since many DoD systems cannot reach the public internet
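To make the first two recommendations concrete, here is a minimal sketch of a pipeline step that generates an SBOM and gates the build on vulnerabilities using Anchore’s open source tools, Syft and Grype. The image name and severity threshold are placeholders, and the exact step syntax will vary with your CI system.

$ # Generate an SBOM for the image built earlier in the pipeline (image name is a placeholder)
$ syft registry.example.mil/my-app:latest -o syft-json > sbom.json

$ # Scan the SBOM and fail the pipeline if any finding is high severity or worse
$ grype sbom:./sbom.json --fail-on high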

Is a software factory required in order to achieve cATO?

Technically, no. Effectively, yes. A cATO requires that your software is deployed on an approved DoD Enterprise DevSecOps Reference Design, not on a software factory specifically. If you build your own DevSecOps platform that meets the criteria for the reference design, then you have effectively rolled your own software factory.

How Anchore can help

The easiest and most effective way to achieve the security guarantees that a software factory is required to meet for its software supply chain is to use:

  1. An SBOM generation and management tool that integrates directly into your software development pipeline
  2. A container vulnerability scanner that integrates directly into your software development pipeline
  3. A policy engine that integrates directly into your software development pipeline
  4. A centralized database to store all of your software supply chain security logs
  5. A query engine that can continuously monitor your software supply chain and automate the creation of compliance artifacts

These are the primary components of both Anchore Enterprise and Anchore Federal, cloud-native, SBOM-powered software composition analysis (SCA) platforms that provide end-to-end software supply chain security to holistically protect your DevSecOps pipeline and automate compliance. This approach has been validated by the DoD; in fact, the DoD’s Container Hardening Process Guide specifically names Anchore Federal as a recommended container hardening solution.

Learn more about how Anchore fuses DevSecOps and DoD software factories.

Conclusion and Next Steps

DoD software factories can come off as intimidating at first, but hopefully we have broken them down into a more digestible form. At their core, they reflect the best of the DevSecOps movement, with specific adaptations for the extreme threat environment the DoD operates in and for the intersecting trend of modernizing federal security compliance standards.

If you’re looking to dive deeper into all things DoD software factory, we have a white paper that lays out the 6 best practices for container images in highly secure environments. Download the white paper below.

AnchoreCTL Setup and Top Tips

Introduction

Welcome to the beginner’s guide to AnchoreCTL, a powerful command-line tool designed for seamless interaction with Anchore Enterprise via the Anchore API. Whether you’re wrangling SBOMs, managing Kubernetes runtime inventories, or ensuring compliance at scale, AnchoreCTL is your go-to companion.

Overview

AnchoreCTL enables you to efficiently manage and inspect all aspects of your Anchore Enterprise deployments. It serves both as a human-readable configuration tool and a CLI for automation in CI/CD environments, making it indispensable for DevOps, security engineers, and developers.

If you’re familiar with Syft and Grype, AnchoreCTL will be a valuable addition to your toolkit. It offers enhanced capabilities to manage tens, hundreds, or even thousands of images and applications across your organization.

In this blog series, we’ll explore top tips and practical use cases to help you leverage AnchoreCTL to its fullest potential. In this part, we’ll review the basics of getting started with AnchoreCTL. In subsequent posts, we will dive deep on container scanning, SBOM Management and Vulnerability Management.

We’ll start by getting AnchoreCTL installed and learning about its configuration and use. I’ll be using AnchoreCTL on my macOS laptop, connected to a demo of Anchore Enterprise running on another machine.

Get AnchoreCTL

AnchoreCTL is a command-line tool available for macOS, Linux and Windows. The AnchoreCTL Deployment docs cover installation and deployment in detail. Grab the release of AnchoreCTL that matches your Anchore Enterprise install.

At the time of writing, the current release of AnchoreCTL and Anchore Enterprise is v5.6.0. Both are updated on a monthly cadence, and yours may be newer or older than what we’re using here. The AnchoreCTL Release Notes contain details about the latest, and all historical releases of the utility.

You may have more than one Anchore Enterprise deployment on different releases. As AnchoreCTL is a single binary, you can install multiple versions on a system to support all the deployments in your landscape.

macOS / Linux

The following snippet will install the binary in a directory of your choosing. On my personal workstation, I use $HOME/bin, but anywhere in your $PATH is fine. Placing the application binary in /usr/local/bin/ makes sense in a shared environment.

$ # Download the macOS or Linux build of anchorectl
$ curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b $HOME/bin v5.6.0
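As noted above, multiple releases can coexist because AnchoreCTL is a single binary. A minimal sketch of installing two versions side by side might look like this; the second version number and the directory layout are only illustrative.

$ # Install each release into its own directory (paths and versions are illustrative)
$ curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b $HOME/bin/v5.6.0 v5.6.0
$ curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b $HOME/bin/v5.5.0 v5.5.0

$ # Call the binary that matches the deployment you are working against
$ $HOME/bin/v5.6.0/anchorectl version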

Windows

The Windows install snippet grabs the zip file containing the binary. Once downloaded, unpack the zip and copy the anchorectl command somewhere appropriate.

$ # Download the Windows build of anchorectl
$ curl -o anchorectl.zip https://anchorectl-releases.anchore.io/anchorectl/v5.6.0/anchorectl_5.6.0_windows_amd64.zip
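From here, one way to finish the job is shown below; it assumes a recent Windows release where the built-in tar command is available, and the destination directory is just a placeholder.

$ # Unpack the archive and place the binary somewhere on your PATH
$ tar -xf anchorectl.zip
$ move anchorectl.exe C:\Tools\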

Setup

Quick check

Once AnchoreCTL is installed, check it’s working with a simple anchorectl version. It should print output similar to this:

$ # Show the version of the anchorectl command line tool
$ anchorectl version
Application:        anchorectl
Version:            5.6.0
SyftVersion:        v1.4.1
BuildDate:          2024-05-27T18:28:23Z
GitCommit:          7c134b46b7911a5a17ba1fa5f5ffa4e3687f170b
GitDescription:     v5.6.0
Platform:           darwin/arm64
GoVersion:          go1.21.10
Compiler:           gc

Configure

The anchorectl command has a --help option that displays a lot of useful information beyond just the command line options reference. Below are the first 15 lines to illustrate what you should see; the actual output is over 80 lines, so we’ve snipped it down here.

$ # Show the top 15 lines of the help
$ anchorectl --help | head -n 15
Usage:
  anchorectl [command]

Application Config:

  (search locations: .anchorectl.yaml, anchorectl.yaml, .anchorectl/config.yaml, ~/.anchorectl.yaml, ~/anchorectl.yaml, $XDG_CONFIG_HOME/anchorectl/config.yaml)

  # the URL to the Anchore Enterprise API (env var: "ANCHORECTL_URL")
  url: ""

  # the Anchore Enterprise username (env var: "ANCHORECTL_USERNAME")
  username: ""

  # the Anchore Enterprise user's login password (env var: "ANCHORECTL_PASSWORD")

On launch, the anchorectl binary will search for a yaml configuration file in a series of locations shown in the help above. For a quick start, just create .anchorectl.yaml in your home directory, but any of the listed locations are fine.

Here is my very basic .anchorectl.yaml which has been configured with the minimum values of url, username and password to get started. I’ve pointed anchorectl at the Anchore Enterprise v5.6.0 running on my Linux laptop ‘ziggy’, using the default port, username and password. We’ll see later how we can create new accounts and users.

$ # Show the basic config file
$ cat .anchorectl.yml
url: "http://ziggy.local:8228"
username: "admin"
password: "foobar"

Config Check

The configuration can be validated with anchorectl -v. If the configuration is syntactically correct, you’ll see the online help displayed, and the command will exit with return code 0. In this example, I have truncated the lengthy anchorectl -v output.

$ # Good config
$ cat .anchorectl.yml
url: "http://ziggy.local:8228"
username: "admin"
password: "foobar"

$ anchorectl -v
[0000]  INFO 
anchorectl version: 5.6.0
Usage:  anchorectl [command]



      --version         version for anchorectl
Use "anchorectl [command] --help" for more information about a command.

$ echo $?
0

In this example, I omitted a closing quotation mark on the url: line, to force an error.

$ # Bad config
$ cat .anchorectl.yml
url: "http://ziggy.local:8228
username: "admin"
password: "foobar"

$ anchorectl -v


error: invalid application config: unable to parse config="/Users/alan/.anchorectl.yml": While parsing config: yaml: line 1: did not find expected key

$ echo $?
1

Connectivity Check

Assuming the configuration file is syntactically correct, we can now validate the correct url, username and password are set for the Anchore Enterprise system with an anchorectl system status. If all is going well, we’ll get a report similar to this:

The output of anchorectl system status shows the services running on my Anchore Enterprise deployment.
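For reference, the call itself is just the following; the services listed in the resulting table will depend on your particular deployment.

$ # Confirm the URL and credentials by querying the service status
$ anchorectl system status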

Multiple Configurations

You may also use the -c or --config option to specify the path to a configuration file. This is useful if you communicate with multiple Anchore Enterprise systems.

$ # Show the production configuration file
$ cat ./production.anchorectl.yml
url: "http://remotehost.anchoreservers.com:8228"
username: "admin"
password: "foobar"

$ # Show the development configuration file, which points to a different machine
$ cat ./development.anchorectl.yml
url: "http://workstation.local:8228"
username: "admin"
password: "foobar"

$ # Connect to remote production instance
$ anchorectl -c ./production.anchorectl.yml system status 
 Status system⋮

$ # Connect to developer workstation
$ anchorectl -c ./development.anchorectl.yml system status 
 Status system⋮

Environment Variables

Note from the --help further up that AnchoreCTL can be configured with environment variables instead of the configuration file. This can be useful when the tool is deployed in CI/CD environments, where these can be set using the platform ‘secret storage’.

So, without any configuration file, we can issue the same command but setting options via environment variables. I’ve truncated the output below, but note the ✔ Status system indicating a successful call to the remote system.

$ # Delete the configuration to prove we aren't using it
$ rm .anchorectl.yml
$ anchorectl system status 

error: 1 error occurred:  * no enterprise URL provided

$ # Use environment variables instead
$ ANCHORECTL_URL="http://ziggy.local:8228" \
  ANCHORECTL_USERNAME="admin" \
  ANCHORECTL_PASSWORD="foobar" \
  anchorectl system status 
 Status system⋮

Of course, in a CI/CD environment such as GitHub, GitLab, or Jenkins, these environment variables would be set in a secure store and only exposed when the job running anchorectl is initiated.

Users

Viewing Accounts & Users

In the examples above, I’ve been using the default username and password for a demo Anchore Enterprise instance. AnchoreCTL can be used to query and manage the system’s accounts and users. Documentation for these activities can be found in the user management section of the docs.

$ # Show list of accounts on the remote instance
$ anchorectl account list 
 Fetched accounts
┌───────┬─────────────────┬─────────┐
 NAME   EMAIL            STATE   
├───────┼─────────────────┼─────────┤
 admin  admin@myanchore  enabled 
└───────┴─────────────────┴─────────┘

We can also list existing users on the system:

$ # Show list of users (if any) in the admin account
$ anchorectl user list --account admin 
 Fetched users
┌──────────┬──────────────────────┬───────────────────────┬────────┬──────────┬────────┐
 USERNAME  CREATED AT            PASSWORD LAST UPDATED  TYPE    IDP NAME  SOURCE 
├──────────┼──────────────────────┼───────────────────────┼────────┼──────────┼────────┤
 admin     2024-06-10T11:48:32Z  2024-06-10T11:48:32Z   native │          │        
└──────────┴──────────────────────┴───────────────────────┴────────┴──────────┴────────┘

Managing Accounts

AnchoreCTL can be used to add (account add), enable (account enable), disable (account disable) and remove (account delete) accounts from the system:

$ # Create a new account
$ anchorectl account add dev_team_alpha 
 Added account
Name: dev_team_alpha
Email:
State: enabled

$ # Get a list of accounts
$ anchorectl account list 
 Fetched accounts
┌────────────────┬─────────────────┬─────────┐
 NAME            EMAIL            STATE   
├────────────────┼─────────────────┼─────────┤
 admin           admin@myanchore  enabled 
 dev_team_alpha                   enabled 
 dev_team_beta                    enabled 
└────────────────┴─────────────────┴─────────┘

$ # Disable an account before deleting it
$ anchorectl account disable dev_team_alpha 
 Disabled account
State: disabled

$ # Delete the account
$ anchorectl account delete dev_team_alpha 
 Deleted account
No results

$ # Get a list of accounts
$ anchorectl account list 
 Fetched accounts
┌────────────────┬─────────────────┬──────────┐
 NAME            EMAIL            STATE    
├────────────────┼─────────────────┼──────────┤
 admin           admin@myanchore  enabled  
 dev_team_alpha                   deleting 
 dev_team_beta                    enabled  
└────────────────┴─────────────────┴──────────┘

Managing Users

Users exist within accounts, but usernames are globally unique since they are used for authenticating API requests. Any user in the admin account can perform user management in the default Anchore Enterprise configuration using the native authorizer. 

For more information on configuring other authorization plugins, see Authorization Plugins and Configuration in our documentation.

Users can also be managed via AnchoreCTL. Here we create a new dev_admin_beta user under the dev_team_beta account and give them the role full-control as an administrator of the team. We’ll set a password of CorrectHorseBatteryStable for this new user, but pass it via the environment rather than echo it on the command line.

$ # Create a new user from the dev_team_beta account
$ ANCHORECTL_USER_PASSWORD=CorrectHorseBatteryStable \
  anchorectl user add --account dev_team_beta dev_admin_beta \
  --role full-control 
  
   Added user      dev_admin_beta
  Username: dev_admin_beta
  Created At: 2024-06-12T10:25:23Z
  Password Last Updated: 2024-06-12T10:25:23Z
  Type: native
  IDP Name:
  Source:

Let’s check that worked:

$ # Check that the new user was created
$ anchorectl user list --account dev_team_beta 
 Fetched users
┌────────────────┬──────────────────────┬───────────────────────┬────────┬──────────┬────────┐
 USERNAME        CREATED AT            PASSWORD LAST UPDATED  TYPE    IDP NAME  SOURCE 
├────────────────┼──────────────────────┼───────────────────────┼────────┼──────────┼────────┤
 dev_admin_beta  2024-06-12T10:25:23Z  2024-06-12T10:25:23Z   native │          │        
└────────────────┴──────────────────────┴───────────────────────┴────────┴──────────┴────────┘

That user is now able to use the API.

$ # List users from the dev_team_beta account
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl user list 
   Fetched users
  ┌────────────────┬──────────────────────┬───────────────────────┬────────┬──────────┬────────┐
   USERNAME        CREATED AT            PASSWORD LAST UPDATED  TYPE    IDP NAME  SOURCE 
  ├────────────────┼──────────────────────┼───────────────────────┼────────┼──────────┼────────┤
   dev_admin_beta  2024-06-12T10:25:23Z  2024-06-12T10:25:23Z   native │          │        
  └────────────────┴──────────────────────┴───────────────────────┴────────┴──────────┴────────┘

Using AnchoreCTL

We now have AnchoreCTL set up to talk to our Anchore Enterprise, and a user other than admin to connect as, so let’s actually use it to scan a container. We have two options here: ‘Centralized Analysis’ and ‘Distributed Analysis’.

In Centralized Analysis, any container we request will be downloaded and analyzed by our Anchore Enterprise. If we choose Distributed Analysis, the image will be analyzed by anchorectl itself. This is covered in much more detail in the Vulnerability Management section of the docs.
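As a rough sketch, the two modes differ only in how the image is analyzed. The --from flag shown for distributed analysis is how recent releases select a local source; check anchorectl image add --help for the exact options in your version.

$ # Centralized analysis: Anchore Enterprise pulls and analyzes the image itself
$ anchorectl image add docker.io/library/debian:latest

$ # Distributed analysis: anchorectl analyzes the image locally and uploads the result
$ anchorectl image add docker.io/library/debian:latest --from registry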

Currently we have no images submitted for analysis:

$ # Query Enterprise to get a list of container images and their status
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image list 
  
   Fetched images
  ┌─────┬────────┬──────────┬────────┐
   TAG  DIGEST  ANALYSIS  STATUS 
  ├─────┼────────┼──────────┼────────┤
  └─────┴────────┴──────────┴────────┘

Let’s submit the latest Debian container from Dockerhub to Anchore Enterprise for analysis. The backend Anchore Enterprise deployment will then pull (download) the image, and analyze it.

$ # Request that enterprise downloads and analyzes the debian:latest image
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image add docker.io/library/debian:latest 
  
 ✔ Added Image      
 docker.io/library/debian:latest
 Image:  
 status:           not-analyzed (active)  
 tag:              docker.io/library/debian:latest  
 digest:           sha256:820a611dc036cb57cee7...  
 id:               7b34f2fc561c06e26d69d7a5a58...

Initially the image starts in a state of not-analyzed. Once it’s been downloaded, it’ll be queued for analysis. When the analysis begins, the status will change to analyzing after which it will change to analyzed. We can check the status with anchorectl image list.

$ # Check the status of the container image we requested 
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image list 
  
 Fetched images
┌─────────────────────────────────┬────────────────────────────────┬───────────┬────────┐
 TAG                              DIGEST                          ANALYSIS   STATUS 
├─────────────────────────────────┼────────────────────────────────┼───────────┼────────┤
 docker.io/library/debian:latest  sha256:820a611dc036cb57cee7...  analyzing  active 
└─────────────────────────────────┴────────────────────────────────┴───────────┴────────┘

After a short while, the image has been analyzed.

$ # Check the status of the container image we requested 
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image list 
  Fetched images
┌─────────────────────────────────┬────────────────────────────────┬───────────┬────────┐
 TAG                              DIGEST                          ANALYSIS   STATUS 
├─────────────────────────────────┼────────────────────────────────┼───────────┼────────┤
 docker.io/library/debian:latest  sha256:820a611dc036cb57cee7...  analyzed   active 
└─────────────────────────────────┴────────────────────────────────┴───────────┴────────┘

Results

Once analysis is complete, we can inspect the results, again with anchorectl.

Container contents

First, let’s see what operating system packages Anchore found in this container with anchorectl image content docker.io/library/debian:latest -t os.

anchorectl reporting the full OS package list from this Debian image. (the list is too large to show here)
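Here is that invocation in full, using the same credentials as before; the output (not shown) is a table of package names, versions, and types.

$ # List the operating system packages found in the image
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image content docker.io/library/debian:latest -t os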

SBOM

We can also pull the Software Bill of Materials (SBOM) for this image from Anchore with anchorectl image sbom docker.io/library/debian:latest -o table. We can use -f to write this to a file, and -o syft-json (for example) to output in a different format.

$ # Fetch the full SBOM for the image as a table
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image sbom docker.io/library/debian:latest -o table 
  
 Fetched SBOM  docker.io/library/debian:latest
NAME                    VERSION                TYPE
adduser                 3.134                  deb
apt                     2.6.1                  deb
base-files              12.4+deb12u6           deb

util-linux              2.38.1-5+deb12u1       deb
util-linux-extra        2.38.1-5+deb12u1       deb
zlib1g                  1:1.2.13.dfsg-1        deb

Vulnerabilities

Finally, let’s have a quick look to see if any OS vulnerabilities were found in this image with anchorectl image vulnerabilities docker.io/library/debian:latest -t os. The output is very wide, so it isn’t reproduced in full here.
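The full invocation, with the same credentials as the earlier examples, is below; each row of the resulting table pairs a vulnerable package with its vulnerability identifier and severity.

$ # List operating system vulnerabilities found in the image
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image vulnerabilities docker.io/library/debian:latest -t os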

Conclusion

So far we’ve introduced AnchoreCTL and shown that it’s easy to install, configure and test. It can be used both locally on developer workstations and in CI/CD environments such as GitHub, GitLab and Jenkins. We’ll cover the integration of AnchoreCTL with source forges in a later post.

AnchoreCTL is a powerful tool which can be used to automate the management of scanning container contents, generating SBOMs, and analyzing for vulnerabilities.

Find out more about AnchoreCTL in our documentation, and request a demo of Anchore Enterprise.

Modernizing FedRAMP: GSA’s Roadmap to Streamline Authorization

If you’ve ever thought that the FedRAMP (Federal Risk and Authorization Management Program) authorization process is challenging and laborious, things may be getting better. The General Services Administration (GSA) has publicly committed to improving the authorization process by publishing a public roadmap to modernize FedRAMP.

The purpose of FedRAMP is to act as a central intermediary between federal agencies and cloud service providers (CSP) in order to make it easier for agencies to purchase software services and for CSPs to sell software services to agencies. By being the middleman, FedRAMP creates a single marketplace that reduces the amount of time it takes for an agency to select and purchase a product. From the CSP perspective, FedRAMP becomes a single standard that they can target for compliance and after achieving authorization they get access to 200+ agencies that they can sell to—a win-win.

Unfortunately, that promised land wasn’t the typical experience for either side of the exchange. Since FedRAMP’s inception in 2011, the demand for cloud services has increased significantly. Cloud was still in its infancy at FedRAMP’s birth, and the majority of federal agencies still procured software with perpetual licenses rather than as a cloud service (e.g., SaaS). In the 10+ years since, that preference has inverted, and the predominant delivery model is now infrastructure/platform/software-as-a-service.

This has led to an environment where new cloud services are popping up every year, but federal agencies don’t have access to them via the streamlined FedRAMP marketplace. On the other side of the coin, CSPs want access to the market of federal agencies that can only procure software via FedRAMP, but the process of becoming FedRAMP authorized is so complex and laborious that it erodes much of the benefit of gaining access to this market.

Luckily, the GSA isn’t resting on its laurels. Due to feedback from all stakeholders, they are prioritizing a revamp of the FedRAMP authorization process to take into account the shifting preferences of the market. To help you get a sense of what is happening, how quickly you can expect changes, and the benefits of the initiative, we have compiled a comprehensive FAQ.

Frequently Asked Questions (FAQ)

How soon will the benefits of FedRAMP modernization be realized?

Optimistically, changes will roll out over the next 18 months and be completed by the end of 2025. See the full rollout schedule on the public roadmap.

Who does this impact?

  • Federal agencies
  • Cloud service providers (CSP)
  • Third-party assessment organization (3PAO)

What are the benefits of the FedRAMP modernization initiative?

TL;DR—For agencies

  • Increased vendor options within the FedRAMP marketplace
  • Reduced wait time for CSPs in authorization process

TL;DR—For CSPs

  • Reduced friction during the authorization process
  • More clarity on how to meet security requirements
  • Less time and cost spent on the authorization process

TL;DR—For 3PAOs

  • Reduced friction between 3PAO and CSP during authorization process
  • Increased clarity on how to evaluate CSPs

What prompted the GSA to improve FedRAMP now?

GSA is modernizing FedRAMP because of feedback from stakeholders. Both federal agencies and CSPs levied complaints about the current FedRAMP process. Agencies wanted more CSPs in the FedRAMP marketplace that they could then easily procure. CSPs wanted a more streamlined process so that they could get into the FedRAMP marketplace faster. The point of friction was the FedRAMP authorization process that hasn’t evolved at the same pace as the transition from the on-premise, perpetual license delivery model to the rapid, cloud services model.

How will GSA deliver on its promises to modernize FedRAMP?

The full list of initiatives can be found in their public product roadmap document but the highlights are:

  • Taking a customer-centric approach that reduces friction in the authorization process based on customer interviews
  • Publishing clear guidance on how to meet core security requirements
  • Streamlining authorization process to reduce bottlenecks based on best practices from agencies that have developed a strong authorization process
  • Moving away from lengthy documents and towards a data-first foundation to enable automation of the authorization process for CSPs and 3PAOs

Wrap-Up

The GSA has made a commitment to being transparent about the modernization effort. Anchore, as well as the rest of the public sector stakeholders, will be watching and holding the GSA accountable. Follow this blog or the Anchore LinkedIn page to stay updated on progress.

If the 18-month timeline is longer than you’re willing to wait, Anchore is already an expert in supporting organizations that are seeking FedRAMP authorization. Anchore Enterprise is a modern, cloud-native software composition analysis (SCA) platform that both meets FedRAMP compliance standards and helps evaluate whether your software supply chain is FedRAMP compliant. If you’re interested in learning more, download “FedRAMP Requirements Checklist for Container Vulnerability Scanning” or learn more about how Anchore Enterprise has helped organizations like Cisco achieve FedRAMP compliance in weeks versus months.

Add SBOM Generation to Your GitHub Project with Syft

According to the latest figures, GitHub has over 100 million developers working on over 420 million repositories, with at least 28M being public repos. Unfortunately, very few software repos contain a Software Bill of Materials (SBOM) inventory of what’s been released.

SBOMs (Software Bill of Materials) are crucial in a repository as they provide a comprehensive inventory of all components, improving transparency and traceability in the software supply chain. This allows developers and security teams to quickly identify and address vulnerabilities, enhancing overall security and compliance with regulatory standards.

Anchore developed the sbom-action GitHub Action to automatically generate an SBOM using Syft. Developers can quickly add the action via the GitHub Marketplace and pretty much fire and forget the setup.

What is an SBOM?

Anchore developers have written plenty over the years about What is an SBOM, but here is the tl;dr:

An SBOM (Software Bill of Materials) is a detailed list of all software project components, libraries, and dependencies. It serves as a comprehensive inventory that helps understand the software’s structure and the origins of its components.

An SBOM in your project enhances security by quickly identifying and mitigating vulnerabilities in third-party components. Additionally, it ensures compliance with regulatory standards and provides transparency, essential for maintaining trust with stakeholders and users.

Introducing Anchore’s SBOM GitHub Action

Adding an SBOM is a cinch with the GitHub Action for SBOM Generation provided by Anchore. Once added to a repo, the action will execute a Syft scan in the workspace directory and upload the resulting SBOM in SPDX format as a workflow artifact.

The SBOM Action can scan a Docker image directly from the container registry with or without registry credentials specified. Alternatively, it can scan a directory full of artifacts or a specific single file.

The action will also detect if it’s being run during a GitHub release and upload the SBOM as a release asset. Easy!

How to Add the SBOM GitHub Action to Your Project

Assuming you already have a GitHub account and a repository set up, adding the SBOM action is straightforward.

Anchore SBOM Action in the GitHub Marketplace.
  • Navigate to the GitHub Marketplace
  • Search for “Anchore SBOM Action” or visit Anchore SBOM Action directly
  • Add the action to your repository by clicking the green “Use latest version” button
  • Configure the action in your workflow file

That’s it!

Example Workflow Configuration

Here’s a bare-bones configuration for running the Anchore SBOM Action on each push to the repo.

  name: Generate SBOM

  on: [push]

  jobs:
    build:
      runs-on: ubuntu-latest
      steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Anchore SBOM Action
        uses: anchore/sbom-action@v0

There are further options detailed on the GitHub Marketplace page for the action. For example, use output-file to specify the resulting SBOM file name and format to select whether to build an SPDX or CycloneDX formatted SBOM.
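For instance, a variation of the step above that names its output file and switches to CycloneDX might look like the following sketch; output-file and format are the two inputs mentioned above, and the Marketplace page lists the full set of accepted values.

      - name: Anchore SBOM Action
        uses: anchore/sbom-action@v0
        with:
          output-file: sbom.cyclonedx.json
          format: cyclonedx-json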

Results and Benefits

After the GitHub action is set up, the SBOM will start being generated on each push or with every release – depending on your configuration.

Once the SBOM is published on your GitHub repo, users can analyze it to identify and address vulnerabilities in third-party components. They can also use it to ensure compliance with security and regulatory standards, maintaining the integrity of the software supply chain.

Additional Resources

The SBOM action is open source and is available under the Apache 2.0 License in the sbom-action repository. It relies on Syft which is available under the same license, also on GitHub. We welcome contributions to both sbom-action and Syft, as well as Grype, which can consume and process these generated SBOMs.

Join us on Discourse to discuss all our open source tools.

Reduce risk in your software supply chain: 5 tips for container security

Rising threats to the software supply chain and increasing use of containers are causing organizations to focus on container security. Containers bring many unique security challenges due to their layered dependencies and the fact that many container images come from public repositories.

Our new white paper, Reduce Risk for Software Supply Chain Attacks: Best Practices for Container Security, digs into 5 tips for securing containers. It also describes how Anchore Enterprise simplifies implementation of these critical best practices, so you don’t have to.

5 best practices to instantly strengthen container security

  1. Use SBOMs to build a transparent foundation

SBOMs—Software Bill of Materials—create a trackable inventory of the components you use, which is a precursor for identifying security risks, meeting regulatory requirements and assessing license compliance. Get recommendations on the best way to generate, store, search and share SBOMs for better transparency (a minimal command-line example follows this list).

  2. Identify vulnerabilities early with continuous scanning

Security issues can arise at any point in the software supply chain. Learn why shifting left is necessary, but not sufficient, for container security. Understand why SBOMs are critical when responding to zero-day vulnerabilities.

  3. Automate policy enforcement and security gates

Find out how to use automated policies to identify which vulnerabilities should be fixed and enforce regulatory requirements. Learn how a customizable policy engine and out-of-the-box policy packs streamline your compliance efforts. 

  4. Reduce toil in the developer experience

Integrating with the tools developers use, minimizing false positives, and providing a path to faster remediation will keep developers happy and your software development moving efficiently. See how Anchore Enterprise makes it easy to provide a good developer experience.

  5. Protect your software supply chain with security controls

To protect your software supply chain, you must ensure that the code you bring in from third-party sources is trusted and vetted. Implement vetting processes for open-source code that you use.
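As a quick, minimal illustration of the first two tips using Anchore’s open source tools, Syft and Grype (the image name here is just an example):

$ # Tip 1: generate an SBOM for a container image
$ syft nginx:latest -o spdx-json > sbom.spdx.json

$ # Tip 2: scan that SBOM for known vulnerabilities
$ grype sbom:./sbom.spdx.json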

Four Years of Syft Development in 4 Minutes at 4K

Our open-source SBOM and vulnerability scanning tools, Syft and Grype, recently turned four years old. So I did what any nerd would do: render an animated visualization of their development using the now-venerable Gource. Initially, I wanted to render these videos at a 120Hz framerate, but that didn’t go well. Read on to find out how that panned out.

My employer (perhaps foolishly) gave me the keys to our Anchore YouTube and Anchore Vimeo accounts. You can find the video I rendered on YouTube or embedded below.

For those unaware, Gource is a popular open-source project by Andrew Caudwell. Its purpose is to visualize development with pretty OpenGL-rendered videos. You may have seen these animated glowing renders before, as Gource has been around for a while now.

Syft is Anchore’s command-line tool and library for generating a software bill of materials (SBOM) from container images and filesystems. Grype is our vulnerability scanner for container images and filesystems. They’re both fundamental components of our Anchore Enterprise platform but are also independently famous.

Generating the video

Plenty of guides online cover how to build Gource visualizations, which are pretty straightforward. Gource analyses the git log of changes in a repository to generate frames of animation which can be viewed or saved to a video. There are settings to control various aspects of the animation, which are well documented in the Gource Wiki.

By default, while Gource is running, a window displaying the animation will appear on your screen. So, if you want to see what the render will look like, most of the defaults are fine when running Gource directly.
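A bare-bones interactive run is as simple as pointing Gource at a local clone of the repository and accepting the defaults; the path here is just an example.

$ # Render the repository history in a window using default settings
$ gource ./syft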

Tweak the defaults

I wanted to limit the video duration, and render at a higher resolution than my laptop panel supports. I also wanted the window to be hidden while the process runs.

tl;dr Here’s the full command line I used to generate and encode the 4K video in the background.

$ /usr/bin/xvfb-run --server-num=99 -e /dev/stdout \
  -s '-screen 0 4096x2160x24 ' /usr/bin/gource \
  --max-files 0 --font-scale 4 --output-framerate 60 \
  -4096x2160 --auto-skip-seconds 0.1 --seconds-per-day 0.16 \
  --bloom-multiplier 0.9 --fullscreen --highlight-users \
  --multi-sampling --stop-at-end --high-dpi \
  --user-image-dir ../faces/ --start-date 2020-05-07 \
  --title 'Syft Development https://github.com/anchore/syft' \
  -o - | \
  ffmpeg -y -r 60 -f image2pipe -vcodec ppm -i - \
  -vcodec libx264 -preset veryfast -pix_fmt yuv420p \
  -crf 1 -threads 0 -bf 0 ../syft-4096x2160-60.mkv

Let’s take a step back and examine the preparatory steps and some interesting points to note.