Author: Alan Pope
How Syft Scans Software to Generate SBOMs
Syft is an open source CLI tool and Go library that generates a Software Bill of Materials (SBOM) from source code, container images and packaged binaries. It is a foundational building block for various use cases: from vulnerability scanning with tools like Grype, to OSS license compliance with tools like Grant. SBOMs track software components—and their associated metadata such as supplier, security, licensing, and compliance information—through the software development lifecycle.
At a high level, Syft takes the following approach to generating an SBOM:
- Determine the type of input source (container image, directory, archive, etc.)
- Orchestrate a pluggable set of catalogers to scan the source or artifact
- Each package cataloger looks for package types it knows about (RPMs, Debian packages, NPM modules, Python packages, etc.)
- In addition, the file catalogers gather other metadata and generate file hashes
- Aggregate all discovered components into an SBOM document
- Output the SBOM in the desired format (Syft, SPDX, CycloneDX, etc.)
Let’s dive into each of these steps in more detail.
Flexible Input Sources
Syft can generate an SBOM from several different source types:
- Container images (both from registries and local Docker/Podman engines)
- Local filesystems and directories
- Archives (TAR, ZIP, etc.)
- Single files
This flexibility is important as SBOMs are used in a variety of environments, from a developer’s workstation to a CI/CD pipeline.
When you run Syft, it first tries to autodetect the source type from the provided input. For example:
# Scan a container image
syft ubuntu:latest
# Scan a local filesystem
syft ./my-app/
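If autodetection picks the wrong source type, the source can also be stated explicitly with a scheme prefix. The snippet below is a minimal sketch; the exact scheme names can vary between Syft versions, so check syft --help for the list your build supports.
# Scan an image from a registry, without a local container engine
syft registry:ubuntu:latest
# Scan a directory explicitly
syft dir:./my-app
# Scan a saved image archive
syft oci-archive:ubuntu.tar
# Scan a single binary
syft file:./my-app/bin/server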
Pluggable Package Catalogers
The heart of Syft is its decoupled architecture for software composition analysis (SCA). Rather than one monolithic scanner, Syft delegates scanning to a collection of catalogers, each focused on a specific software ecosystem.
Some key catalogers include:
- apk-db-cataloger for Alpine packages
- dpkg-db-cataloger for Debian packages
- rpm-db-cataloger for RPM packages (sourced from various databases)
- python-package-cataloger for Python packages
- java-archive-cataloger for Java archives (JAR, WAR, EAR)
- npm-package-cataloger for Node/NPM packages
Syft automatically selects which catalogers to run based on the source type. For a container image, it will run catalogers for the package types installed in containers (RPM, Debian, APK, NPM, etc). For a filesystem, Syft runs a different set of catalogers looking for installed software that is more typical for filesystems and source code.
This pluggable architecture gives Syft broad coverage while keeping the core streamlined. Each cataloger can focus on accurately detecting its specific package type.
If we look at a snippet of the trace output from scanning an Ubuntu image, we can see some catalogers in action:
[0001] DEBUG discovered 91 packages cataloger=dpkg-db-cataloger...
[0001] DEBUG discovered 0 packages cataloger=rpm-db-cataloger
[0001] DEBUG discovered 0 packages cataloger=npm-package-cataloger
Here, the dpkg-db-cataloger found 91 Debian packages, while the rpm-db-cataloger and npm-package-cataloger didn’t find any packages of their types—which makes sense for an Ubuntu image.
Aggregating and Outputting Results
Once all catalogers have finished, Syft aggregates the results into a single SBOM document. This normalized representation abstracts away the implementation details of the different package types.
The SBOM includes key data for each package like:
- Name
- Version
- Type (Debian, RPM, NPM, etc)
- Files belonging to the package
- Source information (repository, download URL, etc.)
- File digests and metadata
It also contains essential metadata, including a copy of the configuration used when generating the SBOM (for reproducibility). The SBOM also records the evidence for each package: the files it was parsed from, stored within package.Metadata.
Finally, Syft serializes this document into one or more output formats. Supported formats include:
- Syft’s native JSON format
- SPDX’s tag-value and JSON
- CycloneDX’s JSON and XML
Having multiple formats allows integrating Syft into a variety of toolchains and passing data between systems that expect certain standards.
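As a quick illustration, the output format (and an optional output file) is selected with the -o flag, and recent Syft releases accept multiple -o arguments to emit several formats from one scan. Treat the format names below as a sketch and confirm them with syft --help on your version.
# Write SPDX JSON and CycloneDX JSON in a single run
syft ubuntu:latest -o spdx-json=sbom.spdx.json -o cyclonedx-json=sbom.cdx.json
# Print the native Syft JSON to stdout
syft ubuntu:latest -o syft-json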
Revisiting the earlier Ubuntu example, we can see a snippet of the final output:
NAME VERSION TYPE
apt 2.7.14build2 deb
base-files 13ubuntu10.1 deb
bash 5.2.21-2ubuntu4 deb
Container Image Parsing with Stereoscope
To generate high-quality SBOMs from container images, Syft leverages the Stereoscope library for parsing container image formats.
Stereoscope does the heavy lifting of unpacking an image into its constituent layers, understanding the image metadata, and providing a unified filesystem view for Syft to scan.
This encapsulation is quite powerful, as it abstracts the details of different container image specs (Docker, OCI, etc.), allowing Syft to focus on SBOM generation while still supporting a wide range of images.
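As a practical illustration, the same scan works against an image that has been exported to an archive, with no container engine required at scan time. The docker-archive scheme here is an assumption based on Syft’s source-scheme conventions, so verify it against syft --help.
# Export an image and scan the archive directly
docker save ubuntu:latest -o ubuntu.tar
syft docker-archive:ubuntu.tar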
Cataloging Challenges and Future Work
While Syft can generate quality SBOMs for many source types, there are still challenges and room for improvement.
One challenge is supporting the vast variety of package types and versioning schemes. Each ecosystem has its own conventions, making it challenging to extract metadata consistently. Syft has steadily added support for more ecosystems and evolved its catalogers to handle edge cases across an expanding array of software tooling.
Another challenge is dynamically generated packages, like those created at runtime or built from source. Capturing these requires more sophisticated analysis that Syft does not yet do. To illustrate, let’s look at two common cases:
Runtime Generated Packages
Imagine a Python application that uses a web framework like Flask or Django. These frameworks allow defining routes and views dynamically at runtime based on configuration or plugin systems.
For example, an application might scan a /plugins
directory on startup, importing any Python modules found and registering their routes and models with the framework. These plugins could pull in their own dependencies dynamically using importlib
.
From Syft’s perspective, none of this dynamic plugin and dependency discovery happens until the application actually runs. The Python files Syft scans statically won’t reveal those runtime behaviors.
Furthermore, plugins could be loaded from external sources not even present in the codebase Syft analyzes. They might be fetched over HTTP from a plugin registry as the application starts.
To truly capture the full set of packages in use, Syft would need to do complex static analysis to trace these dynamic flows, or instrument the running application to capture what it actually loads. Both are much harder than scanning static files.
Source Built Packages
Another typical case is building packages from source rather than installing them from a registry like PyPI or RubyGems.
Consider a C++ application that bundles several libraries in a /3rdparty
directory and builds them from source as part of its build process.
When Syft scans the source code directory or docker image, it won’t find any already built C++ libraries to detect as packages. All it will see are raw source files, which are much harder to map to packages and versions.
One approach is to infer packages from standard build tool configuration files, like CMakeLists.txt
or Makefile
. However, resolving the declared dependencies to determine the full package versions requires either running the build or profoundly understanding the specific semantics of each build tool. Both are fragile compared to scanning already built artifacts.
Some Language Ecosystems are Harder Than Others
It’s worth noting that dynamism and source builds are more or less prevalent in different language ecosystems.
Interpreted languages like Python, Ruby, and JavaScript tend to have more runtime dynamism in their package loading compared to compiled languages like Java or Go. That said, even compiled languages have ways of loading code dynamically, it just tends to be less common.
Likewise, some ecosystems emphasize always building from source, while others have a strong culture of using pre-built packages from central registries.
These differences mean the level of difficulty for Syft in generating a complete SBOM varies across ecosystems. Some will be more amenable to static analysis than others out of the box.
What Could Help?
To be clear, Syft has already done impressive work in generating quality SBOMs across many ecosystems despite these challenges. But to reach the next level of coverage, some additional analysis techniques could help:
- Static analysis to trace dynamic code flows and infer possible loaded packages (with soundness tradeoffs to consider)
- Dynamic instrumentation/tracing of applications to capture actual package loads (sampling and performance overhead to consider)
- Standardized metadata formats for build systems to declare dependencies (adoption curve and migration path to consider)
- Heuristic mapping of source files to known packages (ambiguity and false positives to consider)
None are silver bullets, but they illustrate the approaches that could help push SBOM coverage further in complex cases.
Ultimately, there will likely always be a gap between what static tools like Syft can discover versus the actual dynamic reality of applications. But that doesn’t mean we shouldn’t keep pushing the boundary! Even incremental improvements in these areas help make the software ecosystem more transparent and secure.
Syft also has room to grow in terms of programming language support. While it covers major ecosystems like Java and Python well, more work is needed to cover languages like Go, Rust, and Swift completely.
As the SBOM landscape evolves, Syft will continue to adapt to handle more package types, sources, and formats. Its extensible architecture is designed to make this growth possible.
Get Involved
Syft is fully open source and welcomes community contributions. If you’re interested in adding support for a new ecosystem, fixing bugs, or improving SBOM generation, the repo is the place to get started.
There are issues labeled “Good First Issue” for those new to the codebase. For more experienced developers, the code is structured to make adding new catalogers reasonably straightforward.
No matter your experience level, there are ways to get involved and help push the state of the art in SBOM generation. We hope you’ll join us!
SBOMs 101: A Free, Open Source eBook for the DevSecOps Community
Today, we’re excited to announce the launch of “Software Bill of Materials 101: A Guide for Developers, Security Engineers, and the DevSecOps Community”. This eBook is a free and open source resource that provides a comprehensive introduction to all things SBOMs.
Why We Created This Guide
While SBOMs have become increasingly critical for software supply chain security, many developers and security professionals still struggle to understand and implement them effectively. We created this guide to help bridge that knowledge gap, drawing on our experience building popular SBOM tools like Syft.
What’s Inside
The ebook covers essential SBOM topics, including:
- Core concepts and evolution of SBOMs
- Different SBOM formats (SPDX, CycloneDX) and their use cases
- Best practices for generating and managing SBOMs
- Real-world examples of SBOM deployments at scale
- Practical guidance for integrating SBOMs into DevSecOps pipelines
We’ve structured the content to be accessible to newcomers while providing enough depth for experienced practitioners looking to expand their knowledge.
Community-Driven Development
This guide is published under an open source license and hosted on GitHub at https://github.com/anchore/sbom-ebook. The collective wisdom of the DevSecOps community will strengthen this resource over time. We welcome contributions whether fixes, new content, or translations.
Getting Started
You can read the guide online, download PDF/ePub versions, or clone the repository to build it locally. The source is in Markdown format, making it easy to contribute improvements.
Join Us
We invite you to:
- Read the guide at https://github.com/anchore/sbom-ebook
- Star the repository to show your support
- Share feedback through GitHub issues
- Contribute improvements via pull requests
- Help spread the word about SBOM best practices
The software supply chain security challenges we face require community collaboration. We hope this guide advances our collective understanding of SBOMs and their role in securing the software ecosystem.
Going All In: Anchore at SBOM Plugfest 2024
When we were invited to participate in Carnegie Mellon University’s Software Engineering Institute (SEI) SBOM Harmonization Plugfest 2024, we saw an opportunity to contribute to SBOM generation standardization efforts and thoroughly exercise our open-source SBOM generator, Syft.
While the Plugfest only required two SBOM submissions, we decided to go all in – and learned some valuable lessons along the way.
The Plugfest Challenge
The SBOM Harmonization Plugfest aims to understand why different tools generate different SBOMs for the same software. It’s not a competition but a collaborative study to improve SBOM implementation harmonization. The organizers selected eight diverse software projects, ranging from Node.js applications to C++ libraries, and asked participants to generate SBOMs in standard formats like SPDX and CycloneDX.
Going Beyond the Minimum
Instead of just submitting two SBOMs, we decided to:
- Generate SBOMs for all eight target projects
- Create both source and binary analysis SBOMs where possible
- Output in every format Syft supports
- Test both enriched and non-enriched versions
- Validate everything thoroughly
This comprehensive approach would give us (and the broader community) much more data to work with.
Automation: The Key to Scale
To handle this expanded scope, we created a suite of scripts to automate the entire process:
- Target acquisition
- Source SBOM generation
- Binary building
- Binary SBOM generation
- SBOM validation
The entire pipeline runs in about 38 minutes on a well-connected server, generating nearly three hundred SBOMs across different formats and configurations.
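The actual scripts live in the plugfest-scripts repository linked at the end of this post; the snippet below is only a simplified sketch of the idea, with hypothetical directory and target names, showing how a pair of nested loops over targets and formats keeps the process repeatable.
# Hypothetical sketch - not the real plugfest-scripts
for target in dependency-track httpie jq opencv hexyl; do
  for format in syft-json spdx-json cyclonedx-json; do
    mkdir -p "sboms/${target}"
    syft "dir:targets/${target}" -o "${format}=sboms/${target}/${format}.json"
  done
done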
The Power of Enrichment
One of Syft’s interesting features is its --enrich
option, which can enhance SBOMs with additional metadata from online sources. Here’s a real example showing the difference in a CycloneDX SBOM for Dependency-Track:
$ wc -l dependency-track/cyclonedx-json.json dependency-track/cyclonedx-json_enriched.json
5494 dependency-track/cyclonedx-json.json
6117 dependency-track/cyclonedx-json_enriched.json
The enriched version contains additional information like license URLs and CPE identifiers:
{
"license": {
"name": "Apache 2",
"url": "http://www.apache.org/licenses/LICENSE-2.0"
},
"cpe": "cpe:2.3:a:org.sonatype.oss:JUnitParams:1.1.1:*:*:*:*:*:*:*"
}
These additional identifiers are crucial for security and compliance teams – license URLs help automate legal compliance checks, while CPE identifiers enable consistent vulnerability matching across security tools.
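For reference, the enriched variant above was produced by adding the --enrich flag to an otherwise identical invocation. The exact selector value (“all”) in the command below is an assumption; consult syft --help for the enrichment options your version accepts.
# Generate an enriched CycloneDX SBOM (selector value assumed)
syft dir:dependency-track -o cyclonedx-json=cyclonedx-json_enriched.json --enrich all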
SBOM Generation of Binaries
While source code analysis is valuable, many Syft users analyze built artifacts and containers. This reflects real-world usage where organizations must understand what’s being deployed, not just what’s in the source code. We built and analyzed binaries for most target projects:
Package | Build Method | Key Findings
--- | --- | ---
Dependency Track | Docker | The container SBOMs included ~1000 more items than source analysis, including base image components like Debian packages
HTTPie | pip install | Binary analysis caught runtime Python dependencies not visible in source
jq | Docker | Python dependencies contributed significant additional packages
Minecolonies | Gradle | Java runtime archives (JARs) appeared in binary analysis, but not in the source
OpenCV | CMake | Binary and source SBOMs were largely the same
hexyl | Cargo build | Rust static linking meant minimal difference from source
nodejs-goof | Docker | Node.js runtime and base image packages significantly increased the component count
Some projects, like gin-gonic (a library) and PHPMailer, weren’t built as they’re not typically used as standalone binaries.
The differences between source and binary SBOMs were striking. For example, the Dependency-Track container SBOM revealed:
- Base image operating system packages
- Runtime dependencies not visible in source analysis
- Additional layers of dependencies from the build process
- System libraries and tools included in the container
This perfectly illustrates why both source and binary analysis are important:
- Source SBOMs show some direct development dependencies
- Binary/container SBOMs show the complete runtime environment
- Together, they provide a full picture of the software supply chain
Organizations can leverage these differences in their CI/CD pipelines – using source SBOMs for early development security checks and binary/container SBOMs for final deployment validation and runtime security monitoring.
Unexpected Discovery: SBOM Generation Bug
One of the most valuable outcomes wasn’t planned at all. During our comprehensive testing, we discovered a bug in Syft’s SPDX document generation. The SPDX validators were flagging our documents as invalid due to absolute file paths:
file name must not be an absolute path starting with "/", but is:
/.github/actions/bootstrap/action.yaml
file name must not be an absolute path starting with "/", but is:
/.github/workflows/benchmark-testing.yaml
file name must not be an absolute path starting with "/", but is:
/.github/workflows/dependabot-automation.yaml
file name must not be an absolute path starting with "/", but is:
/.github/workflows/oss-project-board-add.yaml
The SPDX specification requires relative file paths in the SBOM, but Syft used absolute paths. Our team quickly developed a fix, which involved converting absolute paths to relative ones in the format model logic:
// spdx requires that the file name field is a relative filename
// with the root of the package archive or directory
func convertAbsoluteToRelative(absPath string) (string, error) {
// if the path is not absolute, it is already relative and can be returned unchanged
if !path.IsAbs(absPath) {
// already relative
log.Debugf("%s is already relative", absPath)
return absPath, nil
}
// we use "/" here given that we're converting absolute paths from root to relative
relPath, found := strings.CutPrefix(absPath, "/")
if !found {
return "", fmt.Errorf("error calculating relative path: %s", absPath)
}
return relPath, nil
}
The fix was simple but effective – stripping the leading “/” from absolute paths while maintaining proper error handling and logging. This change was incorporated into Syft v1.18.0, which we used for our final Plugfest submissions.
This discovery highlights the value of comprehensive testing and community engagement. What started as a participation in the Plugfest ended up improving Syft for all users, ensuring more standard-compliant SPDX documents. It’s a perfect example of how collaborative efforts like the Plugfest can benefit the entire SBOM ecosystem.
SBOM Validation
We used multiple validation tools to verify our SBOMs:
- CycloneDX’s sbom-utility
- SPDX’s pyspdxtools
- NTIA’s online validator
Interestingly, we found some disparities between validators. For example, some enriched SBOMs that passed sbom-utility validation failed with pyspdxtools. Further, the NTIA online validator gave us yet another set of results in many cases. This highlights the ongoing challenges in SBOM standardization – even the tools that check SBOM validity don’t always agree!
Key Takeaways
- Automation is crucial: Our scripted approach allowed us to efficiently generate and validate hundreds of SBOMs.
- Real-world testing matters: Building and analyzing binaries revealed insights (and bugs!) that source-only analysis might have missed.
- Enrichment adds value: Additional metadata can significantly enhance SBOM utility, though support varies by ecosystem.
- Validation is complex: Different validators can give different results, showing the need for further standardization.
Looking Forward
The SBOM Harmonization Plugfest results will be analyzed in early 2025, and we’re eager to see how different tools handled the same targets. Our comprehensive submission will help identify areas where SBOM generation can be improved and standardized.
More importantly, this exercise has already improved Syft for our users through the bug fix and given us valuable insights for future development. We’re committed to continuing this thorough testing and community participation to make SBOM generation more reliable and consistent for everyone.
The final SBOMs are published in the plugfest-sboms repo, with the scripts in the plugfest-scripts repository. Consider using Syft for SBOM generation against your code and containers, and let us know how you get on in our community discourse.
Enhancing Container Security with NVIDIA’s AI Blueprint and Anchore’s Syft
Container security is critical – one breach can lead to devastating data losses and business disruption. NVIDIA’s new AI Blueprint for Vulnerability Analysis transforms how organizations handle these risks by automating vulnerability detection and analysis. This AI-powered solution is a potential game-changer for container security.
At its core, the Blueprint combines AI-driven scanning with NVIDIA’s Morpheus Cybersecurity SDK to identify vulnerabilities in seconds rather than hours or days. The system works through a straightforward process:
First, it generates a Software Bill of Materials (SBOM) using Syft, Anchore’s open-source tool, which creates a detailed inventory of all software components in a container. This SBOM feeds into an AI pipeline that leverages large language models (LLMs) and retrieval-augmented generation (RAG) to analyze potential vulnerabilities.
The AI examines multiple data sources – from code repositories to vulnerability databases – and produces a detailed analysis of each potential threat. Most importantly, it distinguishes between genuine security risks and false positives by considering environmental factors and dependency requirements.
The system then provides clear recommendations through a standardized Vulnerability Exploitability eXchange (VEX) status, as illustrated below.

This Blueprint is particularly valuable because it automates traditionally manual security analysis. Security teams can stop spending days investigating potential vulnerabilities and focus on addressing confirmed threats. This efficiency is invaluable for organizations managing container security at scale.
Want to try it yourself? Check out the Blueprint, read more in the NVIDIA blog post, and explore the vulnerability-analysis git repo. Let us know if you’ve tried this out with Syft, over on the Anchore Community Discourse.
Tonight’s Movie: The Terminal (of your laptop)
A picture paints a thousand words, but a GIF shows every typo in motion. But it doesn’t have to! GIFs have long been the go-to in technical docs, capturing real-time terminal output and letting readers watch workflows unfold as if sitting beside you.
I recently needed to make some terminal GIFs, so I tried three of the best available tools, and here are my findings.
Requirements
We recently attended All Things Open, where a TV on our stand needed a rolling demo video. I wanted to add a few terminal usage examples for Syft, Grype, and Grant – our Open-Source, best-in-class container security tools. I tried a few tools to generate the GIFs, which I embedded in a set of Google Slides (for ease) and then captured and rendered as a video that played in a loop on a laptop running VLC.
To summarise, this was the intended flow:
Typing in a terminal →
↳ Recording
↳ GIF
↳ Google Slides
↳ Video Capture
↳ VLC playlist
↳ Success 🎉
We decided to render it as a video to mitigate conference WiFi issues. Nobody wants to walk past your exhibitor stand and see a 404 or “Network Connectivity Problems” on the Jumbotron®️!

The goal was for attendees passing our stand to see the command-line utilities in action. It also allowed us to discuss the demos with interested conferencegoers without busting out a laptop and crouching around it. We just pointed to the screen as a terminal appeared and talked through it.
Below is an early iteration of what I was aiming for, taken as a frame grab from a video – hence the slight blur.

My requirements were for a utility which:
- Records a terminal running commands
- Runs on Linux and macOS because I use both
- Reliably captures output from the commands being run
- Renders out a high-quality GIF
- Is preferably open source
- Is actively maintained
The reason for requiring a GIF rather than a traditional video, such as MP4, is to embed the GIF easily in a Google Slides presentation. While I could create an MP4 and then use a video editor to cut together the videos, I wanted something simple and easily reproducible. I may use MP4s in other situations – such as posting to social media – so if a tool can export to that format easily, I consider that a bonus.
It is worth noting that Google Slides supports GIFs up to 1000 frames in length. So, if you have a long-running command captured at a high frame rate, this limit is easy to hit. If that is the case, perhaps render an MP4 and use the right tool for the job, a video editor.
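A quick way to check whether a GIF is close to that 1000-frame ceiling is to count its frames before uploading. Assuming ImageMagick is installed, identify prints one line per frame, so piping it to wc gives the total.
# Count the frames in a GIF (requires ImageMagick)
identify demo.gif | wc -l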
“High quality” GIF is a subjective term, but I’m after something that looks pleasing (to me), doesn’t distract from the tool being demonstrated, and doesn’t visibly stutter.
Feature Summary
I’ve put the full summary up here near the top of the article to save wear & tear on your mouse wheel or while your magic mouse is upside down, on charge. The details are underneath the conclusion for those interested and equipped with a fully-charged mouse.

† asciinema requires an additional tool such as agg to convert the recorded output to a GIF.
◊ t-rec supports X11 on Linux, but currently does not support Wayland sessions.
* t-rec development appears to have stalled.
Conclusion
All three tools are widely used and work fine in many cases. Asciinema is often recommended because it’s straightforward to install, and almost no configuration is required. The resulting recordings can be published online and rendered on a web page.
While t-rec is interesting, as it records the actual terminal window, not just the session text (as asciinema does), it is a touch heavyweight. As such, with a 4fps frame rate, videos made with t-rec look jerky.
I selected vhs for a few reasons.
It runs easily on macOS and Linux, so I can create GIFs on my work or personal computer with the same tool. vhs is very configurable, supports higher frame rates than other tools, and is scriptable, making it ideal for creating GIFs for documentation in CI pipelines.
vhs being scriptable is, I think, the real superpower here. For example, vhs can be part of a documentation site build system. One configuration file can specify a particular font family, size and color scheme to generate a GIF suitable for embedding in documentation.
Another almost identical configuration file might use a different font size or color, which is more suitable for a social media post. The same commands will be run, but the color, font family, font size, and even GIF resolution can be different, making for a very flexible and reliable way to create a terminal GIF for any occasion!
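To make that concrete, here is a rough sketch of what such a tape file could look like. The filename, dimensions and values are illustrative, and the theme string is borrowed from the example below, so check the vhs documentation for the exact settings your version supports.
$ # Hypothetical documentation-oriented tape file
$ cat docs.tape
Output syft-demo.gif
Set FontFamily "BlexMono Nerd Font Mono"
Set FontSize 28
Set Theme "catppuccin-macchiato"
Set Width 1200
Set Height 600
Type "syft ubuntu:latest"
Enter
Sleep 5s
$ # Render the GIF
$ vhs docs.tape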
vhs ships with a broad default theme set that matches typical desktop color schemes, such as the familiar purple-hue terminal on Ubuntu, as seen below. This GIF uses the “BlexMono Nerd Font Mono” font (a modified version of IBM Plex font), part of the nerd-fonts project.
If this GIF seems slow, that’s intentional. The vhs configuration can “type” at a configurable speed and slow the resulting captured output down (or speed it up).

There are also popular Catppuccin themes that are pretty appealing. The following GIF uses the “catppuccin-macchiato” theme with “Iosevka Term” font, which is part of the Iosevka project. I also added a PS1 environment variable to the configuration to simulate a typical console prompt.

vhs can also take a still screenshot during the recording, which can be helpful as a thumbnail image, or to capture a particular frame from the middle of the recording. Below is the final frame from the previous GIF.

Here is one of the final (non-animated) slides from the video. I tried to put as little as possible on screen simultaneously, just the title, video, and a QR code for more information. It worked well, with someone even asking how the terminal videos were made. This blog is for them.

I am very happy with the results from vhs, and will likely continue using it in documentation, and perhaps social posts – if I can get the font to a readable size on mobile devices.
Alternatives
I’m aware of OBS Studio and other screen (and window) recording tools that could be used to create an initial video, which could be converted into a GIF.
Are there other, better ways to do this?
Let me know on our community discourse, or leave a comment wherever you read this blog post.
Below are the details about each of the three tools I tested.
t-rec
t-rec is a “Blazingly fast terminal recorder that generates animated gif images for the web written in rust.” This was my first choice, as I had played with it before my current task came up.
I initially quite liked that t-rec recorded the entire terminal window, so when running on Linux, I could use a familiar desktop theme indicating to the viewer that the command is running on a Linux host. On a macOS host, I could use a native terminal (such as iTerm2) to hint that the command is run on an Apple computer.
However, I eventually decided this wasn’t that important at all. Especially given that vhs can be used to theme the terminal so it looks close to a particular host OS. Plus, most of the commands I’m recording are platform agnostic, producing the same output no matter what they’re running on.
t-rec Usage
- Configure the terminal to be the size you require with the desired font and any other settings before you start t-rec.
- Run t-rec.
$ t-rec --quiet --output grant
The terminal will clear, and recording will begin.
- Type each command as you normally would.
- Press CTRL+D to end recording. t-rec will then generate the GIF using the specified name.
🎆 Applying effects to 118 frames (might take a bit)
💡 Tip: To add a pause at the end of the gif loop, use e.g. option `-e 3s`
🎉 🚀 Generating grant.gif
Time: ~9s
alan@Alans-MacBook-Pro ~
The output GIF will be written in the current directory by stitching together all the bitmap images taken during the recording. Note the recording below contains the entire terminal user interface and the content.

t-rec Benefits
t-rec records the video by taking actual bitmap screenshots of the entire terminal on every frame. So, if you’re keen on having a GIF that includes the terminal UI, including the top bar and other window chrome, then this may be for you.
t-rec Limitations
t-rec records at 4 frames per second, which may be sufficient but can look jerky with long commands. There is an unmerged draft PR to allow user-configurable recording frame rates, but it hasn’t been touched for a couple of years.
I found t-rec would frequently just stop adding frames to a GIF. So the resulting GIF would start okay, then randomly miss out most of the frames, abruptly end, and loop back to the start. I didn’t have time to debug why this happened, which got me looking for a different tool.
asciinema
“Did you try asciinema?” was a common question asked of me, when I mentioned to fellow nerds what I was trying to achieve. Yes.
asciinema is the venerable Grand-daddy of terminal recording. It’s straightforward to install and set up, and has a very simple recording and publishing pipeline. Perhaps too simple.
When I wandered around the various exhibitor stands at All Things Open last week, it was obvious who spent far too long fiddling with these tools (me), and which vendors recorded a window, or published an asciinema, with some content blurred out.
One even had an ugly demo of our favorite child, grype (don’t tell syft I said that), in such a video! Horror of horrors!
asciinema doesn’t create GIFs directly but instead creates “cast” files, JSON formatted text representations of the session, containing both the user-entered text and the program output. A separate utility, agg (asciinema gif generator), converts the “cast” to a GIF. In addition, another tool, asciinema-edit, can be used to edit the cast file post-recording.
asciinema Usage
- Start asciinema rec, and optionally specify a target file to save as.
asciinema rec ./grype.cast
- Run commands.
- Type exit when finished.
- Play back the cast file
asciinema play ./grype.cast
- Convert asciinema recording to GIF.
agg --font-family "BlexMono Nerd Font Mono" grype.cast grype.gif
Here’s the resulting GIF, using the above options. Overall, it looks fine, very much like my terminal appears. However, some characters are missing or incorrectly displayed; for example, the animated braille characters shown while grype is parsing the container image.

asciinema – or rather agg (the cast-to-GIF converter) has a few options for customizing the resulting video. There are a small number of themes, the ability to configure the window size (in rows/columns), font family, and size, and set various speed and delay-related options.
Overall, asciinema is very capable, fast, and easy to use. The upstream developers are currently porting it from Python to Rust, so I’d consider this an active project. But it wasn’t entirely giving me all the options I wanted. It’s still a useful utility to keep in your toolbelt.
vhs
vhs has a novel approach using ‘tape’ files which describe the recording as a sequence of Type, Enter and Sleep statements.
The initial tape file can be created with vhs record and then edited in any standard text editor to modify commands, choice of shell, sleep durations, and other configuration settings. The vhs cassette.tape command will configure the session, then run the commands in a virtual (hidden) terminal.
Once the end of the ‘tape’ is reached, vhs generates the GIF, and optionally, an MP4 video. The tape file can be iterated on to change the theme, font family, size, and other settings, then re-running vhs cassette.tape creates a whole new GIF.
vhs Usage
- Create a .tape file with vhs record --shell bash > cassette.tape.
- Run commands.
- Type exit when finished.
vhs will write the commands and timings to the cassette.tape file, for example:
$ cat cassette.tape
Sleep 1.5s
Type "./grype ubuntu:latest"
Enter
Sleep 3s
- Optionally edit the tape file
- Generate the GIF
$ vhs cassette.tape
File: ./cassette.tape
Sleep 1.5s
Type ./grype ubuntu:latest
Enter 1
Sleep 3s
Creating ...
Host your GIF on vhs.charm.sh: vhs publish <file>.gif
Below is the resulting default GIF, which looks fantastic out of the box, even before playing with themes, fonts and prompts.

vhs Benefits
vhs is very configurable, with some useful supported commands in the .tape file. The support for themes, fonts, resolution and ‘special’ key presses, makes it very flexible for scripting a terminal based application recording.
vhs Limitations
vhs requires the tape author to specify how long to Sleep after each command – or assume the initial values created with vhs record are correct. vhs does not (yet) auto-advance when a command finishes. This may not be a problem if the command you’re recording has a reliable runtime. Still, it might be a problem if the duration of a command is dependent on prevailing conditions such as the network or disk performance.
What do you think? Do you like animated terminal output, or would you prefer a video, an interactive tool, or just a plain README.md? Let me know on our community discourse, or leave a comment wherever you read this blog post.
Automate Container Vulnerability Scanning in CI with Anchore
Achieve container vulnerability scanning nirvana in your CI pipeline with Anchore Enterprise and your preferred CI platform, whether it’s GitHub, GitLab, or Jenkins. Identifying vulnerabilities, security issues, and compliance policy failures early in the software development process is crucial. It’s certainly preferable to uncover these issues during development rather than having them discovered by a customer or during an external audit.
Early detection of vulnerabilities ensures that security and compliance are integrated into your development workflow, reducing the risk of breaches and compliance violations. This proactive approach not only protects your software but also saves time and resources by addressing issues before they escalate.
Enabling CI Integration
At a high level, the steps to connect any CI platform to Enterprise are broadly the same, with implementation details differing between each vendor.
- Enable network connectivity between CI and Enterprise
- Capture Enterprise configuration for AnchoreCTL
- Craft an automation script to operate after the build process
- Install AnchoreCTL
- Capture built container details
- Use AnchoreCTL to submit container details to Enterprise
Once SBOM generation is integrated into the CI pipeline and the SBOMs are submitted to Anchore Enterprise, the following features can quickly be leveraged:
- Known vulnerabilities with severity, and fix availability
- Search for accidental ‘secrets’ sharing such as private API keys
- Scan for malware like trojans and viruses
- Policy enforcement to comply with standards like FedRAMP, CISA and DISA
- Remediation by notifying developers and other agents via standard tools like GitHub issues, JIRA, and Slack
- Scheduled reporting on container insights
CI Integration by Example
Taking GitHub Actions as an example, we can outline the requirements and settings to get up and running with automated SBOM generation and vulnerability management.
Network connectivity
AnchoreCTL uses port 8228 for communication with the Anchore Enterprise SBOM ingest and management API. Ensure the Anchore Enterprise host, where this is configured, is accessible on that port from GitHub. This is site specific and may require firewall, VLAN or other network changes.
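A simple reachability test from a runner (or any host on the same network path) can save debugging time later; netcat is one way to confirm the port is open before wiring up the pipeline.
# Confirm the Anchore Enterprise API port is reachable (hostname is an example)
nc -zv anchore-enterprise.example.com 8228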
Required configuration
AnchoreCTL requires only three environment variables, typically set as GitHub secrets.
- ANCHORECTL_URL – the URL of the Anchore Enterprise API endpoint, e.g. http://anchore-enterprise.example.com:8228
- ANCHORECTL_USERNAME – the user account in Anchore Enterprise that anchorectl will authenticate as
- ANCHORECTL_PASSWORD – the password for that account, set on the Anchore Enterprise instance
On the GitHub repository go to Settings -> Secrets and Variables -> Actions.
Under the ‘Variables’ tab, add ANCHORECTL_URL & ANCHORECTL_USERNAME, and set their values. In the ‘Secrets’ tab, add ANCHORECTL_PASSWORD and set the value.
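If you prefer the command line over the web UI, the GitHub CLI can set the same values; this is a sketch assuming a recent gh release that supports the variable subcommand, with example values.
# Set repository variables and the secret with the GitHub CLI
gh variable set ANCHORECTL_URL --body "http://anchore-enterprise.example.com:8228"
gh variable set ANCHORECTL_USERNAME --body "admin"
gh secret set ANCHORECTL_PASSWORD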
Automation script

Below are the sample snippets from a GitHub action that should be placed in the repository under .github/workflows to enable SBOM generation in Anchore Enterprise. In this example, the workflow builds a container image, pushes it to the GitHub Container Registry, and then submits the image details to Anchore Enterprise for analysis.
First, our action needs a name:
name: Anchore Enterprise Centralized Scan
Pick one or more from this next section, depending on when you require the action to be triggered. It could be based on pushes to the main or other named branches, on a timed schedule, or manually.
Commonly when configuring an action for the first time, manual triggering is used until proven working, then timed or branch automation is enabled later.
on:
## Action runs on a push the branches listed
push:
branches:
- main
## Action runs on a regular schedule
schedule:
## Run at midnight every day
- cron: '0 0 * * *'
## Action runs on demand build
workflow_dispatch:
inputs:
mode:
description: 'On-Demand Build'
In the env section we pass in the settings gathered and configured inside the GitHub web UI earlier. Additionally, the optional ANCHORECTL_FAIL_BASED_ON_RESULTS boolean defines (if true) whether we want the entire action to fail based on the scan results. This may be desirable, to block further processing if any vulnerabilities, secrets or malware are identified.
env:
ANCHORECTL_URL: ${{ vars.ANCHORECTL_URL }}
ANCHORECTL_USERNAME: ${{ vars.ANCHORECTL_USERNAME }}
ANCHORECTL_PASSWORD: ${{ secrets.ANCHORECTL_PASSWORD }}
ANCHORECTL_FAIL_BASED_ON_RESULTS: false
Now we start the actual body of the action, which comprises two jobs, ‘Build’ and ‘Anchore’. The ‘Build’ example here will use externally defined steps to checkout the code in the repo and build a container using docker, then push the resulting image to the container registry. In this case we build and publish to the GitHub Container Registry (ghcr), however, we could publish elsewhere.
jobs:
Build:
runs-on: ubuntu-latest
steps:
- name: "Set IMAGE environmental variables"
run: |
echo "IMAGE=${REGISTRY}/${GITHUB_REPOSITORY}:${GITHUB_REF_NAME}" >> $GITHUB_ENV
- name: Checkout Code
uses: actions/checkout@v3
- name: Log in to the Container registry
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: build local container
uses: docker/build-push-action@v3
with:
tags: ${{ env.IMAGE }}
push: true
load: false
The next job actually generates the SBOM, so let’s break it down. First, the usual boilerplate, but note this job depends on the previous ‘Build’ job having already run.
Anchore:
runs-on: ubuntu-latest
needs: Build
steps:
The same registry settings are used here as were used in the ‘Build’ job above, then we check out the code onto the action runner. The IMAGE variable will be used by the anchorectl command later when submitting the image to Anchore Enterprise.
- name: "Set IMAGE environment variables"
run: |
echo "IMAGE=${REGISTRY}/${GITHUB_REPOSITORY}:${GITHUB_REF_NAME}" >> $GITHUB_ENV
- name: Checkout Code
uses: actions/checkout@v3
Installing the AnchoreCTL binary inside the action runner is required to send the request to the Anchore Enterprise API. Note that the version number, specified as the last parameter, should match the version of your Anchore Enterprise deployment.
- name: Install Latest anchorectl Binary
run: |
curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b ${HOME}/.local/bin v5.7.0
export PATH="${HOME}/.local/bin/:${PATH}"
The Connectivity check is a good way to ensure anchorectl is installed correctly, and configured to connect to the right Anchore Enterprise instance.
- name: Connectivity Check
run: |
anchorectl version
anchorectl system status
anchorectl feed list
Now we actually queue the image up for scanning by our Enterprise instance. Note the use of --wait to ensure the GitHub Action pauses until the backend Enterprise instance completes the scan. Otherwise the next steps would likely fail, as the scan would not yet be complete.
- name: Queue Image for Scanning by Anchore Enterprise
run: |
anchorectl image add --no-auto-subscribe --wait --dockerfile ./Dockerfile --force ${IMAGE}
Once the backend Anchore Enterprise has completed the vulnerability, malware, and secrets scan, we use anchorectl to pull the list of vulnerabilities and display them as a table. This can be viewed in the GitHub Action log, if required.
- name: Pull Vulnerability List
run: |
anchorectl image vulnerabilities ${IMAGE}
Finally, the image check will pull down the results of the policy compliance as defined in your Anchore Enterprise. This will likely be a significantly shorter output than the full vulnerability list, depending on your policy bundle.
If the environment variable ANCHORECTL_FAIL_BASED_ON_RESULTS was set to true earlier in the action, or -f is added to the command below, the action will return as a ‘failed’ run.
- name: Pull Policy Evaluation
run: |
anchorectl image check --detail ${IMAGE}
That’s everything. If configured correctly, the action will run as required, and directly leverage the vulnerability, malware and secrets scanning of Anchore Enterprise.
Not just GitHub
While the example above is clearly GitHub specific, a similar configuration can be used in GitLab pipelines, Jenkins, or indeed any CI system that supports arbitrary shell scripts in automation.
Conclusion
By integrating Anchore Enterprise into your CI pipeline, you can achieve a higher level of security and compliance for your software development process. Automating vulnerability scanning and SBOM management ensures that your software is secure, compliant, and ready for deployment.
Automate your SBOM management with Anchore Enterprise. Get instant access with a 15-day free trial.
AnchoreCTL Setup and Top Tips
Introduction
Welcome to the beginner’s guide to AnchoreCTL, a powerful command-line tool designed for seamless interaction with Anchore Enterprise via the Anchore API. Whether you’re wrangling SBOMs, managing Kubernetes runtime inventories, or ensuring compliance at scale, AnchoreCTL is your go-to companion.
Overview
AnchoreCTL enables you to efficiently manage and inspect all aspects of your Anchore Enterprise deployments. It serves both as a human-readable configuration tool and a CLI for automation in CI/CD environments, making it indispensable for DevOps, security engineers, and developers.
If you’re familiar with Syft and Grype, AnchoreCTL will be a valuable addition to your toolkit. It offers enhanced capabilities to manage tens, hundreds, or even thousands of images and applications across your organization.
In this blog series, we’ll explore top tips and practical use cases to help you leverage AnchoreCTL to its fullest potential. In this part, we’ll review the basics of getting started with AnchoreCTL. In subsequent posts, we will dive deep on container scanning, SBOM Management and Vulnerability Management.
We’ll start by getting AnchoreCTL installed and learning about its configuration and use. I’ll be using AnchoreCTL on my macOS laptop, connected to a demo of Anchore Enterprise running on another machine.
Get AnchoreCTL
AnchoreCTL is a command-line tool available for macOS, Linux and Windows. The AnchoreCTL Deployment docs cover installation and deployment in detail. Grab the release of AnchoreCTL that matches your Anchore Enterprise install.
At the time of writing, the current release of AnchoreCTL and Anchore Enterprise is v5.6.0. Both are updated on a monthly cadence, and yours may be newer or older than what we’re using here. The AnchoreCTL Release Notes contain details about the latest, and all historical releases of the utility.
You may have more than one Anchore Enterprise deployment on different releases. As AnchoreCTL is a single binary, you can install multiple versions on a system to support all the deployments in your landscape.
macOS / Linux
The following snippet will install the binary in a directory of your choosing. On my personal workstation, I use $HOME/bin, but anywhere in your $PATH is fine. Placing the application binary in /usr/local/bin/ makes sense in a shared environment.
$ # Download the macOS or Linux build of anchorectl
$ curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b $HOME/bin v5.6.0
Windows
The Windows install snippet grabs the zip file containing the binary. Once downloaded, unpack the zip and copy the anchorectl command somewhere appropriate.
$ # Download the Windows build of anchorectl
$ curl -o anchorectl.zip https://anchorectl-releases.anchore.io/anchorectl/v5.6.0/anchorectl_5.6.0_windows_amd64.zip
Setup
Quick check
Once AnchoreCTL is installed, check it’s working with a simple anchorectl version. It should print output similar to this:
$ # Show the version of the anchorectl command line tool
$ anchorectl version
Application: anchorectl
Version: 5.6.0
SyftVersion: v1.4.1
BuildDate: 2024-05-27T18:28:23Z
GitCommit: 7c134b46b7911a5a17ba1fa5f5ffa4e3687f170b
GitDescription: v5.6.0
Platform: darwin/arm64
GoVersion: go1.21.10
Compiler: gc
Configure
The anchorectl command has a --help option that displays a lot of useful information beyond just a reference list of command-line options. Below are the first 15 lines to illustrate what you should see. The actual output is over 80 lines, so we’ve snipped it down here.
$ # Show the top 15 lines of the help
$ anchorectl --help | head -n 15
Usage:
anchorectl [command]
Application Config:
(search locations: .anchorectl.yaml, anchorectl.yaml, .anchorectl/config.yaml, ~/.anchorectl.yaml, ~/anchorectl.yaml, $XDG_CONFIG_HOME/anchorectl/config.yaml)
# the URL to the Anchore Enterprise API (env var: "ANCHORECTL_URL")
url: ""
# the Anchore Enterprise username (env var: "ANCHORECTL_USERNAME")
username: ""
# the Anchore Enterprise user's login password (env var: "ANCHORECTL_PASSWORD")
On launch, the anchorectl binary will search for a yaml configuration file in a series of locations shown in the help above. For a quick start, just create .anchorectl.yaml in your home directory, but any of the listed locations are fine.
Here is my very basic .anchorectl.yaml which has been configured with the minimum values of url, username and password to get started. I’ve pointed anchorectl at the Anchore Enterprise v5.6.0 running on my Linux laptop ‘ziggy’, using the default port, username and password. We’ll see later how we can create new accounts and users.
$ # Show the basic config file
$ cat .anchorectl.yml
url: "http://ziggy.local:8228"
username: "admin"
password: "foobar"
Config Check
The configuration can be validated with anchorectl -v. If the configuration is syntactically correct, you’ll see the online help displayed, and the command will exit with return code 0. In this example, I have truncated the lengthy anchorectl -v output.
$ # Good config
$ cat .anchorectl.yml
url: "http://ziggy.local:8228"
username: "admin"
password: "foobar"
$ anchorectl -v
[0000] INFO
anchorectl version: 5.6.0
Usage: anchorectl [command]
⋮
--version version for anchorectl
Use "anchorectl [command] --help" for more information about a command.
$ echo $?
0
In this example, I omitted a closing quotation mark on the url: line, to force an error.
$ # Bad config
$ cat .anchorectl.yml
url: "http://ziggy.local:8228
username: "admin"
password: "foobar"
$ anchorectl -v
⋮
error: invalid application config: unable to parse config="/Users/alan/.anchorectl.yml": While parsing config: yaml: line 1: did not find expected key
$ echo $?
1
Connectivity Check
Assuming the configuration file is syntactically correct, we can now validate that the correct url, username and password are set for the Anchore Enterprise system with an anchorectl system status. If all is going well, we’ll get a report similar to this:

anchorectl system status shows the services running on my Anchore Enterprise.
Multiple Configurations
You may also use the -c or --config option to specify the path to a configuration file. This is useful if you communicate with multiple Anchore Enterprise systems.
$ # Show the production configuration file
$ cat ./production.anchore.yml
url: "http://remotehost.anchoreservers.com:8228"
username: "admin"
password: "foobar"
$ # Show the development configuration file, which points to a diff PC
$ cat ./development.anchore.yml
url: "http://workstation.local:8228"
username: "admin"
password: "foobar"
$ # Connect to remote production instance
$ anchorectl -c ./production.anchore.yml system status
✔ Status system⋮
$ # Connect to developer workstation
$ anchorectl -c ./development.anchore.yml system status
✔ Status system⋮
Environment Variables
Note from the --help further up that AnchoreCTL can be configured with environment variables instead of the configuration file. This can be useful when the tool is deployed in CI/CD environments, where these can be set using the platform ‘secret storage’.
So, without any configuration file, we can issue the same command but set the options via environment variables. I’ve truncated the output below, but note the ✔ Status system indicating a successful call to the remote system.
$ # Delete the configuration to prove we aren't using it
$ rm .anchorectl.yml
$ anchorectl system status
⠋
error: 1 error occurred: * no enterprise URL provided
$ # Use environment variables instead
$ ANCHORECTL_URL="http://ziggy.local:8228" \
ANCHORECTL_USERNAME="admin" \
ANCHORECTL_PASSWORD="foobar" \
anchorectl system status
✔ Status system⋮
Of course, in a CI/CD environment such as GitHub, GitLab, or Jenkins, these environment variables would be set in a secure store and only injected as the job running anchorectl is initiated.
Users
Viewing Accounts & Users
In the examples above, I’ve been using the default username and password for a demo Anchore Enterprise instance. AnchoreCTL can be used to query and manage the system’s accounts and users. Documentation for these activities can be found in the user management section of the docs.
$ # Show list of accounts on the remote instance
$ anchorectl account list
✔ Fetched accounts
┌───────┬─────────────────┬─────────┐
│ NAME │ EMAIL │ STATE │
├───────┼─────────────────┼─────────┤
│ admin │ admin@myanchore │ enabled │
└───────┴─────────────────┴─────────┘
We can also list existing users on the system:
$ # Show list of users (if any) in the admin account
$ anchorectl user list --account admin
✔ Fetched users
┌──────────┬──────────────────────┬───────────────────────┬────────┬──────────┬────────┐
│ USERNAME │ CREATED AT │ PASSWORD LAST UPDATED │ TYPE │ IDP NAME │ SOURCE │
├──────────┼──────────────────────┼───────────────────────┼────────┼──────────┼────────┤
│ admin │ 2024-06-10T11:48:32Z │ 2024-06-10T11:48:32Z │ native │ │ │
└──────────┴──────────────────────┴───────────────────────┴────────┴──────────┴────────┘
Managing Accounts
AnchoreCTL can be used to add (account add), enable (account enable), disable (account disable) and remove (account delete) accounts from the system:
$ # Create a new account
$ anchorectl account add dev_team_alpha
✔ Added account
Name: dev_team_alpha
Email:
State: enabled
$ # Get a list of accounts
$ anchorectl account list
✔ Fetched accounts
┌────────────────┬─────────────────┬─────────┐
│ NAME │ EMAIL │ STATE │
├────────────────┼─────────────────┼─────────┤
│ admin │ admin@myanchore │ enabled │
│ dev_team_alpha │ │ enabled │
│ dev_team_beta │ │ enabled │
└────────────────┴─────────────────┴─────────┘
$ # Disable an account before deleting it
$ anchorectl account disable dev_team_alpha
✔ Disabled accountState: disabled
$ # Delete the account
$ anchorectl account delete dev_team_alpha
✔ Deleted account
No results
$ # Get a list of accounts
$ anchorectl account list
✔ Fetched accounts
┌────────────────┬─────────────────┬──────────┐
│ NAME │ EMAIL │ STATE │
├────────────────┼─────────────────┼──────────┤
│ admin │ admin@myanchore │ enabled │
│ dev_team_alpha │ │ deleting │
│ dev_team_beta │ │ enabled │
└────────────────┴─────────────────┴──────────┘
Managing Users
Users exist within accounts, but usernames are globally unique since they are used for authenticating API requests. Any user in the admin account can perform user management in the default Anchore Enterprise configuration using the native authorizer.
For more information on configuring other authorization plugins, see Authorization Plugins and Configuration in our documentation.
Users can also be managed via AnchoreCTL. Here we create a new dev_admin_beta user under the dev_team_beta account and give them the full-control role as an administrator of the team. We’ll set a password of CorrectHorseBatteryStable for the admin user, but pass that via the environment rather than echo it out on the command line.
$ # Create a new user from the dev_team_beta account
$ ANCHORECTL_USER_PASSWORD=CorrectHorseBatteryStable \
anchorectl user add --account dev_team_beta dev_admin_beta \
--role full-control
✔ Added user dev_admin_beta
Username: dev_admin_beta
Created At: 2024-06-12T10:25:23Z
Password Last Updated: 2024-06-12T10:25:23Z
Type: native
IDP Name:
Source:
Let’s check that worked:
$ # Check that the new user was created
$ anchorectl user list --account dev_team_beta
✔ Fetched users
┌────────────────┬──────────────────────┬───────────────────────┬────────┬──────────┬────────┐
│ USERNAME │ CREATED AT │ PASSWORD LAST UPDATED │ TYPE │ IDP NAME │ SOURCE │
├────────────────┼──────────────────────┼───────────────────────┼────────┼──────────┼────────┤
│ dev_admin_beta │ 2024-06-12T10:25:23Z │ 2024-06-12T10:25:23Z │ native │ │ │
└────────────────┴──────────────────────┴───────────────────────┴────────┴──────────┴────────┘
That user is now able to use the API.
$ # List users from the dev_team_beta account
$ ANCHORECTL_USERNAME=dev_admin_beta \
ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
ANCHORECTL_ACCOUNT=dev_team_beta \
anchorectl user list
✔ Fetched users
┌────────────────┬──────────────────────┬───────────────────────┬────────┬──────────┬────────┐
│ USERNAME │ CREATED AT │ PASSWORD LAST UPDATED │ TYPE │ IDP NAME │ SOURCE │
├────────────────┼──────────────────────┼───────────────────────┼────────┼──────────┼────────┤
│ dev_admin_beta │ 2024-06-12T10:25:23Z │ 2024-06-12T10:25:23Z │ native │ │ │
└────────────────┴──────────────────────┴───────────────────────┴────────┴──────────┴────────┘
Using AnchoreCTL
We now have AnchoreCTL set up to talk to our Anchore Enterprise instance, and a user other than admin to connect as, so let’s actually use it to scan a container. We have two options here: ‘Centralized Analysis’ and ‘Distributed Analysis’.
In Centralized Analysis, any container we request will be downloaded and analyzed by our Anchore Enterprise. If we choose Distributed Analysis, the image will be analyzed by anchorectl itself. This is covered in much more detail in the Vulnerability Management section of the docs.
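As a rough sketch of the distributed route (the --from flag here is taken from the Vulnerability Management docs rather than tested in this walkthrough, so double-check the syntax for your anchorectl version), the request looks something like the below. We’ll stick with Centralized Analysis for the rest of this post.
$ # Distributed analysis sketch: anchorectl generates the SBOM locally
$ # and uploads the result (--from is assumed from the docs; verify it
$ # against your anchorectl version)
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image add docker.io/library/debian:latest --from registry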
Currently we have no images submitted for analysis:
$ # Query Enterprise to get a list of container images and their status
$ ANCHORECTL_USERNAME=dev_admin_beta \
ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
ANCHORECTL_ACCOUNT=dev_team_beta \
anchorectl image list
✔ Fetched images
┌─────┬────────┬──────────┬────────┐
│ TAG │ DIGEST │ ANALYSIS │ STATUS │
├─────┼────────┼──────────┼────────┤
└─────┴────────┴──────────┴────────┘
Let’s submit the latest Debian container from Dockerhub to Anchore Enterprise for analysis. The backend Anchore Enterprise deployment will then pull (download) the image, and analyze it.
$ # Request that enterprise downloads and analyzes the debian:latest image
$ ANCHORECTL_USERNAME=dev_admin_beta \
ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
ANCHORECTL_ACCOUNT=dev_team_beta \
anchorectl image add docker.io/library/debian:latest
✔ Added Image docker.io/library/debian:latest
Image:
  status: not-analyzed (active)
  tag: docker.io/library/debian:latest
  digest: sha256:820a611dc036cb57cee7...
  id: 7b34f2fc561c06e26d69d7a5a58...
Initially the image starts in a state of not-analyzed. Once it’s been downloaded, it’ll be queued for analysis. When the analysis begins, the status will change to analyzing, after which it will change to analyzed. We can check the status with anchorectl image list.
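Rather than re-running the command by hand, one convenient option is to wrap the same image list invocation in watch (just a convenience sketch; the 30-second interval is arbitrary):
$ # Re-run the image list every 30 seconds until the ANALYSIS column
$ # reads "analyzed" (the interval is arbitrary)
$ watch -n 30 "ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image list"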
$ # Check the status of the container image we requested
$ ANCHORECTL_USERNAME=dev_admin_beta \
ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
ANCHORECTL_ACCOUNT=dev_team_beta \
anchorectl image list
✔ Fetched images
┌─────────────────────────────────┬────────────────────────────────┬───────────┬────────┐
│ TAG │ DIGEST │ ANALYSIS │ STATUS │
├─────────────────────────────────┼────────────────────────────────┼───────────┼────────┤
│ docker.io/library/debian:latest │ sha256:820a611dc036cb57cee7... │ analyzing │ active │
└─────────────────────────────────┴────────────────────────────────┴───────────┴────────┘
After a short while, the image has been analyzed.
$ # Check the status of the container image we requested
$ ANCHORECTL_USERNAME=dev_admin_beta \
ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
ANCHORECTL_ACCOUNT=dev_team_beta \
anchorectl image list
✔ Fetched images
┌─────────────────────────────────┬────────────────────────────────┬───────────┬────────┐
│ TAG │ DIGEST │ ANALYSIS │ STATUS │
├─────────────────────────────────┼────────────────────────────────┼───────────┼────────┤
│ docker.io/library/debian:latest │ sha256:820a611dc036cb57cee7... │ analyzed │ active │
└─────────────────────────────────┴────────────────────────────────┴───────────┴────────┘
Results
Once analysis is complete, we can inspect the results, again with anchorectl.
Container contents
First, let’s see what Operating System packages Anchore found in this container with anchorectl image content docker.io/library/debian:latest -t os.
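In full, with the credentials passed via the environment as in the earlier examples, that’s:
$ # List the operating system packages found in the image
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image content docker.io/library/debian:latest -t os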

SBOM
We can also pull the Software Bill of Materials (SBOM) for this image from Anchore with anchorectl image sbom docker.io/library/debian:latest -o table. We can use -f to write this to a file, and -o syft-json (for example) to output in a different format; there’s an example of this after the table below.
$ # Get the SBOM for the image as a table
$ ANCHORECTL_USERNAME=dev_admin_beta \
ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
ANCHORECTL_ACCOUNT=dev_team_beta \
anchorectl image sbom docker.io/library/debian:latest -o table
✔ Fetched SBOM docker.io/library/debian:latest
NAME VERSION TYPE
adduser 3.134 deb
apt 2.6.1 deb
base-files 12.4+deb12u6 deb
⋮
util-linux 2.38.1-5+deb12u1 deb
util-linux-extra 2.38.1-5+deb12u1 deb
zlib1g 1:1.2.13.dfsg-1 deb
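To keep a copy of the SBOM rather than just printing a table, the -f and -o flags mentioned above can be combined; the output filename here is just an example:
$ # Write the SBOM to a file in Syft JSON format
$ # (debian-latest.sbom.json is an arbitrary example filename)
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image sbom docker.io/library/debian:latest \
  -o syft-json -f debian-latest.sbom.json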
Vulnerabilities
Finally, let’s have a quick look to see if any OS vulnerabilities were found in this image with anchorectl image vulnerabilities docker.io/library/debian:latest -t os. This produces a lot of super-wide output, so click through to see the full-size image.
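Run in the same way as the earlier commands, that looks like:
$ # List OS vulnerabilities found in the image
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image vulnerabilities docker.io/library/debian:latest -t os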

Conclusion
So far we’ve introduced AnchoreCTL and shown that it’s easy to install, configure and test. It can be used both locally on developer workstations and in CI/CD environments such as GitHub, GitLab and Jenkins. We’ll cover the integration of AnchoreCTL with source forges in a later post.
AnchoreCTL is a powerful tool that can be used to automate scanning container contents, generating SBOMs, and analyzing for vulnerabilities.
Find out more about AnchoreCTL in our documentation, and request a demo of Anchore Enterprise.
Add SBOM Generation to Your GitHub Project with Syft
According to the latest figures, GitHub has over 100 million developers working on over 420 million repositories, with at least 28M being public repos. Unfortunately, very few software repos contain a Software Bill of Materials (SBOM) inventory of what’s been released.
SBOMs (Software Bill of Materials) are crucial in a repository as they provide a comprehensive inventory of all components, improving transparency and traceability in the software supply chain. This allows developers and security teams to quickly identify and address vulnerabilities, enhancing overall security and compliance with regulatory standards.
Anchore developed the sbom-action GitHub Action to automatically generate an SBOM using Syft. Developers can quickly add the action via the GitHub Marketplace and pretty much fire and forget the setup.
What is an SBOM?
Anchore developers have written plenty over the years about What is an SBOM, but here is the tl;dr:
An SBOM (Software Bill of Materials) is a detailed list of all software project components, libraries, and dependencies. It serves as a comprehensive inventory that helps understand the software’s structure and the origins of its components.
An SBOM in your project enhances security by quickly identifying and mitigating vulnerabilities in third-party components. Additionally, it ensures compliance with regulatory standards and provides transparency, essential for maintaining trust with stakeholders and users.
Introducing Anchore’s SBOM GitHub Action
Adding an SBOM is a cinch with the GitHub Action for SBOM Generation provided by Anchore. Once added to a repo, the action will execute a Syft scan in the workspace directory and upload the resulting SBOM as a workflow artifact in SPDX format.
The SBOM Action can scan a Docker image directly from the container registry with or without registry credentials specified. Alternatively, it can scan a directory full of artifacts or a specific single file.
The action will also detect if it’s being run during the GitHub release and upload the SBOM as a release asset. Easy!
How to Add the SBOM GitHub Action to Your Project
Assuming you already have a GitHub account and a repository set up, adding the SBOM action is straightforward.

- Navigate to the GitHub Marketplace
- Search for “Anchore SBOM Action” or visit Anchore SBOM Action directly
- Add the action to your repository by clicking the green “Use latest version” button
- Configure the action in your workflow file
That’s it!
Example Workflow Configuration
Here’s a bare-bones configuration for running the Anchore SBOM Action on each push to the repo.
name: Generate SBOM
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Anchore SBOM Action
        uses: anchore/sbom-action@v0
There are further options detailed on the GitHub Marketplace page for the action. For example, use output-file to specify the resulting SBOM file name, and format to select whether to build an SPDX or CycloneDX formatted SBOM.
Results and Benefits
After the GitHub action is set up, the SBOM will start being generated on each push or with every release – depending on your configuration.
Once the SBOM is published on your GitHub repo, users can analyze it to identify and address vulnerabilities in third-party components. They can also use it to ensure compliance with security and regulatory standards, maintaining the integrity of the software supply chain.
Additional Resources
The SBOM action is open source and is available under the Apache 2.0 License in the sbom-action repository. It relies on Syft which is available under the same license, also on GitHub. We welcome contributions to both sbom-action and Syft, as well as Grype, which can consume and process these generated SBOMs.
Join us on Discourse to discuss all our open source tools.

Four Years of Syft Development in 4 Minutes at 4K
Our open-source SBOM and vulnerability scanning tools, Syft and Grype, recently turned four years old. So I did what any nerd would do: render an animated visualization of their development using the now-venerable Gource. Initially, I wanted to render these videos at a 120fps frame rate, but that didn’t go well. Read on to find out how that panned out.
My employer (perhaps foolishly) gave me the keys to our Anchore YouTube and Anchore Vimeo accounts. You can find the video I rendered on YouTube or embedded below.
For those unaware, Gource is a popular open-source project by Andrew Caudwell. Its purpose is to visualize development with pretty OpenGL-rendered videos. You may have seen these animated glowing renders before, as Gource has been around for a while now.
Syft is Anchore’s command-line tool and library for generating a software bill of materials (SBOM) from container images and filesystems. Grype is our vulnerability scanner for container images and filesystems. They’re both fundamental components of our Anchore Enterprise platform but are also independently famous.
Generating the video
Plenty of guides online cover how to build Gource visualizations, which are pretty straightforward. Gource analyses the git log of changes in a repository to generate frames of animation which can be viewed or saved to a video. There are settings to control various aspects of the animation, which are well documented in the Gource Wiki.
By default, while Gource is running, a window displaying the animation will appear on your screen. So, if you want to see what the render will look like, most of the defaults are fine when running Gource directly.
Tweak the defaults
I wanted to limit the video duration, and render at a higher resolution than my laptop panel supports. I also wanted the window to be hidden while the process runs.
tl;dr Here’s the full command line I used to generate and encode the 4K video in the background.
$ /usr/bin/xvfb-run --server-num=99 -e /dev/stdout \
-s '-screen 0 4096x2160x24 ' /usr/bin/gource \
--max-files 0 --font-scale 4 --output-framerate 60 \
-4096x2160 --auto-skip-seconds 0.1 --seconds-per-day 0.16 \
--bloom-multiplier 0.9 --fullscreen --highlight-users \
--multi-sampling --stop-at-end --high-dpi \
--user-image-dir ../faces/ --start-date 2020-05-07 \
--title 'Syft Development https://github.com/anchore/syft' \
-o - | \
ffmpeg -y -r 60 -f image2pipe -vcodec ppm -i - \
-vcodec libx264 -preset veryfast -pix_fmt yuv420p \
-crf 1 -threads 0 -bf 0 ../syft-4096x2160-60.mkv
Let’s take a step back and examine the preparatory steps and some interesting points to note.
Preparation
The first thing to do is to get Gource and ffmpeg. I’m using Ubuntu 24.04 on my ThinkPad Z13, so a simple sudo apt install gource ffmpeg works.
Grab the Syft and/or Grype source code.
$ mkdir -p ~/Videos/gource/
$ cd ~/Videos/gource
$ git clone https://github.com/anchore/syft
$ git clone https://github.com/anchore/grype
Gource can use avatar images in the videos which represent the project contributors. I used gitfaces for this. Gitfaces is available from PyPI, so can be installed with pip install -U gitfaces or similar. Once installed, generate the avatars from within the project folder.
$ cd ~/Videos/gource/syft
$ mkdir ../faces
$ gitfaces . ../faces
Do this for each project you wish to render out. I used a central ../faces folder as there would be some duplication between the projects I’m rendering. Note that not everyone has an avatar, so they’ll show up as an anonymous “head and shoulders” in the animation.
Test render
Perform a quick test to ensure Gource is installed correctly and the avatars are working.
$ cd ~/Videos/gource/syft
$ /usr/bin/gource --user-image-dir ../faces/
A default-sized window of 1052×834 should appear with nicely rendered blobs and lines. If you watch it for any appreciable length of time, you’ll notice it can be boring in the gaps between commits. Gource has some options to improve this.
The --auto-skip-seconds option defines when Gource will skip to the next entry in the git log while there is no activity. The default is 3 seconds, which can be reduced. With --seconds-per-day we can set the render speed so we don’t get a very long video.
I used 0.1 and 0.16, respectively. The result is a shorter, faster, more dynamic video. The Gource Wiki details many other options for Gource.
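To see the effect of those two values before committing to a long render, it’s worth a quick interactive run with just those options (plus the avatars from earlier):
$ # Quick on-screen preview with the faster pacing
$ /usr/bin/gource --auto-skip-seconds 0.1 --seconds-per-day 0.16 \
  --user-image-dir ../faces/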
Up the resolution!
While the default 1052×834 video size is fine for a quick render, I wanted something much bigger. Using the ‘4 years in 4 minutes at 4K’ heading would be fun, so I went for 4096×2160. My laptop doesn’t have a 4K display (it’s 2880×1800 natively), so I decided to render it in the background, saving it to a video.
To run it in the background, I used xvfb-run from the xvfb package on my Ubuntu system. A quick sudo apt install xvfb installed it. To run Gource inside xvfb we simply prefix the command line like this:
(this is not the full command, just a snippet to show the xvfb syntax)
$ /usr/bin/xvfb-run --server-num=99 -e /dev/stdout \
-s '-screen 0 4096x2160x24 ' /usr/bin/gource -4096x2160
Note that the XServer’s resolution matches the video’s, and we use the fullscreen option in Gource to use the whole virtual display. Here we also specify the color bit-depth of the XServer – in this case 24.
Create the video
Using ffmpeg—the Swiss army knife of video encoding—we can turn Gource’s output into a video. I used the x264 codec with some reasonable options. We can run these as two separate commands: one to generate a (huge) series of ppm images and the second to compress that into a reasonable file size.
$ /usr/bin/xvfb-run --server-num=99 -e /dev/stdout \
-s '-screen 0 4096x2160x24 ' /usr/bin/gource \
--max-files 0 --font-scale 4 --output-framerate 60 \
-4096x2160 --auto-skip-seconds 0.1 --seconds-per-day 0.16 \
--bloom-multiplier 0.9 --fullscreen --highlight-users \
--multi-sampling --stop-at-end --high-dpi \
--user-image-dir ../faces/ --start-date 2020-05-07 \
--title 'Syft Development: https://github.com/anchore/syft' \
-o ../syft-4096x2160-60.ppm
$ ffmpeg -y -r 60 -f image2pipe -vcodec ppm \
-i ../syft-4096x2160-60.ppm -vcodec libx264 \
-preset veryfast -pix_fmt yuv420p -crf 1 \
-threads 0 -bf 0 ../syft-4096x2160-60.mkv
Four years of commits as uncompressed 4K60 images will fill the disk pretty fast. So it’s preferable to chain the two commands together so we save time and don’t waste too much disk space.
$ /usr/bin/xvfb-run --server-num=99 -e /dev/stdout \
-s '-screen 0 4096x2160x24 ' /usr/bin/gource \
--max-files 0 --font-scale 4 --output-framerate 60 \
-4096x2160 --auto-skip-seconds 0.1 --seconds-per-day 0.16 \
--bloom-multiplier 0.9 --fullscreen --highlight-users \
--multi-sampling --stop-at-end --high-dpi \
--user-image-dir ../faces/ --start-date 2020-05-07 \
--title 'Syft Development: https://github.com/anchore/syft' \
-o - | ffmpeg -y -r 60 -f image2pipe -vcodec ppm -i - \
-vcodec libx264 -preset veryfast -pix_fmt yuv420p \
-crf 1 -threads 0 -bf 0 ../syft-4096x2160-60.mkv
On my ThinkPad Z13 equipped with an AMD Ryzen 7 PRO 6860Z CPU, this takes around 42 minutes and generates a ~10GB mkv video. Here’s what the resource utilisation looks like while this is running. Fully maxed out all the CPU cores. Toasty!
Challenges
More frames
Initially, I considered creating a video at 120fps rather than the default 60fps that Gource generates. However, Gource is limited in code to 25, 30, and 60fps. As an academic exercise, I patched Gource (diff below) to generate visualizations at the higher frame rate.
I’m not a C++ developer, nor do I play one on TV! But with a bit of grep and a small amount of trial and error, I modified and rebuilt Gource to add support for 120fps.
diff --git a/src/core b/src/core
--- a/src/core
+++ b/src/core
@@ -1 +1 @@
-Subproject commit f7fa400ec164f6fb36bcca5b85d2d2685cd3c7e8
+Subproject commit f7fa400ec164f6fb36bcca5b85d2d2685cd3c7e8-dirty
diff --git a/src/gource.cpp b/src/gource.cpp
index cf86c4f..755745f 100644
--- a/src/gource.cpp
+++ b/src/gource.cpp
@@ -153,7 +153,7 @@ Gource::Gource(FrameExporter* exporter) {
root = 0;
//min physics rate 60fps (ie maximum allowed delta 1.0/60)
- max_tick_rate = 1.0 / 60.0;
+ max_tick_rate = 1.0 / 120.0;
runtime = 0.0f;
frameskip = 0;
framecount = 0;
@@ -511,7 +511,7 @@ void Gource::setFrameExporter(FrameExporter* exporter, int video_framerate) {
this->frameskip = 0;
//calculate appropriate tick rate for video frame rate
- while(gource_framerate<60) {
+ while(gource_framerate<120) {
gource_framerate += video_framerate;
this->frameskip++;
}
I then re-ran Gource with --output-framerate 120 and ffmpeg with -r 120, which successfully generated the higher frame-rate files.
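Concretely, those runs used essentially the same pipeline as before, with just the frame-rate options bumped and the resolution set to 2560×1440 to match the test renders in the listing below (reconstructed from the earlier command rather than copied verbatim, so treat it as a sketch):
$ /usr/bin/xvfb-run --server-num=99 -e /dev/stdout \
  -s '-screen 0 2560x1440x24 ' /usr/bin/gource \
  --max-files 0 --font-scale 4 --output-framerate 120 \
  -2560x1440 --auto-skip-seconds 0.1 --seconds-per-day 0.16 \
  --bloom-multiplier 0.9 --fullscreen --highlight-users \
  --multi-sampling --stop-at-end --high-dpi \
  --user-image-dir ../faces/ --start-date 2020-05-07 \
  --title 'Syft Development: https://github.com/anchore/syft' \
  -o - | ffmpeg -y -r 120 -f image2pipe -vcodec ppm -i - \
  -vcodec libx264 -preset veryfast -pix_fmt yuv420p \
  -crf 1 -threads 0 -bf 0 ../syft-2560x1440-120.mkv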
$ ls -lh
-rw-rw-r-- 1 alan alan 7.3G Jun 15 21:42 syft-2560x1440-60.mkv
-rw-rw-r-- 1 alan alan 8.9G Jun 15 22:14 grype-2560x1440-60.mkv
-rw-rw-r-- 1 alan alan 13G Jun 16 22:56 syft-2560x1440-120.mkv
-rw-rw-r-- 1 alan alan 16G Jun 16 22:33 grype-2560x1440-120.mkv
As you can see from these test renders (and as you might expect), with these settings double the frames means roughly double the file size. I could have fiddled with ffmpeg to use better-optimized options, or a different codec, but decided against it.
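Had I fiddled, the obvious move would have been to let x264 spend more time in exchange for a smaller file. Purely as an illustration (the preset, CRF value and output filename are hypothetical, not something I actually ran), a re-encode might look like:
$ # Hypothetical re-encode: slower preset and higher CRF for a much
$ # smaller file, at the cost of encode time and some quality
$ ffmpeg -y -i ../syft-2560x1440-120.mkv -vcodec libx264 \
  -preset slower -pix_fmt yuv420p -crf 18 \
  ../syft-2560x1440-120-smaller.mkv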
There’s an even more significant issue here. There are precious few places to host high-frame-rate videos; few people have the hardware, bandwidth, and motivation to watch them. So, I rolled back to 60fps for subsequent renders.
More pixels
While 4K (4096×2160) is fun and fits the story of “4 years in 4 minutes at 4K”, I did consider trying to render out at 8K (7680×4320). After all, I had time on my hands at the weekend and spare CPU cycles, so why not?
Sadly, the hardware H.264 encoder in my ThinkPad Z13 has a maximum canvas size of 4096×4096, which is far too small for 8K. I could have encoded using software rather than hardware acceleration, but that would have been ludicrously more time-consuming.
I do have an NVIDIA card but don’t believe it’s new enough to do 8K either, being a ‘lowly’ (these days) RTX 2080 Ti. My work laptop is an M3 MacBook Pro. I didn’t attempt rendering there because I couldn’t fathom getting xvfb working to do off-screen rendering in Gource on macOS.
I have another four years to figure this out before my ‘8 years of Syft in 8 minutes at 8K’ video, though!
Minor edits
Once Gource and ffmpeg did their work, I used Kdenlive to add some music and our stock “top and tail” animated logo to the video and then rendered it for upload. The default compression settings in Kdenlive dramatically reduced the file size to something more manageable and uploadable!
Conclusion
Syft and Grype are – in open source terms – relatively young, with a small, dedicated team working on them. As such, the Gource renders aren’t as busy or complex as those of more well-established projects with bigger teams.
We certainly welcome external contributions over on the Syft and Grype repositories. We also have a new Anchore Community Discourse where you can discuss the projects and this article.
If you’d like to see how Syft and Grype can become integral to your SBOM generation, vulnerability and policy enforcement tooling, contact us and watch the guided tour.
I always find these renders technically neat, beautiful and relaxing to watch. The challenges of rendering them also led me down some interesting technical paths. I’d love to hear feedback and suggestions over on the Anchore Community Discourse.