Successful container and CI/CD security encompasses not only vulnerability analysis but also a mindset of integrating security into every step of the Software Development Life Cycle (SDLC). At Anchore, we believe that scanning early and often, backed by policy enforcement, reduces overall security risk. This blog shares some of the practices that have helped our customers succeed with Anchore.
Scan Early/Scan Often
Anchore allows you to start analyzing right away, without changing your existing processes. There is no downside to adding an `anchore-cli image add <new image>` step at the end of your CI/CD pipeline and exploring the resulting vulnerability scans and policy evaluations later. Images added to Anchore remain until you decide to remove them, so analyses can be revisited and new policies applied as your organizational needs evolve.
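As a minimal sketch of that flow (the image name is a placeholder for whatever your pipeline produces):

```sh
# Add the image Anchore should analyze (name is hypothetical).
anchore-cli image add docker.io/myorg/myapp:latest

# Analysis runs asynchronously; block until it completes.
anchore-cli image wait docker.io/myorg/myapp:latest

# The analysis is retained, so these can be run now or revisited later,
# including after a new policy bundle is activated.
anchore-cli image vuln docker.io/myorg/myapp:latest all
anchore-cli evaluate check docker.io/myorg/myapp:latest
```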
Scanning early catches vulnerabilities and policy violations before they are deployed to production. By scanning during the CI/CD pipeline, issues can be resolved before runtime, narrowing the remaining work to issues that are strictly runtime-related. This “Shift Left” mentality moves application quality and security considerations closer to the developer, allowing issues to be addressed sooner in the delivery chain. Whether through CI/CD build plugins (Jenkins, CircleCI, etc.) or repository image scanning, adding security analysis to your delivery pipeline reduces both the time it takes to resolve issues and the cost of fixing security issues in production.
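As a sketch of a gating step that any CI runner (Jenkins, CircleCI, etc.) can execute as a shell stage (the `BUILD_TAG` variable and image name are hypothetical stand-ins for your CI system's values):

```sh
#!/bin/sh
set -e  # any failing command below fails the CI stage

# anchore-cli reads ANCHORE_CLI_URL, ANCHORE_CLI_USER, and ANCHORE_CLI_PASS
# from the environment; configure those as CI secrets.
IMAGE="docker.io/myorg/myapp:${BUILD_TAG}"   # hypothetical CI-provided tag

anchore-cli image add "${IMAGE}"
anchore-cli image wait "${IMAGE}"

# 'evaluate check' exits non-zero on a FAIL result, breaking the build
# so the issue is addressed before the image ships.
anchore-cli evaluate check "${IMAGE}"
```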
To learn more about Anchore’s CI/CD integrations, take a look at our CI/CD documentation.
To learn more about repository image analysis, see our Analyzing Images documentation.
Custom Policy Creation
At Anchore, we believe in more than just CVEs. Anchore policies act as a single checkpoint for Dockerfile best practices and keep enforcement aligned with your organization's security standards, such as how secrets are stored and how applications are configured within your containers. At a high level, policy bundles contain the policies themselves, whitelists, mappings, whitelisted images, and blacklisted images.
Policies can be configured for compliance with NIST, ISO, and banking regulations, among many others. Because industry regulations and audits regularly affect time to deployment, performing policy checks early in the CI/CD pipeline can speed up deployments without sacrificing audit or regulatory requirements. At a finer-grained level, custom policies can enforce organizational best practices earlier in the pipeline, encouraging buy-in from both developers and security personnel.
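As a rough sketch, the following creates and activates a minimal custom bundle with two rules: block images that run as root, and stop on high-severity vulnerabilities. The bundle schema evolves between Anchore Engine versions (for example, `policy_ids` vs. an older singular field), and all IDs and names here are illustrative, so validate the structure against the Working with Policies documentation:

```sh
# Save a minimal custom bundle, then make it the active policy.
cat > mybundle.json <<'EOF'
{
  "id": "mybundle-1",
  "version": "1_0",
  "name": "Org baseline",
  "policies": [
    {
      "id": "policy-1",
      "version": "1_0",
      "name": "Default checks",
      "rules": [
        {
          "id": "rule-1",
          "gate": "dockerfile",
          "trigger": "effective_user",
          "action": "STOP",
          "params": [
            { "name": "users", "value": "root" },
            { "name": "type", "value": "blacklist" }
          ]
        },
        {
          "id": "rule-2",
          "gate": "vulnerabilities",
          "trigger": "package",
          "action": "STOP",
          "params": [
            { "name": "package_type", "value": "all" },
            { "name": "severity_comparison", "value": ">=" },
            { "name": "severity", "value": "high" }
          ]
        }
      ]
    }
  ],
  "whitelists": [],
  "mappings": [
    {
      "id": "map-1",
      "name": "default",
      "registry": "*",
      "repository": "*",
      "image": { "type": "tag", "value": "*" },
      "policy_ids": ["policy-1"],
      "whitelist_ids": []
    }
  ],
  "whitelisted_images": [],
  "blacklisted_images": []
}
EOF

anchore-cli policy add mybundle.json
anchore-cli policy activate mybundle-1
```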
To learn more about working with Anchore policies, please see our Working with Policies documentation.
Policy Enforcement with Notifications
Building on policy enforcement, another best practice is enabling notifications. In a typical CI/CD process, a build failure prompts a notification to fix the build, whether the cause is a missing dependency or simply a typo. With Anchore, builds can be configured to fail when an analysis or policy evaluation fails, drawing attention to the issue.
Taking this a step further, Anchore can send notifications through webhooks when a CVE is updated or a policy evaluation status changes, so the appropriate personnel are alerted. By subscribing to tags and images, you receive notifications when images are updated, when CVEs are added or removed, and when an image's policy status changes, letting you take a proactive approach to security and compliance. Staying on top of these notifications means triage and remediation can begin promptly.
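As a minimal sketch, webhook delivery is configured in Anchore Engine's config.yaml; the endpoint URL below is a placeholder, and key names may differ slightly between versions:

```yaml
webhooks:
  webhook_user: null     # optional basic-auth credentials for the receiving service
  webhook_pass: null
  ssl_verify: false
  general:
    # <notification_type> and <userId> are filled in by Anchore when an event fires
    url: "http://notification-receiver.example.com:8080/anchore/<notification_type>/<userId>"
```

What you get notified about is driven by subscriptions; for example, `anchore-cli subscription activate policy_eval docker.io/myorg/myapp:latest` subscribes to policy status changes for that tag, and `tag_update` and `vuln_update` subscription types cover the other events described above.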
To learn more about using webhooks for notifications, please see our Webhook Configuration documentation.
For an example of how notifications can be integrated with Slack, please see our Using Anchore and Slack for Container Security Notifications blog.
Archiving Old Analysis Data
There may be times when older image analysis data is no longer needed in your working set but must be retained for security compliance reasons. Archiving an image preserves all of its analyses, policy evaluations, and tags, allowing you to delete the image from your working set. Manually moving images to the archive can be cumbersome and time-consuming, but automating the process trims your working set while still retaining the analysis data.
Archiving analysis data backs it up so it can be removed from the working set; it can always be restored if a policy changes, an organizational shift occurs, or you simply want it back. This keeps the live set of images aligned with what is current: over time, continuously running policy evaluations and vulnerability scans against old, potentially unimportant images becomes wasteful, and archiving them keeps the working set lighter. Anchore's archive service makes this automatic through rules added to the analysis archive. Rules can select images by analysis age (older than a specified number of days), by specific tags, or by the number of newer tag versions, so you can focus on the newer images your organization cares about while preserving the analysis data of older ones.
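As a sketch of the manual flow (replace the digest placeholder with a real value from `anchore-cli image list`):

```sh
# Placeholder: substitute a digest from your working set.
DIGEST="sha256:<digest-from-your-working-set>"

# Move one image's analysis into the archive, then remove it from the working set.
anchore-cli analysis-archive images add "${DIGEST}"
anchore-cli image del --force "${DIGEST}"

# The data is retained: list archived analyses, or restore one later.
anchore-cli analysis-archive images list
anchore-cli analysis-archive images restore "${DIGEST}"

# Automatic archiving is rule-driven; rule arguments vary by version,
# so consult 'anchore-cli analysis-archive rules --help' for yours.
```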
To learn more about archiving old analysis data, please see our Using the Analysis Archive documentation.
To learn more about working with archiving rules, please see our Working with Archive Rules documentation.
Leveraging External Object Storage to Offload Database Storage
By default, Anchore Engine uses a PostgreSQL database to store structured data for images, tags, policies, subscriptions, and image metadata. Other data in the system is less structured and tends to be larger, so there are benefits to supporting key-value access patterns for things like image manifests, analysis reports, and policy evaluations. For such data, Anchore has an internal object storage interface that defaults to the same PostgreSQL database but can be configured to use external object storage providers, supporting simpler capacity management and lower costs.
Offloading this bulk data eliminates the need to scale out PostgreSQL and speeds up its performance. As the database grows, queries against it and writes of new data slow down, in turn slowing Anchore's productivity. With an external object store, only the relevant image metadata stays in PostgreSQL, while the larger documents are stored externally and can be archived at lower cost.
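As an illustrative sketch, an S3-backed setup in Anchore Engine's config.yaml looks roughly like the excerpt below (placed under the catalog service; the credentials and bucket are placeholders, and exact key names and available drivers vary by version, so confirm against the Object Storage documentation):

```yaml
object_store:
  compression:
    enabled: true              # compress documents above the size threshold
    min_size_kbytes: 100
  storage_driver:
    name: s3                   # other drivers, such as swift, are also supported
    config:
      access_key: MY_ACCESS_KEY      # placeholder credentials
      secret_key: MY_SECRET_KEY
      url: "https://s3.amazonaws.com"
      region: us-east-1
      bucket: anchore-engine-objects # placeholder bucket name
```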
To learn more about using any of our supported external object storage drivers, please see our Object Storage documentation.
Conclusion
Leveraging the practices that have made our customers successful can help your organization achieve the same results with Anchore. As an open-source community, we value feedback and would love to hear about the best practices you have developed.