5 DevSecOps Myths to Dispel in 2021

DevSecOps seems to attract its share of myths. As we go into 2021, it’s time that we as an industry work to dispel those myths for prospective customers, existing customers, and internal stakeholders across our organizations.

Here are some common DevSecOps myths we can all work on dispelling in 2021:

1. Organizations lose control when they move to DevSecOps.

Software development has a legacy of long development timelines in both business and the public sector. There are long quality assurance cycles with a final assessment by a security team at the end of the process. A move to DevSecOps may seem like a loss of control to project managers, developers, QA, and security teams who are used to working on development projects following traditional waterfall software development methodologies.

Dispelling the myth that your organization will lose control once you move to DevSecOps takes a multi-faceted approach. For starters, internal training for your technology and business teams can be a powerful force in quelling this misconception. Then, when you tell the story of a DevSecOps pilot project, be sure to include facts about how the DevSecOps toolchain improved security and compliance coverage.

Another effective way to dispel stories about loss of control is to focus on the new reporting options for developers, security analysts, and business stakeholders that can now be made available because of the tools and processes you’ve put in place for DevSecOps.

2. You can buy DevSecOps.

Marketing departments, PR agencies, and vendors are all trying to ride the DevSecOps trend to increase sales. The message that you can buy DevSecOps from a vendor — after all, it’s just a tool or a suite of tools — is a myth that DevSecOps has inherited from DevOps. Sales and marketing reps perpetuate this myth on sales calls all the time.

Part of any DevSecOps pilot should be education and outreach to stakeholders and influencers inside and outside your IT groups. Your non-developers are still going to feel some cultural changes that DevSecOps adoption brings to organizations.

3. DevSecOps is about Speed and Speed Only.

There’s an ongoing myth that DevSecOps is about speed and speed only. Improving software delivery velocity is but one aspect. Automation helps speed deployments while improving software quality and compliance.

4. DevSecOps requires an elite senior-level Development Team.

There’s a mistaken sentiment out there that DevSecOps is only for an elite team of senior-level developers working as a tight group with specialized training, certifications, and tools. There’s no secret society of DevSecOps, either.

You shoot down this myth by keeping lines of communication open between your DevSecOps delivery teams and the rest of your organization. Provide a DevSecOps overview to your business stakeholders to teach them the benefits of DevSecOps in business terms they can understand. Ask what support your business and technology teams need to communicate with each other more effectively, because one of the tenets of DevSecOps is transparency, after all.

5. DevSecOps isn’t for Remote Teams.

A program manager once told me that remote teams couldn’t do DevOps. Well, COVID-19 has proven him wrong. Enough said. The same myth follows DevSecOps around as well.

Let’s say you have a team that’s finding success with DevSecOps during the pandemic. You still need to capture and communicate the success stories and the lessons learned from working on DevSecOps as a remote team. At some point (maybe), your organization will return to everyday life back in the office. Anecdotes of DevSecOps success during COVID-19 will not be enough for some critics. Take the extra steps to capture data, metrics, and positive feedback from your internal and external customers.

Final Thoughts 

DevSecOps is another technology change that employees have to track. Some employees will embrace the changes with passion. Others will see DevSecOps as a disruption to their daily routines. DevSecOps myths take root in between these groups. Start off 2021 with a campaign to improve your communication and education about DevSecOps to dispel such myths.

2021 DevSecOps Predictions: A Year of Growth and “Shift-Left”

As a company, Anchore has been tracking the growth of DevSecOps in the market and with our commercial and public sector customers during the past year. DevSecOps kept progressing despite everything that was going on with the pandemic.

Our team recently got together and made some predictions about how DevSecOps will fare in 2021:

Shift Left Grows from Objective to Best Practice

“Shift-left will become more of a practice than an objective. In 2021, I predict that more dev teams will embrace shift-left concepts in a more pragmatic way,” predicts Dan Nurmi, CTO of Anchore. “While early on, much of the messaging around shift-left security was taken as ‘moving’ responsibility from the so-called ‘right’ (production and run-time, with responsibilities on operators) to the ‘left’ (closer to the source code, with responsibilities on software developers), the more realistic perspective is to embrace shift-left as ‘spreading’ the responsibilities rather than wholesale ‘moving’ them.”

“In practical terms, I predict that as more quality security and compliance tools integrate into a DevSecOps automation design, the reality and value of being able to detect, report, and remediate security, compliance, and best-practice violations at *every* stage of the SDLC will become the norm,” he adds.

Shift Compliance Left Becomes Reality

Compliance is ready for shift-left treatment, Nurmi also predicted.  There is significant overlap between many aspects of an organization’s compliance requirements and the practices that exist for ensuring secure software development and delivery.  In the same way that shift-left has become a rallying cry for more efficiently handling secure software delivery, we predict that in 2021 the industry will begin looking at how a similar approach (if not identical) can apply to solving organizational compliance requirements, particularly as they pertain to the organization’s own internal use of software and software services.

DevSecOps grows outside of Compliance-based Industries

“Given the increasing number of digital assets and the average cost of a cyberattack, it is critical for organizations to constantly be looking for weaknesses in their attack surfaces. In 2021, we will see more organizations than ever adopt DevSecOps into their cybersecurity strategies, or risk having their integrity and reputations destroyed,” Blake Hearn, DevSecOps engineer for Anchore, predicts.

2020 has been a year of change for many aspects of people’s lives, especially technology. Up to this point, DevSecOps has mostly operated in industries with heavy security mandates: defense, healthcare, and finance, adds Michael Simmons, DevSecOps engineer at Anchore. “I see DevSecOps spreading to other sectors as cybercrime rises, given how important software has become in people’s daily lives in the pandemic world.”

“Additionally, California consumer data protection laws came into effect in 2020. Any businesses that operate in California need to abide by these rules,” Simmons added. “Because of this, I see DevSecOps spreading into more mainstream industries and technology companies as they move towards maintaining compliance.”

DevSecOps continues to Grow into a Data Play

“Opinions on the growth of artificial intelligence (AI) in DevOps and DevSecOps vary. I see the release of AWS DevOps Guru as a sign that DevOps and DevSecOps will grow into even more data-driven activities well into 2021 and beyond,” predicted Will Kelly, technical marketing manager for Anchore.

“With so many DevSecOps teams moving to remote work, it only makes sense to maximize the use of backend data to maximize the effectiveness and efficiency of those teams. AI and machine learning tools are where we’re going to see that happen for real.”

DevSecOps in 2021

2021 is bound to be an exciting year of growth and maturation for DevSecOps as enterprises continue to lean into DevSecOps tools and strategies to apply lessons learned during COVID-19.

2021 Container Predictions: The Year of Containers Walking Fast

Many of us will be glad when 2020 is over and one for the history books. On the bright side, though, it has been an excellent year for container technologies. Recently, some Anchore employees made their predictions for the container market in 2021:

2021: The Year of Containers “Walking Fast”

“If we look at container adoption as a matter of crawl, walk, run, 2021 is looking like many in the industry will be walking fast,” predicts Dan Nurmi, CTO of Anchore. “We’ve seen many mid to large organizations choose containers to realize their ultimate objective of delivering fast, stable, highly-automated, secure SDLC processes. Up until now, the greatest success we’ve seen has been in smaller R&D and greenfield projects.  Moving forward, organizations will be building on the tools and techniques delivered by these successful projects to drive container adoption further into critical production application environments.” 

He adds that many of these container projects have shown real value without sacrificing design characteristics.

Containers drop their Bad Reputation

He predicts that in 2021, now that many successful container-based designs have been proven out, organizations and designers will begin seeing characteristics of containers as beneficial rather than as problems.  He explains that flexibility of software choice for developers, clear and trackable content, and quick updates and deployments are aspects containers can readily provide and can be leveraged rather than resisted, now that the technology exists to overcome early concerns such as stability, security, monitoring, and provenance tracking.

Containers aren’t the enemy, advises Nurmi.  “Whenever there is an innovation/evolution in the developer infrastructure space, there is an immediate and legitimate outpouring of concerns about the challenges and problems that appear when shifting to something new.  Container technology supports a compelling enough set of values to make the change worthwhile. However, with the availability of new and ever-improved tooling, we’re now seeing organizations overcoming many of these initial concerns, ranging from software provenance to security and monitoring.”

Growing Container usage Demands Better Tooling

More and more technology corners that haven’t adopted containerization (or have, but not entirely) will continue down the path towards broader usage, predicts Alfredo Deza, a senior software engineer at Anchore. He adds, “With that usage, the demand for better tooling will follow. The need for integrations everywhere and anywhere for containers will continue, and more tools will default to a containerized installation only.

“In areas like Machine Learning, containerization is becoming more prominent, and more cloud support dedicated to containers and machine learning will follow up,” Deza further predicts.

Multi-cloud goes Mainstream

Multi-cloud will become mainstream in 2021, according to Paul Novarese, a senior sales engineer at Anchore. Unlike multi-region techniques that are mostly availability tactics, multi-cloud is a strategic way to avoid vendor lock-in.

Serverless Container Platform Adoption Expands

Infrastructure management becomes more of a hindrance, and the adoption of serverless container platforms (such as Fargate) will expand, Novarese predicts.  In a serverless universe, security solutions that rely on sidecar containers, agents, or kernel modules become obsolete – deep image awareness and continuous compliance become more and more important.

Rise of Docker Build Alternatives

There are already a lot of options for building containers without Docker. In 2021, these options will continue to gain momentum, and alternatives to Dockerfile may be just as popular as docker build, according to Adam Hevenor, principal product manager at Anchore. “I am keeping my eye on Cloud Native Buildpacks, which was recently brought into the Incubation stage by the CNCF.” 

Rise of the Rest of the Registries

With changes to the Docker Hub usage caps, we can expect to see more and more projects use alternative registries, predicts Hevenor. It remains to be seen whether open source projects will move out of Docker Hub. Still, offerings from GitHub, Amazon Web Services, and Google are going to become increasingly common places to keep your container images.

Beyond Kubernetes (towards Serverless)

While Kubernetes has captured most enterprises’ mindshare, managing and upgrading Kubernetes is still a big challenge for operators, stated Hevenor. He predicts that, with the announcement of Lambda’s support for containers and the growth of Google Cloud Run and Azure Functions, you can expect to see more and more enterprises consider serverless alternatives to Kubernetes.

Container Adoption on Microsoft Windows will Double in 2021

Mike Suding, a sales engineer for Anchore, predicts that container usage and adoption on Microsoft Windows will double compared to 2020, thanks to the law of small numbers and Microsoft’s history of excelling when its executive team gets behind a product or service.

Containers in 2021

The Anchore team has high expectations for container acceptance and adoption in the market next year, even as enterprises begin post-COVID-19 recovery and many digital transformation projects reach completion.

Securing the DevSecOps Pipeline

We live in an unprecedented era of remote work due to COVID-19.  Now is the time to review the security of your DevSecOps pipeline to ensure that the tools and workflows powering your software development are secure from attack.

Here are some tips to consider as you evaluate your approach to integrating security at each stage of the development lifecycle.

Implement Threat Modeling

Threat modeling not only helps security teams define security requirements and assess underlying risks associated with new and existing applications; it fosters ongoing communication between security and development teams. Integrating threat modeling tools in the development lifecycle promotes collaboration between each team on the system architecture and provides a consistent method for tracking key information. Microsoft’s Threat Modeling Tool and OWASP’s Threat Dragon are popular freely available tools used in DevSecOps pipelines to conduct threat modeling.

Utilize IDE Extensions

Utilizing IDE extensions to identify vulnerabilities and security flaws as developers are writing code is an easy way to catch security issues early on. It also serves as a way to educate developers on good coding practices. 

Run Peer Code Reviews

Implementing peer code reviews is another method to ensure developers are using secure coding practices. Code review practices can improve the quality of your organization’s code base, as they foster collaboration between reviewers and code authors, facilitating knowledge sharing, consistency, and legibility of code.

Implement Pre-Commit Hooks

Exposing secrets such as application programming interface (API) keys, database credentials, service tokens, and private keys in source code repositories occurs more frequently than you might think and can be costly for organizations. In 2019, security researchers discovered over 200,000 unique secrets on GitHub. Integrating pre-commit tools in code repositories can prevent your secrets from being pushed inadvertently.
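As a minimal sketch of the idea, a pre-commit hook can grep the staged files for common secret formats. The patterns below are illustrative assumptions only; purpose-built tools such as git-secrets or detect-secrets ship far more complete pattern sets.

```shell
# Minimal sketch of a secret-scanning pre-commit check. The regexes below
# are illustrative only -- real pre-commit tools cover many more formats.
scan_staged_for_secrets() {
  # A few common secret shapes: AWS access key IDs and PEM private keys.
  patterns='AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----'
  # List files staged for this commit and report any that match.
  git diff --cached --name-only --diff-filter=ACM \
    | xargs -r grep -lE "$patterns" 2>/dev/null
}

# In .git/hooks/pre-commit, block the commit when anything is flagged:
#   if [ -n "$(scan_staged_for_secrets)" ]; then
#     echo "Possible secret staged for commit -- aborting." >&2
#     exit 1
#   fi
```

Because the hook only sees staged content, it catches the mistake before it ever reaches the repository history, where removing a secret is far more painful.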

Bolster your DevSecOps workflow with Automated Security Scanning and Testing 

As development teams focus on ways to push faster release cadences, automated security scanning and testing is critical to identify vulnerabilities and other issues early in the development lifecycle. 

For containerized applications, container security scanning tools evaluate container applications and their underlying file system for vulnerabilities, secrets, exposed ports, elevated privileges, and other misconfigurations that may be introduced either from public base images or developer mistakes. 

With the proliferation of open source software in recent years, modern applications often consist largely of third-party dependencies. There are advantages to utilizing OSS; however, if not carefully inspected, it can introduce vulnerabilities and other issues. Dependency checking tools can analyze the dependencies in your code to identify issues such as vulnerabilities and the use of non-compliant licenses.

Static application security testing (SAST) should be integrated into the pipeline to automatically scan every code change as it is committed. Initiating workflows from scan results can facilitate immediate feedback leading to quicker remediation. Dynamic application security testing (DAST) is a good way to evaluate your running applications for vulnerabilities that may be missed by SAST tools. 

Build Secure Immutable Infrastructure in the Cloud

The adoption of DevOps and cloud-hosted services facilitated the practice of Infrastructure-as-Code (IaC), in which enterprise services can be architected, committed as code, and deployed in an automated fashion. While this has allowed IT teams to quickly deploy enterprise applications and services, it has introduced challenges for security teams trying to identify issues in IaC before those applications and services reach production. Static security scanning tools can analyze infrastructure code such as Terraform and CloudFormation templates for misconfigurations and other security issues early in the development lifecycle and provide feedback in an automated fashion.
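As a toy illustration of what IaC scanning does (purpose-built scanners such as Checkov or tfsec evaluate hundreds of policies; the single pattern below is an assumption for demonstration), even a simple grep over Terraform files can flag one well-known misconfiguration, a CIDR block open to the entire internet:

```shell
# Toy IaC check: flag Terraform files that open a resource to 0.0.0.0/0.
# Real IaC scanners (e.g., Checkov, tfsec) do far deeper, structured
# analysis; this single pattern is purely illustrative.
scan_terraform_open_ingress() {
  grep -rn --include='*.tf' -E '0\.0\.0\.0/0' "$1"
}

# Typical CI usage: fail the pipeline when anything is flagged.
#   if scan_terraform_open_ingress infra/ ; then
#     echo "World-open CIDR found in Terraform -- failing build." >&2
#     exit 1
#   fi
```

The point is where the check runs, not how clever it is: catching the misconfiguration at commit or build time is far cheaper than finding it in production.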

Implementing practices such as configuration management and baseline configurations can help facilitate immutable infrastructure that is deployed in a consistent manner based on a defined set of requirements, and can help continuously monitor infrastructure for inadvertent or unauthorized changes.

Utilize Secrets Management

As we discussed earlier in this post, exposed secrets can be an organizational nightmare. However, ensuring that sensitive information is protected is no small task either. This is where secrets management comes into play. Every organization should have a set of tools and processes to protect passwords, API keys, SSH keys, and other secrets. Besides providing a secure method for storing secrets, secrets management can also facilitate other best practices such as auditing, role-based access control, and lifecycle management.
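At the application level, the practical rule behind secrets management is that secrets are injected at runtime, never hardcoded. A minimal sketch of that rule follows; the variable name and the /run/secrets path are illustrative assumptions, echoing how Docker and Kubernetes commonly mount secrets, and a real deployment would source them from a dedicated secrets manager.

```shell
# Minimal sketch: resolve a secret from the environment or from a mounted
# secrets file, never from source code. The /run/secrets path mirrors how
# Docker/Kubernetes typically mount secrets; all names are illustrative.
get_secret() {
  name="$1"
  # Prefer a value injected into the environment by the orchestrator or CI.
  eval "val=\${$name:-}"
  if [ -n "$val" ]; then
    printf '%s' "$val"
    return 0
  fi
  # Fall back to a mounted secrets file.
  if [ -r "/run/secrets/$name" ]; then
    cat "/run/secrets/$name"
    return 0
  fi
  echo "secret '$name' not available" >&2
  return 1
}

# Usage sketch:
#   DB_PASSWORD="$(get_secret DB_PASSWORD)" || exit 1
```

A lookup like this keeps the secret out of the repository entirely, which also makes the pre-commit scanning described earlier far less likely to fire.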

Final Thoughts

Communication and collaboration among your team members should be part of every tip in this post. Your new or renewed focus on DevSecOps pipeline security should also become part of any internal processes that you have in place to govern tool security and maintenance.

DevOps to DevSecOps Cultural Transformation: The Next Step

Part of any DevOps to DevSecOps transformation is cultural transformation. While you’ve probably taken steps to strengthen your development and operations cultures to embrace the concepts and tools that power DevOps, there’s going to be more work to do to transform your burgeoning corporate DevOps culture to embrace DevSecOps fully.

DevSecOps is a growing movement in the commercial and public sectors to incorporate security into the software delivery process further.  Monitoring and analytics in the continuous integration/continuous delivery (CI/CD) pipeline expose software risks and vulnerabilities to DevSecOps teams for their follow-up actions and remediation.

Here are some next steps to grow your DevOps culture to DevSecOps:

1. Position DevSecOps as the Next Step

DevOps is a journey for many in the IT industry. It takes time and investments in staffing, tools, processes, and security to move from a traditional waterfall-driven software development life cycle (SDLC) to DevOps. There are two scenarios for organizations who want to move to DevSecOps:

  • Moving straight from a waterfall SDLC, skipping traditional DevOps, and moving right to DevSecOps
  • Moving from DevOps to DevSecOps through upgrading the CI/CD toolchain with a range of security automation tools, shifting security left, and bringing security team members into the development cycle

Either DevSecOps adoption scenario needs outreach and training support to communicate expectations, next steps, and changes to your developers, sysadmins, and security team. Work closely with your teams during your move to DevSecOps to answer their questions and take their feedback on progress and changes, while giving the transformation project a chance to pivot based on lessons learned along the way.

2. Move from Gated Processes to Shared Responsibility

DevOps depends on gates between each stage. Managers, stakeholders, and even entire development organizations can justify these gates because they provide a sense of security for troubleshooting, halting delivery, or stakeholder inquiries into the project. 

DevSecOps substitutes mutual accountability for those gates. Mutual accountability comes about through process changes and improving collaboration between your development, security, and operations teams through cross-functional teams supported by the proper technology tools and executive sponsorship.

3. Communicate about Security outside your IT Department

Such a new and enduring focus on security during the application delivery life cycle means you have to keep communications and outreach channels open with your stakeholders and user community. You need to create strong internal communications about how DevSecOps is changing how your teams deliver software and the benefits they can expect from this transformation.

You need to extend your security education to other departments, such as your sales and marketing teams. For example, moving to a DevSecOps model gives your marketing team a reason to create security-focused messaging and collateral that your sales team can use to reach prospective and existing customers who are security-conscious.

4. Make Security no longer “Us vs. Them”

Gone are the days when the cybersecurity team was the “Team of No” and security testing took place right before product launch. Today, consumers and enterprise customers want rapid updates and app stores. It’s time to dismantle the vestiges of “us vs. them” and make security a priority in your application development from project kickoff. Do everything you can, process- and tool-wise, to move away from the stress of incident- and issue-driven security responses that lead to fixing security issues at the end of your development life cycle.

Building collaboration between your DevOps and security teams starts with:

  • Building security into each stage of your CI/CD workflow
  • Integrating mandatory security checks into your code reviews
  • Integrating automated container security scanning into your container supply chain
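The first and third points above can be wired directly into a CI pipeline. The following Jenkinsfile fragment is a sketch, not a drop-in: the image name and build variable are placeholders, and it presumes an Anchore Enterprise (or Engine) service is reachable from the build agent with `anchore-cli` configured to talk to it.

```groovy
// Sketch of a Jenkins declarative pipeline stage that gates a build on a
// container policy evaluation. Image names and endpoints are placeholders.
stage('Container Security Scan') {
    steps {
        sh '''
            # Submit the freshly built image (with its Dockerfile) for analysis
            anchore-cli image add --dockerfile ./Dockerfile myorg/myapp:${BUILD_NUMBER}
            anchore-cli image wait myorg/myapp:${BUILD_NUMBER}

            # Fail the stage (and the build) if the policy evaluation fails
            anchore-cli evaluate check myorg/myapp:${BUILD_NUMBER} --detail
        '''
    }
}
```

Because `anchore-cli evaluate check` exits non-zero on a failed policy evaluation, the security check becomes a mandatory, automated gate rather than a manual review step.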

Beyond these incremental steps to build collaboration between your teams, it helps managers and team leads to set the example for collaboration. Organizational culture and internal politics can breed rivalries that can interfere with collaboration, if not the entire DevOps cycle.

5. Target Developers’ Baggage

Developers bring the best practices and bad habits of every previous employer and past contract with them. Plenty of developers can sort this baggage out on their own, but some find it challenging. DevOps and DevSecOps definitions and implementations also vary from shop to shop. Not to mention, COVID-19 is raising stress levels at home and work, causing work to slip.

Some common ways to target developer baggage include:

  • Focusing on developer experience (DX) throughout your development tool and CI/CD toolchain selection
  • Communicating your processes in terms of frameworks that capture approved tools, processes, and expectations for your developers, QA, and system administrators during employee onboarding

Final Thoughts

Culture can be the most essential but most often misunderstood part of a DevOps transformation. I’m fond of the old saying that goes, “you can’t buy DevOps.” The same goes for DevSecOps. The security and compliance implications of DevSecOps mean you need to go even further with your security outreach and communications to help push cultural transformation forward.

Package Blocklists Are Not Foolproof

As organizations progress in their software container adoption journeys, they realize that they need image scanning beyond simple vulnerability checks.  As security teams develop more sophisticated image policies, many implement package blocklists to keep unnecessary code such as curl and sshd out of their images. Curl can be a handy tool in development and debugging, but attackers can also use it to download malicious code into an otherwise trusted container.  There are two primary scenarios security teams want to protect against:

  • An attacker compromises a container that has curl in it and then uses curl to bring compromised code into the environment
  • A developer uses curl to download unapproved code, configurations, or binaries from unvetted sources (e.g. random GitHub repositories) during the build process (this could be malicious or inadvertent)

For the first scenario, a simple blocklisting of the curl package will cover most cases.  If we can produce an image that we know curl is not installed on, we’ve effectively mitigated an entire class of potential attacks.

Note: The policy rules, Dockerfiles, and other files used in this article are available from GitHub.

Blocklisting curl

Blocklisting packages is a pretty straightforward process.  We just need a simple policy rule:

Anchore Enterprise policy rule:

Gate: packages
Trigger: blacklist
Parameter: name=curl
Action: stop
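In an Anchore policy bundle, that rule would look roughly like the following JSON fragment. The field names follow the Anchore Engine/Enterprise policy format; the `id` value is an arbitrary placeholder.

```json
{
  "id": "example-curl-blocklist",
  "gate": "packages",
  "trigger": "blacklist",
  "action": "STOP",
  "params": [
    { "name": "name", "value": "curl" }
  ]
}
```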

We’ll build an example image with this Dockerfile to test the rule:

# example dockerfile that will use curl to download source, anchore
# will stop this with a simple package blocklist on curl

FROM alpine:latest
WORKDIR /
RUN apk update && apk add --no-cache build-base curl

# download source and build
RUN curl -o - https://codeload.github.com/kevinboone/solunar_cmdline/zip/master | unzip -d / -
RUN cd /solunar_cmdline-master && make clean && make && cp solunar /bin/solunar

HEALTHCHECK --timeout=10s CMD /bin/date || exit 1
USER 65534:65534
CMD ["-c", "London"]
ENTRYPOINT ["/bin/solunar"]

OK, let’s build, push, and scan the image.

pvn@gyarados ~/curl_example# export ANCHORE_CLI_USER=admin
pvn@gyarados ~/curl_example# export ANCHORE_CLI_PASS=foobar
pvn@gyarados ~/curl_example# export ANCHORE_CLI_URL=http://anchore.example.com:8228/v1

pvn@gyarados ~/curl_example# docker build -t pvnovarese/curl_example:simple .
Sending build context to Docker daemon  112.1kB
Step 1/12 : FROM alpine:latest
 ---> a24bb4013296

[...]

Successfully built 799a36c3cb2d
Successfully tagged pvnovarese/curl_example:simple

pvn@gyarados ~/curl_example# docker push pvnovarese/curl_example:simple
The push refers to repository [docker.io/pvnovarese/curl_example]

[...]

pvn@gyarados ~/curl_example# anchore-cli image add --dockerfile ./Dockerfile pvnovarese/curl_example:simple

[...]

Simple curl Example

As expected, the package blocklist caught the installed curl package and the image fails the policy evaluation.

Multi-stage Builds Add Complexity

We’ve increased our protection against developers using curl to bring unknown code from random places on the internet into our environment.

But what if the developer uses a multi-stage build?  If you’re not familiar with multi-stage builds, they are frequently used to create more compact Docker images.  The most common pattern is that a first stage builds the software, then the binaries and other artifacts produced are copied into the final stage, leaving behind the source code, build tools, and other bits that are vital to building the code but aren’t needed to run it.  The build-stage container is then discarded, and only the final, lean container with the bare necessities moves on.

Since those intermediate-stage containers are ephemeral, Anchore Enterprise doesn’t have access to them and can only scan the final image.  Because of this, many things that happen during the actual build process can avoid detection.  A developer can install curl in the intermediate build-stage container, pull down unvetted code, and then copy a compromised binary to a final stage image without installing curl in that final image.

### example multistage build - in this case, a simple package blocklist
### will NOT stop this, since curl only is installed in the intermediate
### "builder" image and doesn't exist in the final image.  To stop this,
### we can look for curl in the RUN commands in the Dockerfile.

### Stage 1
FROM alpine:latest as builder
WORKDIR /solunar_cmdline-master
RUN apk update && apk add --no-cache build-base curl

### Download and unpack the source
RUN curl -o - https://codeload.github.com/kevinboone/solunar_cmdline/zip/master | unzip -d / -
RUN make clean && make

### Stage 2
FROM alpine:latest

HEALTHCHECK NONE
WORKDIR /usr/local/bin
COPY --from=builder /solunar_cmdline-master/solunar /usr/local/bin/solunar


# if you want to use a particular localtime,
# uncomment this and set zoneinfo appropriately
# RUN apk add --no-cache tzdata bash && cp /usr/share/zoneinfo/America/Chicago /etc/localtime


USER 65534:65534
CMD ["-c", "London"]
ENTRYPOINT ["/usr/local/bin/solunar"]

The final image output from this example is a completely standard alpine:latest image with a single binary copied in. Our simple package blocklist won’t catch this: the multistage image passes the policy evaluation even though curl was installed and used as part of the build process. Only the final image is checked against the package blocklist.


To increase our protection, we should check for RUN instructions in the Dockerfile that call curl in addition to our package blocklist rule.

Gate: dockerfile
Trigger: instruction
Parameter: instruction=RUN check=like value=.*curl.*
Action: stop

Gate: dockerfile
Trigger: no dockerfile provided
Action: stop
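Expressed in Anchore policy bundle JSON, the two dockerfile-gate rules would look roughly like this. Field names follow the Anchore Engine/Enterprise policy format, the `id` values are placeholders, and the regex assumes the intent is to match any RUN line that mentions curl.

```json
[
  {
    "id": "example-dockerfile-run-curl",
    "gate": "dockerfile",
    "trigger": "instruction",
    "action": "STOP",
    "params": [
      { "name": "instruction", "value": "RUN" },
      { "name": "check", "value": "like" },
      { "name": "value", "value": ".*curl.*" }
    ]
  },
  {
    "id": "example-no-dockerfile",
    "gate": "dockerfile",
    "trigger": "no_dockerfile_provided",
    "action": "STOP"
  }
]
```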

We’ve added two rules.  The first will fail the image on any RUN instruction in the Dockerfile that includes “curl”, and the second will fail the image if no Dockerfile is submitted with the image.  We then re-evaluate with this new policy bundle (note that we don’t need to re-scan, we’re just applying the new policy to the same image) and get the desired failure:

Note: No changes were made to the Dockerfile from the previous run, and we did not rebuild the image – we only changed the policy rules.

This time, our policy evaluation caught both the installation of curl into the intermediate container and the actual execution of curl to download the unauthorized code.  Either of these alone is enough to cause the policy evaluation to fail as desired.

Also, in this case, our package blocklist was not triggered, since the final image still doesn’t contain the curl package.

Conclusion

Package blocklists can be quite useful, but in many cases, whether particular packages are present in the final image matters less than how that image was constructed and used, so looking only at the final image isn’t enough.  Anchore Enterprise’s deep image introspection includes analysis of the Dockerfile used to create the image, which allows the policy engine to enforce more best practices than simple image inspection alone.

Policies, Dockerfiles, and Jenkinsfiles used for this article can be found in my GitHub.

The Journey from DevOps to DevSecOps

Digital transformation, improved security, and compliance are the key drivers pushing corporations and government agencies to adopt DevSecOps. Some organizations will experience a journey from DevOps to DevSecOps, depending on their DevOps maturity. 

Defining DevOps and DevSecOps for your Organization

There’s a growing list of definitions for DevOps and DevSecOps out there. Some come from vendor marketing, and a few of the definitions come from new perspectives about bringing together development, security, and operations teams.

For the purposes of this blog post, DevOps combines cultural philosophies, practices, and tools that increase an organization’s ability to deliver software and services at high velocity. DevOps enables teams to develop and improve products faster than organizations using traditional software development and infrastructure management processes.

DevSecOps — by definition — brings cybersecurity and security operations tools and strategies such as container vulnerability scanning automation into your organization’s existing or new DevOps toolchain.

In the next few years, it’s a safe bet that the definition of DevSecOps will subsume the DevOps definition as corporations and public sector agencies continue to increase their security focus across the software delivery life cycle.

Moving from DevOps to DevSecOps: Step by Step

Moving from DevOps to DevSecOps is a natural next step in your DevOps journey. Your development and operations teams shift another step left and bring their colleagues in security along for the trip.

  1. Start with a Small Proof of Concept Project

Starting with a small proof-of-concept project is always the best way to help your teams prepare for any technology or process changes. Choosing a small pilot project for DevSecOps lets you test adjustments and additions to your tools and processes. Your small pilot project could take one of the following forms:

  • A solution architect or small project team building out your current DevOps pipeline (or creating a new one) with additional security tools such as Anchore Toolbox or, even better, Anchore Enterprise at each stage to support automated scanning of your containers. This pilot project is ideal if you must show additional security features to your management and project stakeholders, such as your customers.
  • A small project team running an application development project through your sparkling new DevSecOps toolchain. An example of such a project is an update to a small, non-business-critical application that your organization uses internally.

Pilot projects such as these require little startup investment if you use open source tools. However, if your organization builds and maintains applications that must meet compliance requirements, you’ll probably need to consider commercial security tools that provide the reporting capabilities your auditors require.

  2. Go Agile to Deliver Code in Iterative Releases

Delivering your software code using agile methodologies in small scope iterative releases helps your DevSecOps teams check for code and container vulnerabilities through quality assurance gates embedded across your development life cycle.

  3. Implement Automated Testing across your Toolchain

Automation is integral across a DevSecOps delivery process, especially with testing. Test automation shouldn’t replace human testers; rather, running automated tests and dependency checks enables your testers to focus on the most critical issues standing between you and compliance.
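As a hedged illustration of what an automated gate can look like, here is a minimal shell sketch that inspects a scanner’s JSON report and flags High or Critical findings. The report structure loosely mirrors a scanner’s JSON output (Grype’s `-o json` uses a similar matches/severity shape), but the sample data below is fabricated for the demo:

```shell
# Fabricated sample report standing in for real scanner JSON output
cat > report.json <<'EOF'
{"matches":[
  {"vulnerability":{"id":"CVE-2020-0001","severity":"Medium"}},
  {"vulnerability":{"id":"CVE-2020-0002","severity":"High"}}
]}
EOF

# Count report lines containing High or Critical findings
high=$(grep -Ec '"severity": ?"(High|Critical)"' report.json)
echo "high-or-critical findings: $high"

# A real CI step would `exit 1` here instead, so the pipeline fails the gate
if [ "$high" -gt 0 ]; then
  echo "gate: FAIL" >&2
fi
```

In a real pipeline you would generate report.json with your scanner of choice and replace the final echo with a non-zero exit so the job fails.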

  4. Invest in Upskilling your Developers and Testers

Part of shifting security left with DevSecOps is training your developers and testers in security principles. These days that means online training from a vendor or other training providers. It also means letting your developers attend industry conferences. With national and regional technology and security conferences online, this is easy to do.

Another way to invest in upskilling is to support your developers pursuing DevSecOps, DevOps, and cloud-focused certifications. For example, there’s a Certified DevSecOps Professional Certification from Practical DevSecOps and a DevSecOps Foundation Certification from the DevOps Institute.

  5. Involve your Developers in Security Discussions

Just as you bring your development and operations teams out of their silos, you need to get your developers into the security discussion. A move to DevSecOps shifts security left, so it sits throughout your software development life cycle versus being the last step before product release.

Everybody on the project team is accountable for security in a DevSecOps environment. Your organization can only reach this accountability level when you empower your teams with expertise and resources to respond to and mitigate security threats within the toolchain and before the threats hit production.

  6. Treat Compliance like another Team Member

Failing compliance audits means an expensive, time-consuming, and sometimes litigious process to return systems to compliance. DevSecOps gives you the methodologies, framework, and tools to help your organization’s systems achieve continuous compliance at every stage of your delivery life cycle.

  7. Adopt Regular Security Practices across your Teams

DevSecOps practices mean using regular scans, code reviews, and penetration tests to ensure your applications and cloud infrastructure are secure against insider and external threats.
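To make “regular” concrete, a scheduled scan can be as simple as a cron entry. The line below is a hypothetical sketch (it assumes Grype is installed on the host, and the image name and log path are placeholders):

```
# Hypothetical crontab entry: scan a production image at 02:00 every night
0 2 * * * grype registry.example.com/app:latest -o json > /var/log/scans/app-scan.json 2>&1
```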

Final Thoughts

Taking the journey from DevOps to DevSecOps is the ultimate story of shifting security left for commercial and public sector enterprises. Some organizations will seek DevSecOps first, leaping past a traditional waterfall software development life cycle. Others will mature and strengthen their DevOps processes to become more security-focused during their delivery life cycle.

Using Grype to Identify GitHub Action Vulnerabilities

About a month ago, GitHub announced the presence of a moderate security vulnerability in the GitHub Actions runner that can allow environment variable and path injection in workflows that log untrusted data to STDOUT. You can read the disclosure here for more details. Since we build and maintain a GitHub Action of our own at Anchore, this particular announcement was one we paid close attention to. While I’m sure many folks have already updated their GitHub Actions accordingly, I thought this would be a good opportunity to take a closer look at setting up a CI workflow as if I were developing my own GitHub Action, and to step through the options in Anchore for identifying this particular vulnerability.

To start with, I created an example repository in GitHub, demonstrating a very basic hello-world GitHub Action and workflow configuration. The configuration below scans the current directory of the project I am working on with the Anchore Container Scan Action. Under the hood, the tool scanning this directory is called Grype, an open-source project we built here at Anchore.

name: Scan current directory CI
on: [push]
jobs:
  anchore_job:
    runs-on: ubuntu-latest
    name: Anchore scan directory
    steps:
    - name: Checkout
      uses: actions/checkout@v2
    - name: Scan current project
      id: scan
      uses: anchore/scan-action@v2
      with:
        path: "./"
        fail-build: true
        acs-report-enable: true
    - name: upload Anchore scan SARIF report
      uses: github/codeql-action/upload-sarif@v1
      with:
        sarif_file: ${{ steps.scan.outputs.sarif }}

On push, I can navigate to the Actions tab and find the latest build. 

Build Output

The build output above shows a build failure due to vulnerabilities identified in the project of severity level medium or higher. To find out more information about these specific issues, I can jump over to the Security tab.

All CVEs open

Once here, we can click on the vulnerability linked to the disclosure discussed above. 

Open CVE

We can see the GHSA, and make the necessary updates to the @actions/core dependency we are using. While this is just a basic example, it paints a clear picture that adding security scans to CI workflows doesn’t have to be complicated. With the proper tools, it becomes quite simple to obtain actionable information about the software you’re building. 

If we wanted to take this a step further “left” in the software development lifecycle (SDLC), I could install Grype for Visual Studio Code, an extension for discovering project vulnerabilities while working locally in VS Code. 

Grype vscode

Here we can see that for the same hello-world GitHub Action, I get visibility into vulnerabilities as I work locally on my workstation and can resolve issues before pushing to my source code repository. In just a few minutes, I’ve also added two security checkpoints to the development lifecycle, spreading out my checks and giving myself more places to catch issues I might introduce.

Just for good measure, once I update my dependencies and push to GitHub, my CI job is now successfully passing the Anchore scan, and the security issues that were opened have now been closed and resolved. 

All CVEs closed

CVE closed

While this was just a simple demonstration of what is possible, at Anchore we think of these types of checks as basic good hygiene. The more spots in the development workflow where we can give developers security information about the code they’re writing, the better positioned they’ll be to promote shared security principles across their organization and build high-quality, secure software.

Free Download: Inside the Anchore Technology Suite: Open Source to Enterprise

Open source is foundational to much of what we do here at Anchore. It’s at the core of Anchore Enterprise, our complete container security workflow solution for enterprise DevSecOps. Anchore Toolbox is our collection of lightweight, single-purpose open source tools for the analysis and scanning of software projects.

Each tool has its place in the DevSecOps journey, depending on your organization’s requirements and eventual goals.

Our free guide explains the following:

  • The role of containers in DevSecOps transformation
  • Features of Anchore Enterprise and Anchore Toolbox
  • Ideal use cases for Anchore Enterprise
  • Ideal use cases for Anchore Toolbox
  • Choosing the right Anchore tool for your requirements

To learn more about how Anchore Toolbox and Anchore Enterprise can fit into your DevSecOps journey, please download our free guide.

Configuring Anchore Enterprise on AWS Elastic Kubernetes Services (EKS)

In previous posts, we’ve demonstrated how to create a Kubernetes cluster on AWS Elastic Kubernetes Service (EKS) and how to deploy Anchore Enterprise in your EKS cluster. The focus of this post is to demonstrate how to configure a more production-like deployment of Anchore with integrations such as SSL support, RDS database backend and S3 archival.

Prerequisites:

  • A running EKS cluster (see our previous post on creating a Kubernetes cluster on EKS)
  • An Anchore Enterprise deployment in the cluster (see our previous post on deploying Anchore Enterprise in EKS)
  • Helm and kubectl installed and configured to communicate with your cluster

Configuring the Ingress/Application Load Balancer

Anchore’s Helm Chart provides a deployment template for configuring an ingress resource for your Kubernetes deployment. EKS supports the use of an AWS Elastic Load Balancing Application Load Balancer (ALB) ingress controller, an NGINX ingress controller or a combination of both.

For the purposes of this demonstration, we will focus on deploying the ALB ingress controller using the Helm chart.

To enable ingress deployment in your EKS cluster, simply add the following ingress configuration to your anchore_values.yaml:

Note: If you haven’t already, make sure to create the necessary RBAC roles, role bindings and service deployment required by the AWS ALB Ingress controller. See ALB Ingress Controller for more details.

ingress:
  enabled: true
  labels: {}
  apiPath: /v1/*
  uiPath: /*
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing

Specify Custom Security Groups/Subnets

By default, the ingress controller will deploy a public-facing application load balancer and create a new security group allowing access to your deployment from anywhere over the internet. To prevent this, we can update the ingress annotations to include additional information such as a custom security group resource. This will enable you to use an existing security group within the cluster VPC with your defined set of rules to access the attached resources.

To specify a security group, simply add the following to your ingress annotations and update the value with your custom security group id:

alb.ingress.kubernetes.io/security-groups: "sg-012345abcdef"

We can also specify the subnets we want the load balancer to be associated with upon deployment. This may be useful if we want to attach our load balancer to the cluster’s public subnets and have it route traffic to nodes attached to the cluster’s private subnets.

To manually specify which subnets the load balancer should be associated with upon deployment, update your annotations with the following value:

alb.ingress.kubernetes.io/subnets: "subnet-1234567890abcde, subnet-0987654321edcba"

To test the configuration, apply the Helm chart:

helm install <deployment_name> anchore/anchore-engine -f anchore_values.yaml

Next, describe your ingress resource by running kubectl describe ingress.

You should see the DNS name of your load balancer next to the address field and under the ingress rules, a list of annotations including the specified security groups and subnets.

Note: If the load balancer did not deploy successfully, review the following AWS documentation to ensure the ingress controller is properly configured.

Configure SSL/TLS for the Ingress

You can also configure an HTTPS listener for your ingress to secure connections to your deployment.

First, create an SSL certificate using AWS Certificate Manager and specify a domain name to associate with your certificate. Note the ARN of your new certificate and save it for the next step.
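For reference, the certificate request can also be made from the AWS CLI. This is a sketch: the domain name is a placeholder, and the command assumes AWS CLI credentials with ACM permissions:

```shell
aws acm request-certificate \
  --domain-name anchore.example.com \
  --validation-method DNS
```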

Next, update the ingress annotations in your anchore_values.yaml with the following parameter and provide the certificate ARN as the value.

alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm::"

Additionally, we can configure the Enterprise UI to listen on HTTPS or a different port by adding the following annotations to the ingress with the desired port configuration. See the following example:

alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}, {"HTTP": 80}]'
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'

Next, install the deployment if this is a new deployment:

helm install <deployment_name> anchore/anchore-engine -f anchore_values.yaml

Or upgrade your existing deployment:

helm upgrade <deployment_name> anchore/anchore-engine -f anchore_values.yaml

To confirm the updates were applied, run kubectl describe ingress and verify your certificate ARN, as well as the updated port configurations, appear in your annotations.

Analyze Archive Storage Using AWS S3

AWS’s S3 Object Storage allows users to store and retrieve data from anywhere in the world. It can be particularly useful as an archive system. For more information on S3, please see the documentation from Amazon.

Both Anchore Engine and Anchore Enterprise can be configured to use S3 as an archiving solution. Some form of archiving is highly recommended for a production-ready environment. In order to set this up on your EKS, you must first ensure that your use case is in line with Anchore’s archiving rules. Anchore stores image analysis results in two locations. The first is the working set which is where an image is stored initially after its analysis is completed. In the working state, images are available for queries and policy evaluation. The second location is the archive set. Analysis data stored in this location is not actively ready for policy evaluation or queries but is less resource-intensive and information here can always be loaded into the working set for evaluation and queries. More information about Anchore and archiving can be found here.

To enable S3 archival, copy the following to the catalog section of your anchore_values.yaml:

anchoreCatalog:
  replicaCount: 1

  archive:
    compression:
      enabled: true
      min_size_kbytes: 100
    storage_driver:
      name: s3
      config:
        bucket: ""

        # A prefix for keys in the bucket if desired (optional)
        prefix: ""
        # Create the bucket if it doesn't already exist
        create_bucket: false
        # AWS region to connect to if 'url' not specified, if both are set, then 'url' has precedent
        region: us-west-2

By default, Anchore will attempt to access an existing bucket specified under the config > bucket value. If you already created a bucket, put its name in the bucket parameter. If you do not have an S3 bucket created, you can set create_bucket to true and allow the Helm chart to create the bucket for you. While S3 bucket names are globally unique, each bucket lives in a specific region, so set the region parameter to the region your EKS cluster resides in.

Note: Whether you specify an existing bucket resource or set create_bucket to true, the cluster nodes require permissions to perform the necessary API calls to the S3 service. There are two ways to configure authentication:

Specify AWS Access and Secret Keys

To specify the access and secret keys tied to a role with permissions to your bucket resource, update the storage driver configuration in your anchore_values.yaml with the following parameters and appropriate values:

        # For auth, provide access/secret keys or use 'iamauto', which will use an
        # instance profile or any credentials found in the normal AWS search paths/metadata service
        access_key: XXXX
        secret_key: YYYY

Use Permissions Attached to the Node Instance Profile

The second method for configuring access to the bucket is to leverage the instance profile of your cluster nodes. This eliminates the need to manage separate access and secret keys; permissions come from the IAM role already attached to the underlying instances. To configure the catalog service to leverage that role, update the storage driver configuration in your anchore_values.yaml with the following and ensure iamauto is set to true:

        # For auth, provide access/secret keys or use 'iamauto', which will use an
        # instance profile or any credentials found in the normal AWS search paths/metadata service
        iamauto: true

You must also ensure that the role associated with your cluster nodes has GetObject, PutObject and DeleteObject permissions to your S3 bucket (see a sample policy below).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::test"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::test/*"]
    }
  ]
}

Once all of these steps are completed, deploy the Helm chart by running:

helm install <deployment_name> anchore/anchore-engine -f anchore_values.yaml

Or the following, if upgrading an existing deployment:

helm upgrade <deployment_name> anchore/anchore-engine -f anchore_values.yaml

Note: If your cluster nodes reside in private subnets, they must have outbound connectivity in order to access your S3 bucket.

For cluster deployments where nodes are hosted in private subnets, a NAT gateway can be used to route traffic from your cluster nodes outbound through the public subnets. More information about creating and configuring NAT gateways can be found here.

Another option is to configure a VPC gateway allowing your nodes to access the S3 service without having to route traffic over the internet. More information regarding VPC endpoints and VPC gateways can be found here.

Using Amazon RDS as an External Database

By default, Anchore will deploy a database service within the cluster for persistent storage using a standard PostgreSQL Helm chart. For production deployments, it is recommended to use an external database service that provides more resiliency and supports features such as automated backups. For EKS deployments, we can offload Anchore’s database tier to PostgreSQL on Amazon RDS.

Note: Your RDS instance must be accessible to the nodes in your cluster in order for Anchore to access the database. To enable connectivity, the RDS instance should be deployed in the same VPC/subnets as your cluster and at least one of the security groups attached to your cluster nodes must allow connections to the database instance. For more information, read about configuring access to a database instance in a VPC.

To configure the use of an external database, update your anchore_values.yaml with the following section and ensure enabled is set to “false”.

postgresql:
  enabled: false

Under the postgres section, add the following parameters and update them with the appropriate values from your RDS instance.

  postgresUser:
  postgresPassword:
  postgresDatabase:
  externalEndpoint:

With the section configured, your database values should now look something like this:

postgresql:
  enabled: false
  postgresUser: anchoreengine
  postgresPassword: anchore-postgres,123
  postgresDatabase: postgres
  externalEndpoint: abcdef12345.jihgfedcba.us-east-1.rds.amazonaws.com

To bring up your deployment run:

helm install <deployment_name> anchore/anchore-engine -f anchore_values.yaml

Finally, run kubectl get pods to confirm the services are healthy and the local postgresql pod isn’t deployed in your cluster.
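It can also be worth verifying that the RDS endpoint is reachable from inside the cluster. A hypothetical one-off check (the endpoint, user, and password below are the example values from this post) might look like:

```shell
# Launch a throwaway pod and attempt a trivial query against the RDS instance
kubectl run psql-check --rm -it --restart=Never --image=postgres:12 \
  --env="PGPASSWORD=anchore-postgres,123" -- \
  psql -h abcdef12345.jihgfedcba.us-east-1.rds.amazonaws.com \
       -U anchoreengine -d postgres -c 'select 1;'
```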

Note: The above steps can also be applied to deploy the feeds postgresql database on Amazon RDS by updating the anchore-feeds-db section instead of the postgresql section of the chart.

Encrypting Database Connections Using SSL Certificates with Amazon RDS

Encrypting RDS connections is a best practice to ensure the security and integrity of your Anchore deployment that uses external database connections.

Enabling SSL on RDS

AWS provides the necessary certificates to enable SSL with your RDS deployment. Download rds-ca-2019-root.pem from here. To require SSL connections on an RDS PostgreSQL instance, the rds.force_ssl parameter needs to be set to 1 (on). Setting this to 1 causes the PostgreSQL instance to set its ssl parameter to 1 (on) and modify the database’s pg_hba.conf file to support SSL. See more information about RDS PostgreSQL SSL configuration.
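If you manage the instance’s parameter group from the AWS CLI, setting the flag might look like the sketch below. The parameter group name is hypothetical, the instance must actually use that parameter group, and the command assumes AWS CLI credentials with RDS permissions:

```shell
aws rds modify-db-parameter-group \
  --db-parameter-group-name anchore-postgres-params \
  --parameters "ParameterName=rds.force_ssl,ParameterValue=1,ApplyMethod=pending-reboot"
```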

Configuring Anchore to take advantage of SSL is done through the Helm chart. Under the anchoreGlobal section of the chart, enter the name of the certificate we downloaded from AWS in the previous section next to certStoreSecretName (see the example below).

anchoreGlobal:
   certStoreSecretName: rds-ca-2019-root.pem

Under the dbConfig section, set ssl to true and set sslRootCertName to the same value as certStoreSecretName. Make sure to update the postgresql and anchore-feeds-db sections to disable the local container deployment of these services and to specify the RDS database values (see the previous section on configuring RDS to work with Anchore for further details). If running Enterprise, the dbConfig section under anchoreEnterpriseFeeds should also be updated to include the cert name under sslRootCertName.

dbConfig:
    timeout: 120
    ssl: true
    sslMode: verify-full
    sslRootCertName: rds-ca-2019-root.pem
    connectionPoolSize: 30
    connectionPoolMaxOverflow: 100

Once these settings have been configured, run a Helm upgrade to apply the changes to your cluster.

Conclusion

The Anchore Helm chart provided on GitHub allows users to quickly get a deployment running on their cluster, but it is not necessarily a production-ready environment. The sections above showed how to configure the ingress/application load balancer, enable HTTPS, archive image analysis data to an AWS S3 bucket, and set up an external RDS instance with SSL connections required. All of these steps help ensure that your Anchore deployment is production-ready and prepared for anything you throw at it.