Cybersecurity Executive Order Brings FedRAMP Changes Aplenty

On May 12, 2021, President Biden’s Executive Order on Improving the Nation’s Cybersecurity finally hit the street. Among its provisions on the software bill of materials (SBOM), software supply chain security, and cybersecurity at large, there’s some good news about FedRAMP, and these developments are going to be a major step forward for government cloud security, compliance, and the government cloud community.

Here are some FedRAMP highlights from the executive order (EO):

Security Principles for Cloud Service Providers

Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) are increasingly providing secure platforms for the next generation of federal government applications. Case in point: federal government agencies spent $6.6 billion on cloud computing in fiscal 2020, up from $6.1 billion in fiscal 2019, according to a government spending analysis by Bloomberg Government, as reported by NextGov. Section 3, Modernizing Federal Government Cybersecurity, states:

The Secretary of Homeland Security acting through the Director of CISA, in consultation with the Administrator of General Services acting through the Federal Risk and Authorization Management Program (FedRAMP) within the General Services Administration, shall develop security principles governing Cloud Service Providers (CSPs) for incorporation into agency modernization efforts. 

We’ll have to wait and see how this might affect the container orchestration and security offerings of the major CSPs and the opportunities it may open for the system integrators (SIs) and the innovative SBIRs across the federal information technology ecosystem.

Federal Cloud Security Strategy in 90 Days

The EO also positions FedRAMP — part of the General Services Administration — on a task force charged with creating a federal cloud security strategy within 90 days and then providing guidance to federal agencies. Here’s the quote from the EO:

Within 90 days of the date of this order, the Director of OMB, in consultation with the Secretary of Homeland Security acting through the Director of CISA, and the Administrator of General Services acting through FedRAMP, shall develop a Federal cloud-security strategy and provide guidance to agencies accordingly. Such guidance shall seek to ensure that risks to the FCEB from using cloud-based services are broadly understood and effectively addressed, and that FCEB Agencies move closer to Zero Trust Architecture.

There have been a few runs at a federal-level cloud strategy over the past few years. Most recently, there’s Cloud Smart, which seeks to redefine cloud computing modernization and maturity while tackling security concerns such as continuous data security and FedRAMP. Cloud Smart replaces Cloud First, an earlier government-wide initiative to provide a cloud strategy for federal agencies.

Federal agencies and technology firms serving the government should pay attention to the development of this new cloud security strategy, as it’ll influence future cloud procurements. The ambitious 90-day goal to deliver this strategy leaves virtually no time for feedback from the government’s industry partners.

90 Days to a Cloud Technical Reference Architecture

With large government cloud initiatives such as the United States Air Force’s Platform One gaining mindshare, other parts of the Department of Defense (DoD) and civilian agencies are certain to follow suit with large-scale secure cloud initiatives. The EO also mandates the creation of a cloud security technical reference architecture:

Within 90 days of the date of this order, the Secretary of Homeland Security acting through the Director of CISA, in consultation with the Director of OMB and the Administrator of General Services acting through FedRAMP, shall develop and issue, for the FCEB, cloud-security technical reference architecture documentation that illustrates recommended approaches to cloud migration and data protection for agency data collection and reporting.

Just like the cloud security strategy, 90 days is an ambitious goal for a cloud security technical reference architecture. It’ll be interesting to see how much this architecture will draw upon the experience and lessons learned from Platform One, Cloud One, and other large-scale cloud initiatives across the DoD and civilian agencies.

FedRAMP Training, Outreach, and Collaboration

FedRAMP accreditation and compliance are no easy tasks. The EO mandates establishing a training program to give agencies the training and tools to manage FedRAMP requests. There’s no mention of training for the SI and government contractor community at this stage, but it’s almost a certainty that the mandated FedRAMP training will find its way out to that community.

The EO also calls for improving communications with CSPs, a duty that normally falls under the FedRAMP PMO. Considering the complexities of FedRAMP, improving communication should be an ongoing process, and the automation and standardization of communications that the EO touts could remove some of the human error that might occur when communicating a technical status.

Automation is also due to extend across the FedRAMP life cycle, including assessment, authorization, continuous monitoring, and compliance. This development can help make the much-heralded Continuous ATO a reality for more agencies. It also opens the door for more innovation as SIs seek out startup partners and SBIR contracts to bring innovative companies from outside the traditional government contractor community to satisfy those new automation requirements.

Learn how Anchore helps automate FedRAMP vulnerability scans. 

Final Thoughts

Cloud security concerns are universal across the commercial and public sectors. Biden’s EO strikes all the right chords at first glance because it elevates the SBOM as a tool for managing cybersecurity vulnerabilities. It also gives FedRAMP some much-needed support at a time when federal agencies continue to face new and emerging threats.

Want to learn more about containers and FedRAMP? Check out our 7 Must-Dos To Expedite FedRAMP for Containers webinar now available on-demand!

GitOps vs. DevOps: How GitOps plays in a DevOps and DevSecOps World

Operations models are coming at us fast and furious these days. DevOps and DevSecOps adoption and maturity have only increased during the pandemic. It’s now incumbent on DevOps and DevSecOps teams to bring all their system configurations under the same level of control and governance as their application source code and containers. That’s right, it’s time for a new operations model. Say hello to GitOps!

What is GitOps?

GitOps practices empower development teams to perform traditional IT operations tasks. As more organizations adopt continuous integration (CI) and continuous delivery (CD) and apply automation to their testing, delivery, deployment, and governance, they have more opportunities to implement GitOps to streamline infrastructure tasks that DevOps doesn’t necessarily automate or factor into its workflows. GitOps also lets your teams take advantage of backend data through analytics, giving your stakeholders actionable insights into what’s happening up and down your pipelines and in your cloud infrastructure.

One of the many strengths of GitOps compared to DevOps is that it enables DevOps teams to loosen restrictions between development and operations sequences. GitOps is also repository-centric, with the project’s configuration files and deployment parameters residing in the same repository as the application source code. These strengths mean GitOps supports rapid development and complex changes, all while minimizing reliance on the complex scripts that traditionally dominate such tasks. GitOps also emphasizes the use of Git, a single and often already familiar tool for developers. GitOps is gaining attention because of the complexities of configuring Kubernetes (K8s). When a Kubernetes shop moves to GitOps, it can manage its K8s configuration right alongside its application source code. If the team makes a configuration mistake, it can roll back to the last known good configuration.
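That rollback is just ordinary Git history at work. Here’s a minimal sketch (the repo path, file name, and replica values are hypothetical; in a real GitOps setup, an agent such as Flux or Argo CD would apply the reverted commit to the cluster):

```shell
# Hypothetical GitOps repo: the K8s manifest lives alongside the app code.
mkdir -p /tmp/gitops-demo && cd /tmp/gitops-demo
git init -q
git config user.email "demo@example.com"
git config user.name "demo"

# Commit the last known good configuration.
echo "replicas: 2" > deployment.yaml
git add deployment.yaml && git commit -qm "known good config"

# A bad change lands...
echo "replicas: 200" > deployment.yaml
git commit -qam "scale change that misbehaves"

# ...and rolling back is just another commit in Git history.
git revert --no-edit HEAD
cat deployment.yaml
```

Because the rollback is itself a commit, the audit trail of what changed, and when, stays intact.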

Critics of GitOps cite its lack of visibility into environments, even though proponents see visibility as one of its strong suits. The criticism stems from the fact that the data teams see resides in plain text in Git, an approach critics argue only works for simple configurations and setups.

Comparing GitOps vs. DevOps

We’re reaching a peak level of ops in the IT industry right now, and it’s easy to get the models confused, even without the help of corporate marketing departments. The easiest way to distinguish DevOps from GitOps is to think of DevOps as a pipeline mechanism that enhances software delivery, while GitOps is a software development mechanism that improves developer productivity.

These ops models bleed together because of continuous integration/continuous delivery (CI/CD) and the componentization of software through containers. 

GitOps complements an overall DevOps strategy and your toolchains. By introducing GitOps into their workflows, a DevOps team can experiment with new infrastructure configurations. If the team finds that the changes don’t behave as expected, they can roll back the changes using Git history.

GitOps and Secure Development

Many of the same contrasts between DevOps and GitOps remain in GitOps vs. DevSecOps. GitOps is developer-centric. DevSecOps is security-focused with software containers playing a growing role in how DevSecOps teams secure software during development, testing, and in production.

As DevSecOps teams add more security and analytics to the toolchain, they can easily extend their security measures to cover their Git repositories. The merging of DevSecOps and GitOps should be interesting to watch.

DevOps, DevSecOps, and GitOps in the Future

Evolution is constant in the IT operations world. While some foresee that DevOps and DevSecOps will merge, GitOps will certainly continue to augment DevOps and DevSecOps toolchains and processes, offering teams better tools to manage configurations and lifting some pressure off the Ops team so they can focus on more strategic work.

How Core Values Can Foster Open Performance Discussions

Kindness, Openness, and Ownership.  These are the core values that Anchore was built on and our team members exhibit every day.  At the core of these values are the underlying themes of trust, empathy, and communication, which are paramount to our continuous performance model.  We designed this model to create manager and employee relationships that enable and empower every employee to feel safe to raise their hand with new ideas and ask for help, creating open two-way communication.

Every other month, employees sit down with their managers for a Top 2 conversation, where they discuss two things that went well and two things that can be focused on in the coming months.  Most importantly, this feedback goes both ways.  Employees have a regular opportunity to share feedback with their manager (wanting more frequent check-ins, interest in stretch projects, etc.) and managers have regular opportunities to coach employees, providing the tools and resources they need to be successful. 

Additionally, every six months managers sit down with their direct reports for a Stay Interview, providing a dedicated time to discuss motivators, communication styles, and long-term career goals. With this understanding, the feedback and opportunities presented can be deliberately aligned with each person’s individual goals and objectives. 

To learn more about how our continuous performance model has given our team members the tools to build a trusting and open relationship, we sat down with Brandon Lee (he/him), Senior Accountant, and his manager Alaina Frye (she/her), Sr. Director, Finance, Accounting and RevOps. 

Brandon has worked at various sized companies, from large financial services firms to small startups, all of which had infrequent and unclear performance models (or none at all). When he joined Anchore, Brandon welcomed the opportunity to participate in a robust and regular performance development program. 

“I think what we have with the Anchore Top 2 discussions and Stay Interviews is pretty awesome and has enabled Alaina and me to have a very open and honest relationship,” said Brandon.  “Because the Top 2 meetings occur so often, it’s easy for us to reflect on what went well in the previous months, as well as help identify some of the processes that we can continue to enhance and refine. The transparency and open dialogue we have in our Stay Interviews is really helpful for my career growth and happiness at Anchore. I enjoy the opportunity to share my short-term and long-term career goals in a very candid way and really appreciate the continuous support in accomplishing my goals.”

Alaina, whose experience also ranges from large financial services firms to small startups, has become a champion of Anchore’s Top 2’s and Stay Interviews.  As a manager, they give her the ability to have constant communication with her direct reports. This ensures that expectations and goals are clear on both sides – ensuring strong communication all around. 

“Top 2’s help everyone digest feedback because we have this set framework and recurring time to discuss performance regularly. It facilitates the opportunity to receive feedback and then work together on action plans,” said Alaina. “Some months there is specific feedback about how I can better support Brandon, but other months we sit down and talk at a higher level about process improvements we want to make.”

Even ad-hoc feedback conversations outside of Top 2’s have become more natural because Brandon and Alaina have built a foundation in their sessions, understanding how each other thinks, communicates, and what motivates them.  This sense of trust and psychological safety with one another has opened the door to real time feedback opportunities. 

Through Alaina and Brandon’s embodiment of Anchore’s values, they have cultivated a strong relationship where they can learn and grow – together.  If you are interested in working on a team that fosters kindness, trust, and open communication, head to our careers page. 

5 Open Source Procurement Best Practices

SolarWinds and now Codecov point to the need for enterprises to better manage how they procure and intake open source software (OSS) into their DevOps lifecycle. While OSS is “free,” it’s not without internal costs as you procure the software and bring it to bear in your enterprise software. 

Here are five open source procurement best practices to consider:

1. Establish Ownership over OSS for your organization

Just as OSS is becoming foundational to your software development efforts, shouldn’t it also be foundational to your org chart? 

We’re lucky at Anchore to have an open source tools team as part of our engineering group. Our CTO and product management team also have deep roots in the OSS world. Having OSS expertise in our development organization means there is ownership over open source software. These teams serve our overall organization plus current and prospective customers.

You have a couple of options for establishing ownership over OSS for your organization:

  • Develop strong relationships with the OSS communities behind the software you plan to integrate into your enterprise software. For example, support can take the form of paying your developers to contribute to the code base. You can also choose to be a corporate sponsor of initiatives and community events.
  • Task an appropriate developer or development team to “own” the OSS components they’re integrating into the software they’re developing.
  • Stand up a centralized open source team if you have the budget and the business need, and they can serve as your internal OSS experts on security and integration.

These are just a few of the options for establishing ownership. Ultimately, your organization needs to commit the management and developer support to ensure you have the proper tools and frameworks in place to procure OSS securely.

 

2. Do your research and ask the right questions

Due diligence and research are a necessity when procuring OSS for your enterprise projects. Either your developers or your open source team has to take the lead in asking the right questions about the OSS projects you plan to include in your enterprise software. Procuring enterprise software requires a lot of work from legal, contracts, and procurement teams to sort through the intricacies of contracts, licensing, support, and other related business matters. There’s none of that when you procure OSS. However, that doesn’t mean you shouldn’t put guard rails in place to protect your enterprise, because sometimes you may not even realize what OSS your developers are deploying to production. Here are some questions that might arise:

  • Who’s maintaining the code?
  • Will they continue to maintain it as long as we need it?
  • Who do we contact if something goes wrong?

It’s not about your developers becoming a shadow procurement department. Rather, it’s about putting their skills and experience to work a little differently to perform the same due diligence they might do when researching enterprise software. The only difference here is that your developers need to find out the “what ifs” that arise if an OSS project goes stagnant or fails to deliver on its potential.

3. Set up a Standard OSS Procurement Process

A key step is to set up and document a standard OSS procurement process that’s replicable across your organization for onboarding new components. Be sure to tap into the expertise of your IT, DevOps, cybersecurity, risk management, and procurement teams when creating the process.

You also should catalog all OSS that meet the approval process set by your cross-functional team in a database or other central repository. This is a common best practice in some large enterprises, but keeping it up to date comes at an expense.

4. Generate an SBOM for your OSS 

OSS doesn’t ship with a software bill of materials (SBOM), a necessary element for conducting vulnerability scans. It’s up to you to adjust your DevOps processes and put the tools in place for whoever owns OSS in your development organization. Generating an SBOM for OSS can take place at one or more phases in your DevOps toolchain.
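For instance, SBOM generation could run as a job right after the image build. Here’s a hypothetical GitLab CI fragment (the job name, image tag, and registry path are all assumptions) that uses Syft, Anchore’s open source SBOM generator, to emit an SPDX document and keep it as a pipeline artifact:

```yaml
# Hypothetical CI job: generate an SBOM for the image built earlier in the pipeline.
generate-sbom:
  stage: test
  image: anchore/syft:latest           # assumed tag; pin a specific version in practice
  script:
    # Scan the freshly built container image and emit an SPDX JSON SBOM.
    - syft registry.example.com/myapp:$CI_COMMIT_SHA -o spdx-json > sbom.spdx.json
  artifacts:
    paths:
      - sbom.spdx.json                 # keep the SBOM for downstream vulnerability scans
```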

5. Put OSS Maintenance in Place

When you’ve adopted an OSS component and integrated it into your software, you still need to have a method in place to maintain that source code. It’s a logical role if you have a dedicated open source team in-house and such work is accounted for in their budget, charter, and staffing. If you don’t have such a team, then the maintenance work would fall to a developer and that risks shifting priorities, especially if your developers are billable to client projects. The last option is to outsource the OSS maintenance to a third party firm or contractor, and that can be easier said than done, as the expertise can be hard to find (and sometimes costly!).

Then again, you can always roll the dice and hope that the OSS project remains on top of maintaining their source code and software with the necessary security updates and patches well into the future.

OSS Procurement and your Enterprise

The time is now to review and improve how your enterprise procures and maintains OSS. Doing the job right requires relationship building with the OSS community plus building internal processes and governance over OSS.

 

Do you want to generate SBOMs on the OSS in your development projects? Download Syft, our open source CLI tool for generating a Software Bill of Materials (SBOM) from container images and filesystems.

Blending Passion and Performance to Advance Innovation

As we explore the various roles and responsibilities at Anchore, one critical area is maintaining our interactions with the open source community. Anchore’s roots are deep in open source, and this area remains vital to our organization today. As a company we may expand our offerings, but the technology feedback and engagement we receive from our users in the community drives and inspires our team.

As we continually innovate and cultivate the newest technologies for secure and compliant software development, Anchore is thrilled to be hiring a Developer Advocate for our open source tools.

 

Our Vice President of Product, Neil Levine, weighed in on what he sees as the key elements to this role:

“Hiring a developer advocate is critical for Anchore as we look to grow adoption of our open source tools and evangelize the benefits of DevSecOps practices. We make our tools open source to reduce friction and encourage conversation with the developer community. This role is the essential glue between those users and the Anchore engineering team, so we can ensure that we are advancing state-of-the-art concepts when it comes to developing secure software. This role will help not just Anchore, but the broader software community.”

Are you passionate about DevSecOps and open source projects? Then apply for this role on our job board.

#NowHiring

5 Reasons AI and ML are the Future of DevSecOps

As the tech industry continues to gather lessons learned from the SolarWinds and now Codecov breaches, it’s safe to say that artificial intelligence and machine learning are going to play a role in the future of DevSecOps. Enterprises are already experimenting with AI and ML in hopes of reaping future returns on their security and developer productivity investments.

While even the DevSecOps teams with the budget and time to be early adopters are still figuring out how to implement AI and ML at scale, it’s time more teams looked to the future:

1. Cloud-Native DevSecOps tools and the Data they Generate

As enterprises rely more on cloud-native platforms for their DevSecOps toolchains, they also need to put the tools, frameworks, and processes in place to make the best use of the backend data that their platforms generate. Artificial intelligence and machine learning will enable DevSecOps teams to get their data under management faster while making it actionable for technology and business stakeholders alike.

There’s also the prospect that AI and machine learning will offer DevOps teams a different view of development tasks and enable organizations to create a new set of metrics.

Wins and losses in the cloud-native application market may very well be decided by which development teams and independent software vendors (ISVs) turn their data into actionable intelligence. Actionable intelligence gives stakeholders views into what their developers and sysadmins are doing right, both security-wise and operations-wise.

2. Data-Backed Support for the Automation of Container Scanning

As the automation of container scanning becomes a standard requirement for commercial and public sector enterprises, so will the requirements to capture and analyze the security data and the software bill of materials (SBOM) that come with containers advancing through your toolchains.

The DevSecOps teams of the future are going to require next-generation tools to capture and analyze the data that comes from the automation of vulnerability scanning of containers in their DevSecOps toolchains. AI and ML support for container vulnerability scanning offer a delicate balance of autonomy and speed to help capture and communicate incident and trends data for analysis and action by developers and security teams.

3. Support for Advanced DevSecOps Automation

It’s a safe assumption that automation will only mature and advance in the future. It’s quite possible that AI and ML will take on the repetitive legwork that powers some operations tasks, such as software management and other rote duties that fill up the schedules of present-day operations teams.

While AI and ML won’t completely replace operations teams, these technologies may certainly shape the future of operations team duties. And while there’s always the fear that automation may replace human workers, the reality is going to be closer to ops teams becoming more about automation management.

4. DevOps to DevSecOps Transformation

The SolarWinds and Codecov breaches are the perfect prompts for enterprises to make the transformation from DevOps to DevSecOps to protect their toolchains and software supply chain. Not to mention, cloud migrations by commercial and government enterprises are going to require better analytics over development and operational data their teams and projects currently produce for on-premise applications.

5. DevSecOps to NoOps Transformation

Beyond DevSecOps lies NoOps, a state where an enterprise automates so much that it no longer needs an operations team. While the NoOps trend has been around for the past ten years, it still ranks as a forward-looking trend for the average enterprise.

However, there are lessons you can learn now from NoOps in how it conceptualizes the future of operations automation that you can start applying to your DevOps and DevSecOps pipelines, even today.

Final thoughts

For the mature DevSecOps shop of the future to remain competitive, it must make the best use of data from the backend systems in its toolchain, its SBOMs, and its container vulnerability scanning. Artificial intelligence and machine learning are becoming the ideal technologies for enterprises to reach their future DevSecOps potential.

Celebrating Anchore’s Fifth Birthday

This is a special guest post from our CEO, Saïd Ziouani to celebrate and reflect on five years of the Anchore journey.

As we celebrate Anchore’s fifth birthday this month, and reflect on our journey thus far, I am truly humbled at what our talented team of professionals has accomplished in such a short period of time. Anchore was founded on three core values: Kindness, Openness, and Ownership. Our employees (affectionately called Anchorenauts) exemplify each of those values every day, and are the backbone of the company.

When Dan Nurmi, Co-founder, and I got together back in 2016 to start Anchore, container technology was still in the early stages, but adoption was starting to take shape at a pace that was like nothing we’d seen in the past. We could see how security and compliance would need to be re-imagined to a “continuous” approach that would allow developers to deliver innovation quickly and securely. We then realized that coupling the container adoption movement with developer-led security (or “shift left”) was going to be the foundational play for our next adventure.

Today, after five wonderful years of innovation, building a team, raising capital and instilling a strong operational foundation, I’m pleased to see how far we have come as a company. At 75 people strong, we are excited to be helping Fortune 100 companies such as eBay, NVIDIA and Cisco and government agencies such as the U.S. Air Force and Navy to develop secure cloud-native applications. And we’ve been honored to work alongside DevOps leaders such as GitLab, GitHub and Cloudbees to advance DevSecOps practices.

As we look forward, the next five years at Anchore will be full of new innovations as we help organizations secure their software supply chains in a world of increasing threats. We also seek to develop and inspire the next generation of engineers, technologists and leaders, both within Anchore and in the larger open source and technology community.

I’m even more excited now than I was the day Dan and I founded the company. The thrill of being at the forefront of such amazing and dynamic technology is more than we expected. As Anchorenauts have heard me say many times in the past, “it’s really all about the journey.” At Anchore, we surround ourselves with hardworking, kind individuals, all driving toward a common goal of building a technology that contributes to ensuring a safer and more secure world. I’m grateful to our industry partners, valued customers and all Anchorenauts — from those who’ve been with us since the early days to those who have embraced the journey with us more recently. We look forward to continuing to build this amazing company together!

2 SBOM & Supply Chain Security News Items to Watch

We aren’t about to stop hearing about the need for a software bill of materials (SBOM) and software supply chain security anytime soon. You can expect more news about a presidential executive order on SBOMs and about a new software supply chain breach at Codecov that we’re all still learning more about.

Impending Executive Order about SBOMs

The fallout from the SolarWinds supply chain attack is behind the U.S. federal government considering issuing an executive order that would require vendors to provide a software bill of materials (SBOM) with the software they sell to or create for a customer.

One of the potential benefits of this EO is that we might finally see a boost to some of the excellent industry and cross-industry work being done out there to better track software dependencies and related metadata. Hopefully, we’ll see SPDX, CycloneDX, SWID, and the National Telecommunications and Information Administration (NTIA) play new and collaborative roles within government and industry once this EO hits the street. 

An EO of this magnitude also sends a powerful message to government and industry about the risks of vulnerabilities that come from software dependencies. There’s also the potential of a knowledge gap that both government and industry will need to bridge. Look for security vendors to pivot their messaging and thought leadership to fill this gap.

Codecov Supply Chain Breach

Codecov — makers of a tool that lets development teams measure the testing coverage of their codebase — could be the latest high-profile software supply chain breach adding new fuel to the impending federal government EO.

Reports point to attackers exploiting a bug in Codecov’s Docker image creation process to gain access to its Bash Uploader script, which maps out development environments and reports back to the development team. The modification called for user credentials that would enable the attackers to access and exfiltrate data right from the continuous integration environment.

CEO Jerrod Engelberg published an update on the corporate site warning that any credentials, authentication tokens, or keys run through an affected customer’s CI process were exposed, giving attackers access to application code, data stores, and git repositories.

The Codecov breach brings up the harsh realities of the need to secure the DevSecOps toolchain for government and commercial enterprises. Nowadays, any focus on application security must also include the toolchain.

Be Proactive about SBOMs and Supply Chain Security

News of the impending executive order and recent news about Codecov mean the time is now to become more proactive about your organization’s SBOM adoption. Here are some actions you can take to be proactive about SBOMs and supply chain security:

  • Review your current DevOps or DevSecOps process with your development and operations teams and look for natural points to introduce the requirement for an SBOM as an entry gate.
  • Become conversant in the major SBOM standards — SPDX, SWID, and CycloneDX — because we’ve yet to see a full-court press for a single industry standard. It’s also a good time to monitor the SBOM work the NTIA is doing.
  • Implement a tool to generate SBOMs from container images and file systems if you haven’t already done so. Download and take Syft for a spin. It’s our open source CLI tool and library for generating a Software Bill of Materials from container images and filesystems.

A Family Approach to Startup Life

When Chad Olds (he/him) joined Anchore in February 2020 as VP of Sales-Americas, his goal was to build a collaborative, high-performing sales organization.  The first year was filled with many unexpected challenges, most notably a global pandemic. This led him through an action-packed year beginning as a “team of one” and ending with an incredibly talented team of Account Executives, Solution Engineers, and Sales Development Representatives.  

When the pandemic hit, Chad learned quickly how much work is involved with raising three children and being present as a parent while balancing a demanding career. It completely changed the expectations and needs in his household.  

“What I learned, even before the pandemic, is that taking care of the kids is a lot of work, and it is absolutely unfair for me to think that my wife, Brittany, who owns a small business, should be expected to take on full parent duties 24/7.”  

Chad knew how important it was to participate and share in the demands required in caretaking, and finding a way to balance the ownership and responsibility was a priority.

With a career in sales spanning 15 years, Chad’s focus was aligning himself with a company that understood the importance of finding an effective balance between work and life. At Anchore he found a sense of trust in managing personal schedules that flex with an individual’s needs.  It’s not always possible to predict needs or delegate work to others at a start-up, but he knew that with proper planning and prioritization, he had the support to make it happen.   

“I changed my work schedule to help take on more of the morning responsibilities for our family. Things like helping make breakfast, getting the kids dressed and hair brushed. Essentially helping them get ready to start the school day, which, as any parent out there can attest, can alone be a day’s work!”

Chad realized that even with the adjustment of helping with the morning routine, it wasn’t enough.  He wanted to support his wife in having more time to herself.  “I started blocking time during the week to spend time with my kids while Brittany was able to take the time she needed to stay balanced and healthy. It was fantastic for both of us! One of the things I appreciate about Anchore is that I don’t feel the need to hide spending time with my family. It’s something that our leadership team fully supports.”  

Being able to show up at work and contribute at the highest level involves having a life outside of work – whatever that may look like to each person.  Chad believes that burnout can happen quickly, especially at a start-up where the workload is vast, and the pressure is high. 

“I want my team to really know their friends and family.  I want them to enjoy what they do every day.  It’s about working smart, and prioritizing early and often to ensure you’re able to get done what you need to get done, while also being able to show up in other areas of your life fully, without distraction.  It is incredibly meaningful for me to not only give that support to my team in achieving what is most important to them, but to receive that level of support from my leadership as well.”

You can keep up with Chad and his series Colds Unfair Advantage on LinkedIn.