As digital journeys become the cornerstone of IT ecosystems, agile has become the preferred SDLC model, inviting an abundance of open source and third-party solutions to the market and making these journeys faster and smoother. This has led many organizations to move toward an automated DevOps ecosystem where continuous integration, testing, and deployment work in tandem.
As new-age applications were built through the DevOps ecosystem, organizations realized the following benefits:
- Across the SDLC, full functional and non-functional test cycles were documented and quality gates became automated.
- A single source of truth gave governance teams confidence and opened the door to further automation.
- Organizations moved away from effort- and cost-based performance measures toward metrics such as release frequency and speed to rollback.
- New ways of working led to stronger AMS handshakes with new application development; customers moved to "Dev+Ops single team" ownership.
Organizations embraced the DevOps ecosystem because it offered hope for standardization amid "engineering democratization" and made IT governance more data-centric. Today, more and more organizations are leveraging DevOps data to build quantifiable inputs for KPI and governance automation.
As the application development, testing, and support worlds unify under continuous delivery (CD), the cloud is making Infrastructure as Code (IaC) mandatory, uniting the infra and app worlds. Despite these new ways of working, the IT ecosystem's security and risk management still operate in silos, meaning:
- Good-to-go applications have to wait for manual sign-off on application vulnerability and penetration testing from security and risk teams.
- New instances of DevOps tools in the cloud or on-premises are installed and configured automatically, but the related hardening and security setup takes time (or is ignored), making these instances a sweet spot for hackers.
- IaC tools and technology spin up infrastructure easily, but security operations are not connected to them.
The growing number of potentially vulnerable tools and technologies imported into the IT landscape has made customers think about how to bring security and related operations into the pipelines for better governance.
Below are scenarios where organizations could benefit from a DevSecOps approach.
Static application security testing (SAST) / dynamic application security testing (DAST)
Scenario: SAST and DAST tools are executed outside the standard DevOps pipeline and managed by separate teams. The tools' findings do not directly halt the continuous delivery pipeline.
- Related governance teams cannot leverage the reports (or miss them entirely) because execution is not an automated pipeline gate.
- For small releases, SAST execution can be skipped altogether.
- Fixing vulnerable open source libraries is challenging when no static patch exists for the affected package, leaving only major upgrades, indirect dependency upgrades, or custom upgrades.
- Application code vulnerabilities identified post-UAT can put the entire project/program on hold or force a complete rerun of the life cycle because of the code changes. This rework significantly impacts a business that is expecting a product release.
DevSecOps approach:
- Moving code from the dev environment to test servers automatically triggers SAST execution; the security team's approval is mandatory before deployment into the SIT/UAT environment.
- A security team member with access to the DAST test report participates in approving the production deployment pipeline.
- SAST, DAST, and Open Source Security Scan (OSSS) test reports are mandatorily fed into the project management tools (Jira or Octane) before the go/no-go decision.
- Automatic recording of mandatory approvals (via email or system tools) into project management tools for reference and governance.
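As a minimal sketch of the first gate above, the snippet below fails a CI stage when a SAST report contains blocking findings, which in turn halts the delivery pipeline. The JSON report schema, severity names, and field names are illustrative assumptions; real SAST/DAST tools each define their own report formats.

```python
import json
import sys

# Severities that should halt the pipeline. The report schema below
# (a JSON list of findings with "severity" and "title" fields) is an
# illustrative assumption, not any specific tool's format.
BLOCKING_SEVERITIES = {"critical", "high"}

def gate(report_path: str) -> int:
    """Return a non-zero exit code when blocking findings exist,
    so the CI stage (and therefore the delivery pipeline) halts."""
    with open(report_path) as fh:
        findings = json.load(fh)
    blocking = [f for f in findings
                if f.get("severity", "").lower() in BLOCKING_SEVERITIES]
    for f in blocking:
        print(f"BLOCKED: {f['severity']} - {f.get('title', 'unknown issue')}")
    return 1 if blocking else 0

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(gate(sys.argv[1]))
```

Wired in as a pipeline step (for example, `python sast_gate.py report.json`), a non-zero exit code stops deployment until the findings are resolved or formally waived.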
Enterprises leveraging DevOps tools
Scenario: The PoC of a new tool or tech stack follows all security and risk validations/verifications and is usually installed on an on-premises trial server with controlled access. Following successful trials and testing, the tool or tech stack is authorized for use across the enterprise.
- Each line of business takes the image of the tool from the PoC servers and proceeds with instance creation in its own cloud subscription.
- The infra/security hardening needed for installation on the dev/test servers, and the related housekeeping, is not worked out with the security team.
DevSecOps approach:
- Every software request is raised as a ticket. If the software is new, the security team runs a pipeline that automates (or semi-automates) the download from the vendor's site, performs plagiarism, risk, compliance, and vulnerability checks, and places the software on the PoC server.
- If the software is a fit, the required pipelines are created for cloud/on-premises installation, including all required security hardening.
- The automation associated with the ticket triggers the entire pipeline.
- Most importantly, the focus is on good governance over every software installation and on quick audit checks of existing installations to support mass upgrades and uninstallation.
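The ticket-driven intake flow above might look like the following sketch. The class and function names are hypothetical, and each check is a stub standing in for a real scanner (licence/plagiarism analysis, a policy engine, a CVE scan); only software that clears every check is staged on the PoC server.

```python
from dataclasses import dataclass, field

@dataclass
class SoftwareRequest:
    """One new-software ticket raised by a line of business."""
    name: str
    version: str
    download_url: str
    checks_passed: dict = field(default_factory=dict)

def run_intake_checks(req: SoftwareRequest) -> bool:
    """Run every mandatory security/compliance check and record results."""
    checks = {
        "licence_and_plagiarism": lambda r: True,  # stub: licence scanner
        "risk_and_compliance": lambda r: True,     # stub: policy engine
        "vulnerability_scan": lambda r: True,      # stub: CVE scan of artifact
    }
    for name, check in checks.items():
        req.checks_passed[name] = check(req)
    return all(req.checks_passed.values())

def process_ticket(req: SoftwareRequest) -> str:
    # Only software that clears every check reaches the PoC server.
    if run_intake_checks(req):
        return f"{req.name}-{req.version} staged on PoC server"
    failed = [n for n, ok in req.checks_passed.items() if not ok]
    return f"{req.name}-{req.version} rejected: {', '.join(failed)}"
```

Recording the per-check results on the ticket itself is what enables the audit checks and governance dashboards described above.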
Dynamic spin-up of an environment
Infrastructure setup is heavily automated, and most customers are comfortable with IaC. Customers are leveraging Terraform (and other stacks) to automatically create new infra space (mutable or immutable), especially in the cloud, to dynamically spin up test environments and extend the production environment.
When new infrastructure is to be created, the related support, risk, and compliance teams process the request, check availability, secure ports, and grant specific access to various teams. When the entire infrastructure is created via IaC, these aspects are generally left outside the pipeline:
- Are the new infrastructure details updated in any of the centralized repositories?
- Is security hardening for the new servers complete? Are old servers cleansed and past access revoked? After the new version is successfully deployed to production, is security sanitization performed on the servers that ran the older version of the application?
- Is there governance in place for "creation" and "access enablement," and is owner assignment for the infrastructure available? Does the pipeline create any metadata for it?
- For patch application across the organization, what is the confidence level that all servers are covered?
DevSecOps approach: Security and risk teams have to automate this space, bring their processes into the ecosystem, and include security sanity checks on the servers. Security and compliance scripts can be brought into the pipeline easily, and the metadata collected can play a key role in dashboard creation and governance.
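One such script, run as the final step of an IaC pipeline (for example, after `terraform apply`), could register the new server in a centralized inventory with owner metadata and flag hardening violations. The in-memory inventory dict and the port allowlist below are illustrative assumptions standing in for a real CMDB and a real hardening baseline.

```python
from dataclasses import dataclass

# Ports permitted on a freshly provisioned server (assumed baseline).
ALLOWED_PORTS = {22, 443}

# Stand-in for a centralized CMDB / inventory service.
inventory: dict = {}

@dataclass
class Server:
    hostname: str
    owner: str
    open_ports: set

def register_and_check(server: Server) -> list:
    """Record the server's metadata and return any compliance violations."""
    inventory[server.hostname] = {
        "owner": server.owner,
        "ports": sorted(server.open_ports),
    }
    violations = []
    if not server.owner:
        violations.append("no owner assigned")
    for port in sorted(server.open_ports - ALLOWED_PORTS):
        violations.append(f"unexpected open port {port}")
    return violations
```

Because every pipeline run writes to the same inventory, questions like "are all servers covered by this patch?" become a query against collected metadata rather than a manual audit.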
If an organization wants to automatically deploy and to provision environments, infrastructure, and applications, security hardening and related sanity testing of the environment must occur. Leaving security, risk, and compliance outside the DevOps ecosystem is no longer an option. Treating SecOps as an integral part of DevOps, i.e., DevSecOps, is now the mandate. Speed without compromising security is the new mantra.