What is DevOps?
DevOps culture is characterised by strengthened collaboration, shared responsibility, quality improvements driven by valued feedback, and increasing automation.
DevOps is a blend of the Developer and Operations disciplines: the two teams share a set of processes and tools that help them deliver applications faster and more reliably.
The role of a Developer is to create applications that are cutting edge and easy to use. The Operations team is tasked with keeping the application and infrastructure as stable as possible.
What goals does DevOps hope to achieve?
In summary, its goal is to vastly improve workflows so as to:
- Increase deployment frequency
- Achieve quicker release times
- Lower the failure rate of (and number of bugs in) new releases
- Shorten the time to recover when fixes are needed
The term ‘DevOps’ was first coined by Patrick Debois and Andrew Clay Shafer back in 2009, as they advocated improvements to the already mature Agile methodology. By this time, Agile had solved many of the glaring inadequacies of the Waterfall methodology by favouring working software and higher levels of developer autonomy over document-heavy, linear delivery processes. In essence, Agile had empowered technical staff to swiftly deliver direct value to the business, respond rapidly to change, and sidestep prohibitive corporate red tape along the way; hence the term ‘Agile’.
Whilst Agile is largely focused on operational efficiencies, DevOps embellishes this with an additional focus on culture. Although Agile traditionally centres around developers, DevOps openly invites cross-departmental collaboration – in this case ‘Developers’ and ‘Operations’. By blurring the lines between these departments, both must assume shared responsibility for the overall delivery of a project, which cuts down on blame and increases overall accountability. More positively, it actively encourages developers, QAs and infrastructure engineers to proactively prevent technical issues growing out of control, and brings about an improved understanding of each other’s respective disciplines in the process (which is important for project planning, as well as for improving culture generally). With this stronger collaboration in place, DevOps can then deliver one of its biggest strengths of all: automation.
Codifying QA and infrastructure (or operations) allows developers to contribute to, and automate, these components in an ongoing ‘continual improvement’ model. Higher levels of automation help to deliver the following benefits by minimising human repetition and mistakes:
- Reduced development or maintenance cycle time
- Improved detection and prevention of defects
- Consistent, reproducible infrastructure across environments (infrastructure as code)
- Faster speed-to-market, without compromising on QA
- Improved tracking and visibility of development, testing and release activities (via CI/CD pipelines)
- Rapid and far safer patching of production issues
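The infrastructure-as-code point above can be made concrete with a small sketch. Assuming environments are described as plain dictionaries (a deliberate simplification of real IaC templates), a parity check between two environments might look like this – the function name and config keys are hypothetical:

```python
def config_drift(baseline: dict, candidate: dict) -> dict:
    """Return the keys whose values differ between two environment configs."""
    keys = set(baseline) | set(candidate)
    return {
        k: (baseline.get(k), candidate.get(k))
        for k in keys
        if baseline.get(k) != candidate.get(k)
    }

# Hypothetical environment definitions, as they might appear once codified:
staging = {"instance_type": "t3.medium", "min_nodes": 2, "tls": True}
production = {"instance_type": "t3.large", "min_nodes": 2, "tls": True}

# Only deliberately-different keys should appear here; anything
# unexpected is drift that codified infrastructure makes visible.
drift = config_drift(staging, production)
```

Because both environments are defined as data, drift between them becomes something a pipeline can detect automatically rather than something an engineer discovers during an outage.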
The DevOps Cycle & CI/CD Pipeline
Regardless of your sprint cadence or current cycle times, the DevOps lifecycle remains the same. See below for a quick breakdown.
The cycle begins with planning: similar to Scrum sprint planning, small stories, tasks or issues are agreed by the wider team and stakeholders, and a commitment is made to deliver n items by x date.
Developers, QAs and Operations then work together to build a solution (working software, with its infrastructure expressed as code).
Code commits (or pull requests) are built by a CI server or other pipeline service. Basic static analysis and code-quality checks are performed here. The software is then packaged, complete with third-party dependencies, ready for testing.
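A build stage of this kind can be sketched in miniature. The snippet below is not tied to any particular CI product: `static_check` stands in for a real static-analysis tool (here it simply rejects code that does not parse), and the artifact naming and pinned dependency are hypothetical.

```python
import ast

def static_check(source: str) -> list[str]:
    """A toy static-analysis gate: report code that does not even parse."""
    try:
        ast.parse(source)
        return []
    except SyntaxError as exc:
        return [f"syntax error at line {exc.lineno}: {exc.msg}"]

def package(name: str, version: str, deps: list[str]) -> dict:
    """Bundle application metadata together with pinned third-party dependencies."""
    return {"artifact": f"{name}-{version}.tar.gz", "dependencies": sorted(deps)}

# The commit under build (a trivial, valid function for illustration):
issues = static_check("def handler(event):\n    return {'status': 200}\n")

# Only package the build if the static checks pass:
artifact = package("orders-api", "1.4.2", ["requests==2.31.0"]) if not issues else None
```

The key property is the ordering: cheap static checks run first, and packaging only happens once they pass, so a broken commit never produces a deployable artifact.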
The application undergoes dynamic analysis and testing here, such as unit and integration tests. This is the fast-feedback phase, where fast-running tests determine whether the software is obviously flawed or ready for more intensive and costly testing.
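The fast-feedback idea boils down to ordering and short-circuiting. A minimal sketch, with made-up suite names and costs, might look like this:

```python
def run_fail_fast(tests):
    """Run (name, estimated_seconds, fn) tests cheapest-first; stop at first failure."""
    results = []
    for name, _cost, fn in sorted(tests, key=lambda t: t[1]):
        ok = fn()
        results.append((name, ok))
        if not ok:
            break  # don't spend time or money on slower suites after a failure
    return results

# Hypothetical suite: an expensive integration run and a cheap, failing unit run.
suite = [
    ("integration", 300, lambda: True),
    ("unit", 5, lambda: False),
]
outcome = run_fail_fast(suite)
```

Here the failing unit tests run first and the 300-second integration suite never starts, which is exactly the behaviour the fast-feedback phase is designed to produce.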
Assuming pre-deployment QA has passed, the build is tagged as a release candidate, using whatever versioning nomenclature your business prefers (a highly subjective topic!).
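As noted, versioning nomenclature is subjective; the sketch below assumes one common convention, a semver-style `MAJOR.MINOR.PATCH` version with an `-rc.N` pre-release suffix, purely for illustration.

```python
import re

def tag_release_candidate(version: str, attempt: int) -> str:
    """Tag a build as a release candidate under a hypothetical semver-style scheme."""
    if not re.fullmatch(r"\d+\.\d+\.\d+", version):
        raise ValueError(f"not a MAJOR.MINOR.PATCH version: {version!r}")
    return f"{version}-rc.{attempt}"

tag = tag_release_candidate("2.7.0", 1)  # "2.7.0-rc.1"
```

Whatever scheme you choose, the point is that tagging is deterministic and automated, so a release candidate is unambiguously traceable back to the build that produced it.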
The release candidate is then deployed. ‘Deploy’ does not necessarily mean ‘deployed-to-production’. For example, AWS makes tens of millions of so-called “deploys” every year, but the vast majority are to pre-production environments. Only a very small percentage are ever released to customer-facing production systems.
Most modern, large-scale CI/CD pipelines will deploy to multiple environments in parallel, perform different levels of testing on each, and then perform a final production deployment when it is deemed safe to do so. Faster tests fail faster, meaning the slower, more expensive tests can be stopped before they incur unnecessary costs.
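The parallel pre-production stage can be sketched with standard-library concurrency. The environment names are hypothetical, and `deploy_and_verify` is a stand-in for a real deployment API call plus that environment's test level:

```python
from concurrent.futures import ThreadPoolExecutor

def deploy_and_verify(env: str) -> tuple[str, bool]:
    """Stand-in for deploying to one environment and running its tests."""
    # A real pipeline would call a deployment service and a test runner here;
    # for illustration we simply report success.
    return env, True

environments = ["dev", "staging", "perf"]

# Deploy to all pre-production environments concurrently:
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(deploy_and_verify, environments))

# Production is only promoted when every pre-production gate has passed:
safe_for_production = all(results.values())
```

The production deployment becomes a conditional final step rather than a leap of faith: one failed pre-production environment is enough to hold the release back.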
Each deployment is monitored, so that either a human or an automated process can respond to regressions post-deployment, whether this is general system health, detectable changes in error rates or performance stats, etc. How and what you monitor is up to you to determine and continually evolve.
Monitoring can be used to trigger automated rollbacks, alert engineers to a potential problem, or perform other automated tasks; in fact, AWS CloudWatch offers much of this capability.
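An automated rollback trigger is, at heart, a policy over a metric. The sketch below assumes one hypothetical policy – compare the average post-deploy error rate against a pre-deploy baseline – where the threshold and metric are yours to determine and evolve:

```python
def should_roll_back(error_rates: list[float], baseline: float,
                     tolerance: float = 0.5) -> bool:
    """Decide whether to roll back after a deployment.

    Hypothetical policy: if the average error rate (%) observed since the
    deployment exceeds the pre-deploy baseline by more than `tolerance`
    percentage points, trigger an automated rollback.
    """
    if not error_rates:
        return False  # no data yet; don't act on nothing
    average = sum(error_rates) / len(error_rates)
    return average > baseline + tolerance

# Roughly 1% errors before the deploy, a clear spike afterwards:
decision = should_roll_back([2.4, 3.1, 2.9], baseline=1.0)
```

Whether this function feeds an alerting system, a CloudWatch-alarm-style action, or a fully automated rollback is a deployment-pipeline design choice; the monitoring data is what makes any of those responses possible.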
With all the information gathered from the last deployment, whether it went to production or not, you can begin to share lessons learned and plan the next iteration with your team. And so the DevOps cycle is complete.
It is important to understand that DevOps is a model of continual improvement and continual development. It is particularly well suited to application developers who will own certain workloads for a long period of time, as it requires prolonged cultural evolution of everyone involved in order to yield meaningful results.
Given the revolutionary impact and success of DevOps, it is perhaps no surprise that it has been augmented to include security. This isn’t new, as such – it has been happening for several years now – but by shifting security left in the release pipeline, and codifying it as much as possible, the same benefits that were felt by Operations teams can be felt by Security teams too.
In practice, auditing, monitoring, intrusion detection and so on can all be incorporated into infrastructure as code, whilst vulnerability scanning and patching can be written into CI/CD pipelines as far left as possible. It is exactly the same principle that one would apply to releasing software (Dev) and infrastructure and environments (Ops). The benefits are therefore analogous: repetitive human tasks can be automated (saving time and money whilst minimising human error), and rather than performing audits infrequently by hand, you can perform them every time you release, which makes suffering an unnoticed security breach far less likely. After all, if you release software every day, why perform an audit only every few months, or whatever your current cadence is? Any new release could contain a potential vulnerability, so every one of them needs to be checked.
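A shifted-left dependency check can be sketched as a simple pipeline gate. The advisory data below is entirely hypothetical; a real pipeline would query a vulnerability database or scanning tool instead:

```python
def scan_dependencies(pinned: dict[str, str],
                      advisories: dict[str, set[str]]) -> list[str]:
    """Return pinned dependencies whose exact version has a known advisory."""
    return [
        f"{name}=={version}"
        for name, version in pinned.items()
        if version in advisories.get(name, set())
    ]

# Hypothetical advisory feed and hypothetical dependency pins:
advisories = {"exampletool": {"1.0.0"}}
pinned = {"exampletool": "1.0.0", "otherlib": "2.2.0"}

findings = scan_dependencies(pinned, advisories)
release_blocked = bool(findings)
```

Run on every build, a gate like this turns the security audit from a periodic manual exercise into a per-release, automated check.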
If you haven’t yet implemented DevSecOps or DevOps methodology, then you should seriously consider exploring the benefits of this model as soon as possible.
As an Advanced Partner carrying the DevOps with AWS competency, Ubertas Consulting can review and make recommendations against your current DevOps capability to give you a roadmap to achieving Operational Excellence. Our clients are able to understand and leverage a range of enabling services, tools and technologies which can be used to automate manual tasks, help teams manage complex environments at scale, and ensure engineers stay in control of the high velocity that is enabled by DevOps.
Come and speak to us – we’d love to help.