by Tom Geraghty
What few disagree on, though, are its origins. In the tech industry, it has long been accepted that technologists are either “devs”, those who “create”, or “ops”, those who “build and maintain”: developers write code, while engineers build the system and keep it running. Conflict frequently emerges between these two camps and their seemingly incongruent goals. Development teams are motivated and measured by high change frequency and scale (deploying features, fixes, and improvements), while operations teams are judged on reliability and consistency, qualities often seen as an outcome of low change frequency and scale (though we shall see later how this isn’t necessarily true). The result is often an antagonistic relationship between the two teams.
DevOps is, or at least originated as, the effort to reconcile this fracture and improve business performance.
“…all ideas are second-hand, consciously and unconsciously drawn from a million outside sources.” Mark Twain
At a high level, the practice of DevOps focuses on culture, process, velocity, feedback loops, repeatability via automation, responsiveness to change, and continuous improvement. These practices have accelerated the web-scale revolution behind high-performance tech giants such as Google, Netflix, Amazon, and Facebook.
However, these concepts are not new. They have been used by industrialists, researchers, and technologists to improve the quality and efficiency of production since the dawn of the industrial revolution.
Industrial Origins of DevOps
In 1620 Francis Bacon codified what was to become the fundamental basis for empirical knowledge: the origin of the scientific method. Bacon’s method described the conception of a theory based upon observation, and the use of experiments to test the theory. 400 years on, we still use Bacon’s approach to create and test theories, monitor systems and check technological functionality.
“In the past, the man has been first; in the future, the system must be first.” Frederick Taylor
Taylor summed up his efficiency strategies in his 1911 book “The Principles of Scientific Management”, voted the most influential management book of the twentieth century by Fellows of the Academy of Management in 2001. Without Taylor, it’s unlikely that Apple or Google would even exist as they do now.
20th Century Production
At the beginning of the 20th century, most manufacturing used inefficient techniques. Cars, for instance, were built the way you or I would go about the task, by assembling all the parts in one place: craft production. When demand for cars increased, however, it became clear that a form of linear, or mass, production was needed. One of the best-known examples of the production line is the one adopted by Henry Ford in 1913 for the Ford Model T, which was based on Taylor’s principles. Through the use of time and motion studies, Ford refined his production line until he had reduced the production time for a car from over twelve hours to just 93 minutes. He also introduced the concepts of repeatability and standardisation to mainstream manufacturing. In contrast to Taylor, however, Ford always maintained his belief in the importance of the skill and craftsmanship of the worker.
“Without data, you’re just another person with an opinion.” William Edwards Deming
Deming defined what is now known as the Deming Cycle: Plan – Do – Check – Act. This is similar to the software development lifecycle that most of the technology industry uses today. Deming championed the continual analysis and improvement of processes – one of the key tenets of DevOps – and saw effective quality assurance as an essential function of high-performing organisations.
At around the same time, after studying consumer behaviour in supermarkets, the Toyota Motor Corporation began using Kanban (which means “signboard” in Japanese) to control and record work. Kanban boards have vertical columns representing stages in a process, with work packages in the form of cards. Each process is a “customer” of the preceding process to its left – that is, work is “pulled” from left to right, rather than “pushed”. This reduces inventory pile-up, enabling a “just in time” delivery system and minimising waste. It also aids the identification of bottlenecks by highlighting Work In Progress (WIP). Kanban makes work visible, and making work visible is crucial to further improvement, because “you can’t manage what you don’t measure”.
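As a toy illustration of the pull mechanism (all names here are hypothetical, and real kanban tools differ), the sketch below only lets work move right when the downstream column has spare WIP capacity:

```python
from collections import deque

# A toy pull-based kanban board: a card moves right only when the
# downstream column has spare WIP capacity, so inventory cannot pile
# up between stages and bottlenecks become visible.

class KanbanBoard:
    def __init__(self, columns, wip_limits):
        self.order = list(columns)
        self.columns = {name: deque() for name in self.order}
        self.wip_limits = wip_limits

    def add(self, card):
        # New work enters the leftmost column, subject to its WIP limit.
        first = self.order[0]
        if len(self.columns[first]) >= self.wip_limits[first]:
            raise RuntimeError(f"WIP limit reached in '{first}'")
        self.columns[first].append(card)

    def pull(self, into):
        # The downstream column pulls the oldest card from its upstream
        # neighbour - work is pulled, never pushed.
        i = self.order.index(into)
        if i == 0:
            raise ValueError("the first column has no upstream neighbour")
        if len(self.columns[into]) >= self.wip_limits[into]:
            raise RuntimeError(f"WIP limit reached in '{into}'")
        self.columns[into].append(self.columns[self.order[i - 1]].popleft())

board = KanbanBoard(["todo", "doing", "done"],
                    {"todo": 5, "doing": 2, "done": 100})
board.add("fix login bug")
board.pull("doing")  # 'doing' has capacity, so it pulls the card from 'todo'
```

Attempting to pull into a full column raises an error, which is exactly the point: the WIP limit makes the bottleneck impossible to ignore.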
“Any improvements made anywhere besides the bottleneck are an illusion.” Eliyahu M. Goldratt
The quote above summarises Goldratt’s Theory of Constraints. In his 1984 management novel “The Goal”, Eli Goldratt built on Deming’s ideas and popularised concepts that would become central to Lean production, a precursor of DevOps methodology. He described a failing manufacturing plant where Alex, the main character, is brought in to turn things around within three months. Through a series of telephone calls and meetings with an acquaintance called Jonah (another physicist, like Deming), Alex solves the organisation’s problems by utilising pull rather than push processes, reducing WIP, and employing the Theory of Constraints. “The Goal” itself, Goldratt demonstrates, is simply to make money for the business. Anything else, if it cannot be demonstrated to help make money, is likely to be vanity.
People And Process
By the 1980s, the modern manufacturing revolution was in full swing. However, its often reductionist approach to workers wasn’t helpful, and staff turnover was high. Among those to recognise this was Burrhus Frederic Skinner, a psychologist, author, inventor, and the Edgar Pierce Professor of Psychology at Harvard University. Describing the nature of quality work and happiness, he said:
“It’s the difference between a craftsman who makes a complete chair and a person on an assembly line who makes only the legs. The craftsman’s work is constantly reinforced by the process of seeing the chair take form, and finally of producing the finished chair. But the assembly-line worker sees only chair leg after chair leg — never the completed product.”
This is a near-definitive example of “systems thinking” – another key tenet of DevOps.
Being able to see the end result of the process is key to improving quality in the individual stages – how can someone build the perfect component if they don’t understand its place in the final product? Systems thinking is a cultural practice rather than a process or tool. It relies on trusting team members to make small but important decisions about their part in the process, so that they become more invested in the outcome.
In 2001, Toyota codified the principles behind its production system in “The Toyota Way”, summarised in four sections:
- Long-Term Philosophy – Base your management decisions on a long-term philosophy, even at the expense of short-term goals.
- The Right Process Will Produce the Right Results – Focus on pull processes, managing WIP, and making work visible.
- Add Value to the Organization by Developing Your People – Provide effective training, highlight team success over individual success, and challenge your partners and suppliers.
- Continuously Solving Root Problems Drives Organizational Learning – Continuously improve (in Japanese, kaizen), use the “5 whys” to get to the root cause of problems, standardise, decide slowly and act quickly, and encourage a knowledge sharing culture.
Everything in The Toyota Way and Lean Production aligns with, and indeed forms part of, the principles of DevOps.
The Agile Manifesto
Also that year, at the Snowbird resort in Utah, seventeen developers, frustrated with traditional heavyweight project management methodologies, came up with the Agile Manifesto. At the time, industry experts estimated that the time between a validated business need and an actual application in production was around three years. There was a real desire to find more lightweight ways to deliver value from technology, faster. The Agile Manifesto is as follows:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
Whilst Agile methodology is not fundamentally part of DevOps, the two usually go hand-in-hand. In technology teams, one is certainly easier to achieve in the presence of the other.
The First DevOps Role
Shortly after the Agile Manifesto was written, Google was undergoing rapid expansion. As one of the few web-scale tech businesses at the time, they experienced the unprecedented challenge of trying to rapidly introduce new features whilst maintaining a highly complex, always-on, massive-scale platform. The Site Reliability Engineering (SRE) team, led by Ben Treynor, was their solution.
A Site Reliability Engineer (SRE) would typically spend up to half their time performing operations-related work such as troubleshooting system issues and performing maintenance, and the other half on development tasks such as new features, scaling challenges, or automation. The SRE was one of the first true DevOps roles in technology.
DevOps Detractors
“…the opportunities for gaining IT-based advantages are already dwindling… And as for IT-spurred industry transformations, most of the ones that are going to happen have likely already happened or are in the process of happening.” Nicholas Carr
It’s worth noting that not everyone in business recognised the strategic potential of technology. In May 2003, Nicholas Carr published an article in the Harvard Business Review titled “IT Doesn’t Matter”. In this now infamous piece, Carr defined IT as a commodity, in the same category as electricity or water. He suggested that being the first to utilise a particular technology provides only a small competitive advantage, since your competitor can purchase the same system or replicate the same technology, while you incur the lion’s share of the cost by going first. He stated:
“The key to success, for the vast majority of companies, is no longer to seek advantage aggressively but to manage costs and risks meticulously. If, like many executives, you’ve begun to take a more defensive posture toward IT in the last two years, spending more frugally and thinking more pragmatically, you’re already on the right course. The challenge will be to maintain that discipline when the business cycle strengthens and the chorus of hype about IT’s strategic value rises anew.”
Carr’s piece was taken very seriously at the time, and still is by many business leaders. Perhaps it is fortunate for organisations such as Salesforce and Google that they pursued technology as a competitive advantage, and disregarded Carr’s advice.
Improving IT
It is not surprising, however, that technology had such a poor reputation at the time, since research suggests that at least 80% of outages were (and potentially still are) self-inflicted. In 2004, Kevin Behr, Gene Kim, and George Spafford published The Visible Ops Handbook, describing a methodology to improve operational IT. “Visible Ops” consists of four stages:
- Stabilize Patient, Modify First Response – This first step controls risky changes and reduces MTTR (Mean Time To Resolution).
- Catch and Release, Find Fragile Artifacts – Here assets, configurations and services are inventoried in order to identify those with the lowest change success rates, highest MTTR and highest downtime costs.
- Establish Repeatable Build Library – This creates repeatable builds for critical services, to make it “cheaper to rebuild than to repair” (a minimal sketch of this idea follows below).
- Enable Continuous Improvement – This implements metrics to enable continuous improvement of processes.
To some degree, these four stages are evolutions of elements of The Toyota Way. They formed an embryonic codification of what was to become the principles of DevOps.
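As an illustration of the third stage, here is a toy sketch in Python (the service name, repository URL, and build command are all hypothetical) of rebuilding a service from a version-controlled definition rather than repairing a drifted instance by hand:

```python
import subprocess
import tempfile

# A toy "repeatable build library": every critical service can be rebuilt
# from a version-controlled definition, so a broken instance is recreated
# from a known-good state rather than patched up manually.

SERVICES = {
    "billing": {
        "repo": "https://example.com/build-library/billing.git",
        "build": ["make", "install"],
    },
}

def rebuild(service_name):
    spec = SERVICES[service_name]
    workdir = tempfile.mkdtemp(prefix=service_name)
    # Always start from the versioned definition, never from the
    # possibly-drifted state of the running machine.
    subprocess.run(["git", "clone", spec["repo"], workdir], check=True)
    subprocess.run(spec["build"], cwd=workdir, check=True)

rebuild("billing")
```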
Over the next few years, the technology industry underwent a paradigm shift, where methods of working were analysed, and technology became far more fundamental to the success of organisations (possibly to the chagrin of Nicholas Carr).
#DevOps
The story of the term DevOps begins in 2008. There’s some confusion and misinformation regarding how it came about, but I spoke to Andrew Shafer and Patrick Debois, both widely credited with creating the term “DevOps”, to get the full story…
In August 2008, at the Agile Conference in Toronto, software developer Andrew Shafer posted notice of a discussion session entitled “Agile Infrastructure”. Just one person, system administrator Patrick Debois, attended. Debois had become frustrated by the now ubiquitous conflict between developers and operations while working on a data centre migration for the Belgian government, and was looking for solutions. Shafer actually skipped his own session because he didn’t think anyone was interested, but Debois later tracked him down for a chat in the hallway. Inspired by that hallway discussion, they formed an “Agile Systems Administration” Google Group.
In October the following year, 2009, Patrick organised the first DevOpsDays conference in Belgium, though it was Shafer who (it’s believed) coined the term DevOps by tweeting with the #DevOps hashtag at the Velocity conference in June 2009, whilst watching the now famous “10 deploys a day” talk by John Allspaw and Paul Hammond of Flickr.
The Role of Cloud Technology
The tide had turned. Increasingly, organisations began looking at ways of improving software deployments, moving away from large, disruptive (and frankly, stressful) deployments towards a model of more frequent, smaller, low-risk deployments.
In 2010, Jez Humble and Dave Farley wrote what is still one of the definitive texts on this approach: “Continuous Delivery”. It describes in detail how to automate your build, deployment, and testing pipeline so that you can release changes in hours or even minutes. That might not seem impressive today, but at the time, a release cycle of months or even years was very common.
Continuous delivery, according to Farley and Humble, requires:
- Comprehensive configuration management
- Continuous integration and short-lived branches (in reference to Trunk-Based Development)
- Continuous testing
The automation of the build, deployment, and testing process, coupled with better collaboration between development, test, and ops teams, means that changes can be released rapidly. These smaller, low-risk changes are more easily rolled back should something go wrong. “Continuous Delivery” showed how to increase the velocity of change whilst reducing risk and improving quality.
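As a minimal sketch of the idea (the make targets are placeholders, and real pipelines are usually defined in a CI server’s own configuration), a deployment pipeline is essentially a chain of automated stages in which any failure halts the release:

```python
import subprocess
import sys

# A toy deployment pipeline: each stage runs automatically, and any
# failure stops the release before it can reach production.

STAGES = [
    ("build",  ["make", "build"]),
    ("test",   ["make", "test"]),
    ("deploy", ["make", "deploy"]),
]

def run_pipeline():
    for name, command in STAGES:
        print(f"--- stage: {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Failing fast keeps each change small, visible, and easy
            # to roll back - nothing broken is ever released.
            print(f"stage '{name}' failed; release aborted")
            return 1
    print("release complete")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```

The same fail-fast chain, run automatically on every commit, is what makes many small releases safer than one large one.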
With cloud technology becoming mainstream and a desire to release software more rapidly, automation technology and tools took off. Software firms such as Puppet and Chef grew fast as developers and engineers strove to streamline their build processes and manage ever-increasing scales of infrastructure in the cloud. These tools also provided a new ability to fire up duplicate environments, such as staging, QA, test, and validation, within minutes rather than weeks or months. Organisations exploiting these automation tools and native cloud technologies felt that they were gaining significant competitive advantage by doing so, and what evidence there was supported them. Even Gartner, in a 2011 report, stated that:
“By 2015, DevOps will evolve from a niche strategy employed by large cloud providers into mainstream strategy employed by 20% of Global 2000 organisations.” Gartner, March 18, 2011.
In the same report, Gartner recognised that ITIL and other “top-down” best-practice frameworks had not delivered on their goals, and that IT organisations were looking for something new. They understood that because DevOps was primarily a cultural shift, driven from the ground up, it could prove far easier for technology departments to adopt than ITIL or similar frameworks.
The Codification of DevOps
Two years later, Gene Kim, Kevin Behr, and George Spafford wrote “The Phoenix Project”, a novel about a failing organisation struggling to meet the demands of modern technological complexity and competition. The novel inspired technology leaders and engineers alike because it described with eerie familiarity what it is like to work in a technology organisation with poor change control, a problematic “Ops vs Devs” culture, and inadequate visibility and monitoring of work and performance.
In “The Phoenix Project”, Gene also introduces one of the first efforts to codify DevOps, “The Three Ways”:
- Flow (or Systems Thinking)
- Feedback Loops
- Continuous Improvement
These “Three Ways” are concepts that echo the Toyota Way, Deming’s “Plan-Do-Check-Act” cycle, and other best practices, made specific to the DevOps context. Gene’s subsequent book, “The DevOps Handbook” (2016), written with Jez Humble (of “Continuous Delivery”), Patrick Debois, and John Willis, goes deeper into the technical application of The Three Ways. It explores how to measure what matters to the business, and how to implement technical processes such as Continuous Integration and Continuous Delivery. Gene also introduces the concept of “DevSecOps” – the integration of DevOps practices into information security. If The Phoenix Project was the “why” of DevOps, The DevOps Handbook provides the “how”.
Measuring DevOps
Given that DevOps is at least partly about effective measurement and continuous improvement, it’s self-evident that we, as an industry, should measure the success of DevOps itself. In 2012, Puppet began surveying people working in technology to understand the adoption and development of DevOps practices. The resulting “State of DevOps” reports focussed on twenty key capabilities. These fall into familiar categories:
- Technical (version control, test automation, deployment automation, trunk-based development)
- Process (WIP limits, visual management, visualisation of the value stream)
- Cultural (team culture, learning cultures, and job satisfaction).
Now taken over by DORA (DevOps Research and Assessment), an organisation created by Nicole Forsgren, Gene Kim, Jez Humble, and Soo Choi, the State of DevOps report is improved every year. According to Alanna Brown at Puppet, they “have built the deepest and most widely referenced body of DevOps research available, drawing on the experience of more than 30,000 technical professionals around the world.” The data from these reports demonstrates that Carr’s view of IT as a cost centre was misguided: IT is a powerful driver of value to organisations where velocity, security, and stability are essential for success.
“…software delivery is an exercise in continuous improvement, and our research shows that year over year the best keep getting better, and those who fail to improve fall further and further behind.” Nicole Forsgren
On the back of the last four years of State of DevOps reports, Nicole Forsgren, together with Jez Humble and Gene Kim, wrote the illuminating book “Accelerate”. It explains which metrics correlate with organisational performance, and what you should measure in order to find out where and how to improve.
Forsgren states that the key metrics separating high from low performers in tech organisations are:
- Deployment frequency (and pain!)
- Lead time for change (from code commit to code deploy)
- Mean Time To Restore (MTTR)
- Change failure rate
Interestingly, the first two of these metrics are throughput (traditionally development-oriented) measures; the last two are stability (traditionally operations-oriented) measures.
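As a minimal sketch (the data shapes are hypothetical, and this is an illustration rather than DORA’s survey methodology), all four metrics can be computed from simple deployment and incident records:

```python
from datetime import datetime
from statistics import mean

# Toy records: when a change was committed, when it was deployed,
# and whether the deployment caused a failure in production.
deployments = [  # (commit_time, deploy_time, change_failed)
    (datetime(2018, 1, 1, 9), datetime(2018, 1, 1, 11), False),
    (datetime(2018, 1, 2, 9), datetime(2018, 1, 2, 10), True),
    (datetime(2018, 1, 3, 9), datetime(2018, 1, 3, 9, 30), False),
]
incidents = [  # (start_time, restored_time)
    (datetime(2018, 1, 2, 10), datetime(2018, 1, 2, 10, 45)),
]

days_observed = max((deployments[-1][1] - deployments[0][1]).days, 1)

# Throughput measures: how often and how quickly changes ship.
deployment_frequency = len(deployments) / days_observed
lead_time_hours = mean(
    (deployed - committed).total_seconds() / 3600
    for committed, deployed, _ in deployments
)

# Stability measures: how often changes fail and how fast we recover.
change_failure_rate = sum(failed for _, _, failed in deployments) / len(deployments)
mttr_hours = mean(
    (restored - start).total_seconds() / 3600 for start, restored in incidents
)

print(f"deployment frequency: {deployment_frequency:.1f} per day")
print(f"lead time for change: {lead_time_hours:.1f} hours")
print(f"change failure rate:  {change_failure_rate:.0%}")
print(f"mean time to restore: {mttr_hours:.2f} hours")
```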
The State of DevOps Today
As of the 2018 State of DevOps report, the findings consistently show that:
- Software delivery and availability unlock competitive advantages.
- How you implement cloud infrastructure matters.
- Use of open source software improves performance.
- Outsourcing by function is rarely adopted by elite performers and hurts performance.
- Key technical practices (e.g. monitoring, automated testing, security integration) drive high performance.
- Industry doesn’t matter when it comes to achieving high performance for software delivery.
The statistics show that high performers deploy code 46 times more frequently than low performers. They have a 7 times lower change failure rate, an over 2,500 times faster lead time from code commit to deployment, and recover from incidents over 2,600 times faster.
When an organisation can deploy quickly, recover rapidly, and suffer few outages, it can reach the market before its competitors and respond quickly to customer demand, while also providing a more stable and secure service. This results, ultimately, in Goldratt’s “Goal”: making more money for the business.
Such a state is not reached by simply automating, using cloud technology, or recruiting a “DevOps Engineer” – it is the culmination of great team culture, continuous improvement, feedback loops, systems thinking, and a rigorous approach to using the right technology. DevOps is not a framework (like ITIL), an industry standard, a suite of tools, or a job title.
DevOps encompasses the culture, technologies, tools, skills and processes that enable organisations to go from idea to production as rapidly as possible, incurring low risk and cost, and providing high security and reliability at scale.
The definition of DevOps itself is continually evolving and improving, and while I may offer a definition as above, it will be out of date within days of writing, because, like the technology and services we build, it is continuously in flux, and being improved by the same people practising it.
Where does DevOps go next? My prediction is that DevOps will soon die out, not because it is no longer useful, but because it will become simply “the way we do things”.