DevOps is revolutionizing IT development. New products and services are being delivered more quickly, better meeting business expectations. Culture change and automation are key – with DevOps toolchains the key to the latter. DevOps toolchains are being built using many different solutions – such as Ansible, Bamboo, Docker, GitHub, Jenkins, Kubernetes, Nagios, Puppet, and others. This article looks at the pros and cons of such DevOps toolchains and questions how many DevOps, IT service management (ITSM), and other IT management tools your organization really needs.
The rise of DevOps and DevOps toolchains
The reasons why organizations have adopted, and are adopting, DevOps are well-publicized, including to:
- Improve collaboration – The roots of DevOps are based on a simple concept – help the IT Development and IT Operations teams to work better together. Since then, DevOps thinking has evolved to also include IT Security, Quality Assurance, Suppliers, Customers, Network Engineers, and any stakeholder that’s involved in the idea-to-value chain.
- Increase solution delivery velocity – As the pace of business change increases, IT needs to deliver solutions at a pace that keeps up.
- Improve responsiveness – IT also needs to respond in a timely manner to ever-changing developments in the marketplace, with an acceptable balance between cost, risk, and quality.
- Change culture – Perhaps the most important aspect of adoption, DevOps promotes a culture of “one team,” a culture of safety (in which it’s okay to fail), and a culture of experimentation and learning.
One of the ways that organizations that are embracing DevOps try to meet these objectives is to build and use toolchains. A DevOps toolchain consists of a combination of tools that aid in the delivery, development, and management of applications throughout the systems development lifecycle.
Common DevOps tool needs
Most DevOps toolchain definitions that I’ve seen include the following capabilities:
- Source code management – The critical component of the toolchain, as successful DevOps adoptions rely on a robust source code control system.
- Plan – To help define the deliverables of the solution, such as business requirements, metrics, security, the business case, and release planning and timing.
- Create – To write the software or configure the components needed to deliver the solution.
- Build – To assemble the various components of the solution into a package.
- Test – To confirm that the solution meets defined requirements.
- Deploy – To implement the new or changed solution into a managed environment such that it can be used/consumed.
- Operate – To make the solution available for use/consumption.
- Monitor – To observe the performance and impact of the deployed package, and to alert teams should a negative situation arise.
These DevOps toolchains are key to automation, which allows an IT organization to become more responsive and to operate with greater velocity.
There are lots of DevOps tools
A popular toolchain “best practice” is *not* to use a homogenous toolset, but rather to implement and use combinations of tools – from both commercial product vendors and open-source solutions – to automate the plan-to-monitor value chain. Such an approach offers both pros and cons. On the pro side:
- Flexibility – Emerging needs can be addressed by the right tool or solution, rather than trying to fit a solution into a need.
- No vendor “lock-in” – A long-standing risk of a single-vendor solution is becoming “locked-in” to that vendor’s offering – good or bad. Being able to use the best tool for the need, regardless of vendor, gives an organization independence from any one vendor’s solution, which may be inadequate or inappropriate.
- Specificity – Different tools do different things. Organizations can select a tool that has a specific focus on specific deliverables, such as integration or automated testing, to meet their specific need.
On the con side:
- Administrative burden – Every tool added to the toolchain brings administrative burden. Someone has to maintain each tool, and someone has to build and maintain all of the integrations between the tools. The alternative is that every data handoff between tools must be done manually, which defeats the purpose of a toolchain.
- No transportability – Toolchains developed by other organizations cannot necessarily be leveraged or duplicated. Each organization risks “reinventing the wheel” only to achieve results similar to those other organizations have already achieved.
- No holistic toolchain support from vendors – IT organizations could find themselves “on their own” with regard to troubleshooting and maintaining the functionality and effectiveness of their toolchains.
- Complexity – The more parts a toolchain has, the harder it becomes to ensure interoperability and consistency across it.
- Overlapping/duplicative functionality – The DevOps tool marketspace is full of duplicative functionality. And don’t just take my word for it – popular visualizations of the DevOps tool landscape show just how much functionality overlaps between products.
And with the various solutions on the market plus the increasing complexity of managing toolchains, a new technology category has emerged – tools for managing tools. For example, AIOps tools correlate and determine a response to alerts raised by differing monitoring tools. Reporting tools gather and synthesize information from source tools to produce consolidated dashboards and other reporting formats.
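As a rough sketch of what such “tools for managing tools” do, an AIOps-style correlator might normalize alerts from differing monitoring tools into one common shape and group them by the affected host, so that one incident is raised instead of several. The field names and payloads below are invented for illustration, not any real product’s schema:

```python
from collections import defaultdict

# Hypothetical alert payloads from two different monitoring tools,
# each with its own field names. Illustrative only.
tool_a_alerts = [
    {"source": "tool_a", "hostname": "web-01", "msg": "CPU > 95%"},
    {"source": "tool_a", "hostname": "db-01", "msg": "disk 90% full"},
]
tool_b_alerts = [
    {"source": "tool_b", "host": "web-01", "message": "high load average"},
]

def normalize(alert):
    # Map each tool's schema onto one common shape.
    return {
        "host": alert.get("hostname") or alert.get("host"),
        "text": alert.get("msg") or alert.get("message"),
        "source": alert["source"],
    }

def correlate(*alert_feeds):
    # Group normalized alerts by host so one incident covers them all.
    incidents = defaultdict(list)
    for feed in alert_feeds:
        for alert in feed:
            norm = normalize(alert)
            incidents[norm["host"]].append(norm)
    return dict(incidents)

incidents = correlate(tool_a_alerts, tool_b_alerts)
# Both tools' "web-01" alerts now sit under a single incident.
```

Notice that most of the code is the `normalize` step – which is precisely the integration burden described above: every tool added to the chain means another schema to map.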
Is history repeating itself?
Perhaps it’s a sign that I’m getting old, but to me, the DevOps toolchain market looks eerily like the ITSM tool market from 10-15 years ago.
The ITSM tool market at that time was a very crowded field. Some vendors had very broad-based, encompassing solutions. Other vendors offered very niched or specialized solutions. Those tools rarely talked to each other out-of-the-box. And there were no APIs or defined interfaces that allowed data to flow between the various tools – unless the purchasing organization developed those APIs.
Perhaps toolchain vendors are recognizing the similarities between the ITSM market of the past and today’s situation. Many vendors have developed APIs to facilitate interaction between their tool(s) and other tools on the market. But much of that development is likely driven by relationships between the vendors themselves, not necessarily by customer demand.
We’re also seeing an early wave of consolidation in the DevOps toolchain space. OpsGenie was acquired by Atlassian. VictorOps was acquired by Splunk. And Microsoft has acquired GitHub. Who’s next? Further consolidation in the toolchain market is coming – it’s only a matter of time.
And I find it interesting that many of these acquisitions involve tools focused on the “operate” and “monitor” components of the toolchain – components for which an ITSM solution market already existed.
Why aren’t ITSM tools part of the toolchain?
Compounding the situation, ITSM tools are seemingly not part of the DevOps toolchain. Many DevOps tool vendors disregard existing ITSM tools, insisting on developing new solutions to old problems. And many organizations insist on keeping the DevOps toolchain separate from their ITSM tools – often for no better reason than “just because.”
Are such organizations truly interested in tearing down the “wall of confusion”? Or are they just putting a big band-aid over a poorly designed and maintained ITSM environment – trying to leave their own poor change management practices and “once-and-done” ITSM process implementations in the rear-view mirror as quickly as possible? Perhaps.
Ignoring existing ITSM solutions, however, has a serious impact. By ignoring the ITSM solution, organizations ignore the human impact of the solutions provided to end-users. DevOps toolchains bypass the already-established interface between end-users and the IT organization – the service desk, and processes like incident management and request fulfillment.
One may argue that the “monitoring” component of a DevOps toolchain captures the user experience. But unless that capture includes complaints reported to the service desk, the monitoring component is missing a critical part of effective solution delivery – the experience of the end-user. It may identify, capture, and alert IT to technical issues in the end-user environment, but it doesn’t capture the human feedback that a service desk captures through the ITSM tool.
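To make the point concrete, pairing the technical alerts from a monitoring tool with the incidents users report to the service desk gives a fuller picture of a service’s health than either feed alone. The feeds and field names below are invented for illustration:

```python
# Hypothetical feeds: technical alerts from monitoring, plus
# user-reported incidents logged at the service desk via the ITSM tool.
monitoring_alerts = [
    {"service": "email", "detail": "SMTP queue backlog"},
]
service_desk_tickets = [
    {"service": "email", "detail": "users report messages arriving hours late"},
    {"service": "portal", "detail": "login page confusing after redesign"},
]

def service_health_view(alerts, tickets):
    # Merge both feeds per service: technical signal plus human feedback.
    view = {}
    for item in alerts:
        entry = view.setdefault(item["service"], {"alerts": [], "tickets": []})
        entry["alerts"].append(item["detail"])
    for item in tickets:
        entry = view.setdefault(item["service"], {"alerts": [], "tickets": []})
        entry["tickets"].append(item["detail"])
    return view

view = service_health_view(monitoring_alerts, service_desk_tickets)
# The "portal" complaint appears only in the service desk feed --
# monitoring alone would never have surfaced it.
```

The “portal” entry is the point: it is a genuine user-experience problem with no technical alert behind it, which is exactly the feedback a toolchain loses when it ignores the ITSM tool.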
Could existing ITSM constructs and processes be improved? Yes. But is there a need to reinvent?
Does having more tools really solve the issue?
Which takes me back to the core issue – why are organizations considering and adopting DevOps? Because they want to improve the velocity of solution delivery. Because they want to improve their responsiveness to business change. Because they want to improve collaboration between all stakeholders involved in the idea-to-value chain. Because they want a different organizational culture.
But tools alone do not solve any of these issues. Tools are part of the solution – but when does the cost of multiple (and potentially overlapping) tools – including investment, training, and ongoing support – outweigh their benefit?
Tools won’t fix culture. Tools won’t fix poorly-designed processes – tools only make the execution of a poorly-designed process faster.
So, how many tools do you really need?
Doug Tedder is the principal of Tedder Consulting LLC, and is an accomplished and recognized leader who is equally adept in interactions from senior leadership to day-to-day practitioners.
Doug holds numerous industry certifications in disciplines including ITIL, COBIT, Lean IT, DevOps, and Organizational Change Management. An active volunteer within the IT service management community, Doug is a frequent speaker and contributor at local industry user group meetings, webinars, and national conventions. Doug is also a member, former president, and current board member of itSMF USA, as well as a member of HDI.