You can’t escape the growing interest in, and adoption of, self-service for corporate IT. And there are definitely benefits to be reaped via a number of different self-service capabilities. But is your IT service management (ITSM) organization actually ready for self-service success?
Or, if you’re reading this after an unsuccessful self-service initiative, was your organization ready for its self-service launch? And instead of blaming the self-service technology for the low adoption levels, might it have been more to do with the corporate IT organization’s level of preparedness?
Technology is at best only part of the answer
An organization might have spent a considerable amount of time, money, and thought selecting the right self-service technology. It might have also invested in ensuring that the self-service capability is something that end users would want to use.
But end users still might not use it. Not because they don’t want to use it. Not because it’s difficult to use. But because they can’t find the help they need, or because what they do find doesn’t actually help them.
It’s the equivalent of buying a new car, teaching someone to drive (and the benefits of driving), but failing to fill the tank with fuel.
So self-service readiness is about more than the technology and the education of end users. It’s also about ensuring that there is sufficient fuel, especially for self-help, to power the required end user journey. This is where the concept of level zero solvable (LZS) can help with self-service success.
The need for the “level zero solvable” approach to self-service
Sadly, a common mistake made by corporate IT organizations implementing self-help capabilities is launching a knowledge base before it’s truly fit for purpose.
The knowledge base might house lots of content – knowledge articles created with love – but if the right article can’t be found, understood, and then used for self-help, there might as well be no knowledge articles at all. Plus, there’s the issue of insufficient knowledge article coverage across the spectrum of common end user issues.
This is where LZS can aid IT organizations, helping them to understand their level of preparedness for launching self-help and self-service.
LZS for self-service explained
LZS is a measure – the percentage of incidents that could have been resolved by the end user via self-help. And LZS can be used to gauge the chances of self-service success by predicting the level of end user success with the knowledge base.
If organizations can reduce the risk of end users deciding that the self-service – or more specifically the self-help – capability is useless, then the probability that end users will use it again increases. Conversely, if an end user finds self-help to be of little or no value, then they will most likely not return to the self-service capability.
So those responsible for delivering a new self-service capability need to ensure that the knowledge base is sufficient for the most common, simple IT issues to be self-solvable by end users. It’s definitely a quality over quantity scenario – in that the knowledge base might have sufficient knowledge articles for a service desk agent to resolve the common issues but:
- Can end users find the right articles when searching for help using their own language and terminology rather than IT’s?
- Even if end users do find the right knowledge articles, are they written in a way that allows end users to understand and successfully use the documented resolutions?
Calculating LZS
There are two options for calculating the LZS metric – one might slow down service desk operations, the other might be seen as duplication of effort.
The first requires service desk agents to search the self-help knowledge base, while dealing with end user issues, as though they are the end user trying to solve the issue via self-help. If the end user issue can be resolved using an available knowledge article, then the incident record can be flagged as “LZS.”
The metric is then calculated as the LZS-flagged records as a percentage of the total number of records handled that month. So a service desk will have 40% LZS if four out of every ten issues handled could have been solved by the end user using self-service.
Alternatively, to save service desk agent time and to reduce the operational impact of the LZS approach, the LZS metric can be calculated by project staff on a sample basis. Either way, the higher the LZS metric is, or gets, the higher the probability that end users will use self-service successfully and continue to return in the future.
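As an illustration only, here’s a minimal sketch of that calculation in Python. It assumes a hypothetical export of incident records where each record carries a boolean “lzs” flag set by the agent (or by project staff working from a sample); the field and record names are assumptions for the sketch, not features of any particular ITSM tool.

```python
# Minimal sketch of the LZS calculation described above.
# Assumes a hypothetical list of incident records, each with a boolean
# "lzs" flag set by the service desk agent (or by project staff sampling).

from dataclasses import dataclass


@dataclass
class IncidentRecord:
    incident_id: str
    lzs: bool  # True if the end user could have self-solved it via self-help


def lzs_percentage(records: list[IncidentRecord]) -> float:
    """Return the LZS metric: LZS-flagged records as a % of all records."""
    if not records:
        return 0.0
    flagged = sum(1 for r in records if r.lzs)
    return 100.0 * flagged / len(records)


# Example: four out of ten issues flagged as LZS gives a 40% LZS score.
sample_month = [IncidentRecord(f"INC{i:04d}", lzs=(i < 4)) for i in range(10)]
print(f"LZS: {lzs_percentage(sample_month):.0f}%")  # prints "LZS: 40%"
```

Whether the flags come from agents working every ticket or from a sampled subset, the arithmetic is the same – only the data-collection effort differs.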
However, it’s important to realize that just because there’s an available knowledge article to help resolve the issue, it doesn’t mean that the issue can be flagged as LZS. It only means that it could have been LZS had the knowledge article been better suited to end user needs.
Ultimately, LZS needs to be used honestly, as dishonesty with LZS only increases the chance of self-service launch failure, because knowledge articles intended for self-help either can’t be found or can’t be used in anger by end users.
LZS post-launch
After the launch of a self-service capability, the LZS score will decrease as self-service adoption rises, i.e. the majority of issues that are LZS are now hopefully being resolved through self-help. But the LZS approach doesn’t stop here.
Instead the paradigm is flipped – whereas before launch the requirement was to get LZS as high as possible, post-launch the level of LZS issues hitting the service desk should be minimized (see the diagram below, where the red line shows the self-service launch). The issues that do hit the service desk are then analyzed to identify the need for additional, or improved, knowledge articles and other opportunities to improve the self-service capability.
Image source: HDI “What is LZS?”
So LZS is one way to help ensure that your IT organization is ready for self-help and self-service. What would you recommend?
Simon Johnson
Simon is the UK General Manager at Freshworks, the Google-backed leading cloud-based customer support software company. Simon heads the company’s operations and revenue strategy for the UK, covering IT service management, customer service, and support management. Prior to Freshworks, Simon led global sales teams for Microsoft and Oracle database and development software providers. Simon is a dad of two boys and a keen sportsman, having completed marathons and had England trials for football.