In his book An Introduction to Cybernetics, published in 1956, W. Ross Ashby laid out a framework for understanding control and communication in complex systems. For a book published more than sixty years ago, it still offers insights that are useful today. One of the most important is the Darkness Principle: any sufficiently complex system is impossible for a single person to view and understand in full. For IT service management (ITSM), IT asset management (ITAM), and security teams, the Darkness Principle represents a wider problem: how can we deliver services effectively when we can’t know everything we have? How can we keep those services and devices secure and protected? How can we track those devices and know what elements make them up?
Today, the complexity of systems continues to grow, and with it the number of security threats and risks to those devices and systems. Solving this problem is bigger than any one individual or team – it will rely on collaboration to gather all the necessary data and to apply it across all the different use cases.
How hard is it to know what IT assets you have?
The biggest problem here is the lack of visibility around IT assets. For long-term ITAM professionals, the gap between the hardware assets that exist and the records in the configuration management database (CMDB) has been a perennial issue. While getting insight into network-connected devices is now straightforward, the growth of mobile working has made it more difficult to track laptops and the software installed on them. Building this list of assets is essential, but keeping it accurate and up to date across hardware models, software installations, and version changes is still a challenge.
One of the linked issues here is that there are more devices attached to the network that are not traditional desktops or laptops. The growth of computing around smaller machines like the Raspberry Pi and connected devices that are part of the Internet of Things (IoT) has made it more difficult to know what’s on the network (and the security implications). This is a real problem for companies of all sizes – for example, in 2019 NASA published a report tracing a large data leak at its Jet Propulsion Laboratory division back to an unauthorized Raspberry Pi that had been attached to the network and gone undiscovered.
Alongside this, there’s been an increase in IT assets being hosted in the cloud, and it’s made the job of ITAM harder. Getting insight into all the applications being used – whether they’re corporate apps running on the likes of AWS, Google Cloud Platform, or Azure, or are packaged software-as-a-service (SaaS) applications – is another challenging aspect for asset management.
Today, there’s also another trend to consider. New applications are getting developed and run in software containers – small, lightweight instances that contain only the elements needed to run an application component. Typically, they run on cloud services for as long as they’re needed, and the number of containers can go up or down based on demand for a service from customers or users.
What makes containers more challenging from an ITAM perspective is that they’re ephemeral. Rather than being set up and run for years, containers are requested from an image library, run in the cloud, and then turned off when no longer useful. While they tend to be built on open source software components, which removes some of the software licensing pressures, they can exist for hours or even minutes at a time. From an asset management perspective, this makes managing an accurate list of what’s in place very difficult.
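To illustrate why periodic scans fail here, the sketch below imagines an event-driven register that records containers the moment they start and stop. All names (`EphemeralInventory`, the container ID, the image tag) are invented for the example; this is one possible approach, not a specific product’s behavior:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ContainerRecord:
    """One ephemeral asset: a container that may live for only minutes."""
    container_id: str
    image: str
    started: datetime
    stopped: Optional[datetime] = None

class EphemeralInventory:
    """Hypothetical register fed by container start/stop events."""
    def __init__(self) -> None:
        self.records: dict[str, ContainerRecord] = {}

    def on_start(self, container_id: str, image: str) -> None:
        self.records[container_id] = ContainerRecord(
            container_id, image, started=datetime.now(timezone.utc))

    def on_stop(self, container_id: str) -> None:
        record = self.records.get(container_id)
        if record is not None:
            record.stopped = datetime.now(timezone.utc)

    def history(self, image: str) -> list[ContainerRecord]:
        """Everything that ever ran from this image, even if long gone."""
        return [r for r in self.records.values() if r.image == image]

inv = EphemeralInventory()
inv.on_start("abc123", "registry.example.com/orders-api:2.1")
inv.on_stop("abc123")  # gone minutes later, but the record survives
```

A scheduled scan would likely never observe `abc123` at all; capturing lifecycle events as they happen is one way to keep the asset list honest for workloads this short-lived.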
Take all these trends together – the growth of IoT, the rise of cloud, and the introduction of containers – alongside the continuing problem of keeping an accurate list of assets, and the problem that exists should be clear. However, what can we do to fix the problem?
Collaboration across assets, security, and service – changing the goalposts
One approach is to look at who else within the business requires an accurate and up to date asset list, and why. For example, IT security teams have to keep company assets, from data and intellectual property through to IT assets, secure against outside threats. These teams also have to maintain an accurate list of software and IT vulnerabilities, whether the organization is at risk from them, and then prioritize how to fix or patch them over time.
For ITAM and security teams alike, this data is essential, but it can often be generated in silos. Instead, these teams should collaborate to ensure that they have all the insight into assets that they need. However, IT security teams tend to need real-time insight into device status and software vulnerabilities in order to manage security threats as they develop. Couple this with the increase in interest around risk and security driven by compliance regulations like the European Union’s General Data Protection Regulation (GDPR), and ITAM teams may be able to leverage the additional budgets that IT security teams have access to. On the other hand, ITAM and ITSM teams tend to have more experience and history in building CMDBs and using that data for the ongoing management of software, licenses, and services.
What this means in practice is that these teams should come together around building an accurate list of IT assets and keeping it up to date in real-time. For security, this real-time element is essential for keeping assets protected over time; logically, security should retain and be responsible for the management of this data. For ITAM and ITSM teams, having access to this data can lead to changes in their priorities and key performance indicators (KPIs). Rather than being targeted on putting together a list of assets and keeping it accurate, the future should involve much more focus on how to use that data for license compliance, cost reduction, and asset re-allocation where it’s needed.
This emphasis on how to use data effectively relies on it being up to date and suitable for all teams to use. Rather than one team being in charge and the others relying on them for data handouts, any approach has to be bi-directional. In other words, any security system or CMDB implemented should be able to update any other system used by other teams with new data; if a change takes place in the other system, this should be propagated back via APIs so that the “single source of truth” remains up to date. This should enrich or qualify the data on specific assets so that all the teams involved can benefit from the better insight.
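The bi-directional flow could be sketched roughly as follows. All class and field names here are invented for illustration: each store pushes changes to its peers and passes the origin along so an update doesn’t echo back forever:

```python
class AssetStore:
    """Hypothetical asset store (a CMDB or a security tool's database)."""
    def __init__(self, name: str) -> None:
        self.name = name
        self.assets: dict[str, dict] = {}
        self.peers: list["AssetStore"] = []

    def link(self, other: "AssetStore") -> None:
        """Wire two stores together bi-directionally."""
        self.peers.append(other)
        other.peers.append(self)

    def update(self, asset_id: str, fields: dict, _origin=None) -> None:
        """Merge new fields, then propagate to peers -- but never back
        to the store the change came from, to avoid an echo loop."""
        self.assets.setdefault(asset_id, {}).update(fields)
        for peer in self.peers:
            if peer is not _origin:
                peer.update(asset_id, fields, _origin=self)

cmdb = AssetStore("cmdb")
security = AssetStore("security")
cmdb.link(security)

cmdb.update("laptop-042", {"owner": "jsmith", "os": "Windows 11"})
security.update("laptop-042", {"cve_open": 3})  # enrichment flows back
```

After both updates, each store holds the same enriched record for `laptop-042`: the ownership data originated in the CMDB, the vulnerability count in the security tool, and neither team had to ask the other for a handout.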
This can also help teams cope with one of the biggest problems in ITAM: variance
In the average asset inventory, a vendor name can be rendered in eight different ways, while the same product might exist under twenty different names. Alongside this, you might see further naming complexities develop as technology companies make acquisitions and names evolve over time. This variance makes it more difficult to manage consistency across teams; from a security perspective, it makes correlating potential vulnerabilities more difficult; for ITAM, it complicates the asset management and license compliance process. As part of enriching the data on assets, inventory data should be normalized so that it’s easier to manage variance.
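A minimal sketch of that normalization step, assuming a hand-maintained alias table (the aliases below are illustrative, not a real dataset):

```python
import re

# Illustrative alias table: cleaned-up raw strings -> canonical vendor name.
VENDOR_ALIASES = {
    "microsoft corp": "Microsoft",
    "microsoft corporation": "Microsoft",
    "msft": "Microsoft",
    "international business machines": "IBM",
    "ibm corp": "IBM",
}

def normalize_vendor(raw: str) -> str:
    """Lower-case, strip punctuation, collapse whitespace, then look up."""
    key = re.sub(r"[^a-z0-9 ]", "", raw.lower())
    key = re.sub(r"\s+", " ", key).strip()
    # Fall back to the raw value so unknown vendors are kept, not lost.
    return VENDOR_ALIASES.get(key, raw.strip())

print(normalize_vendor("Microsoft  Corp."))  # Microsoft
print(normalize_vendor("M.S.F.T."))          # Microsoft
```

Real inventories need a much larger, curated table (and fuzzy matching for the long tail), but the principle is the same: collapse the eight renderings of a vendor into one before any team tries to correlate against it.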
Similarly, it can be easy to miss that multiple products covering the same task are in use across a company – leading to extra software spend and additional complications. Let’s take collaboration as an example – Microsoft has provided multiple tools for real-time chat or collaboration between individuals, from the newer Microsoft Teams through to Microsoft Lync, Lync Pro, Communicator, Skype, and Skype for Business. Each of these products will have its own update and support requirements. This ITAM data can answer specific questions around security vulnerabilities or software license compliance; however, it can and should be enriched to answer wider questions from the IT leadership team as well. Used this way, the data solves more business problems over time and delivers more value than each team working only for itself.
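One way to surface that kind of overlap is to map each normalized product to the capability it provides and flag any capability served by more than one installed product. The mapping below is an invented example, not a real catalog:

```python
from collections import defaultdict

# Illustrative product -> capability mapping; a real one would be curated.
CAPABILITY = {
    "Microsoft Teams": "real-time collaboration",
    "Skype for Business": "real-time collaboration",
    "Microsoft Lync": "real-time collaboration",
    "Microsoft Excel": "spreadsheets",
}

def find_overlaps(installed_products: list[str]) -> dict[str, set[str]]:
    """Return capabilities served by more than one installed product."""
    by_capability: dict[str, set[str]] = defaultdict(set)
    for product in installed_products:
        capability = CAPABILITY.get(product)
        if capability:
            by_capability[capability].add(product)
    return {c: p for c, p in by_capability.items() if len(p) > 1}

overlaps = find_overlaps(
    ["Microsoft Teams", "Skype for Business", "Microsoft Excel"])
```

Here `overlaps` would flag real-time collaboration as being covered twice – exactly the kind of finding that feeds cost reduction conversations with IT leadership.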
As companies ask for new services to keep up with their aims and objectives, developing new approaches to delivering on those requests will become more complex. Architecting these services, keeping them secure, and managing how they’re delivered will be a challenge. However, while the Darkness Principle still looms over IT and service design, getting to a single source of truth around IT is possible. While centralizing data on assets and how they’re used can support this, it’ll be collaboration on processes that will help multiple teams meet all their different objectives. While systems are complex, the data we can gather now can help shed more light on how to apply these systems more successfully over time.