
The AI (artificial intelligence) hype train has well and truly left the station. In fact, it departed some time ago, and continues to gather momentum. It feels as though there aren't many people left who haven't asked ChatGPT a burning question. It's one of many instances of generative AI, which is now capable of acing school papers, creating photographs that can fool prize juries and even writing passable code, says Mat Clothier, CEO of Cloudhouse.
And with the media whipping up the AI storm, it’s little surprise that organisations are scouring the market for AI-driven solutions in every application imaginable. But is AI the answer to all of our problems, even when it comes to complex configuration management within organisations?
Ownership of IT decisions
Configuration management has traditionally been a complex and time-consuming process for IT professionals. Continuous tweaks to configurations are needed to ensure that optimal performance is delivered to both employees and customers where applicable. It’s an acute challenge for IT teams that are already thin on the ground due to the technology skills shortage. Naturally, professionals have looked towards solutions to help handle this process.
AI, and in particular ML (machine learning), can certainly play a role. The technology can analyse volumes of data that humans simply wouldn't have time to process. The patterns and trends that emerge can help inform employees on the best configuration decisions to ensure optimisation or even to avoid downtime. But it isn't well suited to devising IT actions on its own. One of the key reasons for this is the legality of product outputs and IT actions generated by AI.
There are already extensive discussions in the media about the content produced by ChatGPT, and who owns the copyright or intellectual property rights associated with its output. Similarly, it becomes a murky issue for IT departments if AI becomes fully responsible for configuration management.
Flawed decision making
Legal issues can also become a significant concern if AI makes a decision that turns out to be flawed. Generative AI applications, for example, have been shown to deliver incorrect information, and can even exhibit bias in their outputs. We're not yet at the stage where AI can be relied on to make the right decision every time.
This can have significant ramifications for IT departments. Failing to configure applications correctly in line with regulations can not only lead to inefficient operations, but can also create non-compliance that incurs hefty fines. Applications that don't comply with standards around data privacy and security, such as GDPR, can lead to monetary penalties running into the millions in the worst-case scenarios.
The Information Commissioner's Office has the power to fine businesses £17.5 million (€20.35 million) or 4% of total annual worldwide turnover in the preceding financial year, whichever is higher. Even allowing AI to access sensitive customer information for analysis can be a compliance grey area: information could inadvertently be leaked and fall into the hands of cyber criminals.
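As a hypothetical illustration of that "whichever is higher" arithmetic (the figures are the ones quoted above; the function name and example turnovers are invented for the sketch):

```python
# Sketch of how the UK GDPR maximum fine is determined:
# the higher of the £17.5 million fixed cap or 4% of the
# preceding year's annual worldwide turnover.

FIXED_CAP_GBP = 17_500_000  # statutory fixed cap quoted above


def max_gdpr_fine(annual_worldwide_turnover_gbp: float) -> float:
    """Return the maximum possible fine: the greater of the fixed
    cap or 4% of annual worldwide turnover."""
    return max(FIXED_CAP_GBP, 0.04 * annual_worldwide_turnover_gbp)


# For a firm turning over £1 billion, 4% (£40m) exceeds the cap.
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
# For a firm turning over £100m, the £17.5m fixed cap applies.
print(max_gdpr_fine(100_000_000))    # 17500000.0
```

For large enterprises, the turnover-based figure dominates, which is why the headline penalties cited in the press can run well beyond £17.5 million.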
Where the focus needs to lie
AI can help humans make key decisions in the IT department, but advice on what actions should be taken is as far as it should go. These solutions shouldn’t be left to handle configuration management, a complex undertaking for the vast majority of businesses. Instead, there are solutions on the market that are specifically designed to handle such tasks while fully keeping humans in the loop.
Vendor-agnostic monitoring tools are specifically designed to provide insights and deliver integrity validation alerts, with no ambiguity or legal murkiness over IT decisions. They empower IT professionals to automate security and compliance assessments, improve resilience by checking configurations against the strictest policy standards, and enable change management to avoid configuration drift.
Configuration drift is commonplace: systems diverge over time as different users manage them in different ways, which can undermine expectations of how services behave and how secure they are. It's a complex and nuanced phenomenon that is likely to baffle even the most sophisticated AI solutions. Instead, monitoring tools can automate compliance assessment and reporting, continuously auditing systems and recording the configuration state over time.
Additionally, these tools can automatically test assets against best practices, company policy or regulatory standards. This isn't a one-time task: IT assets are continuously audited so that, if they do fall out of compliance, issues can be rectified as quickly as possible. Businesses can avoid the risk of AI treating compliance as a tick-box exercise, and instead treat it as something that must be ensured to protect operations going forward.
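A minimal sketch of the drift-checking idea described above, assuming configurations can be reduced to key/value settings (the baseline, setting names and values here are invented for illustration, not any vendor's API; real tools would pull snapshots from asset inventories rather than literals):

```python
# Hypothetical drift check: diff a recorded configuration snapshot
# against a policy baseline and report any deviating settings.

# Invented example baseline representing company policy.
baseline = {
    "tls_min_version": "1.2",
    "password_min_length": 12,
    "audit_logging": True,
}


def find_drift(snapshot: dict) -> dict:
    """Return the settings in `snapshot` that deviate from the
    baseline, mapping each drifted key to expected vs actual."""
    return {
        key: {"expected": expected, "actual": snapshot.get(key)}
        for key, expected in baseline.items()
        if snapshot.get(key) != expected
    }


# A snapshot taken after an admin weakened a password rule:
snapshot = {
    "tls_min_version": "1.2",
    "password_min_length": 8,
    "audit_logging": True,
}
print(find_drift(snapshot))
# {'password_min_length': {'expected': 12, 'actual': 8}}
```

Running such a check on a schedule, and recording each snapshot, gives the continuous audit trail the article describes: any non-empty result is a drift alert that a human can act on.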
Putting AI into perspective

In the coming years, AI will have its place in the solutions portfolio of businesses. Undoubtedly, more employees will leverage its capabilities to make sense of vast amounts of data and inform key decisions with those insights. However, the current limitations of AI mean that it isn't suited to handling the big decisions itself, and letting it do so opens a can of regulatory and compliance worms. We're still a long way from the sentient AI portrayed in Hollywood films.
But that doesn't mean IT professionals can't deploy tools that enable them to find out what they have in their estate, identify what's out of date or non-compliant, and automatically achieve compliance by moving towards best-practice configuration. In a world where new IT solutions are cropping up all the time, one specifically designed to manage and configure assets is the best starting point for gaining a complete view of a complex IT landscape. At the heart of these deployments remains human expertise, which is here to stay for some time yet.
The author is Mat Clothier, CEO of Cloudhouse.