Artificial intelligence risks losing trust if it doesn’t offer explanations

Over the past few decades, artificial intelligence (AI) has gone from science fiction to an integral part of everyday business operations. According to a survey, “43% of organisations in the United Kingdom believe AI will play a big role in their operations.” 

Looking out a bit further on the horizon, Gartner predicts that “by 2023, 40% of infrastructure and operations teams will use AI-augmented automation in enterprises, resulting in higher IT productivity.” As companies proceed from narrow AI to general AI — and begin automating not only processes but also decisions — it’s vital that AI tools explain their behaviour, says Ramprakash Ramamoorthy, product manager at Zoho Labs.

The importance of explainable artificial intelligence cannot be overstated: AI tools must justify their decisions with detailed explanations. If an AI tool fails to show how it reached a given decision, users may lose faith in the tool altogether.

When introducing AI tools into your business, it is important to retrofit AI into your existing workflows. Once processes are successfully automated, you can begin to automate decisions as well. Even if you have a 100-member team specialising in anomaly detection, computer vision, natural language processing (NLP), and other AI techniques, every AI decision should require approval from a human, at least until you have fully honed the process. Ideally, your AI tools should be accurate at least 80% of the time, and every automated decision should come with an explanation as well as a confidence interval.
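To make that approval loop concrete, here is a minimal Python sketch of a decision gate. The `AIDecision` class, the review queue, and the 0.80 threshold are hypothetical illustrations of the idea, not code from any particular product.

```python
from dataclasses import dataclass

# Hypothetical structure: every automated decision carries an
# explanation and a confidence score alongside the action itself.
@dataclass
class AIDecision:
    action: str          # what the AI proposes to do
    explanation: str     # plain-language reasoning for the decision
    confidence: float    # model confidence in [0.0, 1.0]

CONFIDENCE_THRESHOLD = 0.80  # the "accurate at least 80% of the time" bar

def route_decision(decision: AIDecision, human_queue: list) -> bool:
    """Apply a decision only if it is both confident and explained;
    otherwise send it to a human reviewer. Returns True if auto-applied."""
    if not decision.explanation:
        # An unexplained decision is never applied automatically.
        human_queue.append(decision)
        return False
    if decision.confidence < CONFIDENCE_THRESHOLD:
        human_queue.append(decision)
        return False
    return True  # safe to automate, with the explanation kept for audit

# Usage: a low-confidence decision lands in the human review queue.
queue = []
d = AIDecision("restart service X", "CPU anomaly matches past outage pattern", 0.64)
print(route_decision(d, queue), len(queue))  # False 1
```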

Why is explainable AI so important?

In a recent report, Forrester notes that “45% of AI decision-makers say trusting the AI system is either challenging or very challenging.” Hence the need for transparent, easily understandable AI models: every decision an AI makes needs a readily available explanation.

Acknowledging this, you should offer pre-built explanations for all of your AI decisions. For example, suppose you're using NLP and chatbots to streamline processes for technicians: if a particular request is raised and routed to the same sysadmin at the same time every week, the AI recognises the pattern, automates the process, and explains why it did so.
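One way such a pattern could be surfaced is sketched below. The ticket fields and the "same request, same weekday, same hour, same assignee" rule are illustrative assumptions, not a description of how any specific chatbot works.

```python
from collections import Counter

# Illustrative ticket records: (request_type, weekday, hour, assignee)
tickets = [
    ("password_reset", "Mon", 9, "sysadmin_a"),
    ("password_reset", "Mon", 9, "sysadmin_a"),
    ("password_reset", "Mon", 9, "sysadmin_a"),
    ("vpn_access", "Tue", 14, "sysadmin_b"),
]

def recurring_patterns(tickets, min_count=3):
    """Group tickets by (type, weekday, hour, assignee) and flag any
    combination seen often enough to be worth automating."""
    counts = Counter(tickets)
    for (req, day, hour, who), n in counts.items():
        if n >= min_count:
            # The explanation is generated with the suggestion, not after.
            yield {
                "suggestion": f"auto-route '{req}' to {who}",
                "explanation": (
                    f"'{req}' was raised {n} times, always on {day} "
                    f"around {hour}:00 and always assigned to {who}"
                ),
            }

for rule in recurring_patterns(tickets):
    print(rule["suggestion"], "--", rule["explanation"])
```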

Through “explanation-ready” AI features, you can effectively assist IT teams with a host of security concerns, including log management, insider threat analysis, user behaviour analysis, and alert fatigue management. And through AI monitoring tools, it’s easier than ever to predict anomalies, outages, combinatorial anomalies, and the root causes of outages. For every one of these automated discoveries and decisions, an explanation of the chosen course of action must be provided, along with confidence intervals.

Robust DevOps and IT operations solutions can use AI tools to assess past user behaviour and then ascertain whether a given action is anomalous. By accounting for seasonality, changes in schedules and processes, and time of day, these AI tools effectively predict anomalies and outages, ultimately saving your IT teams copious amounts of time and energy.
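To make the seasonality point concrete, the sketch below scores each new observation against a baseline built per weekday-and-hour bucket, so a value is judged against the same time slot in previous weeks rather than a global average. The data, the z-score rule, the threshold of 3, and the crude confidence proxy are all assumptions for illustration.

```python
import statistics
from collections import defaultdict

# Baseline: metric history bucketed by (weekday, hour).
history = defaultdict(list)

def observe(weekday: int, hour: int, value: float):
    history[(weekday, hour)].append(value)

def is_anomalous(weekday: int, hour: int, value: float, z_threshold=3.0):
    """Flag a value as anomalous relative to its seasonal bucket,
    returning the verdict plus an explanation and a rough confidence."""
    baseline = history[(weekday, hour)]
    if len(baseline) < 5:
        return False, "insufficient history for this time slot", 0.0
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1e-9  # avoid division by zero
    z = (value - mean) / stdev
    anomalous = abs(z) > z_threshold
    explanation = (
        f"value {value:.1f} is {z:.1f} standard deviations from the "
        f"mean of {mean:.1f} seen at this weekday/hour in past weeks"
    )
    confidence = min(abs(z) / (2 * z_threshold), 1.0)  # crude proxy
    return anomalous, explanation, confidence

# Usage: Monday 09:00 response times of ~200 ms, then a 900 ms spike.
for v in [195, 201, 198, 204, 199, 202]:
    observe(0, 9, v)
print(is_anomalous(0, 9, 900))
```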

As an example, suppose your website monitoring tool notices that a web page loads slowly at the same time each week when accessed from a certain location. The AI recognises this pattern and automatically raises a ticket with the web manager via your service desk software. By integrating with multiple tools, AI automates processes, saves time, and improves productivity.
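The hand-off to the service desk might look like the sketch below. The `create_ticket` helper and its payload fields are hypothetical stand-ins for whatever integration API your service desk software actually exposes; the point is that the explanation and confidence travel with the ticket.

```python
# Hypothetical integration: the monitoring side packages the finding,
# the explanation, and the confidence into one ticket payload, so the
# web manager sees *why* the ticket was raised, not just that it was.
def create_ticket(service_desk, payload: dict) -> None:
    # Stand-in for a real service-desk API call (e.g. an HTTP POST).
    service_desk.append(payload)

def raise_slow_page_ticket(service_desk, page, location, pattern):
    payload = {
        "assignee_role": "web manager",
        "summary": f"Recurring slow loads: {page}",
        "explanation": (
            f"{page} loaded slowly from {location} at the same time "
            f"in {pattern['weeks_observed']} consecutive weeks"
        ),
        "confidence": pattern["confidence"],
    }
    create_ticket(service_desk, payload)

desk = []  # stands in for the actual service desk
raise_slow_page_ticket(desk, "/checkout", "Frankfurt",
                       {"weeks_observed": 4, "confidence": 0.9})
print(desk[0]["explanation"])
```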

Again, the important point to drive home is that the AI must be explainable. AI tools can suggest certain decisions; however, if these decisions don’t come with pre-built explanations, people will lose faith in the tools.

About the author:

Ramprakash Ramamoorthy, product manager for AI and ML at Zoho Labs, is in charge of implementing strategic, powerful AI features at ManageEngine to help provide an array of IT management products well-suited for enterprises of any size. Ramprakash is a passionate leader with a level-headed approach to emerging technologies, and is a sought-after speaker at tech conferences and events.
