Smart thinking helps CISOs boost AI benefits and curb risks, says Gartner

Jeremy D’Hoinne of Gartner

The spread of artificial intelligence (AI) doesn’t come without risks. Gartner estimates that by 2022, the replacement of conventional security approaches by various forms of ML will make a third of organisations less secure.

That, says Jeremy D’Hoinne, research vice president at Gartner, is why chief information security officers (CISOs) need to not only understand the benefits of AI but also its potential downsides.

Myths and misconceptions

Our view of AI can be coloured by preconceived notions about the technology. These myths and misconceptions can undermine the activities of security professionals if left unchallenged. It’s true that the majority of advanced AI algorithms can quickly surface important information about vulnerabilities, attacks, threats, incidents and responses.

In the first instance, this can result in a noticeable strengthening of a security team’s capabilities. Other tools, such as probabilistic reasoning (often generalised as ML) and computational logic (commonly referred to as rule-based systems), can also deliver substantial benefits.

Supervised machine learning

Supervised machine learning (ML) is a well-established item in the data security toolbox for threat detection, while unsupervised ML and deep learning (DL) are growing in popularity for uncovering post-breach anomalies. Most of the value we get from “AI” today comes from supervised machine learning. This is not a competition between algorithms; it is teamwork.
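That teamwork can be made concrete with a deliberately simple sketch. The toy data, thresholds and function names below are invented for illustration only, not drawn from any Gartner research or real security product: a supervised rule learned from labelled events catches known attack patterns, while an unsupervised statistical check flags novel, post-breach behaviour that no label anticipated.

```python
# Illustrative sketch only: pairing supervised detection (learned from
# labelled examples) with unsupervised anomaly detection (no labels).
# All data and thresholds here are invented for demonstration.
from statistics import mean, stdev

# --- Supervised: learn a decision boundary from labelled login-failure counts ---
labelled = [(2, "benign"), (3, "benign"), (40, "threat"), (55, "threat")]
benign_max = max(count for count, label in labelled if label == "benign")
threat_min = min(count for count, label in labelled if label == "threat")
threshold = (benign_max + threat_min) / 2  # midpoint between the two classes

def classify(failures: int) -> str:
    """Supervised-style decision derived from labelled examples."""
    return "threat" if failures > threshold else "benign"

# --- Unsupervised: flag anomalies against an unlabelled baseline ---
baseline = [10, 12, 11, 9, 10, 11, 10, 12]  # e.g. normal daily outbound MB

def is_anomaly(value: float, history=baseline, z_cut=3.0) -> bool:
    """Flag values far from the baseline distribution (no labels needed)."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) / sigma > z_cut

print(classify(50))    # known-pattern detection: "threat"
print(is_anomaly(80))  # novel-behaviour detection: True
```

Neither technique replaces the other: the classifier only recognises patterns it was trained on, while the anomaly check catches the unexpected but cannot name it. That is the teamwork the article describes.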

But what are the potential risks? Many algorithms ingest large amounts of data, so CISOs have to understand the implications of using AI with regard to data privacy. Also, many organisations have adopted AI without evaluating the systems they already use, which may be performing a more effective job of security and risk management than a new tool could achieve. Put simply, ML is not immune to attacks and AI should not be treated as a protection panacea. There is no guarantee that AI is better than alternative techniques, and it can be fallible, offering incorrect and incomplete conclusions. Also, some problems cannot be solved by throwing more data at them. ML will remain one of the tools in an enterprise’s security arsenal.

CISOs must focus on the desired outcomes and how these relate to their security strategy. A good first step is to perform a quick self-assessment. Does your team have a sufficient understanding of AI to implement it successfully? Will it solve the problems you are trying to solve and, if so, what sort of implementation works best? Finally, what will the likely impact of AI be, and how do you measure that value?

AI can improve a security team’s effectiveness

The birth of AI was accompanied by some exaggerated expectations about what it could achieve coupled with fears about job security; the assumption was that AI would replace humans en masse. In 2020, this should no longer be a concern – during the course of the year, AI will have a positive impact on jobs, creating 2.3 million roles while only removing 1.8 million. AI also has the potential to improve a security team’s effectiveness, although the idea that it can provide organisations with the ability to predict attacks is simply nonsense.

Because enterprises will deploy ML as a feature of broader platforms, CISOs should set the expectation that the benefit will be real but incremental. AI will augment existing tools, processes and teams; it will not replace them.

With all the hype surrounding AI, it’s easy to forget that it is still an emerging technology. Moreover, it shouldn’t be a CISO’s role to determine the value of an emerging tool to their organisation – the technology should demonstrate that itself, over time. Adopting this mindset will help ensure that security and risk management leaders do not commit to inappropriate AI strategies.

Until AI realises its full potential, CISOs need to balance wariness of change with an understandable anxiety about missing out if they are to achieve a practical AI strategy. Play safe to stay safe.

The author is Jeremy D’Hoinne, research vice president, Gartner.
