
In less than a year, large language model (LLM) AI tools like ChatGPT and Google Bard have had a powerful impact on the business world. Recent research indicates that most businesses have experimented with the tools, and many are now regularly integrating them into their workflows, says Max Vetter, VP of cyber at Immersive Labs.
By stark contrast however, the highly accessible and flexible nature of LLM tools means they can also be used to aid in cyberattacks. An opposing force to legitimate businesses uses, criminals can exploit AI to construct and execute attacks faster and more efficiently and explore new techniques.
Organisations will need to adapt their defences as AI-powered attacks become more common. Alongside new security tools, businesses must also concentrate on preparing their workforce for a new era of AI-enhanced threats.
As attacks become both smarter and swifter, it becomes imperative to consider how businesses can adapt their workforce training and resilience strategies to counter these novel challenges. Further, alongside external cyber threats, the internal use of AI can also create security risks if staff aren’t given proper guidance through policies and training.
How has AI influenced the cyber threat landscape?
A wide range of cyberattack opportunities have opened up over the last 12 months, with the accessible interface of these tools meaning that even relatively unskilled criminals can get on board.
This is particularly true for social engineering attacks, which exploit human psychology rather than technical vulnerabilities.
Highly targeted spear phishing attacks are one of the most insidious cyber threats today. A skilled attacker can craft an email that usurps the identity of a trusted contact and exploit trusted relationships to deceive the target into sharing data, downloading malware, or any number of harmful results.
Constructing one of these attacks usually requires a reasonable amount of time and effort. The attacker must comb through available resources such as company sites and social media profiles to learn about their target and the individual they will be mimicking. The more they put into understanding their personality and position, the more convincing the resulting malicious emails will be.
Now with the right prompts, AI can craft emails that use flawless language and mimic specific communication styles. These emails can be alarmingly persuasive, deploying psychological tactics such as urgency or reciprocity and even using publicly available information to tailor attacks to particular individuals or organisations. The subtlety of these threats poses a significant risk, especially for organisations that manage vast amounts of sensitive data. Further, tools can accomplish all of this in a matter of minutes. Without the need for painstaking research, threat actors can launch highly targeted attacks far more quickly and in greater numbers.
While many AI operators have implemented safeguards to block obviously malicious requests, threat actors can get around them with more subtle wording, or use prompt injection to hijack the output. For example, we developed a prompt injection lab using OpenAI’s API to show first-hand how prompt injection can exploit these tools.
Another potential risk is the ‘function calling’ option OpenAI added to ChatGPT in June which enables the tool to return data in a structured format usable by other applications. There is a risk that threat actors could use prompt injection to exploit the function to access information about applications, or even perform command injection or SQL injection attacks against the infrastructure.
Internal AI use also presents risks
AI tools can pose a risk even without a threat actor getting involved. There are deepening concerns about the way tools learn from data and the security of sensitive information. In May, electronics giant Samsung made the surprising move of banning the popular ChatGPT tool outright after an apparent data leak. While full details have yet to be revealed, it appears that on several occasions, workers input confidential source code into the LLM tool to assist with their coding. As ChatGPT will retain information for future learning and modelling, Samsung fears that the sensitive information was leaked as a result.
Organisations need to get ahead of issues like this by developing internal AI policies that guide how the tools should and should not be used. This should pay particular attention to data handling, ensuring that all workers understand how the tools absorb and retain information.
There should be clear policies and documented processes on what kinds of data and activities are appropriate for AI tools, and which are off limits. These policies should be backed up by a key person or team with responsibility for coordinating and managing guidelines and limits.
Why workforce resilience is key
When it comes to external AI threats, organisations will need to be confident that their employees have the awareness and training to stand against a new wave of more sophisticated attacks. Traditional training may no longer be enough to ensure cyber workforce resilience.
Even before the added threat of AI, cyber workforce resilience has long been a struggle. Cybersecurity in general is already a difficult issue for many organisations to pin down recent research from Immersive Labs found that 55% of directors lacked enough data to accurately gauge their cyber preparedness.
The human side of security is even more difficult to assess. ‘Soft’ skills like knowledge and alertness can be very vague without the right approach and metrics. As such, human security will often be overlooked in favour of more easily measured technical approaches. Immersive’s research found that, while most companies are aware of the link between cyber workforce resilience and organisational success, only just over half (58%) believed they make effective use of this fact.
Sidelining human security is already a mistake, but the risk will grow more pronounced as AI enables threat actors to launch increasingly effective social engineering attacks targeting the workforce.
A hands-on approach is essential
Traditional training often lacks the dynamism to prepare staff for fast, sophisticated threats. This shortcoming is going to become even more pronounced as AI increases both the pace and the punch of attacks.
Skills development needs to be more engaging and realistic, striving to capture the stressful nature of a rapidly unfolding cyberattack.

Simulation exercises are one of the best ways to achieve this. These exercises use real situations to mimic the tactics of attacks, offering staff a realistic understanding of the challenges they face. Simulations can be adapted to reflect an organisation’s unique risk profile, as well as quickly incorporating new tools and tactics as attackers continue to evolve. This makes them ideal in the face of fast-moving, ever-changing AI-enhanced threats.
An initial risk assessment can identify skill gaps and areas of weakness, providing a roadmap for targeted upskilling. Given the rapid advancements in AI, continuous learning is essential, and a cycle of regular training updates ensures the workforce remains up-to-date and equipped to combat evolving threats effectively.
By adopting this hands-on approach, organisations can build a resilient and adaptable workforce, well-prepared to face the complexities of AI-enhanced risks.
The urgency of the situation cannot be overstated. As AI continues to evolve, so too will the threats it enables. This calls for immediate action from decision-makers, particularly those involved in digital transformation initiatives. The focus must shift from merely reacting to threats to proactively preparing the workforce for the challenges that lie ahead.
The author is Max Vetter, VP of Cyber at Immersive Labs.
Follow us and Comment on Twitter @TheEE_io