The dark side of Artificial Intelligence - The EE


[Image: hacker with laptop. Image by Freepik]

Cyber criminals can hack into companies’ systems more easily and efficiently with AI

As in every industry, artificial intelligence (AI) has brought huge changes to cyber crime in recent months. According to threat intelligence from cybersecurity firm BlueVoyant, attackers are increasingly using modern technologies to write malicious code and to craft highly persuasive phishing emails. In the near future, we can expect to see the emergence of AI-enabled malware, the development of document forgery, and the rise of disinformation campaigns, among other things.

One of the key trends this year is the explosion of AI. Companies in almost every industry are looking to make the most of AI to support their own operations. Cyber crime is no exception, with AI becoming an integral part of the work of malicious attackers. It is worth being aware of the activities attackers can use these new technologies for, as that awareness makes them easier to defend against.

AI tool passwords for sale

ChatGPT logins have become valuable: stolen credentials for the service are sold on the dark web in the same way as credentials for other online services. Cyber criminals typically collect login credentials, consisting of an email address and password combination, using information-stealing software designed to harvest sensitive data from unprotected devices. Such software can be deployed easily, for example, if a user is running an older version of the operating system or has disabled automated protection on their device.

Many users register with OpenAI using their company email address. BlueVoyant’s threat intelligence has observed that this type of access data is sold on the dark web at a slightly higher price than data associated with private email addresses.

WormGPT and other open tools

ChatGPT is designed to prevent its use for illegal activities. However, there are also AI tools that are not bound by such rules. WormGPT, for example, is a service whose creators say it is designed to support the work of security professionals, allowing them to test the kind of malware such a tool can create so that they can defend against it more effectively. According to the warning on the site, the creators do not support or recommend using the tool to commit crimes. The fact remains, however, that it can be used for illegal activities. In addition, BlueVoyant’s threat intelligence has observed a variant, available as a subscription on the dark web, that is built for malicious purposes: it can write malicious code in various programming languages to steal cookies or other useful information from unsuspecting users’ devices.

WormGPT can also support phishing campaigns, as attackers can use it to write very persuasive messages with sophisticated language and wording, making a scam harder to spot. It can also help attackers find legitimate services to abuse for illegal ends, a good example being SMS text messaging services that can send large volumes of messages, including for phishing campaigns.

Smart malware and counterfeiting are coming

BlueVoyant’s threat intelligence suggests that other trends are likely to emerge in cyber space in the near future as AI takes hold. New technologies will likely allow cyber criminals to create AI-enhanced malware that steals sensitive data and bypasses anti-virus software while operating and making decisions intelligently on its own. Attackers will then have less need to communicate with their malware, giving them a better chance of remaining hidden.

AI will also likely serve cyber criminals well in document forgery. As more and more transactions are done online by presenting photos of identity documents, the importance of document verification grows. Advanced AI tools make it easier for criminals to forge documents that pass the automated checks of online systems and to use them to commit illegal acts. For example, attackers can more easily open a bank account with fake documents, which they can then use to move money obtained through criminal activity.

As with phishing attacks, disinformation campaigns can be bolstered when AI systems are prompted well. AI tools can help malicious individuals spread convincing false information in a range of languages.

AI isn’t coming to replace you

A common concern about artificial intelligence across industries is that it makes human labour redundant. In the field of cyber crime, such fears have not yet materialised, according to BlueVoyant’s experts. On the contrary, the IT skills shortage has spilled over into the criminal underground, and there is a marked increase in demand for professionals who understand generative AI and are willing to put their skills at the service of illegal activities.

As global businesses continue to adopt AI tools into their regular workflows, the “dark side” of AI will also see a spike in activity. Security teams must prepare for the coming increase in AI-fuelled cyber-threat activity, whether advanced phishing attacks, malware, or large-scale password theft. In this way, AI may also create increased demand for cybersecurity professionals.

Article by Balazs Csendes, BlueVoyant manager for Central and Eastern Europe

