ChatGPT, the most popular chatbot in the world, is being used by cybercriminals to create new varieties of malware.
Cybersecurity firm WithSecure has confirmed that it has found examples of malware generated with the well-known AI chatbot in the wild. What makes ChatGPT particularly dangerous is that it can generate countless malware variations, making them difficult to detect.
Bad actors can simply give ChatGPT examples of existing malware code and instruct it to create new variations from them, allowing malware to proliferate without requiring nearly the same amount of time, effort, and expertise as before.
For good and for bad
The news comes as talk of AI regulation abounds, aimed at preventing the technology from being used for malicious purposes. There were essentially no laws governing the use of ChatGPT when it launched last November, and within a month it had already been hijacked to write malicious emails and files.
There are internal safeguards within the model designed to block nefarious prompts from being executed, but cybercriminals have found ways to circumvent them.
Juhani Hintikka, CEO of WithSecure, said that artificial intelligence has typically been used by cybersecurity defenders to find and remove malware handcrafted by cybercriminals.
But now, with the free availability of powerful AI tools like ChatGPT, the tide seems to be turning. Remote access tools have long been abused for illegal purposes, and now artificial intelligence is as well.
Tim West, Head of Threat Intelligence at WithSecure, added that “ChatGPT will support software engineering for better or for worse, and will enable and lower the barrier of entry for cybercriminals to develop malware.”
And while the phishing emails that ChatGPT can write still tend to be detected by humans, as LLMs become more advanced it may be harder to prevent such scams in the future, according to Hintikka.
Moreover, as ransomware attacks grow more effective at an alarming rate, cybercriminals are reinvesting their profits and becoming more organized, expanding operations through outsourcing and deepening their knowledge of artificial intelligence to launch more effective attacks.
Hintikka concluded that, looking at the future cybersecurity landscape, "it's going to be a game of good AI versus bad AI."