How ChatGPT is Turbocharging Phishing Capabilities

Jun 15, 2023

Could you tell if this article was created entirely by artificial intelligence?

I could, of course, tell you that it isn’t, but then again, that’s exactly what modern-day cybercriminals are doing to try to pull the virtual wool over your eyes.

The scary truth is that chatbots have become so good at mimicking human communication patterns that it’s now virtually (pardon the pun) impossible to tell them apart from a real human. And while this has generated a lot of debate around their use and their impact on society, no such debates are happening amongst cybercriminals. Hackers have embraced these powerful tools with open arms and are using them to great effect, gaining an advantage they’ve never had before.

Phishing with bad bait

Until only recently, phishing emails were relatively easy to detect. From awkward syntax and grammatical errors to misspelling the name of the business or government agency they purported to come from, these con-communications would generally set off alarm bells almost instantly.

With most cybercrime groups based in countries where English is not widely spoken, it makes sense that phishing emails were rather rudimentary. And even if hackers did speak some English, writing a convincing email in the formal style of an Australian financial institution or the ATO is a tall order.

These phishing emails and texts banked on you being too distracted or panicked by their urgent call to action to notice the glaring mistakes. Maybe you were working to a fast-approaching deadline while also on an important call, so when your “bank” emailed, informing you that your account had been compromised and you needed to verify it ASAP, you did what it said without giving it a second thought.

And even if nearly everyone who received a phishing email immediately binned it, the hackers cast a wide enough net to make it worth their while.

The shiny new tacklebox

Cut to late 2022 and the launch of ChatGPT, a sophisticated new chatbot that uses AI to generate responses to prompts in a natural, human-like way. And while marketing teams and coders were thinking of all the work they could handball to this powerful new tool, hackers were even more excited by the possibilities this free software unlocked. 

With a few simple directions, ChatGPT can create an email from whoever the hackers want in mere seconds, written in perfect Australian English, with perfect sentence structure and no grammatical errors.

OpenAI, ChatGPT’s developer, anticipated this potential misuse and prevented the chatbot from responding to requests for phishing emails or other illegal exploits. But, as always when it comes to cybercrime, there are simple workarounds that hackers are exploiting. For example, while ChatGPT won’t write a phishing email, it has no problem creating an email from a CEO asking someone in finance to transfer some money to a client.

And with some carefully chosen prompts, these emails can be incredibly detailed, including repeated – but natural – use of the recipient’s name, placeholders for bank account details to be added, and even some believable information about the fictitious company.

ChatGPT can become even more powerful when fed with real world information in a more targeted, spear phishing attack. As with most AI software, the higher quality the input, the higher quality the output – and hackers are studying how to most effectively use ChatGPT and other AI models to create incredibly detailed, sophisticated, and organically written emails to target individuals and businesses.

Detecting the undetectable

So, if hackers are now able to create near perfect phishing emails in less than the time it’s taken you to read this sentence, what can be done to stop them in their tracks?

Assume the worst: A healthy dose of scepticism can be incredibly beneficial when it comes to tightening your cybersecurity. Given how sophisticated phishing emails can be these days, be vigilant when opening an email – particularly if it includes a request for information or asks you to click a link. Even if it appears to come from the sender’s genuine email address, it may not be them sending it, thanks to the ease with which emails can be spoofed. The best line of defence is caution.
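To make the spoofing point concrete, here is a minimal sketch of the kind of header-level checks a cautious reader (or a simple mail filter) can apply. The sample message, addresses, and domains below are entirely made up for illustration; it uses only Python’s standard `email` library, and real mail filtering is far more involved than this.

```python
import email
from email import policy

# A fabricated raw message for demonstration. In practice you would view the
# raw source via your mail client's "show original" / "view source" option.
raw_message = """\
From: "Your Bank" <alerts@examplebank.com.au>
To: you@example.com
Subject: Urgent: verify your account
Authentication-Results: mx.example.com;
 spf=fail smtp.mailfrom=examplebank.com.au;
 dkim=none
Reply-To: support@examp1ebank-verify.net

Your account has been compromised. Click here to verify.
"""

def spot_red_flags(raw):
    """Return a list of header-level warning signs found in a raw email."""
    msg = email.message_from_string(raw, policy=policy.default)
    flags = []

    # Receiving servers record SPF/DKIM results in Authentication-Results;
    # a failure means the sending server wasn't authorised for that domain.
    auth = (msg.get("Authentication-Results") or "").lower()
    if "spf=fail" in auth or "dkim=fail" in auth:
        flags.append("sender failed SPF/DKIM authentication")

    # A Reply-To pointing at a different domain than From is a classic
    # spoofing tell: replies go to the attacker, not the displayed sender.
    reply_to = msg.get("Reply-To", "")
    from_addr = msg.get("From", "")
    if reply_to and reply_to.split("@")[-1].strip(">") not in from_addr:
        flags.append("Reply-To domain differs from From domain")

    return flags

print(spot_red_flags(raw_message))
```

For the fabricated message above, both checks fire: the SPF failure and the mismatched Reply-To domain. None of this replaces human caution – headers can be inspected, but the safest habit remains treating unexpected requests with suspicion.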

Go straight to the source: While AI might be able to mimic a colleague with near perfect accuracy, it can’t create a body double…for now, at least! If you receive an unprompted email asking for something that doesn’t seem right, pop by their office or pick up the phone and confirm they did indeed send it themselves. This will let you know for sure whether the request or link is genuine and safe or raise the alarm that your organisation may have been the victim of a targeted attack.

Ready your response: The best attitude to take is that it’s not a matter of if a phishing attack is successful, but when. Accept that as hackers’ approaches become more complex, an employee will eventually fall for one – and be ready for when they do. A rapid, high-volume threat detection service ensures that when criminals get in, you not only know about it but are ready and waiting for them. A sophisticated threat detection service, monitored 24/7 by a real person, can proactively contain and neutralise a threat in minutes, before the hackers have the chance to cause real chaos within your network.

Update your training: Cybersecurity training is a must for any business or organisation, but it needs to incorporate the latest tools hackers are using, including ChatGPT and other AI. If staff are only being warned to look out for the glaringly obvious signs of a phishing email and not educated about the AI-assisted risks they’re actually facing, they won’t stand a chance – and may even let their guard down more than usual.

The future of phishing and AI

Given the speed at which AI has progressed over the last year alone, it’s a fool’s errand to predict where this tech will be a year – or even months – from now. One thing is for certain though: ChatGPT and other AI software isn’t going anywhere, nor are hackers going to stop finding creative ways to use it for ill.

The battle between cybercriminals and their targets is already an asymmetrical one, but understanding the latest technology and how it’s being used is an integral part of avoiding attacks and minimising the damage caused if they’re successful.

And while this article has focused on the threats posed by increasingly advanced AI, those on the other side are also starting to use it to detect and neutralise threats. 

Like any tool, AI is neutral. In the right hands, it can be a great force for good; but in the wrong ones, it can be especially destructive – so best prepare for whatever the future may bring.

If you’d like to find out more about Cyber Security Training, Automated Cyber Threat Detection & Response, or the myriad other ways AI and ML are set to change your technology priorities, give us a call on 1300 629 786 or Contact Us to find out more.