ChatGPT could help scammers write perfect phishing emails – here’s your guide to spotting them


The ChatGPT AI chatbot is the fastest-growing application in the history of the internet, with an estimated 13 million daily users, including scammers. The AI has effectively removed the language barriers and spelling mistakes that once made a scam message easy to spot, and it is relatively cheap to subscribe to. Cybercriminals have used AI to create malicious code and phishing emails fine-tuned to the tone of voice of the company or individual being impersonated. Cybersecurity leaders warn that phishing emails will rise by 51% this year due to the popularity of ChatGPT and its ilk on the dark web.

This is bad news for everyday internet users, which is why this article explains how cybercriminals are using ChatGPT for malicious purposes. Be wary of emails that ask for sensitive personal or organisational information. Here are some guidelines to help you spot a phishing email:

  • double-check the sender's email address to confirm it really belongs to the company or individual it claims to come from. Be careful, as many fake addresses look genuine apart from a subtle change
  • read the message in context and note any sense of urgency. Most scammers pressure their victims by adding urgency to their message
  • check the greeting or salutation, as scammers impersonating someone you know often fail to address you by the name that person actually uses
  • be alert during trending world events or emergencies, such as the COVID-19 pandemic, as scammers capitalise on them to solicit aid or sensitive information
  • be suspicious of incentives or gifts attached to emails that look too good to be true. Scammers lure their victims with the promise of rewards
  • don’t click on any link in the email, as links are the most common way of infecting your device with malware
  • phone the person or institution directly to confirm that they really made the request.
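For the more technically inclined, the first check above, comparing the sender's domain against the genuine one, can even be automated. Below is a minimal Python sketch (the trusted domains, function name and threshold are illustrative assumptions, not part of any real mail filter) that flags lookalike addresses by measuring how similar a sender's domain is to a trusted one:

```python
from difflib import SequenceMatcher

# Illustrative list of domains the user actually trusts (assumption).
TRUSTED_DOMAINS = {"paypal.com", "mybank.com"}

def looks_like_spoof(sender: str, threshold: float = 0.8) -> bool:
    """Flag sender domains that closely resemble, but do not exactly match,
    a trusted domain -- a common phishing trick (e.g. 'paypa1.com')."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: not a lookalike
    # Similar-but-not-identical domains are suspicious.
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(looks_like_spoof("service@paypa1.com"))   # lookalike -> True
print(looks_like_spoof("service@paypal.com"))   # exact match -> False
```

A real filter would also check technical signals such as SPF and DKIM headers, but even this simple similarity test illustrates why a "subtle change" in the address is worth looking for.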

Most financial institutions have made it clear to their customers that they will never ask for sensitive information via email or SMS, and most companies have adopted ways to detect impersonation. ChatGPT may produce a convincing message, but if you examine it and apply the guidelines above, its weaknesses will show.