The dangers of ChatGPT: How cybercriminals are leveraging artificial intelligence

Artificial Intelligence | May 21, 2023

Cybersecurity experts warn about the most common uses of ChatGPT functionalities by cybercriminals. Being aware of these uses can help you avoid them.

The increasing popularity of ChatGPT has raised numerous concerns. Its influence now extends to many fields, including science, work, art, and, unfortunately, cybersecurity. Cybercriminals may be leveraging artificial intelligence for illicit activities, and, worse still, they don't need to be highly skilled to do so.

Beyond its text generation capabilities, ChatGPT opens the door to a disturbing scenario in which the chatbot can be guided to search images for hidden information that is imperceptible to humans but detectable by machines, such as text reflected in glass or mirrors, or the identity of individuals in photos.

Experts have identified three main areas, among many others, in which cybercriminals could be exploiting ChatGPT: mass phishing, reverse engineering, and intelligent malware.

The first, mass phishing, is facilitated by ChatGPT's text generation power, which enables cybercriminals to create personalized emails impersonating official institutions or banks within minutes.

Reverse engineering benefits from ChatGPT's ability to analyze and explain binary or obfuscated code. This gives less experienced hackers the means to manipulate software and gain access to a company's servers.

Lastly, intelligent malware powered by ChatGPT can make autonomous decisions and automatically extract information, examining more data than any human could process. This autonomy increases the danger and could lead to a rise in targeted attacks against companies.

ChatGPT, fake reviews, and low-quality content

Beyond the threat posed by hackers and cybercriminals, there is also the fraudulent use of ChatGPT to generate spam and low-quality content on the internet, such as fake book reviews or automatically generated product ratings on Amazon.

Although algorithms and moderation systems attempt to detect and remove these fraudulent practices, some individuals and companies still try to game the system for their own benefit. There are even jailbreaks designed to bypass the limitations put in place.