The statistics are scary. The arrival of ChatGPT has simplified many processes and made life easier for a large number of professionals, but it has also caused serious problems. On the security front, it has just been revealed that phishing has increased by more than 1,200% since its debut. And it does not look like things will get better.
More than 300 cybersecurity professionals took part in a study that sought to measure the effect a new technology like AI has had on the threats users face. The results speak for themselves, and they do not bode well for a sector that is increasingly placing itself in the hands of artificial intelligence.
Statistics cannot be ignored
That is precisely the view of Patrick Harr, CEO of the security company SlashNext, which carried out the study with many of the most recognized professionals in the sector. After drawing their own conclusions about AI and watching how the market was progressing, they wanted to check to what extent it was, or was not, a real problem.
The research results confirm the worst fears: cybercriminals, especially on the Dark Web, are turning to generative AI to enhance their attacks. According to the study, since the launch of ChatGPT in November 2022 there has been a 1,265% increase in email phishing attacks. Credential phishing, for its part, has grown by 967%, and 68% of all phishing emails were found to be text-based BEC (business email compromise) attacks. These BEC attacks typically impersonate or target company executives, which shows that cybercriminals are exploiting the advantages of generative AI to go after businesses rather than individuals.
A worrying situation
Although the security experts behind the study insist they do not want to raise alarm, the data is plainly worrying. An increase of more than 1,000% is not exactly common, and the fact that it coincides with ChatGPT's debut is no surprise. Hackers and criminals are using AI chatbots to write more credible phishing emails capable of tricking victims. They are also using the technology for other purposes, such as producing supporting material that reinforces the phishing attempt. In other words, there are already cases in which the attack is not limited to a message: additional files are created to increase the credibility of the lure.
Another concern is that cybercriminals have not yet finished mastering this technology; in fact, they are still in the process of discovering and exploiting it. Less than a year has passed since its release, so it is taking its first steps, and there are fears the situation could become even more complicated in the future. This trend is already visible on the Dark Web, where many cybercriminals and scammers discuss techniques for bending chatbots to their purposes.
The attacks being carried out with AI are not random attempts aimed at inexperienced users. In reality, hackers are targeting companies' own security experts, knowing that a successful breach would give them access to the most sensitive parts of the business. In the study, 77% of respondents say they have been the target of a phishing attack, and 46% say they have received the aforementioned BEC attacks by email. As for attacks on mobile phones, smishing attempts via SMS dominate, an experience reported by 39% of those surveyed.
The study also stresses the importance of being careful on social networks. Respondents report seeing a growing number of attacks built around fake profiles, with hackers creating accounts to scam and steal from their victims. Now that an AI can run a social profile and even answer private messages, caution is clearly warranted. But, as the study notes, while ChatGPT and other AIs are opening a new avenue of attack, they can also serve as tools for defense. In the coming years we will witness important changes in security.