You may not have known: OpenAI confirms that ChatGPT is being used to create viruses


As the months go by, the artificial intelligence platforms we use to create text or images keep expanding the ways in which they are used. We must also bear in mind that not all of those uses are as legitimate and legal as we would like.

Perhaps the best-known and most representative platform of this type is OpenAI's ChatGPT, an online AI service that many of you have surely already used. With it we can hold remarkably natural conversations or request all kinds of information in a matter of seconds. These platforms also help us draft formal texts or generate images impressive enough to surprise more than a few people.

However, malicious actors have also sought to exploit these AIs for their own ends, something that has been evident almost since these services launched. In fact, for some time now many experts have voiced suspicions that some of these platforms were being used to create malicious code.

It should be noted that almost from its inception, ChatGPT drew complaints for allowing this kind of abuse by third parties. Knowledgeable attackers used these platforms to create new viruses and all kinds of malware. What's more, AI can be especially useful for developing this malicious content to users with little background in programming and software development.

OpenAI, the company behind this chatbot, states that it has recently dismantled multiple malicious operations that were abusing its platform to develop malware. Other unauthorized activities also come into play, such as spreading false information online or devising methods to evade detection by security systems.

Confirmed: ChatGPT is being used to create malware

Not only that, AI services like ChatGPT have even been used to carry out phishing attacks. These kinds of malicious creations have put a host of security companies on alert, mainly because of how easy AI now makes it for certain users to churn out this type of malicious code en masse with minimal effort.

So much so that, for the first time, OpenAI has now confirmed that its ChatGPT platform has been used to generate various types of malware. The company has even published examples of malicious actors around the world who have taken advantage of this AI to develop and refine such projects.

[Image: ChatGPT interface]

These attackers claim that, thanks to the capabilities of chatbots like ChatGPT, they can make the attacks they generate more efficient. They say AI helps them through every stage of creation as well as deployment, saving resources and effort while delivering better results.

And the worst part is not this, but that the pairing of AI and malicious development will almost inevitably continue. Malicious actors can be expected to keep using these platforms to improve their projects, however hard companies like OpenAI try to prevent it. A clear parallel is the battle between attackers and antivirus developers that has been with us for decades.
