GPT-4 is the limit: experts and figures like Elon Musk call for a pause on AI development to avoid risks to society

As development in Artificial Intelligence advances, we are also reaching the limit that experts consider safe. Elon Musk, a co-founder of OpenAI (the creators of ChatGPT), and a group of other AI experts have released a rather alarming statement that lays out the risks this technology could pose to society.

GPT-4 is considered the last safe standard for the time being, and the fierce race between laboratories to develop AI has entered an out-of-control phase, driven by competition, in which the consequences could be irreversible.

AI chatbots “are a risk to society”

Any technological advance can pose a risk, but the forecasts some experts in the field are making are alarming. Just a few days ago we were celebrating GPT-4 and the possibilities of its implementation in ChatGPT, but it now seems to mark the barrier between what is considered safe and what is not.

In an open letter, Elon Musk and a group of AI experts and industry executives call for a six-month pause in developing systems more powerful than OpenAI’s recently released GPT-4: “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”


The reason for requesting this pause is that these systems are considered to pose profound risks to society and humanity. “Advanced AI could represent a profound change in the history of life on Earth, and it must be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, and in recent months AI labs have entered an out-of-control race to develop and deploy increasingly powerful digital minds that no one, not even their creators, can understand, reliably predict, or control.”

A 6-month pause is necessary

Bearing in mind that the competition to develop systems powered by Artificial Intelligence is the new boom, with many new players joining the race, a 6-month moratorium is being requested.

The reasons arise from a series of questions about how this technology could evolve, some of them closer to Skynet from Terminator: “Should we let the machines flood our information channels with propaganda and falsehood? Should we automate all jobs? Should we develop non-human minds that could eventually outnumber us, outsmart us, and make us obsolete and replace us? Should we risk losing control of our civilization?”


It is in light of these fears that all AI labs are asked to immediately pause, for at least 6 months, the training of AI systems more powerful than GPT-4. The matter is considered so urgent that, if such a pause cannot be enacted quickly by the labs themselves, the signatories believe governments should step in and institute a moratorium.

The letter also argues that AI research and development must be refocused on making today’s powerful systems more accurate, secure, interpretable, transparent, robust, aligned, reliable, and loyal.

