Artificial intelligence may already be able to ‘read’ the human mind

Researchers have created an artificial-intelligence-based decoder of brain activity that manages to ‘read’ the mind, translating into text the thoughts of people with speech problems while they imagine stories or watch videos.

A brain-activity decoder developed with a new artificial intelligence system, based on a methodology similar to that of OpenAI’s ChatGPT and Google’s Bard, helps to ‘read’ the human mind in a non-invasive way: no surgical implants are needed for it to work. It could be useful for people who do not have cognitive problems but have lost the ability to speak, for example after suffering a stroke, allowing them to communicate intelligibly again.

Called a semantic decoder, the system was developed by researchers at the University of Texas at Austin, who measured the brain activity of their study participants with an fMRI scanner after prolonged decoder training, during which each individual listened to hours of podcasts while in the scanner. Then, and only if the participant agrees to have their thoughts decoded, the device can generate text from their brain activity while they listen to a new story or imagine telling a story.
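As a rough illustration of the general idea, and not the study’s actual code, the following Python sketch assumes two hypothetical components: a language model that proposes candidate word sequences, and an encoding model that predicts the fMRI response each candidate should produce; the decoder keeps whichever candidate best matches the measured scan. All function names, the toy feature extraction and the simulated data below are illustrative assumptions, not details taken from the paper.

    # Minimal, illustrative sketch (not the study's code): at each step, keep the
    # candidate continuation whose predicted brain response best matches the scan.
    import numpy as np

    def propose_continuations(prefix: str) -> list[str]:
        """Stand-in for a language model suggesting next-word candidates."""
        return [prefix + " " + w for w in ["drive", "scream", "walk"]]

    def predict_brain_response(text: str, encoding_weights: np.ndarray) -> np.ndarray:
        """Stand-in for an encoding model: maps crude text features to predicted voxels."""
        features = np.array([hash(tok) % 97 for tok in text.split()], dtype=float)
        features = np.resize(features, encoding_weights.shape[1])
        return encoding_weights @ features

    def decode_step(prefix: str, measured: np.ndarray, encoding_weights: np.ndarray) -> str:
        """Pick the candidate whose predicted response correlates best with the measurement."""
        candidates = propose_continuations(prefix)
        scores = [np.corrcoef(predict_brain_response(c, encoding_weights), measured)[0, 1]
                  for c in candidates]
        return candidates[int(np.argmax(scores))]

    # Toy usage: 10 simulated voxels, random encoding weights and a random "scan".
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(10, 8))
    scan = rng.normal(size=10)
    print(decode_step("she hasn't even started to", scan, weights))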

“For a non-invasive method, this is a real breakthrough compared to what’s been done before, which is usually single words or short sentences. We are getting the model to decode continuous language over long periods of time with complicated ideas,” said Alex Huth, an assistant professor of neuroscience and computer science at UT Austin and one of the leaders of the study, whose results have been published in Nature Neuroscience.

Brain decoding only worked if the person cooperated

The researchers have explained that their objective was to capture the essence of what people say or think and that, although the method is not infallible, they found that when the decoder was trained on a participant’s brain activity, the machine produced text that closely matched the intended meanings of the original words.

“While the technology is in such an early state, it is important to be proactive and enact policies that protect people and their privacy”

These scientists have given an example to help understand the methodology. During the tests, the thoughts of a participant who heard someone say “I don’t have my driver’s license yet” were translated into words like “she hasn’t even started learning to drive yet”. And hearing “I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!’” was translated as “I started screaming and crying, and then she just said, ‘I told you to leave me alone.’”

Another of their experiments involved having participants watch four short, silent videos while in the scanner, and the semantic decoder was able to use their brain activity to accurately describe some of the situations shown in the videos. The decoder, however, only worked properly if people had voluntarily taken part in the training and offered no resistance during the tests, for example by thinking thoughts that distorted the results.

Could our minds be read without us knowing it?

The authors of the paper have reported that this system is not yet viable outside the laboratory because it requires spending a lot of time in an fMRI machine, but they are confident that it could later be adapted to other, more accessible brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).

They have also indicated that it is not possible to use this technology to spy on us without our knowledge, since “a person needs to spend up to 15 hours lying in an MRI scanner, being perfectly still and paying close attention to the stories they are hearing, before this really works well on them,” Huth said.

In fact, when they tested the system on people who had not participated in this training, they found that the results were unintelligible. In addition, they found that when those who had done the training resisted attempts at brain decoding, for example by thinking about animals, it was not possible to achieve good results either.

“We take very seriously concerns that it could be used for bad purposes, and have worked to prevent that. We want to make sure that people only use these types of technologies when they want to and that they help them,” said Jerry Tang, a doctoral student in computer science at UT Austin who also led the study. He believes that “at this time, while the technology is in such an early state, it is important to be proactive and enact policies that protect people and their privacy. Regulating what these devices can be used for is also very important.”

