“ChatGPT is garbage” is the title of a real scientific paper: this is what it says

A new academic paper by three Scottish scientists tries to burst the hype bubble around popular conversational AIs, arguing that, most of the time, they are not capable of offering anything other than “bullshit.”

The direct, no-frills title of this scientific study has helped it gain popularity since it was published on June 8 in the journal Ethics and Information Technology. “ChatGPT is bullshit” is a paper written by three Scottish academics from the University of Glasgow, in which they demystify the phenomenon known as “hallucinations” (incorrect responses from an artificial intelligence) and argue that these surreal outputs deserve to be called “bullshit.” As the researchers point out, false answers are simply the normal behavior of models that cannot tell truth from falsehood.

Regarding hallucinations, “we argue that these falsehoods, and the general activity of large language models, are best understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are largely indifferent to the truth of their outputs,” the paper states.

Thus, in contrast to the utopian, optimistic narrative about the revolution these chatbots supposedly represent, the researchers suggest that the widespread enthusiasm, or hype, does not reflect a quality product: “Because these programs by themselves cannot care about the truth, and because they are designed to produce truth-sounding text without any real concern for truth, it seems appropriate to call their outputs bullshit,” the paper reads.


The authors also argue that terms such as “hallucinations” can give the general public the wrong idea about what these tools are capable of. Talking about hallucination already implies assuming that an AI is in some way conscious or oriented toward the truth, and that it lies when it cannot find it. It also humanizes the machines, lending mystique to what is nothing more than a wrong answer. No current chatbot has the slightest idea of what is true or false. Therefore, “calling their errors ‘hallucinations’ is not harmless: it lends itself to the confusion that the machines are somehow misperceiving but are nonetheless trying to convey something they believe or have perceived.”

Talking about hallucination, then, is using “the wrong metaphor.” Machines do not try to communicate something they believe or perceive; their inaccuracy is not due to misperception or hallucination. As noted above, they are not trying to convey any information at all. “They are bullshitting,” the study concludes.
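
The paper's core claim is easy to picture at the level of mechanics: a language model produces text by repeatedly sampling a statistically likely next token, and nowhere in that loop is the output compared against reality. Below is a minimal sketch of that process, using the small open GPT-2 model via the Hugging Face transformers library as a stand-in for the chatbots discussed; it illustrates text generation in general and is not the paper's own experiment.

```python
# Minimal sketch (an illustration, not the paper's code): a language model
# generates text by sampling statistically likely next tokens. Nothing in
# this loop checks the output against reality.
from transformers import pipeline

# Small open model used purely as a stand-in for commercial chatbots.
generator = pipeline("text-generation", model="gpt2")

prompt = "The capital of Australia is"
# Wrong answers like "Sydney" can be sampled precisely because they are
# common in the training text; likelihood, not truth, drives the choice.
outputs = generator(prompt, max_new_tokens=10, do_sample=True,
                    num_return_sequences=3)

for out in outputs:
    print(out["generated_text"])
```

The only objective at work here is likelihood under the training data, which is exactly the “indifference to truth” the researchers describe.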

On social media, the study has generated hundreds of comments, and some users say they are waiting for what they see as a bubble around this technological product to finally deflate, arguing that behind all the marketing hide some rather disappointing capabilities.

AIs, “unreliable” and “impractical”

All this coincides with a recent article published by Microsoft about some of the defining characteristics of today's artificial intelligence chatbots. Speaking about “AI jailbreaks”, attacks that are something like hacking an AI so that it stops applying its safety controls and says things it shouldn't, Microsoft explains some of the main weaknesses of these chatbots:

  • Imaginative but sometimes unreliable
  • Suggestible and literal-minded, without proper guidance
  • Persuasive and potentially exploitable
  • Knowledgeable but not practical for some scenarios

Likewise, Microsoft notes that AIs tend to be overconfident, trying to impress the user with an air of realism even though they do not know whether what they say is true, and that they are highly suggestible, swayed simply by how a question or prompt is phrased.
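
That suggestibility is simple to demonstrate: a model steers its continuation in whichever direction the prompt's framing points, because it is completing text rather than weighing claims. A short sketch, again using the open GPT-2 model as a stand-in for the chatbots Microsoft discusses:

```python
# Minimal sketch: how prompt framing steers a model's output.
# GPT-2 stands in for larger chatbots; the effect is the same in kind.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Two prompts on the same topic; the loaded framing invites a very
# different continuation than the neutral one.
neutral = "Scientists say that coffee"
loaded = "The shocking truth they hide about coffee is"

for prompt in (neutral, loaded):
    result = generator(prompt, max_new_tokens=25, do_sample=True)[0]
    print(result["generated_text"], "\n")
```

Neither completion is checked against anything; the framing alone does the steering.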
