Google has been filled with images created by AI, and that can spread misinformation


The Internet is filling up with garbage. The debate on this continues across social networks, and now it is Google Images that is showing worrying signs of what may lie ahead as AI-generated content proliferates.

If we type something like ‘baby peacock’ into Google Images, the first page of results consists mostly of AI-created images that bear little resemblance to what a peacock chick actually looks like. A large thread has recently formed on Reddit about this, with hundreds of users expressing their sadness and concern about the unstoppable torrent of ‘slop’, as low-quality AI-generated content is known, flooding networks and platforms.

These fake images of a supposed newborn peacock have also been shared on Facebook, where many people apparently believe what they see and are encouraged to share and ‘like’ them because of how cute the image looks.

In this case, the worst that can happen is that someone walks away with a mistaken idea of what a baby peacock looks like. But if the same practice is repeated with more serious topics, the result is clear: rampant misinformation, and quality content (there is surely no shortage of good shots of peacocks by professional photographers) buried under junk publications.


“On YouTube it is also starting to appear, little by little. Everything is becoming fake and low quality,” says one user on Reddit. “This kind of Internet is not interesting. We will simply trust the Internet less and spend less time on it. Let Google read itself now,” comments another.

As possible remedies, some point to returning to the era of blogs: “Create your own website! It’s time to take back ownership of your own content and maybe even have an ‘interesting links/friends’ sites’ section to share. It’s time for people to own the Internet again, not companies,” someone recommends.

AI-generated label

An Internet for bots

The ‘dead Internet’ theory continues to gain followers with symptoms like these: a theory according to which bots, automated programs and artificial intelligence will keep increasing their share of Internet activity until most traffic comes from these agents rather than from real people.

The result? Misinformation, poor-quality content and spam, making the Internet a more hostile place, so poisoned with falsehoods that it becomes harder for the average user to find what they are looking for.

However, some will manage to make money in the meantime. One user explains it like this: “The other day I was looking for a solution to a programming problem and came across a YouTube video with an AI voice, stock footage and generated subtitles. The script was also written by AI, since it didn’t really say anything useful and just talked around the general topic for 10 minutes. I got curious and checked the channel, and the guy had 2,000 videos uploaded, averaging 1,000-10,000 views each. All with the same kind of shitty AI content. I think someone automated a bot that constantly uploads content to YouTube, and the guy just collects the ad money as passive income,” comments @Competitive-Lack-660.

For now, we will have to keep warning our parents, uncles and grandparents not to believe everything they see on Facebook or receive on WhatsApp.
