Google abandons its commitment and gives the green light to the use of AI for the development of weapons and surveillance


The removal of these points from its ethics guidelines suggests that Google wants to move away from voluntary commitments of this kind, opening the door to very different uses and applications of artificial intelligence.

In 2018, long before artificial intelligence became as widespread in everyday life as it is now, Google added a section of ethical principles to its website laying out the self-imposed limits the company pledged to follow in developing this technology. Now, Google has removed two points from this ethical code concerning the use of AI to build weapons or to conduct surveillance of people.

In that declaration of principles, Google stated that it would not take part in AI developments related to weapons or other technologies intended to harm people. It also refused to participate in surveillance technologies that go beyond what international norms allow.

On the updated page, the section “applications we will not pursue” no longer appears. Its first item was “technologies that cause or are likely to cause overall harm.”

Google Principles Section

In 2018, when Google first made this guide public, more than 4,000 employees signed a petition demanding “a clear policy that neither Google nor its contractors will ever build warfare technology,” and about a dozen workers resigned over the issue. The original guide, prior to the change discussed here, can be found in the Internet Archive.

A complex geopolitical landscape

In a post published on Google's blog on Tuesday, James Manyika, the company's senior vice president of research, labs, technology and society, and Demis Hassabis, head of Google DeepMind, shared the company's current perspective on AI. The two executives noted that the AI frameworks published by democratic countries have deepened “Google's understanding of AI's potential and risks.”

“There is a global competition underway for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values such as freedom, equality and respect for human rights,” the post reads.

Surveillance with AI

“We believe that companies, governments and organizations that share these values should work together to create AI that protects people, promotes global growth and supports national security.”

Thus, although the direct reference to weapons and surveillance has been removed, Google insists that it intends to pursue “responsible development and deployment.”

The American company says it is pursuing artificial general intelligence, a technology whose social implications are becoming “incredibly profound. It is not just about developing a powerful AI, but about building the most transformative technology in the history of humanity, using it to solve humanity's greatest challenges and ensuring that the appropriate safeguards and governance are in place, for the benefit of the world,” says Google.
