Google Revises AI Ethics, No Longer Rules Out AI's Use for Weapons and Surveillance

In today's AI landscape, corporate ethics appear to be remarkably flexible, and the boundaries separating innovation, ethics, and business interests are blurring by the day. Google's AI ethics now seem almost open source: free for anyone to rewrite, including Google itself. The company has quietly removed one of the central ethical barriers once enshrined in its AI principles: a pledge not to develop AI technology for weapons or mass surveillance. The change, identified in CNN's review of snapshots preserved by the Internet Archive's Wayback Machine, signals a major shift in Google's stance on AI ethics.
Ethical breach:
Google's earlier AI principles stated plainly that the company would not pursue AI applications for weapons or for other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people, nor develop technology that gathers or uses information for surveillance in violation of internationally accepted norms. In the latest update, that language has disappeared entirely, leaving it far less clear how Google will approach these areas going forward.
Since OpenAI released ChatGPT in 2022, AI has advanced at an unprecedented pace without commensurate regulation or ethical oversight. The new policy wording suggests that Google may now engage more flexibly with governments and defense contractors on applications ranging from law enforcement to military projects.
A Shift in Values:
In a blog post published Tuesday, James Manyika, Google's senior vice president of research, labs, technology and society, and Google DeepMind head Demis Hassabis defended the policy shift, stating: "AI frameworks published by democratic countries have deepened Google's understanding of AI's potential and risks. There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights."
This latest turn is radically at odds with Google's past commitments. In 2018, after thousands of employees signed a petition against military applications of AI and some resigned in protest, Google declined to compete for a $10 billion Pentagon cloud computing contract, explaining at the time that it could not be sure the project would align with its AI principles.
The post elaborated further: "We believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security." AI will keep advancing, and so will the tussles over its ethical use; Google's recent pivot shows its position is far from cemented.
Read More: OpenAI Seals Partnership with Kakao, Expanding Its Asian Collaborations