Can’t break promises if you rescind them entirely.

Google updated its artificial intelligence principles on Tuesday to remove commitments around not using the technology in ways “that cause or are likely to cause overall harm.” A section scrubbed from the revised AI ethics guidelines had previously committed Google to not designing or deploying AI for use in surveillance, weapons, or technology intended to injure people. The change was first spotted by The Washington Post and captured here by the Internet Archive.

Coinciding with these changes, Google DeepMind CEO Demis Hassabis and Google’s senior exec for technology and society, James Manyika, published a blog post detailing new “core tenets” that the company’s AI principles will focus on. These include innovation, collaboration, and “responsible” AI development, the last of which makes no specific commitments.

“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” reads the blog post. “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

Hassabis joined Google after it acquired DeepMind in 2014. In an interview with Wired in 2015, he said that the acquisition included terms that prevented DeepMind technology from being used in military or surveillance applications. 

While Google had pledged not to develop AI weapons, the company has worked on various military contracts, including Project Maven, a 2018 Pentagon project that saw Google using AI to help analyze drone footage, and its 2021 Project Nimbus military cloud contract with the Israeli government. Both agreements, made long before AI developed into what it is today, caused contention among Google employees who believed they violated the company’s AI principles.

Google’s updated ethical guidelines around AI bring the company more in line with competing AI developers. Meta’s Llama and OpenAI’s ChatGPT tech are permitted for some instances of military use, and Anthropic’s deal with Amazon and government software maker Palantir lets it sell its Claude AI to US military and intelligence customers.
