Google announced on Tuesday that it has revised its artificial intelligence principles, removing language that promised not to develop AI for weapons or surveillance purposes. The updated policy no longer lists banned uses for Google’s AI initiatives and offers more flexibility to pursue potentially sensitive applications.
As we make progress towards AGI, developing AI needs to be both innovative and safe. ⚖️
To help ensure this, we’ve made updates to our Frontier Safety Framework – our set of protocols to help us stay ahead of possible severe risks.
Find out more → https://t.co/YwtVDqQWW9
— Google DeepMind (@GoogleDeepMind) February 4, 2025
In a blog post, Google executives James Manyika and Demis Hassabis cited the widespread adoption of AI, evolving standards, and global competition as reasons for the policy change.
From "Don't be evil" to "Just shoot them".
A lesson in corporate ethics. https://t.co/k58PiRZK30
— Tommaso Valletti (@TomValletti) February 5, 2025
They emphasized that Google will implement “appropriate human oversight, due diligence, and feedback mechanisms” and focus on projects that align with its mission, scientific focus, and areas of expertise. The original AI principles, published in 2018, were a response to employee protests over Google’s involvement in a US military drone program known as Project Maven. The principles had stated that Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights.
Multiple Google employees expressed concern about the policy change.
Google updating its AI policy reflects tech change, the growing importance of AI for the military, progress in responsible AI, and bridging between Silicon Valley and the Pentagon. Thoughts from me in this @nitashatiku @GerritD piece in @washingtonpost https://t.co/LsDcONw3Vx
— Michael C. Horowitz (@mchorowitz) February 5, 2025
Google’s AI policy revision details
“It’s deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public,” said Parul Koul, a Google software engineer and president of the Alphabet Workers Union.
Former Google employees involved in reviewing projects for adherence to the principles said the work was challenging due to varying interpretations and pressure from higher-ups to prioritize business imperatives. Timnit Gebru, a former co-lead of Google’s ethical AI research team, questioned the company’s commitment to the principles from the start. Google’s Cloud Platform Acceptable Usage Policy still prohibits violating legal rights, engaging in illegal activity, or causing serious harm or injury.
However, when asked about the company’s cloud computing contract with the Israeli government, which has benefited the country’s military, Google stated that the agreement does not involve highly sensitive or military workloads. The policy change comes as the rapid growth of AI has fueled debates over how to govern new technologies and mitigate their risks. It also coincides with an 8% drop in Alphabet’s shares amid slightly lower-than-expected revenues, driven primarily by slower growth in its cloud business.
Rashan is a seasoned technology journalist and visionary leader serving as the Editor-in-Chief of DevX.com, a leading online publication focused on software development, programming languages, and emerging technologies. With his deep expertise in the tech industry and his passion for empowering developers, Rashan has transformed DevX.com into a vibrant hub of knowledge and innovation. Reach out to Rashan at [email protected]