Google has lifted its self-imposed ban on using artificial intelligence for weapons and surveillance. The company announced on Tuesday that it had updated its AI principles and removed the commitment not to pursue technologies that could cause harm, such as weapons and surveillance tools.
As we make progress towards AGI, developing AI needs to be both innovative and safe. ⚖️
To help ensure this, we’ve made updates to our Frontier Safety Framework – our set of protocols to help us stay ahead of possible severe risks.
Find out more → https://t.co/YwtVDqQWW9 pic.twitter.com/LbHMdInAHQ
— Google DeepMind (@GoogleDeepMind) February 4, 2025
Google’s AI head, Demis Hassabis, and senior vice president James Manyika explained the change in a blog post. They said the guidelines were being updated in response to a rapidly evolving world.
From "Don't be evil" to "Just shoot them".
A lesson in corporate ethics.https://t.co/k58PiRZK30
— Tommaso Valletti (@TomValletti) February 5, 2025
They emphasized that AI should protect “national security” and that democracies should lead in AI development, guided by values like freedom, equality, and respect for human rights. The company first published its AI principles in 2018 after employees protested against its involvement in a Pentagon project using AI for drone imaging.
Google updating its AI policy reflects tech change, the growing importance of AI for the military, progress in responsible AI, & bridging between Silicon Valley and the Pentagon. Thoughts from me in this @nitashatiku @GerritD piece in @washingtonpost https://t.co/LsDcONw3Vx
— Michael C. Horowitz (@mchorowitz) February 5, 2025
Google opted not to renew the contract following the backlash, which saw staff members resign and thousands sign a petition against the company’s participation.
Google updates AI policy for weapons
The updated policy comes amid broader changes in the tech industry’s relationship with government contracts.
Historically, Silicon Valley has had strong ties with the U.S. military. However, from roughly 2015 to 2025, tech companies avoided public association with military activities to prevent public relations issues. With the start of President Donald Trump’s second term, Big Tech appears more open about its government collaborations. As these companies contribute millions to Trump’s administration, they seem less concerned about public perception and more focused on lucrative opportunities. The ramifications of this policy change may be significant, revealing a less discreet side of Silicon Valley’s relationship with government and military operations.
Disrupt this right now, @Google employees! When companies “relax” their standards and threaten our freedoms, it’s not business as usual. Time to fight back! @TheDemocrats @FLI_org @EncodeAction @SenMarkey https://t.co/BbHFAAa3B2
— Bill de Blasio (@BilldeBlasio) February 5, 2025
The shift in stance marks a major change for Google, which had previously pledged not to allow its AI to be used for technologies that could cause harm or contravene international law and human rights.
Cameron is a highly regarded contributor in the rapidly evolving fields of artificial intelligence (AI) and machine learning. His articles delve into the theoretical underpinnings of AI, the practical applications of machine learning across industries, ethical considerations of autonomous systems, and the societal impacts of these disruptive technologies.