Google owner Alphabet has revised its policy on the use of AI technology in developing military hardware and surveillance systems.

This change marks a significant departure from the company’s previous stance, which prohibited the application of AI in such contexts. 


The updated principles no longer include language about avoiding technologies that might cause harm.  

Google AI division head Demis Hassabis stated that the company’s policies are evolving to meet the demands of a changing global landscape, with a renewed focus on national security. 

Hassabis and technology and society senior vice-president James Manyika wrote in a blog post: “There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organisations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.” 

The initial AI principles were established by Google in 2018 in response to internal protests over its involvement in Project Maven, a US Department of Defense initiative that explored AI’s potential to improve the precision of drone strikes.  


The controversy led to employee resignations and widespread petitioning within the company. In response to the backlash, Google withdrew from the project and declined to renew its contract with the Pentagon.

The revised guidelines emphasise the importance of human oversight, adherence to international laws and human rights, and thorough testing of AI to prevent unintended negative consequences.  

The blog post further stated: “As we move forward, we believe that the improvements we’ve made over the last year to our governance and other processes, our new Frontier Safety Framework, and our AI Principles position us well for the next phase of AI transformation.  

“The opportunity of AI to assist and improve the lives of people around the world is what ultimately drives us in this work, and we will continue to pursue our bold, responsible, and collaborative approach to AI.” 

A GlobalData survey, the Thematic Intelligence: Tech Sentiment Polls Q4 2023, revealed that businesses consider AI to be a disruptive technology, with 78% of respondents anticipating disruption and 54% already feeling its impact. 

The change has been described as “incredibly concerning” by Human Rights Watch, as reported by the BBC.

Human Rights Watch senior AI researcher Anna Bacciarelli was quoted as saying: “For a global industry leader to abandon red lines it set for itself signals a concerning shift, at a time when we need responsible leadership in AI more than ever.”  
