AI Watch Update: Google Steps Back on Its Promise Not to Use AI Technology for Weapons or Surveillance

Google’s updated public AI ethics policy removes the company’s pledge not to pursue applications of the technology for weapons or surveillance.
In a previous version of the principles, Google listed applications it would not pursue. One such category was weapons or other technology designed to injure people. Another was technology used for surveillance that violates internationally accepted norms:
https://web.archive.org/web/20250130091939/https://ai.google/static/documents/EN-AI-Principles.pdf
That language is absent from the current version of the page:
https://ai.google/responsibility/principles/
Google’s decision to remove its explicit prohibitions on using AI for weapons or surveillance marks a significant policy shift and a departure from its earlier commitment to ethical AI development.
Historically, when corporations soften or remove ethical guardrails, they often anticipate or are already engaging in activities that would have previously violated those principles. The absence of clear restrictions leaves room for interpretation and potential justification for actions that would have been outright forbidden before.
There are several concerns here.
Weapons Development — Removing the explicit ban suggests that Google might now be open to government contracts for military AI applications, including battlefield automation and autonomous targeting systems. This was a significant point of contention with employees in the past (e.g., Project Maven).
Surveillance Expansion — The phrase “surveillance violating internationally accepted norms” was a safeguard against authoritarian misuse. Without it, AI-driven surveillance projects have broader latitude, possibly extending to mass data collection, predictive policing, and AI-enhanced facial recognition.
Loopholes and Ambiguity — The remaining principles now rely on subjective criteria like “overall harm” and “widely accepted international law,” which can be interpreted to favor business interests over ethical considerations.
Industry Influence and Precedent — When a tech giant like Google relaxes its ethical stances, it pressures competitors to follow suit. Companies hesitant to engage in military or surveillance AI might feel compelled to enter those markets to stay competitive.
History suggests that once ethical commitments are weakened, they are hard to rebuild. It’s worth watching closely which contracts and partnerships Google enters into next — those will likely reveal the true intentions behind this shift.
Thank you for being a part of this fascinating journey.
BearNetAI. From Bytes to Insights. AI Simplified.
BearNetAI is a proud member of the Association for the Advancement of Artificial Intelligence (AAAI), and a signatory to the Asilomar AI Principles, committed to the responsible and ethical development of artificial intelligence.
BearNetAI, LLC | © 2024, 2025 All Rights Reserved