In a significant change to its AI ethics policy, Google has removed its previous commitments not to apply artificial intelligence to areas such as weapons development and surveillance technologies.
The update, announced on February 4, 2025, marks a shift in Google's approach to AI deployment, giving the company more latitude to pursue potentially sensitive applications.
Since 2018, Google had maintained a set of AI principles intended to prevent its technology from being used for purposes deemed harmful.
These included pledges not to develop AI for weapons, for surveillance tools that violate human rights, or for other technologies likely to cause harm. The new policy eliminates these restrictions, however, offering Google greater leeway to explore AI applications in military and surveillance contexts.
The revised AI principles focus on ensuring that AI technologies align with user goals, social responsibility, and international law. While the company no longer imposes outright bans, it emphasizes the need for appropriate human oversight and due diligence to minimize unintended harmful outcomes.
The decision has sparked concern among some employees, who worry that it compromises Google’s commitment to ethical AI development. Parul Koul, a Google software engineer, expressed dismay, stating that the removal of the pledge goes against employee sentiment that the company should avoid engaging in military and war-related AI work.
Despite these concerns, Google executives defend the changes, pointing to the growing importance of AI in global geopolitics and the need for continued innovation in this space.
The company stresses its commitment to AI that supports global growth, human rights, and national security as it moves forward with bolder, more collaborative AI projects.