Google torpedoes 'no AI for weapons' rules

Will now happily unleash the bots when 'likely overall benefits substantially outweigh the foreseeable risks'


Google has published a new set of AI principles that don’t mention its previous pledge not to use the tech to develop weapons or surveillance tools that violate international norms.

The Chocolate Factory's original AI principles, outlined by CEO Sundar Pichai in mid-2018, included a section on "AI applications we will not pursue." At the top of the list was a commitment not to design or deploy AI for "technologies that cause or are likely to cause overall harm" and a promise to weigh risks so that Google would "proceed only where we believe that the benefits substantially outweigh the risks."

Other AI applications Google vowed to steer clear of that year included:

- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people
- Technologies that gather or use information for surveillance violating internationally accepted norms
- Technologies whose purpose contravenes widely accepted principles of international law and human rights

Those principles were published two months after some 3,000 Googlers signed a petition opposing the web giant's involvement in a Pentagon program called Project Maven that used Google's AI to analyze drone footage.

The same month Pichai published Google's AI principles post, the search and ads giant decided not to renew its Project Maven contract when it expired in 2019.

In December 2018 the Chrome maker challenged other tech firms building AI to follow its lead and develop responsible tech that "avoids abuse and harmful outcomes."

On Tuesday this week, a notice was added to Pichai's 2018 blog post advising readers that, as of February 4, 2025, Google has "made updates to our AI Principles" that can be found at AI.Google.

The Chocolate Factory's updated AI principles center on three things: bold innovation, responsible development and deployment, and collaborative process.

These updated principles don't mention any applications Google won't work on, nor do they pledge not to use AI for harmful purposes or weapons development.

They do state that Google will develop and deploy AI models and apps "where the likely overall benefits substantially outweigh the foreseeable risks."

There’s also a promise to always use "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights," plus a pledge to invest in "industry-leading approaches to advance safety and security research and benchmarks, pioneering technical solutions to address risks, and sharing our learnings with the ecosystem."

The Big G has also promised "rigorous design, testing, monitoring, and safeguards to mitigate unintended or harmful outcomes and avoid unfair bias" along with “promoting privacy and security, and respecting intellectual property rights."

A section of the new principles offers the following example of how they will operate:

We identify and assess AI risks through research, external expert input, and red teaming. We then evaluate our systems against safety, privacy, and security benchmarks. Finally, we build mitigations with techniques such as safety tuning, security controls, and robust provenance solutions.

Also on Tuesday, Google published its annual Responsible AI Progress Report, which addresses the current AI arms race.

"There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," said James Manyika, SVP for research, labs, technology and society, and Demis Hassabis, co-founder and CEO of Google DeepMind.

"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," the Google execs continued. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."

Google will continue to pursue "AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights," the duo added.

Google did not immediately respond to The Register's inquiries, including whether there are any AI applications it won't pursue under the updated AI principles, why it removed the mentions of weapons and surveillance from its list of banned uses, and whether it has any specific policies or guidelines governing how its AI can be used for those previously not-OK purposes.

We will update this article if and when we hear back from the Chocolate Factory.

Meanwhile, Google's rivals happily provide machine-learning models and IT services to the United States military and government, at least. Microsoft has argued America's armed forces deserve the best tools, which, in the Windows giant's mind, means its own technology. OpenAI, Amazon, IBM, Oracle, and Anthropic work with Uncle Sam on various projects. Even Google does these days. The internet titan is just less openly squeamish about it. ®
