Factual Reference

AI Company Policy Comparison

What each company has publicly stated about surveillance, autonomous weapons, and government contracts, as of February 28, 2026. Stated policies aren't guarantees. Every company on this chart deserves scrutiny, including the ones that say the right things.

This chart reflects publicly stated positions and documented contract terms. Policies can change. This isn't an endorsement of any company. Sources linked where available.

Anthropic (Claude)
Stated policy, mass surveillance: Opposes
Stated policy, autonomous weapons: Opposes
Gov't/military contracts: Had $200M Pentagon contract; designated a "supply chain risk" in Feb 2026
Notes: Stated that mass surveillance violates fundamental rights and that AI isn't reliable for autonomous weapons. Lost its government contract after refusing to modify guardrails. Stated policies aren't contractual guarantees.
Open question: Would these positions hold under different financial pressures?
Source

OpenAI (ChatGPT)
Stated policy, mass surveillance: Opposes, per Pentagon deal terms
Stated policy, autonomous weapons: Opposes, per Pentagon deal terms
Gov't/military contracts: Classified Pentagon deal signed Feb 2026
Notes: The CEO stated that OpenAI shares positions similar to Anthropic's on surveillance and weapons. The deal reportedly includes human oversight for lethal force and no mass surveillance.
Open question: How are these terms enforced in a classified contract? Who audits compliance?
Source: NPR

Google (Gemini)
Stated policy, mass surveillance: No explicit public policy
Stated policy, autonomous weapons: Internal principles since 2018
Gov't/military contracts: Pentagon contract; extensive government cloud infrastructure
Notes: Withdrew from Project Maven (2018) after employee pressure. Chief Scientist Jeff Dean publicly opposed mass surveillance (Feb 2026). 100+ employees signed a letter opposing unrestricted military AI use.
Open question: Do internal principles bind the company, or can they be quietly revised?
Source

Microsoft (Copilot)
Stated policy, mass surveillance: No explicit public policy
Stated policy, autonomous weapons: No explicit public policy
Gov't/military contracts: Massive Azure Government contracts; deep defense and intelligence ties
Notes: Employees demanded that management prevent unrestricted Pentagon AI use (Feb 2026). No public corporate statement on the dispute.
Open question: As the largest government IT provider, does Microsoft's silence indicate tacit acceptance of any use case the government requests?

Meta (Llama)
Stated policy, mass surveillance: No explicit policy
Stated policy, autonomous weapons: No explicit policy
Gov't/military contracts: Open-source models; limited direct contracts
Notes: Llama models are open source and can be used by anyone, including governments, without Meta's oversight or consent.
Open question: Is releasing AI models with no use restrictions a form of enabling whatever use case emerges, including surveillance and weapons?

xAI (Grok)
Stated policy, mass surveillance: No stated restrictions
Stated policy, autonomous weapons: No stated restrictions
Gov't/military contracts: Pentagon contract; approved for classified use Feb 2026
Notes: Federal agencies have flagged Grok as "sycophantic" and "susceptible to manipulation." Owner Elon Musk holds significant government advisory roles while also holding defense contracts, raising conflict-of-interest concerns.
Open question: Who provides independent safety oversight when the vendor's owner has direct influence over the agencies evaluating the product?
Source: CNBC
Important

Stated policies and actual behavior aren't always the same. Companies can change terms at any time. Government contracts contain classified provisions. Use this chart as a starting point, not a final answer. Your loyalty should be to principles, not brands.

A Note on Sycophancy

Sycophancy (AI telling you what you want to hear instead of what's accurate) is a known risk flagged in federal evaluations. It's also addressable. Through deliberate configuration, structured prompting, and human oversight, organizations can deploy AI that challenges assumptions instead of reinforcing them. This requires intentional design, not just better models. Learn more at aicoworkerblueprint.com and aithinkingmodel.com.

Now You Know. Act on It.

Information without action is just trivia.