OpenAI says it will change its Pentagon contract to block the use of its AI for domestic surveillance of Americans, a concession that followed criticism and protests as the debate over government access to powerful AI tools heats up.
OpenAI confirmed it is revising parts of its agreement with the Pentagon to prohibit the use of its systems for domestic surveillance of U.S. persons and nationals. The move came amid public backlash, employee protests, and an outspoken boycott campaign arguing that the company's original contract language left room for government overreach. Company leadership framed the change as a clarification consistent with U.S. law, while critics continued to question the limits and enforcement of those promises.
Pressure grew after another AI firm, Anthropic, walked away from a similar Pentagon arrangement, and nearly 500 employees at OpenAI and Google signed an open letter criticizing potential uses of AI in mass surveillance. Protesters showed up at OpenAI offices in San Francisco and London, and online campaigns urged users to quit or boycott the platform. That public reaction forced a clearer stance from the company on what it will and will not allow.
OpenAI’s CEO posted an internal memo to X saying the company would “make some additions in our agreement” with the Department of Defense. The controversy centered on whether broad contract language could permit sweeping analysis of commercially acquired data and social media feeds at scale. From a conservative perspective, there’s a clear tension: national security priorities sometimes require robust tools, but Americans expect protections against unchecked domestic spying.
From Business Insider:
OpenAI said it is amending its contract with the Pentagon.
After public concerns that OpenAI’s new deal with the Pentagon would allow the government to use its AI for mass surveillance, CEO Sam Altman posted an internal memo to X on Monday evening, saying that the company is working with the Pentagon to “make some additions in our agreement.”
“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of US persons and nationals,” Altman wrote on X.
“The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA). Any services to those agencies would require a follow-on modification to our contract,” Altman added.
Here is a repost of the internal memo:
We have been working with the DoW to make some additions in our agreement to make our principles very clear.
1. We are going to amend our deal to add this language, in addition to everything else:
"• Consistent with applicable laws,…
— Sam Altman (@sama) March 3, 2026
Even with the written assurances, many remain skeptical because similar safeguards have proven porous in other federal programs. Civil liberties groups have already documented dozens of government AI applications tied to biometric identification and social media monitoring. From a Republican viewpoint, we should insist on clear, enforceable boundaries that protect Americans while allowing legitimate defense and law enforcement needs.
Public records show agencies like the Department of Homeland Security have dozens of AI use cases that touch immigration enforcement, tips analysis, and social media monitoring. Advocates warn that technologies labeled as benign can be combined into persistent mass surveillance systems unless strict guardrails and oversight are in place. Critics argue that vague contract clauses such as "all lawful purposes" can be stretched to justify intrusive programs.
From Fedscoop:
From litigation to federal prisons to criminal investigations, artificial intelligence appears to have touched nearly every corner of the Department of Justice in the past year.
Just two years ago, the DOJ reported four use cases of AI at the agency. In its most recent 2025 use case inventory, the agency logged 315 cases, a 31% increase from last year. The use cases varied widely in function, though technology and privacy experts took particular note of instances where AI was deployed at the agency for crime prediction, public surveillance, and litigation.
Of these cases, 114 were deemed “high-impact” by the agency. Under the latest guidance, high-impact AI includes models that could have “significant impacts” when deployed, including for decisions or actions with a “legal, material, binding or significant effect on rights and safety.”
Jay Stanley, a senior policy analyst with the American Civil Liberties Union’s Speech, Privacy, and Technology Project, told FedScoop that the DOJ’s 2025 inventory provides a “snapshot” of how the federal government “is aggressively seeking to test and exploit a wide variety of AI algorithms and sifting through data on ordinary people.”
The Justice Department's own inventory shows explosive growth in AI use, from four reported use cases two years ago to 315 today, 114 of them labeled high-impact because they could affect rights and safety. That rapid expansion is why transparency, independent audits, and strict statutory limits matter so much. Conservatives can and should push for strong oversight that preserves civil liberties while still giving law enforcement responsible access to these tools when warranted.
OpenAI and Anthropic face a hard choice: either keep selling powerful tools to government customers or refuse work that could undermine civil liberties. OpenAI’s revised language promises limits but leaves questions about monitoring, enforcement, and future contract changes. The debate now moves to Congress and the courts, where policy and legal frameworks must catch up to the technology.
At the end of the day, Americans deserve both security and privacy, and that requires clarity in contracts, enforceable rules, and persistent public oversight. This episode is a reminder that when companies partner with the government on novel tech, the terms must be explicit, auditable, and resilient against mission creep.