The recent legal fight between Anthropic and the Department of Defense has forced the tech world to confront a blunt question: what will companies allow their AI to be used for? Anthropic sued the government after being blacklisted from some federal work, arguing that the move violated its First Amendment rights. The company says it will not let its models be used for domestic mass surveillance or fully autonomous lethal weapons, and it has pushed back against a DoD demand to permit "any lawful use" of its technology.

From anti-war protests to defense contracts

Not long ago, many tech employees treated military work as off limits. In 2018, thousands of Google workers protested Project Maven, an effort to analyze drone footage for the Department of Defense. Those protests led Google to decline to renew the contract and to publish policies against building technology that could "cause or directly facilitate injury to people."

Fast forward a few years and the landscape looks very different. Major tech firms have moved toward closer ties with government and military customers, driven by the current administration's push to expand AI use across federal agencies, concern about international competitors, and rising defense spending worldwide. Company leaders and boards have also shown a willingness to pursue defense deals that once would have prompted internal revolt.

Google removed the 2018 language that barred weapons-related work and tightened controls on employee activism. The company has since offered its AI systems to government projects, and in recent years it has fired employees involved in protests linking the company to military or foreign policy controversies. OpenAI once barred military use of its models, but it now has senior staff participating in a military innovation program and has joined other AI firms in a substantial DoD contract to integrate their technology into defense systems.

At the same time, companies that built their businesses around defense work, such as Palantir and Anduril, have grown into influential players. Palantir took over the Maven work after Google stepped back, and its leadership has openly advocated for tighter tech-military integration. The system once associated with Project Maven has evolved into a classified platform used by the military to access commercial AI tools.

Why Anthropic pushed back

Anthropic has tried to draw limits, repeating that its models may not be deployed for domestic mass surveillance or for fully autonomous weapons. In its lawsuit, the company also said that its government version of Claude, "Claude Gov," carries fewer restrictions for military customers than the civilian product, a sign of how far Anthropic was already willing to adjust its technology for defense use even as it insists on a few bright lines.

Anthropic co-founder Dario Amodei has been clear that he sees both risks and uses for AI in a military context. He has warned about dangers such as biological threats and misuse by autocratic states, while arguing that democracies should not be left behind in deploying advanced AI for defense. He has also raised concerns about control over autonomous systems becoming concentrated in too few hands.

Still, Amodei has said he wants the company to keep working with the Defense Department. He wrote that Anthropic "has much more in common with the Department of War than we have differences," and that the company is comfortable with all but a small number of specific military use cases.

What the Pentagon is doing

The Defense Department has pushed back against Anthropic, including by designating the company a supply chain risk. The government has reportedly used Anthropic's Claude for target selection and analysis in operations related to Iran, a use the company has not publicly objected to. At the same time, other AI firms have secured new classified and unclassified contracts with the DoD, signaling a broader industry willingness to supply technology for military purposes.

Why this matters

The Anthropic case highlights a shifting reality in tech ethics and business. Companies that once declined military projects are now exploring or accepting defense contracts. Some firms are shaping policies to allow that work, while others, like Anthropic, are trying to set narrow limits. The result is a patchwork of stances across the industry and a renewed debate about whether corporate restraint can keep pace with government demand for advanced AI.

As Margaret Mitchell, an AI researcher and chief ethics scientist at Hugging Face, put it: "If people are looking for good guys and bad guys, where a good guy is someone who doesn't support war, then they're not going to find that here." That blunt assessment underlines the messy reality: many companies are caught between ethical pledges, competitive pressures, and government interest in powerful AI.

For now, the Anthropic lawsuit will be a test case for how far firms can set limits on military use of AI, and whether those limits will hold when national security priorities press hard against them.