has been less than ethical. See this:
Anthropic Isn't a #Resistance Hero (Slate, March 3, 2026)
https://www.democraticunderground.com/100221069072
https://slate.com/technology/2026/03/ai-anthropic-openai-pentagon-resistance.html
The hullaballoo around Anthropic's fight overshadowed another major development last week: The company was ditching its responsible scaling policy, a safeguard, unique within the sector, meant to prevent it from developing risky A.I. tools too quickly. It's not the first time Anthropic has been so flexible with its self-imposed rules. In 2024, it scrapped its blanket ban against selling Claude products to government spy agencies; just after Trump's reelection, it also partnered with Palantir and Amazon to sell their tools to U.S. military customers. This year, the Pentagon made use of the Palantir-Anthropic suite in planning the kidnapping of Venezuelan President Nicolás Maduro, a campaign that killed dozens of locals. Even after the capture, Anthropic participated in a Pentagon bidding contest, proposing a system whereby Claude would interpret voice commands so as to guide offensive, semi-autonomous drone swarms that would employ some human backup.
In the most technical sense, none of this violates the red lines that Amodei outlined around surveilling Americans or allowing his tech to power fully autonomous killing machines. But those lines appear all the thinner when you consider that Anthropic willingly outsourced Claude use to two corporations, Palantir and Amazon, that are actively enthusiastic about both applications, especially in partnership with this administration.
That kind of convenient ethical punt has been a constant of Anthropic's brief life span. Long before it reneged on its promise of responsible and careful A.I. development, Anthropic used the same unethical shortcuts that have invited so much opprobrium upon competitors like Meta and OpenAI: mass-pirating copyrighted books and songs to speed up model training, allegedly circumventing Reddit's anti-A.I.-crawler protections, and extending its timeline for retaining users' private chats and Claude sessions. For a company founded by ex-OpenAI executives disaffected with Sam Altman's business practices, it seemingly has little compunction about the aggressive tacks it's already taken to shore up its $380 billion bottom line.
Every single generative AI company that trained its AI on data sets of stolen intellectual property - and I'm not aware of any that didn't - made a deliberate unethical choice to steal IP and harm the owners of that IP. The genAI industry is built on theft.
Anthropic is slightly less unethical than other AI companies working with Trump and the Pentagon. And I'm glad they made that decision last week.