
highplainsdem

(61,420 posts)
1. Nice recap of what happened last week, but ignores ways in which Anthropic, like other AI companies,
Wed Mar 4, 2026, 11:36 AM

has been less than ethical. See this:

Anthropic Isn't a #Resistance Hero (Slate, March 3, 2026)
https://www.democraticunderground.com/100221069072

https://slate.com/technology/2026/03/ai-anthropic-openai-pentagon-resistance.html

The hullaballoo around Anthropic’s fight overshadowed another major development last week: The company was ditching its “responsible scaling policy,” a safeguard, unique within the sector, meant to prevent it from developing risky A.I. tools too quickly. It’s not the first time Anthropic has been so flexible with its self-imposed rules. In 2024, it scrapped its blanket ban against selling Claude products to government spy agencies; just after Trump’s reelection, it also partnered with Palantir and Amazon to sell their tools to U.S. military customers. This year, the Pentagon made use of the Palantir-Anthropic suite in planning the kidnapping of Venezuelan President Nicolás Maduro, a campaign that killed dozens of locals. Even after the capture, Anthropic participated in a Pentagon bidding contest, proposing a system whereby Claude would interpret voice commands so as to guide offensive, semi-autonomous drone swarms that would employ some human backup.

In the most technical sense, none of this violates the red lines that Amodei outlined around surveilling Americans or allowing his tech to power fully autonomous killing machines. But those lines appear all the thinner when you consider that Anthropic willingly outsourced Claude use to two corporations—Palantir and Amazon—that are actively enthusiastic about both applications, especially in partnership with this administration.

That kind of convenient ethical punt has been a constant of Anthropic’s brief life span. Long before it reneged on its promise of “responsible” and careful A.I. development, Anthropic used the same unethical shortcuts that have invited so much opprobrium upon competitors like Meta and OpenAI: mass-pirating copyrighted books and songs to speed up model training, allegedly circumventing Reddit’s anti-A.I.-crawler protections, and extending its timeline for retaining users’ private chats and Claude sessions. For a company founded by ex-OpenAI executives disaffected with Sam Altman’s business practices, it seemingly has little compunction about the aggressive tacks it’s already taken to shore up its $380 billion bottom line.


Every single generative AI company that trained its AI on data sets of stolen intellectual property - and I'm not aware of any that didn't - made a deliberate, unethical choice to steal IP and harm the owners of that IP. The genAI industry is built on theft.

Anthropic is slightly less unethical than other AI companies working with Trump and the Pentagon. And I'm glad they made that decision last week.

