Anthropic CEO Scratches Pete Hegseth's Belly While Telling Him No
Dario Amodei has responded to the Pentagon's ultimatum on AI-controlled murder drones and mass surveillance.
If you are not caught up on this week’s drama between AI giant Anthropic, which runs the chat robot Claude, and Pete Hegseth’s Pentagon, which apparently wants to use Claude to mass-surveil innocent Americans and murder them with drones, click here.
The simplest CliffsNotes version is that the Pentagon had given Anthropic until 5:01 p.m. today to remove all its safeguards, specifically the ones that prevent Claude from being used for said mass surveillance, and for making final targeting decisions on what to bomb the shit out of, without human involvement. In other words, killer drones that can make their own drone-killing decisions, as we explained yesterday.
Secretary Shitfaced is a not-very-bright bully, a model of human ideally suited to carrying shopping bags, blaming others for his own problems, and making loud grunting sounds. (But not kettlebell swings and pull-ups, he’s bad at those.) So he’s been threatening to invoke the Defense Production Act on Anthropic to give him the precious murder technology he craves. Is he getting off on the idea that if Claude made the final decisions on targeting and murdering, nobody could charge him personally with war crimes? Maybe. But to invoke the DPA would suggest that these murder drone surveillance capabilities are so crucial to national security, we just cannot live without them.
But simultaneously he’s threatening to designate Anthropic as a supply chain risk, which would seem to be in conflict with the idea that it is totally necessary for national security!
It’s almost like Secretary Shitfaced is a liar and a little bitch.
And Anthropic seems to have his number.
After we published yesterday, Anthropic issued a long statement ahead of today’s deadline, and it is a master class in telling that little wart-faced, make-up wearing weenus “no,” while distracting him by scratching his belly and catering to the exposed nerves of the poor little thing’s gaping masculine insecurities.
Get a load of this statement from Anthropic CEO Dario Amodei. We’re reproducing the whole thing because we want you to see what he did here. (Hint: Look at the words we bolded and italicized in the body.)
I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
Anthropic has also acted to defend America’s lead in AI, even when it is against the company’s short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of whom have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and have advocated for strong export controls on chips to ensure a democratic advantage.
Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:
Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.
Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.
To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date.
The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
Regardless, these threats do not change our position: we cannot in good conscience accede to their request.
It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.
We remain ready to continue our work to support the national security of the United States.
Well, that’s a firm “no.”
We like how throughout, Amodei keeps asserting that the United States under this administration should be fighting the autocrats and defending democracy, as opposed to the other way around. He even cites the defense of Ukraine as a good thing, which is sure to climb up Hegseth’s Russia-fellating ass and stop him up.
Amodei gently and slowly explains to Hegseth, like he’s five years old, why Claude, while very advanced for where AI is right now, is simply not safe for use in fulfilling whatever weird fantasies get Shitfaced’s dick hard. (And quite frankly, we think Shitfaced should focus on getting the Pentagon’s house in order to stop humiliating headlines like this one about how Pentagon dumbfucks just shot down a BORDER PATROL DRONE over a Texas border town on Thursday.)
Oh yeah, and he mentions out loud how stupid and contradictory Hegseth’s threats really are, and also exposes the sleight of hand the Pentagon is using with its bullshit insistence that it only wants to use Claude for “lawful” purposes. Lotta things might be “lawful,” because Congress hasn’t done shit to keep up with what this technology can do. In other words, same as it’s been with literally every rapidly advancing technology that’s ever existed.
And yet he does all that while, every few sentences, inserting a phrase that tickles the balls of Hegseth’s insecurities and makes him feel safe. “Department of WAR,” because Pete Hegseth feels like more of a man if he can be the secretary of WAR, because “defense” is for pussies, and you know who is not a pussy? The Secretary of WAR!
Likewise, “WARFIGHTERS,” because that is Pete Hegseth’s secret word like on Pee-wee’s Playhouse, he cheers and dances and claps his hands and piddles on himself whenever he hears it. “Warfighter” is his secret word every day, and the Anthropic CEO clearly knows it.
We’re just surprised Amodei didn’t work in the phrase “FAFO.” Hegseth really gets giggly with that one. At the same time, it might have artificially inseminated the WAR secretary with too much fake machismo. We wouldn’t want him to OD on synthetic manliness and end up on the floor in a paroxysmal fit.
Of course, “Chinese Communist Party” counts too, because that’s a prostate-stimulating egg for all Republicans. It’s one of their buzzwords, like when they insist on calling it the “Democrat Party” instead of its actual name, it sends a thrill right up their buttholes.
We’ll see how Amodei’s statement goes over with Hegseth, and if the Pentagon is prepared to make good on its competing threats to designate Anthropic as a terrorist group and also kidnap Claude and force it to serve as Hegseth’s personal Fleshlight when the clock strikes 5:01.
You’ll note at the end that Amodei says if this must be goodbye, then they’ll be first in line to help make sure it’s a clean break, so long, farewell, auf wiedersehen, fuck off. That suggests Anthropic can afford it. Maybe these other AI companies like OpenAI and xAI, with their weird creep leaders, really can’t. (Google obviously can, but Google does a whole lot more than Sam Altman’s and Elon Musk’s pervert robots.)
As we said yesterday, it’s telling that this is the argument we are having. All this AI shit is problematic at the very least, much of it downright evil, but this is happening. Stupid Hitler’s Pentagon is playing robot chicken — literally — because the best AI, the one that’s already in their systems, refuses to let them indiscriminately mass surveil Americans and drone-murder them in their sleep.
And this time, the company on the other side at least appears to be up to the fight, at least for now. That may not be the case next time.
The dark times are here!
Let’s see what happens at 5:01.
Want to read more Evan than just what’s at Wonkette? Visit The Moral High Ground and subscribe to it!
Follow me on Instagram!
And on BlueSky!
And on Facebook!