Written by: Isaac Saul

What happened between the Pentagon and Anthropic?

Plus, a question about transgender youth and parental rights.

Defense Secretary Pete Hegseth arrives at the Capitol to brief senators on the situation in Venezuela — January 7, 2026 | REUTERS/Jonathan Ernst, edited by Russell Nystrom

I'm Isaac Saul, and this is Tangle: an independent, nonpartisan, subscriber-supported politics newsletter that summarizes the best arguments from across the political spectrum on the news of the day — then “my take.”

Are you new here? Get free emails to your inbox daily. Would you rather listen? You can find our podcast here.

Today’s read: 14 minutes.

🤖
The DoD ditches Anthropic for OpenAI, and a question about one of the stories Trump discussed in his State of the Union address.

A special report.

We know that all eyes are on Iran right now, but we also wanted to highlight a special report from West Texas that we broke over the weekend. Last week, the Trump administration waived a series of environmental laws and regulations to begin awarding contracts for border wall construction in Texas’s Big Bend region, where Tangle Executive Editor Isaac Saul owns land. Isaac wrote about what he’s hearing from residents, the reason he’s opposed to a wall, and why he hopes Trump abandons the plan. This is a special report, available for free to all readers. You can read it here.

Quick hits.

  1. U.S. Central Command announced the U.S. death toll in the conflict with Iran rose to six after two service members’ bodies were recovered from an Iranian strike on a military position in Kuwait. (The update) Separately, Secretary of State Marco Rubio said that the “hardest hits are yet to come from the U.S. military.” Rubio also said that, after Israel shared its plans to attack Iran, the administration deemed the threat of a counterattack on U.S. assets imminent. (The comments)
  2. Voters will take part in primary elections on Tuesday in Arkansas, Texas, and North Carolina. The Democratic and Republican Senate primaries are the most closely watched races in these states. (The primaries)
  3. The Supreme Court voted 6–3 to temporarily block a California law that limits when schools can notify parents about a student’s transgender identity while a challenge to the policy continues. (The ruling) Separately, the Court temporarily blocked an effort to redraw the lines of New York’s 11th Congressional district, represented by Rep. Nicole Malliotakis (R), overruling a lower court judge who found that the current map violates the state constitution. The Court’s three liberal justices dissented. (The decision)
  4. The House Oversight Committee released video of its separate closed-door hearings with former President Bill Clinton and former Secretary of State Hillary Clinton last week. (The videos)
  5. French President Emmanuel Macron said his country will grow its nuclear arsenal and work more closely with European allies on deterrence measures, a departure from France’s longstanding nuclear-weapons policy. (The announcement)

Today’s topic.

The Anthropic–Pentagon dispute. On Friday, President Donald Trump ordered federal agencies to immediately cease their use of the artificial intelligence (AI) company Anthropic’s technology after it declined to allow the Pentagon unrestricted access to its models. Later in the day, Defense Secretary Hegseth directed the Pentagon to designate Anthropic as a “supply chain risk to national security,” which he said would bar contractors, suppliers, or partners doing business with the U.S. military from conducting commercial activity with Anthropic. Shortly after, OpenAI announced it had agreed to let the Pentagon use its AI models within classified systems. 

Back up: The Department of Defense has been using Anthropic’s technology — most notably, its AI assistant Claude — since last year, including within classified systems. However, the Defense Department has recently sought increased access to the technology, which Anthropic resisted. Specifically, the company refused to allow its models to be used for mass surveillance of U.S. citizens or the development of autonomous weapons. Defense Secretary Hegseth gave the company until Friday to reach an agreement, then moved quickly to advance a deal with OpenAI once the deadline passed.

In a post on Truth Social, President Trump called Anthropic a “RADICAL LEFT, WOKE COMPANY” that “made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.” While federal agencies were directed to stop using Anthropic’s technology immediately, Trump said agencies that currently rely on its models will have six months to phase out their use.

On Friday, Anthropic sued to challenge the “supply chain risk” designation. In a statement, Anthropic called the move “legally unsound,” adding that it “[did] not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons… [and] mass domestic surveillance of Americans constitutes a violation of fundamental rights.”

In its own statement, OpenAI said its deal with the Pentagon included restrictions on use similar to those Anthropic had requested, covering mass surveillance, autonomous weapons, and “high-stakes automated decisions,” such as a social credit system. Full details on those restrictions have not yet been made public, but an excerpt of the contract shared by OpenAI says the Pentagon may use its systems for “all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.”

Today, we’ll break down the Anthropic–Pentagon conflict, with views from the left, right, and technology writers. Then, Executive Editor Isaac Saul gives his take.

What the left is saying.

  • The left backs Anthropic, saying it was right to restrict the Trump administration’s access to a powerful technology.
  • Others add that further conflicts over government use of AI are coming.

In The New York Times, Maureen Dowd wrote “real despots hijack artificial intelligence.”

“Hegseth should be focused on our nerve-racking duel with Iran. Instead, he spent the week at war with Dario Amodei, the thoughtful chief executive of Anthropic and one of the few in Silicon Valley advocating for humanity,” Dowd said. “President Trump and Hegseth already have a healthy disregard for democracy. Trump is trying to take over our elections because he’s rightly worried that his party is going to get shellacked in November. And now he’s escalating his push to remove the few pathetic guardrails that exist on A.I.”

“On Tuesday, Hegseth summoned Amodei to the Pentagon to demand that he let the Pentagon do whatever it wanted, as long as it was ‘lawful.’ This is poppycock, of course, because Trump and Hegseth have contempt for the law when it gets in the way of their whims, power grabs and revenge plots,” Dowd wrote. “The self-styled secretary of war offered Amodei a double ultimatum: He would invoke the Defense Production Act to compel Anthropic to give the Pentagon unrestricted use of its model, or he would designate it a supply-chain risk — a national security threat — which would put the company’s government contracts, and possibly the company itself, in jeopardy. Anthropic had a choice: be extorted or blacklisted.”

In MS NOW, Hayes Brown argued “Anthropic was right not to trust Pete Hegseth.”

“The Pentagon’s inability to accept constraints isn’t necessarily unique to Trump or Hegseth. The defense community has pushed back hard against anything seen as a constraint on potential actions under GOP and Democratic administrations alike,” Brown said. “History is littered with examples, from America’s refusal to join the International Criminal Court to rejecting the international treaty banning land mines. Whether the Pentagon intends to use Claude in the ways Anthropic rejects is in many ways secondary to the idea that the military would accept guardrails on its actions from outside the chain of command.”

“Given the competition’s amoral mindset toward products that have the potential to cause massive harm, the bar for assessing Anthropic’s self-regulation is so low as to truly be in hell,” Brown wrote. “As with almost every major technological leap, America’s laws are deeply lagging when it comes to policing the rapid growth of AI. Without real safeguards and regulations, there’s little stopping the Pentagon from blacklisting a company that dares draw the line at having Americans’ data siphoned up rather than foreigners’, or at having a robot being the one pulling the trigger.”

What the right is saying.

  • Many on the right see the administration’s demands as legitimate but question the utility of battling Anthropic.
  • Others say the prospect of autonomous AI weapons creates difficult trade-offs.

In The Wall Street Journal, Allysia Finley wrote about “Trump’s road to war with Anthropic.”

“Many Silicon Valley leaders lean left, but most fiercely oppose government efforts to regulate AI. Mr. Amodei broke with his competitors by endorsing a Biden executive order that imposed federal oversight of AI models. Anthropic also lobbied for regulation of AI by states such as New York and California and opposed the administration’s efforts to pre-empt state laws,” Finley said. “All this made Mr. Amodei persona non grata in the Trump administration. David Sacks, a Silicon Valley venture capitalist who serves as Mr. Trump’s AI czar, accused Mr. Amodei and other ‘AI doomers’ of stoking public fear to encourage government control of AI.”

“Trump officials are right that the sort of AI regulation Mr. Amodei has advocated could amount to unilateral disarmament by slowing innovation,” Finley wrote. “The Chinese, Russians and other U.S. adversaries won’t handcuff themselves with regulation. Yet banning government agencies and contractors from using Anthropic tools — which Pentagon officials favor for their dexterity — will handcuff the U.S. and could damage national security.”

In The New York Times, Ross Douthat asked “if A.I. is a weapon, who should control it?”

“It’s easy to get Skynet vibes from the Pentagon’s demands. As Matt Yglesias noted, all the weird and complicated scenarios spun out by A.I. doomers get a lot simpler if our government decides to start building autonomous killer robots,” Douthat said. “That’s not what the Pentagon says it intends to do. Its professed concern is that it can’t embed a crucial technology into the national security architecture and then give a private company a general ethical veto over its use, even if those ethics seem reasonable on paper. Doing so outsources decisions that are supposed to be made by an elected president and his appointees.”

“Over the long run, though, one can imagine Pentagon officials offering some advantages over the typical A.I. mogul when it comes to safety and control,” Douthat wrote. “First, they tend to be focused more on concrete strategic objectives than on machine gods and the Singularity. Second, they are constrained from certain gambles by bureaucratic caution and the chain of command. Third, they answer to the public, through elections and civilian control, in a way that C.E.O.s do not.”

What technology writers are saying.

  • Some technology writers worry that OpenAI’s Pentagon deal will bring about the dangers Anthropic sought to avoid.
  • Others question Anthropic’s claim to veto power over how the government uses its technology. 

In MIT Technology Review, James O’Donnell said “OpenAI’s ‘compromise’ with the Pentagon is what Anthropic feared.”

“OpenAI took great pains to say that it had not caved to allow the Pentagon to do whatever it wanted with its technology… You could read this to say that OpenAI won both the contract and the moral high ground, but reading between the lines and the legalese makes something else clear: Anthropic pursued a moral approach that won it many supporters but failed, while OpenAI pursued a pragmatic and legal approach that is ultimately softer on the Pentagon,” O’Donnell wrote. “It’s not yet clear if OpenAI can build in the safety precautions it promises as the military rushes out a politicized AI strategy during strikes on Iran, or if the deal will be seen as good enough by employees who wanted the company to take a harder line.”

“The whole reason Anthropic earned so many supporters in its fight — including some of OpenAI’s own employees — is that they don’t believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance. And an assumption that federal agencies won’t break the law is little assurance to anyone who remembers that the surveillance practices exposed by Edward Snowden had been deemed legal by internal agencies and were ruled unlawful only after drawn-out battles,” O’Donnell said. “On this front, we’ve essentially ended up back where we started: allowing the Pentagon to use its AI for any lawful use.”

In Stratechery, Ben Thompson explored “Anthropic and alignment.”

“What is the standard by which it should be decided what is allowed and not allowed if not laws, which are passed by an elected Congress? Anthropic’s position is that Amodei — who I am using as a stand-in for Anthropic’s management and its board — ought to decide what its models are used for, despite the fact that Amodei is not elected and not accountable to the public,” Thompson wrote. “And, on the second point, who decides when and in what way American military capabilities are used? That is the responsibility of the Department of War, which ultimately answers to the President, who also is elected.”

“I do have tremendous discomfort about AI’s surveillance capabilities in particular; there are a lot of safeguards we thought we had that were actually mostly due to the friction entailed in overcoming them. AI, even more than computers and the Internet, is a friction solvent, and I completely understand why Anthropic’s pushback on this specific point resonates broadly,” Thompson said. “The way to address this new reality, however, is with new laws and through strengthening accountable oversight; cheering or even demanding that an unelected executive decide how and where such powerful capabilities can be used is the road [to] an even more despotic future.”

My take.

Reminder: “My take” is a section where we give ourselves space to share a personal opinion. If you have feedback, criticism or compliments, don't unsubscribe. Write in by replying to this email, or leave a comment.

  • Deciding who does, or should, have influence over advanced technologies is complicated.
  • This story is simple: Anthropic has the right to reject a government contract, and the government is trying to punish them for it.
  • OpenAI’s deal is rife with concerning loopholes, and the government’s AI position is now nonsensical.

Executive Editor Isaac Saul: The government has a right to deny contracts to any private company. Similarly, any private company has a right to refuse the terms that the government offers them.

What the government shouldn’t do (and maybe legally can’t do) is use its power to punish any private company that turns down a government contract.

That’s all easy to say, but a lot of questions complicate that simplicity. Who do we want holding the power of next-generation artificial intelligence: the government or tech companies? How do we hold each of them accountable? What happens if the CEO of a private company has more control over critical defense technologies than our top military generals? Would that diffusion of power necessarily be bad? These questions are not easy to answer.

But the basic story is still simple: Anthropic is a private company with technology the government wants (and already relies on). The government tried to work with Anthropic, but Anthropic didn’t like the terms of the deal — and when it walked away, the federal government tried to punish it in the harshest way possible. That’s not good governance. 

When the dust settled, OpenAI got a deal with restrictions that appeared to be similar to what Anthropic wanted, and some reporting suggested that personalities, not philosophies, killed the Anthropic deal. But the deals are not the same — far from it. And when we look at all the details about why the Anthropic deal fell through, the federal government comes away looking even worse.

On Thursday, Anthropic CEO Dario Amodei released a statement warning that AI can “undermine, rather than defend, democratic values,” an unsettling thing to say right after negotiating with the government. He added that he believed in “the use of AI for lawful foreign intelligence and counterintelligence missions,” but not for “mass domestic surveillance.” Based on the deal Amodei said the administration was asking for, the government could have used Anthropic’s tools to buy up “detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant,” feed that data into artificial intelligence, and create a “comprehensive picture of any person’s life — automatically and at massive scale.”

Amodei also said that his company supported the development of “partially autonomous weapons” like the ones used in Ukraine, but that frontier AI systems “are simply not reliable enough to power fully autonomous weapons… We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer.”

We can all read between the lines here: The government wants to be able to use these tools for mass surveillance of Americans and to create fully autonomous weapons. But Amodei left no room for any doubt, saying explicitly that the Department of Defense “will only contract with AI companies who accede to ‘any lawful use’ and remove safeguards in the cases mentioned above.” 

Not surprising. Definitely alarming. Then, when Anthropic walked away, the DoD responded by punishing them — sending a message to the entire industry that businesses could face devastating consequences if they don’t cooperate with the government.

In his newsletter Stratechery (under “What technology writers are saying”), Ben Thompson asked (paraphrasing here) why Amodei, and not Congress or the U.S. government, gets to decide what counts as a permissible use of Anthropic’s AI.

Again, the dynamics here are complicated, but I don’t think Thompson’s premise is quite right. 

The government can regulate the tools that already exist through the legislative process, but company leaders have a say in how their company’s products are used. That’s not unique to Anthropic; it applies to every company and the entire private sector. If Amodei believes his technology is so advanced that the laws haven’t caught up to properly regulate it, then he is within his rights to contractually limit who uses those tools and for what. 

Thompson also asked, “Who decides when and in what way American military capabilities are used?... Anthropic’s position is that an unaccountable Amodei can unilaterally restrict what its models are used for.”

Again, I don’t think this is quite right. American military capabilities do not include unfettered access to Anthropic’s tools — that’s the point. Amodei was negotiating terms of a deal to give the American military more capability by granting more access to his tools under certain conditions; if the government doesn’t like those conditions, which it doesn’t, then it doesn’t get the tools. That’s business. 

This, actually, happens all the time. As tech researcher Dean Ball wrote:

Every transaction of technology between a private firm and the military involves a contract (indeed, the companies that do this are called defense contractors for a reason), and these contracts routinely contain operational use restrictions (“system X cannot be used in countries Y,” a common restriction with telecommunications technology such as Elon Musk’s Starlink), technological limitations (“this fighter jet is only certified for uses in X conditions and use of it outside those conditions is a breach of warranty”), and intellectual-property restrictions (“the contractor owns, and may repurpose and resell, the knowhow and IP associated with X weapon system developed with public funds”).

What’s more, the government already agreed to these terms with Anthropic under the Biden administration. The Trump administration agreed, too, before changing its mind.

In the chaos of this deal blowing up, OpenAI’s Sam Altman swooped in to take advantage of the opportunity. Altman apparently has no problem working within the confines that Amodei rejected, though he’s trying desperately to convince the public that the terms he agreed to somehow protect the values Amodei espoused. 

They don’t.

OpenAI has claimed it has various safeguards in its contract with the Department of Defense to protect Americans, like a clause (emphasis mine) that “the AI System shall not be used for unconstrained monitoring of US persons’ private information as consistent with these authorities.” But the Defense Intelligence Agency (DIA), a spy agency inside the DoD, purchases public, bulk smartphone data. That data is already legal to purchase, so Altman’s assurance that his models will only be used within the bounds of the law is inadequate — the deal still leaves pathways open for domestic surveillance. Also, if OpenAI can’t be used for “unconstrained” monitoring, does any constraint of any kind mean the AI systems can now be used to monitor Americans? And what if the government is monitoring a noncitizen who just so happens to be speaking with a U.S. citizen, a loophole that has been known and discussed for years? Pressed by journalists to point to where exactly in its contract the worst outcomes here might be prohibited, OpenAI has not been able (or has declined) to do so.

Altman, in trying to play clean-up, shared his own extremely careful language about OpenAI and the Department of Defense never “intentionally” being used for domestic surveillance and limiting the “deliberate” tracking or monitoring of U.S. citizens. OpenAI now says it has introduced new language that could be helpful: “For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” But that language has not been formally agreed to, and you might forgive me if I don’t trust Sam Altman’s PR speak at this moment. 

To say this was a reputation-defining move by Altman is underselling it. I’m not naive enough to think Amodei and Anthropic are motivated purely by altruism and democratic freedoms, but I personally reacted to the news by canceling my ChatGPT subscription, attempting to delete all my data from OpenAI’s servers, and moving my limited AI-based work over to Claude. The choice has less to do with protesting Altman’s public stance than with protecting my own privacy, and I’m apparently not the only one.

I have to add that the whole of U.S. artificial intelligence policy is now comically backward. Anthropic, a U.S.-based company that apparently has some semblance of a moral compass in the democratic spirit, whose technology was literally used in a military operation over the weekend, has been labeled a “supply chain risk.” Yet the government’s relationship with Chinese AI providers like DeepSeek remains unchanged, despite credible reports that they’re being used against our interests. We are selling American chips to foreign companies to compete with us in the great AI race, but we’re threatening, punishing, and limiting companies operating in our domestic AI sector.

As ChatGPT might say: It’s not just worrisome, it’s nonsensical. And if this technology is as important, world-changing, and norm-shifting as people say — it’s going to be a very big problem. 

Take the survey: What do you think of the military partnering with AI services? Let us know.

Disagree? That's okay. Our opinion is just one of many. Write in and let us know why, and we'll consider publishing your feedback.

Your questions, answered.

Q: [During the State of the Union,] President Trump called out a girl in the audience who supposedly was being transitioned into a boy during school against her wishes and without the parents’ knowledge or approval. I’ve read several different reviews detailing the inaccuracies and truths of the State of the Union speech and no one has mentioned this call out.

I find it hard to believe any school would try to transition a student from one gender to the other against the student’s wishes and without parents’ involvement. What is the background and reality of this situation?

— Ann from Douglassville, PA

Tangle: The situation with Sage Blair that Trump described was a bit different from your summary. When she was in high school, Blair reportedly told a friend that she wanted to go by a male name and pronouns. A guidance counselor who overheard the conversation talked to Blair, and the school agreed to socially transition her. When the school notified her adoptive mother, Michele Blair (her paternal grandmother), Michele disapproved, but the school allegedly continued referring to Blair with a male name and pronouns and allowing her to use the boys’ bathroom while withholding this decision from Blair’s grandmother. Michele Blair also claims that the school failed to notify her of bullying and harassment her daughter experienced from male peers, which she alleges resulted in Blair running away from home and being sex-trafficked.

Trump’s description of this story was consistent with Michele Blair’s claims in a lawsuit against the school, though because the president did not specify that Blair’s transition was social, his account could imply that the school had supported a medical transition. Attorneys for the school argued that it had no duty to inform Michele Blair of Sage’s transition and that the school’s actions could not be factually linked to Sage’s leaving home.

Other cases of schools moving forward with social transitions without informing parents have been recorded, but the frequency of these events is difficult to gauge. In most of these cases, the schools argue that the child consented to the transition and wanted it hidden from parents.

Want to have a question answered in the newsletter? You can reply to this email (it goes straight to our inbox) or fill out this form.

Under the radar.

On February 26, a Kansas law went into effect mandating that identity documents, including birth certificates and driver’s licenses, list sex assigned at birth rather than sex matching gender identity. The law also reversed all prior changes residents had made to the sex listed on those documents. Because the law had no grace period, more than 1,000 transgender Kansans’ official documents, including driver’s licenses, were invalidated as soon as the law went into effect. According to Harper Seldin, an attorney for the American Civil Liberties Union, the Kansas law is the first to retroactively invalidate residents’ legally obtained driver’s licenses. USA Today has the story.

Numbers.

  • $200 million. The ceiling value of the original deal between Anthropic and the Department of Defense.
  • 2.56 trillion. The number of tokens processed by Anthropic’s Claude Sonnet 4.5 in the past month, the fifth-most of any large language model (LLM), according to OpenRouter.
  • 1.28 trillion. The number of tokens processed by OpenAI’s gpt-oss-120b model in the past month, the thirteenth-most of any LLM.
  • 2.18 trillion. Anthropic’s token volume on OpenRouter during the week of February 21, 2026.
  • 1.42 trillion. OpenAI’s token volume during the same week.

The extras.

  • One year ago today we wrote about the Oval Office blowup between Trump and Zelensky.
  • The most clicked link in yesterday’s newsletter was once again the Supreme Court’s ruling about the Postal Service.
  • Nothing to do with politics: An insanely cute baby pygmy slow loris learning to climb.
  • Yesterday’s survey: 4,989 readers responded to our survey on the U.S. going to war with Iran with 71% opposed. “We should not go to war without approval from Congress! The president shouldn’t be the one to decide,” one respondent said. “This evil must be stopped. We can’t keep kicking the can down the road,” said another.

Have a nice day.

Omar Yaghi grew up in a refugee community in Jordan without access to running water or electricity; in 2025, he won the Nobel Prize in chemistry for developing materials that can pull drinking water from desert air. Now, Yaghi has invented a machine that runs on thermal energy — and, according to his company Atoco, can generate up to 1,000 liters of clean water every day, even in arid climates. For islands like Carriacou, which was devastated by Hurricane Beryl in 2024 and which currently imports water from Grenada during the dry season, the invention shows promise. “The technology’s ability to function off-grid using only ambient energy is particularly compelling for our context,” Carriacou government official Davon Baker said. The Guardian has the story.
