    The fight between Trump and Anthropic is also about nuclear weapons

By InfoForTech | March 1, 2026 | 5 Mins Read


    President Donald Trump ordered the entire federal government to stop using products from the AI company Anthropic on Friday to stop what he called a “radical left, woke company” from encroaching on the military’s decision-making.

The public feud between the Pentagon and Anthropic, which resulted in the firm's blacklisting, has effectively become a proxy for the larger battle over the future governance of AI.

The coverage has focused on Anthropic's refusal to budge on its two "red lines" — using its product in mass domestic surveillance or to power fully autonomous weapons — and on whether Defense Secretary Pete Hegseth's Pentagon can be trusted with powerful software under the looser standard the administration demands: that it merely be used in a "lawful" manner.

    But, according to reports this week, the confrontation that sparked the feud actually focused on a different but related issue: how AI might be used in the event of a nuclear attack on the United States.

    Semafor and the Washington Post have reported that in early December, Under Secretary of Defense for Research and Engineering Emil Michael asked Anthropic’s Dario Amodei whether, in a scenario where nuclear missiles were flying toward the US, the company would “refuse to help its country due to Anthropic’s prohibition on using its tech in conjunction with autonomous weapons.” Administration sources say Michael was infuriated when Amodei said the Pentagon should reach out and check with Anthropic. Anthropic denies the story and says it was willing to create a carve-out for missile defense, but either way, the conversation poisoned relations between the two institutions. (Disclosure: Vox’s Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; they don’t have any editorial input into our content.)

    As I reported for Vox in November, there’s an active and ongoing debate over whether and how artificial intelligence should be integrated into nuclear command and control systems. We don’t know to what extent it already is, but we do know that the US military is actively looking at ways AI and machine learning can be used “to enable and accelerate human decision-making.”

Discussions around nuclear weapons and AI tend to focus on whether machines would ever be given the ability to launch nuclear weapons, and on the imperative to keep a "human in the loop" for decisions about the use of humanity's deadliest weapons. But many experts and officials say that debate is the low-hanging fruit: Neither the US, nor any other country, is likely to ever hand over decisions on whether to order a nuclear strike to AI.

    A much trickier question is the degree to which AI should be relied on for functions like “strategic warning” — synthesizing the massive amount of data collected by satellites, radar, and other sensor systems to detect potential threats as soon as possible.

    This is the sort of hypothetical use case that it sounds like Michael was proposing to Amodei. If the system is only being used to give us a better chance of shooting down an incoming missile, it might seem like a no-brainer.

    But in a scenario where the US was under attack by ballistic missiles, the president would immediately be faced with a decision — which would have to be made in only minutes — about whether to retaliate, potentially setting off a full-blown nuclear war.

    The lives of millions of people might rely on the system getting it right — and there are plenty of examples from the history of nuclear weapons of detection systems leading to near-misses that were only averted by human intuition.

The technology to do that kind of threat detection likely doesn't exist yet, which, given the stakes, may have been one reason Amodei was reluctant to commit to such a scenario.

    Retired Lt. Gen. Jack Shanahan, who flew nuclear missions in the Air Force and was later the head of the Pentagon’s Joint Artificial Intelligence Center, told Vox that if nuclear threat detection and response were turned over to artificial intelligence agents, “I don’t want to say it’s certain that there’s going to be a catastrophe, but I think you’re heading down that path.”

He pointed to a widely reported study released this week by a researcher at King's College London, which found that AI models including Claude, ChatGPT, and Google Gemini were far more likely than human participants to recommend nuclear options in simulated war games. In such a scenario, an AI might not be launching a weapon itself, but a president would have to overrule the prescription of a panicked-sounding, multibillion-dollar system under extreme pressure.

    One factor that makes military use of AI different from previous technologies with obvious national security uses is that in this case, much of the cutting edge research was done by private firms that initially had an eye on the commercial market, rather than companies responding to demand from the military. (An example of the latter case would be the internet, which evolved from Defense Department and academic projects long before companies found commercial uses for it.)

The new dynamic is bound to lead to culture clashes — particularly between Pete Hegseth's "anti-woke" Pentagon and a company like Anthropic, which, though it has so far been willing to let its product be used by the Pentagon, has built its public image around its concerns about AI safety.

    “Boeing would never object to building anything the government would ask them to build,” said Shanahan, who led the Pentagon’s controversial 2018 partnership with Google, Project Maven, a previous DC-Silicon Valley culture clash. “It’s a defense-industrial base company. [AI is] being born in a very different world with a group of people who don’t see things the way employees of Lockheed may have seen the Cold War. It’s Mars-Venus to an extent.”

    How the clash plays out, and whether other companies are willing to let their models be deployed with fewer questions asked, may go a long way toward determining what role AI might play in a hypothetical nuclear war.

    This story was produced in partnership with Outrider Foundation and Journalism Funding Partners.
