Michele Cimmino
Feb 27, 2026 • 7 min read

On February 24, 2026, Defense Secretary Pete Hegseth delivered an ultimatum to Anthropic CEO Dario Amodei: remove all restrictions on military use of the Claude AI model, or face consequences. The threats were not subtle. The Pentagon indicated it might invoke the Defense Production Act — an extraordinary measure never before applied to an AI company — or designate Anthropic as a "supply chain risk," effectively blacklisting it from defense work.
The next day, the Pentagon took its first concrete step: it asked Boeing and Lockheed Martin to assess their reliance on Anthropic's technology, signaling that blacklisting was not merely a negotiating tactic.
The irony is sharp. Anthropic's Claude is currently the only major chatbot approved for use on classified military systems. The model the Pentagon depends on most is built by the company the Pentagon is now threatening to cut off.
This is not just a political drama. It is a structural revelation about the future of AI in defense — and it has profound implications for every organization building or buying military AI software.
The confrontation escalated over the course of a single week in February 2026.
On February 24, Axios reported that Hegseth met Amodei and delivered a Friday deadline: remove military-use restrictions on Claude or face action. Anthropic refused, maintaining its prohibition on using Claude for weapons control or mass surveillance. The very next day, CNN reported that Anthropic quietly changed its core safety policy — the company says the change is "separate and unrelated" to the Pentagon discussions. Simultaneously, the Pentagon asked Boeing and Lockheed Martin to evaluate their Anthropic dependency. Also on February 25, CBS News reported that the Pentagon sent Anthropic a "best and final offer" for unrestricted military use; Anthropic again requested guardrails, and the Pentagon refused.
By February 26, Politico reported that tech lawyers and AI policymakers called the Pentagon's plans "legally and technically incoherent." The Wall Street Journal revealed that the US military had already used Claude during a raid, prompting Anthropic to reconsider its policies. That same day, Nvidia CEO Jensen Huang commented publicly that the Pentagon-Anthropic rift is "not the end," implying more conflicts between big tech and defense are inevitable.
The dispute crystallized a question that has been building for years: who controls the rules of engagement for military AI — governments or the companies that build it?
The Anthropic situation is dramatic, but it is a symptom, not the disease. The underlying problem is structural: military organizations are building critical capabilities on top of commercial AI platforms whose providers may, at any point, change their policies, restrict access, or impose conditions that conflict with operational needs.
This is not unique to Anthropic. Google pulled out of Project Maven in 2018 after employee protests. Microsoft faced internal opposition to its HoloLens military contract. OpenAI's policies on military use have shifted multiple times. Every major AI provider has, at some point, either restricted defense applications or faced pressure to do so.
For defense organizations — whether the Pentagon, European NATO allies, or national defense ministries — this creates an unacceptable dependency. There is policy risk: a company's internal ethics board can, overnight, decide that your mission is outside their acceptable use policy. There is supply chain risk: if a provider is blacklisted, designated a supply chain risk, or exits the defense market, organizations face capability gaps with no quick alternatives. And there is sovereignty risk: for European defense organizations, depending on AI models built by US companies introduces a layer of geopolitical vulnerability. US government actions — sanctions, export controls, or policy shifts — could restrict European access to critical AI capabilities.
The Pentagon's own actions prove the point. By threatening to invoke the Defense Production Act, the US government demonstrated that it views AI as a strategic resource that can be commandeered. European defense organizations should ask: if the US government can pressure an AI company to change its policies for US military benefit, what happens when European interests diverge from American ones?
The Anthropic standoff is happening against the backdrop of the most aggressive AI push in military history.
In January 2026, the Pentagon — now rebranded as the "Department of War" in a signal of its more aggressive posture — released a strategy memo mandating an AI-first approach across defense operations. The memo calls for substantial expansion of AI compute infrastructure, from centralized data centers to the tactical edge. It demands AI integration into every domain — land, air, sea, space, and cyber. And it establishes an "AI-first" default for acquisition and development decisions.
The Holland & Knight legal analysis of the memo describes it as inaugurating "a new era for defense contractors." AI is no longer optional. It is the baseline.
Meanwhile, the Chief Digital and AI Office (CDAO) continues to expand its mandate, pushing for enterprise-wide AI adoption. The 2026 defense budget allocates significant resources to AI infrastructure, and programs that were once experimental — autonomous targeting, predictive maintenance, intelligence fusion — are moving into production deployment.
The demand signal could not be clearer: the military wants AI everywhere, it wants it now, and it wants it without restrictions.
The Anthropic crisis illuminates why an increasing number of defense organizations are turning to custom-built AI solutions rather than licensing commercial models.
The advantages of custom AI development for defense are structural.
No policy conflicts. When you build your own AI, there is no third-party ethics board that can restrict your use cases. The organization that commissions the development defines the rules of engagement. This doesn't mean abandoning ethical considerations — it means ethical frameworks are set by democratically accountable institutions, not private companies.
Full ownership and control. Custom AI solutions belong to the client. There is no vendor lock-in, no licensing dependency, no risk of a provider changing terms. The models, the training data pipelines, the deployment infrastructure — all under the organization's control.
Mission-specific optimization. Commercial AI models are generalists — optimized for breadth, answering any question, generating any text. Military AI needs to be optimized for specific missions: analyzing satellite imagery for a particular theater, processing signals intelligence from specific sensor types, or coordinating autonomous vehicles in a defined operational environment. Custom models, fine-tuned on mission-relevant data, outperform general-purpose models on specific tasks.
Security architecture from the ground up. Commercial AI models are not designed for classified environments. Adapting them requires significant security engineering — air-gapped inference, classification-level isolation, audit trails for every query and response. Custom-built solutions can embed these requirements from the architecture level, not bolt them on after the fact.
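As a concrete illustration of embedding audit requirements at the architecture level, the sketch below wraps a local model call in a tamper-evident audit log. This is a minimal, hypothetical example — the function names, fields, and stand-in model are assumptions for illustration, not a real defense API — but it shows the principle: every query and response is hashed and logged inside the enclave, so the audit trail itself carries no classified content.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Hypothetical sketch: an audit-trail wrapper around a local (air-gapped)
# inference call. All names here are illustrative assumptions.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("inference-audit")

def audited_inference(model, prompt: str, user_id: str, classification: str) -> str:
    """Run inference locally and record a tamper-evident audit entry."""
    response = model(prompt)  # local model call; data never leaves the enclave
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "classification": classification,
        # Store hashes rather than content, so the log stays unclassified
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    audit_log.info(json.dumps(entry))
    return response

# Usage with a stand-in "model" (a plain callable)
result = audited_inference(lambda p: "ACK: " + p, "status report", "analyst-7", "SECRET")
```

In a real deployment the log entries would additionally be chained or signed so that tampering is detectable, and the logger would write to an append-only store — details that depend entirely on the target environment.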
The defense AI landscape already reflects this shift. Companies like Palantir, Anduril, and Shield AI have built their businesses on defense-specific AI — not by licensing OpenAI or Anthropic models, but by building purpose-built AI systems for military applications. Helsing, the European AI defense company that won the €268 million German drone contract, follows the same model.
For European defense organizations, the Anthropic-Pentagon saga carries an additional warning.
The dispute demonstrates that US AI companies operate within a US political and legal framework that may not align with European strategic interests. When the Pentagon can pressure Anthropic to change its safety policies — or threaten to commandeer its technology under the Defense Production Act — European organizations using the same AI tools are exposed to decisions made in Washington, not Brussels.
This risk is compounded by several factors. GDPR and data sovereignty requirements mean that European defense data processed through US AI models may transit US-controlled infrastructure, raising legal and security questions under European data protection frameworks. Export control uncertainty is growing — US AI technologies may become subject to restrictions that limit European access, a risk that intensifies as US-China tensions escalate. And the EU's defense posture increasingly emphasizes strategic autonomy — the ability to act independently without dependency on non-European technology. AI is arguably the most critical domain where this autonomy must be established.
The path forward for European defense is clear: build European AI capabilities for European defense missions, using European software development partners who operate under European law and align with European strategic interests.
This doesn't mean isolation or reinventing everything. It means developing custom AI solutions where the intellectual property, the deployment infrastructure, and the governance framework are European. It means working with technology partners who understand both AI/ML engineering and the specific requirements of defense applications — security architecture, real-time processing, edge deployment, and mission-critical reliability.
Companies like Lasting Dynamics, with deep expertise in AI and machine learning development and European roots, can build bespoke AI solutions where the client owns the technology and defines the rules of engagement — no policy conflicts, no geopolitical dependencies, no third-party restrictions on mission-critical capabilities.
The Anthropic situation is not the last of its kind. Jensen Huang's comment — that this is "not the end" — is almost certainly correct. The tension between big tech's commercial and ethical considerations and the military's operational requirements is structural and will persist.
Organizations building AI capabilities for defense should:
Audit their AI dependencies. Map every AI model, platform, and service in use. Identify which are commercial products subject to provider policy changes. Assess the impact of a provider exit or restriction.
Develop build-vs-buy strategies for critical AI. Not every AI capability needs to be custom-built. But mission-critical applications — those where a provider policy change would create operational risk — should be on a path toward owned, custom solutions.
Invest in AI infrastructure sovereignty. Custom models need compute infrastructure for training and inference. Ensuring this infrastructure is under organizational or national control is a prerequisite for genuine AI autonomy.
Partner with software development companies that understand defense. The gap between a general-purpose AI vendor and a defense-capable AI developer is substantial. Look for partners with production-ready AI/ML capabilities, security-first architecture, agile delivery, and — for European organizations — European identity and governance.
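The first two recommendations — auditing AI dependencies and building a build-vs-buy strategy — can be sketched as a minimal dependency inventory. The data structure and field names below are illustrative assumptions, not a standard; the point is that flagging commercial, mission-critical dependencies is a mechanical exercise once the inventory exists.

```python
from dataclasses import dataclass

# Hypothetical sketch of an AI-dependency inventory entry;
# the fields are illustrative assumptions, not a standard schema.
@dataclass
class AIDependency:
    name: str
    provider: str
    commercial: bool        # subject to provider policy changes?
    mission_critical: bool  # would a provider exit create operational risk?

def at_risk(inventory: list[AIDependency]) -> list[str]:
    """Flag dependencies that should move toward owned, custom solutions."""
    return [d.name for d in inventory if d.commercial and d.mission_critical]

inventory = [
    AIDependency("intel-summarizer", "CommercialVendorA", commercial=True, mission_critical=True),
    AIDependency("maintenance-predictor", "in-house", commercial=False, mission_critical=True),
]
print(at_risk(inventory))  # -> ['intel-summarizer']
```

Every dependency this filter surfaces is one where a provider's ethics board, a blacklisting, or an export control could create a capability gap overnight — exactly the exposure the Anthropic episode made visible.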
The age of AI in defense is here. The question is not whether military organizations will use AI — they already do, and they will use more. The question is whether they will control it, or whether they have outsourced that control to companies whose priorities may diverge from the mission at the worst possible moment.
The Pentagon found out the answer to that question in February 2026. The lesson is available to any organization willing to learn from it.
Lasting Dynamics builds custom AI and machine learning solutions for organizations that need full ownership and control of their AI capabilities. To discuss how we can help build defense-grade AI systems tailored to your mission, contact our team.
Internal Links:
- Europe's €800 Billion Defense Rearmament: Why Software Is the New Battleground
- The Machine War: How Ukraine's Robot Army Is Rewriting Autonomous Warfare
- Defense Cybersecurity in 2026: AI Threats, CMMC 2.0, and the Race to Secure Military Systems
- Software Development for the Defense Industry: The Complete Guide
Michele Cimmino
I believe in hard work and daily commitment as the only way to achieve results. I feel an inexplicable pull toward quality, and when it comes to software, it is this motivation that keeps my team and me firmly grounded in agile practices and continuous process reviews. I bring a strong competitive mindset to everything I take on: I don't stop working until I've reached the top, and once there, I start working to keep that position.