Anthropic’s line in the sand—and why it matters

6 March 2026 | 6 minute read

When technology companies talk about ethics and values, the language is elevated, promising a future of possibilities, efficiencies, and the common good. But the test of those words comes when a company discovers that its products are being used for unintended purposes (the classic dual-use problem), or when a government demands that the company provide technology that can harm human rights (such as by imposing mass surveillance). That is why the recent stand taken by the artificial intelligence company Anthropic in response to the US Department of Defense’s demands is noteworthy. It suggests that at least one major AI developer is trying to define the boundaries of what it will (and will not) do before the technology outpaces the moral conversation.

The company’s position deserves praise, not because it settles the debate, but because it insists that the debate must happen. Other companies in the AI space have acceded to the Pentagon's demands, which makes Anthropic's stance all the more notable. Anthropic’s response centres on the need for clarity around what constitutes lawful use, and it raises serious concerns about the development and deployment of fully autonomous weapons that can operate without human supervision or oversight. Anthropic also has major concerns that its products could be used for mass domestic surveillance.

In recent days, the situation for the company has become more complicated: the Trump administration has now characterised Anthropic as a supply chain risk, a serious classification usually reserved for companies from countries with which the US is at war. Anthropic is challenging the designation legally and continues to maintain its dialogue with the Department of Defense. Anthropic is right to say that strategic decisions about war belong with legitimate governments and elected representatives, not with private corporations. That said, Anthropic must stay firm in its position: we simply do not know enough about what AI is capable of, caution is essential, human supervision is required, and mass surveillance is incompatible with democracy.

It should also be said that recent reports suggest Anthropic’s technology has been used in some form during the US-Israeli war with Iran, which legal scholars call an illegal ‘war of choice’. As we shall see, this calls for the company to apply a human rights lens and align its policies with international law and standards. AI systems used in intelligence analysis or operational planning can still influence lethal decisions indirectly. If a system helps prioritise targets, interpret surveillance data, or identify potential threats, it becomes part of the chain that leads to military action. If Anthropic wishes to maintain credibility in its ethical commitments, it should explain how its systems were used and what safeguards were in place, or, if this usage took place without its knowledge, investigate further and make its findings public, as Microsoft has recently done.

The question of “lawful” use

Lawful use sounds good in principle, but which law applies, who develops and interprets it, and how it is applied are all legitimate questions. While Anthropic does not explicitly mention conflict or international law, its concerns deal with how its technology may be used in situations of armed conflict. A zone of conflict is not a law-free zone – international humanitarian law, the Geneva Conventions, and the laws of war apply, all of which deal with proportionality of force, protection of civilians, humane treatment of prisoners of war, and so on. “Lawful use”, by contrast, can be subjective: it is decided by an individual country and can be applied differently in different circumstances. Governments operate within legal frameworks; companies follow the law. But the concept of legality is not always straightforward. By asking for clarity, Anthropic is effectively saying that legality cannot simply be assumed. That is why the concept of “lawful use” must be defined carefully. If technology companies rely solely on narrow legal interpretations provided by governments, human rights protections can be undermined.

Anthropic also rejected the idea that its systems should be used for domestic surveillance of citizens. While surveillance is legal in some contexts, the law simply has not caught up with what this technology is capable of doing. These are not trivial choices. They reflect an understanding that the decisions made by AI developers today will shape how the technology is used in the most consequential domains of power: warfare, intelligence, law enforcement, and governance.

The automation of lethal force

Anthropic’s caution regarding automated weapon systems is particularly significant. The debate over what are often called “killer robots” has been simmering for years in international forums and civil society campaigns: should machines be allowed to make life-and-death decisions without meaningful human control?

Machine learning systems can misidentify objects. They can inherit biases from training data. They can behave unpredictably in complex environments. When these systems are integrated into weapons platforms, especially in ways that allow them to select or engage targets without human oversight, the margin for catastrophic mistakes increases. Human judgment, imperfect though it may be, carries moral and legal accountability. Laws apply to human beings, and human beings must abide by them. Algorithms follow rules; they do not exercise moral judgment.

This is why the concept of “meaningful human control” has emerged as a key principle in international discussions about autonomous weapons. The idea is straightforward: a human being must remain responsible for critical decisions about the use of lethal force.

Anthropic’s scepticism about automated weapons suggests it recognises this principle. It does not allow its technology to be used in systems that can select or engage targets without any human intervention, since it believes such applications can be dangerous for civilians and combatants alike.

Drawing a line on domestic surveillance

Equally important is Anthropic’s rejection of domestic espionage applications.

Modern AI systems can analyse enormous volumes of data: messages, voice recordings, images, location information. Used by governments, such capabilities could enable unprecedented forms of surveillance. Authoritarian regimes already deploy AI-powered monitoring to track dissent and control populations. Democracies are not immune to the temptation.

History offers a warning. Intelligence tools created for national security purposes often migrate into domestic contexts. Surveillance programs initially justified by terrorism concerns have expanded into broader monitoring infrastructures. (In some cases, contact-tracing systems introduced during the pandemic to curb its spread were later repurposed for domestic surveillance.)

For a company like Anthropic to say that its technology should not be used for domestic espionage is therefore significant. It reflects an understanding that AI could easily become a tool of political repression. Technology companies sometimes argue that they cannot control how their tools are used once released. That is only partly true. Companies make choices about clients, contracts, safeguards, and access. In some instances, they can also cut off a client’s access when the technology is misused. They can decide whether to build features designed explicitly for surveillance or targeting.

By drawing a line here, Anthropic signals that commercial success does not have to come at the expense of civil liberties.

From personal values to institutional policy

There is also a broader issue at stake: governance.

Anthropic has often framed its ethical commitments in terms of the values of its leadership, particularly its chief executive, Dario Amodei. While leadership matters, corporate ethics cannot depend solely on the personal convictions of a CEO. Values expressed by individuals are not the same as institutional accountability.

If Anthropic truly wants to lead in responsible AI development, it should move beyond general statements. One obvious place to start is the United Nations’ Guiding Principles on Business and Human Rights (UNGPs): on that basis, Anthropic should develop a comprehensive human rights policy that is embedded in its corporate governance structures and publicly available. The UNGPs establish a clear expectation: companies have a responsibility to respect human rights, develop a human rights policy, conduct due diligence to identify and assess risks, and address harms linked to their operations. They also emphasise transparency and remedy. Joining and participating in multi-stakeholder initiatives such as the Global Network Initiative would also support accountability, shared learning, and collective responsibility.

As Iain Levine and Dunstan Allison-Hope have argued, “A human rights policy is not just good for users and other rightsholders, but also good for business because it helps address legal compliance and reputational concerns. While there is much emphasis in the AI world on the critical need for innovation, which has tended to drive companies to oppose regulation, innovation ultimately depends on trust. And trust in AI will only be achieved when companies demonstrate that they are serious about addressing its potential harms through a clear and unambiguous commitment to human rights.” 

The industry is watching

Anthropic’s stance arrives at a moment when the entire AI industry is facing difficult choices. Anthropic’s rival OpenAI signed a deal with the Pentagon soon after the talks between Anthropic and the Department of Defense broke down, but the company faced significant backlash after admitting it could not control how the government would use its technology; its chief executive, Sam Altman, conceded that the deal it had struck to prevent its systems from being used intentionally for domestic surveillance was ‘opportunistic and sloppy’.

The decisions made by one company inevitably influence others. Anthropic’s decision to question automated weapons, resist domestic surveillance, and seek clarity about lawful use is therefore more than a corporate policy dispute. It is a recognition that technological capability must be matched by responsibility. By embedding human rights principles into its governance structures and demonstrating that its commitments apply consistently across all markets and partnerships, Anthropic could set a valuable precedent: ethical safeguards are compatible with technological leadership. If it does so, Anthropic will not merely have taken a principled stand in a single controversy; it will have helped define what responsible AI leadership looks like in an age when the stakes could not be higher.