Trump administration bans Anthropic AI after refusal to grant military access

The US government under President Donald Trump has taken a hard line on AI firms refusing military cooperation. In late February 2025, tensions escalated when Defense Secretary Pete Hegseth demanded full access to Anthropic's Claude model by February 27. When the company refused, Trump ordered a complete ban on Anthropic systems across federal networks the following day.

Anthropic cited concerns over mass surveillance and autonomous weapons as reasons for its refusal. The firm has since announced plans to take legal action against the administration's decision.

The standoff began on February 26, when Hegseth issued an ultimatum insisting on unrestricted access to Anthropic's AI technology. The company rejected the demand, arguing that such access could enable harmful applications. By February 28, Trump had retaliated by banning all Anthropic products from government use.

Experts in AI ethics have backed Anthropic's position, praising its commitment to safeguards against autonomous weapons and mass surveillance. Meanwhile, other firms like xAI have complied with military requests, while OpenAI continues to negotiate terms with protective measures in place.

A group of Catholic moral theologians and ethicists later voiced support for Anthropic's legal challenge. They agreed with the company's stance on AI-driven weapons, even arguing that such systems could never be morally justified, regardless of reliability. The scholars also opposed mass surveillance, citing Catholic teachings on privacy and the principle of subsidiarity, which warns against excessive centralisation of power.

Their statement highlighted the risks of AI in warfare, particularly the erosion of human responsibility in life-and-death decisions. Rapid, automated choices in combat, they argued, fail to meet the moral standards required for just war under Catholic doctrine. The principle of subsidiarity, they added, further condemns mass surveillance as a tool that weakens individual freedom and strengthens authoritarian control.

Anthropic's lawsuit now moves forward, challenging the government's ban and its demands for AI access. The dispute has drawn wider attention to ethical concerns around military AI, with religious and academic voices reinforcing the company's objections. For now, the ban remains in place, leaving federal agencies without Anthropic's technology while the legal battle unfolds.