The Justice Department says Anthropic can’t be trusted with warfare systems

The Trump administration argued in a court filing Tuesday that it did not violate Anthropic’s First Amendment rights by designating the AI developer as a supply chain risk, and predicted the company’s case against the government would fail.

“The First Amendment is not a license to unilaterally impose contractual terms on the government, and Anthropic cites nothing to support such a radical conclusion,” US Department of Justice lawyers wrote.

The response was filed in federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon’s decision to hit the company with a label that could bar it from defense contracts over concerns about potential security vulnerabilities. Anthropic argues that the Trump administration overstepped its authority in imposing the label and barring the department from using the company’s technology. If the designation is upheld, Anthropic could lose billions of dollars in expected revenue this year.

Anthropic wants to resume business as usual until the litigation is resolved. Rita Lin, the judge overseeing the case in San Francisco, has scheduled a hearing for next Tuesday to decide whether to grant Anthropic’s request.

Justice Department lawyers, writing for the Defense Department and other agencies in Tuesday’s filing, described Anthropic’s concerns about potential business losses as “legally insufficient to constitute irreparable harm” and urged Lin to deny the company a reprieve.

The lawyers also wrote that the Trump administration was motivated to act because of “concerns about Anthropic’s potential future conduct if it retains access” to government technology systems. “No one has purported to limit Anthropic’s expressive activity,” they wrote.

The government argued that Anthropic’s push to limit how the Pentagon could use its AI technology led Defense Secretary Pete Hegseth to “reasonably” determine that “Anthropic personnel could sabotage, maliciously introduce an unwanted function, or otherwise undermine the design, integrity, or operation of a national security system.”

The Department of Defense and Anthropic are fighting over potential restrictions on the company’s Claude AI models. Anthropic maintains that its models should not be used to facilitate widespread surveillance of Americans and that they are not currently reliable enough to power fully autonomous weapons.

Several legal experts previously told WIRED that Anthropic has a strong case that the supply chain measure constitutes unlawful retaliation. But courts often defer to the government on national security questions, and Pentagon officials have portrayed Anthropic as a delusional contractor whose technology can’t be trusted.

“Specifically, the DoW is concerned that allowing Anthropic continued access to the DoW’s technical and operational warfighting infrastructure would result in an unacceptable risk to the DoW’s supply chains,” Tuesday’s document said. “Artificial intelligence systems are highly vulnerable to manipulation, and Anthropic may attempt to disable its technology or preemptively alter the behavior of its model prior to or during ongoing military operations if Anthropic, in its sole discretion, feels that its corporate ‘red lines’ have been crossed.”

The Department of Defense and other federal agencies are working to replace Anthropic’s AI tools with products from competing technology companies over the next few months. One of the most popular uses of Claude in the military is through Palantir data analysis software, people familiar with the matter told WIRED.

In Tuesday’s filings, attorneys argued that the Pentagon “cannot simply flip a switch at a time when Anthropic is currently the only AI model authorized for use” on the department’s “classified systems and high-intensity combat operations.” The department is working to implement AI systems from Google, OpenAI, and xAI as alternatives.

A number of companies and groups, including AI researchers, Microsoft, a federal employee union, and former military leaders, have filed court briefs in support of Anthropic. None have been filed in support of the government.

Anthropic has until Friday to file a counter-response to the government’s arguments.
