Pentagon takes first step toward blacklisting Anthropic

tonyget

Well-known member

Pentagon takes first step toward blacklisting Anthropic​

Washington — The Pentagon gave Anthropic an ultimatum this week: Give the U.S. military unrestricted use of its AI technology or face a ban from all government contracts.

The Pentagon asked two major defense contractors on Wednesday to provide an assessment of their reliance on Anthropic's AI model, Claude — a first step toward a potential designation of Anthropic as a "supply chain risk," Axios has learned.

Why it matters: That penalty is usually reserved for companies from adversarial countries, such as Chinese tech giant Huawei.

At issue are the guardrails Anthropic placed on its AI model, Claude. The Pentagon, which has a $200 million contract with Anthropic, wants the company to lift its restrictions so the military can use the model for "all lawful use," according to two sources familiar with the discussions.

But Anthropic has concerns over two issues that it isn't willing to drop, the sources said: AI-controlled weapons and mass domestic surveillance of American citizens. According to one source familiar with the matter, Anthropic believes AI is not reliable enough to operate weapons, and no laws or regulations yet cover how AI could be used in mass surveillance.

Anthropic has long positioned itself as the AI company most concerned with AI safety. Its founders are all former OpenAI employees who left that company over disagreements about the ChatGPT maker's direction, approach to safety, and pace of AI development. Anthropic also recently announced it is giving $20 million to a political group campaigning for more regulation of AI.

https://edition.cnn.com/2026/02/24/tech/hegseth-anthropic-ai-military-amodei
 
US-based labs are throttling real-world uses of AI. The legal and moral stuff is a cope. They want to maintain control.