RPT-BREAKINGVIEWS-Pentagon clash exposes Anthropic's weak moat

Reuters
03/06
The author is a Reuters Breakingviews columnist. The opinions expressed are her own.

By Karen Kwok

LONDON, March 5 (Reuters Breakingviews) - Anthropic CEO Dario Amodei's collision with the Pentagon may turn the $380 billion artificial intelligence darling’s biggest strength into its biggest weakness. As U.S. defense chief Pete Hegseth moves to designate the Amazon.com AMZN.O-backed startup's technology a supply-chain risk, government contractors like Lockheed Martin LMT.N are expected to ditch its Claude model, Reuters reported. The dispute may still be resolved. Yet a rapid pivot away from the AI lab most successful at selling to big business would be an uncomfortable indicator that today's models are still far from sticky.

Anthropic has focused intently on winning enterprise customers. The company has said that they account for roughly 80% of its revenue, which Bloomberg reported on Monday is now running close to an annualized rate of $20 billion. That breakneck growth came as businesses rushed to automate coding and operations using Claude. Ramp AI data suggest Anthropic now leads U.S. business spending on models, surpassing arch-rival OpenAI.

Unfortunately for Amodei, big organizations are susceptible to political pressure. President Trump ordered all federal agencies to ditch Claude, with the Department of State moving to OpenAI for an internal tool. For now, Claude still powers Palantir's PLTR.O Maven Smart System, which is supporting U.S. military operations in Iran. But defense contractor Lockheed Martin said it expected minimal impact from any forced switch.

In an internal memo reported by The Information, Amodei framed the fallout as a principled refusal to compromise on safeguards around the use of its technology. At issue is contract language covering the "analysis of bulk acquired data" and the use of AI in autonomous lethal weapons.

If companies really do rip Anthropic from their systems, it would send a concerning signal that even leading models may be interchangeable. Many large institutions are experimenting with the technology and have yet to pick a winner. Amodei’s early-mover advantage could be under threat as rivals like OpenAI, Alphabet GOOGL.O, Mistral and Cohere rush in to provide an alternative to a vendor now seemingly at odds with the government.

Anthropic has reopened discussions with the Pentagon, Bloomberg reported, offering hope that a truce may yet be reached. The looming specter of competition with China could ultimately make officials leery of sidelining a leading lab. Nonetheless, the whole saga will naturally lead customers to contemplate how mission-critical Claude – or any of its rivals – really is.

Follow Karen Kwok on LinkedIn and X.

CONTEXT NEWS

U.S. defense contractors, like Lockheed Martin, are expected to follow the Pentagon’s order to purge artificial intelligence tools developed by Anthropic from their supply chains, Reuters reported on March 3.

President Donald Trump on February 27 announced a ban on federal agencies’ use of the company’s products, with a six-month phase-out period. Pentagon chief Pete Hegseth has said that his agency would designate Anthropic a supply-chain risk.

Anthropic, developer of the Claude chatbot, said it would challenge the ban in court.

Anthropic moves ahead of OpenAI in US business subscriptions https://www.reuters.com/graphics/BRV-BRV/dwpkydaeypm/chart.png

(Editing by Jonathan Guilford; Production by Maya Nandhini)

((For previous columns by the author, Reuters customers can click on KWOK/karen.kwok@thomsonreuters.com))