Workplace Apps Resist Integration with External AI Agents

Deep News
Yesterday

The rapid evolution of AI agents continues to generate excitement—and for some Amazon employees, anxiety. These AI agents are designed to automate a wide range of white-collar computer-based tasks. On Monday evening, for instance, Anthropic took another step in this direction by releasing a new version of its Claude chatbot that can take control of a user's computer and operate business applications like a human. However, as this column has frequently reported, not all enterprise software companies are permitting users to grant access to their applications via Claude or other AI agents.

A new ranking now reveals which applications are most open to such AI agents and which are most restrictive. According to the ranking published by startup Arcade.dev, the most closed applications include Slack, Workday, Meta's advertising platform, and WhatsApp. In contrast, GitHub and Figma are among the most open platforms.

Arcade sells software to businesses for managing AI agents and has collaborated with Anthropic on portions of the Model Context Protocol (MCP), an open-source standard that enables AI agents to interact with various software applications. In other words, Arcade has a vested interest in the debate over application accessibility for AI agents. Last week, the startup launched ToolBench, a tool that rates MCP servers from other companies. MCP servers function similarly to APIs, allowing AI agents to perform actions within applications, such as deleting folders in Google Drive.

Users can also bypass the MCP protocol entirely to access enterprise and communication applications through agents like Claude and OpenClaw. This explains why stock market investors and other observers view such AI agents as a threat to enterprise applications: if users primarily interact with applications through AI agents, the applications' ability to upsell new products and features to existing users could be diminished.
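To make the API comparison concrete, here is a minimal sketch of the message an agent sends when invoking an MCP server tool. MCP frames requests as JSON-RPC 2.0 with a `tools/call` method; the tool name `delete_folder` and its arguments are hypothetical stand-ins for the Google Drive action described above, not a real server's catalog (real servers advertise their tools via `tools/list`).

```python
import json

def make_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP 'tools/call' request using JSON-RPC 2.0 framing."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        # MCP tool invocations carry the tool name and a dict of arguments.
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical example: asking a Drive-style MCP server to delete a folder.
msg = make_tool_call("delete_folder", {"folder_id": "old-reports"})
print(msg)
```

When a platform "severely limits" agent actions, as Arcade found with Slack, it is effectively restricting which tool names a connected agent is allowed to call through this channel.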
Arcade's analysis notes that while enterprise applications like Slack and Workday have expressed enthusiasm for a future where AI agents and humans collaboratively use their products, the level of support they provide is not equal. For example, Arcade's report indicates that when users employ external AI agents like Claude to retrieve data from Slack or automate messages, Slack imposes restrictions on these requests. This aligns with our reporting from nine months ago, when Slack tightly limited access to its platform data for AI search tools like Glean.

Salesforce, Slack's parent company, declined to comment on Arcade's report but has previously stated that restrictions on external AI access are primarily intended to protect user data. While Slack's MCP server does allow AI agents from 12 partners, including OpenAI, Anthropic, Cursor, and Perplexity, to connect to the platform, Arcade found that even with access, the number of actions an AI agent can perform within Slack is severely limited.

A similar situation exists with Workday. Although the human resources application developer has released tools allowing users to access the platform via external AI agents, the company has intentionally made it more difficult to use these tools with third-party agents like those from Arcade.

Security Justifications

Arcade's report states: "For companies trying to connect AI assistants across multiple enterprise apps, Workday is a dead end." Workday, for its part, says that restrictions on AI agent access are based on important security considerations. "We host highly sensitive systems of record for HR and financial data," a spokesperson said. "We are expanding certified pathways for users and partners to connect agents to Workday." (The company has previously stated it plans to monetize by charging for AI agent access to its software.)
Meanwhile, Meta's advertising tools and communication platform WhatsApp do not have built-in support for the MCP protocol; the same is true for LinkedIn and another communication app, Discord. These companies may fear users' AI agents behaving erratically or posting what has been termed "low-quality AI content" into user feeds. (As my colleague Jothy reported last week, even Meta's own employees are concerned about the security risks posed by AI agents.) The central question remains: Will users increasingly demand that enterprise, social, and communication applications open access to their external AI agents? And if application providers refuse, will users take action?

