Workplace Apps Resist Integration with External AI Agents

Deep News
Yesterday

The rapid evolution of AI agents continues to generate excitement—and for some Amazon employees, anxiety. These AI agents are designed to automate a wide range of white-collar computer-based tasks. For instance, on Monday evening, Anthropic took another step in this direction by releasing a new version of its Claude chatbot, which can take control of a user's computer and operate various business applications like a human. However, as this column has frequently reported, not all enterprise software companies are permitting users to grant access to their applications via Claude or other AI agents. A new ranking now reveals which applications are most open to such AI agents and which are most restrictive. According to the ranking published by startup Arcade.dev, the most closed applications include Slack, Workday, Meta's advertising platform, and WhatsApp. In contrast, GitHub and Figma are among the most open platforms.

Arcade sells software to businesses for managing AI agents and has collaborated with Anthropic on portions of the Model Context Protocol (MCP), an open-source standard that enables AI agents to interact with various software applications. In other words, Arcade has a vested interest in the debate over application accessibility for AI agents. Last week, the startup launched ToolBench, a tool that rates MCP servers from other companies. MCP servers function similarly to APIs, allowing AI agents to perform actions within applications, such as deleting folders in Google Drive.

Users can also bypass the MCP protocol entirely to access enterprise and communication applications through agents like Claude and OpenClaw. This explains why stock market investors and other observers view such AI agents as a threat to enterprise applications. If users primarily interact with applications through AI agents, the applications' ability to upsell new products and features to existing users could be diminished.
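The API comparison can be made concrete: under MCP, an agent invokes a server-side tool by sending a JSON-RPC 2.0 request with the method `tools/call`. Below is a minimal sketch of such a message; the tool name `drive_delete_folder` and its `folder_id` argument are hypothetical, chosen only to mirror the Google Drive example above.

```python
import json

# Hypothetical MCP tool invocation, shaped as a JSON-RPC 2.0 request.
# The tool name and arguments are illustrative, not a real server's schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "drive_delete_folder",           # tool exposed by the MCP server
        "arguments": {"folder_id": "Q3-drafts"}, # input supplied by the agent
    },
}

# Messages like this travel between client and server as serialized JSON,
# typically over stdio or HTTP.
wire = json.dumps(request)
print(wire)
```

Restricting what external agents can do, as Slack and Workday reportedly have, amounts to limiting which tool names a given client is allowed to call.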
Arcade's analysis notes that while enterprise applications like Slack and Workday have expressed enthusiasm for a future where AI agents and humans collaboratively use their products, the level of support they provide is not equal. For example, Arcade's report indicates that when users employ external AI agents like Claude to retrieve data from Slack or automate messages, Slack imposes restrictions on these requests. This aligns with our reporting from nine months ago, when Slack tightly limited access to its platform data for AI search tools like Glean. Salesforce, Slack's parent company, declined to comment on Arcade's report but has previously stated that restrictions on external AI access are primarily intended to protect user data. While Slack's MCP server does allow AI agents from 12 partners, including OpenAI, Anthropic, Cursor, and Perplexity, to connect to the platform, Arcade found that even with access, the number of actions an AI agent can perform within Slack is severely limited.

A similar situation exists with Workday. Although the human resources application developer has released tools allowing users to access the platform via external AI agents, the company has intentionally made it more difficult to use these tools with third-party agents like those from Arcade.

Security Justifications

Arcade's report states: "For companies trying to connect AI assistants across multiple enterprise apps, Workday is a dead end." Workday, for its part, states that restrictions on AI agent access are based on important security considerations. "We host highly sensitive systems of record for HR and financial data," a spokesperson said. "We are expanding certified pathways for users and partners to connect agents to Workday." (The company has previously stated it plans to monetize by charging for AI agent access to its software.)
Meanwhile, Meta's advertising tools and communication platform WhatsApp do not have built-in support for the MCP protocol; the same is true for LinkedIn and another communication app, Discord. These companies may fear users' AI agents behaving erratically or posting so-called "low-quality AI content" into user feeds. (As my colleague Jothy reported last week, even Meta's own employees are concerned about the security risks posed by AI agents.) The central question remains: Will users increasingly demand that enterprise, social, and communication applications open access to their external AI agents? And if application providers refuse, will users take action?

