Workplace Apps Resist Integration with External AI Agents

Deep News
Yesterday

The rapid evolution of AI agents continues to generate excitement, and, for some Amazon employees, anxiety. These AI agents are designed to automate a wide range of white-collar, computer-based tasks. On Monday evening, for instance, Anthropic took another step in this direction by releasing a new version of its Claude chatbot that can take control of a user's computer and operate business applications much as a human would. However, as this column has frequently reported, not all enterprise software companies are permitting users to grant access to their applications via Claude or other AI agents. A new ranking now reveals which applications are most open to such agents and which are most restrictive.

According to the ranking, published by startup Arcade.dev, the most closed applications include Slack, Workday, Meta's advertising platform, and WhatsApp. GitHub and Figma, in contrast, are among the most open.

Arcade sells software that helps businesses manage AI agents and has collaborated with Anthropic on portions of the Model Context Protocol (MCP), an open-source standard that lets AI agents interact with other software applications. In other words, Arcade has a vested interest in the debate over how accessible applications should be to AI agents. Last week, the startup launched ToolBench, a tool that rates other companies' MCP servers. MCP servers function much like APIs, allowing AI agents to perform actions within applications, such as deleting folders in Google Drive (a rough sketch of what such a tool definition looks like appears at the end of this article). Users can also bypass the MCP protocol entirely and access enterprise and communication applications through agents like Claude and OpenClaw.

This helps explain why stock market investors and other observers view such AI agents as a threat to enterprise applications. If users primarily interact with applications through AI agents, the applications' ability to upsell new products and features to existing users could be diminished.

Arcade's analysis notes that while enterprise applications like Slack and Workday have expressed enthusiasm for a future in which AI agents and humans use their products together, the level of support they actually provide varies widely. For example, Arcade's report indicates that when users employ external AI agents like Claude to retrieve data from Slack or automate messages, Slack imposes restrictions on those requests. This aligns with our reporting from nine months ago, when Slack tightly limited access to its platform data for AI search tools like Glean. Salesforce, Slack's parent company, declined to comment on Arcade's report but has previously stated that restrictions on external AI access are primarily intended to protect user data. Slack's MCP server does allow AI agents from 12 partners, including OpenAI, Anthropic, Cursor, and Perplexity, to connect to the platform, but Arcade found that even with that access, the number of actions an agent can perform within Slack is severely limited.

A similar situation exists at Workday. Although the human resources software maker has released tools that let users access its platform via external AI agents, the company has intentionally made it harder to use those tools with third-party agents like Arcade's.

Security Justifications

Arcade's report states: "For companies trying to connect AI assistants across multiple enterprise apps, Workday is a dead end." Workday, for its part, says its restrictions on AI agent access are based on important security considerations.
"We host highly sensitive systems of record for HR and financial data," a spokesperson said. "We are expanding certified pathways for users and partners to connect agents to Workday." (The company has previously stated it plans to monetize by charging for AI agent access to its software.) Meanwhile, Meta's advertising tools and communication platform WhatsApp do not have built-in support for the MCP protocol; the same is true for LinkedIn and another communication app, Discord. It is conceivable that these companies may fear users' AI agents behaving erratically or posting what is termed "low-quality AI content" into user feeds. (As my colleague Jothy reported last week, even Meta's own employees are concerned about the security risks posed by AI agents.) The central question remains: Will users increasingly demand that enterprise, social, and communication applications open access for their external AI agents? And if application providers refuse, will users take action?

