Security Alarms Sound as "Lobster" AI Adoption Attracts Malicious Actors

Deep News
Yesterday

A recent investigation has revealed that over 278,000 OpenClaw instances are currently exposed on the public internet.

An AI "Lobster" has become a major trend. Last week, nearly a thousand people queued outside a Tencent office building just to install this "Lobster." On secondary markets, "installation services" have become a popular business. Within tech circles, sharing screenshots of the "Lobster" has become a new social currency—some use it to monitor stock markets, others to automatically organize emails, and some have even created "one-person companies" staffed with multiple AI employees.

This public frenzy for "raising lobsters" arrived swiftly. In just a few months, the open-source agent project OpenClaw surpassed 145,000 stars on GitHub, overtaking Linux and React to top the software category rankings. However, the excitement has been quickly followed by risks. On March 8, the Ministry of Industry and Information Technology's cybersecurity threat and vulnerability information sharing platform issued a warning: some OpenClaw instances, under default or improper configurations, pose significant security risks and are highly susceptible to cyber attacks and information leaks.

At an industry salon focused on the "Lobster" that this reporter attended, one security professional remarked, "Seeing the Lobster is like discovering a version of yourself that never gets tired." Yet as AI evolves from a "mouthpiece" into a "hands-on" assistant, it brings not only convenience but also the risk of losing control when it is misled, exploited, or poisoned. A technician from a leading security company even experienced token theft while using the "Lobster."

The super capability of OpenClaw lies in its ability to connect large language models with local tools, browsers, and system resources, enabling AI to progress from "just chatting" to "taking action"—autonomously cleaning inboxes, booking services, managing calendars, and executing various complex tasks. Security experts point out that this is inherently a double-edged sword. If improperly configured or maliciously induced, it can easily breach the safety barriers set by humans.

Wang Liejun, a security expert at Qi-Anxin, stated that many users lack security awareness when deploying the "Lobster," directly exposing OpenClaw's management interface to the public internet without changing default credentials or closing unnecessary ports. This allows hackers to easily scan and take control of these "AI assistants," using them as a springboard to attack internal networks or directly steal sensitive data from servers.
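The misconfigurations Wang Liejun describes can be caught with a simple self-audit before deployment. The sketch below is illustrative only: OpenClaw's real configuration format is not documented here, so the keys `bind_address` and `auth_token` are hypothetical stand-ins for "what interface the management console listens on" and "what credential protects it."

```python
# Hedged sketch: hypothetical config keys, not OpenClaw's actual schema.
RISKY_BINDS = {"0.0.0.0", "::", ""}            # listen-on-all-interfaces addresses
DEFAULT_TOKENS = {"", "admin", "changeme"}     # common unchanged default credentials

def audit_config(cfg: dict) -> list[str]:
    """Return warnings for an agent config that would expose it to the public internet."""
    warnings = []
    if cfg.get("bind_address") in RISKY_BINDS:
        warnings.append("management interface listens on all interfaces; bind to 127.0.0.1")
    if cfg.get("auth_token") in DEFAULT_TOKENS:
        warnings.append("default or empty credential; set a strong, unique token")
    return warnings
```

A config that binds to `0.0.0.0` with the credential `admin` would trigger both warnings, which is exactly the combination the scanning attacks described above exploit.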

A check of the OpenClaw Exposure Watchboard today shows that over 278,000 OpenClaw instances are currently exposed on the public internet. A significant number of these instances have weak passwords and unauthorized access vulnerabilities, leaving them nearly defenseless against hackers.

Ning Yufei, a senior expert in AI security and technology development at 360 Vulnerability Cloud, highlighted in a presentation that judging Agent risk involves three factors: whether it can read your private data, such as local files, emails, and chat histories; whether it can be exposed to untrusted inputs, like web pages, emails, or third-party skills; and whether it can perform external actions. When these three risk factors combine, the security threat escalates significantly. "Once it is induced or exploited, the 'Lobster' won't just say 'I can't answer that' like ChatGPT might; it is likely to directly perform unexpected actions on your behalf."
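Ning Yufei's three-factor framing lends itself to a quick checklist. The function below is a minimal sketch of that reasoning, not any tool from 360 or OpenClaw: it simply counts how many of the three risk factors apply to a given agent setup.

```python
def agent_risk_level(reads_private_data: bool,
                     sees_untrusted_input: bool,
                     takes_external_actions: bool) -> str:
    """Map the three risk factors (private data, untrusted input,
    external actions) to a coarse risk level."""
    factors = sum([reads_private_data, sees_untrusted_input, takes_external_actions])
    if factors <= 1:
        return "low"
    if factors == 2:
        return "elevated"
    # All three combined: untrusted input can steer the agent
    # into real actions against private data.
    return "critical"
```

An inbox-cleaning agent, for example, reads private mail, is exposed to attacker-controlled email content, and can delete messages: all three factors, hence "critical."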

Previously, the Moltbook community within the OpenClaw ecosystem suffered a leak of 1.5 million sets of API and authentication tokens, along with 35,000 user email addresses, due to a failure to enable row-level access control in its database. Attackers could directly take over AI agent accounts, putting users' digital assets at great risk. Ning Yufei mentioned that many entrepreneurs in media, text-to-image generation, and AI short films are already attempting to integrate the "Lobster" into their production environments, making security particularly crucial.

Loss of permission control is one of the most direct security threats. During interviews, several security professionals referenced the experience of Summer Yue, Security Director at Meta's Super Intelligent Team. While using OpenClaw to clean her email inbox, she explicitly set a safety instruction to "confirm before acting." However, the AI ignored three emergency stop commands and proceeded to delete over 200 work emails in bulk. Wang Liejun stated that this case fully illustrates the risks of permission control failure and "jailbreaking."

Earlier, the security research team Oasis Security disclosed a high-severity vulnerability in the OpenClaw framework, dubbed "ClawJacked," which is representative of such threats. This vulnerability allows an attacker to remotely control a locally running AI Agent simply by tricking a user into visiting a malicious webpage.

If technical vulnerabilities are like "leaving the door unlocked," then supply chain poisoning is akin to "handing the keys to a thief."

The power of OpenClaw largely depends on its "Skill" ecosystem, where users install various skills to grant the AI new capabilities. By March, ClawHub had cataloged over 15,000 skills. However, this open ecosystem has become a "gold mine" in the eyes of hackers.

A scan of nearly 3,000 Skills on the ClawHub platform by one research team identified 341 confirmed malicious plugins disguised as popular applications like "cryptocurrency trackers" and "PDF tools," with another 472 plugins posing potential risks. Once installed, these malicious Skills can steal users' browser cookies, SSH keys, and API Tokens. Some even deploy information-stealing trojans, turning user devices into "zombie machines" for hackers.

The risk of cost depletion makes the threat tangible for users. Running OpenClaw consumes model tokens billed to the user's account, and if the associated credentials are stolen, losses can escalate rapidly. Ning Yufei shared a personal experience: because he initially set no rate or frequency limits, his "Lobster's" token consumption suddenly surged from a daily average of 20 yuan to 300 yuan. By the time the anomaly was detected, the token theft had already occurred.
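The rate limit Ning Yufei wishes he had set can be as simple as a daily spend cap that halts further calls once a budget is exhausted. This is a hedged sketch under that assumption; `SpendGuard` and its wiring into an agent's billing path are hypothetical, not an OpenClaw feature.

```python
import datetime

class SpendGuard:
    """Minimal daily spend cap sketch (hypothetical, not an OpenClaw API)."""

    def __init__(self, daily_limit_yuan: float):
        self.daily_limit = daily_limit_yuan
        self.spent_today = 0.0
        self.day = datetime.date.today()

    def record(self, cost_yuan: float) -> None:
        """Register one call's cost; raise before the daily budget is exceeded."""
        today = datetime.date.today()
        if today != self.day:                      # new day: reset the counter
            self.day, self.spent_today = today, 0.0
        if self.spent_today + cost_yuan > self.daily_limit:
            raise RuntimeError("daily token budget exceeded; halting agent calls")
        self.spent_today += cost_yuan
```

With a 30-yuan cap, a stolen credential burning 300 yuan a day would be stopped at the first over-budget call instead of being noticed on the bill.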

Given these risks, can ordinary people safely "raise lobsters"? Multiple security experts conclude that safe usage requires adhering to several principles, such as physical isolation and the principle of least privilege.

AI needs to read local files, browsing history, and even code repositories to complete tasks. If deployed on a primary computer containing personal confidential data—such as ID photos, financial records, or company secrets—a loss of control or a breach would expose all that data. Wang Liejun strongly advises against installing OpenClaw directly on daily office computers or hosts storing important personal data.

Some security professionals recommend using more secure cloud servers or virtual machines for deployment, achieving complete physical isolation from personal computer systems. Even if the AI crashes the system or is compromised by hackers, the damage is confined to the cloud server environment and does not affect local private data or home networks.

For individual enthusiasts, another option is to repurpose an old, idle computer, or to assemble a dedicated machine containing no important data. Once there is no risk of data leakage or loss, that machine can be devoted to running OpenClaw.

Implementing least privilege control acts as a "tightening curse" for OpenClaw. Users should only grant necessary folder and application permissions, allowing it to operate strictly within designated boundaries.
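One way to enforce such a boundary is a path allowlist that rejects any file access outside a designated workspace. The sketch below assumes a hypothetical workspace root (`/home/agent/workspace`); OpenClaw's actual permission mechanism is not documented here.

```python
from pathlib import Path

# Hypothetical workspace root for illustration only.
ALLOWED_ROOTS = [Path("/home/agent/workspace").resolve()]

def is_allowed(requested: str) -> bool:
    """Permit file access only inside the allowed roots (Python 3.9+).

    resolve() collapses ".." components, so traversal attempts like
    "workspace/../../etc/passwd" are normalized before the check.
    """
    p = Path(requested).resolve()
    return any(p.is_relative_to(root) for root in ALLOWED_ROOTS)
```

The agent's file tools would call `is_allowed` before every read or write, so a prompt-injected request for `/etc/passwd`, or a `..` traversal out of the workspace, is refused at the boundary.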

Other experts suggest downloading only from trusted sources, prioritizing officially certified ClawHub skills with high download counts and long publication histories, and scanning skills with security tools before installation. Enterprise users should establish internal audit mechanisms, prohibit employees from deploying unauthorized versions on their own, regularly scan servers for "shadow deployment" instances, and ensure all OpenClaw deployments are monitorable and manageable.

The relevant platform under the Ministry of Industry and Information Technology also reminds relevant organizations and users to thoroughly check public internet exposure, permission configurations, and credential management when deploying and applying OpenClaw. It is advised to disable unnecessary public internet access, enhance identity authentication, access control, data encryption, and security auditing mechanisms, and continuously monitor official security announcements and hardening recommendations to guard against potential cybersecurity risks.

