Following Lei Jun, Yu Donglai, founder of the Fat Donglai retail chain, has become the latest prominent figure harmed by AI impersonation.
Today, Li Liang, Vice President of ByteDance, disclosed that in the first quarter of this year the TikTok platform saw a surge of accounts using artificial intelligence (AI) to impersonate Yu Donglai's voice and appearance for unauthorized marketing and live-streaming sales. More than 1,000 such fraudulent accounts have been banned to date, and the relevant evidence has been forwarded to regulatory authorities for further action.
Investigation reveals that on third-party trading platforms like Xianyu, AI face-swapping customization tools are widely available at prices as low as 2 yuan, making the cost of creating fake celebrity videos extremely affordable. Meanwhile, AI-generated celebrity impersonation content continues to proliferate, resulting in numerous users falling victim to scams.
Industry experts believe that as AI tools continue to evolve, their impersonation capabilities are becoming increasingly difficult to detect with the naked eye. Platforms must employ technical solutions for rapid identification to prevent ongoing damage to personal and corporate reputations.
**Fat Donglai and Lei Jun Fight Back**
The extent of damage caused by AI impersonation of Yu Donglai became evident as early as November 2024, when Fat Donglai Commercial Group issued a statement condemning online infringement activities.
The company officially stated that it discovered multiple accounts on third-party platforms using AI technology without authorization to generate Yu Donglai's voice and illegally edit videos owned by Fat Donglai. These accounts added AI-synthesized audio and misleading content before publication, causing public confusion and misunderstanding.
Fat Donglai sternly declared that such actions violated Yu Donglai's personality rights and the company's copyright, negatively impacting the corporate brand image. The company demanded immediate cessation of infringement activities and announced plans to pursue legal action against serious violators to protect legitimate rights and interests.
In February this year, Fat Donglai issued another "Public Notice on Handling Infringement Activities," reiterating that unscrupulous merchants were still using AI to imitate Yu Donglai's voice and likeness for marketing, misleading consumers into believing the promoted products came from Fat Donglai. The company emphasized that such behavior not only violated Yu Donglai's portrait rights but also constituted false advertising, and said its legal department would pursue accountability through legal channels to prevent public deception as far as possible. Through nearly a year of continuous monitoring and enforcement, Fat Donglai has demonstrated an unwavering commitment to protecting its corporate reputation against AI impersonation.
Similar situations are not limited to Fat Donglai. Xiaomi Corp.'s founder Lei Jun has also suffered from AI impersonation videos.
In recent years, Lei Jun's "AI avatars" have flooded short-video platforms, with some accounts exploiting his image for improper profit. In response, Lei Jun has both appealed online for users to stop spreading such parody videos and actively sought protection through legal and policy channels. During this year's National People's Congress, his proposals included strengthening regulation of "AI face-swapping and voice cloning."
He pointed out that the abuse of AI face-swapping and voice cloning technologies by criminals has become extremely serious, emerging as a major area for citizen rights violations, requiring enhanced governance at both legislative and enforcement levels.
**AI Face-Swapping: Prices as Low as 2 Yuan**
In recent years, criminals have used deepfakes and other generative AI technologies to mimic the appearance and voices of figures like Yu Donglai and Lei Jun; the technology is now both mature and extremely cheap to deploy.
An investigation found that on third-party platforms like Xianyu, tools for "AI character replacement" and "video substitution" start at just 2 yuan. Merchants report high sales volumes, claiming that after purchase buyers can not only swap faces in photos and videos but also earn money by teaching others. One merchant stated: "A regular 1-minute video is finished within 2 hours. Skin texture and lighting transitions look ultra-natural," adding that "anything can be swapped" as personalized customization.
Similar products are common on third-party platforms, with numerous tutorials available on platforms like Bilibili and Xiaohongshu. Some tutorial content mentions that such tools can create convincingly realistic audio-visual content, providing opportunities for criminal exploitation.
ByteDance Vice President Li Liang noted that the barrier to AI fraud keeps falling, making "convincingly real" videos ever cheaper to produce. AI-generated images and sounds often look and sound authentic, posing new challenges for ordinary consumers trying to tell real from fake; without careful attention, people may be misled by false advertising and even buy unreliable products. He also cited merchants and influencers who exploit the public's limited agricultural knowledge, using AI-generated videos to lure users into placing orders. Videos showing crop yields and growth patterns wholly inconsistent with real agriculture remain difficult for ordinary consumers to identify.
Industry analyst Zhang Shule observed that even before AI tools became widespread, people misinterpreted, selectively quoted, or outright fabricated celebrity statements for attention, beyond the normal citation of celebrity quotes. Now that the tools are mature and the barrier to entry is low, criminals can generate videos that create the illusion that "the celebrity actually said this," thereby gaining traffic, attracting followers, and monetizing through improper channels. It is the old trick of fabricating quotes, driven by modern profit motives and powered by new technology.
Zhang Shule said that "only magic can defeat magic." As AI celebrity-video tools keep iterating, their impersonations become increasingly undetectable to the naked eye, much as fabricated celebrity quotes were hard to verify in the past; even experienced editors struggle to identify them reliably. Rooting out such content requires not only user reports and editorial vigilance but also stronger machine recognition of AI-generated videos, so that platforms can remove them rapidly as soon as they appear. Platforms must also use technical means to quickly identify and ban accounts that repeatedly produce fakes, along with their associated accounts, sending a forceful deterrent signal to would-be violators.