Xiaohongshu Takes a Stand Against AI-Generated Low-Quality Content

Deep News
Apr 28

On April 27, Xiaohongshu held its first AI Governance Open Day in Beijing, unveiling its official stance on artificial intelligence. The event systematically explained the platform's attitudes and principles regarding AI and invited high-quality creators to share their creative processes.

Behind this announcement lies a hard-to-ignore reality: in 2025, China's animated micro-drama market surged 276.3% year-on-year, with more than 120,000 AI-generated series on air. Over the same period, the Cyberspace Administration of China urged six major platforms to take down more than 37,000 non-compliant videos. AI-generated content is consuming digital space at an astonishing rate.

When people begin questioning whether there is still a real person behind the screen, the foundation of trust essential for content platforms begins to shake. Xiaohongshu has chosen to step forward, attempting to answer the difficult questions plaguing all platforms: Is AI an amplifier for creativity or a production line for counterfeit content? How will communities, once built on the principle of "authentic sharing," confront this new reality?

**Neither Rejection Nor Laissez-Faire** In an era where content is king, AI introduces new challenges such as "counterfeiting, infringement, and low-quality content," quietly eroding the trust between people and the content they consume. Since the beginning of 2026 alone, Xiaohongshu has addressed over one million instances of AI-related misconduct, including more than 800,000 AI-managed accounts and nearly 150,000 AI-generated counterfeit posts. A thoughtfully conceived piece of content can be rapidly replicated using an AI template and, propelled by algorithmic recommendations, overwhelm the visibility of original creators.

More alarming is the grey industry chain of "AI-managed accounts." Some accounts are entirely operated by AI tools for registration, posting, and interaction, with all public notes on their profiles being published by proxy. Users may believe they are engaging with an interesting individual, but in reality, they are interacting with lines of code devoid of consciousness. For example, tools like OpenClaw, previously flagged by cybersecurity authorities, can autonomously generate content and conduct social interactions based solely on natural language commands. Xiaohongshu considers this a fundamental violation of the community's core values.

When the public can no longer trust what they see, who benefits? The answer is that neither users nor platforms stand to gain. The root of the problem lies not in AI itself, but in a rapidly accelerating paradox.

One of the core drivers of AI's growing capability is a continuous supply of vast amounts of high-quality data, yet the scarcity of such data is becoming an industry-wide concern. Meanwhile, low-quality, homogenized AI-generated content is flooding the internet. Repeatedly training AI models on this content degrades their performance, causing outputs to become increasingly uniform, repetitive, and distorted, a phenomenon known in academic circles as "model collapse." Understanding this logic leads to a clear conclusion: content genuinely stemming from human experience, independent thought, and original expression is becoming more valuable than ever before.
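The feedback loop behind "model collapse" can be illustrated with a toy numerical experiment (this is a generic statistical illustration, not an analysis from Xiaohongshu or the cited research): fit a simple distribution to data, generate new "synthetic" data from the fit, and repeat. Diversity, measured here as the standard deviation, steadily evaporates.

```python
# Toy illustration of "model collapse": repeatedly fit a Gaussian to
# samples drawn from the previous generation's fitted Gaussian.
# The estimated spread (std) shrinks generation after generation, so
# outputs grow increasingly uniform -- analogous to models trained on
# their own synthetic output. Sample sizes and generation counts are
# chosen purely to make the effect visible.
import random
import statistics

random.seed(42)

def fit_and_resample(data, n):
    """Fit mean/std to the data, then 'generate' n samples from that fit."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)], sigma

# Generation 0: "human" data with real diversity (std = 1.0).
data = [random.gauss(0.0, 1.0) for _ in range(5)]

spreads = []
for generation in range(200):
    data, sigma = fit_and_resample(data, 5)
    spreads.append(sigma)

print(f"std at generation 1:   {spreads[0]:.4f}")
print(f"std at generation 200: {spreads[-1]:.4g}")
```

With each generation the fitted spread drifts downward, so late generations produce nearly identical samples, which is the statistical core of the degradation the article describes.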

Therefore, Xiaohongshu's AI governance stance highlights a crucial point: platforms should not be forced to choose between "completely rejecting AI" and "letting AI run wild." Instead, they must strike a balance between efficiency and authenticity, and between tool usage and ethical boundaries. This statement is significant because it acknowledges that platforms can neither remain passive nor pretend AI does not exist. Governing AI essentially involves redrawing a bottom line for human value in an age of artificial intelligence proliferation.

**One Stance, Two Lists** In its governance framework, Xiaohongshu encourages the use of AI as a creativity amplifier while opposing its use as a tool for counterfeiting and producing low-quality content. All AI-generated or synthesized content must be proactively labeled. The platform has clarified its position using two lists—"what is encouraged" and "what is prohibited"—to reduce cognitive load for creators. Such a clear and decisive stance is relatively uncommon in the industry.

The core principles are straightforward: Encouraged content includes honestly disclosing AI involvement, using AI for creative expression, enhancing informational value or character creation, and improving visual aesthetics. Specifically, three directions are emphasized: AI for informational value, AI for character creation, and AI for visual creation. Prohibited behaviors encompass AI counterfeiting, generating low-quality content, AI-related infringement, and using AI accounts to disrupt community order.

The fundamental logic of this governance framework is to position AI as an "amplifier of creativity," not a "replacement for creativity."

How will this be implemented technically? Representatives at the event revealed that the platform has developed multiple identification capabilities. These include continuously improving the automatic detection of AI generation, deepfakes, and anomalous creation traces in images and videos; assessing the risk of repeated violations by combining historical review results with account information; and achieving "early detection and early interception" through routine patrols and risk alerts.
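The "repeat-violation risk" idea described above, combining historical review outcomes with account signals to trigger early interception, can be sketched as a simple scoring rule. The feature names, weights, and threshold below are illustrative inventions for the sake of the example, not Xiaohongshu's actual model.

```python
# Hypothetical sketch of risk-based moderation: combine an account's
# historical review outcomes with account-level signals into one score,
# then gate escalation on a threshold. All weights are made up.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    past_violations: int    # confirmed violations from historical reviews
    account_age_days: int   # newer accounts treated as riskier
    posts_per_day: float    # unusually high posting rates look bot-like

def repeat_violation_risk(s: AccountSignals) -> float:
    """Return a risk score in [0, 1] (illustrative weights only)."""
    score = 0.0
    score += min(s.past_violations * 0.25, 0.6)       # history dominates
    score += 0.2 if s.account_age_days < 30 else 0.0  # very new account
    score += 0.2 if s.posts_per_day > 50 else 0.0     # bot-like volume
    return min(score, 1.0)

def should_escalate(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Flag for early interception / manual review when risk is high."""
    return repeat_violation_risk(s) >= threshold

veteran = AccountSignals(past_violations=0, account_age_days=900,
                         posts_per_day=1.5)
suspect = AccountSignals(past_violations=3, account_age_days=10,
                         posts_per_day=120.0)

print(should_escalate(veteran))  # low risk, not escalated
print(should_escalate(suspect))  # history + new account + volume: escalated
```

A real system would learn such weights from labeled review data rather than hand-tune them, but the structure (historical outcomes plus account signals feeding a threshold) matches the approach the platform describes.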

Of course, relying solely on technological countermeasures is not a foolproof solution. AI generation models, particularly deepfake technology, evolve extremely rapidly. Whether Xiaohongshu's detection capabilities can consistently outpace evasion tactics remains to be seen over time, as a cat-and-mouse dynamic is likely to persist. Furthermore, "proactive labeling" depends on creator conscientiousness, meaning the platform will still face significant ongoing enforcement pressure.

**An Industry Case Study and a Platform's Response** Yifang, a graduate of the Central Academy of Fine Arts specializing in Song Dynasty court painting, spends nearly a month completing a single artwork. The intricate brushstrokes and subtle ink washes of traditional Chinese painting make animating such works nearly impossible. In late 2024, she began experimenting with multiple AI video generation tools: by feeding in her static paintings along with simple scene instructions, she can now bring them to life naturally. In Yifang's view, the creative conception, originality, and overall idea must still be executed by hand; AI serves as an amplifier in the final stage.

This case perfectly aligns with Xiaohongshu's governance stance: AI should not replace the creative process but should instead unleash imagination beyond the creator's original capabilities. More importantly, Yifang does not conceal her use of AI; she proactively labels it and shares her process openly. This attitude itself reinforces the community's foundational values.

From an industry perspective, Xiaohongshu's strategy offers a reference model. It neither imposes a blanket ban on AI nor adopts a laissez-faire approach. Instead, it charts a middle course between efficiency and authenticity, taking a leading role by establishing clear, principled governance rules. This in itself represents a positive push for the industry.

For other platforms, Xiaohongshu's approach provides a template worth considering. This framework may not be perfect, but it at least confronts a fundamental question: What a platform becomes depends on how it coexists with AI. If platform governance remains absent in the long term, the inevitable outcome will be the inferior driving out the superior.

For ordinary creators, this governance stance sends a clear signal: as long as they adhere to originality and use AI tools appropriately, they will not be negatively impacted. On the contrary, they may find a clearer path for growth.

As Yifang stated, AI can only present the final outcome of a moment; it cannot replace the therapeutic nature of the painting process itself. Ultimately, people settle in a community not because it has the most abundant information, but because it has the most authentic people. AI will not cease its evolution. What platforms can do is ensure that the humans who are "present" are seen, and that the machines that are "necessarily present" do not overshadow them.

