China’s top cyber regulator, the Cyberspace Administration of China (CAC), has officially declared war on “AI addiction.” On December 27, 2025, the agency released draft rules—titled the Interim Measures for the Management of Anthropomorphic Interactive Services—specifically targeting AI designed to mimic human personalities, engage in emotional bonding, or simulate human thought patterns.
The move marks one of the most aggressive regulatory stances yet taken against “AI companions” and virtual boyfriends and girlfriends. Beijing is particularly concerned about the “blurring of boundaries” between humans and machines, citing risks to social order and to the psychological health of its citizens. Under the proposed rules, AI providers would no longer be allowed to simply “let the bot talk.” They would have to act as psychological monitors, identifying whether a user is becoming too dependent on a digital friend and intervening when that dependence becomes “extreme.”
The “Red Lines”: What AI Companies Must Do Now
- The Two-Hour Rule: AI services must trigger a pop-up window every two hours of continuous use to remind the user they are talking to a machine, not a person (a code sketch of how such checks might be wired up follows this list).
- Mandatory Intervention: If a user expresses thoughts of suicide or self-harm, the AI must immediately stop, and a real human must take over the interaction.
- Addiction Monitoring: Companies are now legally responsible for assessing a user’s “level of dependence.” If signs of addiction appear, the provider must “take necessary measures” to restrict access.
- Content Hard-Stop: AI is strictly prohibited from generating content that “endangers national security,” undermines “core socialist values,” or spreads rumors that disrupt economic order.
- Strict Reporting: Any service reaching 1 million registered users or 100,000 monthly active users must submit a formal security assessment to the government.
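The draft sets obligations, not technical specifications, so providers are left to decide how to implement them. Purely for illustration, here is a minimal sketch of what a provider-side compliance layer might look like. Every name in it is an assumption: the `ComplianceMonitor` class, the keyword-based crisis check (a real deployment would use a trained classifier plus human review), and the threshold helper are not drawn from the Measures; only the two-hour interval and the user-count figures come from the reported rules.

```python
# Hypothetical sketch of a provider-side compliance layer for the draft rules.
# Names and structure are illustrative; the Measures specify obligations,
# not an implementation.

import time

REMINDER_INTERVAL_SECONDS = 2 * 60 * 60   # "Two-Hour Rule": remind every 2h of continuous use
REGISTERED_USER_THRESHOLD = 1_000_000     # "Strict Reporting": registered-user trigger
MONTHLY_ACTIVE_THRESHOLD = 100_000        # "Strict Reporting": monthly-active-user trigger

# Naive stand-in for a real self-harm classifier; keyword matching alone
# would be nowhere near adequate in production.
CRISIS_KEYWORDS = {"suicide", "self-harm", "kill myself"}


class ComplianceMonitor:
    def __init__(self) -> None:
        self.session_start = time.monotonic()
        self.last_reminder = self.session_start

    def reminder_due(self) -> bool:
        """Return True when the mandated 'you are talking to a machine'
        pop-up should fire, and reset the two-hour timer."""
        now = time.monotonic()
        if now - self.last_reminder >= REMINDER_INTERVAL_SECONDS:
            self.last_reminder = now
            return True
        return False

    def requires_human_handoff(self, message: str) -> bool:
        """Mandatory Intervention: flag messages suggesting suicide or
        self-harm so a human operator can take over the conversation."""
        text = message.lower()
        return any(keyword in text for keyword in CRISIS_KEYWORDS)


def security_assessment_due(registered_users: int, monthly_active: int) -> bool:
    """Strict Reporting: a formal security assessment is triggered once
    either user-count threshold is reached."""
    return (registered_users >= REGISTERED_USER_THRESHOLD
            or monthly_active >= MONTHLY_ACTIVE_THRESHOLD)


if __name__ == "__main__":
    monitor = ComplianceMonitor()
    if monitor.requires_human_handoff("I've been thinking about suicide"):
        print("Escalating to a human operator.")
    print(security_assessment_due(1_200_000, 80_000))  # True: registered-user threshold hit
```

Two small design notes on the sketch: a monotonic clock drives the two-hour timer so that system clock changes cannot suppress or double-fire the reminder, and the thresholds live in named constants so they can be adjusted if the final rules change the numbers.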

