Artificial Intelligence in China: A New Law Forces Transparency Between Humans and Machines
There is something profoundly human about wanting to know who — or what — you are talking to. Is the voice on the other end of the line a living person, or a machine trained to mimic one? Is the photograph before you a record of reality, or a digital fantasy stitched together by an algorithm? Without that distinction, people are left in a fog: unable to tell real from fake, truth from manipulation.
It is a person’s fundamental right not to be fooled: to know whether they are engaging with another human being or with an artificial system. This clarity matters not only for trust but for dignity: people need to anchor themselves in reality, not be tricked into illusions.
China has now written that principle into law. On September 1, 2025, new rules came into effect requiring that every piece of content generated by artificial intelligence — words, images, audio, or video — carry a label making its origin clear. The labels may be obvious, like a watermark on a video, or hidden in metadata that can be checked by platforms and regulators. But either way, the message is the same: if it was made by a machine, citizens must be told.
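The two labelling modes the rules describe can be pictured with a small sketch. Nothing below reflects an official format: the label schema, field names, and the `tag_content` helper are hypothetical illustrations of an explicit mark a viewer can see versus an implicit tag carried in metadata.

```python
import json

# Hypothetical label schema: the fields actually required, and how they are
# encoded, are set by the regulation and its supporting standards, not here.
def tag_content(payload: dict, generator: str, visible: bool) -> dict:
    """Attach an AI-provenance label, either as a visible mark or as metadata."""
    label = {"ai_generated": True, "generator": generator}
    tagged = dict(payload)
    if visible:
        # Explicit label: e.g. a caption or watermark the viewer can see.
        tagged["caption"] = f"[AI-generated: {generator}] " + tagged.get("caption", "")
    # Implicit label: machine-readable metadata that platforms and
    # regulators can inspect even when nothing is visible on screen.
    tagged["metadata"] = json.dumps(label)
    return tagged

video = tag_content({"caption": "City at dawn"}, generator="Seedance", visible=True)
essay = tag_content({"caption": "An essay on rivers"}, generator="GLM", visible=False)
```

Either way, the machine-readable tag travels with the content, which is what lets downstream platforms verify origin without human review.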
⸻
A bid for control
The measures, known formally as the Measures for the Labelling of AI-Generated Synthetic Content, are part of a broader effort by Beijing to manage the disruptive power of AI. Officials see three urgent needs.
First, social stability. Generative AI makes it cheap and fast to produce convincing disinformation or synthetic propaganda. Mandatory labelling gives the state and the platforms a way to trace and suppress destabilizing material.
Second, regulatory leadership. Just as Europe’s AI Act is setting new standards in the West, China wants to demonstrate that it is ahead of the curve in governing the technology. By imposing a blanket rule, it signals that Chinese firms are “responsible” and that the government is firmly in control.
Third, industrial protection. Domestic companies like Alibaba, Tencent, Baidu, DeepSeek, and Zhipu now have to build compliance tools into their models. Foreign firms unwilling to follow suit may find themselves effectively locked out of the Chinese market.
⸻
How it works
In practice, the rule reaches deep into daily life.
A short video made by ByteDance’s Seedance system will carry a visible mark. An essay produced by Zhipu’s GLM model will embed a hidden tag. Even images uploaded to WeChat or Weibo will contain signals identifying them as synthetic.
Platforms must enforce the rules, blocking or flagging unlabelled AI content. This makes compliance not an option but an obligation, baked into the entire digital ecosystem.
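That enforcement step can be sketched as a simple gate. The `extract_label` and `moderate` functions below are hypothetical, as is the `suspected_synthetic` signal (in practice a platform would need its own detection heuristics); the sketch only illustrates the policy shape: labelled content passes through with its mark, while suspected AI content without a label gets held back.

```python
import json

def extract_label(item: dict):
    """Return the provenance label embedded in an item's metadata, if any."""
    raw = item.get("metadata")
    if not raw:
        return None
    try:
        label = json.loads(raw)
    except json.JSONDecodeError:
        return None
    return label if label.get("ai_generated") else None

def moderate(item: dict, suspected_synthetic: bool) -> str:
    """Hypothetical platform policy: allow labelled content,
    flag suspected AI content that carries no label."""
    if extract_label(item):
        return "allow"            # properly labelled, surface with its mark
    if suspected_synthetic:
        return "flag_for_review"  # unlabelled but looks machine-made
    return "allow"                # presumed human-made content

labelled = {"metadata": json.dumps({"ai_generated": True, "generator": "GLM"})}
unlabelled = {"metadata": None}
```

The design point is that the check runs at upload time on every item, which is what turns labelling from an honour system into an infrastructure requirement.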
⸻
Winners and losers
For the big Chinese tech companies, the law is both a burden and an opportunity. Compliance requires new watermarking technology and larger moderation teams. But once those costs are absorbed, the firms can market themselves as safe, lawful, and politically aligned. Smaller startups may struggle to keep up, giving larger incumbents an advantage.
Foreign companies face a different problem. For OpenAI or Anthropic to operate in China, they would need to embed the same labels and accept state oversight. Many will refuse, further closing the door on their access to Chinese users. Zhipu, sensing the opportunity, has already begun courting those left behind by Western restrictions, offering token credits and easy migration from foreign services.
⸻
For ordinary citizens
The immediate changes may feel subtle. A watermark here, a small tag there. But the larger effect is profound.
Over time, citizens will grow used to seeing clear signals about whether a piece of content was made by a person or a machine. That could build trust, giving people confidence they are not being manipulated. But it could also chill creativity, as users become more cautious about what they post.
And behind the labels lies a deeper reality: every AI-generated work can now be tracked. The state has created an audit trail for synthetic media, tightening its ability to monitor the flow of information.
⸻
The global picture
China is not alone. The European Union’s AI Act contains similar provisions for labelling, and the United States has encouraged watermarking through executive orders. But the difference is scale and compulsion. In China, labelling is mandatory and universal, backed by the full authority of the state.
The approach reflects a long-standing philosophy: technology should serve stability, not disrupt it.
⸻
What to watch
Three questions will determine how far this policy reaches. Will enforcement be strict from the outset, or phased in slowly? How easily will tech-savvy users find ways to strip labels out? And will Chinese companies, having perfected watermarking tools, try to export them abroad as a new standard?
⸻
A final reckoning
For China, the law is both a shield and a sword. It reassures citizens that they will not be tricked by invisible machines. But it also hardwires political control into the very code of artificial intelligence.
The tension is clear. Transparency brings dignity — the right not to be fooled. But it also delivers more power to the state. In China’s model, both outcomes arrive together.
⸻