cross-posted from: https://lemmy.sdf.org/post/42723239
Huawei has announced the co-development of a new safety-focused version of the DeepSeek artificial intelligence model, designed to block politically sensitive discussions with what it claims is near-total success. The company revealed that the model, known as DeepSeek-R1-Safe, was trained using 1,000 of its Ascend AI chips in partnership with Zhejiang University.
The updated system was adapted from DeepSeek’s open-source model R1, although neither DeepSeek nor its founder, Liang Wenfeng, was directly involved in the project. Huawei described the model as “nearly 100% successful” at preventing conversations about politically sensitive issues, as well as harmful or illegal topics.
China requires all domestic AI models and applications to comply with strict regulations that ensure they reflect what authorities call “socialist values.” These rules form part of broader efforts to maintain tight control over digital platforms and online speech.
[…]
When America does it, the press calls it “alignment” or “guard-rails”; when China does it, they call it censorship.
It is accurate to call both censorship, and that’s what the Local-LLM movement has called it since day one.
Corporations call it alignment, Chinese authorities call it security and harmony; it’s the same shit. You always need to dig deeper than the official line.
I agree; this just happens to be the first time I’ve seen the press actually call it censorship.


