The idea that DeepSeek’s censorship disappears when run locally is a misconception.
According to a Wired investigation, DeepSeek’s censorship operates both at the application layer and within the model’s training itself, meaning it persists even when the AI is downloaded and run on your own computer.
Testing a locally installed version of DeepSeek still revealed instances of censorship.
For example, it avoided mentioning the Cultural Revolution and focused only on “positive” aspects of the Chinese Communist Party.
Similarly, when asked about the Kent State shootings, it answered readily, but it refused to discuss the 1989 Tiananmen Square protests.
In short, the censorship is not limited to the app layer; it is baked into the model’s training, so even bypassing the app leaves the content restrictions in place.