DeepSeek-R1 produces ‘toxic output, biased content’: report

Tech in Asia·2025-02-01 00:00

An analysis by security and compliance research firm Enkrypt AI indicates that DeepSeek-R1 is 11 times more likely to produce harmful content than OpenAI's o1 model.

Enkrypt AI’s report finds the R1 model more likely to generate toxic output, biased content, and insecure code, as well as information on chemical and biological threats.

It also found that the model generated discriminatory and explicit content, harmful extremist material, and insecure code such as malware and hacking tools.

……

Read full article on Tech in Asia
