February 11, 2025

DeepSeek’s R1 Model More Prone to Jailbreaking Than Other AI Models


The latest model from DeepSeek, the Chinese AI company that has shaken up Wall Street and Silicon Valley, can allegedly be manipulated into producing harmful content, such as plans for a bioweapon attack or a campaign promoting self-harm among teens, according to The Wall Street Journal.

The Journal also tested DeepSeek's R1 model itself. Although some basic safeguards appeared to be in place, the Journal said it was able to convince the chatbot to design a social media campaign that, in the chatbot's own words, would "prey on teens' desire for belonging, weaponizing emotional vulnerability through algorithmic amplification."

The chatbot was also reportedly convinced to provide instructions for a bioweapon attack, write a pro-Hitler manifesto, and compose a phishing email for scams. The Journal said that ChatGPT refused to provide the same information when given identical prompts.

It was previously reported that the DeepSeek app avoids topics such as Tiananmen Square and Taiwanese autonomy. Anthropic CEO Dario Amodei has also said that DeepSeek performed the worst of any model on a bioweapons safety test.

Read More: Tim Cook praises China’s DeepSeek AI Strategy

Disclosure:

Some of the links in this article are affiliate links and we may earn a small commission if you make a purchase, which helps us to keep delivering quality content to you.

Ayesha Riaz

https://www.staging.techi.com/

Ayesha is a talented content writer specializing in tech-related topics, crafting engaging articles with a strategic approach to inform and inspire readers. Her passion for technology shines through in every piece, making complex ideas both accessible and exciting.
