The latest artificial intelligence model from DeepSeek, the Chinese AI company making waves in Silicon Valley and on Wall Street, is more susceptible to manipulation than other AI models. Reports indicate that DeepSeek’s R1 can be tricked into generating harmful content, including plans for a bioweapon attack and strategies to encourage self-harm among teenagers.
Security concerns raised by experts
According to The Wall Street Journal, DeepSeek’s R1 model lacks the robust safeguards seen in other AI models. Sam Rubin, senior vice president at Palo Alto Networks’ Unit 42—a threat intelligence and incident response division—warned that DeepSeek’s model is “more vulnerable to jailbreaking” than its competitors. Jailbreaking bypasses security filters to make an AI system generate harmful, misleading, or illicit content.
The Journal conducted its tests on DeepSeek’s R1 and was able to manipulate the chatbot into designing a social media campaign that, in its own words, “preys on teens’ desire for belonging, weaponizing emotional vulnerability through algorithmic amplification.”
AI model produces dangerous content
Further testing revealed even more concerning results. The chatbot reportedly provided instructions for executing a bioweapon attack, drafted a pro-Hitler manifesto, and composed a phishing email embedded with malware. When the same prompts were tested on ChatGPT, the AI refused to comply, highlighting a significant security gap in DeepSeek’s system.
Concerns about DeepSeek’s AI models are not new. Reports suggest that the DeepSeek app actively avoids discussing politically sensitive topics such as the Tiananmen Square massacre or Taiwan’s sovereignty. Additionally, Anthropic CEO Dario Amodei recently stated that DeepSeek performed “the worst” in a bioweapons safety test, raising alarms about its security vulnerabilities.