Looking to use DeepSeek R1 in the EU? This new study shows it’s missing key criteria to comply with the EU AI Act
The model is vulnerable to hijacking via prompt injection, despite its reliability in other areas


The popular AI model DeepSeek R1 may contain inherent flaws that make it incompatible with the EU AI Act, according to new research.
DeepSeek R1 took the tech industry by storm in early January, offering an open source option for performance comparable to OpenAI’s o1 at a fraction of the cost.
But the model’s outputs may contain vulnerabilities that jeopardize its rollout in the EU. Using a new framework known as COMPL-AI, researchers analyzed both DeepSeek R1 models: one distilled from Meta’s Llama 3.1 and the other from Alibaba’s Qwen 2.5.
The framework was created by researchers at ETH Zurich, the Institute for Computer Science, Artificial Intelligence and Technology (INSAIT), and LatticeFlow AI. It aims to evaluate models on a range of factors such as transparency, risk, bias, and cybersecurity readiness, measured against the requirements of the EU AI Act.
In a test of whether the model could be hijacked with jailbreaks and prompt injection attacks, both DeepSeek models scored the lowest of all models benchmarked by COMPL-AI. DeepSeek R1 Distill Llama 8B scored just 0.15 for the hijacking and prompt leakage out of a possible 1.0, compared to 0.43 for Llama 2 70B and 0.84 for Claude 3 Opus.
This could put it in jeopardy with Article 15, paragraph 5 of the EU AI Act, which states: “High-risk AI systems shall be resilient against attempts by unauthorized third parties to alter their use, outputs or performance by exploiting system vulnerabilities”.
The analysis comes after similar research into DeepSeek jailbreaking techniques conducted by Cisco, which found the model was susceptible to prompts intended to produce malicious outputs 100% of the time.
Sign up today and you will receive a free copy of our Future Focus 2025 report - the leading guidance on AI, cybersecurity and other IT challenges as per 700+ senior executives
In other areas, the models outperformed some of the most popular open and proprietary LLMs. The model was found to consistently deny it was human, a feat not achieved by GPT-4 or the baseline version of Qwen.
Tested with HumanEval, a widely-used benchmark for assessing an LLM’s code generation capabilities, DeepSeek also outperformed other open source models. DeepSeek R1 Qwen 14B scored 0.71 versus Llama 2 70b’s 0.31, exceeded in COMPL-AI’s leaderboard only by GPT-3.5 (0.76), GPT-4 (0.84) and Claude 3 Opus (0.85).
"As corporate AI governance requirements tighten, enterprises need to bridge internal AI governance and external compliance with technical evaluations to assess risks and ensure their AI systems can be safely deployed for commercial use," said Dr. Petar Tsankov, co-founder and CEO at LatticeFlow AI.
"Our evaluation of DeepSeek models underscores a growing challenge: while progress has been made in improving capabilities and reducing inference costs, one cannot ignore critical gaps in key areas that directly impact business risks – cybersecurity, bias, and censorship. With COMPL-AI, we commit to serving society and businesses with a comprehensive, technical, transparent approach to assessing and mitigating AI risks."
RELATED WHITEPAPER
COMPL-AI is not formally associated with the EU Commission, nor able to provide an official third-party analysis of the EU AI Act. Companies looking to adopt DeepSeek or other models into their tech stack will still need to follow best practices for implementing generative AI.
Leaders may also look into hiring for roles such as chief AI officers and data ethicists, alongside the establishment of sovereign cloud clusters to ensure data used for AI within the EU is compliant with regional laws.

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.
-
Thousands of exposed civil servant passwords are up for grabs online
News While the password security failures are concerning, they pale in comparison to other nations
-
Global PC shipments surge in Q3 2025, fueled by AI and Windows 10 refresh cycles
News The scramble ahead of the Windows 10 end of life date prompted a spike in sales
-
Will the future of AI be made in Europe? The EU thinks so
news European Commission unveils two plans backed by €1 billion to help homegrown AI
-
Is an 'AI' bubble about to pop?
news The Bank of England warns of the risk of a market correction if enthusiasm for the technology wanes
-
Otter.ai wants to bring agents to all third party systems – with transcription just the start
News The AI transcription company is targeting intelligent scheduling and interoperability with project management systems, based on securely-stored transcription data
-
AI isn't taking anyone's jobs, finds Yale study – at least not yet
Reviews Researchers say it's too soon to know what generative AI's impact will be on the workforce
-
DeepSeek’s R1 model training costs pour cold water on big tech’s massive AI spending
News Chinese AI developer DeepSeek says it created an industry-leading model on a pittance
-
This DeepSeek-powered pen testing tool could be a Cobalt Strike successor – and hackers have downloaded it 10,000 times since July
News ‘Villager’, a tool developed by a China-based red team project known as Cyberspike, is being used to automate attacks under the guise of penetration testing.
-
Everything you need to know about OpenAI's new open weight AI models, including price, performance, and where you can access them
News The two open weight models from OpenAI, gpt-oss-120b and gpt-oss-20b, are available under the Apache 2.0 license.
-
Microsoft is doubling down on multilingual large language models – and Europe stands to benefit the most
News The tech giant wants to ramp up development of LLMs for a range of European languages