Looking to use DeepSeek R1 in the EU? This new study shows it’s missing key criteria to comply with the EU AI Act
The model is vulnerable to hijacking via prompt injection, despite its reliability in other areas
The popular AI model DeepSeek R1 may contain inherent flaws that make it incompatible with the EU AI Act, according to new research.
DeepSeek R1 took the tech industry by storm in early January, offering an open source option for performance comparable to OpenAI’s o1 at a fraction of the cost.
But the model’s outputs may contain vulnerabilities that jeopardize its rollout in the EU. Using a new framework known as COMPL-AI, researchers analyzed two distilled DeepSeek R1 models: one based on Meta’s Llama 3.1 and the other on Alibaba’s Qwen 2.5.
The framework was created by researchers at ETH Zurich, the Institute for Computer Science, Artificial Intelligence and Technology (INSAIT), and LatticeFlow AI. It aims to evaluate models on a range of factors such as transparency, risk, bias, and cybersecurity readiness, measured against the requirements of the EU AI Act.
In a test of whether the models could be hijacked with jailbreaks and prompt injection attacks, both DeepSeek models scored the lowest of all models benchmarked by COMPL-AI. DeepSeek R1 Distill Llama 8B scored just 0.15 out of a possible 1.0 for goal hijacking and prompt leakage, compared to 0.43 for Llama 2 70B and 0.84 for Claude 3 Opus.
This could put it in jeopardy with Article 15, paragraph 5 of the EU AI Act, which states: “High-risk AI systems shall be resilient against attempts by unauthorized third parties to alter their use, outputs or performance by exploiting system vulnerabilities”.
The analysis comes after similar research into DeepSeek jailbreaking techniques conducted by Cisco, which found the model was susceptible to prompts intended to produce malicious outputs 100% of the time.
In other areas, the models outperformed some of the most popular open and proprietary LLMs. The model was found to consistently deny it was human, a feat not achieved by GPT-4 or the baseline version of Qwen.
Tested with HumanEval, a widely used benchmark for assessing an LLM’s code generation capabilities, DeepSeek also outperformed other open source models. DeepSeek R1 Qwen 14B scored 0.71 versus Llama 2 70B’s 0.31, exceeded on COMPL-AI’s leaderboard only by GPT-3.5 (0.76), GPT-4 (0.84), and Claude 3 Opus (0.85).
"As corporate AI governance requirements tighten, enterprises need to bridge internal AI governance and external compliance with technical evaluations to assess risks and ensure their AI systems can be safely deployed for commercial use," said Dr. Petar Tsankov, co-founder and CEO at LatticeFlow AI.
"Our evaluation of DeepSeek models underscores a growing challenge: while progress has been made in improving capabilities and reducing inference costs, one cannot ignore critical gaps in key areas that directly impact business risks – cybersecurity, bias, and censorship. With COMPL-AI, we commit to serving society and businesses with a comprehensive, technical, transparent approach to assessing and mitigating AI risks."
COMPL-AI is not formally associated with the European Commission, nor is it able to provide an official third-party assessment under the EU AI Act. Companies looking to adopt DeepSeek or other models into their tech stack will still need to follow best practices for implementing generative AI.
Leaders may also look into hiring for roles such as chief AI officers and data ethicists, alongside the establishment of sovereign cloud clusters to ensure data used for AI within the EU is compliant with regional laws.

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.