skip to main content
4.7/5
Customers rate us on G2
See our reviews on G2.

The AI Hunger Games - The Rapid Adoption of DeepSeek: A Security Nightmare

CategoryInsights
CAI Headshot Roundel - Oliver Simonnet
ByOliver Simonnet
Date
Read time

The recent rapid adoption of the AI application “DeepSeek” has gained significant global attention. Becoming the #1 app on both the Apple Store and Google Play Store within its first few days, seeing over 10 million downloads.  

While this explosive growth of DeepSeek R1 highlights the public’s fascination with AI-driven tools, the security community and policymakers have been less enthusiastic. Governments and cyber security professionals have raised alarms about its potential risks to privacy, security, and misinformation, sparking debates over whether the convenience of such technology outweigh its dangers. 

With AI technologies evolving at an unprecedented pace, are we truly prepared to handle the security challenges they pose to individuals and businesses alike? 

Let's consider DeepSeek R1 as a critical case study and review some of the issues that may arise with rapid adoption of new AI tools. 

Content Biases – Censorship and Misinformation 

DeepSeek AI'sTerms of Useclarify that the technology operates under the strict regulatory frameworks of the Chinese government. This may be a concern to many, as these regulations mandate that the AI must adhere to the country’s core socialist values.” This means that the platform must systematically censor responses on political topics while simultaneously promoting narratives supported by the Chinese state. Evidence of this has already been documented with some users being quick to discover that the AI refused to answer questions regarding sensitive political topics such as Tiananmen Square. 

This becomes clear very quickly when using the AI and asking it to comment on sensitive political topics and can be replicated easily, as seen below:  

DeepSeek Screenshot
This behaviour could be bypassed by crafting less explicit prompts, resulting in a more useful - less censored - response:  

DeepSeek Screenshot
However, shortly after the response was displayed in the web interface, it was deleted by the application and replaced with the message “Sorry, that's beyond my current scope. Let’s talk about something else”:

DeepSeek Screenshot
The same behaviour was also observed within the mobile version of the application, where sensitive responses were retroactively deleted after being presented to the user. 

This selective censoring and dissemination of information raises ethical concerns regarding how AI models influence public opinion and the risks of AI-driven propaganda, in a world where it is already too easy for people to become misinformed or radicalised by existing media and information resources.

It also raises questions about the safeguards in place to prevent the unintended disclosure of sensitive information. If the tool is capable of revealing politically sensitive data – even if it’s self-aware enough to retroactively delete it - then it is reasonable to assume that an organisation’s proprietary or confidential data could also be exposed, especially if it has been unknowingly (or knowingly) included in its training data.

This presents a significant risk for organisations whose employees may unknowingly be inputting sensitive work-related information into their AI tools, potentially exposing proprietary data which could then be used for training.

New AI Tools - A Hacker’s Playground?

Security researchers were also quick to assess the AI model's susceptibility to attacks and abuse. Research performed by Kela, determined that the model was “highly vulnerable”, in no small part due to its susceptibility to Jailbreak attacks, which can be leveraged to bypass the built-in safety measures of an AI and cause it to generate harmful, biased, or inappropriate responses.

During our initial analysis, CultureAI found that it was trivial to bypass the platforms “security controls” using basic prompt injection techniques without the need to perform a jailbreak:

DeepSeek Screenshot
DeepSeek Screenshot
Interestingly, unlike responses to political subjects, forcing the model to divulge malicious code did not result in it being retroactively deleted by the application. This could allow an attacker to copy and expand upon the responses to:

  • Generate malicious code for stealing user data or creating malware

  • Generate hyper personalised content for use in social engineering.

  • Gain suggestions on accessing stolen personal information.

  • Gain criminal assistance such as tips on money laundering.

  • Extract sensitive training data, which may include intellectual property.

This conclusion is also supported by research performed by Donato Capitella using spikee.ai, which also concluded that the initial DeepSeek AI models ranked extremely low when it came to resisting common prompt injection attacks. All these factors considered indicate that “security” may not have been a priority throughout the model’s training.

The Ecosystem - A Goldmine for Attackers?

Focusing on the individual tool however does not capture the big picture! When it comes to the exploitability of AI, the rapid rise of tools like DeepSeek emphasises both innovation and risk. Attackers will be quick to exploit this trend and develop malicious applications, disguised as the latest helpful AI assistants, luring users in and harvesting their sensitive and valuable data. We already know this works the tactic has been leveraged many times before. In 2024 it was reported that there were hundreds of malicious apps on the Google Play Store which had been downloaded millions of times. Now we can compound this with the public and business appetite to rapidly adopt AI, and the threat becomes even more concerning.

Without proper caution, the eagerness of users and organisations get their hands on the latest AI tools may unknowingly drive sensitive data such as login credentials, financial data, and intellectual property right into the hands of attackers.

Data Privacy – Who’s Really in Control?

One of the largest concerns highlighted by DeepSeek AI is the data collection and storage policies for AI tools. When reviewing DeepSeek’s privacy policy, a few things jumped out:

  • User data, including chat history, user inputs, and uploaded files, is collected.

  • This data is stored on servers inside the People's Republic of China.

  • The company reserves the right to share user data with Chinese authorities to comply with legal obligations.

Is this really something that should be a concern though? I hear some of you ask. Would employees really input business IP, confidential information, personal details, or other sensitive data into something that can help them automate their jobs? That answer is: Absolutely they would.

This raises alarms as China’s National Intelligence Law mandates that all enterprises, organisations, and individuals must “support, assist, and cooperate” with the country’s intelligence agencies. This means that any data processed by DeepSeek could be accessed by the Chinese government, posing significant risks for businesses, government employees, and individuals using the platform.

However, DeepSeek's handling of user data also highlights the more often overlooked risk of third-party integrations. As we’ve seen with the likes of ChatGPT, AI technologies are now also integrated into other solutions to provide AI features and functionalities. This expands data privacy concerns beyond the AI tool itself to the plugins, browser extensions, and productivity tools that make use of it.

It is however not surprising that user data is leveraged by AI technology providers, and even when asked, DeepSeek was very helpful in confirming that our data may not always be handled with the highest level of confidentiality:

DeepSeek Screenshot
Whether data enters an AI model directly or through integrations, once processed, tracking its use, preventing misuse, and ensuring compliance with global regulations becomes significantly more challenging. And, while China-related cases dominate headlines, businesses worldwide may already have users unknowingly exposing proprietary or sensitive information through existing AI-powered services without realising it.

Managing Risk - The Role of Human Risk Management (HRM)

As AI becomes even more deeply integrated into daily workflows, effective Human Risk Management (HRM) is more critical than ever. To defend against security and privacy risks, organisations and users must take proactive steps to evaluate and control AI adoption, including knowing:

  • What data is being fed into AI tools, which tools, and by which employees.

  • Where the AI models store and process their data.

  • How AI-generated content aligns with their security and ethical guidelines.

  • What vulnerabilities exist within AI models that could be exploited to target users.

In an era where data is one of the most valuable assets, blindly adopting AI tools like DeepSeek without understanding the risks it may pose could have severe consequences. Whether it’s data privacy violations, information leaks, cyber security threats, or widespread misinformation, DeepSeek may serve as a case study in the dangers of unchecked AI adoption.

Initial Responses – What are people doing?

Data privacy concerns have already prompted action from government agencies and regulators. The U.S. Navy issued a warning, advising its personnel against the use of DeepSeek for any purpose, be it work-related or personal. In addition to this, Italian and Irish regulators banned the app within their respective countries among concerns about how personal data is collected, from which sources it’s collected, and for what purposes its used.

These decisions highlights how seriously governments are taking the potential security threats posed by foreign AI systems. and how companies and individuals a like should do the same.

Conclusion - A Cautionary Tale for AI Adoption

The true impact of DeepSeek's meteoric rise has yet to unfold, but its emergence - alongside other AI advancements - serves as an early warning about the risks at this intersection of AI, data security, and geopolitical influence. While AI holds the potential to revolutionise industries it must be implemented and adopted responsibly. The security, privacy, and ethical challenges posed by AI tools cannot be ignored. Governments, businesses, and individuals alike must remain cautious and establish proactive safeguards to mitigate the risks before they become emergencies.

The rapid (and wow is it rapid!) pace of AI innovation is exciting, but reckless adoption will come at a cost. The future of AI won’t just be defined by what it can do, but by how responsibly we use it. We must adopt AI wisely, and ensure that AI innovation enhances security, privacy, and ethical integrity, rather than undermine it.