In the rapidly evolving landscape of Artificial Intelligence, security considerations often lag behind the pace of innovation. Recently, Wiz Research uncovered a significant security lapse in DeepSeek, a Chinese AI startup making waves with its cost-effective AI models. A publicly accessible database belonging to DeepSeek allowed full control over database operations, including the ability to access internal data. This incident serves as a critical reminder of the importance of robust security measures in the AI industry.
The Wiz Research team, while assessing DeepSeek's external security posture, discovered an exposed ClickHouse database. What made this exposure particularly alarming was the complete lack of authentication, granting unfettered access to sensitive internal data.
Here's a breakdown of what was exposed:
The database was hosted on open ports (8123 & 9000) associated with the following hosts:
Wiz Research used basic reconnaissance techniques like passive and active subdomain discovery to map the external attack surface. While initial scans of standard HTTP ports (80/443) didn't reveal significant risks, the discovery of unusual open ports (8123 & 9000) quickly led to the unprotected ClickHouse database.
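The port-discovery step described above can be sketched with a simple TCP connect check. This is a minimal illustration, not Wiz's actual tooling; the hostname in the usage comment is a placeholder, and real reconnaissance should only be run against assets you are authorized to test.

```python
import socket

def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures alike.
        return False

def scan_host(host: str, ports: list[int]) -> dict[int, bool]:
    """Check each port on a host and report which accepted a connection."""
    return {port: check_port(host, port) for port in ports}

# Usage (placeholder host): scanning beyond the standard web ports is what
# surfaced ClickHouse's defaults here — 8123 (HTTP) and 9000 (native TCP).
#   results = scan_host("target.example.com", [80, 443, 8123, 9000])
```

A full scanner (nmap, masscan) does this faster and more stealthily, but the core signal — an unexpected port accepting connections — is the same.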
By leveraging ClickHouse's HTTP interface via the "/play" path, direct SQL queries could be executed through the browser. A review of accessible datasets revealed a table named "log_stream" containing a substantial amount of sensitive data. Columns such as "string.values" and "_source" exposed plaintext logs, chat histories, API keys, and internal directory structures.
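ClickHouse's HTTP interface also accepts SQL directly as a `query` URL parameter, which is what makes an unauthenticated instance trivially explorable. The sketch below only builds such URLs (the hostname is a placeholder); against a server with no authentication, fetching them with curl or `urllib.request` would return results with no credentials at all.

```python
from urllib.parse import urlencode

def clickhouse_query_url(host: str, query: str, port: int = 8123) -> str:
    """Build a URL for ClickHouse's HTTP interface; the server runs the SQL
    passed in the `query` parameter and returns the result as plain text."""
    return f"http://{host}:{port}/?{urlencode({'query': query})}"

# First steps an explorer would take against an exposed instance
# (placeholder hostname; table name from the incident):
url_tables = clickhouse_query_url("ch.example.com", "SHOW TABLES")
url_sample = clickhouse_query_url(
    "ch.example.com",
    "SELECT * FROM log_stream LIMIT 10",
)
```

With authentication disabled, the same interface also accepts statements that modify data or configuration, which is why the exposure amounted to full control rather than read-only access.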
Upon discovering the exposure, Wiz Research promptly notified DeepSeek, which swiftly secured the database. While this fast response is commendable, the incident highlights a larger issue within the AI industry:
This DeepSeek database exposure provides invaluable lessons for organizations venturing into AI:
The DeepSeek incident underscores the urgent need for the AI industry to prioritize security alongside innovation. As AI becomes increasingly integrated into our lives, protecting sensitive data and ensuring robust security practices must be paramount. By learning from these exposures and proactively addressing security risks, the AI community can foster a safer and more trustworthy future.
For a deeper understanding of the AI landscape, explore The State of AI in the Cloud 2025 report.