Run DeepSeek AI Locally: A Comprehensive Guide Using Ollama and Chatbox

Want to harness the power of DeepSeek AI without relying on cloud services? This guide provides a step-by-step walkthrough on how to set up and run the DeepSeek AI model locally using Ollama and Chatbox. You'll be able to run AI-powered interactions efficiently on your local machine.

Prerequisites

Before diving in, ensure you have the following:

  • Hardware:
    • At least 16GB of system RAM.
    • A dedicated graphics card (e.g., an NVIDIA GeForce GTX 1080), strongly recommended for usable inference speed.
  • Software:
    • Docker Desktop, installed and running.
    • The ChatBox desktop application (we'll install it in Step 4).

Background: Ollama & DeepSeek AI

Ollama is a tool that makes it simple to run large language models locally. It bundles model weights, configuration, and runtime dependencies into a single package, and the project ships an official Docker image, which is how we'll run it in this guide.

DeepSeek-R1 is an open-weight reasoning model from DeepSeek AI, known for strong step-by-step problem solving. In this guide, we'll use the deepseek-r1:8b version, an 8-billion-parameter variant small enough to run on consumer hardware.

ChatBox provides a user-friendly interface to interact with these AI models seamlessly.

Ollama vs. DeepSeek: How They Fit Together

These aren't competing alternatives; they play different roles in this setup:

  • Ollama: The runtime. It downloads, manages, and serves models over a local HTTP API.
  • DeepSeek: The model. It is one of many models Ollama can serve, and we're choosing it here for its reasoning strength.

In this guide, we'll use the ChatBox interface to talk to our locally hosted DeepSeek model, served by Ollama.

Step-by-Step Setup Guide

Step 1: Install Ollama Docker Container

Open your terminal and execute the following Docker command:

docker run -d --name ollama -p 11434:11434 ollama/ollama

This command does the following:

  • -d: Runs the container in detached mode (in the background).
  • --name ollama: Assigns the name "ollama" to the container for easy management.
  • -p 11434:11434: Maps port 11434 on your host machine to port 11434 in the container, which is the default port Ollama uses.
  • ollama/ollama: Specifies the Docker image to use, in this case, the official Ollama image.
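
Note that with the command above, inference runs on the CPU, and any pulled models live inside the container, so they're lost if the container is removed. If you have an NVIDIA GPU and the NVIDIA Container Toolkit installed, a commonly used variant that also persists models in a named Docker volume is:

docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

The -v ollama:/root/.ollama flag stores downloaded models in a volume named "ollama" that survives container re-creation.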

Step 2: Download the DeepSeek Model

Once the Ollama container is up and running, pull the DeepSeek AI model:

docker exec -it ollama ollama pull deepseek-r1:8b

This command fetches the deepseek-r1:8b version of the model from the Ollama repository.
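
Before wiring up a GUI, you can optionally chat with the model straight from the terminal to confirm it responds:

docker exec -it ollama ollama run deepseek-r1:8b

Type a question at the >>> prompt, and enter /bye to end the session.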

Step 3: Validate the Setup

Confirm that Ollama is running and that the model was downloaded successfully:

docker exec -it ollama ollama list

If the model is listed, you are good to go.
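
You can also verify the HTTP API that ChatBox will talk to. This quick check assumes the default port mapping from Step 1 and uses Ollama's /api/generate endpoint:

curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:8b", "prompt": "Why is the sky blue?", "stream": false}'

A JSON reply containing a response field confirms that the container, the model, and the API are all working.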

Step 4: Download and Install ChatBox

Go to the ChatBoxAI website and download the application for your operating system. Install the downloaded file.

Step 5: Select the DeepSeek Model in ChatBox

Launch ChatBox and open its settings. Choose Ollama as the model provider, set the API host to http://localhost:11434, and then select the deepseek-r1:8b model from the list.
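
If deepseek-r1:8b doesn't appear in ChatBox's model list, you can check which models Ollama is advertising over HTTP via its /api/tags endpoint:

curl http://localhost:11434/api/tags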

Final Steps and Testing

With the model running and ChatBox configured, you can now interact with the DeepSeek AI model. Ask questions, run tasks, and explore its capabilities.

For example, try asking it to write a simple Python script. DeepSeek-R1 is a general reasoning model rather than a dedicated coding assistant, but its step-by-step answers can still be insightful.
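
If you'd rather script that same interaction than type it into the GUI, here is a minimal sketch against Ollama's /api/chat endpoint:

curl http://localhost:11434/api/chat -d '{"model": "deepseek-r1:8b", "messages": [{"role": "user", "content": "Write a simple Python script that prints the first ten Fibonacci numbers."}], "stream": false}'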

Conclusion

By following this guide, you've successfully set up DeepSeek AI locally using Ollama and ChatBox. This setup allows you to leverage the power of advanced AI models on your own machine, enhancing privacy and control.

Consider supporting the author with a coffee if you found this article helpful!
