Are you experiencing slow response times or system overload when using DeepSeek? Deploying DeepSeek Large Language Models (LLMs) locally can significantly improve performance and provide more control over your AI interactions. This guide walks you through the process of setting up DeepSeek on your local machine using Ollama, a powerful open-source framework designed to simplify local LLM deployment.
Ollama is an open-source tool that makes it incredibly easy to download, install, and run LLMs locally. Here's how to get started:
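On Linux, for example, Ollama ships an official one-line install script (macOS and Windows users can download the installer from ollama.com instead):

```bash
# Linux: install Ollama with the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the CLI is installed and on your PATH
ollama --version
```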
Ollama provides a set of simple commands for managing your LLMs:

- `ollama run <model_name>`: Runs the specified model in the command line.
- `ollama list`: Lists all locally installed models.
- `ollama ps`: Shows the status of running models.
- `ollama rm <model_name>`: Removes a model.
- `ollama serve`: Starts the Ollama API service.

Before deploying DeepSeek R1, ensure your system meets the necessary requirements. A machine with at least 32GB of RAM is recommended, especially for larger models.
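If you are unsure how much memory your machine has, you can check from the terminal with standard OS commands (nothing Ollama-specific):

```bash
# Linux: human-readable memory summary
free -h

# macOS: total physical memory in bytes
sysctl hw.memsize
```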
Once the requirements are met, download and run the model with a single command (e.g., `ollama run deepseek-r1:8b` for the 8B model).
If you experience slow download speeds during the model installation, press `Ctrl + C` to cancel the download and then rerun the command; Ollama typically resumes the pull where it left off.
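You can also automate that cancel-and-retry cycle with a small shell loop. This is a rough sketch that assumes GNU coreutils' `timeout` is available and relies on `ollama pull` resuming partial downloads:

```bash
# Give each attempt 5 minutes, then kill it and retry;
# interrupted pulls resume rather than restart from zero.
until timeout 300 ollama pull deepseek-r1:8b; do
  echo "Download interrupted or stalled; retrying in 5 seconds..."
  sleep 5
done
```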
To interact with DeepSeek through a user-friendly interface, you can integrate it with a WebUI. Here are several options:
Install Docker: Download and install Docker Desktop.
Run Open WebUI: Use the following Docker command to run Open WebUI:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
Access Open WebUI: Open your web browser and navigate to http://localhost:3000.
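If the page does not load, you can inspect the container with standard Docker commands:

```bash
# Verify the open-webui container is running
docker ps --filter name=open-webui

# Follow its logs to watch startup progress
docker logs -f open-webui
```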
Use Page Assist: Page Assist is a browser extension that connects to your local Ollama instance. After installing it, open its options page (e.g., chrome-extension://jfgfiigpkhlkbnfnbobbkinehhfdhndo/options.html) to configure it. The project is open source; see Page Assist on GitHub.

Set Environment Variables: To allow access from other websites and browser extensions, you need to set the OLLAMA_ORIGINS environment variable.
Open Terminal: Open your terminal.
Set Variable (macOS/Zsh example):

echo 'export OLLAMA_ORIGINS="*"' >> $HOME/.zshrc

Then open a new terminal (or run source $HOME/.zshrc) so the change takes effect.
Restart Ollama: Close and restart the Ollama application.
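After restarting, you can sanity-check the configuration from a fresh terminal. Port 11434 is Ollama's default, and the last command simulates a cross-origin request so you can look for an Access-Control-Allow-Origin header in the response:

```bash
# Confirm the variable is set in a new shell
echo $OLLAMA_ORIGINS

# Confirm the API is reachable; /api/tags lists installed models
curl http://localhost:11434/api/tags

# Simulate a cross-origin request and inspect the CORS headers
curl -i -H "Origin: https://example.com" http://localhost:11434/api/tags
```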
Use Chatbox: Download the Chatbox desktop app and set its model provider to Ollama; by default the Ollama API is available at http://localhost:11434. Then pick your DeepSeek model and start chatting.
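All of these front ends talk to the same local REST API, so you can also query DeepSeek directly. A minimal example against Ollama's documented /api/generate endpoint:

```bash
# One-off, non-streaming completion via the Ollama REST API
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Explain what a large language model is in one sentence.",
  "stream": false
}'
```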
Deploying DeepSeek locally with Ollama offers a powerful and convenient way to harness the capabilities of large language models on your own machine. By following this guide, you can enjoy improved performance, enhanced privacy, and greater control over your AI interactions.