The world of Large Language Models (LLMs) is constantly evolving, and DeepSeek is at the forefront with its powerful R1 and V3 models. These models offer impressive capabilities in natural language reasoning, self-verification, and multi-step problem-solving. This article provides a comprehensive guide on how to download and deploy DeepSeek R1 and V3 models, including full versions, quantized variants, and distilled options.
DeepSeek models are designed for advanced AI applications, offering capabilities such as natural language reasoning, self-verification, and multi-step problem-solving.
You can download and utilize DeepSeek R1 models through two primary methods: Ollama and Hugging Face.
Ollama simplifies the process of running LLMs locally. Here's how to get started with DeepSeek R1 using Ollama:
Install Ollama: Download and install Ollama from the official Ollama website. Choose the version appropriate for your operating system (macOS, Linux, or Windows).
Run the Model: Once Ollama is installed, use the following commands in your terminal to run different versions of DeepSeek-R1:
# Base Model (671B)
ollama run deepseek-r1:671b
# Distilled Models
# 1.5B Parameters
ollama run deepseek-r1:1.5b
# 7B Parameters
ollama run deepseek-r1:7b
# 8B Parameters
ollama run deepseek-r1:8b
# 14B Parameters
ollama run deepseek-r1:14b
# 32B Parameters
ollama run deepseek-r1:32b
# 70B Parameters
ollama run deepseek-r1:70b
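Once a model is pulled, Ollama also exposes a local REST API (by default at http://localhost:11434) that you can script against. The sketch below builds a request body for the /api/generate endpoint and sends it; the model tag and prompt are only examples, and the network call assumes an Ollama server is already running on your machine.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False requests a single JSON response instead of a token stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    payload = json.dumps(build_generate_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Example: query the smallest distilled R1 model (must be pulled first).
    print(generate("deepseek-r1:1.5b", "Explain distillation in one sentence."))
```

The same endpoint works for every tag listed above; only the `model` field changes.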
Refer to this guide on Running DeepSeek Models Locally with ChatBox: Ollama Deployment Guide for a detailed walkthrough.
Hugging Face offers more control over the model and its configuration.
Installation Instructions (Hugging Face):
Ensure you have Git LFS (Large File Storage) installed. If not, run git lfs install first.
Clone the desired model repository:
# For Base Model
git lfs install
git clone https://huggingface.co/deepseek-ai/DeepSeek-R1
# For Zero Model (Distilled)
git lfs install
git clone https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero
Similar to R1, DeepSeek V3 models can be downloaded and deployed using Ollama or Hugging Face.
Using Ollama provides a straightforward way to run DeepSeek V3 locally.
Install Ollama: If you haven't already, download and install Ollama from the official website.
Run the Model: Use the following commands to run different variants of DeepSeek-V3:
# Available Variants
# Latest (404GB)
ollama run deepseek-v3:latest
# Base 671B (404GB)
ollama run deepseek-v3:671b
# FP16 (1.3TB)
ollama run deepseek-v3:671b-fp16
# Q4_K_M (404GB)
ollama run deepseek-v3:671b-q4_K_M
# Q8_0 (713GB)
ollama run deepseek-v3:671b-q8_0
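Because these downloads are enormous, it is worth checking free disk space before pulling a tag. The helper below is a small sketch that maps each V3 tag to the approximate download size listed above and suggests the largest variant that fits; the sizes come from this article and may drift as the tags are updated.

```python
# Approximate download sizes (GB) for the DeepSeek-V3 tags listed above.
V3_SIZES_GB = {
    "deepseek-v3:latest": 404,
    "deepseek-v3:671b": 404,
    "deepseek-v3:671b-q4_K_M": 404,
    "deepseek-v3:671b-q8_0": 713,
    "deepseek-v3:671b-fp16": 1300,  # ~1.3 TB
}

def largest_variant_that_fits(free_gb: float):
    """Return the tag with the biggest download that fits in free_gb, or None."""
    candidates = [(size, tag) for tag, size in V3_SIZES_GB.items() if size <= free_gb]
    if not candidates:
        return None
    # A larger download generally means higher precision, so prefer the
    # biggest variant that still fits on disk.
    return max(candidates)[1]
```

For example, with roughly 800 GB free, the helper suggests the Q8_0 variant rather than FP16.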
For advanced deployment configurations, read this guide on Running DeepSeek V3 on Ollama: Advanced Local AI Deployment Guide.
Hugging Face offers access to both the base and chat-optimized versions of DeepSeek V3.
DeepSeek V3 Base Model: The pre-trained foundation model, suited for further fine-tuning.
DeepSeek V3 Chat Model: Fine-tuned for dialogue and interaction.
Installation Instructions (Hugging Face):
Install Git LFS if you haven't already: git lfs install
Clone the desired repository:
# For Base Model
git lfs install
git clone https://huggingface.co/deepseek-ai/DeepSeek-V3-Base
# For Chat Model
git lfs install
git clone https://huggingface.co/deepseek-ai/DeepSeek-V3
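As an alternative to git clone, the Hugging Face Hub client library can download a repository programmatically. This is only a sketch that assumes the huggingface_hub package is installed (pip install huggingface_hub); the repository IDs match the clone URLs above.

```python
# Repository IDs on the Hugging Face Hub (taken from the clone URLs above).
DEEPSEEK_V3_REPOS = {
    "base": "deepseek-ai/DeepSeek-V3-Base",
    "chat": "deepseek-ai/DeepSeek-V3",
}

def download_v3(variant: str, local_dir: str) -> str:
    """Download a DeepSeek-V3 variant ("base" or "chat") into local_dir."""
    # Imported lazily so the rest of the module works without the package.
    from huggingface_hub import snapshot_download  # pip install huggingface_hub
    return snapshot_download(repo_id=DEEPSEEK_V3_REPOS[variant], local_dir=local_dir)

if __name__ == "__main__":
    # Hundreds of GB -- check free disk space before running this.
    download_v3("chat", "./DeepSeek-V3")
```

Unlike git clone, snapshot_download resumes interrupted transfers and reuses the shared Hugging Face cache.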
The best model and version will depend on your specific needs and available resources.
Consider your hardware capabilities and the specific requirements of your application when making your choice.
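To make that trade-off concrete, here is a rough sketch for picking a distilled R1 tag from available memory. The ~0.7 GB-per-billion-parameters figure is an assumption, a rule of thumb for Ollama's default 4-bit quantized weights rather than a measured number, so treat the suggestion as a starting point only.

```python
# Distilled DeepSeek-R1 sizes in billions of parameters, from the tags above.
R1_SIZES_B = [1.5, 7, 8, 14, 32, 70]

# ASSUMPTION: ~0.7 GB of memory per billion parameters for Ollama's default
# 4-bit quantized weights. A rough rule of thumb, not a measurement.
GB_PER_BILLION = 0.7

def suggest_r1_tag(available_gb: float):
    """Suggest the largest distilled R1 tag that roughly fits in memory."""
    fitting = [b for b in R1_SIZES_B if b * GB_PER_BILLION <= available_gb]
    if not fitting:
        return None
    best = max(fitting)
    return f"deepseek-r1:{best:g}b"  # 1.5 -> "1.5b", 14 -> "14b"
```

For instance, a machine with 16 GB of usable memory maps to the 14B distilled model under this rule of thumb.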
DeepSeek R1 and V3 models offer powerful capabilities for a variety of AI applications. By following this guide, you can successfully download and deploy these models using Ollama or Hugging Face, and unlock their potential for your projects. Remember to choose the model and version that best suits your needs and resources. Stay tuned for further advancements in the DeepSeek ecosystem! Consider exploring other helpful resources available on Chat Stream for more insights into leveraging large language models.