
Download and run DeepSeek R1 with Ollama on Windows

If you’re looking to run DeepSeek R1 locally using Ollama on your Windows machine, this guide walks you through the process step by step. No cloud APIs, no subscriptions: just Windows, DeepSeek R1, and Ollama.

What Is DeepSeek R1?

DeepSeek R1 is an open-source large language model (LLM) built for research and development. It’s fast, flexible, and designed to be run locally. With the help of Ollama, a tool for easily running LLMs, you can get DeepSeek R1 up and running without diving into complex installations.


Prerequisites

Before you begin, make sure your system checks the following boxes:

  • Windows 10 or 11 (64-bit)
  • WSL 2 (Windows Subsystem for Linux) installed
  • Docker Desktop installed and running
  • At least 16 GB RAM recommended
  • Ollama installed

Step 1: Install WSL and Docker

  1. Enable WSL:
    Open PowerShell as administrator and run:
    wsl --install
    Reboot your system if prompted.
  2. Install Ubuntu from Microsoft Store
    Search for Ubuntu in the Microsoft Store and install the latest version.
  3. Set up Docker Desktop
    • Download Docker from docker.com
    • During installation, ensure “Use WSL 2 based engine” is selected.
    • Start Docker and let it integrate with WSL.
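You can sanity-check the setup above from PowerShell or a terminal. `wsl --status`, `wsl -l -v`, and the `docker` commands below are standard, though their exact output format varies by version:

```shell
# Verify WSL is installed and see the default version
wsl --status

# List installed distributions and which WSL version each uses
wsl -l -v

# Confirm the Docker CLI works and the daemon is reachable
docker --version
docker info
```

If `docker info` reports an error about the daemon, open Docker Desktop and wait for it to finish starting before retrying.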

Step 2: Install Ollama

Ollama provides a simple CLI to run models locally. Here’s how to get it:

  1. Download Ollama for Windows from the official site:
    https://ollama.com/download
  2. Run the installer and follow the setup instructions.
  3. After installation, open a terminal and run:
    ollama --version
    If you see a version number, you’re good to go.
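Beyond the version check, you can confirm the background Ollama service is actually up. By default it listens on localhost port 11434 (the default in current Ollama releases):

```shell
# Check the CLI is on your PATH
ollama --version

# A plain GET against the local service should respond
# with "Ollama is running"
curl http://localhost:11434
```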

Step 3: Download DeepSeek R1

With Ollama set up, downloading and running DeepSeek R1 is just one command away:

ollama run deepseek-r1

Ollama will automatically pull the DeepSeek R1 model and its dependencies.

⚠️ The first-time download can take a while (several gigabytes, depending on which model variant you pull and your internet speed).
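If you’d rather download the model ahead of time, or pick a specific size, `ollama pull` accepts a tag. The size tags below (e.g. `deepseek-r1:7b`) reflect the Ollama model library at the time of writing and may change:

```shell
# Download the default variant without starting a chat session
ollama pull deepseek-r1

# Or pull a specific parameter size, e.g. a smaller 7B variant
ollama pull deepseek-r1:7b

# List downloaded models and their on-disk sizes
ollama list
```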


Step 4: Chat with DeepSeek R1

Once the model is loaded, you’ll get a prompt:

>>>

Start chatting directly with DeepSeek R1 in your terminal. You’re now running a cutting-edge LLM locally on Windows.
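The interactive prompt isn’t the only interface: Ollama also exposes a local REST API, which is handy for scripting. A minimal sketch using the documented `/api/generate` endpoint (set `"stream": false` to get a single JSON response instead of streamed chunks):

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "Explain WSL 2 in one sentence.",
  "stream": false
}'
```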


Troubleshooting Tips

  • Docker not running? Make sure Docker Desktop is open and the WSL integration is enabled.
  • Ollama can’t connect to the model? Restart your system or reinitialize Docker.
  • Out of memory? DeepSeek R1 is large; close other memory-heavy apps, try a smaller model variant, and consider upgrading your RAM if you consistently hit limits.
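If memory pressure persists, running a smaller DeepSeek R1 variant is usually the quickest fix. The tags below assume the size variants published in the Ollama model library, and `ollama stop` is available in recent Ollama releases:

```shell
# A 1.5B-parameter variant needs far less RAM than the default
ollama run deepseek-r1:1.5b

# Unload a running model from memory without deleting it from disk
ollama stop deepseek-r1
```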

Final Thoughts

DeepSeek R1 + Ollama is a powerful local AI setup that brings open-source language models to your fingertips. With a bit of setup, you can run an LLM on your own machine: no cloud required, no data sharing, no subscription fees.

Questions? Drop them in the comments or reach out on Ollama’s Discord.
