Running large language models locally is becoming easier and more efficient. If you’re using Ubuntu 24.04 and want to experiment with DeepSeek, a powerful open-source LLM, this guide walks you through installing it locally using Ollama, a lightweight tool that simplifies running LLMs.
What You’ll Need
- A machine running Ubuntu 24.04
- At least 16GB RAM (32GB+ recommended)
- A GPU is optional, but it speeds up inference considerably
- Basic terminal knowledge
Step 1: Install Ollama
Ollama handles downloading, setting up, and running models with a single command. Here’s how to get it:
1.1 Download the Installer
curl -fsSL https://ollama.com/install.sh | sh
This script will automatically install Ollama and its dependencies.
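Once the script finishes, verify the installation:
ollama --version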
1.2 Start the Ollama Service
ollama serve
Leave this running in a terminal, or run it as a background service, depending on your setup.
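Note that on Ubuntu the install script usually registers Ollama as a systemd service and starts it automatically, in which case ollama serve will report that port 11434 is already in use. If so, manage the service with systemctl instead:
sudo systemctl status ollama
sudo systemctl restart ollama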
Step 2: Pull the DeepSeek Model
Ollama supports various models out of the box. To use DeepSeek, run:
ollama pull deepseek-llm
You can also pull a specific variant if needed (e.g., deepseek-coder, deepseek-chat, etc.):
ollama pull deepseek-coder
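Models are typically several gigabytes, so the first pull may take a while. Once it finishes, you can confirm what’s available locally:
ollama list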
Step 3: Run DeepSeek Locally
Once downloaded, start a local chat session:
ollama run deepseek-llm
You’ll now be able to interact with DeepSeek directly from your terminal.
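You can also pass a prompt as an argument to get a one-off answer instead of an interactive session:
ollama run deepseek-llm "Summarize the benefits of running LLMs locally."
Inside an interactive session, type /bye (or press Ctrl+D) to exit.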
Optional: Use DeepSeek via API
Ollama also provides a local API server on port 11434. You can interact with the model using HTTP requests:
Example with curl:
curl http://localhost:11434/api/generate -d '{
"model": "deepseek-llm",
"prompt": "Explain quantum computing like I’m five."
}'
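By default, the generate endpoint streams the answer back as a series of JSON objects. If you’d prefer a single JSON response, set "stream": false in the request body:
curl http://localhost:11434/api/generate -d '{
"model": "deepseek-llm",
"prompt": "Explain quantum computing like I’m five.",
"stream": false
}'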
This makes it easy to integrate DeepSeek into your own applications.
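For example, here’s a minimal Python sketch using only the standard library (the ask helper is hypothetical, just for illustration; it assumes the default port and the stream-disabled JSON response shown above):
import json
import urllib.request

def ask(prompt, model="deepseek-llm"):
    # Hypothetical helper: POST to Ollama's local generate endpoint.
    # "stream": False requests one complete JSON object instead of a stream.
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask("Explain quantum computing like I'm five."))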
Troubleshooting Tips
- Model not found? Make sure you typed the exact model name. Use ollama list to see installed models.
- Out of memory? Try smaller model variants or free up RAM.
- Port in use? Check for other services using port 11434.
Conclusion
Running DeepSeek locally on Ubuntu 24.04 with Ollama is straightforward. Within minutes, you can have a cutting-edge LLM running offline, ready for development, testing, or private use. Ollama’s simplicity makes it a great tool for developers, researchers, or anyone curious about AI without relying on the cloud.
If you’re running other models locally or building AI workflows on your machine, feel free to share your setup or questions in the comments.