Running DeepSeek R1 On Your Computer for Beginners
Heard the buzz? Play with it for free without sending your data somewhere else
There has been a lot of news lately about the new model 'DeepSeek R1' from China. There are concerns about privacy, but with some AI work the model has been 'distilled' into smaller versions that can run on a regular computer. They're not quite as powerful as the big model, but they're quite good. Here are instructions on how to get one running with two open source programs called Open WebUI and Ollama.
First, what we’re doing:
We're going to install Ollama, which lets you download and run local 'models' on your computer.
On top of that we'll use a program called Docker to install Open WebUI - it's easy to use, familiar, and as a program I'd say it has even more features than ChatGPT!
1 ) Installing Ollama
Go to ollama.com, download the installer, and install it.
Open your terminal shell (the examples here are shown on a Mac).
To test that Ollama works, type:
ollama run llama3.2:3b
- it should start downloading llama3.2:3b, and once it's done you can chat with it right in the terminal.
Or for DeepSeek:
ollama run deepseek-r1
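A few extra Ollama commands that can come in handy (a quick sketch - the exact DeepSeek sizes available may change, so check the model library on ollama.com):

ollama list                    # see which models you've downloaded so far
ollama pull deepseek-r1:1.5b   # grab a specific smaller size instead of the default
/bye                           # type this inside the chat to get back to your terminal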
2 ) Installing Docker
Now go to Docker.com and download 'Docker Desktop'. This will take a bit. You don't need an account.
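If you want to double-check Docker before moving on, this quick test should work (Docker Desktop also needs to be open for the next step):

docker --version               # prints the installed version if everything went fine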
3 ) Installing Open WebUI
Finally, go back to your terminal window and run this command to install Open WebUI on a regular computer, assuming you don't have a GPU:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
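A few standard Docker commands that may help if something goes wrong (nothing here is specific to Open WebUI - your chats live in the 'open-webui' volume, so they should survive a container restart or removal):

docker ps                      # check that the open-webui container is running
docker logs open-webui         # look at its logs if the page won't load
docker restart open-webui      # restart it
docker rm -f open-webui        # remove it entirely (re-run the command above to bring it back)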
NOW
Go to localhost:3000 in your regular browser (any time you want)
and you'll see llama3.2 and deepseek-r1 available to use! All the data stays local!
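Any other model you pull with Ollama should show up in Open WebUI as well (a hedged example - model names are whatever is listed on ollama.com/library):

ollama pull mistral            # download another model
# then refresh localhost:3000 and pick it from the model dropdown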