Tired of paywalls just to talk to an AI? Subscription fees for generating images, video, or music? What if you could have your own personal, private AI, running right on your Windows machine? It sounds complex, but this blueprint makes it simple.
Today, we’re building a powerful, private AI kingdom on your PC. We’ll install OpenWebUI, a premium chat interface, and fuel it with cutting-edge models like Meta’s Llama 3 and OpenAI’s new gpt-oss: your very own private ChatGPT. No more subscription fees, no more privacy concerns. This is the real-world, end-to-end instruction set to get you up and running.
For a complete visual walkthrough of every step, check out our full video guide here: (https://youtu.be/6Kx_4T-Eifs)
Let’s get this system upgrade started.
Chapter 1: Pre-Flight Check – Is Your PC Ready?
Before we install anything, we need to run a quick diagnostic. This entire system relies on a hardware feature called virtualization, which allows your computer to run a separate operating system (in our case, Linux) in a secure and efficient way.
Checking if it’s enabled is easy:
- Open Task Manager (Ctrl + Shift + Esc).
- Go to the Performance tab and click on CPU.
- On the right side, look for Virtualization. It should say Enabled.
If it says “Disabled,” you’ll need to restart your computer and enable it in your PC’s BIOS settings. A quick search for “how to enable virtualization in BIOS” for your specific computer brand will give you the exact steps. Do not proceed until this says “Enabled.”
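Prefer the command line? As an optional cross-check (a rough sketch; the exact wording varies by Windows version), you can run this in PowerShell:

systeminfo | findstr /i "Virtualization"    # look for "Virtualization Enabled In Firmware: Yes"

If nothing shows up, systeminfo may instead report that a hypervisor has been detected, which also means you’re good to go.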
Chapter 2: The Foundation – Installing & Configuring WSL
To run powerful AI tools, we need a Linux environment. The Windows Subsystem for Linux (WSL) lets us run a full-fledged Ubuntu system directly inside Windows, giving us the best of both worlds.
Installation:
A quick pro-tip: Run a Windows Update before you start to prevent common installation issues.
Now, open Windows Terminal or PowerShell as an administrator (right-click the icon and select “Run as administrator”). Type this single command to install both WSL and the Ubuntu operating system:
wsl --install -d Ubuntu
Press Enter and let it work its magic. Once it’s done, you must do a full system reboot to complete the installation. After rebooting, Ubuntu will launch, and you’ll create your Linux username and password.
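To confirm the installation took, a couple of quick checks from PowerShell (standard WSL commands) do the trick:

wsl --status    # shows your default distro and the WSL version in use
wsl -l -v       # lists installed distros; Ubuntu should appear with VERSION 2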
Crucial Configuration:
By default, WSL is conservative with your PC’s resources. AI models, however, are hungry for memory. We need to tell WSL it can use more of your PC’s power.
- Open File Explorer. In the address bar, type %userprofile% and hit Enter.
- In this folder, create a new text file named exactly .wslconfig (make sure it doesn’t end in .txt).
- Open the file and paste in the following configuration:
[wsl2]
memory=16GB
processors=8
You should adjust these numbers based on your system. A good rule of thumb is to set memory to about half of your total system RAM, and processors to half of your available CPU cores.
To apply these settings, open Terminal and run wsl --shutdown. The next time WSL starts, it will use your new, more powerful configuration.
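If you want to confirm the new limits actually took effect, a quick check inside the Ubuntu terminal (standard Linux tools, nothing specific to this guide) looks like this:

free -h    # total memory should be close to the value you set in .wslconfig
nproc      # should match the processors value you set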
A Critical Note on Memory: For AI, the most important memory is your graphics card’s VRAM. The memory setting we just configured acts as an overflow bucket. If a model is too big for your VRAM, it will spill over into this slower system RAM, and performance will suffer. Matching the model size to your VRAM is the real cheat code for speed.
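Not sure how much VRAM you have? Assuming an NVIDIA card with a current driver installed, nvidia-smi in PowerShell will report it:

nvidia-smi    # check the memory column, e.g. "0MiB / 12288MiB" means a 12GB card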
Chapter 3: The Engine – Installing Docker
We could install our AI software directly into Ubuntu, but that can get messy. The modern solution is containerization, and the king of containers is Docker. It packages applications neatly, ensuring they run consistently everywhere.
- Go to the official Docker website and download Docker Desktop for Windows.
- Run the installer, making sure the “Use WSL 2 based engine” option is checked.
- Once Docker Desktop is running, open its Settings (the gear icon).
- Navigate to Resources > WSL Integration.
- Make sure the toggle for your Ubuntu distribution is turned ON. This is the most common point of failure, so double-check it! (We’ll confirm the hookup with a quick check below.)
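To make sure Docker and the WSL integration are actually talking to each other, a quick smoke test from the Ubuntu terminal (standard Docker commands) is worth running:

docker --version              # should print the Docker version without errors
docker run --rm hello-world   # pulls and runs a tiny test container, then cleans it up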
Chapter 4: The Accelerator – The NVIDIA Stack
This is the step where most guides fail, but it’s simple if you follow one golden rule. To get blazing-fast GPU performance, we need to connect Docker to your graphics card.
The Golden Rule: You must only install the standard NVIDIA driver on your Windows host. Do NOT, under any circumstances, try to install a Linux driver inside your Ubuntu terminal. The magic of WSL is that the Windows driver securely passes the GPU through to Linux.
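A quick way to prove that passthrough is working: assuming a reasonably recent NVIDIA driver on Windows, WSL exposes nvidia-smi inside Ubuntu automatically, with no Linux driver installed.

nvidia-smi    # run inside Ubuntu; it should list your Windows GPU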
With that understood, open your Ubuntu terminal and run these commands one by one to install the NVIDIA Container Toolkit, which gives Docker access to the GPU:
1. Setup NVIDIA Repository:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
2. Install Toolkit:
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
3. Restart Docker Service (CRITICAL STEP):
This final command makes Docker aware of the new GPU toolkit. Don’t skip it!
sudo systemctl restart docker
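Before moving on, it’s worth confirming that Docker itself can see the GPU. A common smoke test looks like the line below; the CUDA image tag is just an example (any recent nvidia/cuda base image from Docker Hub will do):

docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi    # should print your GPU from inside a container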
Chapter 5: The Launch – Installing OpenWebUI
With our foundation, engine, and accelerator in place, launching the AI interface is as simple as running one definitive command in your Ubuntu terminal.
This command uses --network=host for the most reliable connection, gives the container access to our NVIDIA stack with --gpus all, and uses the :cuda version of the OpenWebUI image, which is purpose-built for GPU acceleration.
docker run -d --network=host --gpus all -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
Once it’s done downloading, open your browser to http://localhost:8080.
(Note: Because we’re using host networking, the port is 8080, not 3000).
Create your first user account, and you’ll see the professional-grade interface for your new local AI hub.
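If the page doesn’t load, two quick checks usually pinpoint the issue (standard Docker commands; open-webui is the container name we set above):

docker ps                        # the open-webui container should show a STATUS of "Up"
docker logs --tail 50 open-webui # watch for errors while it finishes starting up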
Chapter 6: The Fuel – Loading Your First AI
Our interface is running, but it’s an empty shell. It needs an AI model. For that, we’ll use Ollama, an amazing tool that makes running models incredibly easy.
1. Install Ollama:
In a new Ubuntu terminal, run this command:
curl -fsSL https://ollama.com/install.sh | sh
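A quick check that the install worked and the Ollama server is responding (standard Ollama commands):

ollama --version    # prints the installed version
ollama list         # empty for now, but confirms the server answers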
2. Pull Your First Models:
We’ll start with two powerful models. In your terminal, run these commands one at a time. The first download will take a while, so be patient.
- For a great all-around model from Meta:
ollama run llama3
- For the powerful new reasoning model from OpenAI:
ollama run gpt-oss:20b
Heads-up: The gpt-oss:20b model is a heavyweight. It’s over 11GB to download and needs a powerful machine, ideally with 16GB of VRAM or system RAM dedicated to WSL, to run smoothly.
Once the downloads are complete, you can type /bye in the terminal to exit the model prompt.
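Back at the shell prompt, you can confirm both models are downloaded and ready:

ollama list    # should show llama3 and gpt-oss:20b along with their sizes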
3. Connect and Chat:
Go back to your OpenWebUI in the browser (http://localhost:8080). The Docker command we ran earlier already connected it to Ollama. Simply click the “Select a model” dropdown at the top, and you should see both llama3 and gpt-oss:20b ready to go.
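If the dropdown stays empty, a useful check is whether Ollama is reachable at the address we passed in the Docker command. From the Ubuntu terminal (the /api/tags endpoint is part of Ollama’s standard API):

curl http://127.0.0.1:11434/api/tags    # should return JSON listing llama3 and gpt-oss:20b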
You’ve done it!
The Pro-Tip
Running this on localhost is great, but what if you want to access your personal AI from anywhere? A free tool called Cloudflare Tunnel allows you to create a secure, private link from the internet directly to this application, without exposing your home IP address. It’s an entire blueprint of its own, and we’re building a dedicated manual for it soon.
The Recap
So there you have it. We ran a pre-flight check, installed and configured the WSL foundation, set up the Docker engine, installed the NVIDIA accelerator stack, launched the OpenWebUI container, and fueled it with live AI models using Ollama. You now have the keys to your own private, fully operational AI kingdom.
If you’re ready to keep turning the pages of the manual with us, be sure to watch the full video guide and hit that subscribe button!