How to Run Large Language Models Locally on a Windows Machine Using WSL and Ollama

0xkoji - Dec 4 '23 - Dev Community

Prerequisites

Install WSL

https://learn.microsoft.com/en-us/windows/wsl/install
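On recent Windows builds, the setup in the linked docs comes down to a single command run from an elevated PowerShell prompt (a sketch of the default path, which installs WSL 2 with Ubuntu as the default distro):

# Run in PowerShell as Administrator; reboot when prompted
wsl --install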

Install curl

sudo apt update
sudo apt install curl
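You can confirm curl is on the PATH before moving on:

# Print the installed curl version
curl --version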

Install Ollama via curl

curl https://ollama.ai/install.sh | sh
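Once the install script finishes, a quick check confirms the CLI is available:

# Print the installed Ollama version
ollama --version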

Run Ollama

In this case, we will run Mistral-7B. If you want to try another model, you can pick one from the Ollama library:
https://ollama.ai/library
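If you would rather download weights ahead of time, the standard Ollama CLI also has pull and list subcommands; note that both need the Ollama server (started in the next step) to be running:

# Download the Mistral weights without starting a chat session
ollama pull mistral
# Show every model stored locally
ollama list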

ollama serve
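ollama serve starts the Ollama server, which listens on localhost:11434 by default. From another shell you can verify it is up:

# The server answers a plain GET on its root with "Ollama is running"
curl http://localhost:11434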

Open another terminal tab and run the following command. On the first run it pulls the model, then drops you into an interactive prompt.

ollama run mistral

If everything works properly, you will land in an interactive prompt where you can chat with the model.
My machine has an RTX 3070 GPU, so Ollama detects it and runs the model on the GPU.
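The running server also exposes an HTTP API, so you can query the model from scripts instead of the interactive prompt. A minimal sketch using Ollama's documented /api/generate endpoint (the prompt text is just an example):

# Send a one-shot prompt to the local Mistral model;
# "stream": false returns the whole answer as a single JSON object
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'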

Terminate Ollama

If you want to exit the interactive session, type the following (Ctrl + d also works).

/bye

Then press Ctrl + C in the terminal where you ran ollama serve to stop the server.
