First, we will install Ollama.
**In this example:**
username = you
host = somewhere
(replace these with your real username and hostname)
you@somewhere:~# wget https://ollama.ai/install.sh
you@somewhere:~# sh ./install.sh
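If you've set machines up before, it can help to check whether the binary is already on your PATH before re-running the installer. A minimal sketch; the `ollama_installed` helper is my own convenience function, not part of Ollama:

```shell
# Returns 0 if the ollama binary is on PATH, non-zero otherwise.
ollama_installed() {
    command -v ollama >/dev/null 2>&1
}

if ollama_installed; then
    echo "ollama is already installed"
else
    echo "ollama not found; run install.sh first"
fi
```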
Now let's download a model. Be careful not to download one too big for your machine, or it may run badly. We'll use deepseek-r1 and llama3.2.
First, run:
ollama pull deepseek-r1
then:
ollama pull llama3.2
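If you want to grab several models in one go, a small loop works. A sketch; the `pull_models` function and the `DRY_RUN` flag are my own conveniences for previewing the commands, not Ollama features:

```shell
# Pull a list of models one after another.
# Set DRY_RUN=1 to print the commands instead of running them.
pull_models() {
    for model in "$@"; do
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "would run: ollama pull $model"
        else
            ollama pull "$model"
        fi
    done
}

DRY_RUN=1
pull_models deepseek-r1 llama3.2
# → would run: ollama pull deepseek-r1
# → would run: ollama pull llama3.2
```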
Once done, let's run:
ollama -h
You should now see something like:
Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  stop        Stop a running model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information
That's a lot. Let's look at the models we installed, and probably forgot about because the downloads took so long ;-). I get it. But let's look:
ollama list
will show the installed models.
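`ollama list` prints a table with NAME, ID, SIZE and MODIFIED columns. If you only want the model names in a script, a quick awk over the output does it. The sample text below stands in for live `ollama list` output, so the IDs and sizes are placeholders:

```shell
# Stand-in for real `ollama list` output (IDs and sizes are placeholders).
sample_list='NAME                  ID              SIZE      MODIFIED
deepseek-r1:latest    abc123          4.7 GB    2 hours ago
llama3.2:latest       def456          2.0 GB    2 hours ago'

# Skip the header row and print the first column (the model name).
echo "$sample_list" | awk 'NR > 1 { print $1 }'
# → deepseek-r1:latest
# → llama3.2:latest
```

On a machine with Ollama running, you'd pipe the real command instead: `ollama list | awk 'NR > 1 { print $1 }'`.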
ollama run deepseek-r1
will run deepseek-r1 in an interactive chat session.
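By default `ollama run` drops you into an interactive prompt, but you can also pass the prompt as an argument for one-shot, scriptable use. A sketch; `run_prompt` is my own wrapper name, and the actual call needs ollama and the model installed, so it's left commented out:

```shell
# One-shot prompt instead of the interactive REPL.
# Requires ollama to be installed and deepseek-r1 pulled.
run_prompt() {
    ollama run deepseek-r1 "$1"
}

# Example (uncomment on a machine with ollama set up):
# run_prompt "Why is the sky blue?"
```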