Installing Ollama and Gemma 3 (1B) on Linux

Ollama is a tool that makes it easy to work with large language models (LLMs) like Gemma 3. Instead of installing a mountain of dependencies and configuring a complex environment, Ollama handles the entire process for you.

Think of it as a personal assistant for working with AI models, one that allows you to:

  • Download models: Ollama lets you find and download pre-trained models instantly.
  • Test hassle-free: It eliminates the need to set up a complex development environment.

1. Install Ollama

Run the following commands in your terminal:
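
    # Install Ollama with the official one-line installer for Linux
    curl -fsSL https://ollama.com/install.sh | sh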

For more information or to install it on a different operating system, visit the official Ollama website: https://ollama.com/download

2. Install a model in Ollama

Ollama has a library where you can browse the available models at https://ollama.com/search. In this example, I will install Gemma 3, a lightweight model capable of running even on a single CPU.

[Image: The Gemma 3 model page in the Ollama library]

Execute the following commands in your terminal:
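
    # Download the model (on first run) and start it
    ollama run gemma3:1b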

The text after the colon (“:”) is the tag, which pins down the exact variant: models come in different versions depending on parameter count, context window, supported input types, and so on.
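
For example, the Gemma 3 family is published in several sizes. The tags below are the ones listed on the model's Ollama page; check that page for the current list:

    ollama run gemma3:1b    # smallest variant, roughly 1 billion parameters
    ollama run gemma3:4b    # larger variant, needs considerably more RAM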

Why use the 1B version?

Mainly for two reasons:

  1. Minimal RAM usage: It requires only 1.5 GB to 2 GB of RAM.
  2. Speed: It is ideal for tasks where the response must be immediate.

3. Enter your prompt

Type your prompt (the question or instruction you want to give the model), and Ollama will display the generated text directly in the terminal.
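
As a quick sketch of both ways to use it (the sample prompt is just an illustration), you can chat interactively or pass the prompt on the command line:

    # Interactive session: type prompts at the >>> prompt, exit with /bye
    ollama run gemma3:1b

    # One-shot prompt: prints the answer and exits
    ollama run gemma3:1b "Explain in one sentence what an LLM is."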

[Image: Example of running a model in Ollama]


