Ollama is a tool that makes it very easy to work with large language models (LLMs) such as Gemma 3. Instead of installing a mountain of dependencies and configuring a complex environment, Ollama simplifies the entire process.
- Download models: Ollama lets you find and download pre-trained models with a single command.
- Hassle-free testing: it eliminates the need to set up a complex development environment.
1. Install Ollama
Run the following command in your terminal:
curl -fsSL https://ollama.com/install.sh | sh
For more information or to install it on a different operating system, visit the official Ollama website: https://ollama.com/download
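To confirm the installation worked, you can check the CLI version. On Linux, the install script also sets Ollama up as a background service; if the server is not already running, you can start it yourself. This is just a quick sanity check, not part of the official instructions:

ollama --version   # print the installed Ollama version
ollama serve       # start the server manually, only if it is not already running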
2. Install a model in Ollama
Ollama has a library where you can browse the available models at https://ollama.com/search. In this example, I will install Gemma 3, a model light enough to run even on a CPU-only machine.

[Image: the Gemma 3 model page on Ollama]
Execute the following command in your terminal:
ollama run gemma3:1b
The text after the colon (":") specifies the exact variant, since a model may come in several versions depending on size, context window, supported inputs, and so on.
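If you only want to download a variant without starting an interactive session, or check which models are already on disk, Ollama's standard subcommands cover that (the tag itself comes from the model's page in the library):

ollama pull gemma3:1b   # download the model without running it
ollama list             # show all models installed locally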
Why use the 1B version?
Mainly for two reasons:
- Minimal RAM usage: it needs only about 1.5–2 GB of RAM.
- Fast responses: it is ideal for tasks where the answer must be near-instant.
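To verify how much memory the model actually uses once it is loaded, recent versions of Ollama include a subcommand that lists running models and their footprint (a sketch; the exact output format varies by version):

ollama ps   # show loaded models, their size, and whether they run on CPU or GPU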
3. Enter your prompt
Type your prompt (the question or instruction you give to the model), and Ollama will display the generated text directly in the terminal. To leave the interactive session, type /bye.
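While the server is running, Ollama also exposes a local REST API (by default on port 11434), so you can send the same prompt programmatically instead of typing it into the terminal. A minimal sketch with curl, assuming the default host and port:

curl http://localhost:11434/api/generate -d '{
  "model": "gemma3:1b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

Setting "stream" to false returns the full response as a single JSON object rather than token-by-token chunks, which is easier to inspect when you are just trying things out.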
