🚀 How to Create a Personal AI Assistant with Ollama + NativeMind

NativeMind Team · 6 min read

Tags: ollama, nativemind, ollama installation, ollama nativemind guide, local llm

🤔 What is Ollama?

Ollama is a tool that lets you run AI models on your own computer, and installing it is as simple as installing any other desktop app, such as Chrome. Once installed, you can:

  • Use AI completely offline - chat without needing an internet connection
  • Protect your privacy - all data stays on your computer and never gets uploaded to the cloud
  • Use it for free - no paid subscriptions needed, install once and use forever
  • Access multiple AI models - run a wide range of open-source models, many of which are comparable to ChatGPT for everyday tasks

📋 Pre-Installation Requirements

System Requirements (macOS Example)

  • macOS 11 Big Sur or later
  • At least 16GB RAM (24GB+ recommended)
  • Apple Silicon chips recommended (M1, M2, M3, etc. for better performance)
  • At least 10GB available disk space
  • Stable internet connection (only needed for downloading)

💡 Pro Tip

Just like running demanding games, AI models require substantial hardware resources. Macs with Apple Silicon chips run AI models most efficiently, while Intel-based Macs can still run them but with relatively slower performance.
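
If you want to confirm these requirements before installing, a few built-in macOS Terminal commands will tell you everything you need (nothing Ollama-specific here):

```bash
# macOS version: should be 11 (Big Sur) or later
sw_vers -productVersion

# Chip type: "arm64" means Apple Silicon, "x86_64" means Intel
uname -m

# Installed RAM in GB (aim for 16GB or more)
echo "$(( $(sysctl -n hw.memsize) / 1024 / 1024 / 1024 )) GB RAM"

# Free disk space on the system volume (look for at least 10GB available)
df -h /
```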


🚀 Step 1: Download Ollama

1.1 Visit the Official Website

Open your browser and go to https://ollama.com

ollama.com

1.2 Download the macOS Version

The website will automatically detect your Mac system. Click the "Download for macOS" button to start downloading.

Ollama Download Page

This will automatically download the installer package, approximately 180MB in size.
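
If you already use Homebrew and prefer installing from the Terminal, Ollama is also available as a Homebrew cask; the exact cask name is an assumption here and may change, so search first. The rest of this guide assumes the standard website download:

```bash
# Check what Homebrew offers for Ollama (formula = CLI only, cask = the desktop app)
brew search ollama

# Install the desktop app via the cask (name assumed to be "ollama"; adjust if the search shows otherwise)
brew install --cask ollama
```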


🔧 Step 2: Install Ollama

2.1 Open the Downloaded Installer

  1. Locate the downloaded file
    • Usually found in your "Downloads" folder
    • File name will be something like: Ollama-darwin.zip
  2. Extract and double-click to run
    • Double-click the zip file to extract automatically
    • Double-click the extracted Ollama.app file

Ollama Installation

2.2 Move to Applications Folder

After double-clicking Ollama, you'll see a prompt: "Ollama works best when run from the Applications directory."

Please select "Move to Applications" - this ensures Ollama will function properly.

Move Ollama to Applications Folder

2.3 Ollama Initial Setup

After moving is complete, Ollama will display the welcome screen:

Ollama Welcome Screen

There are three steps here:

  1. Click "Next" to continue
  2. Install command line tools
  3. Prepare to run your first model

2.4 Install Command Line Tools

Next, you'll see the "Install the command line" interface:

Install Command Line Tools

Click the "Install" button, and the system will prompt you to:

  • Enter your Mac's password (this is needed to install system-level components)
  • After entering the password, the command line tools will be installed
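
Once the install finishes, you can confirm the command-line tool is available by opening Terminal and running:

```bash
# Should print the installed version, e.g. "ollama version is 0.x.x"
ollama --version
```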

2.5 Complete Installation

Finally, you'll see the "Run your first model" interface:

Run Your First Model

Seeing this interface means Ollama has been successfully installed! Click "Finish" to complete the installation.
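
At this point the Ollama app also runs a local server in the background (on port 11434 by default). If you'd like a quick sanity check from Terminal:

```bash
# The local Ollama server listens on port 11434 by default
curl http://localhost:11434
# Expected reply: "Ollama is running"
```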


🎯 Step 3: Download Your First AI Model in NativeMind

Now that Ollama is installed, you'll see the llama icon in the top-right corner of your Mac's menu bar. Next, we need to download an AI model through the NativeMind extension. Just like buying a gaming console still requires buying games, installing Ollama is only the first step.

Ollama Menu Bar Icon

3.1 Understanding Different Models

| Model Name | File Size | Memory Usage | Features | Best For |
| --- | --- | --- | --- | --- |
| qwen3:4b ⭐ | 2.6GB | ~4GB | Alibaba's latest-generation open-source large model; supports reasoning, lightweight and fast, supports 119 languages; according to official claims, 4B can match the performance of previous-generation 72B models | First choice for 16GB RAM users |
| qwen3:8b | 5.2GB | ~8GB | Stronger reasoning performance and better response quality, but more resource-intensive with higher memory usage | Recommended for 24GB+ RAM users |
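
The steps below download the model through the NativeMind extension, but if you prefer the Terminal, the same models can also be pulled with Ollama's CLI; either way they end up in the same local model store:

```bash
# Download the recommended model (equivalent to selecting it in NativeMind)
ollama pull qwen3:4b

# Optional: chat with it directly in the Terminal (press Ctrl+D to exit)
ollama run qwen3:4b
```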

3.2 Open the NativeMind Extension

  1. Launch your browser (Chrome/Edge, etc.)
  2. Click the NativeMind extension icon (usually in the top-right corner of the browser)
  3. On first launch, it will automatically detect Ollama status

NativeMind Extension Interface

3.3 Select and Download a Model

When NativeMind detects that Ollama is installed, it will display the model selection interface:

Model Selection Interface

If it doesn't detect Ollama, click the "Re-scan for Ollama" button in the extension to scan for it again.

We strongly recommend choosing qwen3:4b:

  • ✅ Small file size (2.6GB), quick download, immediate access to powerful local reasoning
  • ✅ Alibaba's latest generation open-source large model
  • ✅ Supports 119 languages including English, French, Spanish, German, Japanese, Korean, Chinese, and more
  • ✅ Low memory usage (4GB), runs easily on 16GB RAM computers
  • ✅ Amazing performance: 4B parameters can achieve the performance level of previous generation 72B models
  • ✅ Powerful reasoning and translation capabilities
  • ✅ Beginner-friendly

3.4 Start Download

  1. Select qwen3:4b from the dropdown menu
  2. Click the "Download & Install" button
  3. Confirm the download popup
    • Will display: Download "qwen3:4b" Model (2.6GB)
    • Click "Download" to start downloading

ollama model download

3.5 Wait for Download Completion

  • Download time: Approximately 3-10 minutes (depending on internet speed)
  • Progress display: Progress bar will show download status
  • Cancellable: Click "Cancel" if you don't want to continue downloading

ollama model download progress

3.6 Download Complete

Once the download is complete, NativeMind will automatically:

  1. Load the model
  2. Take you to the chat interface
  3. Display a welcome message

Setup Complete - Chat Interface
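
If you want to double-check that the model really is on your machine, `ollama list` in Terminal should now show it:

```bash
# Lists locally installed models with their sizes
ollama list
# Example output (ID and timing will differ on your machine):
# NAME        ID              SIZE      MODIFIED
# qwen3:4b    ...             2.6 GB    2 minutes ago
```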

🔧 Troubleshooting Common Issues

Q1: "Cannot be opened because it is from an unidentified developer"

Solution:

  1. Open System Settings → Privacy & Security (System Preferences → Security & Privacy on older macOS versions)
  2. Click "Open Anyway"
  3. Or hold Control key and click Ollama.app, then select "Open"
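
If you're comfortable with the Terminal, another common fix is to clear macOS's quarantine flag on the app (this assumes Ollama.app has already been moved to /Applications):

```bash
# Remove the quarantine attribute that triggers the "unidentified developer" warning
xattr -dr com.apple.quarantine /Applications/Ollama.app
```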

Q2: NativeMind shows "Ollama not detected"

Solution:

  1. Confirm Ollama is properly installed and moved to Applications folder
  2. Check if Ollama icon appears in the top-right menu bar
  3. Restart the Ollama application
  4. Click the "Re-scan for Ollama" button in NativeMind
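
NativeMind finds Ollama through its local HTTP API, which listens on port 11434 by default, so a quick Terminal check tells you whether the problem is on the Ollama side or the extension side:

```bash
# Is the Ollama server up? Should reply with "Ollama is running"
curl http://localhost:11434

# Which models does it expose? Returns JSON describing the installed models
curl http://localhost:11434/api/tags
```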

Q3: Installation fails after entering password

Solution:

  1. Confirm you entered the correct Mac login password
  2. Ensure your account has administrator privileges
  3. Restart Ollama and try installing again
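
As a last resort, the command-line install essentially just puts the `ollama` binary bundled inside the app onto your PATH, so you can create that link manually. Treat the path inside the app bundle as an assumption; it may differ between Ollama versions:

```bash
# Link the CLI bundled with the app into /usr/local/bin (path inside Ollama.app may vary by version)
sudo mkdir -p /usr/local/bin
sudo ln -sf /Applications/Ollama.app/Contents/Resources/ollama /usr/local/bin/ollama

# Verify
ollama --version
```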

Q4: Mac runs slowly

Solution:

  1. Close other resource-intensive applications
  2. Choose a smaller model (recommend qwen3:4b)
  3. If you have 16GB RAM, consider upgrading to 24GB or more
  4. Check if Mac has sufficient storage space
  5. Slower performance on Intel-based Macs is normal
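
To see what is actually loaded and how much memory it is using, Ollama's own status command is the quickest check:

```bash
# Shows currently loaded models, their size, and whether they run on the GPU or CPU
ollama ps
```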

🎉 Installation Complete! What's Next?

  • Webpage summarization: Intelligently summarize webpage content
  • Chat with webpage: Deep understanding of page information, multi-tab support
  • Web search integration: Combine with real-time search for latest information and answers
  • Immersive translation: One-click webpage translation for immersive reading
  • Intelligent writing assistant (Coming Soon): Rewrite, polish, and create various types of text