1. Install Ollama Server (Optional)
If you want to use Ollama models, you'll need the Ollama server, which ReactorAI relies on to run them. You can install it locally or use a remote server.
- Visit ollama.com to download the server
- Follow platform-specific setup instructions
- Run a test command to verify it works:
ollama run llama2
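Once the server is installed, you can confirm it is reachable before wiring it into ReactorAI. A minimal check, assuming the default port 11434:

```shell
# Probe the Ollama server's version endpoint on its default port.
# Change the host if your server runs on another machine.
if curl -fsS http://localhost:11434/api/version >/dev/null 2>&1; then
  echo "Ollama server is up"
else
  echo "Ollama server is not reachable"
fi
```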
1a. How to Install Ollama and Download Models
Installing Ollama
macOS & Linux:
curl -fsSL https://ollama.com/install.sh | sh
Windows:
Download the installer from ollama.com/download/windows and run it.
Downloading Popular Models
Essential Models for ReactorAI:
# General purpose model (recommended for beginners)
ollama pull llama3.2
# Best for coding tasks
ollama pull codellama
# Fast, lightweight model
ollama pull phi3
# Great for multilingual tasks
ollama pull mistral
# Advanced reasoning (larger model)
ollama pull llama3.1:70b
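If you want several of the models above in one go, a small loop over `ollama pull` works (trim the list to what your disk and RAM allow):

```shell
# Pull a starter set of models (names from the list above).
for model in llama3.2 codellama phi3 mistral; do
  ollama pull "$model"
done
```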
Essential Commands
# List installed models
ollama list
# Start ollama service
ollama serve
# Test a model interactively
ollama run llama3.2
# Remove a model
ollama rm modelname
# Pull specific model version
ollama pull llama3.2:3b
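To avoid re-downloading, you can combine `ollama list` and `ollama pull` into an idempotent step. A sketch, assuming `ollama list` prints one installed model per line with the name first:

```shell
# Pull a model only if it is not already installed.
model="llama3.2"
if ollama list | grep -q "^${model}"; then
  echo "${model} already installed"
else
  ollama pull "${model}"
fi
```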
Tips for ReactorAI Users
- Start Small: Begin with llama3.2 or phi3 - they're fast and work great
- Check System Resources: Larger models (70b+) need 32GB+ RAM
- Keep Ollama Running: ReactorAI connects to the Ollama service on http://localhost:11434
- Model Switching: Download multiple models and switch between them in ReactorAI
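If ReactorAI runs on a different device (for example, an iPhone pointing at your desktop), the Ollama server must listen on more than localhost. Ollama reads the `OLLAMA_HOST` environment variable for this:

```shell
# Bind the Ollama server to all interfaces so other devices can reach it.
# (The default is 127.0.0.1:11434, which only local apps can use.)
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```

In ReactorAI's settings, you would then enter `http://<your-machine's-IP>:11434` as the server address.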
1b. Setup Gemini API (Alternative)
If you prefer to use Google's Gemini models instead of or alongside Ollama:
- Visit Google AI Studio (aistudio.google.com)
- Create an API key for Gemini
- Keep the API key ready for app configuration
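You can sanity-check the key before entering it in the app. This uses Google's Generative Language API models endpoint (the `v1beta` path may change over time):

```shell
# List the Gemini models your API key can access.
# Replace your-key-here with the key from Google AI Studio.
GEMINI_API_KEY="your-key-here"
curl -fsS "https://generativelanguage.googleapis.com/v1beta/models?key=${GEMINI_API_KEY}"
```

A JSON list of models means the key is valid; an error response means it is not.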
2. Install ReactorAI on macOS
- Go to the Mac App Store and search for "ReactorAI"
- Click “Get” to install
- Launch the app and configure the Ollama server address or Gemini API key
3. Install ReactorAI on iOS
- Open the App Store on your iPhone or iPad
- Search for "ReactorAI" and install the app
- Open the app and set your Ollama server (local or remote) or Gemini API key
4. Install ReactorAI on Windows
- Go to the Microsoft Store
- Click “Install” to get ReactorAI
- Open the app and provide the Ollama model server URL or Gemini API key
5. Install ReactorAI on Linux (MCP Support)
The Linux version of ReactorAI includes Model Context Protocol (MCP) server support, enabling your AI models to interact with external tools and data sources!
- Download the Linux version below
- Extract the ZIP file to your preferred location
- For MCP support, install Node.js on your system:
sudo apt update && sudo apt install nodejs npm
- Run the ReactorAI executable
- Configure your Ollama server address or Gemini API key
- Enable MCP server connections in settings
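Before enabling MCP connections, it is worth confirming the Node.js toolchain is actually on your PATH:

```shell
# Check the MCP prerequisites (Node.js and npm).
for cmd in node npm; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd found: $("$cmd" --version)"
  else
    echo "$cmd missing - install with: sudo apt install nodejs npm"
  fi
done
```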
What is MCP?
The Model Context Protocol allows ReactorAI to connect your AI models with external tools, file systems, and data sources. This means your AI can read files, execute commands, and interact with your development environment directly!
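ReactorAI's exact settings layout isn't shown here, but MCP servers are conventionally declared as a command plus arguments. For example, the official filesystem reference server is launched via npx; a configuration along these lines (the server name and directory path are placeholders) would give the AI read access to a project folder:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/you/projects"]
    }
  }
}
```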