
OpenCode Setup Guide: The Ultimate Open Source AI Agent


If you write code in the terminal and are frustrated by the lock-in of proprietary AI tools, this is the guide you’ve been looking for. I’ve tested OpenCode extensively, integrating it with everything from high-end cloud models to fully local setups, and its flexibility is unmatched. This guide covers exactly how to install, configure, and master OpenCode to turbocharge your development workflow.


🔍 What is OpenCode?

OpenCode is an open-source, terminal-native AI coding agent. Think of it as a senior developer living in your command line. Unlike cloud-based SaaS coding assistants like Claude Code or GitHub Copilot, OpenCode is 100% open-source and entirely provider-agnostic.

Out of the box, it features a beautiful Terminal User Interface (TUI) built for speed. It allows you to use top-tier models like Claude 4.5 Opus, OpenAI’s GPT-5 Nano, Google Gemini, or completely private local models running on your own hardware. OpenCode reads your project files, suggests architectural changes, writes code, executes bash commands, and even automates your GitHub Pull Requests.


⚡ Why Use OpenCode?

  • Provider Agnostic: You aren’t locked into one ecosystem. Swap between Claude, OpenAI, DeepSeek, or local Ollama models instantly.
  • Zero Cost Barrier: Leverage OpenCode Zen’s free models (like Grok Code Fast 1 or GLM 4.7) to code without spending a dime.
  • Plan vs. Build Modes: Explore unfamiliar codebases safely in a read-only plan mode before switching to build mode for execution.
  • Client/Server Architecture: Run the engine on a powerful remote machine while driving the TUI locally or from your IDE.

✅ Step 1 – Install the OpenCode CLI

Before installing, ensure you remove any older versions (0.1.x or below) if you’ve experimented with early betas.

For the fastest installation on macOS, Linux, or WSL, use the official one-liner:

```shell
curl -fsSL https://opencode.ai/install | bash
```

If you prefer package managers, OpenCode is widely supported:

```shell
# macOS / Linux (Homebrew)
brew install anomalyco/tap/opencode

# Windows (Scoop)
scoop install opencode

# npm / Bun / pnpm
npm i -g opencode-ai@latest
```

Once installed, verify it by running `opencode` in your terminal. You will be greeted by the interactive TUI. You can also press Cmd+Esc (Mac) or Ctrl+Esc (Windows/Linux) inside VS Code or Cursor to open OpenCode directly in a split-view terminal pane.


✅ Step 2 – Configure Your AI Provider

OpenCode needs a “brain” to function. By default, it looks for standard environment variables like OPENAI_API_KEY or ANTHROPIC_API_KEY. If you have one of those set, OpenCode automatically detects it and connects to the default model.
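For example, zero-config startup with an Anthropic key can be as simple as exporting the variable before launching (the key value below is a placeholder):

```shell
# Add this to ~/.zshrc or ~/.bashrc so every session has it
# (the key value is a placeholder — use your real key)
export ANTHROPIC_API_KEY="sk-ant-..."
```

With the variable exported, launching `opencode` connects to Anthropic's default model with no config file at all.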

For more granular control, you should define your models in a configuration file. Create a global config at ~/.config/opencode/opencode.json or a project-specific one at ./opencode.json in your repository root:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "anthropic": {
      "npm": "@ai-sdk/anthropic",
      "options": {
        "apiKey": "your-api-key-here"
      },
      "models": {
        "claude-3-5-sonnet-20241022": {
          "name": "Claude 3.5 Sonnet"
        }
      }
    }
  }
}
```

If you are on a budget, you can use OpenCode Zen, a built-in pay-as-you-go service that offers highly competitive rates for models like Minimax or Kimi, as well as a rotating cast of completely free models for light usage.


✅ Step 3 – Connect Local Models (Docker Model Runner)

If privacy is paramount, or you want to avoid recurring API costs entirely, you can point OpenCode to local open-source models (like llama3.2 or gpt-oss) using Docker Model Runner (DMR) or Ollama.

Since DMR is OpenAI-compatible, you can configure OpenCode to treat your local machine as an OpenAI endpoint. Edit your opencode.json file:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "dmr": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Local DMR",
      "options": {
        "baseURL": "http://localhost:12434/engines/v1",
        "apiKey": "docker"
      },
      "models": {
        "ai/llama3.2:3B-Q4_0": {
          "name": "llama3.2:3B (Local)"
        },
        "ai/gpt-oss:20B-UD-Q8_K_XL": {
          "name": "GPT-OSS 20B (Local)"
        }
      }
    }
  }
}
```

Once saved, hit the / key in the OpenCode terminal to open the command palette, navigate to the models menu, and select your local model.
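If you edit this file by hand, a short sanity check can catch typos before you launch the TUI. This is a sketch using only the Python standard library; the "required" keys simply mirror the example above, not an official schema validator:

```python
import json

def check_local_provider(path: str, provider_id: str = "dmr") -> list[str]:
    """Return a list of problems found in a local-provider entry of opencode.json."""
    with open(path) as f:
        config = json.load(f)
    provider = config.get("provider", {}).get(provider_id)
    if provider is None:
        return [f"provider '{provider_id}' not defined"]
    problems = []
    options = provider.get("options", {})
    # The local endpoint must be an HTTP URL (e.g. http://localhost:12434/engines/v1)
    if not options.get("baseURL", "").startswith("http"):
        problems.append("options.baseURL missing or not a URL")
    # At least one model must be listed, or there is nothing to select in the TUI
    if not provider.get("models"):
        problems.append("no models listed for provider")
    return problems
```

Run it against your config path and fix anything it reports before selecting the model in the TUI.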


✅ Step 4 – Master Plan vs. Build Modes

OpenCode features two primary agents that fundamentally change how the AI interacts with your system. You can toggle between them instantly by pressing the Tab key.

  1. plan mode (Read-Only): This agent is restricted from editing files. It is perfect for exploring a massive codebase, debugging logic, or architecting a feature before writing a single line of code. It will ask for explicit permission before running any bash commands (like grep or ls).
  2. build mode (Full Access): This is the default engine. It executes code changes, writes files, installs dependencies, and runs tests autonomously.

Here’s the thing: always start complex refactors in plan mode. Ask the agent to outline the file changes and architectural decisions. Once you approve the plan, press Tab to switch to build mode and tell it to “execute the plan.” This prevents the AI from rushing down a rabbit hole of incorrect assumptions.


✅ Step 5 – Set Up AGENTS.md for Project Context

To get the best results from OpenCode, you need to feed it your project’s specific conventions. The standard way to do this is by creating an AGENTS.md file in the root of your repository.

Whenever OpenCode launches, it reads this file to understand your architecture, tech stack, and preferred coding style.

```markdown
# AGENTS.md - My SaaS App

## Tech Stack
- Frontend: Next.js 15 App Router, React 19, Tailwind v4
- Backend: Supabase, PostgreSQL, Drizzle ORM
- Language: TypeScript (strict mode enabled)

## Coding Conventions
- Prefer Server Components; only use `"use client"` when interactivity is strictly required.
- Do not use `any` types or `@ts-ignore`.
- Follow REST API naming conventions for server actions.
- Write unit tests using Vitest for all utility functions in `src/lib/`.
```

By defining these boundaries, OpenCode stops generating generic React boilerplate and starts writing code that fits seamlessly into your specific architecture.


🛠️ Troubleshooting

Even the best AI agents stumble. Here are the most common issues users encounter and exactly how to fix them.

| Error | Cause | Fix |
| --- | --- | --- |
| AI stops generating code mid-file | Context window limit reached. | If using local DMR, increase the context size via CLI: `docker model configure --context-size=100000 [model-name]`. |
| Extremely slow response times | Using heavily rate-limited free models. | Switch from free OpenCode Zen models to a paid tier, or use a cheap API like Minimax ($10/mo limits are very generous). |
| Changes look correct but tests fail | Missing project context. | Ensure your AGENTS.md explicitly lists your testing framework and preferred mocking strategies. |
| Agent deletes important comments | Over-aggressive refactoring. | Use `/undo` immediately, then ask the agent to “modify the function but preserve all surrounding comments.” |
| TUI layout breaks in IDE terminal | Window resizing glitch. | Use Cmd+Esc (Mac) to pop OpenCode into a dedicated IDE split view for better rendering. |

💡 Tips & Best Practices

To transition from “chatting with AI” to “orchestrating an AI engineering team,” you need to build efficient workflows.

💡 Tip: Create custom sub-agents for specialized tasks. You can define specific agents in your config (like code-reviewer or effort-estimator). When you finish a massive feature, just type @code-reviewer review these changes for security vulnerabilities to get an instant audit with a fresh context window.
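For illustration, a sub-agent of this kind can be sketched in opencode.json. Treat the exact field names and values here as assumptions to verify against the current config schema:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "code-reviewer": {
      "description": "Reviews diffs for bugs and security vulnerabilities",
      "mode": "subagent",
      "prompt": "You are a meticulous code reviewer. Flag security issues, missing tests, and style violations.",
      "tools": {
        "write": false,
        "edit": false
      }
    }
  }
}
```

Disabling the write and edit tools keeps the reviewer read-only, so it can never “fix” code while auditing it.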

💡 Tip: Use the Model Debate workflow. If you aren’t sure about an architectural choice, ask your primary agent (running Claude 4.5) to propose a design, then invoke a sub-agent (running Gemini) to critique it. Let the two debate until they reach a consensus.

💡 Tip: Leverage Model Context Protocol (MCP) servers. OpenCode supports MCP out of the box. Add the Context7 MCP server to your config to give your agent real-time access to the absolute latest API documentation for libraries like Next.js or Tailwind v4.
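A remote MCP server registration might look like the following sketch; the `mcp` key shape and the server URL are assumptions, so check both against the OpenCode and Context7 docs before relying on them:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "context7": {
      "type": "remote",
      "url": "https://mcp.context7.com/mcp"
    }
  }
}
```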

💡 Tip: Automate GitHub PRs directly. If you link your GitHub account, you can comment /opencode Fix this specific edge case on a GitHub Issue. OpenCode will spin up a GitHub Actions runner, checkout the code, fix the bug, and open a PR entirely autonomously.

💡 Tip: Build a “Skills” folder. Transform your personal notes on UX psychology, Agile frameworks, or Tailwind tricks into Markdown files inside an agents/skills/ directory. OpenCode can dynamically pull these executable knowledge notes into its context when solving specific problems.


✅ Final Thoughts

OpenCode represents the next logical step in developer tooling. By staying open-source and provider-agnostic, it ensures you never get trapped by a single AI company’s pricing model or feature deprecation. Whether you run a 20B parameter model locally to save money or connect it to Claude Opus to plow through complex refactors, OpenCode molds to your specific needs. Now go build something.


❓ FAQ

Q: How is OpenCode different from Claude Code?

A: While they share similar terminal capabilities, OpenCode is 100% open-source and not locked to Anthropic. You can use OpenAI, Google, DeepSeek, or local models. OpenCode also features a more advanced client/server architecture and out-of-the-box LSP support for deeper codebase understanding.

Q: What is the most cost-effective way to run OpenCode?

A: If you have capable hardware, running local models via Ollama or Docker Model Runner is completely free. For cloud models, OpenCode Zen allows you to use high-quality models like Minimax 2.5 or Kimi in a pay-as-you-go format, which often works out to less than $10 a month for moderate usage.

Q: Can I use my existing GitHub Copilot subscription with OpenCode?

A: Yes! GitHub recently rolled out official support for third-party tools. You can authenticate OpenCode using your Copilot credentials to utilize their backend models, though you are limited by their specific request caps (e.g., 300 requests/month depending on your tier).

Q: Does OpenCode read my entire codebase on every prompt?

A: No. OpenCode uses intelligent retrieval, RAG, and LSP integrations to find the specific files relevant to your query. However, using the @ symbol allows you to explicitly fuzzy-search and attach specific files or directories to your prompt to guarantee they are included in the context window.


📚 Additional Resources