# opencode-cli
## SKILL.md

```yaml
name: opencode-cli
description: This skill should be used when configuring or using the OpenCode CLI for headless LLM automation. Use when the user asks to "configure opencode", "use opencode cli", "set up opencode", "opencode run command", "opencode model selection", "opencode providers", "opencode vertex ai", "opencode mcp servers", "opencode ollama", "opencode local models", "opencode deepseek", "opencode kimi", "opencode mistral", "fallback cli tool", or "headless llm cli". Covers command syntax, provider configuration, Vertex AI setup, MCP servers, local models, cloud providers, and subprocess integration patterns.
```

### OpenCode CLI Skill

Use the OpenCode CLI for headless LLM automation via subprocess invocation.
### Table of Contents
- Quick Start
- Overview
- Basic Usage
- Model Format
- Configuration
- Reference Guides
- Subprocess Invocation
- Limitations vs Claude CLI
- Environment Variables
- Verify Setup
- Best Practices
### Quick Start

- Install OpenCode CLI (see OpenCode documentation)
- Set environment variables for the provider:

  ```bash
  export ANTHROPIC_API_KEY="sk-..."        # For Anthropic
  # OR
  export GOOGLE_CLOUD_PROJECT="project-id" # For Vertex AI
  ```

- Verify installation:

  ```bash
  opencode --version
  ```

- Test with a simple prompt:

  ```bash
  opencode run --model google/gemini-2.5-pro "Hello, world"
  ```
### Overview

OpenCode is a Go-based CLI that provides access to 75+ LLM providers through a unified interface. This skill focuses on the headless `run` command for automation and subprocess integration.
### Basic Usage

#### Command Format

```bash
opencode run --model <provider/model> "<prompt>"
```

Key points:

- Use the `run` subcommand for headless (non-interactive) mode
- Model format is always `provider/model`
- The prompt is a positional argument at the end
- No stdin support (unlike Claude CLI's `-p` flag)
#### Examples

```bash
# Using Anthropic Claude
opencode run --model anthropic/claude-sonnet-4-20250514 "Explain this code"

# Using Google Gemini
opencode run --model google/gemini-2.5-pro "Review this architecture"

# Using the free Grok tier
opencode run --model opencode/grok-code "Generate tests for this function"
```
### Model Format

Models use the pattern `provider/model-name` (a parsing sketch follows the table):

| Provider | Example Model |
|---|---|
| `anthropic` | `anthropic/claude-sonnet-4-20250514` |
| `google` | `google/gemini-2.5-pro` |
| `opencode` | `opencode/grok-code` (free tier) |
| `openai` | `openai/gpt-4o` |
| `google-vertex` | `google-vertex/gemini-2.5-pro` |
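Because every model reference is a single string in this format, automation code can validate it before shelling out. A minimal sketch; the helper name and the split-on-first-slash rule are illustrative assumptions, since some providers may nest slashes inside model names:

```python
def split_model(ref: str) -> tuple[str, str]:
    """Split a 'provider/model' string on the first slash only,
    on the assumption that model names may themselves contain slashes."""
    provider, sep, model = ref.partition("/")
    if not sep or not provider or not model:
        raise ValueError(f"expected provider/model, got {ref!r}")
    return provider, model

print(split_model("google/gemini-2.5-pro"))  # ('google', 'gemini-2.5-pro')
```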
### Configuration

#### Config File Locations

- Environment variable: `OPENCODE_CONFIG` path
- Project-level: `opencode.json` in project root
- Global: `~/.config/opencode/opencode.json`

Configs are merged (project overrides global); the sketch below illustrates this resolution order.
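As a rough illustration, this walks the three locations from lowest to highest precedence and lets later sources win. The shallow dict merge is an assumption for illustration, not OpenCode's documented merge algorithm:

```python
import json
import os
from pathlib import Path

def resolve_config() -> dict:
    """Merge configs from lowest to highest precedence:
    global, then project, then an explicit OPENCODE_CONFIG path."""
    merged: dict = {}
    candidates = [
        Path.home() / ".config" / "opencode" / "opencode.json",  # global
        Path("opencode.json"),                                   # project root
    ]
    explicit = os.environ.get("OPENCODE_CONFIG")
    if explicit:
        candidates.append(Path(explicit))
    for path in candidates:
        if path.is_file():
            merged.update(json.loads(path.read_text()))  # shallow merge (assumption)
    return merged

print(resolve_config().get("model"))
```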
#### Basic Configuration

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-5",
  "small_model": "anthropic/claude-haiku-4-5"
}
```
#### Authentication

Credentials are stored in `~/.local/share/opencode/auth.json` after running `/connect` in TUI mode; alternatively, configure credentials via environment variables.
### Reference Guides

Load the appropriate reference for detailed configuration:

| Task | Reference File |
|---|---|
| Setting up Google Vertex AI | `vertex-ai-setup.md` |
| Configuring providers (Anthropic, OpenAI, etc.) | `provider-config.md` |
| Cloud providers (Deepseek, Kimi, Mistral, etc.) | `cloud-providers.md` |
| Local models (Ollama, LM Studio) | `local-models.md` |
| MCP server configuration | `mcp-servers.md` |
| Subprocess integration patterns | `integration-patterns.md` |
#### Vertex AI Setup

See `vertex-ai-setup.md` for Vertex AI configuration, including environment variables and service account setup.
### Subprocess Invocation

#### Basic Pattern

```python
import subprocess

# `prompt` is supplied by the caller; OpenCode takes it as a positional argument
result = subprocess.run(
    ["opencode", "run", "--model", "google/gemini-2.5-pro", prompt],
    capture_output=True,  # capture stdout and stderr for logging
    text=True,
    timeout=600,  # generous ceiling for long generations
)
output = result.stdout
```
#### Key Considerations

- **Stagger parallel calls** - Add 5-10 second delays between parallel invocations to avoid cache race conditions
- **Implement fallback** - Consider the Claude CLI as a fallback if OpenCode fails (see the sketch after this list)
- **Health check** - Use `opencode --version` to verify availability
- **Timeout handling** - Set appropriate timeouts (default 600s for long generations)
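A hedged sketch of the fallback and staggering advice above. The `claude -p` invocation follows the Claude CLI behavior noted in the comparison table below, and the 7-second delay is one illustrative choice within the suggested 5-10 second range:

```python
import subprocess
import time

def run_prompt(prompt: str, model: str = "google/gemini-2.5-pro",
               timeout: int = 600) -> str:
    """Try OpenCode first; fall back to the Claude CLI on any failure."""
    try:
        result = subprocess.run(
            ["opencode", "run", "--model", model, prompt],
            capture_output=True, text=True, timeout=timeout, check=True,
        )
        return result.stdout
    except (FileNotFoundError, subprocess.TimeoutExpired,
            subprocess.CalledProcessError):
        # Claude CLI takes the prompt via -p instead of a run subcommand
        result = subprocess.run(
            ["claude", "-p", prompt],
            capture_output=True, text=True, timeout=timeout, check=True,
        )
        return result.stdout

# Stagger sequential invocations to avoid cache race conditions
outputs = []
for i, p in enumerate(["Summarize module A", "Summarize module B"]):
    if i > 0:
        time.sleep(7)  # 5-10s between invocations
    outputs.append(run_prompt(p))
```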
See `integration-patterns.md` for complete patterns.
### Limitations vs Claude CLI

| Feature | OpenCode | Claude CLI |
|---|---|---|
| Headless mode | `run` subcommand | `-p` flag with stdin |
| Hooks/settings | Not supported | `--settings` flag |
| Directory access | Not supported | `--add-dir` flag |
| Tool pre-approval | Not supported | `--allowedTools` flag |
| Prompt input | Positional argument | Stdin or `-p` |
### Environment Variables

| Variable | Purpose |
|---|---|
| `OPENCODE_CONFIG` | Custom config file path |
| `GOOGLE_CLOUD_PROJECT` | GCP project for Vertex AI |
| `GOOGLE_APPLICATION_CREDENTIALS` | Service account JSON path |
| `VERTEX_LOCATION` | Vertex AI region |
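When OpenCode runs as a child process, these variables can be passed explicitly instead of relying on the inherited shell environment. A sketch with placeholder values; the project ID, key path, and region are assumptions to fill in:

```python
import os
import subprocess

env = os.environ.copy()
env.update({
    "GOOGLE_CLOUD_PROJECT": "my-project-id",                   # placeholder
    "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/sa-key.json",  # placeholder
    "VERTEX_LOCATION": "us-central1",                          # example region
})

result = subprocess.run(
    ["opencode", "run", "--model", "google-vertex/gemini-2.5-pro",
     "Describe this repository"],
    capture_output=True, text=True, timeout=600, env=env,
)
print(result.stdout)
```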
### Verify Setup

Complete this checklist to verify a working installation (a scripted version follows the list):

- **Check version** - Confirm the CLI is installed: `opencode --version`
- **Test default model** - Verify basic connectivity: `opencode run --model google/gemini-2.5-pro "Say hello"`
- **Check configuration** - Review the active config: `cat ~/.config/opencode/opencode.json`
- **Verify MCP servers** (if configured) - Test MCP connectivity by running a command that uses MCP tools
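A sketch that scripts the first two checks, assuming a zero exit status indicates success; note the second check spends a real model call, so run it sparingly:

```python
import subprocess

def check(cmd: list[str], timeout: int = 120) -> bool:
    """Return True if the command exits 0 within the timeout."""
    try:
        return subprocess.run(
            cmd, capture_output=True, text=True, timeout=timeout
        ).returncode == 0
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False

checks = {
    "CLI installed": ["opencode", "--version"],
    "Model reachable": ["opencode", "run", "--model",
                        "google/gemini-2.5-pro", "Say hello"],
}
for name, cmd in checks.items():
    print(f"{name}: {'ok' if check(cmd) else 'FAILED'}")
```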
### Best Practices

- **Use project-level config** - Create `opencode.json` for project-specific settings
- **Prefer environment variables** - Use `{env:VAR_NAME}` syntax in config for secrets
- **Implement retries** - Network failures are common; implement retry logic (see the sketch after this list)
- **Log output** - Capture both stdout and stderr for debugging
- **Stagger parallel calls** - Prevent cache race conditions with delays
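A sketch of the retry and logging advice; the attempt count and exponential backoff delays are illustrative choices, not OpenCode defaults:

```python
import subprocess
import time

def run_with_retries(prompt: str, model: str, attempts: int = 3) -> str:
    """Retry transient failures with exponential backoff, logging stderr."""
    for attempt in range(attempts):
        try:
            result = subprocess.run(
                ["opencode", "run", "--model", model, prompt],
                capture_output=True, text=True, timeout=600, check=True,
            )
            return result.stdout
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired) as exc:
            stderr = getattr(exc, "stderr", "") or ""
            print(f"attempt {attempt + 1} failed: {stderr}")  # log for debugging
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # 1s, then 2s between attempts

output = run_with_retries("Explain this diff", "anthropic/claude-sonnet-4-5")
```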
## README

### OpenCode CLI Skill

A Claude Code skill for headless LLM automation using the OpenCode CLI.
### Table of Contents
- Overview
- Features
- Installation
- Usage
- Quick Reference
- Project Structure
- Key Differences from Claude CLI
- License
### Overview

This skill provides Claude with comprehensive knowledge about the OpenCode CLI, a Go-based tool that provides access to 75+ LLM providers through a unified interface. The skill focuses on the headless `run` command for automation and subprocess integration.
### Features

- **Multi-Provider Support**: Access 75+ LLM providers including Anthropic, Google, OpenAI, and more
- **Headless Automation**: Use the `run` command for non-interactive LLM operations
- **Subprocess Integration**: Patterns for integrating OpenCode into your automation pipelines
- **Local Model Support**: Configuration guides for Ollama and LM Studio
- **MCP Server Configuration**: Set up and manage Model Context Protocol servers
### Installation

#### Installing with Skilz

The easiest way to install this skill is using the Skilz universal installer:

```bash
# Install skilz if you haven't already
npm install -g skilz

# Install this skill
skilz install SpillwaveSolutions_opencode_cli/opencode_cli
```

The skill will be installed to `~/.claude/skills/opencode_cli/`.

View this skill on the Skilz Marketplace.
#### Manual Installation

Clone or download this repository into your Claude Code skills directory:

```bash
git clone https://github.com/SpillwaveSolutions/opencode_cli.git ~/.claude/skills/opencode_cli
```

Or manually place the skill files in:

```
~/.claude/skills/opencode_cli/
```
### Usage
The skill activates automatically when you ask Claude about:
- Configuring OpenCode CLI
- Using OpenCode for headless automation
- Setting up providers (Vertex AI, Anthropic, OpenAI, etc.)
- Configuring MCP servers
- Running local models (Ollama, LM Studio)
- Subprocess integration patterns
#### Example Prompts

- "How do I configure opencode for Vertex AI?"
- "Set up opencode with Gemini 2.5 Pro"
- "What's the opencode run command syntax?"
- "Configure opencode for local Ollama models"
- "How do I integrate opencode into a Python subprocess?"
### Quick Reference

#### Basic Command

```bash
opencode run --model <provider/model> "<prompt>"
```
#### Supported Providers

| Provider | Example Model |
|---|---|
| `anthropic` | `anthropic/claude-sonnet-4-20250514` |
| `google` | `google/gemini-2.5-pro` |
| `google-vertex` | `google-vertex/gemini-2.5-pro` |
| `openai` | `openai/gpt-4o` |
| `opencode` | `opencode/grok-code` (free tier) |
| `ollama` | `ollama/llama3.2` (local) |
#### Configuration Locations

Configuration files are searched in the following order (project settings override global):

- Environment variable: `OPENCODE_CONFIG` path
- Project-level: `opencode.json` in project root
- Global: `~/.config/opencode/opencode.json`
### Project Structure

```
opencode_cli/
├── SKILL.md                    # Main skill entry point
├── README.md                   # This file
└── references/
    ├── vertex-ai-setup.md      # Google Vertex AI configuration
    ├── provider-config.md      # Provider configuration guide
    ├── cloud-providers.md      # Deepseek, Kimi, Mistral, etc.
    ├── local-models.md         # Ollama, LM Studio setup
    ├── mcp-servers.md          # MCP server configuration
    └── integration-patterns.md # Subprocess integration patterns
```
### Key Differences from Claude CLI

| Feature | OpenCode | Claude CLI |
|---|---|---|
| Headless mode | `run` subcommand | `-p` flag with stdin |
| Hooks support | No | Yes |
| Directory access | No | Yes (`--add-dir`) |
| Prompt input | Positional argument | Stdin or `-p` |
### License
MIT