adding-models
Guide for adding new LLM models to Letta Code. Use when the user wants to add support for a new model, needs to know valid model handles, or wants to update the model configuration. Covers models.json configuration, CI test matrix, and handle validation.
$ Install
git clone https://github.com/letta-ai/letta-code /tmp/letta-code && cp -r /tmp/letta-code/.skills/adding-models ~/.claude/skills/letta-code/
Adding Models
This skill guides you through adding a new LLM model to Letta Code.
Quick Reference
Key files:
- `src/models.json` - Model definitions (required)
- `.github/workflows/ci.yml` - CI test matrix (optional)
- `src/tools/manager.ts` - Toolset detection logic (rarely needed)
Workflow
Step 1: Find Valid Model Handles
Query the Letta API to see available models:
curl -s https://api.letta.com/v1/models/ | jq '.[] | .handle'
Or filter by provider:
curl -s https://api.letta.com/v1/models/ | jq '.[] | select(.handle | startswith("google_ai/")) | .handle'
Common provider prefixes:
- `anthropic/` - Claude models
- `openai/` - GPT models
- `google_ai/` - Gemini models
- `google_vertex/` - Vertex AI
- `openrouter/` - Various providers
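If you prefer scripting over `curl`, the same query can be sketched in TypeScript. The endpoint URL and the `handle` field come from the commands above; the helper names here are illustrative, not part of Letta Code:

```typescript
// Fetch model handles from the Letta API and filter by provider prefix.
// Endpoint and `handle` field are taken from the curl examples above;
// everything else is an illustrative sketch.

interface ModelInfo {
  handle: string;
}

// Pure helper so the filtering logic is testable without a network call.
function handlesForProvider(models: ModelInfo[], prefix: string): string[] {
  return models.map((m) => m.handle).filter((h) => h.startsWith(prefix));
}

async function listHandles(prefix = ""): Promise<string[]> {
  const res = await fetch("https://api.letta.com/v1/models/");
  const models = (await res.json()) as ModelInfo[];
  return handlesForProvider(models, prefix);
}

// Example (requires network access):
// listHandles("google_ai/").then((hs) => console.log(hs.join("\n")));
```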
Step 2: Add to models.json
Add an entry to src/models.json:
{
"id": "model-shortname",
"handle": "provider/model-name",
"label": "Human Readable Name",
"description": "Brief description of the model",
"isFeatured": true, // Optional: shows in featured list
"updateArgs": {
"context_window": 180000,
"temperature": 1.0 // Optional: provider-specific settings
}
}
Field reference:
- `id`: Short identifier used with the `--model` flag (e.g., `gemini-3-flash`)
- `handle`: Full provider/model path from the API (e.g., `google_ai/gemini-3-flash-preview`)
- `label`: Display name in the model selector
- `description`: Brief description shown in the selector
- `isFeatured`: If true, appears in the featured models section
- `updateArgs`: Model-specific configuration (context window, temperature, reasoning settings, etc.)
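The field reference above can be captured as a small type plus a sanity check. This is a hypothetical helper for illustration only, not code from the repo; the field names and the `provider/model` handle shape come from the reference above:

```typescript
// Sketch of a models.json entry, based on the field reference above.
// The type and the check are illustrative; Letta Code may validate differently.

interface ModelEntry {
  id: string;          // short name used with --model
  handle: string;      // "provider/model-name" from the API
  label: string;       // display name in the model selector
  description: string; // shown in the selector
  isFeatured?: boolean;
  updateArgs?: {
    context_window: number;
    temperature?: number;
    [key: string]: unknown; // other provider-specific settings
  };
}

// Basic sanity check: handle must look like "provider/model".
function isValidEntry(e: ModelEntry): boolean {
  return e.id.length > 0 && /^[\w-]+\/.+$/.test(e.handle);
}
```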
Provider prefixes:
- `anthropic/` - Anthropic (Claude models)
- `openai/` - OpenAI (GPT models)
- `google_ai/` - Google AI (Gemini models)
- `google_vertex/` - Google Vertex AI
- `openrouter/` - OpenRouter (various providers)
Step 3: Test the Model
Test with headless mode:
bun run src/index.ts --new --model <model-id> -p "hi, what model are you?"
Example:
bun run src/index.ts --new --model gemini-3-flash -p "hi, what model are you?"
Step 4: Add to CI Test Matrix (Optional)
To include the model in automated testing, add it to .github/workflows/ci.yml:
# Find the headless job matrix around line 122
model: [gpt-5-minimal, gpt-4.1, sonnet-4.5, gemini-pro, your-new-model, glm-4.6, haiku]
Toolset Detection
Models are automatically assigned toolsets based on provider:
- `openai/*` → `codex` toolset
- `google_ai/*` or `google_vertex/*` → `gemini` toolset
- Others → `default` toolset
This is handled by `isGeminiModel()` and `isOpenAIModel()` in `src/tools/manager.ts`. You typically don't need to modify this unless adding a new provider.
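The mapping described above can be sketched as a simple prefix check. This is only a mirror of the documented rules, not the actual implementation (which lives in `isGeminiModel()`/`isOpenAIModel()` in `src/tools/manager.ts`):

```typescript
// Illustrative version of the toolset mapping described above.
// The real logic lives in src/tools/manager.ts; this sketch only
// mirrors the documented provider-prefix rules.

type Toolset = "codex" | "gemini" | "default";

function toolsetForHandle(handle: string): Toolset {
  if (handle.startsWith("openai/")) return "codex";
  if (handle.startsWith("google_ai/") || handle.startsWith("google_vertex/")) {
    return "gemini";
  }
  return "default";
}
```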
Common Issues
"Handle not found" error: The model handle is incorrect. Run the validation script to see valid handles.
Model works but wrong toolset: Check src/tools/manager.ts to ensure the provider prefix is recognized.