LLM & Agents
go-testing-code-review
Reviews Go test code for proper table-driven tests, assertions, and coverage patterns. Use when reviewing *_test.go files.
vitest-testing
Vitest testing framework patterns and best practices. Use when writing unit tests, integration tests, configuring vitest.config, mocking with vi.mock/vi.fn, using snapshots, or setting up test coverage. Triggers on describe, it, expect, vi.mock, vi.fn, beforeEach, afterEach, vitest.
beads
Dependency-aware issue tracking with bd. Use for managing tasks, bugs, and features with automatic dependency tracking, ready-work detection, and AI agent integration. Well suited to work that must be coordinated across multiple agents, and to tracking blockers and dependencies.
ai-elements
Vercel AI Elements for workflow UI components. Use when building chat interfaces, displaying tool execution, showing reasoning/thinking, or creating job queues. Triggers on ai-elements, Queue, Confirmation, Tool, Reasoning, Shimmer, Loader, Message, Conversation, PromptInput.
agent-architecture-analysis
Perform 12-Factor Agents compliance analysis on any codebase. Use when evaluating agent architecture, reviewing LLM-powered systems, or auditing agentic applications against the 12-Factor methodology.
validating-setup-commands
Use before creating worktrees or executing tasks. Validates that CLAUDE.md defines the required setup commands (install, plus an optional postinstall) and provides clear error messages with examples if any are missing.
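A minimal sketch of what such a validation step might look like; the `<key>: <command>` convention and the message format are assumptions for illustration, not the skill's actual implementation:

```python
import re
from pathlib import Path

REQUIRED = ["install"]      # assumed required key
OPTIONAL = ["postinstall"]  # assumed optional key

def validate_setup_commands(claude_md: Path) -> list[str]:
    """Return error messages for setup commands missing from CLAUDE.md."""
    if not claude_md.exists():
        return ["CLAUDE.md not found; add one with an 'install:' command."]
    text = claude_md.read_text()
    errors = []
    for key in REQUIRED:
        # Assumed convention: commands declared as '<key>: <shell command>'
        if not re.search(rf"^\s*{key}\s*:", text, re.MULTILINE):
            errors.append(f"CLAUDE.md is missing '{key}:', e.g. '{key}: npm ci'")
    return errors

for err in validate_setup_commands(Path("CLAUDE.md")):
    print(err)
```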
llm-streaming
LLM streaming response patterns. Use when implementing real-time token streaming, Server-Sent Events for AI responses, or streaming with tool calls.
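As a rough illustration of the pattern (not this skill's code), a framework-agnostic sketch that wraps a token iterator in SSE frames; the `{"token": ...}` payload shape and the `[DONE]` sentinel are assumptions borrowed from common LLM API conventions:

```python
import json
from typing import Iterable, Iterator

def sse_frames(tokens: Iterable[str]) -> Iterator[str]:
    """Wrap a stream of model tokens in Server-Sent Events frames."""
    for tok in tokens:
        # One SSE frame per token: "data: <json>" terminated by a blank line
        yield f"data: {json.dumps({'token': tok})}\n\n"
    yield "data: [DONE]\n\n"  # sentinel signalling end of stream

for frame in sse_frames(["Hel", "lo", "!"]):
    print(frame, end="")
```

The generator can be handed to any SSE-capable response type, e.g. FastAPI's StreamingResponse with media_type="text/event-stream".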
platform-integration-workflow
Guide for adding support for new AI platforms (e.g., Gemini, Mistral, Anthropic) to the Chrome extension, using the strategy pattern and covering the associated testing requirements.
skill-creator
Guides you through creating well-structured Claude Code skills with proper modularization, templates, and best practices. Use when creating new skills or improving existing ones.
quality-gate
Complete quality validation workflow combining TypeScript checking, linting, tests, coverage, and build validation. Works with any TypeScript/JavaScript project. Returns structured pass/fail with detailed results for each check. Used in conductor workflows and quality assurance phases.
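A hypothetical orchestrator for such a gate might look like the sketch below; the specific commands (tsc --noEmit, eslint ., vitest run --coverage, npm run build) are common defaults, and a real project would substitute its own:

```python
import subprocess
import sys

CHECKS = [
    ("typecheck", ["npx", "tsc", "--noEmit"]),
    ("lint", ["npx", "eslint", "."]),
    ("test", ["npx", "vitest", "run", "--coverage"]),
    ("build", ["npm", "run", "build"]),
]

def run_quality_gate() -> dict[str, dict]:
    """Run each check and collect a structured pass/fail result."""
    results = {}
    for name, cmd in CHECKS:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results[name] = {
            "pass": proc.returncode == 0,
            "output": (proc.stdout + proc.stderr)[-2000:],  # tail for the report
        }
    return results

if __name__ == "__main__":
    report = run_quality_gate()
    for name, res in report.items():
        print(f"{name}: {'PASS' if res['pass'] else 'FAIL'}")
    sys.exit(0 if all(r["pass"] for r in report.values()) else 1)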
sqlite-vec
sqlite-vec extension for vector similarity search in SQLite. Use when storing embeddings, performing KNN queries, or building semantic search features. Triggers on sqlite-vec, vec0, MATCH, vec_distance, partition key, float[N], int8[N], bit[N], serialize_float32, serialize_int8, vec_f32, vec_int8, vec_bit, vec_normalize, vec_quantize_binary, distance_metric, metadata columns, auxiliary columns.
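The triggers above map onto a small amount of code; a minimal sketch using the Python bindings, assuming the sqlite-vec package is installed and using a toy 4-dimensional vector in place of real embeddings:

```python
import sqlite3
import sqlite_vec  # pip install sqlite-vec (assumed installed)

db = sqlite3.connect(":memory:")
db.enable_load_extension(True)
sqlite_vec.load(db)  # registers the vec0 virtual table module
db.enable_load_extension(False)

# 4-dimensional float vectors; real embeddings are typically 384-1536 dims
db.execute("CREATE VIRTUAL TABLE docs USING vec0(embedding float[4])")
db.execute(
    "INSERT INTO docs(rowid, embedding) VALUES (?, ?)",
    (1, sqlite_vec.serialize_float32([0.1, 0.2, 0.3, 0.4])),
)

# KNN query: MATCH + ORDER BY distance + LIMIT
rows = db.execute(
    "SELECT rowid, distance FROM docs "
    "WHERE embedding MATCH ? ORDER BY distance LIMIT 3",
    (sqlite_vec.serialize_float32([0.1, 0.2, 0.3, 0.4]),),
).fetchall()
print(rows)
```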
rag-retrieval
Retrieval-Augmented Generation patterns for grounded LLM responses. Use when building RAG pipelines, constructing context from retrieved documents, adding citations, or implementing hybrid search.
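The core of context construction with citations is simple string assembly; a minimal sketch, with the document shape ({"id", "text"}) assumed for illustration:

```python
def build_grounded_prompt(question: str, docs: list[dict]) -> str:
    """Assemble a prompt whose answer can cite retrieved sources as [n]."""
    context = "\n\n".join(
        f"[{i}] {doc['text']}" for i, doc in enumerate(docs, start=1)
    )
    return (
        "Answer using ONLY the sources below and cite them inline as [n]. "
        "If the sources are insufficient, say so.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_grounded_prompt(
    "What is sqlite-vec?",
    [{"id": "doc-1", "text": "sqlite-vec adds vector search to SQLite."}],
))
```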
secure-flow
A comprehensive security skill that integrates with Secure Flow to help AI coding agents write secure code, perform security reviews, and implement security best practices. Use this skill when writing, reviewing, or modifying code to ensure secure-by-default practices are followed.
langgraph-supervisor
LangGraph supervisor-worker pattern. Use when building central coordinator agents that route to specialized workers, implementing round-robin or priority-based agent dispatch.
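A minimal sketch of the supervisor-worker shape in LangGraph's Python API; the keyword-based routing stands in for what would usually be an LLM-made dispatch decision, and the node names and state shape are invented for illustration:

```python
from typing import Literal, TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    task: str
    result: str

def supervisor(state: State) -> State:
    return state  # the dispatch decision lives in the routing function below

def route(state: State) -> Literal["researcher", "coder"]:
    # Priority-based dispatch; a real supervisor would ask an LLM to choose
    return "coder" if "code" in state["task"] else "researcher"

def researcher(state: State) -> State:
    return {**state, "result": f"researched: {state['task']}"}

def coder(state: State) -> State:
    return {**state, "result": f"coded: {state['task']}"}

builder = StateGraph(State)
builder.add_node("supervisor", supervisor)
builder.add_node("researcher", researcher)
builder.add_node("coder", coder)
builder.add_edge(START, "supervisor")
builder.add_conditional_edges(
    "supervisor", route, {"researcher": "researcher", "coder": "coder"}
)
builder.add_edge("researcher", END)
builder.add_edge("coder", END)
graph = builder.compile()
print(graph.invoke({"task": "write code for X", "result": ""}))
```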
rulebook-quality-gates
Automated quality checks and enforcement for code commits. Use when validating code quality, running pre-commit checks, ensuring test coverage, or enforcing coding standards before commits and pushes.
langgraph-human-in-loop
LangGraph human-in-the-loop patterns. Use when implementing approval workflows, manual review gates, user feedback integration, or interactive agent supervision.
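In recent LangGraph versions the approval-gate pattern centers on interrupt plus a checkpointer; a minimal sketch, with the thread id and state shape invented for illustration:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import Command, interrupt

class State(TypedDict):
    draft: str
    approved: bool

def review_gate(state: State) -> State:
    # Pauses execution and surfaces the draft to a human reviewer
    decision = interrupt({"draft": state["draft"]})
    return {**state, "approved": bool(decision)}

builder = StateGraph(State)
builder.add_node("review_gate", review_gate)
builder.add_edge(START, "review_gate")
builder.add_edge("review_gate", END)
graph = builder.compile(checkpointer=MemorySaver())

cfg = {"configurable": {"thread_id": "t1"}}
graph.invoke({"draft": "hello", "approved": False}, cfg)  # pauses at the interrupt
graph.invoke(Command(resume=True), cfg)  # human approves; the run resumes
```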
pydantic-ai-common-pitfalls
Avoid common mistakes and debug issues in PydanticAI agents. Use when encountering errors, unexpected behavior, or when reviewing agent implementations.
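One frequently cited pitfall is constructing the Agent per request instead of once at module level; a minimal sketch, assuming a recent PydanticAI release (older versions used result_type and .data instead of output_type and .output):

```python
from pydantic import BaseModel
from pydantic_ai import Agent

class CityInfo(BaseModel):
    city: str
    country: str

# Pitfall avoided: agents are designed to be created once, at module
# level, and reused across requests (much like a FastAPI app instance).
agent = Agent("openai:gpt-4o", output_type=CityInfo)

result = agent.run_sync("Where were the 2012 Olympics held?")
print(result.output)  # e.g. CityInfo(city='London', country='United Kingdom')
```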
langgraph-architecture
Guides architectural decisions for LangGraph applications. Use when deciding between LangGraph vs alternatives, choosing state management strategies, designing multi-agent systems, or selecting persistence and streaming approaches.
multi-agent-ai-projects
Guidelines for multi-agent AI and learning projects with lesson-based structures. Activate when working with AI learning projects, experimental directories like .spec/, lessons/ directories, STATUS.md progress tracking, or structured learning curricula with multiple modules or lessons.
cursor-agent-supervisor
Offloads well-scoped tasks to sub-agents, for instance using a sub-agent to implement a set of specs. Use this skill whenever a task does not require broad knowledge of the whole project.