Research
Research tools and academic skills
3205 skills in this category
sf-imagen
AI-powered visual content generation for Salesforce development. Generates ERD diagrams, LWC mockups, and architecture visuals using Nano Banana Pro. Also provides Gemini as a parallel sub-agent for code review and research.
generate-docs
Generate configuration reference documentation from conclaude-schema.json using src/bin/generate-docs.rs. Use when the schema file changes, configuration options are added or modified, or documentation needs to be regenerated for the website.
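The generator itself is the Rust binary src/bin/generate-docs.rs; as a minimal sketch of the idea in Python (the property layout shown for conclaude-schema.json is an assumption, not its real structure):

```python
import json

def schema_to_markdown(schema_path: str) -> str:
    """Render a JSON schema's top-level properties as a markdown reference.

    A sketch of the generate-docs idea only; the real tool is the Rust
    binary src/bin/generate-docs.rs, and the real schema layout may differ.
    """
    with open(schema_path) as f:
        schema = json.load(f)

    lines = ["| Option | Type | Description |", "| --- | --- | --- |"]
    for name, spec in schema.get("properties", {}).items():
        lines.append(
            f"| `{name}` | {spec.get('type', '?')} | {spec.get('description', '')} |"
        )
    return "\n".join(lines)

if __name__ == "__main__":
    print(schema_to_markdown("conclaude-schema.json"))
```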
screenshot-feature-extractor
Analyze product screenshots to extract feature lists and generate development task checklists. Use when: (1) Analyzing competitor product screenshots for feature extraction, (2) Generating PRD/task lists from UI designs, (3) Batch analyzing multiple app screens, (4) Conducting competitive analysis from visual references.
memory-management
Persistent memory management for Claude Code via AutoMem. Use this skill when: starting a session (recall project context, decisions, patterns); making architectural decisions or library choices; fixing bugs (store root cause and solution); learning user preferences or code style; completing significant work (store summary); or debugging issues (search for similar past problems).
bib-managing
Curate and validate BibTeX bibliographies against academic databases.
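A minimal sketch of what "validate against academic databases" can look like, assuming DOI-based lookup via the public Crossref REST API and the bibtexparser package (neither is confirmed as this skill's actual mechanism):

```python
import bibtexparser  # pip install bibtexparser
import requests

def validate_dois(bib_path: str) -> None:
    """Check each BibTeX entry that carries a DOI against Crossref."""
    with open(bib_path) as f:
        db = bibtexparser.load(f)

    for entry in db.entries:
        doi = entry.get("doi")
        if not doi:
            print(f"[skip] {entry.get('ID', '?')}: no DOI")
            continue
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        status = "ok" if resp.status_code == 200 else f"unresolved ({resp.status_code})"
        print(f"[{status}] {entry.get('ID', '?')}: {doi}")

validate_dois("references.bib")
```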
atris
ATRIS workspace navigation and task management. Use when user mentions atris, TODO, tasks, backlog, navigator, executor, validator, MAP.md, journal, inbox, or asks "where is X?" in an atris-enabled project.
experiment-tracker
Manages ML experiment tracking with MLflow, Weights & Biases, or SpecWeave's built-in tracking. Activates for "track experiments", "MLflow", "wandb", "experiment logging", "compare experiments", "hyperparameter tracking". Automatically configures tracking tools to log to SpecWeave increment folders, ensuring all experiments are documented and reproducible. Integrates with SpecWeave's living docs for persistent experiment knowledge.
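As a sketch of the MLflow side: pointing the tracking URI at an increment folder keeps runs inside the increment's documentation. The folder layout below is an assumption, not SpecWeave's confirmed structure.

```python
import mlflow

# Assumed increment layout; SpecWeave's real folder structure may differ.
increment_dir = ".specweave/increments/0042-model-tuning"
mlflow.set_tracking_uri(f"file:{increment_dir}/mlflow")
mlflow.set_experiment("0042-model-tuning")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 3e-4)
    mlflow.log_metric("val_accuracy", 0.87)
```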
pine-publisher
Prepares Pine Scripts for publication in TradingView's community library with proper documentation and compliance. Use when preparing to publish, adding documentation, ensuring House Rules compliance, writing descriptions, or finalizing scripts for release. Triggers on "publish", "release", "documentation", "House Rules", or preparation requests.
journal
Guide for using the AI's persistent journal database
specification-phase
Provides standard operating procedures for the /specify phase, including feature classification (HAS_UI, IS_IMPROVEMENT, HAS_METRICS, HAS_DEPLOYMENT_IMPACT), research depth determination, clarification strategy (ask at most 3 questions; make informed guesses for defaults), and roadmap integration. Use when executing the /specify command, classifying features, generating structured specs, or determining research depth for the planning phase. (project)
hallucination-detector
Detect and prevent hallucinated technical decisions during feature work. Auto-trigger when suggesting technologies, frameworks, APIs, database schemas, or external services. Validates all tech decisions against docs/project/tech-stack.md (single source of truth). Blocks suggestions that violate documented architecture. Requires evidence/citation for all technical choices. Prevents wrong tech stack, duplicate entities, fake APIs, incompatible versions.
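A minimal sketch of the validation idea, assuming tech-stack.md lists approved technologies as markdown bullets (the real file format and matching rules are assumptions):

```python
import re
from pathlib import Path

TECH_STACK = Path("docs/project/tech-stack.md")

def approved_technologies() -> set[str]:
    """Collect bullet items from tech-stack.md as the approved list.

    Assumes technologies appear as markdown bullets ("- PostgreSQL");
    the real file's format may differ.
    """
    return {
        match.group(1).strip().lower()
        for match in re.finditer(r"^\s*[-*]\s+([\w .+#/-]+)",
                                 TECH_STACK.read_text(), re.MULTILINE)
    }

def check_suggestion(tech: str) -> None:
    """Block a technology suggestion unless the doc lists it."""
    if tech.lower() not in approved_technologies():
        raise ValueError(f"'{tech}' is not in {TECH_STACK}; cite it or update the doc.")

check_suggestion("PostgreSQL")  # raises unless PostgreSQL is listed
```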
ultrathink
Deep planning philosophy for craftsman-level architecture. Transforms planning from research-then-design into research-question-simplify-design. Use when the --deep flag is set, for epics, for complex features (30+ tasks), or when the auto_deep_mode preference is enabled. Invokes assumption questioning, codebase soul analysis, and ruthless simplification. (project)
blz-docs-search
Teaches effective documentation search using the blz CLI tool. Use when searching documentation with blz, looking up APIs, finding code examples, retrieving citations, or when questions mention libraries, frameworks, "how to", or documentation topics. Covers BM25 full-text search patterns, citation retrieval, and efficient querying.
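blz's internals aren't shown here; as a self-contained illustration of the BM25 ranking it builds on, using the rank_bm25 package (which is not part of blz):

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

docs = [
    "How to configure the HTTP client timeout",
    "Retry policies and exponential backoff",
    "Streaming responses from the HTTP client",
]
bm25 = BM25Okapi([d.lower().split() for d in docs])

# Rank documents for a query; higher BM25 score means a better match.
query = "http client timeout".split()
for doc, score in sorted(zip(docs, bm25.get_scores(query)),
                         key=lambda pair: pair[1], reverse=True):
    print(f"{score:5.2f}  {doc}")
```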
create-meta-prompts
Create optimized prompts for Claude-to-Claude pipelines with research, planning, and execution stages. Use when building prompts that produce outputs for other prompts to consume, or when running multi-stage workflows (research -> plan -> implement).
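A sketch of the research -> plan -> implement chaining, where run_claude is a hypothetical stand-in for however you actually invoke the model:

```python
def run_claude(prompt: str) -> str:
    """Hypothetical stand-in for a real Claude invocation (API or CLI)."""
    return f"<model output for: {prompt.splitlines()[0]}>"

# Each stage's output becomes structured input to the next prompt.
research = run_claude("Research: summarize prior art on X as bullet findings.")
plan = run_claude(f"Plan: given these findings, produce a step list.\n\n{research}")
code = run_claude(f"Implement: follow this plan exactly.\n\n{plan}")
```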
subagent-prompt-construction
Systematic methodology for constructing compact (<150 lines), expressive, Claude Code-integrated subagent prompts using lambda contracts and symbolic logic. Use when creating new specialized subagents for Claude Code with agent composition, MCP tool integration, or skill references. Validated with phase-planner-executor (V_instance=0.895).
baseline-quality-assessment
Achieve a comprehensive baseline (V_meta ≥ 0.40) in iteration 0 to enable rapid convergence. Use when planning iteration 0 time allocation, when the domain has established practices to reference, when rich historical data exists for immediate quantification, or when targeting convergence in 3-4 iterations. Provides four quality levels (minimal/basic/comprehensive/exceptional), a component-by-component V_meta calculation guide, and three strategies for reaching a comprehensive baseline (leverage prior art, quantify the baseline, analyze domain universality). Yields a 40-50% reduction in iterations when V_meta(s₀) ≥ 0.40 versus < 0.20: spend 3-4 extra hours in iteration 0, save 3-6 hours overall.
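A sketch of a component-by-component V_meta calculation; the component names, scores, and weights below are hypothetical, not this skill's actual rubric:

```python
# Hypothetical components; each maps to a (score, weight) pair, weights sum to 1.
components = {
    "completeness":    (0.50, 0.30),
    "evidence":        (0.30, 0.30),
    "actionability":   (0.40, 0.20),
    "transferability": (0.25, 0.20),
}

# V_meta as the weighted sum of component scores.
v_meta = sum(score * weight for score, weight in components.values())
print(f"V_meta(s0) = {v_meta:.2f}")  # 0.37 here: just below the 0.40 bar
```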
grey-haven-prompt-engineering
Master 26 documented prompt engineering principles for crafting effective LLM prompts with 400%+ quality improvement. Includes templates, anti-patterns, and quality checklists for technical, learning, creative, and research tasks. Use when writing prompts for LLMs, improving AI response quality, training on prompting, designing agent instructions, or when user mentions 'prompt engineering', 'better prompts', 'LLM quality', 'prompt templates', 'AI prompts', 'prompt principles', or 'prompt optimization'.
agent-prompt-evolution
Track and optimize agent specialization during methodology development. Use when agent specialization emerges (generic agents show a >5x performance gap), multi-experiment comparison is needed, or methodology transferability analysis is required. Captures agent set evolution (Aₙ tracking), meta-agent evolution (Mₙ tracking), specialization decisions (when and why to create specialized agents), and reusability assessment (universal vs domain-specific vs task-specific). Enables systematic cross-experiment learning and optimized M₀ evolution. 2-3 hours overhead per experiment.
methodology-bootstrapping
Apply Bootstrapped AI Methodology Engineering (BAIME) to develop project-specific methodologies through systematic Observe-Codify-Automate cycles with dual-layer value functions (instance quality + methodology quality). Use when creating testing strategies, CI/CD pipelines, error handling patterns, observability systems, or any reusable development methodology. Provides structured framework with convergence criteria, agent coordination, and empirical validation. Validated in 8 experiments with 100% success rate, 4.9 avg iterations, 10-50x speedup vs ad-hoc. Works for testing, CI/CD, error recovery, dependency management, documentation systems, knowledge transfer, technical debt, cross-cutting concerns.
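A sketch of the dual-layer convergence check; the thresholds and two-iteration stability window are illustrative assumptions, not BAIME's published criteria:

```python
def converged(v_instance: list[float], v_meta: list[float],
              threshold: float = 0.80, epsilon: float = 0.02) -> bool:
    """True when both value functions are above threshold and stable.

    Thresholds and the two-iteration stability window are assumptions
    for illustration; BAIME's actual convergence criteria may differ.
    """
    if len(v_instance) < 2 or len(v_meta) < 2:
        return False
    stable = (abs(v_instance[-1] - v_instance[-2]) < epsilon
              and abs(v_meta[-1] - v_meta[-2]) < epsilon)
    return v_instance[-1] >= threshold and v_meta[-1] >= threshold and stable

# Instance quality and methodology quality tracked per iteration.
print(converged([0.70, 0.82, 0.83], [0.55, 0.79, 0.80]))  # True
```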
grey-haven-code-quality-analysis
Multi-mode code quality analysis covering security reviews (OWASP Top 10), clarity refactoring (readability rules), and synthesis analysis (cross-file issues). Use when reviewing code for security vulnerabilities, improving code readability, conducting quality audits, pre-deployment checks, or when user mentions 'code quality', 'code review', 'security review', 'refactoring', 'code smell', 'OWASP', 'code clarity', or 'quality audit'.