Browse Skills
44358 skills found
game-development.md
from "xenitV1/claude-code-maestro" · 2026-01-15
Game development orchestrator. Routes to platform-specific skills based on project needs.
mobile-typography.md
from "xenitV1/claude-code-maestro" · 2026-01-15
Mobile typography system for React Native and Flutter. Type scales, font weights, line heights, responsive typography, and dark mode support.
nemo-curator.md
from "zechenzhangAGI/AI-research-SKILLs" · 2026-01-16
GPU-accelerated data curation for LLM training. Supports text/image/video/audio. Features fuzzy deduplication (16× faster), quality filtering (30+ heuristics), semantic deduplication, PII redaction, and NSFW detection. Scales across GPUs with RAPIDS. Use for preparing high-quality training datasets, cleaning web data, or deduplicating large corpora.
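The fuzzy-deduplication idea behind that entry can be illustrated with a stdlib-only MinHash sketch. This is a single-threaded toy, not NeMo Curator's API; the function names (`shingles`, `minhash`, `similarity`) are illustrative, and the real pipeline runs distributed on RAPIDS across GPUs.

```python
import hashlib

def shingles(text, k=3):
    """Character k-grams of a whitespace-normalized, lowercased string."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash(sh, num_hashes=64):
    """MinHash signature: for each salt, keep the minimum hash over all shingles."""
    return [
        min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16) for s in sh)
        for seed in range(num_hashes)
    ]

def similarity(a, b):
    """Fraction of matching signature slots estimates the Jaccard similarity."""
    sa, sb = minhash(shingles(a)), minhash(shingles(b))
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)
```

Near-duplicate pairs score high while unrelated text scores near zero, which is what lets a deduplicator bucket candidates cheaply before any exact comparison.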
llamaindex.md
from "zechenzhangAGI/AI-research-SKILLs" · 2026-01-16
Data framework for building LLM applications with RAG. Specializes in document ingestion (300+ connectors), indexing, and querying. Features vector indices, query engines, agents, and multi-modal support. Use for document Q&A, chatbots, knowledge retrieval, or building RAG pipelines. Best for data-centric LLM applications.
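The retrieve-then-read shape of a RAG pipeline can be sketched without the framework. This is not the LlamaIndex API: the word-overlap scorer below stands in for a vector index, and the prompt-building step stands in for the LLM call.

```python
def tokenize(text):
    return set(text.lower().split())

def retrieve(query, docs, top_k=2):
    """Rank documents by word overlap with the query (a stand-in for a vector index)."""
    q = tokenize(query)
    return sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)[:top_k]

def answer(query, docs):
    """'Read' step: stuff the retrieved context into a prompt (the LLM call is stubbed)."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

A real pipeline swaps in embeddings for scoring and a model call for the stub, but the data flow (ingest, index, retrieve, synthesize) is the same.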
sentencepiece.md
from "zechenzhangAGI/AI-research-SKILLs" · 2026-01-16
Language-independent tokenizer treating text as raw Unicode. Supports BPE and Unigram algorithms. Fast (50k sentences/sec), lightweight (6MB memory), deterministic vocabulary. Used by T5, ALBERT, XLNet, mBART. Train on raw text without pre-tokenization. Use when you need multilingual support, CJK languages, or reproducible tokenization.
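One BPE training iteration, the algorithm that entry mentions, fits in a few lines of stdlib Python. This is a classroom sketch, not SentencePiece's trainer, which works on raw Unicode, adds a Unigram alternative, and repeats this merge step until the target vocabulary size is reached.

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across a corpus of (symbols, frequency) entries."""
    pairs = Counter()
    for syms, freq in words.items():
        for a, b in zip(syms, syms[1:]):
            pairs[(a, b)] += freq
    return max(pairs, key=pairs.get)

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with one merged symbol."""
    merged = {}
    for syms, freq in words.items():
        out, i = [], 0
        while i < len(syms):
            if i + 1 < len(syms) and (syms[i], syms[i + 1]) == pair:
                out.append(syms[i] + syms[i + 1])
                i += 2
            else:
                out.append(syms[i])
                i += 1
        merged[tuple(out)] = merged.get(tuple(out), 0) + freq
    return merged
```

Each round greedily merges the most frequent adjacent pair, which is why common substrings end up as single vocabulary entries.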
codebase-analyzer.md
from "severity1/claude-code-auto-memory" · 2026-01-15
This skill should be used when the user asks to "initialize auto-memory", "create CLAUDE.md", "set up project memory", or runs the /auto-memory:init command. Analyzes codebase structure and generates CLAUDE.md files using the exact template format with AUTO-MANAGED markers.
behavioral-modes.md
from "xenitV1/claude-code-maestro" · 2026-01-15
AI operational modes (brainstorm, implement, debug, review, teach, ship, orchestrate). Use to adapt behavior based on task type.
condition-based-waiting.md
from "mneves75/dnschat" · 2026-01-13
Use when tests have race conditions, timing dependencies, or inconsistent pass/fail behavior. Replaces arbitrary timeouts with condition polling that waits for actual state changes, eliminating flaky tests caused by timing guesses.
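The pattern that skill describes, polling for a real state change instead of sleeping for a guessed duration, can be sketched in a few lines (a minimal stdlib version; the helper name `wait_for` is illustrative):

```python
import time

def wait_for(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or `timeout` expires.

    Replaces `time.sleep(arbitrary_guess)` in tests: the wait ends as soon
    as the real state change happens, and returns False when it never does.
    """
    deadline = time.monotonic() + timeout
    while True:
        if condition():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```

A test then asserts on `wait_for(lambda: server.is_ready())` rather than sleeping a fixed two seconds, so it passes as fast as the system allows and fails only when the condition genuinely never holds.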
openrlhf-training.md
from "zechenzhangAGI/AI-research-SKILLs" · 2026-01-16
High-performance RLHF framework with Ray+vLLM acceleration. Use for PPO, GRPO, RLOO, DPO training of large models (7B-70B+). Built on Ray, vLLM, ZeRO-3. 2× faster than DeepSpeedChat with distributed architecture and GPU resource sharing.
documentation-templates.md
from "xenitV1/claude-code-maestro" · 2026-01-15
Documentation templates and structure guidelines. README, API docs, code comments, and AI-friendly documentation.
llamaguard.md
from "zechenzhangAGI/AI-research-SKILLs" · 2026-01-16
Meta's 7-8B specialized moderation model for LLM input/output filtering. Six safety categories: violence/hate, sexual content, weapons, substances, self-harm, and criminal planning. 94-95% accuracy. Deploy with vLLM, HuggingFace, or SageMaker. Integrates with NeMo Guardrails.
artifacts-builder.md
from "xenitV1/claude-code-maestro" · 2026-01-15
React/Tailwind component construction patterns for building reusable UI components.
multiplayer.md
from "xenitV1/claude-code-maestro" · 2026-01-15
Multiplayer game development principles. Architecture, networking, synchronization.
code-review-checklist.md
from "xenitV1/claude-code-maestro" · 2026-01-15
Code review guidelines covering code quality, security, and best practices.
rwkv-architecture.md
from "zechenzhangAGI/AI-research-SKILLs" · 2026-01-16
RNN+Transformer hybrid with O(n) inference: linear time, unbounded context, no KV cache. Trains like GPT (parallel), infers like an RNN (sequential). A Linux Foundation AI project, in production use in Windows, Office, and NeMo. Latest release RWKV-7 (March 2025); models up to 14B parameters.
powershell-windows.md
from "xenitV1/claude-code-maestro" · 2026-01-15
PowerShell Windows patterns. Critical pitfalls, operator syntax, error handling.
mamba-architecture.md
from "zechenzhangAGI/AI-research-SKILLs" · 2026-01-16
State-space model with O(n) complexity vs Transformers' O(n²). 5× faster inference, million-token sequences, no KV cache. Selective SSM with hardware-aware design. Mamba-1 (d_state=16) and Mamba-2 (d_state=128, multi-head). Models 130M-2.8B on HuggingFace.
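The O(n) claim is easiest to see in a toy scalar state-space recurrence (a sketch only; Mamba's selective SSM is multi-channel with input-dependent parameters and a hardware-aware parallel scan). Each step updates a fixed-size state, so inference cost grows linearly with sequence length and needs no KV cache:

```python
def ssm_scan(a, b, c, xs):
    """Scalar SSM: h_t = a*h_{t-1} + b*x_t ;  y_t = c*h_t.

    One pass over the sequence with O(1) state, versus attention's
    O(n) growing cache and O(n^2) total pairwise work.
    """
    h, ys = 0.0, []
    for x in xs:
        h = a * h + b * x
        ys.append(c * h)
    return ys
```

With `a = 0.5` the state is an exponentially decaying memory of past inputs; the selective mechanism in Mamba makes the analogous coefficients functions of the input so the model can choose what to remember.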
serving-llms-vllm.md
from "zechenzhangAGI/AI-research-SKILLs" · 2026-01-16
Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. Use when deploying production LLM APIs, optimizing inference latency/throughput, or serving models with limited GPU memory. Supports OpenAI-compatible endpoints, quantization (GPTQ/AWQ/FP8), and tensor parallelism.
nemo-guardrails.md
from "zechenzhangAGI/AI-research-SKILLs" · 2026-01-16
NVIDIA's runtime safety framework for LLM applications. Features jailbreak detection, input/output validation, fact-checking, hallucination detection, PII filtering, toxicity detection. Uses Colang 2.0 DSL for programmable rails. Production-ready, runs on T4 GPU.
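One of the rails listed there, PII filtering, reduces to a transform applied to text before it reaches the model. The sketch below is a toy email-only redactor in stdlib Python, nothing like the framework's model-backed detectors, and in NeMo Guardrails itself such a rail would be wired up as a Colang flow rather than a bare function:

```python
import re

# Deliberately simple pattern; real PII detection covers many more
# entity types (names, phone numbers, addresses) and uses ML models.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_pii(text):
    """Replace email addresses with a placeholder before the prompt reaches the LLM."""
    return EMAIL.sub("[REDACTED_EMAIL]", text)
```

An input rail runs this on user messages, an output rail on model completions, so sensitive strings never cross the trust boundary in either direction.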
memory-processor.md
from "severity1/claude-code-auto-memory" · 2026-01-15
Process file changes and update CLAUDE.md memory sections. Use when the memory-updater agent needs to analyze dirty files, update AUTO-MANAGED sections, verify content removal, or detect stale commands. Invoked after file edits to keep project memory in sync.
multi-mind.md
from "kaushikgopal/dotfiles" · 2026-01-15
Multi-specialist collaborative analysis for complex decisions. Spawns parallel subagents with diverse domain expertise to analyze a topic from multiple angles. Use when user says "multi-mind", or for complex architecture decisions, technology choices, strategic planning, or any multi-faceted problem with no obvious right answer.
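The fan-out/fan-in shape of that skill can be sketched with `concurrent.futures`. Everything here is a stand-in: the perspectives list and `analyze` stub are hypothetical, where the real skill spawns LLM subagents and synthesizes their reports.

```python
from concurrent.futures import ThreadPoolExecutor

PERSPECTIVES = ["security", "performance", "maintainability"]

def analyze(perspective, topic):
    # Stand-in for a specialist subagent; here it just labels the topic.
    return f"[{perspective}] analysis of {topic}"

def multi_mind(topic):
    """Fan out one analysis per perspective in parallel, then collect in order."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda p: analyze(p, topic), PERSPECTIVES))
```

`Executor.map` preserves input order, so the collected reports line up with the perspective list regardless of which subtask finishes first.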
ppage.md
from "kaushikgopal/dotfiles" · 2026-01-15
Captures session learnings, decisions, and context to a markdown file for future agent ramp-up. Use when user says "ppage", "page context", "save context", "capture learnings", or before ending a substantial work session.
find.md
from "kaushikgopal/dotfiles" · 2026-01-15
Fast file and code search using `fd` and `rg`. Use when the user asks to locate files or search code patterns.
analyze-function.md
from "kaushikgopal/dotfiles" · 2026-01-15
Deep line-by-line analysis of a function or method. Explains what each line does, why it's written that way, performance implications, edge cases, and design patterns. Use when user says "analyze-function", "analyze {function}", "deep dive on {function}", or "explain {function} line by line".