Browse Skills
44357 skills found
url-dump.md
export url-dump from "huytieu/COG-second-brain"
Quick capture URLs with automatic content extraction, insights, and categorization into knowledge booklets
2026-01-15
deployment-procedures.md
export deployment-procedures from "xenitV1/claude-code-maestro"
Production deployment principles and decision-making. Safe deployment workflows, rollback strategies, and verification. Teaches thinking, not scripts.
2026-01-15
red-team-tactics.md
export red-team-tactics from "xenitV1/claude-code-maestro"
Red team tactics principles based on MITRE ATT&CK. Attack phases, detection evasion, reporting.
2026-01-15
architecture.md
export architecture from "xenitV1/claude-code-maestro"
Architectural decision-making framework. Requirements analysis, trade-off evaluation, ADR documentation. Use when making architecture decisions or analyzing system design.
2026-01-15
requesting-code-review.md
export requesting-code-review from "mneves75/dnschat"
Use when completing tasks, implementing major features, or before merging, to verify the work meets requirements; dispatches the superpowers:code-reviewer subagent to review the implementation against the plan or requirements before proceeding
2026-01-13
daily-brief.md
export daily-brief from "huytieu/COG-second-brain"
Generate personalized news intelligence with verified sources (7-day freshness requirement)
2026-01-15
training-llms-megatron.md
export training-llms-megatron from "zechenzhangAGI/AI-research-SKILLs"
Trains large language models (2B-462B parameters) using NVIDIA Megatron-Core with advanced parallelism strategies. Use when training models >1B parameters, need maximum GPU efficiency (47% MFU on H100), or require tensor/pipeline/sequence/context/expert parallelism. Production-ready framework used for Nemotron, LLaMA, DeepSeek.
2026-01-16
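For orientation, a minimal sketch of how Megatron-Core's parallel state is typically initialized before a model is built; it assumes megatron-core is installed and the script is launched with torchrun so the world size matches TP x PP (the parallel sizes here are illustrative, not taken from the skill).

```python
# Sketch: set up Megatron-Core tensor/pipeline parallel groups.
# Assumes launch via torchrun, e.g. torchrun --nproc_per_node=4 init_parallel.py
import os
import torch
from megatron.core import parallel_state

def init_parallel(tp: int = 2, pp: int = 2) -> None:
    # torchrun provides RANK / WORLD_SIZE / LOCAL_RANK in the environment.
    torch.distributed.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    # Partition the world into tensor-parallel and pipeline-parallel groups.
    parallel_state.initialize_model_parallel(
        tensor_model_parallel_size=tp,
        pipeline_model_parallel_size=pp,
    )
    print(
        "TP rank", parallel_state.get_tensor_model_parallel_rank(),
        "PP rank", parallel_state.get_pipeline_model_parallel_rank(),
    )

if __name__ == "__main__":
    init_parallel()
```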
defense-in-depth.md
export defense-in-depth from "mneves75/dnschat"
Use when invalid data causes failures deep in execution and validation is needed at multiple system layers; validates data at every layer it passes through so that such bugs become structurally impossible
2026-01-13
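To make the idea concrete, a small illustrative sketch (not taken from the skill) in which the same positivity invariant is enforced at the request boundary, in the business rule, and in the value type itself:

```python
# Illustrative defense-in-depth: the invariant is checked at every layer the data
# passes through, so a bad value cannot travel deep into execution.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: int
    def __post_init__(self) -> None:              # layer 3: the type enforces the invariant
        if self.value <= 0:
            raise ValueError("quantity must be positive")

def parse_request(raw: dict) -> Quantity:          # layer 1: validate at the API boundary
    qty = int(raw.get("quantity", 0))
    if qty <= 0:
        raise ValueError("request rejected: quantity must be positive")
    return Quantity(qty)

def reserve_stock(qty: Quantity, available: int) -> int:  # layer 2: business rule re-checks
    if qty.value > available:
        raise ValueError("cannot reserve more than available stock")
    return available - qty.value

print(reserve_stock(parse_request({"quantity": 3}), available=10))  # -> 7
```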
mobile-games.md
export mobile-games from "xenitV1/claude-code-maestro"
Mobile game development principles. Touch input, battery, performance, app stores.
2026-01-15
grpo-rl-training.md
export grpo-rl-training from "zechenzhangAGI/AI-research-SKILLs"
Expert guidance for GRPO/RL fine-tuning with TRL for reasoning and task-specific model training
2026-01-16
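As a quick illustration, a minimal sketch of what a GRPO run with TRL's GRPOTrainer typically looks like; the model name, toy prompt set, and reward function are placeholders, not part of the skill.

```python
# Minimal GRPO sketch with TRL (assumes trl and datasets are installed).
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# GRPO needs only prompts; the trainer samples completions itself.
train_dataset = Dataset.from_dict({"prompt": ["Explain KV caching in one sentence."] * 64})

def reward_concise(completions, **kwargs):
    # Toy reward: prefer completions close to 200 characters.
    return [-abs(200 - len(c)) / 200 for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",     # illustrative choice; any causal LM name works
    reward_funcs=reward_concise,
    args=GRPOConfig(
        output_dir="grpo-demo",
        per_device_train_batch_size=4,
        num_generations=4,                  # batch size must be divisible by this
        max_completion_length=128,
    ),
    train_dataset=train_dataset,
)
trainer.train()
```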
huggingface-tokenizers.md
export huggingface-tokenizers from "zechenzhangAGI/AI-research-SKILLs"
Fast tokenizers optimized for research and production. Rust-based implementation tokenizes 1GB in <20 seconds. Supports BPE, WordPiece, and Unigram algorithms. Train custom vocabularies, track alignments, handle padding/truncation. Integrates seamlessly with transformers. Use when you need high-performance tokenization or custom tokenizer training.
2026-01-16
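For context, a minimal sketch of training and using a small BPE tokenizer with the tokenizers library (the corpus, vocabulary size, and special tokens are illustrative):

```python
# Train a tiny BPE tokenizer and inspect tokens, ids, and character offsets.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

trainer = BpeTrainer(vocab_size=8000, special_tokens=["[UNK]", "[PAD]", "[CLS]", "[SEP]"])
tokenizer.train_from_iterator(
    ["fast tokenizers are written in Rust", "they train custom vocabularies"], trainer
)

encoding = tokenizer.encode("fast tokenizers")
print(encoding.tokens, encoding.ids)   # token strings and their ids
print(encoding.offsets)                # character offsets for alignment tracking
```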
tailwind-patterns.md
export tailwind-patterns from "xenitV1/claude-code-maestro"
Tailwind CSS v4 principles. CSS-first configuration, container queries, modern patterns, design token architecture.
2026-01-15
axolotl.md
export axolotl from "zechenzhangAGI/AI-research-SKILLs"
Expert guidance for fine-tuning LLMs with Axolotl - YAML configs, 100+ models, LoRA/QLoRA, DPO/KTO/ORPO/GRPO, multimodal support
2026-01-16
executing-plans.md
export executing-plans from "mneves75/dnschat"
Use when a partner provides a complete implementation plan to execute in controlled batches with review checkpoints; loads the plan, reviews it critically, executes tasks in batches, and reports for review between batches
2026-01-13
using-superpowers.md
export using-superpowers from "mneves75/dnschat"
Use when starting any conversation; establishes mandatory workflows for finding and using skills, including invoking the Skill tool before announcing usage, brainstorming before coding, and creating TodoWrite todos for checklists
2026-01-13
nodejs-best-practices.md
export nodejs-best-practices from "xenitV1/claude-code-maestro"
Node.js development principles and decision-making. Framework selection, async patterns, security, and architecture. Teaches thinking, not copying.
2026-01-15
nanogpt.md
export nanogpt from "zechenzhangAGI/AI-research-SKILLs"
Educational GPT implementation in ~300 lines. Reproduces GPT-2 (124M) on OpenWebText. Clean, hackable code for learning transformers. By Andrej Karpathy. Perfect for understanding GPT architecture from scratch. Train on Shakespeare (CPU) or OpenWebText (multi-GPU).
2026-01-16
testing-patterns.md
export testing-patterns from "xenitV1/claude-code-maestro"
Testing patterns and principles. Unit, integration, mocking strategies.
2026-01-15
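As a concrete illustration of one mocking strategy the skill covers, a small self-contained test that replaces an outbound dependency at its seam and asserts on behavior rather than implementation details (the function and object names here are hypothetical):

```python
# Mock the collaborator, not the unit under test; assert on the observable call.
from unittest.mock import Mock

def notify_owner(repo: str, mailer) -> str:
    subject = f"Build failed: {repo}"
    mailer.send(to="owner@example.com", subject=subject)
    return subject

def test_notify_owner_sends_one_email():
    mailer = Mock()
    subject = notify_owner("claude-code-maestro", mailer)
    mailer.send.assert_called_once_with(to="owner@example.com", subject=subject)

test_notify_owner_sends_one_email()  # or run via pytest
```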
implementing-llms-litgpt.md
export implementing-llms-litgpt from "zechenzhangAGI/AI-research-SKILLs"
Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when need clean model implementations, educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.
2026-01-16
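A quick sketch of LitGPT's Python API for loading a pretrained checkpoint and generating text; it assumes litgpt is installed, and the checkpoint name is illustrative.

```python
# LLM.load downloads and converts the checkpoint on first use.
from litgpt import LLM

llm = LLM.load("microsoft/phi-2")
print(llm.generate("Explain LoRA fine-tuning in one sentence."))
```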
brainstorming.md
export brainstorming from "xenitV1/claude-code-maestro"
Socratic questioning protocol + user communication. MANDATORY for complex requests, new features, or unclear requirements. Includes progress reporting and error handling.
2026-01-15
quantizing-models-bitsandbytes.md
export quantizing-models-bitsandbytes from "zechenzhangAGI/AI-research-SKILLs"
Quantizes LLMs to 8-bit or 4-bit for 50-75% memory reduction with minimal accuracy loss. Use when GPU memory is limited, need to fit larger models, or want faster inference. Supports INT8, NF4, FP4 formats, QLoRA training, and 8-bit optimizers. Works with HuggingFace Transformers.
2026-01-16
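For context, a minimal sketch of loading a model in 4-bit NF4 through transformers with a bitsandbytes config; the model name is illustrative and a CUDA GPU with bitsandbytes installed is assumed.

```python
# Load a causal LM with 4-bit NF4 weights and bf16 compute.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 format; "fp4" is the alternative
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16, store weights in 4-bit
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",             # illustrative; any causal LM works
    quantization_config=bnb_config,
    device_map="auto",
)
print(model.get_memory_footprint() / 1e9, "GB")  # roughly 4x smaller than fp16
```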
seo-fundamentals.md
export seo-fundamentals from "xenitV1/claude-code-maestro"
SEO fundamentals, E-E-A-T, Core Web Vitals, and Google algorithm principles.
2026-01-15
optimizing-attention-flash.md
export optimizing-attention-flash from "zechenzhangAGI/AI-research-SKILLs"
Optimizes transformer attention with Flash Attention for 2-4x speedup and 10-20x memory reduction. Use when training/running transformers with long sequences (>512 tokens), encountering GPU memory issues with attention, or need faster inference. Supports PyTorch native SDPA, flash-attn library, H100 FP8, and sliding window attention.
2026-01-16
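To make this concrete, a minimal sketch using PyTorch's native scaled_dot_product_attention, which dispatches to a Flash Attention kernel on supported GPUs; the shapes and backend hint are illustrative and assume a recent PyTorch (2.3+) with CUDA.

```python
# Fused attention via SDPA: no seq x seq attention matrix is materialized.
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

batch, heads, seq, head_dim = 2, 8, 2048, 64
q, k, v = (torch.randn(batch, heads, seq, head_dim, device="cuda", dtype=torch.float16)
           for _ in range(3))

# Prefer the Flash backend, allowing the memory-efficient kernel as a fallback.
with sdpa_kernel([SDPBackend.FLASH_ATTENTION, SDPBackend.EFFICIENT_ATTENTION]):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

print(out.shape)  # torch.Size([2, 8, 2048, 64])
```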
evaluating-llms-harness.md
export evaluating-llms-harness from "zechenzhangAGI/AI-research-SKILLs"
Evaluates LLMs across 60+ academic benchmarks (MMLU, HumanEval, GSM8K, TruthfulQA, HellaSwag). Use when benchmarking model quality, comparing models, reporting academic results, or tracking training progress. Industry standard used by EleutherAI, HuggingFace, and major labs. Supports HuggingFace, vLLM, APIs.
2026-01-16
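For orientation, a minimal sketch of the harness's Python entry point; the model, task list, few-shot count, and batch size are illustrative.

```python
# Run two benchmarks against a HuggingFace model via lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                            # HuggingFace backend
    model_args="pretrained=EleutherAI/pythia-160m,dtype=float16",
    tasks=["hellaswag", "gsm8k"],
    num_fewshot=0,
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```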