llm-inference-batching-scheduler

Guidance for implementing batching schedulers for LLM inference systems with compilation-based accelerators. This skill applies when optimizing request batching to minimize cost while meeting latency thresholds, particularly when dealing with shape compilation costs, padding overhead, and multi-bucket request distributions. Use this skill for tasks involving batch planning, shape selection, generation-length bucketing, and cost-model-driven optimization for neural network inference.
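The core decisions this skill covers, bucketing requests by predicted generation length and picking a padded batch shape under a cost model that charges for both padding waste and one-time shape compilation, can be sketched in a few dozen lines. This is a minimal illustration only: `COMPILED_SHAPES`, the bucket boundaries, and the compile-cost constant are assumed values for the example, not part of the skill itself.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Assumed set of pre-compiled (batch_size, max_seq_len) shapes. A real
# system would derive these from its compiler cache; values are illustrative.
COMPILED_SHAPES = [(1, 512), (4, 512), (8, 512), (4, 2048), (8, 2048)]

@dataclass
class Request:
    prompt_len: int
    est_gen_len: int  # predicted generation length (e.g. from a length predictor)

def bucket_by_gen_length(requests: List[Request], boundaries=(128, 512, 2048)):
    """Group requests by estimated generation length so short requests are
    not padded out to the longest request in the batch. Requests longer than
    the last boundary are ignored in this sketch."""
    buckets = {b: [] for b in boundaries}
    for r in requests:
        for b in boundaries:
            if r.est_gen_len <= b:
                buckets[b].append(r)
                break
    return buckets

def batch_cost(batch: List[Request], shape: Tuple[int, int],
               per_token_cost=1.0, compile_cost=0.0):
    """Toy linear cost model: the accelerator does padded work regardless,
    so any gap between padded and real tokens is pure overhead. A one-time
    compile cost is charged when the shape is not already cached."""
    bs, seq = shape
    padded_tokens = bs * seq
    real_tokens = sum(r.prompt_len + r.est_gen_len for r in batch)
    assert padded_tokens >= real_tokens  # holds for any shape that fits
    return compile_cost + per_token_cost * padded_tokens

def choose_shape(batch: List[Request], cached_shapes=frozenset(COMPILED_SHAPES)):
    """Pick the cheapest compiled shape that fits the batch, penalizing
    shapes that would trigger a fresh compilation."""
    need_bs = len(batch)
    need_seq = max(r.prompt_len + r.est_gen_len for r in batch)
    candidates = [s for s in COMPILED_SHAPES if s[0] >= need_bs and s[1] >= need_seq]
    if not candidates:
        raise ValueError("no compiled shape fits this batch")
    return min(candidates,
               key=lambda s: batch_cost(
                   batch, s,
                   compile_cost=0.0 if s in cached_shapes else 50_000))

if __name__ == "__main__":
    reqs = [Request(100, 50), Request(120, 400), Request(90, 60), Request(200, 1500)]
    for bound, group in bucket_by_gen_length(reqs).items():
        if group:
            print(f"bucket <= {bound}: {len(group)} reqs -> shape {choose_shape(group)}")
```

Two design points worth noting: bucketing by generation length keeps a 50-token request from riding in a batch padded to 2,048 tokens, and the compile-cost penalty pushes the scheduler toward a small, reusable menu of shapes rather than a bespoke shape per batch.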

Install

git clone https://github.com/letta-ai/skills /tmp/skills && cp -r /tmp/skills/ai/benchmarks/letta/terminal-bench-2/trajectory-only/llm-inference-batching-scheduler ~/.claude/skills/

// tip: Run this command in your terminal to install the skill

Repository: letta-ai/skills/ai/benchmarks/letta/terminal-bench-2/trajectory-only/llm-inference-batching-scheduler
Author: letta-ai
Stars: 13
Forks: 1
Updated: 6d ago
Added: 6d ago