superbpe
Train and use SuperBPE tokenizers for a 20-33% token reduction in any project. Covers training, optimization, validation, and integration with any LLM framework. Use it when you need efficient tokenization, want to reduce API costs, or want to maximize context windows.
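The headline reduction is easy to check empirically. Below is a minimal sketch (assuming the Hugging Face transformers library, a gpt2 baseline, and a placeholder path to a SuperBPE tokenizer you trained or downloaded, not a specific published checkpoint) that compares token counts on the same text:

```python
# Minimal sketch: measure token reduction of a SuperBPE tokenizer
# against a baseline BPE tokenizer on the same text.
from transformers import AutoTokenizer

def token_reduction(baseline_id: str, superbpe_path: str, text: str) -> float:
    baseline = AutoTokenizer.from_pretrained(baseline_id)
    superbpe = AutoTokenizer.from_pretrained(superbpe_path)  # placeholder path
    n_base = len(baseline.encode(text))
    n_super = len(superbpe.encode(text))
    return 1 - n_super / n_base  # fraction of tokens saved

text = open("sample.txt").read()  # assumption: any representative text file
print(f"reduction: {token_reduction('gpt2', 'path/to/superbpe', text):.1%}")
```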
Install
git clone https://github.com/ScientiaCapital/unsloth-mcp-server /tmp/unsloth-mcp-server && cp -r /tmp/unsloth-mcp-server/.claude/skills/superbpe ~/.claude/skills/
Tip: Run this command in your terminal to install the skill.
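For a sense of what the skill trains, here is a conceptual sketch of SuperBPE's two-stage idea using the Hugging Face tokenizers library: stage 1 is ordinary BPE with whitespace pretokenization, and stage 2 lifts that constraint so merges can cross word boundaries ("superwords"). The corpus filename and vocabulary sizes are placeholders, and because BpeTrainer cannot resume from existing merges, stage 2 here retrains without a pre-tokenizer as an approximation of, not a substitute for, the skill's actual trainer:

```python
# Conceptual two-stage sketch of SuperBPE training, not the skill's trainer.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

files = ["corpus.txt"]  # assumption: a plain-text training corpus

# Stage 1: standard BPE; merges stay within whitespace-delimited words.
stage1 = Tokenizer(models.BPE(unk_token="[UNK]"))
stage1.pre_tokenizer = pre_tokenizers.Whitespace()
stage1.train(files, trainers.BpeTrainer(vocab_size=32_000, special_tokens=["[UNK]"]))

# Stage 2 (approximation): no pre-tokenizer, so merges may span whitespace.
# Note: training without pretokenization is slow on long lines.
stage2 = Tokenizer(models.BPE(unk_token="[UNK]"))
stage2.train(files, trainers.BpeTrainer(vocab_size=48_000, special_tokens=["[UNK]"]))

stage2.save("superbpe-sketch.json")
```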
Repository: ScientiaCapital/unsloth-mcp-server/.claude/skills/superbpe
Author: ScientiaCapital
Stars: 1
Forks: 0
Updated: 1 week ago
Added: 1 week ago