training-llms-megatron
Trains large language models (2B-462B parameters) using NVIDIA Megatron-Core with advanced parallelism strategies. Use when training models above 1B parameters, when you need maximum GPU efficiency (up to 47% MFU on H100), or when you require tensor, pipeline, sequence, context, or expert parallelism. Production-ready framework used for Nemotron, LLaMA, and DeepSeek.
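As a minimal sketch of how these parallelism dimensions compose (an illustration under stated assumptions, not the skill's prescribed setup: it assumes 8 GPUs, a torchrun launch, and a recent Megatron-Core release whose initialize_model_parallel accepts these keyword arguments; the sizes are placeholders, not recommendations):

import os
import torch
from megatron.core import parallel_state

# Sketch only: run under torchrun with 8 GPUs; all sizes below are assumptions.
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
torch.distributed.init_process_group(backend="nccl")
parallel_state.initialize_model_parallel(
    tensor_model_parallel_size=2,    # shard each layer's weight matrices across 2 GPUs
    pipeline_model_parallel_size=2,  # split the layer stack into 2 pipeline stages
    context_parallel_size=2,         # shard the sequence dimension across 2 GPUs
)
# The leftover factor becomes data parallelism: 8 / (2 * 2 * 2) = 1-way here.

The product of the tensor, pipeline, and context sizes must divide the total GPU count; whatever remains forms the data-parallel dimension.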
$ Install
git clone https://github.com/zechenzhangAGI/AI-research-SKILLs /tmp/AI-research-SKILLs && cp -r /tmp/AI-research-SKILLs/08-distributed-training/megatron-core ~/.claude/skills/AI-research-SKILLs/
Tip: run this command in your terminal to install the skill.
Repository: zechenzhangAGI/AI-research-SKILLs/08-distributed-training/megatron-core
Author: zechenzhangAGI