evaluating-llms-harness
Evaluates LLMs across 60+ academic benchmarks (MMLU, HumanEval, GSM8K, TruthfulQA, HellaSwag). Use when benchmarking model quality, comparing models, reporting academic results, or tracking training progress. Industry standard used by EleutherAI, HuggingFace, and major labs. Supports HuggingFace, vLLM, APIs.
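For reference, a typical harness run looks like this (a minimal sketch; the model name is illustrative, and lm-eval is assumed to be installed from PyPI):

$ pip install lm-eval
$ # Evaluate a HuggingFace model on two benchmarks; results print as a summary table
$ lm_eval --model hf --model_args pretrained=meta-llama/Llama-3.1-8B --tasks mmlu,gsm8k --batch_size 8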
Install
$ git clone https://github.com/zechenzhangAGI/AI-research-SKILLs /tmp/AI-research-SKILLs && cp -r /tmp/AI-research-SKILLs/11-evaluation/lm-evaluation-harness ~/.claude/skills/
Tip: Run this command in your terminal to install the skill.
Repository: zechenzhangAGI/AI-research-SKILLs/11-evaluation/lm-evaluation-harness
Author: zechenzhangAGI
Stars: 62
Forks: 2
Updated: 6d ago
Added: 6d ago