llm-evaluation
LLM evaluation and testing patterns, including prompt testing, hallucination detection, benchmark creation, and quality metrics. Use this skill when testing LLM applications, validating prompt quality, implementing systematic evaluation, or measuring LLM performance.
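To make the idea concrete, here is a minimal sketch of what a systematic evaluation harness in this spirit can look like. It is illustrative only: the names (EvalCase, evaluate, keyword_hallucination_check) and the stand-in model are assumptions for the example, not part of this skill's or the toolkit's actual API.

```python
"""Minimal prompt-evaluation sketch: accuracy plus a crude hallucination flag."""
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class EvalCase:
    prompt: str
    expected_keywords: List[str]   # facts a correct answer must mention
    forbidden_keywords: List[str]  # claims that would indicate hallucination


def keyword_hallucination_check(answer: str, case: EvalCase) -> bool:
    """Flag an answer as suspect if it asserts any forbidden claim."""
    lowered = answer.lower()
    return any(bad.lower() in lowered for bad in case.forbidden_keywords)


def evaluate(model: Callable[[str], str], cases: List[EvalCase]) -> dict:
    """Run each case through the model and collect simple quality metrics."""
    passed, hallucinated = 0, 0
    for case in cases:
        answer = model(case.prompt)
        if keyword_hallucination_check(answer, case):
            hallucinated += 1
            continue
        if all(kw.lower() in answer.lower() for kw in case.expected_keywords):
            passed += 1
    total = len(cases)
    return {
        "accuracy": passed / total if total else 0.0,
        "hallucination_rate": hallucinated / total if total else 0.0,
    }


if __name__ == "__main__":
    # Stand-in "model" so the sketch runs without any API key.
    fake_model = lambda prompt: "Paris is the capital of France."
    cases = [EvalCase(
        prompt="What is the capital of France?",
        expected_keywords=["Paris"],
        forbidden_keywords=["Lyon", "Marseille"],
    )]
    print(evaluate(fake_model, cases))
```

In practice the keyword checks would be replaced by stronger judges (exact match, rubric scoring, or an LLM-as-judge), but the shape of the harness, a list of cases, a model callable, and aggregated metrics, stays the same.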
Install
$ git clone https://github.com/applied-artificial-intelligence/claude-code-toolkit /tmp/claude-code-toolkit && cp -r /tmp/claude-code-toolkit/skills/llm-evaluation ~/.claude/skills/claude-code-toolkit/
Tip: Run this command in your terminal to install the skill.
Repository: applied-artificial-intelligence/claude-code-toolkit/skills/llm-evaluation
Author: applied-artificial-intelligence
Stars: 12
Forks: 2
Updated: 1 week ago
Added: 1 week ago