llm-evaluation

LLM evaluation and testing patterns including prompt testing, hallucination detection, benchmark creation, and quality metrics. Use when testing LLM applications, validating prompt quality, implementing systematic evaluation, or measuring LLM performance.
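The skill's own templates are not reproduced here, but the core idea behind prompt testing, quality metrics, and a simple hallucination check can be sketched in a few lines of Python. Everything below is illustrative: the generate callable stands in for whatever model client you use, and the keyword and grounding heuristics are assumptions for demonstration, not the toolkit's implementation.

# Minimal prompt-test sketch (illustrative only, not the skill's own harness).
# `generate` is a placeholder: any callable that takes a prompt string and returns text.

from typing import Callable, Iterable

def keyword_score(output: str, required_keywords: Iterable[str]) -> float:
    """Fraction of required keywords that appear in the model output."""
    keywords = list(required_keywords)
    if not keywords:
        return 1.0
    hits = sum(1 for kw in keywords if kw.lower() in output.lower())
    return hits / len(keywords)

def grounded(output: str, context: str, min_overlap: float = 0.5) -> bool:
    """Crude hallucination check: enough output tokens must also occur in the context."""
    out_tokens = set(output.lower().split())
    ctx_tokens = set(context.lower().split())
    if not out_tokens:
        return True
    return len(out_tokens & ctx_tokens) / len(out_tokens) >= min_overlap

def run_prompt_tests(generate: Callable[[str], str], cases: list[dict]) -> list[dict]:
    """Run each test case and collect per-case quality metrics."""
    results = []
    for case in cases:
        output = generate(case["prompt"])
        entry = {
            "name": case["name"],
            "keyword_score": keyword_score(output, case.get("keywords", [])),
        }
        # Only run the grounding check when the case supplies reference context.
        if case.get("context"):
            entry["grounded"] = grounded(output, case["context"])
        results.append(entry)
    return results

A benchmark in this style is just a list of cases, e.g. {"name": "capital", "prompt": "What is the capital of France?", "keywords": ["Paris"]}; aggregate scores across the list give a repeatable quality signal as prompts change.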

$ Install

git clone https://github.com/applied-artificial-intelligence/claude-code-toolkit /tmp/claude-code-toolkit && cp -r /tmp/claude-code-toolkit/skills/llm-evaluation ~/.claude/skills/llm-evaluation

// tip: Run this command in your terminal to install the skill