llm-evaluation

LLM output evaluation and quality assessment. Use when implementing LLM-as-judge patterns, quality gates for AI outputs, or automated evaluation pipelines.
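As a quick illustration of the LLM-as-judge pattern this skill covers, here is a minimal sketch of a quality gate: a judge model scores a candidate answer against a rubric and the pipeline accepts or rejects it on a threshold. The rubric text, function names, and the generic `judge` callable are illustrative assumptions, not part of this skill's API.

import json
from typing import Callable

# Illustrative rubric; a real one would be tuned to your task.
RUBRIC = """Rate the ANSWER to the QUESTION on a 1-5 scale for
factual accuracy and completeness. Reply with JSON only:
{"score": <int 1-5>, "reason": "<one sentence>"}"""

def judge_output(question: str, answer: str,
                 judge: Callable[[str], str],
                 threshold: int = 4) -> bool:
    """Ask a judge model to score an answer; gate on a minimum score."""
    prompt = f"{RUBRIC}\n\nQUESTION: {question}\nANSWER: {answer}"
    verdict = json.loads(judge(prompt))  # judge() wraps your model API call
    return verdict["score"] >= threshold

In an automated pipeline, `judge` would wrap whatever model API you use; outputs scoring below the threshold are rejected or routed back for revision.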

$ Install

git clone https://github.com/yonatangross/skillforge-claude-plugin /tmp/skillforge-claude-plugin && cp -r /tmp/skillforge-claude-plugin/.claude/skills/llm-evaluation ~/.claude/skills/llm-evaluation

// tip: Run this command in your terminal to install the skill