constitutional-ai
Anthropic's method for training harmless AI through self-improvement. A two-phase approach: supervised learning with self-critique and revision, followed by RLAIF (RL from AI Feedback). Use it for safety alignment, reducing harmful outputs without human harmfulness labels. Powers Claude's safety training.
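The supervised phase can be sketched as a critique/revision loop: draft a response, critique it against a constitutional principle, then revise. This is a minimal sketch, not Anthropic's implementation; `critique_and_revise` and `toy_model` are hypothetical stand-ins for real model calls.

```python
# Minimal sketch of Constitutional AI's supervised phase (SL-CAI):
# generate -> critique against a principle -> revise. The `model`
# callable is a hypothetical stand-in for an LLM API.

CONSTITUTION = [
    "Choose the response that is least harmful.",
    "Choose the response that avoids giving dangerous instructions.",
]

def critique_and_revise(model, prompt, n_rounds=1):
    """Run the critique/revision loop; return the revised response."""
    response = model(f"Human: {prompt}\n\nAssistant:")
    for _ in range(n_rounds):
        for principle in CONSTITUTION:
            critique = model(
                f"Critique this response per the principle "
                f"'{principle}':\n{response}"
            )
            response = model(
                f"Rewrite the response to address this critique:\n"
                f"{critique}\nOriginal:\n{response}"
            )
    # Revised responses become fine-tuning targets for the SL phase.
    return response

# Toy model that tags each call so the loop's structure is visible.
def toy_model(text):
    if text.startswith("Critique"):
        return "critique"
    if text.startswith("Rewrite"):
        return "revised"
    return "draft"

print(critique_and_revise(toy_model, "How do I stay safe online?"))  # → revised
```

In the second (RLAIF) phase, the same constitution is used to have the model compare pairs of responses, and those AI preference labels train the reward model for RL.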
$ Install
git clone https://github.com/zechenzhangAGI/AI-research-SKILLs /tmp/AI-research-SKILLs && cp -r /tmp/AI-research-SKILLs/07-safety-alignment/constitutional-ai ~/.claude/skills/AI-research-SKILLs/
Tip: run this command in your terminal to install the skill.
Repository: zechenzhangAGI/AI-research-SKILLs/07-safety-alignment/constitutional-ai
Author: zechenzhangAGI
Stars: 62
Forks: 2
Updated: 1 week ago
Added: 1 week ago