ai-partner-chat
SKILL.md
name: ai-partner-chat
description: Provides personalized conversations based on a user persona and vectorized notes. Use when the user wants personalized communication, context-aware responses, or wants the AI to remember and reference their previous thoughts and notes.
AI Partner Chat
Overview
Provide personalized, context-aware conversations by integrating user persona, AI persona, and vectorized personal notes. This skill enables AI to remember and reference the user's previous thoughts, preferences, and knowledge base, creating a more coherent and personalized interaction experience.
Prerequisites
Before first use, complete these steps in order:
1. Create directory structure

   ```bash
   mkdir -p config notes vector_db scripts
   ```

2. Set up Python environment

   ```bash
   python3 -m venv venv
   ./venv/bin/pip install -r .claude/skills/ai-partner-chat/scripts/requirements.txt
   ```

   Note: The first run will download the embedding model (~4.3GB).

3. Generate persona templates. Copy from `.claude/skills/ai-partner-chat/assets/` to `config/`:
   - `user-persona-template.md` → `config/user-persona.md`
   - `ai-persona-template.md` → `config/ai-persona.md`

4. User adds notes. Place Markdown notes in the `notes/` directory (any format/structure).

5. Initialize the vector database (see section 1.2 below).
Now proceed to Core Workflow →
Core Workflow
1. Initial Setup
Before using this skill for the first time, complete the following setup:
1.1 Create Persona Files
Create two Markdown files to define interaction parameters:
User Persona (user-persona.md):
- Define user's background, expertise, interests
- Specify communication preferences and working style
- Include learning goals and current projects
- Use template: `assets/user-persona-template.md`
AI Persona (ai-persona.md):
- Define AI's role and expertise areas
- Specify communication style and tone
- Set interaction guidelines and response strategies
- Define how to use user context and reference notes
- Use template: `assets/ai-persona-template.md`
1.2 Initialize Vector Database
This skill uses an AI Agent approach to intelligent note chunking:
When you initialize the vector database, Claude Code will:
- Read notes from the `<project_root>/notes/` directory
- Analyze each note's format (daily logs, structured docs, continuous text, etc.)
- Generate custom chunking code tailored to that specific note
- Execute the code to produce chunks conforming to the `chunk_schema.Chunk` format
- Generate embeddings using BAAI/bge-m3 (optimized for Chinese text)
- Store them in ChromaDB at `<project_root>/vector_db/`
Key advantages:
- ✅ No pre-written chunking strategies needed
- ✅ Each note gets optimal chunking based on its actual structure
- ✅ True AI Agent: generates tools on demand rather than calling pre-built tools
Chunk Format Requirement:
All chunks must conform to this schema (see scripts/chunk_schema.py):
```python
{
    'content': 'chunk text content',
    'metadata': {
        'filename': 'note.md',        # Required
        'filepath': '/path/to/file',  # Required
        'chunk_id': 0,                # Required
        'chunk_type': 'date_entry',   # Required
        'date': '2025-11-07',         # Optional
        'title': 'Section title',     # Optional
    }
}
```
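As a quick sanity check before indexing, a chunk built to this schema can be passed through the provided `validate_chunk()` helper. A minimal sketch, assuming `validate_chunk` flags missing required fields (its exact raise-or-return behavior is defined in `chunk_schema.py`):

```python
# Minimal sketch: build one schema-conforming chunk and validate it.
# Assumes validate_chunk() flags missing required fields; see
# chunk_schema.py for its exact raise/return behavior.
import sys
from pathlib import Path

# Make the skill's utilities importable from the project root
sys.path.insert(0, str(Path(".claude/skills/ai-partner-chat/scripts").resolve()))
from chunk_schema import validate_chunk

chunk = {
    'content': 'Started reading about vector databases today...',
    'metadata': {
        'filename': 'daily-log.md',                             # Required
        'filepath': str(Path('notes/daily-log.md').resolve()),  # Required
        'chunk_id': 0,                                          # Required
        'chunk_type': 'date_entry',                             # Required
        'date': '2025-11-07',                                   # Optional
    },
}

validate_chunk(chunk)
```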
Implementation Requirements
Location: Create <project_root>/scripts/chunk_and_index.py
Required structure:
```python
# Import provided utilities
import sys
from pathlib import Path
from typing import List

sys.path.insert(0, str(Path(__file__).parent.parent / ".claude/skills/ai-partner-chat/scripts"))

from chunk_schema import Chunk, validate_chunk
from vector_indexer import VectorIndexer


def chunk_note_file(filepath: str) -> List[Chunk]:
    """
    Analyze THIS file's format and generate appropriate chunks.

    Each chunk must conform to the chunk_schema.Chunk format:
    {
        'content': 'text',
        'metadata': {
            'filename': 'file.md',
            'filepath': '/path/to/file',
            'chunk_id': 0,
            'chunk_type': 'your_label'
        }
    }
    """
    # TODO: Analyze the actual file format (NOT template-based)
    # TODO: Generate chunks based on that analysis
    # TODO: Validate each chunk with validate_chunk()
    pass


def main():
    # Initialize the vector database
    indexer = VectorIndexer(db_path="./vector_db")
    indexer.initialize_db()

    # Process all note files
    all_chunks = []
    for note_file in Path("./notes").glob("**/*"):
        if note_file.is_file():
            chunks = chunk_note_file(str(note_file))
            all_chunks.extend(chunks)

    # Index the chunks
    indexer.index_chunks(all_chunks)


if __name__ == "__main__":
    main()
```
Execute: ./venv/bin/python scripts/chunk_and_index.py
Key points:
- The `chunk_note_file()` function logic should be created dynamically by analyzing the actual file content
- Do NOT copy chunking strategies from examples or templates
- Each file may have a different format; analyze each one individually
- The only requirement: output must conform to `chunk_schema.Chunk`
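For orientation only, here is what a generated `chunk_note_file()` could look like for one hypothetical note format (daily logs with `## YYYY-MM-DD` headers). It illustrates the required output shape, not a strategy to reuse; the real logic should be written per note after analyzing its actual structure, as stated above:

```python
# Illustration only: a possible chunk_note_file() for ONE hypothetical format
# (daily logs with "## YYYY-MM-DD" headers). Do not copy this as a strategy;
# generate chunking logic per note after analyzing its actual structure.
import re
from pathlib import Path
from typing import List

def chunk_note_file(filepath: str) -> List[dict]:
    text = Path(filepath).read_text(encoding="utf-8")
    # Split at each date header, keeping the header with its section
    sections = [s for s in re.split(r"(?m)^(?=## \d{4}-\d{2}-\d{2})", text) if s.strip()]
    chunks = []
    for i, section in enumerate(sections):
        header = section.strip().splitlines()[0].lstrip("# ").strip()
        metadata = {
            "filename": Path(filepath).name,
            "filepath": str(Path(filepath).resolve()),
            "chunk_id": i,
            "chunk_type": "date_entry",
        }
        if re.fullmatch(r"\d{4}-\d{2}-\d{2}", header):
            metadata["date"] = header  # Optional field, added when recoverable
        chunks.append({"content": section.strip(), "metadata": metadata})
    return chunks
```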
2. Conversation Workflow
For each user query, follow this process:
2.1 Load Personas
Read both persona files to understand:
- User's background, preferences, and communication style
- AI's role definition and interaction guidelines
- How to appropriately reference context
2.2 Retrieve Relevant Notes
Query the vector database to find the top 5 most semantically similar notes:
```python
from scripts.vector_utils import get_relevant_notes

# Query for relevant context
relevant_notes = get_relevant_notes(
    query=user_query,
    db_path="./vector_db",
    top_k=5
)
```
Or use the command-line tool:
python scripts/query_notes.py "user query text" --top-k 5
2.3 Construct Context
Combine the following elements to inform the response:
- User Persona: Background, preferences, expertise
- AI Persona: Role, communication style, guidelines
- Relevant Notes (top 5): User's previous thoughts and knowledge
- Current Conversation: Ongoing chat history
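As a rough sketch of how these pieces could be stitched into a single prompt context (the persona paths follow the layout in this document; treating `relevant_notes` as a list of dicts with `content` and `metadata` keys is an assumption about what `get_relevant_notes()` returns):

```python
# Minimal sketch of context assembly. The persona file paths follow the
# layout in this document; the shape of relevant_notes (a list of dicts
# with 'content' and 'metadata') is an assumption about get_relevant_notes().
from pathlib import Path

def build_context(user_query: str, relevant_notes: list, chat_history: str = "") -> str:
    user_persona = Path("config/user-persona.md").read_text(encoding="utf-8")
    ai_persona = Path("config/ai-persona.md").read_text(encoding="utf-8")
    notes_block = "\n\n".join(
        f"[{note['metadata'].get('filename', 'note')}] {note['content']}"
        for note in relevant_notes
    )
    return (
        f"## User Persona\n{user_persona}\n\n"
        f"## AI Persona\n{ai_persona}\n\n"
        f"## Relevant Notes (top {len(relevant_notes)})\n{notes_block}\n\n"
        f"## Conversation So Far\n{chat_history}\n\n"
        f"## Current Query\n{user_query}"
    )
```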
2.4 Generate Response
Synthesize a response that:
- Aligns with both persona definitions
- Naturally references relevant notes when applicable
- Maintains continuity with user's knowledge base
- Follows the AI persona's communication guidelines
When Referencing Notes:
- Use natural phrasing: "Based on your previous note about..."
- Make connections: "This relates to what you mentioned in..."
- Avoid robotic citations: integrate context smoothly
Example Response Pattern:
[Acknowledge user's query in preferred communication style]
[Incorporate relevant note context naturally if applicable]
"I remember you mentioned [insight from note] - this connects well with..."
[Provide main response following AI persona guidelines]
[Optional: Ask follow-up question based on user's learning style]
3. Maintenance
Adding New Notes
When the user creates new notes, add them to the vector database:
python scripts/add_note.py /path/to/new_note.md
Updating Personas
Personas can be updated anytime by editing the Markdown files. Changes take effect in the next conversation.
Reinitializing Database
To completely rebuild the vector database:
python scripts/init_vector_db.py /path/to/notes --db-path ./vector_db
This will delete the existing database and re-index all notes.
Technical Details
Data Architecture
User data is stored in project root, not inside the skill directory:
<project_root>/
├── notes/ # User's markdown notes
├── vector_db/ # ChromaDB vector database
├── venv/ # Python dependencies
├── config/
│ ├── user-persona.md # User persona definition
│ └── ai-persona.md # AI persona definition
└── .claude/skills/ai-partner-chat/ # Skill code (can be deleted/reinstalled)
├── SKILL.md
└── scripts/
├── chunk_schema.py # Chunk format specification
├── vector_indexer.py # Core indexing utilities
└── vector_utils.py # Query utilities
Design principles:
- ✅ User data (notes, personas, vectors) lives in project root
- ✅ Easy to backup, migrate, or share across skills
- ✅ Skill code is stateless and replaceable
AI Agent Chunking
Philosophy: Instead of pre-written chunking strategies, Claude Code analyzes each note and generates optimal chunking code on the fly.
How it works:
- Claude reads a note file
- Analyzes its format features (date headers, section titles, separators, etc.)
- Writes Python code that chunks this specific note optimally
- Executes the code to produce chunks
- Validates the chunks against the `chunk_schema.Chunk` format
- Indexes the chunks using `vector_indexer.py`
Benefits:
- Adapts to any note format without pre-programming
- Can handle mixed formats, unusual structures, or evolving note styles
- True "vibe coding" approach - tools are created when needed
Vector Database
- Storage: ChromaDB (persistent local storage at `<project_root>/vector_db/`)
- Embedding Model: BAAI/bge-m3 (multilingual, optimized for Chinese)
- Similarity Metric: Cosine similarity
- Chunking: AI-generated custom code per note
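For orientation, the indexing and query flow that `vector_indexer.py` and `vector_utils.py` wrap is essentially the standard sentence-transformers + ChromaDB pattern. A minimal sketch of that pattern under those assumptions (details such as the collection name are illustrative, not the skill's actual values):

```python
# Rough sketch of the underlying embed-and-query flow (standard
# sentence-transformers + ChromaDB usage). The skill's own utilities wrap
# this; details such as the collection name ("notes") are assumptions.
import chromadb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-m3")             # downloaded on first run
client = chromadb.PersistentClient(path="./vector_db")
collection = client.get_or_create_collection(
    "notes", metadata={"hnsw:space": "cosine"}          # cosine similarity
)

# Indexing: embed chunk texts and store them with their metadata
texts = ["Started reading about vector databases today..."]
collection.add(
    ids=["daily-log.md-0"],
    documents=texts,
    embeddings=model.encode(texts).tolist(),
    metadatas=[{"filename": "daily-log.md", "chunk_id": 0}],
)

# Querying: embed the user query and fetch the top-5 most similar chunks
results = collection.query(
    query_embeddings=model.encode(["what did I write about embeddings?"]).tolist(),
    n_results=5,
)
```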
Scripts
- `chunk_schema.py`: Defines the required chunk format specification
- `vector_indexer.py`: Core utilities for embedding generation and ChromaDB indexing
- `vector_utils.py`: Query utilities for retrieving relevant chunks
- `requirements.txt`: Python dependencies (chromadb, sentence-transformers)
Note: No pre-written chunking scripts. Chunking is done by Claude Code dynamically.
File Structure
<project_root>/
├── notes/ # User's notes (managed by user)
│ └── *.md
├── vector_db/ # Vector database (auto-generated)
├── venv/ # Python environment
├── config/ # User configuration
│ ├── user-persona.md
│ └── ai-persona.md
└── .claude/skills/ai-partner-chat/
├── SKILL.md # This file
├── scripts/
│ ├── chunk_schema.py # Chunk format spec
│ ├── vector_indexer.py # Indexing utilities
│ ├── vector_utils.py # Query utilities
│ └── requirements.txt # Dependencies
└── assets/
├── user-persona-template.md
└── ai-persona-template.md
Best Practices
Persona Design
- Be Specific: Vague personas lead to generic responses
- Include Examples: Show desired interaction patterns in AI persona
- Update Regularly: Refine personas based on conversation quality
- Balance Detail: Provide enough context without overwhelming
Note Management
- Any Format Welcome: AI Agent approach adapts to your note structure
- Meaningful Content: Rich, substantive notes yield better retrieval
- Regular Updates: Add new notes to `<project_root>/notes/` anytime
- Rebuild When Needed: Re-index when the note collection changes significantly
Context Integration
- Natural References: Avoid forced citations - only reference when genuinely relevant
- Connection Quality: Prioritize meaningful connections over quantity
- Respect Privacy: Be mindful of sensitive information in notes
- Conversation Flow: Don't let note references disrupt natural dialogue
Troubleshooting
Database Connection Errors:
- Ensure the `<project_root>/vector_db/` directory exists and is writable
- Check that Python dependencies are installed in the venv
Poor Retrieval Quality:
- Try re-indexing and let Claude Code analyze the notes fresh
- Verify that notes contain substantial content (not just titles)
- Consider increasing the `top_k` value for more context
Chunking Issues:
- If chunks are too large/small, ask Claude to adjust chunking strategy
- Review generated chunking code and provide feedback
- Ensure notes have clear structure for better chunking
README
AI Partner Chat
A Claude Skills project that turns the AI into your personalized conversation partner.
Project Overview
AI Partner Chat provides a personalized, context-aware conversation experience by integrating a user persona, an AI persona, and vectorized personal notes. The skill lets the AI remember and reference your previous thoughts, preferences, and knowledge base, creating a more coherent and personalized interaction.
Core Features
🎭 Dual Persona System
- User Persona: Defines your background, expertise, interests, and communication preferences
- AI Persona: Customizes the AI's role, communication style, and interaction approach
📝 Intelligent Note Retrieval
- Automatically indexes your Markdown notes
- Intelligently retrieves relevant history based on the conversation
- Naturally references your past thoughts and notes during the conversation
💬 Personalized Conversation
- Generates personalized responses based on your persona and notes
- Maintains contextual continuity across the conversation
- References your ideas naturally, like a friend, rather than mechanically citing "the records"
Use Cases
Use this skill when you need:
- Personalized communication rather than generic replies
- Context-aware responses from an AI that remembers your background
- An AI that remembers and references your previous thoughts and notes
- A continuous conversation experience instead of starting from scratch every time
Installation and Usage
Installing the Skill
Copy this project into the `.claude/skills/` folder of your working directory:
<your_project_root>/
└── .claude/
    └── skills/
        └── ai-partner-chat/     # This skill package
            ├── assets/
            ├── scripts/
            ├── SKILL.md
            └── README.md
Using the Skill
In Claude Code, send the following instruction to activate the skill:
Follow ai-partner-chat for this conversation
The AI agent will automatically:
- Read the skill configuration and instructions
- Create the necessary directory structure (`notes/`, `config/`, `vector_db/`, etc.)
- Initialize everything according to your needs
Initialization Flow
Option 1: Let the AI create and configure everything
On first use, simply tell the AI:
I have just placed my notes in notes/. Please vectorize them based on their content, then infer and update user-persona.md, as well as the ai-persona.md that suits me best.
The AI agent will:
- Analyze the note content in the `notes/` directory
- Chunk the notes intelligently based on their format and create the vector database
- Infer your background and preferences from the note content
- Automatically generate and update `config/user-persona.md`
- Recommend and create a `config/ai-persona.md` tailored to your characteristics
Option 2: Configure the personas manually
If you want to define the personas yourself:
- The AI agent will create persona files from the templates into the `config/` directory
- You can edit these files by hand to customize the personas
- Then tell the AI to run the vectorization
Starting a Conversation
Once configured, each time you use the skill, simply send:
Follow ai-partner-chat for this conversation
The AI will:
- Read your persona to understand your background
- Retrieve relevant historical notes
- Generate personalized, context-aware responses
Project Structure
Skill package structure (located at `.claude/skills/ai-partner-chat/`)
ai-partner-chat/
├── assets/                  # Persona templates
│   ├── user-persona-template.md
│   └── ai-persona-template.md
├── scripts/                 # Core scripts
│   ├── chunk_schema.py
│   ├── vector_indexer.py
│   ├── vector_utils.py
│   └── requirements.txt
├── SKILL.md                 # Detailed skill documentation (the AI agent reads this file)
└── README.md                # This file
User data directory (located at the project root)
The AI agent will create the following structure in your project root:
<project_root>/
├── notes/                   # Your notes (created by you or the AI agent)
├── config/                  # Persona configuration (created by the AI agent)
│   ├── user-persona.md
│   └── ai-persona.md
├── vector_db/               # Vector database (created by the AI agent)
└── venv/                    # Python virtual environment (created by the AI agent)
Important: User data is kept separate from the skill package, making it easy to back up and migrate.
Workflow
- Load personas: Read the user persona and AI persona to understand the interaction context
- Retrieve notes: Based on the user's query, retrieve the most relevant notes from the vector database
- Build context: Combine persona information, relevant notes, and conversation history
- Generate a response: Produce a personalized, natural response based on that context
Highlights
🤖 AI Agent Intelligent Chunking
The system analyzes each note's actual format and dynamically generates the most suitable chunking strategy, rather than applying a preset template. Whatever format your notes are in, they get optimal handling.
🎯 Natural References
The AI brings in your past information the way a person recalls it, weaving it smoothly into the conversation instead of stiffly announcing "according to the records".
📦 Independent Data
All of your data (notes, personas, vector database) is stored in the project root, making it easy to back up, migrate, or share across skills.
Best Practices
Persona Design
- Be specific: Vague personas lead to generic replies
- Include examples: Show the desired interaction patterns in the AI persona
- Update regularly: Keep refining the personas based on conversation quality
Note Management
- Any format: The system adapts to any note structure
- Rich content: Substantive notes yield better retrieval results
- Timely updates: Remember to add new notes to the index
Conversation Experience
- Natural references: Only cite notes when they are genuinely relevant
- Stay fluid: Don't let references break the natural rhythm of the conversation
- Focus on quality: Prioritize meaningful connections over sheer quantity
Maintenance and Updates
Adding New Notes
After placing new notes in the `notes/` directory, tell the AI:
I have just added new notes to notes/. Please update the vector database.
The AI agent will automatically analyze the new notes and update the index.
Updating Personas
You can edit the persona files in the `config/` directory directly, or tell the AI:
Please update user-persona.md and ai-persona.md based on my recent notes.
Rebuilding the Index
When the structure of your notes changes significantly, tell the AI:
Please reinitialize the vector database.
The AI agent will re-analyze all notes and rebuild the index.
Notes and Caveats
- First run: When the AI agent first creates the vector database, it downloads the embedding model (about 4.3GB); please be patient
- Python environment: The AI agent automatically creates a virtual environment and installs the required dependencies
- Data storage: All data (notes, personas, vector database) is stored in the project root, not inside the skill package directory
- Skill location: Make sure the skill package sits at `.claude/skills/ai-partner-chat/`
More Information
For detailed technical documentation and usage instructions, see SKILL.md.
Make the AI a conversation partner that truly knows you, not just another tool.