# distill

Distill session episodes into persistent memory patterns.
## When & Why to Use This Skill
This Claude skill automates the transformation of episodic session logs into refined, persistent semantic memory patterns. It addresses AI "amnesia" by systematically extracting, reinforcing, and pruning learnings, pitfalls, and preferences, so agents retain and build on deep context across sessions.
## Use Cases
- Technical Debt Prevention: Automatically capturing project-specific pitfalls, such as unique middleware requirements or security constraints, to prevent the agent from repeating past mistakes in future sessions.
- Adaptive Coding Standards: Learning and reinforcing a development team's preferred architectural patterns and coding styles through continuous observation of successful task outcomes.
- Long-term Knowledge Retention: Building a high-confidence 'Memory' database for complex codebases, allowing the agent to recall specific implementation details and logic discovered weeks prior.
- Workflow Optimization: Distilling successful problem-solving approaches into reusable patterns, enabling the agent to suggest more efficient strategies for recurring technical tasks.
| Field | Value |
|---|---|
| name | distill |
| description | Distill session episodes into persistent memory patterns |
| user_invocable | true |
## /distill - Memory Distillation
Transforms episodic session logs into refined semantic memory patterns.
### Usage

```
/distill           # Distill episodes at current scope
/distill show      # Show current memory without distilling
/distill episodes  # Show pending episodes awaiting distillation
```
### How It Works

#### Episode Collection

During agent work, learnings are logged to `.context/EPISODES.md`:
```markdown
## Session: 2026-01-10T14:30:00Z
- **Task**: Fix authentication bug
- **Outcome**: success
- **Learnings**:
  - JWT tokens need refresh handling in middleware
  - Error messages should include request ID
```
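For illustration, a parser for this episode format might look like the following sketch (the function name and parsing rules are assumptions, not `memory.py`'s actual implementation):

```python
import re
from pathlib import Path

def parse_episodes(path=".context/EPISODES.md"):
    """Split the log on '## Session:' headers and collect each episode's learnings."""
    text = Path(path).read_text()
    episodes = []
    for block in re.split(r"^## Session: ", text, flags=re.M)[1:]:
        lines = block.splitlines()
        episode = {"timestamp": lines[0].strip(), "learnings": []}
        in_learnings = False
        for line in lines[1:]:
            item = line.strip()
            if item.startswith("- **Learnings**"):
                in_learnings = True
            elif item.startswith("- **"):  # any other field ends the learnings list
                in_learnings = False
            elif in_learnings and item.startswith("- "):
                episode["learnings"].append(item[2:])
        episodes.append(episode)
    return episodes
```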
#### Distillation Process

When `/distill` runs, it:

1. Extracts patterns from each episode's learnings
2. Classifies each as a pattern, pitfall, preference, or approach
3. Matches new patterns against existing ones in memory (see the sketch below)
4. Reinforces matching patterns, increasing their confidence
5. Adds new patterns with low initial confidence
6. Decays old patterns that have not been recently reinforced
7. Prunes patterns that fall below the confidence threshold
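As a rough illustration of steps 2 and 3, a keyword classifier and fuzzy matcher might look like this (the keyword lists, similarity cutoff, and function names are assumptions, not `memory.py`'s actual heuristics):

```python
import difflib

def classify(learning):
    """Crude keyword-based classification into the four memory types."""
    text = learning.lower()
    if any(w in text for w in ("avoid", "don't", "never", "bug")):
        return "pitfall"
    if any(w in text for w in ("prefer", "style", "convention")):
        return "preference"
    if any(w in text for w in ("approach", "strategy", "workflow")):
        return "approach"
    return "pattern"

def find_match(learning, memory, cutoff=0.8):
    """Fuzzy-match a learning against existing pattern texts in memory."""
    hits = difflib.get_close_matches(learning, list(memory), n=1, cutoff=cutoff)
    return hits[0] if hits else None
```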
#### Memory Output

Results are saved to `.context/MEMORY.md`:
```markdown
# Memory: [Scope]

## Patterns Observed
- JWT tokens need refresh handling
  Confidence: high | Last reinforced: 2026-01-10

## Pitfalls Discovered
- Avoid storing tokens in localStorage
  Confidence: medium | Last reinforced: 2026-01-08
```
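Rendering that file from distilled records could look like this sketch (the record layout is an assumption; the label thresholds are taken from the confidence table below):

```python
def confidence_label(c):
    """Map numeric confidence to the labels used in MEMORY.md."""
    return "high" if c >= 0.7 else "medium" if c >= 0.4 else "low"

def render_memory(scope, sections):
    """sections: {"Patterns Observed": {text: record}, ...} where each
    record is assumed to hold {"confidence": float, "last": date}."""
    lines = [f"# Memory: {scope}"]
    for title, entries in sections.items():
        lines += ["", f"## {title}"]
        for text, record in entries.items():
            lines.append(f"- {text}")
            lines.append(f"  Confidence: {confidence_label(record['confidence'])}"
                         f" | Last reinforced: {record['last']:%Y-%m-%d}")
    return "\n".join(lines) + "\n"
```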
### Implementation

When invoked, run:

```bash
python3 ~/.claude/plugins/agent-swarm/context/memory.py distill .
```

To show current memory:

```bash
python3 ~/.claude/plugins/agent-swarm/context/memory.py show .
```

To list pending episodes:

```bash
python3 ~/.claude/plugins/agent-swarm/context/memory.py episodes .
```
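If you need to call the CLI from other tooling, a thin wrapper might look like this (assuming only the three subcommands shown above):

```python
import subprocess
from pathlib import Path

MEMORY_PY = Path.home() / ".claude/plugins/agent-swarm/context/memory.py"

def run_memory(subcommand, scope="."):
    """Run one of the documented subcommands: distill, show, or episodes."""
    result = subprocess.run(
        ["python3", str(MEMORY_PY), subcommand, scope],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```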
### Logging Learnings

Agents can log learnings by including a line of this form in their output:

```
LEARNING: [description of pattern, pitfall, or approach]
```

These lines are captured by post-task hooks and appended to `EPISODES.md`.
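A post-task hook implementing this capture might look roughly like the following (the function signature is an assumption; the append format follows the episode example above):

```python
from datetime import datetime, timezone

def capture_learnings(agent_output, task, outcome, path=".context/EPISODES.md"):
    """Extract LEARNING: lines from agent output and append an episode."""
    learnings = [line.split("LEARNING:", 1)[1].strip()
                 for line in agent_output.splitlines()
                 if line.strip().startswith("LEARNING:")]
    if not learnings:
        return
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    with open(path, "a") as f:
        f.write(f"\n## Session: {stamp}\n")
        f.write(f"- **Task**: {task}\n")
        f.write(f"- **Outcome**: {outcome}\n")
        f.write("- **Learnings**:\n")
        for item in learnings:
            f.write(f"  - {item}\n")
```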
### Confidence Mechanics
| Confidence | Meaning |
|---|---|
| 0.0 - 0.2 | Uncertain, may be pruned |
| 0.2 - 0.4 | Low, needs reinforcement |
| 0.4 - 0.7 | Medium, established pattern |
| 0.7 - 0.95 | High, well-validated |
- Reinforcement: Each observation increases confidence
- Decay: Patterns not seen in 30+ days lose confidence
- Pruning: Patterns below 0.2 are removed
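Under those rules, a minimal sketch of the update functions might be (the 0.95 ceiling and 0.2 floor are read off the table; the exact step sizes are assumptions):

```python
from datetime import timedelta

def reinforce(confidence, step=0.1, ceiling=0.95):
    """Each matching observation moves confidence toward the 0.95 ceiling."""
    return min(ceiling, confidence + step)

def decay(confidence, last_reinforced, now, step=0.05, stale=timedelta(days=30)):
    """Patterns not reinforced for 30+ days lose confidence."""
    return confidence - step if now - last_reinforced > stale else confidence

def should_prune(confidence, threshold=0.2):
    """Patterns below 0.2 are removed at the next distillation."""
    return confidence < threshold
```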
### Automatic Distillation

Distillation triggers automatically when:

- The episode count exceeds a threshold (default: 10)
- A session ends (if configured)
- The `/distill` command is invoked manually
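The threshold check itself could be as simple as this sketch (counting session headers in the episode log; the plugin's actual trigger logic may differ):

```python
from pathlib import Path

def pending_episode_count(path=".context/EPISODES.md"):
    """Count episodes awaiting distillation by their session headers."""
    p = Path(path)
    return p.read_text().count("## Session: ") if p.exists() else 0

def should_distill(threshold=10, session_ended=False):
    """True when the episode threshold is exceeded or the session ends."""
    return session_ended or pending_episode_count() > threshold
```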