writing-documentation
Apply Strunk's timeless writing rules to ANY prose humans will read—documentation, commit messages, error messages, explanations, reports, or UI text. Makes your writing clearer, stronger, and more professional.
When & Why to Use This Skill
This skill applies Strunk's timeless writing principles together with modern token economics to optimize technical documentation and prose for clarity and efficiency. The result is high-impact content that both human readers and AI systems process easily. It reduces cognitive load, improves information retrieval through embedding-first design, and maintains professional standards across all forms of written communication.
Use Cases
- Refining technical manuals and API documentation to be more concise and 'token-efficient' for better LLM context window management.
- Polishing UI microcopy, error messages, and tooltips to ensure they are professional, clear, and actionable for end-users.
- Structuring complex internal knowledge bases using 'Capsule Architecture' to enhance semantic searchability and vector embedding performance.
- Standardizing engineering communications, such as commit messages and technical reports, to maintain high information density and professional clarity.
- Designing 'Wisdom Triggers' in documentation to help team members grasp complex architectural concepts in seconds rather than minutes.
Writing Documentation as Wisdom Triggers
Mission: Encode durable wisdom in minimal tokens, creating triggers that activate full understanding in any cognitive system.
Write to activate understanding, not to transcribe knowledge. Create a lattice of retrieval cues rather than a transcript of information.
Core Principles
1. 🧠 Shared Cognition: Design for How Minds Work
Applicability: 👤 Human: 90% | 🤖 LLM: 85%
Both biological and artificial minds exhibit:
- Limited working memory → Use chunks of 3-7 concepts
- Attention biases → U-shaped focus (primacy/recency effects)
- Pattern recognition → Leverage familiar structures
- Associative retrieval → Consistent cues trigger memories
Example:
❌ HOSTILE: "The system uses various approaches depending on factors..."
✅ FRIENDLY:
1. PII → Encrypted PostgreSQL
2. Sessions → Redis (24h TTL)
3. Analytics → BigQuery (aggregated)
2. 🎯 Token Economics: Every Token Must Earn Its Place
Applicability: 👤 Human: 30% | 🤖 LLM: 100%
Modern LLMs use subword tokenization that affects concept integrity:
- `CamelCase` → often a single token
- `hyphenated-terms` → usually 3+ tokens
- Common phrases → fewer tokens than synonyms
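To see the effect concretely, count tokens rather than guessing. A minimal sketch, assuming OpenAI's tiktoken library is installed (any tokenizer with an encode method works the same way; the sample terms are illustrative):

```python
# Sketch only: `tiktoken` is OpenAI's tokenizer (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# Compare the token costs of different naming styles.
for term in ["CamelCaseName", "camel-case-name", "retry", "attempt again"]:
    token_ids = enc.encode(term)
    print(f"{term!r:>20} -> {len(token_ids)} tokens")
```

Exact counts vary by tokenizer and model, which is the point: measure before assuming a term is cheap.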
See @token-optimization.md for advanced techniques.
3. 📦 Capsule Architecture: Compress Wisdom Into Invariants
Applicability: 👤 Human: 95% | 🤖 LLM: 90%
Distill each concept into a stable, minimal truth that can be expanded when needed.
See @capsule-pattern.md for complete format and examples.
4. 🔗 Embedding-First Design: Write for Vector Search
Applicability: 👤 Human: 40% | 🤖 LLM: 95%
Each documentation chunk should be a self-contained semantic unit:
- Modular sections that make sense in isolation
- Topic sentences that summarize each chunk
- Metadata tags for filtering: `[Security]`, `[Performance]`
- Semantic boundaries at paragraph breaks
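A minimal sketch of what such a chunk might look like in code; the Chunk shape, field names, and tag vocabulary are illustrative assumptions, not a prescribed schema:

```python
# Illustrative only: not a required format for embedding pipelines.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    topic_sentence: str                 # one-line summary of the chunk
    body: str                           # the rest of the semantic unit
    tags: list[str] = field(default_factory=list)  # e.g. ["Security"]

    def embedding_text(self) -> str:
        # Lead with the topic sentence so the chunk's main claim
        # dominates the embedding; append tags for metadata filtering.
        tag_str = " ".join(f"[{t}]" for t in self.tags)
        return f"{self.topic_sentence}\n{self.body}\n{tag_str}"

chunk = Chunk(
    topic_sentence="Sessions live in Redis with a 24h TTL.",
    body="Session tokens are opaque; expiry is enforced server-side.",
    tags=["Security", "Performance"],
)
print(chunk.embedding_text())
```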
5. 🎨 Multi-Modal Encoding: Visual + Verbal + Semantic
Applicability: 👤 Human: 100% | 🤖 LLM: 70%
Use Mermaid diagrams with assistive comments (note the quoted labels, per the rule below):

```mermaid
graph LR
    A["Write"] --> B["Cache"]
    B --> C["Success"]
    B --> X["Failure"]
    X -->|Rollback| A
    %% MEANING: Cache only after successful writes to maintain consistency.
    %% KEY INSIGHT: Write-through caching prevents stale data on failures.
    %% IMPLICATION: Never cache before confirming persistent storage.
```
⚠️ CRITICAL: Always escape Mermaid labels with quotes:
❌ WRONG: `A[User Request] --> B{Complex Task?}`
✅ RIGHT: `A["User Request"] --> B{"Complex Task?"}`
Two-Tier Knowledge Architecture
📚 Tier 1: Knowledge Base (High Fidelity)
Comprehensive, authoritative documents on specific concepts:
- Deep technical details
- Edge cases and exceptions
- Historical context and decisions
- Implementation guidance
Location: /docs/concepts/[concept-name].md
📖 Tier 2: Synthesis Documents (Accessible)
Practical guides that combine multiple concepts:
- 80% of value in 20% of tokens
- Clear links to source concepts
- Unified examples showing interaction
- Task-oriented organization
Location: /docs/guides/[guide-name].md
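Tier 2 guides stay trustworthy only while their links back to Tier 1 remain valid. A minimal sketch of a link check, assuming the layout above and standard Markdown links (the paths and the link pattern are assumptions):

```python
# Sketch only: adapt the regex and paths to your repository layout.
import re
from pathlib import Path

CONCEPTS = Path("docs/concepts")
GUIDES = Path("docs/guides")
LINK = re.compile(r"\]\((?:\.\./)?concepts/([\w-]+)\.md\)")

for guide in GUIDES.glob("*.md"):
    targets = LINK.findall(guide.read_text())
    if not targets:
        print(f"{guide.name}: no links back to Tier 1 concepts")
    for name in targets:
        if not (CONCEPTS / f"{name}.md").exists():
            print(f"{guide.name}: broken link -> concepts/{name}.md")
```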
Structural Patterns
The SABER Pattern
For critical technical sections, document:
```markdown
## 🚨 CRITICAL: Payment Processing

**S**ecurity: PCI compliance required, no card storage
**A**lways: Use idempotency keys, maintain audit trail
**B**oundaries: $0.50 min, $10,000 max, 30s timeout
**E**rrors: Log full details, return safe messages
**R**etries: Max 3, exponential backoff, stop on new errors
```
See @saber-pattern.md for complete framework.
Invariant + Cue Pairing
**Invariant**: "Verify early, trust completely" {VETC}
Throughout our system, VETC means validating at the edge, then using
trusted internal tokens. This verify-early pattern avoids repeating
deep security checks in every service.
The Writing Process
📝 Pre-Write: Design Your Triggers
Before writing, identify:
- The single concept to convey
- Its canonical name (token-friendly)
- The invariant truth (≤25 tokens)
- The category/emoji marker
- Which tier it belongs to
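One way to make this checklist concrete is to capture it as data before drafting. A minimal sketch; the field names are illustrative assumptions, and whitespace-split words stand in for real tokens:

```python
# Illustrative only: a pre-write checklist as a data structure.
from dataclasses import dataclass

@dataclass
class Trigger:
    concept: str         # the single concept to convey
    canonical_name: str  # token-friendly, e.g. "WriteThruCache"
    invariant: str       # the irreducible truth, <= 25 tokens
    marker: str          # category/emoji marker
    tier: int            # 1 = knowledge base, 2 = synthesis

    def __post_init__(self) -> None:
        if len(self.invariant.split()) > 25:  # rough proxy for tokens
            raise ValueError("invariant exceeds the 25-token budget")

draft = Trigger(
    concept="write-through caching",
    canonical_name="WriteThruCache",
    invariant="Cache only after the write is confirmed durable.",
    marker="📦",
    tier=1,
)
```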
✍️ Write: Layer Your Wisdom
- Capsule first - The irreducible truth
- Example second - Concrete instantiation
- Details last - For those who need depth
- Visual when helpful - Mermaid + meaning
- Links to sources - For knowledge base items
🔍 Review: Validate Retrieval
See @writing-checklist.md for complete validation checklist.
🔄 Maintain: Preserve Triggers While Updating Details
When updating:
- Invariants stay stable (they're the retrieval cues)
- Examples can change (keep them current)
- Details expand (add new edge cases)
- Links get verified (prevent drift)
Consolidation Principle
After several drafting iterations, consolidate using these lenses:
- Token Efficiency: Can I say this in fewer tokens?
- Embedding Coherence: Is each chunk semantically focused?
- Retrieval Precision: Will search find this easily?
- Cognitive Load: Am I respecting attention limits?
- Pattern Clarity: Are the relationships obvious?
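The Token Efficiency lens can be partially automated. A minimal sketch that flags oversized chunks, assuming tiktoken and an illustrative 150-token budget:

```python
# Sketch only: the budget is illustrative; swap in your own tokenizer.
import tiktoken

BUDGET = 150
enc = tiktoken.get_encoding("cl100k_base")

def over_budget(sections: dict[str, str]) -> list[str]:
    """Return titles of sections whose text exceeds BUDGET tokens."""
    return [title for title, text in sections.items()
            if len(enc.encode(text)) > BUDGET]

# `sections` maps a chunk's title to its text.
print(over_budget({"Payment limits": "PCI compliance required..."}))
```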
Success Metrics
Your documentation succeeds when:
- ✅ Readers grasp concepts in seconds, not minutes
- ✅ AI finds and extracts exactly what's needed
- ✅ Knowledge transfers intact across contexts
- ✅ Updates preserve retrieval while adding detail
- ✅ Complex systems become navigable
Remember: We're not writing documentation. We're encoding wisdom triggers that activate understanding in any mind that encounters them.
References
- @capsule-pattern.md - Capsule architecture format and examples
- @saber-pattern.md - SABER pattern for critical sections
- @token-optimization.md - Advanced token optimization tactics
- @writing-checklist.md - Pre-write and review checklists