---
name: writing-rules
description: Use when creating or updating rules in CLAUDE.md, settings, or rule files. Covers confidence thresholds and false positive prevention.
---
## When & Why to Use This Skill
This Claude skill provides a comprehensive framework for creating, updating, and maintaining high-quality operational rules within CLAUDE.md and configuration files. It optimizes AI agent performance by establishing clear confidence thresholds, preventing false positives, and ensuring behavioral consistency through testable constraints and quality checklists.
## Use Cases
- Establishing project-specific guardrails in CLAUDE.md to prevent recurring coding mistakes and enforce architectural patterns.
- Defining confidence-based triggers that instruct the agent when to proceed autonomously and when to pause for user verification.
- Creating standardized 'SOPs for agents' by documenting specific triggers, actions, and exceptions for complex multi-step workflows.
- Improving agent reliability through rigorous testing protocols, including the creation of positive and negative test cases to minimize false activation rates.
# Writing Rules

## Overview
Rules are constraints that guide behavior. Good rules are specific, testable, and have clear thresholds for when they apply.
## When to Create
Create a rule when:
- Same guidance given 3+ times manually
- Behavior needs to be consistent across sessions
- Mistake pattern keeps recurring
- Process discipline is required
Don't create for:
- Occasional edge cases (document as patterns instead)
- User preferences that change frequently
- One-time instructions
## Structure Template
```markdown
## Rule Name

**When:** [Specific trigger conditions]
**Confidence:** [high/medium/low - when to apply vs ask]
**Action:** [What to do]

### Examples
- Good: [Correct application]
- Bad: [Incorrect application]

### Exceptions
[When this rule doesn't apply]
```
## Quality Checklist
- Clear trigger condition (not vague)
- Confidence threshold defined
- Both good AND bad examples
- Exceptions explicitly listed
- Testable (can verify rule was followed)
- No overlap with existing rules
- Assigned to correct category (process/domain/project)
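Several of these checklist items (trigger present, confidence threshold defined, both example types, exceptions section) are mechanical enough to lint automatically. A minimal sketch in Python, assuming rules follow the structure template above; the function name and regex patterns are illustrative, not part of any existing tool:

```python
import re

# Markers a rule must contain to pass the mechanical parts of the checklist.
# Patterns assume the structure template above is followed literally.
REQUIRED_PATTERNS = {
    "trigger": r"\*\*When:\*\*",
    "confidence": r"\*\*Confidence:\*\*\s*(high|medium|low)",
    "action": r"\*\*Action:\*\*",
    "good example": r"-\s*Good:",
    "bad example": r"-\s*Bad:",
    "exceptions": r"###\s*Exceptions",
}

def lint_rule(rule_text: str) -> list[str]:
    """Return the checklist items the rule text fails to satisfy."""
    return [name for name, pattern in REQUIRED_PATTERNS.items()
            if not re.search(pattern, rule_text)]

rule = """## Import Path Verification
**When:** Adding or modifying import statements
**Confidence:** high
**Action:** Verify path exists by checking 3+ existing examples
### Examples
- Good: Check `src/components/Button.tsx` exists before importing
- Bad: Assume `@/components/Button` works without verification
### Exceptions
- Standard library imports
"""

print(lint_rule(rule))  # an empty list means all mechanical checks pass
```

This only covers the checks a script can see; "no overlap with existing rules" and "testable" still need human judgment.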
## Testing Requirements
- **Create positive test cases** - 5+ scenarios where the rule should apply
- **Create negative cases** - 5+ scenarios where the rule should NOT apply
- **Run through the agent** - verify correct activation on both sets
- **Measure the false positive rate** - target < 10%
- **Monitor for 3 sessions** - auto-rollback if quality drops
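The measurement step above reduces to simple arithmetic: the false positive rate is the fraction of negative cases (where the rule should NOT apply) on which it fired anyway. A sketch in Python; the 10% target comes from the list above, and the list-of-booleans encoding is an assumption for illustration:

```python
def false_positive_rate(negative_case_results: list[bool]) -> float:
    """Fraction of negative cases where the rule activated anyway.

    Each entry is True if the rule (wrongly) fired on that case.
    """
    if not negative_case_results:
        raise ValueError("need at least one negative case (the checklist asks for 5+)")
    return sum(negative_case_results) / len(negative_case_results)

# 20 negative cases, rule wrongly activated on 1 of them -> 5%, under the 10% target
results = [True] + [False] * 19
rate = false_positive_rate(results)
print(f"{rate:.0%}", "PASS" if rate < 0.10 else "FAIL: tighten the trigger")
```

Note that more negative cases give a more trustworthy rate: with only 5 cases, a single wrong activation already means 20%.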
## Examples
**Good rule:**

```markdown
## Import Path Verification

**When:** Adding or modifying import statements
**Confidence:** high
**Action:** Verify the path exists by checking 3+ existing examples in the codebase

### Examples
- Good: Check `src/components/Button.tsx` exists before importing
- Bad: Assume `@/components/Button` works without verification

### Exceptions
- Standard library imports (React, Node built-ins)
- Well-known packages from package.json
```
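The rule above is testable precisely because "the path exists" is an observable outcome. A minimal sketch of such a check in Python, assuming alias-style imports like the `@/` example; the alias map and extension list are hypothetical stand-ins for whatever the project's bundler config defines:

```python
from pathlib import Path

# Hypothetical alias map, as a tsconfig "paths" entry might define it.
ALIASES = {"@/": "src/"}
EXTENSIONS = [".ts", ".tsx", ".js", ".jsx"]

def import_resolves(import_path: str, project_root: str = ".") -> bool:
    """Check that an import specifier points at a real file on disk."""
    for alias, target in ALIASES.items():
        if import_path.startswith(alias):
            import_path = target + import_path[len(alias):]
    base = Path(project_root) / import_path
    return any(base.with_suffix(ext).exists() for ext in EXTENSIONS)
```

Whether the rule was followed can then be verified mechanically, which is exactly what the quality checklist means by "testable".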
**Bad rule:**

```markdown
## Be Careful

**When:** Doing things
**Action:** Think before acting
```

(Too vague, not testable, no examples.)
## Common Mistakes
| Mistake | Fix |
|---|---|
| Rule too broad | Narrow to specific trigger |
| No confidence threshold | Add high/medium/low guidance |
| Missing exceptions | List when rule doesn't apply |
| No examples | Add good AND bad examples |
| Overlaps existing rule | Merge or differentiate clearly |
| Not testable | Rewrite with observable outcome |