# test-fixing
Run tests and systematically fix all failing tests using smart error grouping. Use when user asks to fix failing tests, mentions test failures, runs test suite and failures occur, or requests to make tests pass.
## When & Why to Use This Skill
The 'test-fixing' skill systematically identifies, categorizes, and resolves software test failures. By grouping errors by type, module, and root cause, it streamlines debugging and helps developers restore test suites to a passing state after refactoring, dependency updates, or CI/CD failures.
## Use Cases

1. Post-Refactoring Stabilization: Automatically fix widespread test failures resulting from renamed modules or changed function signatures after significant code changes.
2. CI/CD Failure Resolution: Rapidly address broken builds by systematically triaging and fixing errors reported in continuous integration pipelines.
3. Dependency Upgrade Support: Efficiently resolve import errors and configuration issues that occur when updating project dependencies or frameworks.
4. Regression Debugging: Identify and fix logic bugs introduced during new feature development to ensure the entire codebase remains stable.
| name | test-fixing |
|---|---|
| description | Run tests and systematically fix all failing tests using smart error grouping. Use when user asks to fix failing tests, mentions test failures, runs test suite and failures occur, or requests to make tests pass. |
# Test Fixing

Systematically identify and fix all failing tests using smart grouping strategies.
## When to Use

Use this skill when the user:
- Explicitly asks to fix tests ("fix these tests", "make tests pass")
- Reports test failures ("tests are failing", "test suite is broken")
- Completes an implementation and wants tests passing
- Mentions CI/CD failures due to tests
## Systematic Approach
### 1. Initial Test Run

Run `make test` to identify all failing tests.

Analyze the output for:
- Total number of failures
- Error types and patterns
- Affected modules/files
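As a rough sketch of this step, the snippet below runs the suite once and keeps the tail of the log, where most runners print their failure summary. Only `make test` comes from this skill; the Python wrapper and the 2000-character tail are illustrative assumptions.

```python
# Sketch: run the suite once and capture output for later grouping.
# `make test` is the suite entry point used throughout this skill;
# adjust the command for other setups.
import subprocess

result = subprocess.run(
    ["make", "test"],
    capture_output=True,
    text=True,
)
output = result.stdout + result.stderr
print(f"exit code: {result.returncode}")
print(output[-2000:])  # the tail usually holds the failure summary
```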
### 2. Smart Error Grouping

Group similar failures by:
- Error type: ImportError, AttributeError, AssertionError, etc.
- Module/file: the same file causing multiple test failures
- Root cause: missing dependencies, API changes, refactoring impacts

Prioritize groups by:
- Number of affected tests (highest impact first)
- Dependency order (fix infrastructure before functionality)
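As a minimal sketch of this grouping step, assuming pytest's short-summary format (`FAILED <test id> - <ErrorType>: <message>`, as printed with `-ra`), the snippet below buckets failures by error type and sorts the largest groups first. The regex and the sample lines are illustrative, not part of the skill.

```python
# Group failing tests by error type, highest-impact group first.
import re
from collections import defaultdict

# Matches pytest short-summary lines such as:
#   FAILED tests/test_api.py::test_create - ImportError: cannot import name 'make_app'
SUMMARY_RE = re.compile(r"^FAILED (\S+) - (\w+(?:Error|Exception))")

def group_failures(output: str) -> dict[str, list[str]]:
    groups: dict[str, list[str]] = defaultdict(list)
    for line in output.splitlines():
        match = SUMMARY_RE.match(line.strip())
        if match:
            test_id, error_type = match.groups()
            groups[error_type].append(test_id)
    # Largest groups first: fixing them yields the most passing tests
    return dict(sorted(groups.items(), key=lambda kv: -len(kv[1])))

sample = (
    "FAILED tests/test_api.py::test_create - ImportError: cannot import name 'make_app'\n"
    "FAILED tests/test_api.py::test_delete - ImportError: cannot import name 'make_app'\n"
    "FAILED tests/test_core.py::test_calc - AssertionError: assert 3 == 4\n"
)
for error_type, tests in group_failures(sample).items():
    print(f"{error_type}: {len(tests)} tests")
```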
### 3. Systematic Fixing Process

For each group (starting with the highest impact):

**Identify root cause**
- Read the relevant code
- Check recent changes with `git diff`
- Understand the error pattern

**Implement fix**
- Use the Edit tool for code changes
- Follow project conventions (see CLAUDE.md)
- Make minimal, focused changes

**Verify fix**
- Run the subset of tests for this group
- Use pytest markers or file patterns:

  ```
  uv run pytest tests/path/to/test_file.py -v
  uv run pytest -k "pattern" -v
  ```

- Ensure the group passes before moving on (see the verification sketch after this step)

**Move to next group**
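To make the verify-and-advance loop concrete, here is a hedged sketch that wraps the focused `uv run pytest -k` invocation shown above; the helper name and the example pattern are assumptions.

```python
# Sketch: gate progression on a focused run for the current group.
import subprocess

def group_passes(pattern: str) -> bool:
    # Same focused invocation as above; "pattern" selects the group's tests
    result = subprocess.run(["uv", "run", "pytest", "-k", pattern, "-v"])
    return result.returncode == 0

if group_passes("import"):  # hypothetical pattern for the ImportError group
    print("Group passes; move to the next group.")
else:
    print("Group still failing; keep fixing before moving on.")
```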
### 4. Fix Order Strategy

**Infrastructure first:**
- Import errors
- Missing dependencies
- Configuration issues

**Then API changes:**
- Function signature changes
- Module reorganization
- Renamed variables/functions

**Finally, logic issues:**
- Assertion failures
- Business logic bugs
- Edge case handling
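This ordering can be encoded as a small priority map; the bucket assignments and sample counts below mirror the strategy above but are illustrative assumptions, not output from a real run.

```python
# Sketch: sort grouped failures by fix phase, then by impact within a phase.
PHASE = {
    "ModuleNotFoundError": 0, "ImportError": 0,  # infrastructure first
    "AttributeError": 1, "TypeError": 1,         # then API changes
    "AssertionError": 2,                         # finally, logic issues
}

groups = {"AssertionError": 2, "ImportError": 8, "AttributeError": 5}  # sample counts
for error_type in sorted(groups, key=lambda t: (PHASE.get(t, 2), -groups[t])):
    print(f"fix next: {error_type} ({groups[error_type]} tests)")
```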
### 5. Final Verification

After all groups are fixed:
- Run the complete test suite: `make test`
- Verify no regressions
- Check that test coverage remains intact
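For the coverage check, one hedged option, assuming the suite already runs under coverage.py and a JSON report has been generated with `coverage json`, is to read the report's totals and compare against the pre-fix baseline:

```python
# Sketch: confirm overall coverage is intact after the fixes.
# Assumes coverage.py wrote coverage.json via `coverage json`.
import json

with open("coverage.json") as f:
    percent = json.load(f)["totals"]["percent_covered"]
print(f"total coverage: {percent:.1f}%")  # compare with the pre-fix number
```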
## Best Practices

- Fix one group at a time
- Run focused tests after each fix
- Use `git diff` to understand recent changes
- Look for patterns in failures
- Don't move to the next group until the current one passes
- Keep changes minimal and focused
## Example Workflow

User: "The tests are failing after my refactor"

1. Run `make test` → 15 failures identified
2. Group errors:
   - 8 ImportErrors (module renamed)
   - 5 AttributeErrors (function signature changed)
   - 2 AssertionErrors (logic bugs)
3. Fix ImportErrors first → Run subset → Verify
4. Fix AttributeErrors → Run subset → Verify
5. Fix AssertionErrors → Run subset → Verify
6. Run full suite → All pass ✓