smith-tests

from tianjianjiang

Testing standards and TDD workflow. Use when writing tests, running test suites, implementing TDD, or organizing test files. Covers unit vs integration test separation, pytest patterns, and test-driven development methodology.


When & Why to Use This Skill

This Claude skill establishes a comprehensive framework for testing standards and Test-Driven Development (TDD) workflows. It provides structured guidance on organizing unit and integration tests, implementing pytest patterns, and maintaining high code quality through a rigorous 'test-before-code' methodology. By enforcing consistent directory structures and environment configurations, it helps developers build more reliable, maintainable, and regression-proof software.

Use Cases

  • Implementing Test-Driven Development (TDD): Guiding the workflow of writing failing tests before implementation to ensure code meets specific requirements from the start.
  • Test Suite Organization: Structuring project directories to mirror source code for unit tests while maintaining separate, marked integration tests for external services.
  • Automated Quality Assurance: Running and verifying full test suites using modern Python package managers like Poetry or uv to ensure zero regressions after code changes.
  • PR Review & Coverage Analysis: Identifying critical testing gaps and behavioral coverage issues during the code review process to maintain high engineering standards.
  • Environment Configuration: Setting up isolated testing environments, managing conftest.py files, and handling environment variables securely via .env templates.
name: smith-tests
description: Testing standards and TDD workflow. Use when writing tests, running test suites, implementing TDD, or organizing test files. Covers unit vs integration test separation, pytest patterns, and test-driven development methodology.

Testing Standards

  • Load if: Writing tests, running test suites, TDD
  • Prerequisites: @smith-principles/SKILL.md, @smith-standards/SKILL.md, @smith-python/SKILL.md

CRITICAL (Primacy Zone)

  • MUST mirror source structure: foo/bar/xyz.py → tests/unit/foo/bar/test_xyz.py (see the example layout after this list)
  • MUST use pytest functions (not classes) - see @smith-python/SKILL.md
  • MUST separate unit (tests/unit/) and integration (tests/integration/) tests
  • MUST use virtual env runner for pytest (poetry run or uv run)
  • MUST write tests BEFORE implementation (TDD)
  • MUST run full test suite after changes
  • NEVER use pytest -m "not integration" when the folder structure is mirrored; collecting both test trees with same-named modules causes import conflicts, so run tests/unit/ directly instead
  • NEVER write implementation before tests
  • NEVER skip running tests after changes
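
For example, a mirrored layout for a hypothetical package foo (all file names below are placeholders) might look like:

source module:    foo/bar/xyz.py
unit test:        tests/unit/foo/bar/test_xyz.py
integration test: tests/integration/test_external_service.py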

Test Organization

Unit:

  • Location: tests/unit/
  • Characteristics: Mock dependencies, fast

Integration:

  • Location: tests/integration/
  • Characteristics: Exercise real services, marked @pytest.mark.integration (see the sketch below)
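
A minimal sketch of the split, assuming a hypothetical foo.bar.xyz module whose fetch_user function talks to an external service through an injected client:

tests/unit/foo/bar/test_xyz.py:

from unittest.mock import Mock

from foo.bar.xyz import fetch_user  # hypothetical module under test

def test_fetch_user_returns_name():
    # Mock the external client so the test stays fast and offline
    client = Mock()
    client.get.return_value = {"id": 1, "name": "Ada"}
    assert fetch_user(client, user_id=1)["name"] == "Ada"

tests/integration/test_xyz_service.py:

import pytest

from foo.bar.xyz import fetch_user, make_real_client  # hypothetical helpers

@pytest.mark.integration
def test_fetch_user_against_real_service():
    # Hits the real service; runs only when integration tests are selected
    client = make_real_client()
    assert fetch_user(client, user_id=1)["id"] == 1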

TDD Workflow

  1. Understand: Read existing test patterns
  2. Design: Write failing tests defining expected behavior (see the sketch after this list)
  3. Implement: Write minimal code to pass tests
  4. Verify: Run tests, validate coverage
  5. Refactor: Improve code while keeping tests green
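
A hypothetical illustration of steps 2-3 (the slugify function and mylib module are placeholders):

Step 2 - failing test, written first (tests/unit/test_slugify.py):

from mylib.text import slugify  # does not exist yet, so this test fails

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

Step 3 - minimal implementation to make it pass (mylib/text.py):

def slugify(text: str) -> str:
    # Just enough behavior to satisfy the test; refactor later with tests green
    return "-".join(text.lower().split())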

Environment Configuration

  • tests/conftest.py disables external tracking/telemetry (OPIK, etc.) - see the sketch after this list
  • Virtual env runners load .env automatically
  • Use .env.example as template (NEVER commit .env)
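
A sketch of what tests/conftest.py might contain; the environment variable names are placeholders for whatever your tracking SDKs expect, and registering the integration marker keeps pytest from warning about it:

import pytest

def pytest_configure(config):
    # Register the custom marker used by tests/integration/
    config.addinivalue_line("markers", "integration: tests that hit real external services")

@pytest.fixture(autouse=True)
def _disable_tracking(monkeypatch):
    # Placeholder variable names; adjust to your tracking/telemetry tooling (OPIK, etc.)
    monkeypatch.setenv("OPIK_TRACK_DISABLE", "true")
    monkeypatch.setenv("TELEMETRY_DISABLED", "1")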

Claude Code Plugin Integration

When pr-review-toolkit is available:

  • pr-test-analyzer agent: Analyzes behavioral coverage, identifies critical gaps
  • Rates test gaps 1-10 (10 = critical, must add)
  • Trigger: "Check if the tests are thorough" or use Task tool

Ralph Loop Integration

TDD = Ralph iteration: test → implement → pytest → iterate until <promise>TESTS PASS</promise>.

See @smith-ralph/SKILL.md for full patterns.

Related skills:

  • @smith-python/SKILL.md - Python testing patterns (pytest functions)
  • @smith-dev/SKILL.md - Development workflow (quality gates)
  • @smith-principles/SKILL.md - Core principles

ACTION (Recency Zone)

Run tests (Poetry):

poetry run pytest tests/unit/ -v
poetry run pytest tests/integration/ -v

Run tests (uv):

uv run pytest tests/unit/ -v
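
Run the full suite after changes (the coverage flags assume pytest-cov is installed and --cov=src assumes a src/ layout):

uv run pytest tests/ -v
uv run pytest tests/ --cov=src --cov-report=term-missing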

Success criteria:

  • All new functionality has tests
  • Test names follow project conventions
  • Tests are isolated and deterministic
  • No regressions in existing tests