integration-testing-performance

from michaellperry

Integration testing performance optimization, test parallelization, cleanup strategies, and CI/CD categorization patterns. Use when optimizing test execution speed, managing test data, or structuring tests for automated pipelines.

Updated Jan 11, 2026

When & Why to Use This Skill

This Claude skill optimizes integration testing workflows by implementing parallelization, efficient data cleanup, and strategic CI/CD categorization. It helps developers significantly reduce test execution time, maintain clean test environments, and ensure high-quality software delivery through structured and scalable automated pipelines.

Use Cases

  • Accelerating slow integration test suites by enabling parallel execution with isolated database contexts and unique tenant IDs.
  • Automating data cleanup strategies using disposable scopes to prevent cross-test pollution and ensure environment stability.
  • Categorizing tests with traits to optimize CI/CD pipelines, enabling fast feedback on pull requests while maintaining comprehensive nightly builds.
  • Monitoring and optimizing memory-heavy or long-running tests to prevent performance regressions in the testing infrastructure.
  • Structuring test data management to use bulk creation for efficiency while minimizing the use of expensive shared fixtures.
name: integration-testing-performance
description: Integration testing performance optimization, test parallelization, cleanup strategies, and CI/CD categorization patterns. Use when optimizing test execution speed, managing test data, or structuring tests for automated pipelines.

Integration Testing Performance and Maintainability

Use when speeding up integration suites, keeping test data isolated and clean, and tailoring which tests run in each CI/CD stage.

When to use

  • Parallelizing integration tests with isolated tenants/DbContexts
  • Adding cleanup for shared containers or fixtures
  • Tagging tests for selective CI/CD execution (PR vs main vs nightly vs deploy)
  • Tracking slow/memory-heavy tests and optimizing data setup
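For the tagging scenario above, the usual xUnit mechanism is the `[Trait]` attribute. A minimal sketch, assuming xUnit is the test framework (the class and test names here are illustrative, not from the skill):

```csharp
using Xunit;

public class BasketIntegrationTests
{
    // Fast test: runs in the PR gate and every later stage.
    [Fact]
    [Trait("Category", "Integration")]
    [Trait("Speed", "Fast")]
    public void AddTicketToBasket_PersistsLineItem()
    {
        // ... arrange/act/assert against an isolated tenant ...
    }

    // Slow test: excluded from PR runs, included in nightly builds.
    [Fact]
    [Trait("Category", "Integration")]
    [Trait("Speed", "Slow")]
    public void BulkImport_ManyTickets_CompletesWithinBudget()
    {
        // ... bulk-create data, then assert timing/memory budgets ...
    }
}
```

Applying the same trait names consistently across the suite is what lets pipeline stages filter on them later.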

Core principles

  • Parallel by default with unique tenant IDs and fresh scopes per test
  • Clean up created data (disposable scopes first; manual cleanup as fallback)
  • Tag tests with traits to slice by category, speed, feature, or environment; align pipeline filters with those traits
  • Monitor execution time and memory; fail fast on regressions
  • Bulk-create data when needed; share expensive fixtures sparingly
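The first two principles above can be combined in one per-test scope: each test gets a unique tenant ID, and disposing the scope deletes only that tenant's data. A sketch, assuming a helper like `TestDatabase` exists (the type and method names are assumptions for illustration):

```csharp
using System;
using System.Threading.Tasks;

// One scope per test: a fresh tenant ID isolates the test's rows,
// and DisposeAsync cleans up only those rows afterward.
public sealed class TenantScope : IAsyncDisposable
{
    public Guid TenantId { get; } = Guid.NewGuid(); // unique per test, so parallel runs never collide

    public async ValueTask DisposeAsync()
    {
        // Hypothetical cleanup helper; delete only this tenant's data.
        await TestDatabase.DeleteTenantDataAsync(TenantId);
    }
}
```

A test then wraps its body in `await using var scope = new TenantScope();`, which keeps cleanup automatic even when assertions fail.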

Resources

Default locations

  • Integration tests: tests/GloboTicket.IntegrationTests
  • Shared fixtures/test data helpers: tests/GloboTicket.IntegrationTests/Fixtures or Helpers
  • CI filters: pipeline scripts or `dotnet test` arguments in scripts/bash or scripts/powershell
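Those CI filters typically reduce to `dotnet test --filter` invocations keyed on the traits. An illustrative sketch, assuming the trait names `Category` and `Speed` are used as tagged above:

```shell
# PR gate: fast integration tests only, for quick feedback
dotnet test tests/GloboTicket.IntegrationTests \
  --filter "Category=Integration&Speed=Fast"

# Nightly build: the full integration suite, slow tests included
dotnet test tests/GloboTicket.IntegrationTests \
  --filter "Category=Integration"
```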

Validation checklist

  • Parallelization enabled in csproj and safe (isolated tenant IDs, no shared state)
  • Cleanup runs (disposable scopes or explicit) to prevent cross-test pollution
  • Traits applied consistently; pipeline filters match categories/speed
  • Long-running tests have timeouts; memory checks guard bulk ops where relevant
  • Bulk data creation uses single round trips; shared fixtures dispose correctly
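The checklist's last item, single-round-trip bulk creation, can be sketched with an EF Core context (the `GloboTicketContext` and `Venue` names are illustrative assumptions, not from the skill):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public static class TestDataSeeder
{
    // Seed 100 venues with one database round trip instead of 100.
    public static async Task SeedVenuesAsync(GloboTicketContext context, Guid tenantId)
    {
        var venues = Enumerable.Range(0, 100)
            .Select(i => new Venue { Name = $"Venue {i}", TenantId = tenantId })
            .ToList();

        context.Venues.AddRange(venues);   // tracked in memory, no I/O yet
        await context.SaveChangesAsync();  // one batched round trip
    }
}
```

Calling `SaveChangesAsync` once at the end, rather than per entity, is what keeps setup time flat as the seed count grows.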