Testing Strategy

You are planning the testing strategy for: $ARGUMENTS

A test suite is an investment. It pays dividends when it catches regressions before they reach users, and it creates a debt when it slows down changes without adding confidence. The goal is maximum confidence for minimum maintenance cost.


The Test Pyramid

          /\
         /E2E\        ← few, slow, test user journeys
        /------\
       / Integr.\     ← some, medium speed, test real slices
      /----------\
     / Unit Tests \   ← many, fast, test logic in isolation
    /--------------\

The pyramid is a ratio. If you have more E2E tests than unit tests, you have an inverted pyramid — it will be slow, flaky, and expensive to maintain.

Target distribution:

  • Unit: 60-70% of test count
  • Integration: 20-30%
  • E2E: 5-10% (critical journeys only)

1. Unit Tests

What to unit test

  • Every function with meaningful logic (conditionals, loops, calculations)
  • Every edge case that could cause different output
  • Every error/exception path
  • Business rules and domain logic
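
The list above can be sketched as a plain PHP unit test covering the happy path, a boundary, and the error path. The `shippingFee` function and its thresholds are hypothetical, used only to illustrate the idea:

```php
<?php
// Hypothetical business rule: orders of 50 or more ship free,
// otherwise a flat fee applies. Names and thresholds are illustrative.
function shippingFee(float $orderTotal): float
{
    if ($orderTotal < 0) {
        throw new InvalidArgumentException('Order total cannot be negative');
    }
    return $orderTotal >= 50.0 ? 0.0 : 4.95;
}

// Happy path, boundary values, and the error path each get a test.
assert(shippingFee(100.0) === 0.0);   // happy path
assert(shippingFee(50.0) === 0.0);    // boundary: exactly at threshold
assert(shippingFee(49.99) === 4.95);  // boundary: just below threshold

try {
    shippingFee(-1.0);
    assert(false, 'Expected exception was not thrown');
} catch (InvalidArgumentException $e) {
    // error path covered
}
```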

What NOT to unit test

  • Framework internals (the ORM's save() method)
  • Simple getters and setters with no logic
  • Constants and pure data objects
  • Code that is only meaningful in an integrated context

Mocking principles

Mock external dependencies, not internal ones.

Mock:

  • HTTP clients (third-party APIs)
  • Email/SMS senders
  • Queue publishers
  • File system (in most cases)
  • The current time (for time-dependent logic)
  • Random number generators

Do NOT mock:

  • Your own domain services (test them with real instances)
  • In-memory data structures
  • Pure functions
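
Mocking the current time usually means injecting it. Here is one sketch, assuming a hypothetical `Clock` interface that production code uses instead of calling `time()` directly:

```php
<?php
// Hypothetical Clock abstraction: code under test asks the clock for
// "now", so tests can freeze time at any instant they like.
interface Clock
{
    public function now(): DateTimeImmutable;
}

final class FrozenClock implements Clock
{
    public function __construct(private DateTimeImmutable $now) {}

    public function now(): DateTimeImmutable
    {
        return $this->now;
    }
}

// Illustrative time-dependent logic: a token valid for 15 minutes.
function tokenIsExpired(DateTimeImmutable $issuedAt, Clock $clock): bool
{
    return $clock->now() > $issuedAt->modify('+15 minutes');
}

$issuedAt = new DateTimeImmutable('2024-01-01 12:00:00');

$before = new FrozenClock(new DateTimeImmutable('2024-01-01 12:10:00'));
$after  = new FrozenClock(new DateTimeImmutable('2024-01-01 12:20:00'));

assert(tokenIsExpired($issuedAt, $before) === false);
assert(tokenIsExpired($issuedAt, $after) === true);
```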

The mock smell test: If you need to mock 5+ things to test a function, the function has too many dependencies. Refactor first.


2. Integration Tests

What to integration test

  • Controller → Service → Repository → Database (the full slice)
  • Queue consumers (enqueue a message, verify the DB state after processing)
  • Authentication/authorisation flows (real middleware, real token, real DB)
  • Caching behaviour (cache miss → DB hit, cache hit → no DB hit)

Database integration testing rules

  • Use a real database of the same engine as production (never mock the DB)
  • Use a separate test database, never production data
  • Wrap each test in a transaction and roll back after — keeps tests independent and fast
  • Seed only the data each test needs — avoid a giant fixture file

// Good: each test sets up exactly what it needs
public function test_user_can_place_order(): void
{
    $user = User::factory()->create(['credit' => 100]);
    $product = Product::factory()->create(['price' => 50, 'stock' => 10]);

    $response = $this->actingAs($user)->post('/api/orders', ['product_id' => $product->id]);

    $response->assertStatus(201);
    $this->assertDatabaseHas('orders', ['user_id' => $user->id]);
    $this->assertDatabaseHas('products', ['id' => $product->id, 'stock' => 9]);
}
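
The transaction-per-test rule can be sketched framework-free with PDO and an in-memory SQLite database (the table and data here are illustrative):

```php
<?php
// Illustrative: each test begins a transaction and rolls it back,
// so the database returns to a clean state between tests.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)');

// --- test body ---
$pdo->beginTransaction();
$pdo->exec('INSERT INTO orders (user_id) VALUES (42)');
assert((int) $pdo->query('SELECT COUNT(*) FROM orders')->fetchColumn() === 1);

// --- teardown ---
$pdo->rollBack();

// The insert is gone; the next test starts from an empty table.
assert((int) $pdo->query('SELECT COUNT(*) FROM orders')->fetchColumn() === 0);
```

Most frameworks can do this automatically per test; the point is that rollback is much faster than truncating or re-migrating between tests.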

3. End-to-End Tests

What to E2E test

Only the critical user journeys — the ones where a failure means the business is broken:

  • User signup and login
  • Purchase / checkout flow
  • Password reset
  • Core feature completion (whatever the primary value of the app is)

What NOT to E2E test

  • Every edge case (E2E tests are too slow and brittle for this)
  • Internal business rules (unit test those)
  • Error messages (integration test those)

E2E test writing rules

  • Tests must be deterministic — same result every run
  • Use data-testid attributes for selectors, not CSS classes (which change with design updates)
  • Each test must clean up its data (or use isolated test accounts)
  • Keep tests short and focused — one journey, not ten features chained together

4. Test Naming — The Specification Style

A test name is the first documentation a future developer reads when the test fails. It must tell them:

  1. What is being tested?
  2. Under what scenario or condition?
  3. What is the expected result?

  [module_or_class]__[scenario]__[expected_result]

Examples:
  UserRegistration__with_duplicate_email__returns_conflict_error
  Cart__adding_item_when_stock_is_zero__throws_out_of_stock_exception
  Invoice__calculate_total_with_vat_exempt_customer__excludes_tax
  PasswordReset__with_expired_token__returns_401_unauthorized
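
Applied to a plain PHP test function, the name alone documents the failure (the cart logic below is hypothetical):

```php
<?php
// Hypothetical cart logic, used only to illustrate the naming convention.
function addToCart(int $stock): string
{
    if ($stock === 0) {
        throw new RuntimeException('out of stock');
    }
    return 'added';
}

function cart__adding_item_when_stock_is_zero__throws_out_of_stock_exception(): void
{
    try {
        addToCart(0);
        assert(false, 'Expected exception was not thrown');
    } catch (RuntimeException $e) {
        assert($e->getMessage() === 'out of stock');
    }
}

cart__adding_item_when_stock_is_zero__throws_out_of_stock_exception();
```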

5. Coverage Targets and What They Mean

  Coverage   What it means
  0%         You are testing nothing
  50%        You are testing happy paths
  80%        You are covering most branches — common baseline
  90%        You are covering edge cases — good for critical code
  100%       You are covering every line — may include diminishing returns

Coverage is a floor, not a goal. 80% coverage with bad assertions is worse than 60% coverage with tight assertions.

Mutation testing (if available: Infection for PHP, Stryker for JS/TS) is the real measurement — it verifies your assertions actually catch bugs.


6. Testing Checklist for a Feature

Before marking any feature complete:

  • [ ] Unit test: happy path
  • [ ] Unit test: each distinct error/exception path
  • [ ] Unit test: boundary values (empty, null, max, min)
  • [ ] Integration test: primary API endpoint returns correct response
  • [ ] Integration test: validation errors return 422 with correct field errors
  • [ ] Integration test: unauthorised access returns 401/403
  • [ ] E2E test: if this is a critical user journey
  • [ ] Regression test: for any bug fix
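
The boundary-value bullet can be made concrete with a small data-driven loop (the username validator and its 3-20 character rule are hypothetical):

```php
<?php
// Hypothetical validator: usernames must be 3-20 characters.
function isValidUsername(?string $name): bool
{
    if ($name === null) {
        return false;
    }
    $length = mb_strlen($name);
    return $length >= 3 && $length <= 20;
}

// Data-driven boundary cases: null, empty, min-1, min, max, max+1.
$cases = [
    [null, false],
    ['', false],
    ['ab', false],                  // one below minimum
    ['abc', true],                  // minimum
    [str_repeat('a', 20), true],    // maximum
    [str_repeat('a', 21), false],   // one above maximum
];

foreach ($cases as [$input, $expected]) {
    assert(isValidUsername($input) === $expected);
}
```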

7. Test Maintenance Rules

  • Delete tests that test nothing — tests that assert true === true or just call a function without asserting the result
  • Fix flaky tests immediately — a flaky test creates false confidence and alert fatigue
  • Treat test code like production code — read it in code reviews, keep it clean
  • Do not use sleep() in tests — it hides a race condition or async issue. Fix the root cause.
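
Where a test genuinely has to wait for an asynchronous effect, a bounded polling helper is one alternative to a blind sleep() (the names here are illustrative):

```php
<?php
// Poll a condition until it is true or a deadline passes, instead of
// sleeping for a fixed, hopeful duration.
function waitUntil(callable $condition, float $timeoutSeconds = 2.0): bool
{
    $deadline = microtime(true) + $timeoutSeconds;
    while (microtime(true) < $deadline) {
        if ($condition()) {
            return true;
        }
        usleep(10_000); // brief pause between polls, not a blind sleep
    }
    return false;
}

// Illustrative usage: the "async" work completes on the third poll.
$polls = 0;
$done = waitUntil(function () use (&$polls): bool {
    return ++$polls >= 3;
});

assert($done === true);
assert($polls === 3);
```

The test then fails fast with a clear timeout when the condition never holds, instead of passing or failing depending on machine speed.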