Intent-Driven Testing: Validating the What, Not Just the How

Intent-driven testing shifts validation from implementation details to declared intent, ensuring your system works as specified rather than just as coded. This approach generates comprehensive test suites from your intent files, introducing the powerful concept of specification coverage.

Ben Houston · May 7, 2025 · 14 min read

Traditional testing focuses on implementation: we write code, then we write tests to validate that the code works as implemented. But this approach has a fundamental flaw—it only verifies that your system behaves according to how it was coded, not necessarily according to how it should function.

Intent-driven testing flips this paradigm. By deriving tests directly from intent specifications, we can validate that implementations actually fulfill their declared purpose.

The Problem with Traditional Testing

In implementation-centric development, tests often suffer from several critical limitations:

Test Theater

Tests validate what was implemented, not what was intended. This creates an illusion of quality that masks the core issue: the implementation might be wrong, even if all tests pass.

Specification Gaps

Manual test creation inevitably misses edge cases or scenarios that were part of the original requirements but never translated into test cases.

Maintenance Burden

As implementations evolve, test suites require parallel maintenance. This cost scales with codebase size, leading teams to either sacrifice velocity or test coverage.

Misaligned Incentives

Developers test what's easy to test rather than what's important to test, resulting in artificially high code coverage that doesn't reflect true quality.

Intent-Driven Testing: A New Paradigm

Intent-driven testing starts from a simple premise: if we've declared what a system should do in structured intent files, we can automatically derive what should be tested.

Core Principles

  1. Test against intent, not implementation — Validate that the system meets its declared purpose, not just that code runs without errors
  2. Generate tests from specifications — Use intent files as the source of truth for test generation
  3. Measure specification coverage, not code coverage — Track which declared behaviors are validated, not which lines of code are executed
  4. Maintain tests through intent updates — When specifications change, tests automatically evolve

How Intent-Driven Testing Works

Declary implements intent-driven testing through a flow integrated with the main build process:

1. Specification to Test File Generation

When you run declary build, Declary automatically generates test files for each intent file that has testing enabled:

# Button.intent.yaml
builder: react/component
description: A customizable button component
testing: true # Enable test generation
instructions: |
  A button component that supports different sizes (sm, md, lg),
  variants (primary, secondary, ghost), and states (default, hover, focused, disabled).
  The button should be accessible and include proper ARIA attributes.
  It should prevent double-clicks by default with a 300ms debounce.

This generates both the implementation and a corresponding test definition:

# Button.test.yaml
target: Button.intent.yaml # Reference to the intent file being tested
setup: |
  This test suite validates the Button component's visual appearance,
  behavior, and accessibility across different variants and states.
tests:
  - name: 'Size Variants'
    setup: |
      Render Button component with size="sm", size="md", and size="lg" variants.
    assertions:
      - 'Small button has height of 32px'
      - 'Medium button has height of 40px'
      - 'Large button has height of 48px'

  - name: 'Visual Variants'
    setup: |
      Render Button with variant="primary", variant="secondary", and variant="ghost".
    assertions:
      - 'Primary button uses the primary color from theme'
      - 'Secondary button uses a lighter background color'
      - 'Ghost button has no background but shows border on hover'

  - name: 'Interactive States'
    setup: |
      Render Button in default, hovered, focused, and disabled states.
    assertions:
      - 'Default state renders with base styles'
      - 'Hover state shows appropriate visual feedback'
      - 'Focused state includes focus ring for accessibility'
      - 'Disabled state prevents clicks and shows disabled styling'

  - name: 'Accessibility'
    setup: |
      Render Button with various accessibility attributes and states.
    assertions:
      - "Button has role='button' when not using native button element"
      - 'Disabled buttons communicate disabled state via aria-disabled'
      - 'Buttons have sufficient color contrast in all states'

  - name: 'Behavior'
    setup: |
      Render Button with onClick handler and trigger various click events.
    assertions:
      - 'Clicking button triggers onClick handler'
      - 'Rapid clicks within 300ms only trigger one click event'
      - "Disabled button doesn't trigger onClick when clicked"

The target field explicitly links the test file back to its originating intent file, establishing a clear traceability chain.

2. Test Code Generation

As part of the same declary build process, Declary then generates the actual test code from these test files:

// Button.test.tsx (automatically generated)
import { render, screen, fireEvent } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { Button } from './Button';

describe('Button Component', () => {
  describe('Size Variants', () => {
    it('Small button has height of 32px', () => {
      render(<Button size="sm">Click Me</Button>);
      const button = screen.getByRole('button', { name: /click me/i });
      expect(button).toHaveStyle({ height: '32px' });
    });

    it('Medium button has height of 40px', () => {
      render(<Button size="md">Click Me</Button>);
      const button = screen.getByRole('button', { name: /click me/i });
      expect(button).toHaveStyle({ height: '40px' });
    });

    it('Large button has height of 48px', () => {
      render(<Button size="lg">Click Me</Button>);
      const button = screen.getByRole('button', { name: /click me/i });
      expect(button).toHaveStyle({ height: '48px' });
    });
  });

  // Additional test suites for other scenarios...
});
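The "Behavior" suite above asserts that rapid clicks within 300ms trigger only one click event. Stripped of rendering concerns, that guard can be sketched in plain TypeScript (a hypothetical illustration with a made-up name, `createClickGuard`, not Declary's generated output):

```typescript
// Hypothetical sketch of the 300ms double-click guard declared in the intent.
// Accepting an explicit timestamp keeps the guard deterministic and testable.
function createClickGuard(onClick: () => void, debounceMs = 300) {
  let lastClick = -Infinity;
  return (now: number = Date.now()): void => {
    if (now - lastClick >= debounceMs) {
      lastClick = now;
      onClick(); // fires only when the debounce window has elapsed
    }
  };
}

let clicks = 0;
const click = createClickGuard(() => clicks++, 300);
click(0);   // fires
click(100); // swallowed: within 300ms of the last accepted click
click(299); // swallowed
click(301); // fires: 301ms after the click accepted at t=0
console.log(clicks); // 2
```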

3. Test Execution with Standard Tools

Rather than introducing custom test runners, Declary integrates with your existing testing infrastructure:

# Use your standard test runner
npm test

# Or for specific components
jest Button.test.tsx

4. Specification Coverage Analysis

Declary can analyze the relationship between intent files and test files to calculate specification coverage, independent of test execution:

# Generate specification coverage report
declary spec-coverage

This analyzes your test files against their target intent specifications to determine how completely the tests cover the declared intent, producing a report:

Specification Coverage Report
----------------------------------------------------
✅ Button.intent.yaml: 16/16 assertions validated (100%)
✅ UserRegistration.intent.yaml: 12/12 assertions validated (100%)
⚠️ PaymentProcessor.intent.yaml: 14/18 assertions validated (78%)
  Missing coverage:
  - Exponential backoff for retry attempts
  - Fee disclosure requirements
  - Transaction logging validation
  - Bank transfer processing

Overall Specification Coverage: 42/46 assertions (91%)
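The overall number in that report is a straightforward tally: validated assertions over declared assertions. A minimal sketch of the calculation (hypothetical `SpecCoverage` shape, not Declary's internals):

```typescript
// Hypothetical per-intent-file coverage record.
interface SpecCoverage {
  intentFile: string;
  declared: number;  // assertions declared in the test file
  validated: number; // assertions actually covered by tests
}

// Sum across files and round to a whole percentage.
function overallCoverage(files: SpecCoverage[]) {
  const declared = files.reduce((sum, f) => sum + f.declared, 0);
  const validated = files.reduce((sum, f) => sum + f.validated, 0);
  return { validated, declared, percent: Math.round((validated / declared) * 100) };
}

const report = overallCoverage([
  { intentFile: 'Button.intent.yaml', declared: 16, validated: 16 },
  { intentFile: 'UserRegistration.intent.yaml', declared: 12, validated: 12 },
  { intentFile: 'PaymentProcessor.intent.yaml', declared: 18, validated: 14 },
]);
console.log(`${report.validated}/${report.declared} assertions (${report.percent}%)`);
// → 42/46 assertions (91%)
```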

Beyond Components: Testing Different Types of Intent

Intent-driven testing works seamlessly with all types of intent files:

API Endpoints

# UserRegistration.intent.yaml
builder: api/endpoint
description: User registration endpoint
testing: true # Enable test generation
instructions: |
  An API endpoint that registers new users.
  It should validate email format and password strength.
  Passwords must be at least 8 characters with one number and special character.
  If the email already exists, return a 409 Conflict status.
  On success, return 201 Created with the user ID.

Generated test file:

# UserRegistration.test.yaml
target: UserRegistration.intent.yaml
setup: |
  This test suite validates the user registration endpoint
  with various input combinations and expected responses.
tests:
  - name: 'Successful Registration'
    setup: |
      Send POST request to "/api/users" with valid data:
      {
        "email": "new@example.com",
        "password": "Strong!123",
        "name": "New User"
      }
    assertions:
      - 'Response status code is 201'
      - 'Response includes user ID'
      - 'User record is created in database'

  - name: 'Email Validation'
    setup: |
      Send POST request to "/api/users" with invalid email:
      {
        "email": "invalid-email",
        "password": "Strong!123",
        "name": "New User"
      }
    assertions:
      - 'Response status code is 400'
      - 'Response includes validation error for email'
      - 'No user record is created'

  # Additional tests...
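The password rule in the intent ("at least 8 characters with one number and special character") is exactly the kind of declared behavior these assertions pin down. A minimal sketch of that rule (hypothetical helper, not part of Declary):

```typescript
// Hypothetical check for the declared password rule:
// at least 8 characters, at least one digit, at least one special character.
function isStrongPassword(pw: string): boolean {
  return pw.length >= 8 && /\d/.test(pw) && /[^A-Za-z0-9]/.test(pw);
}

console.log(isStrongPassword('Strong!123')); // true
console.log(isStrongPassword('weakpass'));   // false: no digit, no special char
```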

Database Queries

# RevenueReport.intent.yaml
builder: sql/query
description: Monthly revenue report query
testing: true
instructions: |
  A SQL query that calculates monthly revenue by product category.
  It should include total revenue, order count, and average order value.
  Results should be filterable by date range and sortable by revenue.
  Performance should be optimized for large datasets.

Generated test file:

# RevenueReport.test.yaml
target: RevenueReport.intent.yaml
setup: |
  This test suite validates the monthly revenue report query
  against a database with sample order data across multiple months.
tests:
  - name: 'Basic Functionality'
    setup: |
      Create sample orders across multiple months and categories.
      Set startDate to "2025-01-01" and endDate to "2025-03-31".
    assertions:
      - 'Results include all expected months in range'
      - 'Results include all product categories with orders'
      - 'Total revenue matches sum of order amounts'
      - 'Order count matches actual order records'
      - 'Average order value calculation is correct'

  # Additional tests...
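The assertions above reduce to arithmetic over order rows, so a test fixture can mirror the aggregation being asserted. A sketch of that mirror (hypothetical types; the real query runs in SQL against the database):

```typescript
// Hypothetical in-memory mirror of the aggregation the query's assertions check.
interface Order {
  category: string;
  amount: number;
}

function revenueByCategory(orders: Order[]) {
  const byCategory = new Map<string, { total: number; count: number }>();
  for (const order of orders) {
    const entry = byCategory.get(order.category) ?? { total: 0, count: 0 };
    entry.total += order.amount;
    entry.count += 1;
    byCategory.set(order.category, entry);
  }
  return [...byCategory.entries()].map(([category, { total, count }]) => ({
    category,
    totalRevenue: total,              // "total revenue matches sum of order amounts"
    orderCount: count,                // "order count matches actual order records"
    avgOrderValue: total / count,     // "average order value calculation is correct"
  }));
}

const rows = revenueByCategory([
  { category: 'books', amount: 10 },
  { category: 'books', amount: 30 },
  { category: 'games', amount: 50 },
]);
console.log(rows[0]); // { category: 'books', totalRevenue: 40, orderCount: 2, avgOrderValue: 20 }
```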

Specification Coverage vs. Code Coverage

Traditional code coverage measures which lines of code are executed during tests, but this metric has severe limitations:

  1. It measures execution, not validation — A line can execute without its behavior being verified
  2. It focuses on implementation, not requirements — 100% code coverage doesn't mean all requirements are tested
  3. It treats all code equally — Critical business logic and boilerplate have the same weight
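The first limitation is easy to demonstrate. In the hypothetical snippet below, a buggy fee calculation gets 100% line coverage from a "test" that asserts nothing:

```typescript
// Declared behavior (hypothetical): fee = 3% of the amount plus a 30-cent flat fee.
// Buggy implementation: the flat fee is added twice.
function calculateFeeCents(amountCents: number): number {
  return Math.round(amountCents * 0.03) + 30 + 30; // bug: flat fee duplicated
}

// "Test theater": executes every line (100% code coverage), validates nothing.
calculateFeeCents(10_000);

// An assertion derived from the declared behavior exposes the bug:
const fee = calculateFeeCents(10_000); // declared behavior expects 330
console.log(fee); // 360
```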

Specification coverage addresses these limitations by measuring which declared behaviors are validated, not which lines run.

Comparative Example

Consider a payment processing component with this intent:

# PaymentProcessor.intent.yaml
builder: service/processor
description: Payment processing service
testing: true
instructions: |
  A service that processes payments through multiple providers.
  It should support credit cards, PayPal, and bank transfers.
  Failed payments should be retried up to 3 times with exponential backoff.
  All transactions should be logged for audit purposes.
  Payment methods with fees should clearly indicate the fee amount.

With traditional testing:

  • You might achieve 95% code coverage
  • But miss testing the retry logic with exponential backoff
  • And forget to validate fee disclosure requirements

With specification coverage:

  • You explicitly track validation of each declared behavior
  • Tests are generated for retry logic and fee disclosure
  • Coverage gaps directly align with specification gaps
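The retry policy those bullets refer to is concrete enough to sketch. A hypothetical outline of "up to 3 retries with exponential backoff" (made-up names; the driver is synchronous for illustration, so delays are listed rather than awaited):

```typescript
// Hypothetical sketch of the declared retry policy.
// backoffSchedule lists the wait (ms) before each retry attempt.
function backoffSchedule(maxRetries = 3, baseDelayMs = 100): number[] {
  return Array.from({ length: maxRetries }, (_, i) => baseDelayMs * 2 ** i);
}

// Synchronous driver for illustration; a real service would sleep
// backoffSchedule()[attempt] ms between attempts.
function processWithRetry<T>(attempt: () => T, maxRetries = 3): T {
  let lastError: unknown;
  for (let tries = 0; tries <= maxRetries; tries++) {
    try {
      return attempt();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

console.log(backoffSchedule()); // [ 100, 200, 400 ]

let attempts = 0;
const result = processWithRetry(() => {
  attempts++;
  if (attempts < 3) throw new Error('provider unavailable');
  return 'payment processed';
});
console.log(`${result} after ${attempts} attempts`); // payment processed after 3 attempts
```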

Metrics That Matter

Intent-driven testing introduces more meaningful metrics:

  1. Assertion Coverage — Percentage of declared assertions that pass
  2. Scenario Coverage — Percentage of defined scenarios that are tested
  3. Edge Case Coverage — Percentage of boundary conditions that are validated
  4. Behavioral Coverage — Percentage of documented behaviors that are verified

These metrics more closely align with what actually matters: does your system work as intended?

Implementing Intent-Driven Testing in Your Workflow

Because test generation is part of the standard build, adopting intent-driven testing with Declary is straightforward:

1. Enable Testing in Intent Files (Optional)

You can add the testing: true property to your intent files for automatic test generation:

# Component.intent.yaml
builder: react/component
description: My component
testing: true # Enable test generation
instructions: |
  Component description...

2. Build Your Project

Run the standard build command:

declary build

This generates both implementation code and test files.

3. Create or Customize Test Files

You can either use automatically generated test files or create your own:

# Manually created or edited test file
# Component.test.yaml
target: Component.intent.yaml
setup: |
  Test suite for component validation
tests:
  - name: 'Feature A'
    setup: |
      Configure component with specific props
    assertions:
      - 'Component renders correctly'
      - 'Component handles state changes'

The test files are designed to be human-readable, reviewable, and customizable.
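Because these files are plain YAML, other tooling can inspect them too. A sketch of counting declared assertions, the denominator of specification coverage (hypothetical types mirroring the file shape above, not Declary's API):

```typescript
// Hypothetical types mirroring the .test.yaml shape shown above.
interface TestCase {
  name: string;
  setup: string;
  assertions: string[];
}

interface TestFile {
  target: string;
  setup: string;
  tests: TestCase[];
}

// Total declared assertions across all scenarios in one test file.
function countAssertions(file: TestFile): number {
  return file.tests.reduce((n, t) => n + t.assertions.length, 0);
}

const file: TestFile = {
  target: 'Component.intent.yaml',
  setup: 'Test suite for component validation',
  tests: [
    {
      name: 'Feature A',
      setup: 'Configure component with specific props',
      assertions: ['Component renders correctly', 'Component handles state changes'],
    },
  ],
};
console.log(countAssertions(file)); // 2
```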

4. Run Tests with Standard Tools

Use your existing test runner:

npm test

5. Analyze Specification Coverage

Generate a specification coverage report to see how well your tests cover your intent specifications:

declary spec-coverage

This analysis compares your .test.yaml files against their target intent files to determine how comprehensively the tests cover the declared behaviors, independent of whether the tests themselves pass or fail.

Incremental Adoption Strategies

As with other Declary features, intent-driven testing can be adopted incrementally:

  1. Start small — Enable testing on one component or endpoint
  2. Manual review first — Initially review generated test files without running them
  3. Focus on critical paths — Prioritize intent-driven testing for core business logic
  4. Hybrid approach — Mix intent-driven and hand-written tests

The goal isn't to replace all testing immediately, but to gradually shift validation focus from implementation to intent.

Benefits Beyond Validation

Intent-driven testing provides benefits that extend beyond simply validating code:

Specification Refinement

Generated test files often reveal ambiguities or gaps in intent specifications, leading to improved documentation.

Knowledge Transfer

Test files serve as executable documentation that helps new team members understand system requirements.

Regression Prevention

When implementations change, intent-driven tests ensure that the original purpose is still fulfilled.

Design Feedback

The process of defining testable assertions often leads to more robust and well-thought-out designs.

Real-World Example: Authentication Flow

Let's examine how intent-driven testing applies to a complex authentication flow:

# Authentication.intent.yaml
builder: feature/authentication
description: User authentication system
testing: true
instructions: |
  A comprehensive authentication system supporting:
  - Email/password login
  - Social login (Google, GitHub)
  - Multi-factor authentication (SMS, Authenticator app)
  - Session management with refreshable tokens
  - Account recovery via email

  Security requirements:
  - Passwords stored with Argon2 hashing
  - Rate limiting for login attempts
  - Device fingerprinting for suspicious activity detection
  - Audit logging for all authentication events

This complex feature would traditionally require dozens of manually created tests. With intent-driven testing:

  1. Declary analyzes the intent file to identify testable behaviors
  2. It generates a structured test file with scenarios for each authentication path
  3. The test file includes specific security validations
  4. Test code is generated with appropriate mocks and fixtures
  5. Specification coverage tracks validation of each authentication flow and security requirement

The result is a comprehensive test suite that validates the system works as intended, not just as implemented.
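One of those security requirements, rate limiting for login attempts, is small enough to sketch directly (a hypothetical illustration, made deterministic by passing timestamps explicitly; not Declary output):

```typescript
// Hypothetical sketch of per-account login rate limiting:
// at most maxAttempts within a rolling windowMs.
function createRateLimiter(maxAttempts = 5, windowMs = 60_000) {
  const attempts = new Map<string, number[]>();
  return (account: string, now: number = Date.now()): boolean => {
    // Keep only attempts still inside the rolling window.
    const recent = (attempts.get(account) ?? []).filter((t) => now - t < windowMs);
    if (recent.length >= maxAttempts) {
      attempts.set(account, recent);
      return false; // blocked: too many recent attempts
    }
    recent.push(now);
    attempts.set(account, recent);
    return true; // attempt allowed
  };
}

const allow = createRateLimiter(5, 60_000);
const results = Array.from({ length: 6 }, (_, i) => allow('user@example.com', i * 1000));
console.log(results); // [ true, true, true, true, true, false ]
```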

Conclusion: Testing What Matters

Intent-driven testing represents a fundamental shift in how we validate software. By deriving tests directly from declared intent, we ensure that our systems fulfill their purpose, not just execute code without errors.

This approach addresses the core limitations of traditional testing:

  • It aligns validation with requirements, not implementation details
  • It automatically adapts as specifications evolve
  • It provides meaningful coverage metrics that reflect actual quality
  • It reduces the maintenance burden of keeping tests in sync with code

As software development increasingly leverages AI for implementation, the focus naturally shifts to clearly specifying intent. Intent-driven testing ensures that what gets built actually does what it's supposed to do—bridging the gap between specification and validation.

In a world where implementation is increasingly automated, testing that your system fulfills its intended purpose becomes the critical quality measure. Intent-driven testing provides the bridge between "what we want" and "what we built"—ensuring they remain aligned throughout the development lifecycle.