End-to-End Testing with Playwright
This directory contains the End-to-End (E2E) test suite for mockupAWS using Playwright.
Table of Contents
- Overview
- Setup
- Running Tests
- Test Structure
- Test Data & Fixtures
- Visual Regression Testing
- Best Practices
- Troubleshooting
Overview
The E2E test suite provides comprehensive testing of the mockupAWS application, covering:
- Scenario CRUD Operations: Creating, reading, updating, and deleting scenarios
- Log Ingestion: Sending test logs and verifying metrics updates
- Report Generation: Generating and downloading PDF and CSV reports
- Scenario Comparison: Comparing multiple scenarios side-by-side
- Navigation: Testing all routes and responsive design
- Visual Regression: Ensuring UI consistency across browsers and viewports
Setup
Prerequisites
- Node.js 18+ installed
- Backend API running on http://localhost:8000
- Frontend development server running
Installation
Playwright and its dependencies are already configured in the project. To install browsers:
# Install Playwright browsers
npx playwright install
# Install additional dependencies for browser testing
npx playwright install-deps
Environment Variables
Create a .env file in the frontend directory if needed:
# Optional: Override the API URL for tests
VITE_API_URL=http://localhost:8000/api/v1
# Optional: Set CI mode
CI=true
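These variables can then be wired into playwright.config.ts. A minimal illustrative fragment (the frontend port shown here is an assumption, not taken from this project):

```typescript
// playwright.config.ts (fragment; values illustrative)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Fail the run if test.only is accidentally committed, but only on CI
  forbidOnly: !!process.env.CI,
  use: {
    // Frontend dev server; the port is an assumption for illustration
    baseURL: 'http://localhost:5173',
  },
});
```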
Running Tests
NPM Scripts
The following npm scripts are available:
# Run all E2E tests in headless mode
npm run test:e2e
# Run tests with UI mode (interactive)
npm run test:e2e:ui
# Run tests in debug mode
npm run test:e2e:debug
# Run tests in headed mode (visible browser)
npm run test:e2e:headed
# Run tests in CI mode
npm run test:e2e:ci
Running Specific Tests
# Run a specific test file
npx playwright test scenario-crud.spec.ts
# Run tests matching a pattern
npx playwright test --grep "should create"
# Run tests in a specific browser
npx playwright test --project=chromium
# Run tests with specific tag
npx playwright test --grep "@critical"
Updating Visual Baselines
# Update all visual baseline screenshots
UPDATE_BASELINE=true npx playwright test visual-regression.spec.ts
Test Structure
e2e/
├── fixtures/ # Test data and fixtures
│ ├── test-scenarios.ts # Sample scenario data
│ └── test-logs.ts # Sample log data
├── screenshots/ # Visual regression screenshots
│ └── baseline/ # Baseline images
├── global-setup.ts # Global test setup
├── global-teardown.ts # Global test teardown
├── utils/
│ └── test-helpers.ts # Shared test utilities
├── scenario-crud.spec.ts # Scenario CRUD tests
├── ingest-logs.spec.ts # Log ingestion tests
├── reports.spec.ts # Report generation tests
├── comparison.spec.ts # Scenario comparison tests
├── navigation.spec.ts # Navigation and routing tests
├── visual-regression.spec.ts # Visual regression tests
└── README.md # This file
Test Data & Fixtures
Test Scenarios
The test-scenarios.ts fixture provides sample scenarios for testing:
import { testScenarios, newScenarioData } from './fixtures/test-scenarios';
// Use in tests
const scenario = await createScenarioViaAPI(request, newScenarioData);
Test Logs
The test-logs.ts fixture provides sample log data:
import { testLogs, logsWithPII, highVolumeLogs } from './fixtures/test-logs';
// Send logs to scenario
await sendTestLogs(request, scenarioId, testLogs);
API Helpers
Test utilities are available in utils/test-helpers.ts:
- createScenarioViaAPI() - Create a scenario via the API
- deleteScenarioViaAPI() - Delete a scenario via the API
- startScenarioViaAPI() - Start a scenario
- stopScenarioViaAPI() - Stop a scenario
- sendTestLogs() - Send test logs
- navigateTo() - Navigate to a page and wait for it to load
- waitForLoading() - Wait for loading states to resolve
- generateTestScenarioName() - Generate unique test names
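As an illustration of the pattern these helpers follow, here is a hypothetical sketch of createScenarioViaAPI (not the project's actual code): the local Requester interface stands in for Playwright's APIRequestContext, and the `id` response field is an assumption.

```typescript
// Sketch only; the real helper lives in utils/test-helpers.ts.
// "Requester" mimics the part of Playwright's APIRequestContext we need.
interface Requester {
  post(
    url: string,
    options: { data: unknown },
  ): Promise<{ ok(): boolean; status(): number; json(): Promise<any> }>;
}

const API_URL = 'http://localhost:8000/api/v1';

export async function createScenarioViaAPI(
  request: Requester,
  data: Record<string, unknown>,
): Promise<string> {
  const response = await request.post(`${API_URL}/scenarios`, { data });
  if (!response.ok()) {
    throw new Error(`Scenario creation failed with status ${response.status()}`);
  }
  const body = await response.json();
  return body.id; // assumed response field
}
```

Centralizing API calls like this keeps individual specs free of URL and error-handling boilerplate.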
Visual Regression Testing
How It Works
Visual regression tests capture screenshots of pages and components and compare them against baseline images. A test fails when the difference exceeds the configured threshold (0.2, i.e. 20%).
Running Visual Tests
# Run all visual regression tests
npx playwright test visual-regression.spec.ts
# Run tests for specific viewport
npx playwright test visual-regression.spec.ts --project="Mobile Chrome"
# Update baselines
UPDATE_BASELINE=true npx playwright test visual-regression.spec.ts
Screenshots Location
- Baseline: e2e/screenshots/baseline/
- Actual: e2e/screenshots/actual/
- Diff: e2e/screenshots/diff/
Adding New Visual Tests
test('new page should match baseline', async ({ page }) => {
  await navigateTo(page, '/new-page');
  await waitForLoading(page);

  const screenshot = await page.screenshot({ fullPage: true });
  expect(screenshot).toMatchSnapshot('new-page.png', {
    threshold: 0.2, // 20% threshold
  });
});
Best Practices
1. Use Data Attributes for Selectors
Prefer data-testid attributes over CSS selectors:
// In component
<button data-testid="submit-button">Submit</button>
// In test
await page.getByTestId('submit-button').click();
2. Wait for Async Operations
Always wait for async operations to complete:
await page.waitForResponse('**/api/scenarios');
await waitForLoading(page);
3. Clean Up Test Data
Use beforeAll/afterAll for setup and cleanup:
test.describe('Feature', () => {
  test.beforeAll(async ({ request }) => {
    // Create test data
  });

  test.afterAll(async ({ request }) => {
    // Clean up test data
  });
});
4. Use Unique Test Names
Generate unique names to avoid conflicts:
const testName = generateTestScenarioName('My Test');
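A minimal sketch of how such a helper might work (hypothetical implementation; the real one lives in utils/test-helpers.ts):

```typescript
// Hypothetical sketch of generateTestScenarioName: a timestamp plus a
// random suffix keeps names unique even across parallel workers.
export function generateTestScenarioName(prefix: string): string {
  const suffix = Math.random().toString(36).slice(2, 8);
  return `${prefix} ${Date.now()}-${suffix}`;
}
```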
5. Test Across Viewports
Test both desktop and mobile:
test('desktop view', async ({ page }) => {
  await setDesktopViewport(page);
  // ...
});

test('mobile view', async ({ page }) => {
  await setMobileViewport(page);
  // ...
});
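The viewport helpers might look like the following sketch (the exact sizes the project uses are assumptions; the local Resizable interface stands in for Playwright's Page):

```typescript
// Sketch only; real helpers live in utils/test-helpers.ts.
interface Resizable {
  setViewportSize(size: { width: number; height: number }): Promise<void>;
}

export async function setDesktopViewport(page: Resizable): Promise<void> {
  await page.setViewportSize({ width: 1280, height: 720 }); // common desktop size
}

export async function setMobileViewport(page: Resizable): Promise<void> {
  await page.setViewportSize({ width: 375, height: 667 }); // small-phone size
}
```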
Troubleshooting
Tests Timing Out
If tests timeout, increase the timeout in playwright.config.ts:
timeout: 90000, // Increase to 90 seconds
Flaky Tests
For flaky tests, use retries:
npx playwright test --retries=3
Or configure in playwright.config.ts:
retries: process.env.CI ? 2 : 0,
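Both settings can live together in playwright.config.ts; an illustrative fragment:

```typescript
// playwright.config.ts (fragment; values illustrative)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  timeout: 90_000,                  // per-test timeout (90 seconds)
  retries: process.env.CI ? 2 : 0,  // retry flaky tests on CI only
});
```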
Browser Not Found
If browsers are not installed:
npx playwright install
API Not Available
Ensure the backend is running:
# In project root
docker-compose up -d
# or
uvicorn src.main:app --reload --port 8000
Screenshot Comparison Fails
If visual tests fail due to minor differences:
- Check the diff image in e2e/screenshots/diff/
- Update the baseline if the change is intentional: UPDATE_BASELINE=true npx playwright test
- Adjust the threshold if needed: threshold: 0.3, // Increase to 30%
CI Integration
GitHub Actions Example
name: E2E Tests

on: [push, pull_request]

jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
      - name: Install dependencies
        run: npm ci
        working-directory: frontend
      - name: Install Playwright browsers
        run: npx playwright install --with-deps
        working-directory: frontend
      - name: Run E2E tests
        run: npm run test:e2e:ci
        working-directory: frontend
      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: playwright-report
          path: frontend/e2e-report/
Coverage Reporting
Playwright E2E tests can be integrated with code coverage tools. To enable coverage:
- Instrument your frontend code with Istanbul
- Configure Playwright to collect coverage
- Generate coverage reports
See Playwright Coverage Guide for details.
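The collection step can be sketched as a small helper that dumps Istanbul's coverage object after a test. This is a hypothetical illustration, assuming the frontend bundle was instrumented (e.g. with an Istanbul Vite/webpack plugin), which exposes coverage data on the page's `__coverage__` global; the local Evaluable interface stands in for Playwright's Page.

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Anything with the shape of Playwright Page's evaluate() works here.
interface Evaluable {
  evaluate<T>(fn: () => T): Promise<T>;
}

// Dump per-test Istanbul coverage into .nyc_output so a reporter such as
// `nyc report` can merge it later. No-op when the bundle is not instrumented.
export async function saveCoverage(
  page: Evaluable,
  outDir = '.nyc_output',
): Promise<void> {
  const coverage = await page.evaluate(
    () => (globalThis as any).__coverage__,
  );
  if (!coverage) return;
  fs.mkdirSync(outDir, { recursive: true });
  fs.writeFileSync(
    path.join(outDir, `coverage-${Date.now()}.json`),
    JSON.stringify(coverage),
  );
}
```

Calling this from an afterEach hook accumulates one JSON file per test, which standard Istanbul tooling can then merge into a single report.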
Contributing
When adding new E2E tests:
- Follow the existing test structure
- Use fixtures for test data
- Add proper cleanup in afterAll
- Include both positive and negative test cases
- Test across multiple viewports if UI-related
- Update this README with new test information
Support
For issues or questions:
- Check the Playwright Documentation
- Review existing tests for examples
- Open an issue in the project repository