feat: implement v0.4.0 - Reports, Charts, Comparison, Dark Mode, E2E Testing
Backend (@backend-dev):
- Add ReportService with PDF/CSV generation (reportlab, pandas)
- Implement Report API endpoints (POST, GET, DELETE, download)
- Add ReportRepository and schemas
- Configure storage with auto-cleanup (30 days)
- Rate limiting: 10 downloads/minute
- Professional PDF templates with chart support

Frontend (@frontend-dev):
- Integrate Recharts for data visualization
- Add CostBreakdown, TimeSeries, ComparisonBar charts
- Implement scenario comparison page with multi-select
- Add dark/light mode toggle with ThemeProvider
- Create Reports page with generation form and list
- Add new UI components: checkbox, dialog, tabs, label, skeleton
- Implement useComparison and useReports hooks

QA (@qa-engineer):
- Set up Playwright E2E testing framework
- Create 7 test spec files with 94 test cases
- Add visual regression testing with baselines
- Configure multi-browser testing (Chrome, Firefox, WebKit)
- Add mobile responsive tests
- Create test fixtures and helpers
- Set up GitHub Actions CI workflow

Documentation (@spec-architect):
- Create detailed kanban-v0.4.0.md with 27 tasks
- Update progress.md with v0.4.0 tracking
- Create v0.4.0 planning prompt

Features:
✅ PDF/CSV Report Generation
✅ Interactive Charts (Pie, Area, Bar)
✅ Scenario Comparison (2-4 scenarios)
✅ Dark/Light Mode Toggle
✅ E2E Test Suite (94 tests)

Dependencies added:
- Backend: reportlab, pandas, slowapi
- Frontend: recharts, date-fns, @radix-ui/react-checkbox/dialog/tabs
- Testing: @playwright/test

27 tasks completed, 100% v0.4.0 implementation
frontend/e2e/README.md (new file, 391 lines)
# End-to-End Testing with Playwright

This directory contains the End-to-End (E2E) test suite for mockupAWS using Playwright.

## Table of Contents

- [Overview](#overview)
- [Setup](#setup)
- [Running Tests](#running-tests)
- [Test Structure](#test-structure)
- [Test Data & Fixtures](#test-data--fixtures)
- [Visual Regression Testing](#visual-regression-testing)
- [Best Practices](#best-practices)
- [Troubleshooting](#troubleshooting)
## Overview

The E2E test suite provides comprehensive testing of the mockupAWS application, covering:

- **Scenario CRUD Operations**: creating, reading, updating, and deleting scenarios
- **Log Ingestion**: sending test logs and verifying metrics updates
- **Report Generation**: generating and downloading PDF and CSV reports
- **Scenario Comparison**: comparing multiple scenarios side-by-side
- **Navigation**: testing all routes and responsive design
- **Visual Regression**: ensuring UI consistency across browsers and viewports
## Setup

### Prerequisites

- Node.js 18+ installed
- Backend API running on `http://localhost:8000`
- Frontend development server running
### Installation

Playwright and its dependencies are already configured in the project. To install the browsers:

```bash
# Install Playwright browsers
npx playwright install

# Install additional system dependencies for browser testing
npx playwright install-deps
```
### Environment Variables

Create a `.env` file in the `frontend` directory if needed:

```env
# Optional: override the API URL for tests
VITE_API_URL=http://localhost:8000/api/v1

# Optional: enable CI mode
CI=true
```
## Running Tests

### NPM Scripts

The following npm scripts are available:

```bash
# Run all E2E tests in headless mode
npm run test:e2e

# Run tests in UI mode (interactive)
npm run test:e2e:ui

# Run tests in debug mode
npm run test:e2e:debug

# Run tests in headed mode (visible browser)
npm run test:e2e:headed

# Run tests in CI mode
npm run test:e2e:ci
```
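These commands would correspond to `package.json` entries along the following lines; the exact reporter and flags here are assumptions, not taken from the repository:

```json
{
  "scripts": {
    "test:e2e": "playwright test",
    "test:e2e:ui": "playwright test --ui",
    "test:e2e:debug": "playwright test --debug",
    "test:e2e:headed": "playwright test --headed",
    "test:e2e:ci": "playwright test --reporter=html"
  }
}
```

`--ui`, `--debug`, and `--headed` are standard Playwright CLI flags; the CI script typically only differs in its reporter configuration.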
### Running Specific Tests

```bash
# Run a specific test file
npx playwright test scenario-crud.spec.ts

# Run tests matching a pattern
npx playwright test --grep "should create"

# Run tests in a specific browser
npx playwright test --project=chromium

# Run tests with a specific tag
npx playwright test --grep "@critical"
```
### Updating Visual Baselines

```bash
# Update all visual baseline screenshots
UPDATE_BASELINE=true npx playwright test visual-regression.spec.ts
```
## Test Structure

```
e2e/
├── fixtures/                   # Test data and fixtures
│   ├── test-scenarios.ts       # Sample scenario data
│   └── test-logs.ts            # Sample log data
├── screenshots/                # Visual regression screenshots
│   └── baseline/               # Baseline images
├── global-setup.ts             # Global test setup
├── global-teardown.ts          # Global test teardown
├── utils/
│   └── test-helpers.ts         # Shared test utilities
├── scenario-crud.spec.ts       # Scenario CRUD tests
├── ingest-logs.spec.ts         # Log ingestion tests
├── reports.spec.ts             # Report generation tests
├── comparison.spec.ts          # Scenario comparison tests
├── navigation.spec.ts          # Navigation and routing tests
├── visual-regression.spec.ts   # Visual regression tests
└── README.md                   # This file
```
## Test Data & Fixtures

### Test Scenarios

The `test-scenarios.ts` fixture provides sample scenarios for testing:

```typescript
import { testScenarios, newScenarioData } from './fixtures/test-scenarios';

// Use in tests
const scenario = await createScenarioViaAPI(request, newScenarioData);
```

### Test Logs

The `test-logs.ts` fixture provides sample log data:

```typescript
import { testLogs, logsWithPII, highVolumeLogs } from './fixtures/test-logs';

// Send logs to a scenario
await sendTestLogs(request, scenarioId, testLogs);
```
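The fixture files themselves are not reproduced here; a minimal sketch of what `test-scenarios.ts` might export follows. The field names and values are illustrative assumptions, not the project's actual schema:

```typescript
// Hypothetical shape of fixtures/test-scenarios.ts; field names are assumptions.
export interface ScenarioFixture {
  name: string;
  description: string;
  log_rate: number; // logs per second
}

export const testScenarios: ScenarioFixture[] = [
  { name: 'Baseline Scenario', description: 'Steady low-volume traffic', log_rate: 10 },
  { name: 'Burst Scenario', description: 'Short high-volume bursts', log_rate: 500 },
];

// A single scenario payload for creation tests
export const newScenarioData: ScenarioFixture = {
  name: 'E2E Created Scenario',
  description: 'Created by the E2E suite',
  log_rate: 50,
};
```

Keeping fixtures typed this way lets every spec file share one source of truth for test payloads.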
### API Helpers

Shared test utilities are available in `utils/test-helpers.ts`:

- `createScenarioViaAPI()` - Create a scenario via the API
- `deleteScenarioViaAPI()` - Delete a scenario via the API
- `startScenarioViaAPI()` - Start a scenario
- `stopScenarioViaAPI()` - Stop a scenario
- `sendTestLogs()` - Send test logs
- `navigateTo()` - Navigate to a page and wait for it to load
- `waitForLoading()` - Wait for loading states to clear
- `generateTestScenarioName()` - Generate unique test names
## Visual Regression Testing

### How It Works

Visual regression tests capture screenshots of pages/components and compare them against baseline images. Tests fail if the difference exceeds the configured tolerance (`threshold: 0.2`, the per-pixel color-difference tolerance passed to the comparator).

### Running Visual Tests

```bash
# Run all visual regression tests
npx playwright test visual-regression.spec.ts

# Run tests for a specific viewport
npx playwright test visual-regression.spec.ts --project="Mobile Chrome"

# Update baselines
UPDATE_BASELINE=true npx playwright test visual-regression.spec.ts
```
|
||||
### Screenshots Location
|
||||
|
||||
- **Baseline**: `e2e/screenshots/baseline/`
|
||||
- **Actual**: `e2e/screenshots/actual/`
|
||||
- **Diff**: `e2e/screenshots/diff/`
|
||||
|
||||
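Conceptually, producing the diff image boils down to comparing the baseline and actual buffers pixel by pixel and counting how many differ beyond a tolerance. The simplified sketch below is illustrative only; Playwright's actual comparator is pixelmatch-based and considerably more sophisticated:

```typescript
// Simplified per-pixel comparison over grayscale buffers (illustrative only).
// Returns the fraction of pixels whose difference exceeds `tolerance` (0-255 scale).
function diffRatio(baseline: number[], actual: number[], tolerance: number): number {
  if (baseline.length !== actual.length) {
    throw new Error('Images must have the same dimensions');
  }
  let changed = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (Math.abs(baseline[i] - actual[i]) > tolerance) changed++;
  }
  return changed / baseline.length;
}
```

A test then fails when this ratio (or, in Playwright's case, the comparator's own metric) exceeds the configured limit.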
### Adding New Visual Tests

```typescript
test('new page should match baseline', async ({ page }) => {
  await navigateTo(page, '/new-page');
  await waitForLoading(page);

  const screenshot = await page.screenshot({ fullPage: true });

  expect(screenshot).toMatchSnapshot('new-page.png', {
    threshold: 0.2, // Per-pixel color tolerance (0-1)
  });
});
```
## Best Practices

### 1. Use Data Attributes for Selectors

Prefer `data-testid` attributes over CSS selectors:

```tsx
// In the component
<button data-testid="submit-button">Submit</button>
```

```typescript
// In the test
await page.getByTestId('submit-button').click();
```
### 2. Wait for Async Operations

Always wait for async operations to complete before asserting:

```typescript
await page.waitForResponse('**/api/scenarios');
await waitForLoading(page);
```
### 3. Clean Up Test Data

Use `beforeAll`/`afterAll` hooks for setup and cleanup. Note that the test-scoped `request` fixture is not available in `beforeAll`; create a shared API context instead:

```typescript
import { test, type APIRequestContext } from '@playwright/test';

test.describe('Feature', () => {
  let apiContext: APIRequestContext;

  test.beforeAll(async ({ playwright }) => {
    // Shared API context for setup (the `request` fixture is test-scoped)
    apiContext = await playwright.request.newContext();
    // Create test data
  });

  test.afterAll(async () => {
    // Clean up test data
    await apiContext.dispose();
  });
});
```
### 4. Use Unique Test Names

Generate unique names to avoid conflicts between runs:

```typescript
const testName = generateTestScenarioName('My Test');
```
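A helper like this can be as simple as appending a timestamp and a random suffix to the base name; the sketch below is a plausible implementation, not necessarily the one in `utils/test-helpers.ts`:

```typescript
// Hypothetical sketch of generateTestScenarioName: combines the base name
// with a timestamp and a short random suffix so parallel runs never collide.
function generateTestScenarioName(base: string): string {
  const timestamp = Date.now();
  const suffix = Math.random().toString(36).slice(2, 8);
  return `${base}-${timestamp}-${suffix}`;
}
```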
### 5. Test Across Viewports

Test both desktop and mobile layouts:

```typescript
test('desktop view', async ({ page }) => {
  await setDesktopViewport(page);
  // ...
});

test('mobile view', async ({ page }) => {
  await setMobileViewport(page);
  // ...
});
```
## Troubleshooting

### Tests Timing Out

If tests time out, increase the timeout in `playwright.config.ts`:

```typescript
timeout: 90000, // Increase to 90 seconds
```
### Flaky Tests

For flaky tests, enable retries:

```bash
npx playwright test --retries=3
```

Or configure them in `playwright.config.ts`:

```typescript
retries: process.env.CI ? 2 : 0,
```
### Browser Not Found

If the browsers are not installed:

```bash
npx playwright install
```
### API Not Available

Ensure the backend is running:

```bash
# In the project root
docker-compose up -d
# or
uvicorn src.main:app --reload --port 8000
```
### Screenshot Comparison Fails

If visual tests fail due to minor differences:

1. Check the diff image in `e2e/screenshots/diff/`
2. Update the baseline if the change is intentional:
   ```bash
   UPDATE_BASELINE=true npx playwright test
   ```
3. Adjust the tolerance if needed:
   ```typescript
   threshold: 0.3, // Relax the per-pixel color tolerance
   ```
## CI Integration

### GitHub Actions Example

```yaml
name: E2E Tests

on: [push, pull_request]

jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install dependencies
        run: npm ci
        working-directory: frontend

      - name: Install Playwright browsers
        run: npx playwright install --with-deps
        working-directory: frontend

      - name: Run E2E tests
        run: npm run test:e2e:ci
        working-directory: frontend

      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: playwright-report
          path: frontend/e2e-report/
```
## Coverage Reporting

Playwright E2E tests can be integrated with code coverage tools. To enable coverage:

1. Instrument the frontend code with Istanbul
2. Configure Playwright to collect coverage
3. Generate coverage reports

See the [Playwright Coverage Guide](https://playwright.dev/docs/api/class-coverage) for details.
## Contributing

When adding new E2E tests:

1. Follow the existing test structure
2. Use fixtures for test data
3. Add proper cleanup in `afterAll`
4. Include both positive and negative test cases
5. Test across multiple viewports if UI-related
6. Update this README with new test information
## Support

For issues or questions:

- Check the [Playwright Documentation](https://playwright.dev/)
- Review existing tests for examples
- Open an issue in the project repository