---
phase: 03-lab-02-network-vpc
plan: 03
type: execute
wave: 2
depends_on: ["03-01", "03-02"]
files_modified:
  - labs/lab-02-network/docker-compose.yml
  - labs/lab-02-network/Dockerfile
  - labs/lab-02-network/tests/04-verify-infrastructure.sh
autonomous: true
requirements:
  - LAB-02
  - INF-02
  - TEST-01
user_setup: []
must_haves:
  truths:
    - "docker-compose.yml defines VPC networks with custom subnets"
    - "Private networks use --internal flag and no published ports"
    - "Public services bind to 127.0.0.1 only (INF-02 compliant)"
    - "Infrastructure verification tests pass (GREEN phase)"
    - "All services start successfully with docker-compose up"
  artifacts:
    - path: "labs/lab-02-network/docker-compose.yml"
      provides: "VPC network definition with subnets"
      min_lines: 80
      contains: "networks:, vpc-public, vpc-private, ipam, subnet"
    - path: "labs/lab-02-network/Dockerfile"
      provides: "Test container image for network verification"
      min_lines: 30
    - path: "labs/lab-02-network/tests/04-verify-infrastructure.sh"
      provides: "Infrastructure verification script"
      min_lines: 100
  key_links:
    - from: "docker-compose.yml"
      to: "INF-02 requirement"
      via: "Port bindings use 127.0.0.1, never 0.0.0.0"
      pattern: "127\\.0\\.0\\.1:[0-9]+:[0-9]+"
    - from: "docker-compose.yml networks"
      to: "VPC simulation"
      via: "Custom subnets 10.0.1.0/24 and 10.0.2.0/24"
      pattern: "10\\.0\\.[12]\\.0/24"
---

Create the Docker infrastructure (docker-compose.yml and Dockerfile) implementing a VPC simulation with isolated bridge networks. Following TDD methodology, this is the GREEN phase: the tests already exist from Plan 03-01, and this infrastructure should make them pass. The infrastructure must enforce INF-02 compliance (private networks do not expose ports on 0.0.0.0).

Purpose: implement network infrastructure that simulates an AWS VPC with public and private subnets. Students learn by running docker-compose and observing the isolated networks in action.
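The two-subnet layout (10.0.1.0/24 public, 10.0.2.0/24 private) can be illustrated with plain shell string handling. A minimal sketch, assuming nothing beyond POSIX sh; `in_subnet` is a hypothetical helper for illustration only — the lab itself delegates addressing to Docker's ipam configuration:

```shell
#!/bin/sh
# Sketch: for /24 networks, membership is simply "same first three octets".
# in_subnet is a hypothetical helper, not part of the lab deliverables.
in_subnet() {                    # usage: in_subnet <ip> <network-address>
  [ "${1%.*}" = "${2%.*}" ]      # strip the last octet and compare prefixes
}

in_subnet 10.0.1.5 10.0.1.0 && echo "10.0.1.5 -> vpc-public (10.0.1.0/24)"
in_subnet 10.0.2.7 10.0.2.0 && echo "10.0.2.7 -> vpc-private (10.0.2.0/24)"
in_subnet 10.0.2.7 10.0.1.0 || echo "10.0.2.7 is outside vpc-public"
```

This prefix comparison only works because both subnets are /24; Docker's ipam handles the general CIDR case.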
Output: working docker-compose.yml with VPC networks, a test container image, and an infrastructure verification script that validates all requirements.

@/home/luca/.claude/get-shit-done/workflows/execute-plan.md
@/home/luca/.claude/get-shit-done/templates/summary.md
@.planning/REQUIREMENTS.md
@.planning/phases/03-lab-02-network-vpc/03-RESEARCH.md
@.planning/phases/03-lab-02-network-vpc/03-01-PLAN.md
@.planning/phases/02-lab-01-iam-sicurezza/02-03-SUMMARY.md
@labs/lab-01-iam/docker-compose.yml
@labs/lab-01-iam/Dockerfile

# Phase 2 Infrastructure Patterns

From labs/lab-01-iam/docker-compose.yml:

```yaml
version: "3.8"
services:
  lab01-test:
    build: .
    user: "1000:1000"  # INF-01 enforcement
    container_name: lab01-iam-test
    healthcheck:
      test: ["CMD", "sh", "-c", "whoami | grep -q labuser"]
```

From labs/lab-01-iam/Dockerfile:
- Alpine 3.19 base image (minimal, secure)
- Non-root user (labuser, UID 1000)
- USER directive before any operations
- CMD demonstrates functionality

# Network Architecture from RESEARCH.md

From 03-RESEARCH.md, Pattern 1 (VPC Simulation):

```yaml
networks:
  vpc-public:
    driver: bridge
    name: lab02-vpc-public
    ipam:
      config:
        - subnet: 10.0.1.0/24
          gateway: 10.0.1.1
  vpc-private:
    driver: bridge
    name: lab02-vpc-private
    internal: true  # Blocks external internet
    ipam:
      config:
        - subnet: 10.0.2.0/24
          gateway: 10.0.2.1

services:
  web:
    image: nginx:alpine
    networks:
      - vpc-public
    ports:
      - "127.0.0.1:8080:80"  # INF-02: Only localhost
  db:
    image: postgres:16-alpine
    networks:
      - vpc-private
    # No ports - private network only
```

# INF-02 Requirement

From REQUIREMENTS.md:
- INF-02: private networks do not expose ports on the host (127.0.0.1 at most, never 0.0.0.0)
- Test verifies: grep for 0.0.0.0 bindings (violation)
- Correct pattern: `ports: ["127.0.0.1:8080:80"]`
- Private services: no published ports at all

## Task 1: Create docker-compose.yml with VPC networks

`labs/lab-02-network/docker-compose.yml`

Create docker-compose.yml implementing VPC simulation with two isolated networks (public and private subnets).

File structure:

1. **Header**: version: "3.8"
2. **Networks section**:
   - vpc-public: bridge driver, subnet 10.0.1.0/24, gateway 10.0.1.1
   - vpc-private: bridge driver, subnet 10.0.2.0/24, gateway 10.0.2.1, internal: true
3. **Services section** (service keys `web`, `api`, `db` to match the tests):
   - **web**: Nginx Alpine in the vpc-public network
     * Image: nginx:alpine
     * Container name: lab02-web-public
     * Networks: vpc-public only
     * Ports: 127.0.0.1:8080:80 (INF-02 compliant, localhost only)
     * Restart: unless-stopped
   - **api**: custom test image in both networks
     * Build: . (uses the Dockerfile from Task 2)
     * Container name: lab02-api-tier
     * Networks: vpc-public AND vpc-private (multi-homed for tier communication)
     * Ports: 127.0.0.1:8081:8000 (INF-02 compliant)
     * Restart: unless-stopped
   - **db**: PostgreSQL Alpine in vpc-private only
     * Image: postgres:16-alpine
     * Container name: lab02-db-private
     * Networks: vpc-private only
     * Environment: POSTGRES_PASSWORD=testpass (test only)
     * NO PORTS - private network isolation
     * Restart: unless-stopped
4. **Volumes**: named volume for database data persistence (INF-04 preparation)
5. **Comments**: explain the VPC simulation, subnet choices, and INF-02 compliance

Requirements:
- Use cloud nomenclature: vpc-public, vpc-private
- Subnets: 10.0.1.0/24 (public), 10.0.2.0/24 (private)
- INF-02 strict compliance:
  * Public services: `127.0.0.1:PORT:PORT` format
  * Private services: no published ports
  * NEVER use `0.0.0.0:PORT:PORT`
- vpc-private uses `internal: true` (blocks internet access)
- Multi-tier architecture: web → api → db
- API service connects to both networks (demonstrates multi-homed containers)
- Comments explaining each section

Complete example structure:

```yaml
version: "3.8"

# VPC Network Simulation
# This configuration simulates an AWS VPC with public and private subnets
# using Docker bridge networks with custom CIDR blocks.

networks:
  # Public subnet: simulates 10.0.1.0/24 with internet access
  vpc-public:
    driver: bridge
    name: lab02-vpc-public
    ipam:
      driver: default
      config:
        - subnet: 10.0.1.0/24
          gateway: 10.0.1.1

  # Private subnet: simulates 10.0.2.0/24 without internet access
  vpc-private:
    driver: bridge
    name: lab02-vpc-private
    internal: true  # No internet gateway (private subnet)
    ipam:
      driver: default
      config:
        - subnet: 10.0.2.0/24
          gateway: 10.0.2.1

services:
  # Web server in the public subnet
  web:
    image: nginx:alpine
    container_name: lab02-web-public
    networks:
      - vpc-public
    ports:
      # INF-02: bind to localhost only, NOT 0.0.0.0
      - "127.0.0.1:8080:80"
    restart: unless-stopped

  # API service (multi-homed: both public and private)
  api:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: lab02-api-tier
    networks:
      - vpc-public
      - vpc-private
    ports:
      # INF-02: localhost binding only
      - "127.0.0.1:8081:8000"
    depends_on:
      - db
    restart: unless-stopped

  # Database in the private subnet (no internet, no host ports)
  db:
    image: postgres:16-alpine
    container_name: lab02-db-private
    networks:
      - vpc-private
    environment:
      POSTGRES_DB: labdb
      POSTGRES_USER: labuser
      POSTGRES_PASSWORD: testpass
    # NO PORTS - private network only (INF-02)
    volumes:
      - lab02-db-data:/var/lib/postgresql/data
    restart: unless-stopped

# Named volume for database persistence (INF-04)
volumes:
  lab02-db-data:
    driver: local
```

Expected: ~100 lines with a complete VPC simulation.

Verify: `cd labs/lab-02-network && docker-compose config && docker-compose up -d && docker-compose ps`

Done when: docker-compose.yml defines the VPC networks with correct subnets, services are deployed in the appropriate tiers, and the file is INF-02 compliant (127.0.0.1 bindings only).

## Task 2: Create Dockerfile for API service

`labs/lab-02-network/Dockerfile`

Create a Dockerfile for a test API service that demonstrates network connectivity and multi-tier communication.
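The image's healthcheck keys off exit status: Docker treats exit 0 as healthy and anything else as unhealthy. A stand-alone sketch of that contract, where `check_db` is a hypothetical stand-in for the real `nc -zv lab02-db-private 5432` probe:

```shell
#!/bin/sh
# Docker HEALTHCHECK contract: exit 0 => healthy, non-zero => unhealthy.
# check_db is a hypothetical stand-in for `nc -zv lab02-db-private 5432`.
check_db() { return 1; }   # simulate an unreachable database

if check_db; then
  echo "healthy"
else
  echo "unhealthy"
fi
```

In the Dockerfile the probe is written `nc -zv lab02-db-private 5432 || exit 1`; the `|| exit 1` just normalizes any failure code to 1.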
Requirements:
- Base image: Alpine 3.19 (minimal, consistent with Lab 1)
- Non-root user: labuser, UID 1000 (INF-01 compliance from Lab 1)
- Install networking tools: curl, netcat-openbsd, iputils (the Alpine package is `iputils`; Debian's `iputils-ping` does not exist in apk)
- Simple test service: Python HTTP server or netcat listener
- Healthcheck: verify connectivity to the database
- Demonstrates: same-network and cross-network communication

Dockerfile structure:

```dockerfile
# Base image: Alpine 3.19
FROM alpine:3.19

# Install networking tools for testing
RUN apk add --no-cache \
    python3 \
    curl \
    netcat-openbsd \
    iputils

# Create non-root user (INF-01 compliance)
RUN addgroup -g 1000 labuser && \
    adduser -D -u 1000 -G labuser labuser

# Create working directory
WORKDIR /app

# Create a simple test server (Python); printf %s emits each line verbatim,
# so the \n escapes below land in the Python source as string escapes
RUN printf '%s\n' \
    'import http.server' \
    'import socket' \
    '' \
    'class TestHandler(http.server.SimpleHTTPRequestHandler):' \
    '    def do_GET(self):' \
    '        self.send_response(200)' \
    '        self.send_header("Content-Type", "text/plain")' \
    '        self.end_headers()' \
    '        response = f"API Server\nContainer: {socket.gethostname()}\nNetwork: Both public and private\n"' \
    '        self.wfile.write(response.encode())' \
    '' \
    'if __name__ == "__main__":' \
    '    http.server.HTTPServer(("0.0.0.0", 8000), TestHandler).serve_forever()' \
    > test-server.py

# Switch to non-root user
USER labuser

# Expose port (internal; not published by default)
EXPOSE 8000

# Healthcheck: test connectivity to the database
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD nc -zv lab02-db-private 5432 || exit 1

# Start the test server
CMD ["python3", "test-server.py"]
```

Alternative (simpler, without Python):

```dockerfile
FROM alpine:3.19

# Install minimal tools
RUN apk add --no-cache \
    curl \
    netcat-openbsd \
    iputils \
    bash

# Create non-root user
RUN addgroup -g 1000 labuser && \
    adduser -D -u 1000 -G labuser labuser

# WORKDIR creates /app, so the redirect below has a directory to write into
WORKDIR /app

# Create the test script
RUN printf '%s\n' \
    '#!/bin/bash' \
    'echo "API Service - Multi-tier network test"' \
    'echo "Connected to both vpc-public and vpc-private"' \
    'echo "Testing connectivity..."' \
    'while true; do sleep 3600; done' \
    > /app/test-service.sh && \
    chmod +x /app/test-service.sh

USER labuser

EXPOSE 8000

# Healthcheck
HEALTHCHECK --interval=30s --timeout=3s \
    CMD nc -zv lab02-db-private 5432 || exit 1

CMD ["/app/test-service.sh"]
```

Requirements:
- Non-root user (INF-01)
- Networking tools installed
- Healthcheck tests connectivity to the private network
- Simple enough for Lab 2 (don't overcomplicate)
- ~40-50 lines

Expected: ~45 lines with a non-root user and networking tools.

Verify: `cd labs/lab-02-network && docker-compose build api && docker images | grep lab02-api`

Done when: the Dockerfile builds successfully, creates a non-root container with networking tools, and its healthcheck tests connectivity to the private network.

## Task 3: Create infrastructure verification script

`labs/lab-02-network/tests/04-verify-infrastructure.sh`

Create a comprehensive infrastructure verification script that validates docker-compose.yml and the running services.

Test cases:
1. Verify docker-compose.yml is valid YAML
2. Verify networks are defined correctly (vpc-public, vpc-private)
3. Verify subnet configurations (10.0.1.0/24, 10.0.2.0/24)
4. Verify INF-02 compliance (no 0.0.0.0 bindings)
5. Verify the private network has the internal: true flag
6. Verify docker-compose build succeeds
7. Verify services start successfully
8. Verify network isolation (web cannot ping db)
9. Verify same-network communication (api can reach db)
10. Verify port bindings (127.0.0.1 only)

Requirements:
- Follow Phase 2 test patterns (color output, helper functions)
- Use docker-compose config to validate the YAML
- Use docker network inspect to verify network config
- Use docker exec for connectivity tests
- Use grep for INF-02 validation
- Clear pass/fail for each test
- Graceful SKIP if services are not running

Script structure:

```bash
#!/bin/bash
# Infrastructure Verification: Lab 02 - Network & VPC
# Validates docker-compose.yml and running services

set -euo pipefail

# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

# Test directory
TEST_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
LAB_DIR="$(cd "$TEST_DIR/.." && pwd)"
cd "$LAB_DIR"

# Counter helpers. The "|| true" guards matter under set -e:
# ((x++)) returns non-zero when x was 0 and would abort the script.
pass_count=0
fail_count=0
skip_count=0
inc_pass() { ((pass_count++)) || true; }
inc_fail() { ((fail_count++)) || true; }
inc_skip() { ((skip_count++)) || true; }

echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}Lab 02: Infrastructure Verification${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""

# Test 1: docker-compose.yml is valid
echo -e "[1/10] Testing docker-compose.yml syntax..."
if docker-compose config > /dev/null 2>&1; then
  echo -e "${GREEN}PASS${NC}: docker-compose.yml is valid"
  inc_pass
else
  echo -e "${RED}FAIL${NC}: docker-compose.yml has syntax errors"
  inc_fail
fi

# Test 2: Networks defined
echo -e "[2/10] Testing network definitions..."
if docker-compose config | grep -q "vpc-public:" && \
   docker-compose config | grep -q "vpc-private:"; then
  echo -e "${GREEN}PASS${NC}: vpc-public and vpc-private networks defined"
  inc_pass
else
  echo -e "${RED}FAIL${NC}: Networks not found in compose file"
  inc_fail
fi

# Test 3: Subnet configurations
echo -e "[3/10] Testing subnet configurations..."
if docker-compose config | grep -q "10.0.1.0/24" && \
   docker-compose config | grep -q "10.0.2.0/24"; then
  echo -e "${GREEN}PASS${NC}: Subnets 10.0.1.0/24 and 10.0.2.0/24 configured"
  inc_pass
else
  echo -e "${RED}FAIL${NC}: Subnet configurations incorrect"
  inc_fail
fi

# Test 4: INF-02 compliance
echo -e "[4/10] Testing INF-02 compliance (no 0.0.0.0 bindings)..."
if docker-compose config | grep -qE '0\.0\.0\.0:[0-9]+'; then
  echo -e "${RED}FAIL${NC}: Found 0.0.0.0 port bindings (INF-02 violation)"
  inc_fail
else
  echo -e "${GREEN}PASS${NC}: No 0.0.0.0 bindings found (INF-02 compliant)"
  inc_pass
fi

# Test 5: Private network internal flag
echo -e "[5/10] Testing private network isolation..."
if docker-compose config | grep -A 3 "vpc-private:" | grep -q "internal: true"; then
  echo -e "${GREEN}PASS${NC}: vpc-private has internal: true flag"
  inc_pass
else
  echo -e "${YELLOW}SKIP${NC}: internal flag not found (may be in extended config)"
  inc_skip
fi

# Test 6: Build succeeds
echo -e "[6/10] Testing docker-compose build..."
if docker-compose build -q api > /dev/null 2>&1; then
  echo -e "${GREEN}PASS${NC}: Docker image builds successfully"
  inc_pass
else
  echo -e "${YELLOW}SKIP${NC}: Build failed or not needed (images may exist)"
  inc_skip
fi

# Tests 7-10: runtime tests, only if services are running
if docker-compose ps | grep -q "Up"; then
  # Test 7: Services running
  echo -e "[7/10] Testing service status..."
  running_count=$(docker-compose ps | grep -c "Up" || true)
  if [ "$running_count" -ge 2 ]; then
    echo -e "${GREEN}PASS${NC}: Services are running ($running_count services)"
    inc_pass
  else
    echo -e "${YELLOW}SKIP${NC}: Not all services running"
    inc_skip
  fi

  # Test 8: Network isolation (the ping SHOULD fail)
  echo -e "[8/10] Testing network isolation..."
  if docker exec lab02-web-public ping -c 1 -W 1 lab02-db-private > /dev/null 2>&1; then
    echo -e "${RED}FAIL${NC}: Public network can reach private (isolation broken)"
    inc_fail
  else
    echo -e "${GREEN}PASS${NC}: Public and private networks isolated"
    inc_pass
  fi

  # Test 9: Same-network communication
  echo -e "[9/10] Testing same-network communication..."
  if docker exec lab02-api-tier ping -c 1 -W 1 lab02-db-private > /dev/null 2>&1; then
    echo -e "${GREEN}PASS${NC}: API can reach database (same network)"
    inc_pass
  else
    echo -e "${YELLOW}SKIP${NC}: Multi-homed container test skipped"
    inc_skip
  fi

  # Test 10: Port bindings
  echo -e "[10/10] Testing port bindings..."
  if netstat -tlnp 2>/dev/null | grep -q "127.0.0.1:8080"; then
    echo -e "${GREEN}PASS${NC}: Port 8080 bound to 127.0.0.1 (INF-02 compliant)"
    inc_pass
  else
    echo -e "${YELLOW}SKIP${NC}: Port binding check skipped (netstat not available)"
    inc_skip
  fi
else
  echo -e "${YELLOW}SKIP${NC}: Runtime tests skipped (services not running)"
  inc_skip; inc_skip; inc_skip; inc_skip
fi

# Summary
echo ""
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}Test Summary${NC}"
echo -e "${BLUE}========================================${NC}"
echo "Passed:  $pass_count"
echo "Failed:  $fail_count"
echo "Skipped: $skip_count"
echo ""

if [ "$fail_count" -eq 0 ]; then
  echo -e "${GREEN}Infrastructure verification PASSED${NC}"
  exit 0
else
  echo -e "${RED}Infrastructure verification FAILED${NC}"
  exit 1
fi
```

Expected: ~180 lines with 10 comprehensive tests.

Verify: `bash labs/lab-02-network/tests/04-verify-infrastructure.sh`

Done when: the infrastructure verification script validates docker-compose.yml, networks, INF-02 compliance, and service connectivity, and all tests pass.

## Infrastructure Verification

After all tasks complete, verify:

1. **Files Created**:
   - docker-compose.yml exists
   - Dockerfile exists
   - tests/04-verify-infrastructure.sh exists
2. **Compose Configuration**:
   - `docker-compose config` succeeds (valid YAML)
   - Two networks defined: vpc-public, vpc-private
   - Correct subnets: 10.0.1.0/24, 10.0.2.0/24
   - Three services: web, api, db
3. **INF-02 Compliance**:
   - No 0.0.0.0 bindings in docker-compose config
   - Public services use the 127.0.0.1:PORT:PORT format
   - Private services have no published ports
   - vpc-private has the internal: true flag
4. **Services Start Successfully**:
   - `docker-compose up -d` succeeds
   - All containers show "Up" status
   - Containers have the correct network attachments
5. **Network Isolation**:
   - web (public) cannot ping db (private)
   - api (multi-homed) can reach db (private)
   - DNS resolution works within the same network
6. **Tests Pass**:
   - Infrastructure verification script passes
   - All tests from Plan 03-01 should now pass (GREEN phase)

## Automated Validation Commands

```bash
# Verify compose configuration
cd labs/lab-02-network && docker-compose config

# Check for INF-02 violations (should return nothing)
cd labs/lab-02-network && docker-compose config | grep "0.0.0.0"

# Build services
cd labs/lab-02-network && docker-compose build

# Start services
cd labs/lab-02-network && docker-compose up -d

# Check service status
cd labs/lab-02-network && docker-compose ps

# Verify networks created
docker network ls | grep lab02

# Run infrastructure verification
bash labs/lab-02-network/tests/04-verify-infrastructure.sh

# Run the full test suite (should all pass now)
bash labs/lab-02-network/tests/run-all-tests.sh

# Cleanup
cd labs/lab-02-network && docker-compose down -v
```

## Success Criteria

- [ ] docker-compose.yml is valid and configures VPC networks
- [ ] Two networks defined: vpc-public (10.0.1.0/24), vpc-private (10.0.2.0/24)
- [ ] vpc-private has the internal: true flag
- [ ] No 0.0.0.0 port bindings (INF-02 compliant)
- [ ] Services start successfully with docker-compose up
- [ ] Network isolation verified (public cannot reach private)
- [ ] Infrastructure verification script passes all tests
- [ ] All tests from Plan 03-01 now pass (GREEN phase complete)

Completion checklist:

1. docker-compose.yml implements VPC simulation with two networks (public, private)
2. Custom subnets configured (10.0.1.0/24, 10.0.2.0/24)
3. INF-02 compliance enforced (127.0.0.1 bindings only, no 0.0.0.0)
4. Private network uses the internal: true flag
5. Services deployed in correct tiers (web→public, db→private, api→both)
6. Dockerfile creates a non-root container with networking tools
7. Infrastructure verification script validates all requirements
8. All tests pass (GREEN phase complete, TDD cycle finished)

After completion, create `.planning/phases/03-lab-02-network-vpc/03-03-SUMMARY.md` with:
- docker-compose.yml structure and decisions
- Dockerfile specifications
- Infrastructure verification test results
- INF-02 compliance validation
- Network isolation verification
- TDD GREEN phase completion confirmation
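The INF-02 grep check used throughout this plan can be exercised without a running Docker daemon. A minimal sketch, where the `config` variable stands in for `docker-compose config` output:

```shell
#!/bin/sh
# Sketch of the INF-02 scan: flag any 0.0.0.0 host binding in rendered config.
# $config is a stand-in for the output of `docker-compose config`.
config='
ports:
  - "127.0.0.1:8080:80"
  - "127.0.0.1:8081:8000"
'

if printf '%s' "$config" | grep -qE '0\.0\.0\.0:[0-9]+'; then
  echo "INF-02 violation: found 0.0.0.0 binding"
else
  echo "INF-02 compliant: localhost bindings only"
fi
```

Swapping one entry for `- "0.0.0.0:8080:80"` flips the result, which is exactly what Test 4 of the verification script detects.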