Phase 3: Lab 02 - Network & VPC - Research
Researched: 2026-03-25
Domain: Docker Networking, VPC Simulation, Network Isolation
Confidence: HIGH
Summary
Phase 3 focuses on teaching VPC (Virtual Private Cloud) and subnet concepts through Docker bridge networks. Students will learn to create isolated networks that simulate cloud VPCs with public and private subnets, understand network segmentation, and verify isolation through connectivity testing. This lab builds on Phase 1 (Docker setup) and Phase 2 (IAM) by introducing multi-tier network architecture.
Primary recommendation: Use Docker Compose V2 custom bridge networks with explicit subnet configuration (10.0.x.0/24 pattern) to simulate VPC subnets. Do not publish private-network ports on host interfaces (bind to 127.0.0.1 at most, never 0.0.0.0) to enforce INF-02 compliance. Use standard networking tools (ping, curl, netcat) for isolation verification.
User Constraints (from CONTEXT.md)
No CONTEXT.md exists for this phase. All decisions are at Claude's discretion based on project requirements and standards.
Phase Requirements
| ID | Description | Research Support |
|---|---|---|
| LAB-02 | Student can create isolated Docker bridge networks to simulate VPC/Subnets | User-defined bridge networks with custom subnets (10.0.x.0/24) |
| DOCT-01 | Lab includes a Tutorial (incremental step-by-step guide) | 3-part tutorial: Create networks → Deploy containers → Verify isolation |
| DOCT-02 | Lab includes How-to Guides | Guides for: Create network, List networks, Verify isolation, Cleanup |
| DOCT-03 | Lab includes a Reference | docker-compose.yml network syntax, Network inspection, IP mapping |
| DOCT-04 | Lab includes an Explanation (Docker ↔ cloud parallelism) | Bridge networks → VPC/Subnets, DNS resolution, Security groups |
| DOCT-05 | Tutorial follows the "little often" principle | Incremental steps with verification after each operation |
| TEST-01 | Pre-implementation bash test script (TDI) | Tests for: Network creation, Isolation verification, DNS resolution |
| TEST-05 | Final verification command ("double check") | Final verification: All isolation tests pass, INF-02 compliant |
| INF-02 | Private networks do not expose ports on the host (127.0.0.1 max, never 0.0.0.0) | Port publishing restrictions verification test |
| PARA-01 | Docker component mapped to a cloud service (VPC/Subnets) | Bridge networks → VPC, Custom subnets → Subnet CIDRs |
| PARA-02 | Local architecture uses cloud nomenclature (VPC, subnet, security groups) | Network naming: vpc-main, subnet-public, subnet-private |
| PARA-03 | Differences between local and cloud documented | Local limitations: Single host, No real AZs, No internet gateway |
| PARA-04 | Docker commands shown alongside their cloud equivalents | docker network create ↔ aws ec2 create-vpc |
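The PARA-04 command parallels can be sketched as a printable lookup table. The AWS CLI invocations and the `<vpc-id>` placeholders are illustrative equivalents for comparison only; nothing in this lab executes AWS commands.

```shell
#!/usr/bin/env bash
# PARA-04 sketch: print each Docker network command next to an illustrative
# AWS CLI equivalent. The AWS commands are shown for comparison, not executed.
declare -A cmd_map=(
  ["docker network create --subnet 10.0.0.0/16 lab-vpc"]="aws ec2 create-vpc --cidr-block 10.0.0.0/16"
  ["docker network ls"]="aws ec2 describe-vpcs"
  ["docker network inspect lab-vpc"]="aws ec2 describe-vpcs --vpc-ids <vpc-id>"
  ["docker network rm lab-vpc"]="aws ec2 delete-vpc --vpc-id <vpc-id>"
)
mapping_output=""
for docker_cmd in "${!cmd_map[@]}"; do
  line=$(printf '%-55s <-> %s' "$docker_cmd" "${cmd_map[$docker_cmd]}")
  mapping_output+="$line"$'\n'
done
printf '%s' "$mapping_output"
```

A table like this can be generated once and pasted into the Reference document, keeping the Docker and cloud columns in sync.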
Standard Stack
Core
| Library/Tool | Version | Purpose | Why Standard |
|---|---|---|---|
| Docker Engine | >= 24.0 | Container runtime, bridge networking | Project-wide standard, native network isolation |
| Docker Compose | V2 | Multi-container networking definition | Project-wide standard, custom network syntax |
| Bridge driver | (builtin) | User-defined isolated networks | Default Docker network driver, production-grade isolation |
| iproute2 | (system) | Network debugging (ip, ip addr) | Standard Linux networking tools |
| netcat-openbsd | (system) | Port connectivity testing (nc) | Project-wide standard for network testing |
Supporting
| Library/Tool | Version | Purpose | When to Use |
|---|---|---|---|
| iputils-ping | (system) | ICMP connectivity testing (ping) | For testing network isolation between containers |
| curl | (system) | HTTP/HTTPS connectivity testing | For testing service reachability across networks |
| nmap | (system) | Advanced port scanning (optional) | For detailed network analysis in troubleshooting |
Alternatives Considered
| Instead of | Could Use | Tradeoff |
|---|---|---|
| Bridge networks | Overlay networks | Overlay requires Swarm mode, overkill for single-host labs |
| Bridge networks | Macvlan networks | Macvlan gives containers direct network access, breaks isolation model |
| netcat | telnet | Netcat is more versatile, telnet is legacy |
| ping | traceroute | Ping tests basic connectivity, traceroute shows path (not needed for local) |
Installation:

```bash
# Core utilities (typically pre-installed with Docker)
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

# Network testing tools
sudo apt-get install -y iproute2 netcat-openbsd iputils-ping curl

# Optional: advanced network analysis
sudo apt-get install -y nmap
```
Architecture Patterns
Recommended Project Structure
```
labs/lab-02-network/
├── tutorial/
│   ├── 01-create-networks.md
│   ├── 02-deploy-containers.md
│   └── 03-verify-isolation.md
├── how-to-guides/
│   ├── create-custom-network.md
│   ├── inspect-network-configuration.md
│   ├── test-network-isolation.md
│   └── cleanup-networks.md
├── reference/
│   ├── docker-network-commands.md
│   ├── compose-network-syntax.md
│   └── vpc-network-mapping.md
├── explanation/
│   └── docker-network-vpc-parallels.md
├── tests/
│   ├── 01-network-creation-test.sh
│   ├── 02-isolation-verification-test.sh
│   └── 99-final-verification.sh
├── docker-compose.yml
└── ARCHITECTURE.md
```
Pattern 1: VPC Simulation with Custom Bridge Networks
What: Create isolated networks that simulate a VPC and its subnets using explicit CIDR blocks
When to use: Any multi-tier application requiring network segmentation
Example:
```yaml
# Source: Docker Compose networking documentation
# https://docs.docker.com/compose/networking/
networks:
  # Simulates the VPC public subnet
  vpc-public:
    driver: bridge
    name: lab02-vpc-public
    ipam:
      config:
        - subnet: 10.0.1.0/24
          gateway: 10.0.1.1
  # Simulates the VPC private subnet
  vpc-private:
    driver: bridge
    name: lab02-vpc-private
    internal: true  # Blocks external internet access
    ipam:
      config:
        - subnet: 10.0.2.0/24
          gateway: 10.0.2.1

services:
  # Web server in the public subnet
  web:
    image: nginx:alpine
    networks:
      - vpc-public
    ports:
      - "127.0.0.1:8080:80"  # INF-02: expose on localhost only, never 0.0.0.0
  # Database in the private subnet
  db:
    image: postgres:16-alpine
    networks:
      - vpc-private
    # No ports exposed (private network only)
```
Pattern 2: Network Isolation Verification
What: Test that containers on different networks cannot communicate
When to use: TDD RED phase; verification of INF-02 compliance
Example:
```bash
#!/bin/bash
# Source: Docker bridge network documentation
# https://docs.docker.com/engine/network/drivers/bridge/

# Test 1: Containers in the same network can communicate
docker run -d --name container1 --network vpc-public alpine sleep 3600
docker run -d --name container2 --network vpc-public alpine sleep 3600
docker exec container1 ping -c 2 container2
# Expected: SUCCESS (same network, embedded DNS resolves the name)

# Test 2: Containers in different networks are isolated
docker run -d --name container3 --network vpc-private alpine sleep 3600
docker exec container1 ping -c 2 container3
# Expected: FAILURE (different networks are isolated)

# Test 3: A container in the private network cannot reach the public network
docker exec container3 wget -O- http://container2
# Expected: FAILURE (cross-network isolation)
```
Pattern 3: VPC Nomenclature Mapping
What: Use cloud-style naming for local networks to teach the parallelism
When to use: All network definitions in docker-compose.yml
Example:
```yaml
# Cloud-style naming for local networks
# Naming convention: <env>-<tier>-<type>
#
# Conceptual VPC CIDR block: 10.0.0.0/16
# Note: Docker rejects overlapping subnets, so the enclosing "VPC" range is
# documented here rather than created; only the subnets exist as real
# bridge networks.
networks:
  public-subnet-1a:  # Subnet name includes AZ simulation
    driver: bridge
    ipam:
      config:
        - subnet: 10.0.1.0/24  # Public subnet CIDR
  private-subnet-1a:
    driver: bridge
    internal: true  # No internet gateway (private subnet)
    ipam:
      config:
        - subnet: 10.0.2.0/24  # Private subnet CIDR
```
Anti-Patterns to Avoid
- Using default bridge network: No DNS resolution, poor isolation, not production-grade
- Exposing private ports on 0.0.0.0: Violates INF-02, allows external access to private services
- Mixing public/private services in same network: Breaks isolation model, defeats learning objective
- Using the `--link` flag: legacy feature; user-defined networks provide automatic DNS
- Skipping network tests: the TDD approach requires RED→GREEN→REFACTOR for network setup
Don't Hand-Roll
| Problem | Don't Build | Use Instead | Why |
|---|---|---|---|
| Network isolation | Custom iptables rules | Docker bridge networks | Built-in isolation, battle-tested, easier to debug |
| DNS resolution | Custom /etc/hosts management | Docker embedded DNS | Automatic service discovery, no manual updates |
| IP allocation | Custom IP assignment scripts | Docker IPAM | Prevents conflicts, handles subnet calculations |
| Network testing | Custom ping wrappers | Standard tools: ping, curl, nc | Students already know these tools, portable |
Key insight: Docker's networking primitives are sufficient for VPC simulation. Building custom network layers teaches the wrong lesson about using platform-provided isolation.
Common Pitfalls
Pitfall 1: Default Bridge Network Limitations
What goes wrong: Students use default bridge, find containers can't resolve each other by name
Why it happens: Default bridge doesn't have embedded DNS; requires --link (legacy) or IP addresses
How to avoid: Always create custom networks with docker network create or networks: in Compose
Warning signs: Using IP addresses to connect containers, "name resolution failed" errors
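A minimal Compose sketch of the fix: defining any user-defined network enables embedded DNS, so containers resolve each other by service name. The service names and images below are illustrative, not part of the lab.

```yaml
# Hypothetical minimal fix for Pitfall 1: any user-defined network enables
# embedded DNS, so `app` can reach `db` by name instead of by IP.
networks:
  app-net:
    driver: bridge

services:
  app:
    image: alpine
    command: ping -c 1 db
    networks: [app-net]
  db:
    image: alpine
    command: sleep 3600
    networks: [app-net]
```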
Pitfall 2: Publishing Private Network Ports on 0.0.0.0
What goes wrong: ports: ["8080:80"] publishes on all interfaces, violating INF-02
Why it happens: Default port binding is 0.0.0.0 (all interfaces) if not specified
How to avoid: Always use explicit binding: ports: ["127.0.0.1:8080:80"] for private services
Warning signs: Services accessible from outside host, netstat -tlnp shows 0.0.0.0:8080
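The warning sign above can be turned into a quick static check. This sketch scans an inline example snippet (not the real lab file) for port entries whose host side is not pinned to 127.0.0.1, assuming the simple `- "host:container"` list form:

```shell
#!/usr/bin/env bash
# Sketch: flag Compose port mappings whose host side is not pinned to
# 127.0.0.1 (an unprefixed host port defaults to binding on 0.0.0.0).
# The snippet below is an inline example, not the real lab file.
snippet='
    ports:
      - "8080:80"
      - "127.0.0.1:9090:90"
'
re='^[[:space:]]*-[[:space:]]*"?[0-9]+:'
violations=""
while IFS= read -r line; do
  # Entries that start directly with a port number have no host address
  if [[ "$line" =~ $re ]]; then
    violations+="$line"$'\n'
  fi
done <<< "$snippet"
if [[ -n "$violations" ]]; then
  echo "INF-02 violation (host side defaults to 0.0.0.0):"
  printf '%s' "$violations"
else
  echo "no violations"
fi
```

The same loop can be pointed at the lab's docker-compose.yml once it exists.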
Pitfall 3: Not Verifying Isolation Before Deployment
What goes wrong: Containers deployed, then discovered they can communicate across networks
Why it happens: Network isolation not tested in the RED phase, only verified after setup
How to avoid: Write isolation tests FIRST (RED), then deploy containers (GREEN)
Warning signs: No test files in labs/lab-02-network/tests/ before writing docker-compose.yml
Pitfall 4: Confusion Between Container and Host Networking
What goes wrong: Students think localhost inside container reaches host services
Why it happens: 127.0.0.1 inside container refers to container, not host
How to avoid: Teach that host.docker.internal (Docker Desktop) or host gateway IP bridges the gap
Warning signs: "Connection refused" when container tries to connect to localhost services
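On Linux, `host.docker.internal` is not defined by default; a common remedy is mapping it to Docker's special `host-gateway` value via `extra_hosts` (supported since Docker 20.10). The service name below is illustrative.

```yaml
# Hypothetical fragment: make host.docker.internal resolve inside the
# container on Linux by mapping it to the host gateway IP.
services:
  app:
    image: alpine
    extra_hosts:
      - "host.docker.internal:host-gateway"
```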
Pitfall 5: Network Cleanup Between Tests
What goes wrong: Previous test networks interfere with new tests
Why it happens: Networks not removed between test runs, container references stale
How to avoid: Always run `docker compose down -v` to remove networks and volumes, and include cleanup in test scripts
Warning signs: "Network already exists" errors, IP conflicts in subnet allocation
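One way to make cleanup unconditional is an EXIT trap, so networks are removed even when a test fails mid-run. This is a sketch: the network names are illustrative, and the docker calls are best-effort so the demo works whether or not the networks exist.

```shell
#!/usr/bin/env bash
# Sketch: an EXIT trap guarantees cleanup even when a test fails early.
# The docker calls are best-effort (2>/dev/null || true), so the trap
# succeeds whether or not the networks actually exist.
run_with_cleanup() {
  (
    cleanup() {
      docker network rm net1 net2 2>/dev/null || true
      echo "cleanup: done"
    }
    trap cleanup EXIT
    echo "running isolation tests..."
    false  # simulate a failing test: the trap still fires
  )
}
demo_output="$(run_with_cleanup || true)"
printf '%s\n' "$demo_output"
```

Because the trap fires on any exit path, a failed RED-phase test no longer leaves stale networks behind for the next run.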
Code Examples
Verified patterns from official sources:
Creating Custom Networks with Subnets
```yaml
# Source: Docker Compose networking specification
# https://docs.docker.com/compose/compose-file/07-networking/
# Note: the top-level `version` attribute is obsolete under Compose V2
networks:
  frontend:
    driver: bridge
    name: lab02-frontend
    ipam:
      driver: default
      config:
        - subnet: 10.0.1.0/24
          ip_range: 10.0.1.128/25
  backend:
    driver: bridge
    name: lab02-backend
    internal: true  # Isolated from external networks
    ipam:
      config:
        - subnet: 10.0.2.0/24

services:
  web:
    image: nginx:alpine
    networks:
      - frontend
    ports:
      # INF-02: bind to localhost only, never 0.0.0.0
      - "127.0.0.1:8080:80"
  api:
    image: api:latest
    networks:
      - frontend
      - backend
    # No ports exposed (internal communication only)
  db:
    image: postgres:16-alpine
    networks:
      - backend
    # No ports exposed (private network only)
```
Testing Network Isolation
```bash
#!/bin/bash
# Source: Docker bridge network documentation
# https://docs.docker.com/engine/network/drivers/bridge/
# Network isolation test for Lab 02
set -euo pipefail

# Clean up even if a test fails (see Pitfall 5)
cleanup() {
  docker rm -f container1 container2 container3 2>/dev/null || true
  docker network rm net1 net2 2>/dev/null || true
}
trap cleanup EXIT

# Create two isolated networks
docker network create --driver bridge --subnet 10.0.1.0/24 net1
docker network create --driver bridge --subnet 10.0.2.0/24 net2

# Deploy containers in different networks
docker run -d --name container1 --network net1 alpine sleep 3600
docker run -d --name container2 --network net2 alpine sleep 3600

# Test 1: Isolation between networks
echo "Testing isolation between net1 and net2..."
if docker exec container1 ping -c 2 -W 1 container2 2>/dev/null; then
  echo "FAIL: Containers in different networks can communicate (isolation broken)"
  exit 1
else
  echo "PASS: Containers in different networks are isolated"
fi

# Test 2: Same-network communication
docker run -d --name container3 --network net1 alpine sleep 3600
echo "Testing communication within the same network..."
if docker exec container1 ping -c 2 -W 1 container3 2>/dev/null; then
  echo "PASS: Containers in the same network can communicate"
else
  echo "FAIL: Containers in the same network cannot communicate"
  exit 1
fi

echo "All isolation tests passed!"
```
Verifying INF-02 Compliance
```bash
#!/bin/bash
# Source: INF-02 requirement from REQUIREMENTS.md
# Verify private networks don't expose ports on the host
echo "Checking INF-02 compliance: private networks don't expose ports..."

compose_file="labs/lab-02-network/docker-compose.yml"

# Extract all port mappings (entries may be quoted, e.g. - "127.0.0.1:8080:80")
port_mappings=$(grep -A 20 "ports:" "$compose_file" | grep -E '^\s*-\s*"?[0-9]' || true)

# Check for explicit 0.0.0.0 bindings (violation)
if echo "$port_mappings" | grep -qE '0\.0\.0\.0:[0-9]+'; then
  echo "FAIL: Found ports exposed on 0.0.0.0 (INF-02 violation)"
  echo "$port_mappings"
  exit 1
fi

# Bindings without an explicit host address also default to 0.0.0.0
if echo "$port_mappings" | grep -qE '^\s*-\s*"?[0-9]+:'; then
  echo "FAIL: Found port bindings without an explicit host address (default is 0.0.0.0)"
  echo "$port_mappings"
  exit 1
fi

# Verify services use 127.0.0.1 binding
if echo "$port_mappings" | grep -qE '127\.0\.0\.1:[0-9]+'; then
  echo "PASS: Published ports are bound to 127.0.0.1"
else
  echo "WARNING: No port bindings found"
fi

# Sanity-check the compose file itself
docker compose -f "$compose_file" config >/dev/null 2>&1 || true
echo "INF-02 verification complete"
```
Network Inspection and Debugging
```bash
# Source: Docker CLI best practices for network debugging

# List all networks
docker network ls

# Inspect network configuration (shows subnet, gateway, connected containers)
docker network inspect lab02-vpc-public

# View network details from the container's perspective
docker exec <container> ip addr show
docker exec <container> ip route show

# Test DNS resolution within the network
docker exec <container> nslookup <other_container_name>

# Test connectivity
docker exec <container> ping -c 2 <other_container_name>
docker exec <container> nc -zv <other_container_name> 80

# Check the bridge interface on the host
ip addr show br-<network_id>
```
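When `nslookup` is missing from minimal images, `getent hosts` (where available) checks name resolution without testing reachability, which keeps DNS failures distinct from connectivity failures. A sketch of a reusable check, demonstrated against `localhost` since it resolves on any Linux host; inside a lab container the equivalent would be `docker exec <container> getent hosts <other_container_name>`.

```shell
#!/usr/bin/env bash
# Sketch: separate "does the name resolve?" from "is the host reachable?".
# getent queries the resolver (including Docker's embedded DNS inside a
# container) without sending any ICMP traffic.
dns_resolves() {
  getent hosts "$1" >/dev/null 2>&1
}
if dns_resolves localhost; then
  dns_result="resolves"
else
  dns_result="does not resolve"
fi
echo "localhost $dns_result"
```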
State of the Art
| Old Approach | Current Approach | When Changed | Impact |
|---|---|---|---|
| Default bridge network | User-defined bridge networks | Docker 1.10+ | Automatic DNS, better isolation, production-grade |
| `--link` for service discovery | Embedded DNS server | Docker 1.12+ | No manual links, name-based resolution by default |
| Port mapping required for all communication | Network-scoped communication | Docker 1.10+ | Containers communicate without port publishing |
| Manual IP allocation | IPAM (IP Address Management) | Docker 1.10+ | Automatic subnet allocation, conflict prevention |
Deprecated/outdated:
- `--link` flag: legacy container linking, superseded by user-defined networks with DNS
- Default bridge for production: documentation explicitly recommends user-defined bridges
- Ambiguous port binding: always specify the host binding (127.0.0.1 vs 0.0.0.0) for security
Validation Architecture
This section defines validation/testing approach for Phase 3 based on TDD methodology and project requirements.
Test Framework
| Property | Value |
|---|---|
| Framework | Bash >= 4.0 |
| Config file | None (inline test functions) |
| Quick run command | `bash labs/lab-02-network/tests/quick-test.sh` |
| Full suite command | `bash labs/lab-02-network/tests/run-all-tests.sh` |
Phase Requirements → Test Map
| Req ID | Behavior | Test Type | Automated Command | File Exists? |
|---|---|---|---|---|
| LAB-02 | Student can create isolated Docker bridge networks to simulate VPC/Subnets | integration | `bash tests/01-network-creation-test.sh` | Wave 0 |
| DOCT-01 | Lab includes a Tutorial (step-by-step guide) | manual | Verify: tutorial/01-create-networks.md | Wave 0 |
| DOCT-02 | Lab includes How-to Guides | manual | Verify: how-to-guides/*.md exist | Wave 0 |
| DOCT-03 | Lab includes a Reference | manual | Verify: reference/vpc-network-mapping.md | Wave 0 |
| DOCT-04 | Lab includes an Explanation (Docker ↔ cloud parallelism) | manual | Verify: explanation/docker-network-vpc-parallels.md | Wave 0 |
| DOCT-05 | Tutorial follows the "little often" principle | manual | Review tutorial for incremental steps | Wave 0 |
| TEST-01 | Pre-implementation bash test script (TDI) | unit | `bash tests/02-isolation-verification-test.sh` | Wave 0 |
| TEST-05 | Final verification command ("double check") | integration | `bash tests/99-final-verification.sh` | Wave 0 |
| INF-02 | Private networks do not expose ports on the host | unit | `bash tests/inf-02-compliance-test.sh` | Wave 0 |
| PARA-01 | Docker component mapped to a cloud service (VPC/Subnets) | manual | Verify Explanation includes network mapping | Wave 0 |
| PARA-02 | Local architecture uses cloud nomenclature | manual | Verify docker-compose.yml uses VPC naming | Wave 0 |
| PARA-03 | Differences between local and cloud documented | manual | Verify Explanation includes differences | Wave 0 |
| PARA-04 | Equivalent Docker commands shown | manual | Verify Reference includes command comparison | Wave 0 |
Sampling Rate
- Per task commit: `bash labs/lab-02-network/tests/quick-test.sh` (runs in < 30 seconds)
- Per wave merge: `bash labs/lab-02-network/tests/run-all-tests.sh` (full validation)
- Phase gate: full suite green + manual verification of all 4 Diátaxis documents + INF-02 verified
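A possible shape for `run-all-tests.sh` (a sketch, not the final harness): run every test script in lexical order and report pass/fail counts, which is what the per-wave full-validation gate needs. The demo below runs the loop against two throwaway stub tests in a temp directory.

```shell
#!/usr/bin/env bash
# Sketch of a sequential test runner: executes every *.sh in lexical order
# and reports pass/fail counts. The stub tests stand in for the real suite.
set -u
run_suite() {
  local dir=$1 passed=0 failed=0 t
  for t in "$dir"/*.sh; do
    if bash "$t" >/dev/null 2>&1; then
      passed=$((passed + 1))
    else
      failed=$((failed + 1))
    fi
  done
  echo "passed=$passed failed=$failed"
}

# Demo with two stub tests (one passing, one failing)
demo_dir=$(mktemp -d)
printf 'exit 0\n' > "$demo_dir/01-ok.sh"
printf 'exit 1\n' > "$demo_dir/02-broken.sh"
suite_result=$(run_suite "$demo_dir")
echo "$suite_result"
rm -rf "$demo_dir"
```

The `NN-` prefixes already used by the lab's test files give the lexical ordering the loop relies on.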
Wave 0 Gaps
- `labs/lab-02-network/tests/01-network-creation-test.sh`: tests custom network creation with subnets
- `labs/lab-02-network/tests/02-isolation-verification-test.sh`: verifies isolation between networks
- `labs/lab-02-network/tests/inf-02-compliance-test.sh`: verifies private networks don't expose ports
- `labs/lab-02-network/tests/99-final-verification.sh`: double-check command for students
- `labs/lab-02-network/tutorial/01-create-networks.md`: first Diátaxis tutorial
- `labs/lab-02-network/how-to-guides/`: directory for goal-oriented guides
- `labs/lab-02-network/reference/`: directory for technical specifications
- `labs/lab-02-network/explanation/docker-network-vpc-parallels.md`: VPC parallelism explanation
- Test harness setup: none needed; pure Bash with standard networking tools
Integration with Phase 2 (IAM Lab)
Phase 2 Integration Points:
- Non-root containers from Lab 1 should be used in Lab 2 networks
- Docker socket access from Lab 1 enables network management in Lab 2
- Test infrastructure patterns from Lab 1 (RED→GREEN→REFACTOR) apply to Lab 2
Requiring Phase 2:
- Student must have Docker access configured (Lab 1 success)
- Student must understand container execution (Lab 1 concept)
Building on Phase 2:
- Lab 2 introduces complexity: multiple containers + networks (vs single container in Lab 1)
- Lab 2 requires verification of network isolation (new testing concept)
- Lab 2 introduces infrastructure as code (docker-compose.yml for networks)
Success Criteria Validation
Success Criteria 1: Student can create isolated Docker bridge networks to simulate VPCs and Subnets
- How to verify: Student creates docker-compose.yml with custom networks and verifies with `docker network ls`
- Test command: `bash tests/01-network-creation-test.sh`
- Manual check: `docker network inspect lab02-vpc-public` shows the custom subnet
Success Criteria 2: Private networks do not expose ports on the host (127.0.0.1 max, never 0.0.0.0)
- How to verify: Test checks docker-compose.yml for 0.0.0.0 bindings and verifies only 127.0.0.1 is used
- Test command: `bash tests/inf-02-compliance-test.sh`
- Manual check: `netstat -tlnp` shows no 0.0.0.0 bindings for private services
Success Criteria 3: Student understands the parallelism between Docker bridge networks and cloud VPCs
- How to verify: Explanation document maps bridge networks → VPC and custom subnets → Subnets
- Manual check: Review `explanation/docker-network-vpc-parallels.md`
- Test command: None; conceptual understanding is verified through documentation
Success Criteria 4: Student can verify isolation between networks with connectivity tests
- How to verify: Student runs ping/curl/netcat between containers and observes failures across networks
- Test command: `bash tests/02-isolation-verification-test.sh`
- Manual check: Student demonstrates that `docker exec container1 ping container2` fails across networks
Success Criteria 5: Lab includes complete Diátaxis documentation with cloud nomenclature
- How to verify: All 4 document types are present and use VPC/subnet terminology consistently
- Manual check: File structure plus content review for cloud terminology
- Test command: `find labs/lab-02-network -name "*.md" | grep -E "(tutorial|how-to|reference|explanation)"`
Open Questions
1. Should we simulate multiple Availability Zones?
   - What we know: AWS uses AZs for high availability, but Docker is single-host
   - What's unclear: Would creating multiple "subnet-1a", "subnet-1b" networks be misleading?
   - Recommendation: Include the AZ naming convention for terminology learning, but document the limitation (single host = no real AZ isolation)
2. How should DNS resolution be tested in automated tests?
   - What we know: User-defined networks provide automatic DNS; the default bridge doesn't
   - What's unclear: Best way to test DNS during isolation verification?
   - Recommendation: Use `docker exec container ping -c 1 other_container`, which tests both DNS and connectivity
3. Should we implement NAT Gateway simulation?
   - What we know: AWS VPCs use NAT gateways for private-subnet internet access
   - What's unclear: Can Docker networks simulate NAT behavior without iptables complexity?
   - Recommendation: Document this as a difference between local and cloud; don't implement it (adds complexity without learning value)
4. What subnet CIDR pattern should we standardize on?
   - What we know: AWS uses 10.0.0.0/16 for a VPC and 10.0.X.0/24 for subnets
   - What's unclear: Should we match this exactly or use different ranges to avoid conflicts?
   - Recommendation: Use the 10.0.X.0/24 pattern matching AWS; document it in ARCHITECTURE.md for future labs
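If the 10.0.X.0/24 recommendation is adopted across labs, conflicts can be caught before `docker network create` fails on an overlapping pool. A sketch of such a check; the allocation list, including the deliberately conflicting `lab03-public` entry, is illustrative and not part of ARCHITECTURE.md yet.

```shell
#!/usr/bin/env bash
# Sketch: catch duplicate /24 allocations before they collide at
# network-creation time. The allocation list below is illustrative.
allocations="
lab02-public 10.0.1.0/24
lab02-private 10.0.2.0/24
lab03-public 10.0.1.0/24
"
# Print the CIDR column, then report any value that appears more than once
dupes=$(awk 'NF {print $2}' <<< "$allocations" | sort | uniq -d)
if [[ -n "$dupes" ]]; then
  echo "conflict: $dupes"
else
  echo "no subnet conflicts"
fi
```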
Sources
Primary (HIGH confidence)
- Docker Bridge Network Driver Documentation - https://docs.docker.com/engine/network/drivers/bridge/ - User-defined bridges, DNS resolution, isolation, configuration options (Published: 2026-02-21, verified current)
- Docker Compose Networking Documentation - https://docs.docker.com/compose/networking/ - Custom networks, multi-network services, compose file syntax (Published: 2024-02-09, verified current)
- AWS VPC User Guide - https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html - VPC concepts, subnets, gateways, routing (Verified current)
Secondary (MEDIUM confidence)
- Docker Compose File Reference (Networks) - https://docs.docker.com/compose/compose-file/07-networking/ - Network configuration syntax, IPAM options, internal networks
- Docker Network Inspection - `docker network inspect` command documentation - Network debugging, verification commands
Tertiary (LOW confidence)
- None - all findings verified with official Docker and AWS documentation
Metadata
Confidence breakdown:
- Standard stack: HIGH - Docker networking is project-wide standard, tools are well-documented
- Architecture: HIGH - Based on official Docker bridge network documentation and AWS VPC concepts
- Pitfalls: HIGH - Common networking issues verified through Docker documentation and best practices
- Validation: HIGH - TDD approach proven in Phase 2, testing patterns established
Research date: 2026-03-25
Valid until: 2026-04-24 (30 days; the Docker networking model is stable and AWS VPC concepts are mature)