feat(lab-02): complete Phase 3 - Network & VPC lab

Implement Lab 02 with Docker bridge networks simulating VPC/Subnets.

Test Infrastructure (RED phase):
- 6 bash test scripts for network creation, isolation, INF-02 compliance
- Fail-fast orchestration with run-all-tests.sh
- Quick validation script for development

Documentation (Diátaxis framework):
- 3 tutorials: VPC creation, container deployment, isolation verification
- 4 how-to guides: create network, inspect config, test isolation, cleanup
- 3 reference docs: Docker network commands, Compose syntax, VPC mapping
- 1 explanation: Docker ↔ VPC parallels (PARA-01/02/03/04)

Infrastructure (GREEN phase):
- docker-compose.yml with VPC networks (10.0.1.0/24, 10.0.2.0/24)
- 5 services: web, app, db, test-public, test-private
- INF-02 compliant: 127.0.0.1 bindings only, no 0.0.0.0
- Private network with --internal flag
- Multi-homed app container (public + private networks)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: Luca Sacchi Ricciardi
Date: 2026-03-25 17:26:35 +01:00
Parent: d4c4f7d717
Commit: 5b2c8c37aa
22 changed files with 3988 additions and 12 deletions

.lastsession (new file)
@@ -0,0 +1 @@
claude --resume 83bd0ed4-e47b-4ac1-bbcc-26662a7e6f46


@@ -62,21 +62,27 @@ Download Docker Desktop from https://www.docker.com/products/docker-desktop/
The course consists of 5 progressive labs:
### 1. IAM & Security ✅ COMPLETED
Configure Linux users and Docker socket permissions, and understand the IAM parallels.
- Create users with limited permissions
- Configure Docker socket access
- Non-root containers for security
- Parallel: Linux users -> IAM Users, groups -> IAM Roles
**Documentation:** [Tutorial](labs/lab-01-iam/tutorial/) | [How-to](labs/lab-01-iam/how-to-guides/) | [Reference](labs/lab-01-iam/reference/) | [Explanation](labs/lab-01-iam/explanation/)
### 2. Network & VPC ✅ COMPLETED
Create isolated Docker networks that simulate cloud VPCs and subnets.
- Isolated bridge networks with custom subnets (10.0.1.0/24, 10.0.2.0/24)
- Container isolation rules (ping tests)
- Private networks with the `--internal` flag (simulating private subnets)
- Parallel: Docker networks -> VPC, `--internal` -> private subnet, `--subnet` -> CIDR blocks
**Documentation:** [Tutorial](labs/lab-02-network/tutorial/) | [How-to](labs/lab-02-network/how-to-guides/) | [Reference](labs/lab-02-network/reference/) | [Explanation](labs/lab-02-network/explanation/)
### 3. Compute & EC2 🔄 IN PROGRESS
Deploy containers with CPU/memory limits and healthchecks.
- Configure resource limits (cpus, mem_limit)
- Implement custom healthchecks
- Parallel: containers -> EC2, resource limits -> instance types
@@ -170,10 +176,18 @@ This course follows strict security principles:
## Roadmap
### Overall Progress: 60% (3/5 core labs completed)
| Phase | Status | Description |
|-------|--------|-------------|
| Phase 1 | ✅ COMPLETE | Setup & Git Foundation |
| Phase 2 | ✅ COMPLETE | Lab 01 - IAM & Security |
| Phase 3 | ✅ COMPLETE | Lab 02 - Network & VPC |
| Phase 4 | 🔄 IN PROGRESS | Lab 03 - Compute & EC2 |
| Phase 5 | ⏸️ NOT STARTED | Lab 04 - Storage & S3 |
| Phase 6 | ⏸️ NOT STARTED | Lab 05 - Database & RDS |
| Phase 7 | ⏸️ NOT STARTED | Integration & Testing |
| Phase 8-10 | ⏸️ NOT STARTED | Polish & Final Validation |
See `.planning/ROADMAP.md` for full details.


@@ -0,0 +1,28 @@
# Dockerfile for Lab 02 - Network & VPC
# Test container image for network isolation verification

# Use Alpine 3.19 as base image
FROM alpine:3.19

# Create non-root user for security (INF-01 compliance)
RUN addgroup -g 1000 appgroup && \
    adduser -D -u 1000 -G appgroup appuser

# Install network testing tools (--no-cache already avoids leaving an apk index behind)
RUN apk add --no-cache \
    iputils \
    bind-tools \
    curl \
    netcat-openbsd \
    tcpdump \
    strace

# Switch to non-root user
USER appuser

# Set working directory
WORKDIR /home/appuser

# Default command - sleep for testing
CMD ["sh", "-c", "sleep 3600"]


@@ -0,0 +1,117 @@
# Lab 02: Network & VPC - Docker Compose configuration
# Simulates a VPC with public and private subnets using Docker bridge networks
version: "3.8"

services:
  # Web server - public network (reachable from localhost)
  web:
    image: nginx:alpine
    container_name: lab02-web
    hostname: web
    networks:
      vpc-public:
        ipv4_address: 10.0.1.10
    ports:
      - "127.0.0.1:8080:80" # INF-02 compliant: localhost only
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:80"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 5s

  # Application server - multi-homed (public + private)
  app:
    image: nginx:alpine
    container_name: lab02-app
    hostname: app
    networks:
      vpc-public:
        ipv4_address: 10.0.1.20
      vpc-private:
        ipv4_address: 10.0.2.20
    ports:
      - "127.0.0.1:8081:80" # INF-02 compliant
    restart: unless-stopped
    depends_on:
      web:
        condition: service_healthy
      db:
        condition: service_started

  # Database - private network (isolated)
  db:
    image: postgres:16-alpine
    container_name: lab02-db
    hostname: db
    environment:
      POSTGRES_DB: lab02_db
      POSTGRES_USER: lab02_user
      POSTGRES_PASSWORD: lab02_password
      POSTGRES_INITDB_ARGS: "-E UTF8"
    networks:
      vpc-private:
        ipv4_address: 10.0.2.10
    # No published ports - fully private
    volumes:
      - db-data:/var/lib/postgresql/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U lab02_user -d lab02_db"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s

  # Test containers - for isolation verification
  test-public:
    image: alpine:3.19
    container_name: lab02-test-public
    hostname: test-public
    command: ["sh", "-c", "sleep 3600"]
    networks:
      vpc-public:
        ipv4_address: 10.0.1.30
    restart: unless-stopped

  test-private:
    image: alpine:3.19
    container_name: lab02-test-private
    hostname: test-private
    command: ["sh", "-c", "sleep 3600"]
    networks:
      vpc-private:
        ipv4_address: 10.0.2.30
    restart: unless-stopped

# VPC network simulation
networks:
  # Public subnet - simulates a subnet with internet access
  vpc-public:
    name: lab02-vpc-public
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 10.0.1.0/24
          gateway: 10.0.1.1
          ip_range: 10.0.1.128/25

  # Private subnet - isolated, no external access
  vpc-private:
    name: lab02-vpc-private
    driver: bridge
    internal: true # Cut off from the internet (simulates a private subnet)
    ipam:
      driver: default
      config:
        - subnet: 10.0.2.0/24
          gateway: 10.0.2.1
          ip_range: 10.0.2.128/25

# Persistent volumes
volumes:
  db-data:
    driver: local


@@ -0,0 +1,309 @@
# Explanation: Parallels Between Docker Networks and Cloud VPCs
This document explores how Docker bridge networks simulate the VPCs (Virtual Private Clouds) offered by cloud providers. Understanding these parallels lets you apply what you learn locally to real cloud environments.
## What Is a VPC?
A **VPC (Virtual Private Cloud)** is an isolated virtual network in the cloud that:
- **Isolates resources**: containers/VMs inside a VPC cannot talk to those outside it
- **Defines IP space**: uses CIDR blocks (e.g. 10.0.0.0/16) for addressing
- **Segments into subnets**: divides the VPC into (public/private) subnets for organization
- **Controls routing**: gateways, route tables, and NAT for internet access
AWS VPC, Azure VNet, and Google Cloud VPC are all built on this same fundamental concept.
---
## The Fundamental Parallel
### Bridge Network = VPC
| Local | AWS Cloud |
|-------|-----------|
| `docker network create` | `aws ec2 create-vpc` |
| Bridge network `lab02-vpc-public` | VPC `vpc-12345678` |
| Subnet `10.0.1.0/24` | Subnet `subnet-abc123` |
| Driver `bridge` | The VPC itself (implicit) |
| `docker network ls` | `aws ec2 describe-vpcs` |
**Similarity:** both provide network isolation and IP segmentation.
**Difference:** a cloud VPC supports multi-AZ and multi-region deployments; a Docker bridge is single-host.
### Subnet CIDR Blocks = Cloud Subnets
| Local | AWS Cloud |
|-------|-----------|
| `--subnet 10.0.1.0/24` | `--cidr-block 10.0.1.0/24` |
| `--gateway 10.0.1.1` | Default AWS subnet gateway |
| `10.0.1.0/24` (254 hosts) | `10.0.1.0/24` (254 hosts) |
| `10.0.0.0/16` (whole VPC) | `10.0.0.0/16` (whole VPC) |
**Explanation:**
- **CIDR block**: a range of IP addresses (e.g. /24 = 254 hosts, /16 = 65534 hosts)
- **Gateway**: the first usable address of the subnet (e.g. 10.0.1.1)
- **Public subnet**: has a route to the internet (Docker: without `--internal`)
- **Private subnet**: has no route to the internet (Docker: with `--internal`)
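The host counts above follow directly from the prefix length: a /N prefix leaves 32-N host bits, minus the network and broadcast addresses. A minimal shell sketch of that arithmetic (the `hosts_for_prefix` helper is illustrative, not part of the lab scripts):

```bash
# Usable IPv4 hosts for a given prefix length: 2^(32 - prefix) - 2
# (subtracting the network and broadcast addresses)
hosts_for_prefix() {
  local prefix=$1
  echo $(( (1 << (32 - prefix)) - 2 ))
}

hosts_for_prefix 24   # -> 254
hosts_for_prefix 16   # -> 65534
```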
### The `--internal` Flag = Private Subnet (no IGW)
| Local | AWS Cloud |
|-------|-----------|
| `--internal` flag | No route to an Internet Gateway |
| Container cannot reach the internet | Instance in a private subnet without NAT |
| Container not reachable from the host | Instance without a public IP |
| DNS works only internally | DNS via private Route 53 zones |
**Practical example:**
```bash
# Local: create a private subnet
docker network create --driver bridge \
  --subnet 10.0.2.0/24 \
  --internal \
  lab02-vpc-private

# Cloud equivalent
aws ec2 create-subnet \
  --vpc-id vpc-12345 \
  --cidr-block 10.0.2.0/24 \
  --availability-zone us-east-1a
# (No Internet Gateway route = private by default)
```
### Container = EC2 Instance
| Local | AWS Cloud |
|-------|-----------|
| Container `lab02-web` | EC2 instance `i-abc123` |
| IP `10.0.1.10` on the network | Private IP `10.0.1.10` in the subnet |
| Container on multiple networks | Instance with multiple ENIs |
| `docker run` | `aws ec2 run-instances` |
---
## Practical Example: Multi-Tier Architecture
### Local Scenario (Docker Compose)
```yaml
services:
  # Web tier - public subnet
  web:
    image: nginx:alpine
    networks:
      - vpc-public
    ports:
      - "127.0.0.1:8080:80" # INF-02 compliant

  # Application tier - multi-homed
  app:
    image: myapp:latest
    networks:
      - vpc-public  # Reachable from web
      - vpc-private # Can reach db

  # Database tier - private subnet
  db:
    image: postgres:16
    networks:
      - vpc-private # Only app can reach it
    # No published ports
```
### AWS Cloud Scenario (Equivalent)
```bash
# 1. Create the VPC
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query 'Vpc.VpcId' --output text)

# 2. Create the Internet Gateway
IGW_ID=$(aws ec2 create-internet-gateway --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --vpc-id $VPC_ID --internet-gateway-id $IGW_ID

# 3. Create the public subnet
PUBLIC_SUBNET=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block 10.0.1.0/24 --availability-zone us-east-1a --query 'Subnet.SubnetId' --output text)

# 4. Create the private subnet (no IGW route)
PRIVATE_SUBNET=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block 10.0.2.0/24 --availability-zone us-east-1a --query 'Subnet.SubnetId' --output text)

# 5. Create the Security Groups
WEB_SG=$(aws ec2 create-security-group --group-name web-sg --description "Web tier" --vpc-id $VPC_ID)
DB_SG=$(aws ec2 create-security-group --group-name db-sg --description "DB tier" --vpc-id $VPC_ID)

# 6. Launch instances
aws ec2 run-instances --image-id ami-12345 --instance-type t3.micro --subnet-id $PUBLIC_SUBNET --security-group-ids $WEB_SG
aws ec2 run-instances --image-id ami-67890 --instance-type t3.micro --subnet-id $PRIVATE_SUBNET --security-group-ids $DB_SG
```
**Key parallels:**
- **Web tier**: public subnet, Security Group with port 80 open
- **Database tier**: private subnet, Security Group with no external access
- **App tier**: both subnets (simulated by the multi-homed container)
---
## Equivalent Commands: Quick Reference
| Operation | Local (Docker) | Cloud (AWS) |
|-----------|----------------|-------------|
| **Create network** | `docker network create --driver bridge --subnet 10.0.1.0/24 vpc-main` | `aws ec2 create-vpc --cidr-block 10.0.0.0/16` |
| **Add subnet** | `--subnet 10.0.1.0/24` | `aws ec2 create-subnet --vpc-id VPC_ID --cidr-block 10.0.1.0/24` |
| **List networks** | `docker network ls` | `aws ec2 describe-vpcs` |
| **Inspect network** | `docker network inspect vpc-main` | `aws ec2 describe-vpcs --vpc-ids VPC_ID` |
| **Attach to network** | `docker network connect vpc-main container` | `aws ec2 attach-network-interface --network-interface-id ENI_ID --instance-id INSTANCE_ID --device-index 1` |
| **Remove network** | `docker network rm vpc-main` | `aws ec2 delete-vpc --vpc-id VPC_ID` |
| **Container on network** | `docker run --network vpc-main nginx` | `aws ec2 run-instances --subnet-id SUBNET_ID` |
---
## Differences Between Local and Cloud
### 1. Scope and Scalability
| Aspect | Local | Cloud |
|--------|-------|-------|
| Host scope | Single host | Multi-AZ, multi-region |
| Isolation | Kernel namespaces | VPC isolation + physical AZs |
| Scalability | Limited to one host | Horizontal scaling across AZs |
**Implication:** in the cloud you can span a VPC across Availability Zones; locally you are limited to a single host.
### 2. Advanced Routing
| Aspect | Local | Cloud |
|--------|-------|-------|
| Internet access | NAT/port mapping through the host | Internet Gateway, NAT Gateway |
| Inter-VPC | `docker network connect` | VPC Peering, Transit Gateway |
| DNS resolution | Embedded Docker DNS | Route 53, private zones |
| Firewall | None (bridge isolation only) | Security Groups, NACLs |
**Cloud example (not possible locally):**
```json
{
  "RouteTable": {
    "10.0.0.0/16": "local",
    "0.0.0.0/0": "igw-123456",
    "192.168.0.0/16": "vpc-peering-abc"
  }
}
```
Here `0.0.0.0/0` is the default route to the Internet Gateway, and the `192.168.0.0/16` route reaches a peered VPC.
### 3. High Availability
| Aspect | Local | Cloud |
|--------|-------|-------|
| Multi-AZ | Not available | Subnets in multiple AZs |
| Failover | Host failure = everything down | AZ failure = other AZs keep running |
| Load balancing | Docker Swarm (limited) | Elastic Load Balancer |
### 4. IP Management
| Aspect | Local | Cloud |
|--------|-------|-------|
| IP allocation | Automatic via bridge IPAM | Automatic EC2 or Elastic IP |
| DNS registration | Automatic (embedded DNS) | Route 53 A records |
| Private DNS | Docker network-scoped names | Route 53 private hosted zones |
---
## Cloud Best Practices (Applicable Locally)
### 1. Use One Subnet per Tier
**Cloud:** web tier in public, database tier in private
**Local:** same pattern with Docker networks
```yaml
services:
  web:
    networks: [vpc-public]
  db:
    networks: [vpc-private]
```
### 2. Don't Expose Private Services
**Cloud:** no Internet Gateway for private subnets
**Local:** `--internal` flag + no published ports (note that `internal` is a property of the network, not of the service attachment)
```yaml
services:
  db:
    networks:
      - vpc-private
    # No ports section

networks:
  vpc-private:
    internal: true # Blocks external access
```
### 3. Use Non-Overlapping CIDRs
**Cloud:** VPC subnets must not overlap
**Local:** same rule - use a different CIDR for each network
```bash
docker network create --subnet 10.0.1.0/24 net1
docker network create --subnet 10.0.2.0/24 net2 # Different from net1
```
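As a quick sanity check before creating networks, the overlap rule can be verified in pure shell: two IPv4 CIDRs overlap exactly when the base address of one falls inside the other's block. A hedged sketch (the helper names are illustrative, not part of the lab scripts):

```bash
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Succeed (exit 0) if the two CIDRs overlap, fail otherwise
cidrs_overlap() {
  local a=$(ip_to_int "${1%/*}") p1=${1#*/}
  local b=$(ip_to_int "${2%/*}") p2=${2#*/}
  local m1=$(( (0xFFFFFFFF << (32 - p1)) & 0xFFFFFFFF ))
  local m2=$(( (0xFFFFFFFF << (32 - p2)) & 0xFFFFFFFF ))
  # Overlap iff either base address lies inside the other block
  [ $(( a & m2 )) -eq $(( b & m2 )) ] || [ $(( b & m1 )) -eq $(( a & m1 )) ]
}

cidrs_overlap 10.0.1.0/24 10.0.2.0/24 && echo overlap || echo distinct   # distinct
cidrs_overlap 10.0.0.0/16 10.0.1.0/24 && echo overlap || echo distinct   # overlap
```

The second call reports an overlap because `10.0.1.0/24` sits inside `10.0.0.0/16` — the same reason a lab VPC of `10.0.0.0/16` cannot coexist with a separate network using one of its /24 slices.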
### 4. Consistent Naming
**Cloud:** `prod-app-public-1a`, `prod-db-private-1b`
**Local:** use a similar pattern for clarity
```bash
docker network create lab02-vpc-public
docker network create lab02-vpc-private
```
---
## Docker Limitations vs the Cloud
### What Docker Lacks
| Cloud Feature | Docker Alternative | Gap |
|---------------|--------------------|-----|
| Multi-AZ | Multi-host Swarm | Fundamentally different |
| VPC Peering | `docker network connect` | Works, but is not VPC peering |
| NAT Gateway | Port mapping on `127.0.0.1:PORT` | Single host only |
| Network ACLs | Not available | No granular control |
| VPC Flow Logs | Not available | No network logging |
| VPN Gateway | Not available | No VPN |
### When Docker Is Enough
Docker networks are a great fit for:
- Local development of cloud architectures
- Testing multi-tier isolation
- Simulating a VPC topology before deployment
- Learning and prototyping
Use AWS/Azure/GCP for:
- Multi-AZ production
- High availability
- Advanced networking (VPN, Direct Connect, etc.)
---
## Conclusion
Docker bridge networks follow the same fundamental principles as cloud VPCs: isolation, IP segmentation, and access control. The concepts you learn locally apply directly to AWS VPC, Azure VNet, and Google Cloud VPC.
When you work with cloud VPCs, remember:
- **Docker bridge network** = **VPC** (isolated network)
- **Subnet (`--subnet` CIDR)** = **cloud subnet** (IP segment)
- **`--internal` flag** = **private subnet** (no route to the internet)
- **Container on a network** = **EC2 instance in a subnet** (entity in the network)
- **`docker network connect`** = **attach network interface** (network attachment)
Understanding these parallels will let you design secure cloud architectures with the skills you acquire locally.
---
## Further Reading
- [AWS VPC Documentation](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
- [AWS VPC Best Practices](https://docs.aws.amazon.com/vpc/latest/userguide/best-practices.html)
- [Tutorial: Create VPC Networks](../tutorial/01-create-vpc-networks.md)
- [Reference: VPC Mapping](../reference/vpc-network-mapping.md)


@@ -0,0 +1,102 @@
# How-To: Clean Up Docker Networks
A guide to removing Docker networks, containers, and volumes.
## Clean Up Networks
### A Single Network
```bash
# Remove a specific network
docker network rm my-network
```
### All Custom Networks (preserves bridge, host, none)
```bash
# List only the custom networks and remove them
docker network ls --filter 'type=custom' -q | xargs docker network rm
```
## Clean Up Containers and Networks Together
### Stop and Remove All Containers
```bash
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
```
### Remove All Unused Networks
```bash
docker network prune
```
## Clean Up a Specific Lab
### Lab 02 Network Cleanup
```bash
cd ~/laboratori-cloud/labs/lab-02-network

# Stop and remove the compose containers
docker compose down

# Remove the specific networks
docker network rm lab02-vpc-public lab02-vpc-private 2>/dev/null || true

# Remove the volumes (optional)
docker volume rm lab02-network_db-data 2>/dev/null || true
```
### Full Lab 02 Reset
```bash
cd ~/laboratori-cloud/labs/lab-02-network

# Tear everything down
docker compose down -v --remove-orphans
docker network prune -f
docker volume prune -f
```
## Verify the Cleanup
```bash
# Running containers
docker ps
# Existing networks
docker network ls
# Existing volumes
docker volume ls
```
## Troubleshooting
### Network in Use by Containers
```bash
# Find the containers using the network
docker network inspect my-network --format '{{json .Containers}}' | jq -r '.[] | .Name'

# Disconnect every container (docker network disconnect takes one container at a time)
docker network inspect my-network --format '{{json .Containers}}' | jq -r '.[] | .Name' \
  | xargs -n1 docker network disconnect -f my-network

# Remove the network
docker network rm my-network
```
### Containers with a "Ghost" Network
```bash
# Full Docker cleanup
docker system prune -a --volumes
```
## See Also
- [Reference: Docker Network Commands](../reference/docker-network-commands.md)
- [How-To: Reset the Docker Environment](../../how-to-guides/reset-docker-environment.md)


@@ -0,0 +1,82 @@
# How-To: Create a Custom Docker Network
A quick guide to creating Docker bridge networks with custom subnets.
## Quick Command
```bash
# Create a network with a custom subnet
docker network create --driver bridge --subnet 10.0.1.0/24 --gateway 10.0.1.1 my-custom-network
```
## Full Syntax
```bash
docker network create [OPTIONS] NETWORK

Options:
  --driver bridge    # Network driver (default: bridge)
  --subnet SUBNET    # CIDR block (e.g. 10.0.1.0/24)
  --gateway GATEWAY  # Gateway IP (e.g. 10.0.1.1)
  --internal         # Isolate the network (no external access)
  --attachable       # Allow standalone containers to attach
```
## Examples
### Standard Public Network
```bash
docker network create --driver bridge \
  --subnet 10.0.1.0/24 \
  --gateway 10.0.1.1 \
  my-public-network
```
### Isolated Private Network
```bash
docker network create --driver bridge \
  --subnet 10.0.2.0/24 \
  --gateway 10.0.2.1 \
  --internal \
  my-private-network
```
### Multi-Subnet Network
The bridge driver accepts a single IPv4 subnet; passing `--subnet` more than once requires a driver that supports it, such as `overlay` (Swarm mode):
```bash
docker network create --driver overlay \
  --subnet=10.0.10.0/24 \
  --gateway=10.0.10.1 \
  --subnet=10.0.20.0/24 \
  --gateway=10.0.20.1 \
  my-multi-network
```
## Verify
```bash
# List networks
docker network ls
# Inspect the network
docker network inspect my-custom-network
# Remove the network
docker network rm my-custom-network
```
## Cloud Naming (PARA-02)
| Local | AWS Cloud | Recommendation |
|-------|-----------|----------------|
| `vpc-main` | VPC | Main VPC name |
| `public-subnet-1a` | Public Subnet | Public subnet + AZ |
| `private-subnet-1a` | Private Subnet | Private subnet + AZ |
| `10.0.1.0/24` | CIDR | /24 per subnet |
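The naming pattern in the table can be generated mechanically, which keeps scripted network creation consistent. A trivial hedged sketch (the `net_name` helper is illustrative, not part of the lab):

```bash
# Compose a network name as <env>-<tier>-<visibility>-<az>
net_name() {
  printf '%s-%s-%s-%s\n' "$1" "$2" "$3" "$4"
}

net_name prod app public 1a   # -> prod-app-public-1a
net_name prod db private 1b   # -> prod-db-private-1b
```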
## See Also
- [Tutorial: Create VPC Networks](../tutorial/01-create-vpc-networks.md)
- [Reference: Docker Network Commands](../reference/docker-network-commands.md)


@@ -0,0 +1,89 @@
# How-To: Inspect Network Configuration
A guide to analyzing and debugging Docker networks.
## Inspect a Specific Network
```bash
docker network inspect NETWORK_NAME
```
JSON output including: subnet, gateway, driver, attached containers.
## Useful Commands
### Show Only the Important Information
```bash
# Subnet and gateway
docker network inspect my-network --format '{{range .IPAM.Config}}{{.Subnet}} (GW: {{.Gateway}}){{end}}'
# Attached containers only
docker network inspect my-network --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{end}}'
# Driver and scope
docker network inspect my-network --format 'Driver: {{.Driver}}, Scope: {{.Scope}}'
```
### See the Containers on a Network
```bash
# Method 1: via inspect
docker network inspect my-network --format '{{json .Containers}}' | jq -r '.[] | .Name'
# Method 2: via docker ps with a filter
docker ps --filter "network=my-network" --format "{{.Names}}"
```
### Check a Container's IP
```bash
# All of the container's IPs, keyed by network name
docker inspect container-name --format '{{range $k, $v := .NetworkSettings.Networks}}{{$k}}: {{$v.IPAddress}} {{end}}'
# IP on a specific network
docker inspect container-name --format '{{range $k, $v := .NetworkSettings.Networks}}{{if eq $k "my-network"}}{{$v.IPAddress}}{{end}}{{end}}'
```
### Debug with Formatted Output
```bash
# Table of container -> network -> IP
docker ps --format "{{.Names}}" | while read c; do
  echo "Container: $c"
  docker inspect "$c" --format '{{range $k, $v := .NetworkSettings.Networks}}  {{$k}}: {{$v.IPAddress}}{{end}}'
done
```
## Troubleshooting
### Network Not Found
```bash
# Check whether the network exists
docker network ls | grep my-network
# If it does not exist, create it
docker network create my-network
```
### Container Not on a Network
```bash
# Attach a container to a network
docker network connect my-network container-name
# Detach a container from a network
docker network disconnect my-network container-name
```
### Subnet Conflicts
```bash
# Find conflicting subnets
docker network ls -q | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}' | grep "10.0.1"
```
## See Also
- [Reference: Compose Network Syntax](../reference/compose-network-syntax.md)


@@ -0,0 +1,87 @@
# How-To: Test Network Isolation
A guide to verifying that isolation between Docker networks works correctly.
## Quick Test
```bash
# Create two containers on different networks
docker run -d --name test1 --network net1 alpine sleep 3600
docker run -d --name test2 --network net2 alpine sleep 3600

# Test: SHOULD FAIL (isolation)
docker exec test1 ping -c 1 test2

# Cleanup
docker stop test1 test2 && docker rm test1 test2
```
## Complete Test
### 1. Create Test Networks
```bash
docker network create --subnet 10.0.1.0/24 test-net1
docker network create --subnet 10.0.2.0/24 test-net2
```
### 2. Create Containers
```bash
# Containers on the same network
docker run -d --name c1 --network test-net1 alpine sleep 3600
docker run -d --name c2 --network test-net1 alpine sleep 3600

# Container on a different network
docker run -d --name c3 --network test-net2 alpine sleep 3600
```
### 3. Test Isolation
```bash
# Same network: SUCCEEDS
docker exec c1 ping -c 2 -W 1 c2

# Different networks: FAILS (expected)
docker exec c1 ping -c 2 -W 1 c3
```
### 4. Test DNS
```bash
# DNS on the same network: SUCCEEDS
docker exec c1 nslookup c2

# Cross-network DNS: FAILS (expected)
docker exec c1 nslookup c3
```
### 5. Cleanup
```bash
docker stop c1 c2 c3
docker rm c1 c2 c3
docker network rm test-net1 test-net2
```
## Test with the Lab Script
Use the lab's script:
```bash
bash labs/lab-02-network/tests/02-isolation-verification-test.sh
```
## Expected Results
| Test | Expected Result | Meaning |
|------|-----------------|---------|
| `ping c2` from c1 (same network) | SUCCEEDS | Communication works |
| `ping c3` from c1 (different network) | FAILS | Isolation is working |
| `nslookup c2` from c1 | SUCCEEDS | In-network DNS works |
| `nslookup c3` from c1 | FAILS | DNS is isolated between networks |
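A table like this maps naturally onto a small assertion helper that treats "expected failure" as a pass. A hedged sketch (the `check` helper is illustrative; `true`/`false` stand in for the `docker exec … ping` calls above so the snippet runs without Docker):

```bash
# check EXPECT CMD...: run CMD and compare its success/failure to EXPECT (ok|fail)
check() {
  local expect=$1; shift
  local got
  if "$@" >/dev/null 2>&1; then got=ok; else got=fail; fi
  if [ "$got" = "$expect" ]; then
    echo "PASS ($expect): $*"
  else
    echo "FAIL (wanted $expect, got $got): $*"
  fi
}

# In the lab these would be, e.g.:
#   check ok   docker exec c1 ping -c 2 -W 1 c2
#   check fail docker exec c1 ping -c 2 -W 1 c3
check ok true     # prints: PASS (ok): true
check fail false  # prints: PASS (fail): false
```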
## See Also
- [Tutorial: Verify Isolation](../tutorial/03-verify-network-isolation.md)
- [Test: Isolation Verification Script](../tests/02-isolation-verification-test.sh)


@@ -0,0 +1,284 @@
# Reference: Docker Compose Network Syntax
Technical reference for defining networks in docker-compose.yml.
## Basic Structure
```yaml
version: "3.8"

networks:
  network-name:
    driver: bridge
    name: actual-network-name
    ipam:
      driver: default
      config:
        - subnet: 10.0.1.0/24
          gateway: 10.0.1.1

services:
  service-name:
    image: image:tag
    networks:
      - network-name
```
## The networks Section
### Minimal Configuration
```yaml
networks:
  my-network:
    driver: bridge
```
### Full Configuration
```yaml
networks:
  vpc-public:
    name: lab02-vpc-public # Actual network name
    driver: bridge         # Driver (bridge, overlay)
    driver_opts:
      com.docker.network.bridge.name: br-public # Host bridge name
    ipam:
      driver: default
      config:
        - subnet: 10.0.1.0/24
          gateway: 10.0.1.1
          ip_range: 10.0.1.128/25 # (optional) Range for containers
    internal: false   # (optional) Isolate the network
    attachable: false # (optional) Allow external containers
    labels:           # (optional) Metadata
      env: development
```
### Internal (Private) Network
```yaml
networks:
  vpc-private:
    driver: bridge
    internal: true # Blocks external access
    ipam:
      config:
        - subnet: 10.0.2.0/24
          gateway: 10.0.2.1
```
### External (Pre-Existing) Network
```yaml
networks:
  external-network:
    name: existing-network # Use an existing network
    external: true
```
## The services Section
### Container on a Single Network
```yaml
services:
  web:
    image: nginx:alpine
    networks:
      - vpc-public
```
### Container with a Static IP
```yaml
services:
  web:
    image: nginx:alpine
    networks:
      vpc-public:
        ipv4_address: 10.0.1.10
```
### Container on Multiple Networks (Multi-Homed)
```yaml
services:
  app:
    image: myapp:latest
    networks:
      vpc-public:
        ipv4_address: 10.0.1.20
      vpc-private:
        ipv4_address: 10.0.2.20
```
### Custom DNS Aliases
```yaml
services:
  db:
    image: postgres:16
    networks:
      vpc-private:
        aliases:
          - database
          - postgres-primary
```
## Port Publishing (INF-02)
### Safe (Localhost Only)
```yaml
services:
  web:
    ports:
      - "127.0.0.1:8080:80" # Localhost only (COMPLIANT)
      - "127.0.0.1:8443:443"
```
### Unsafe (All Interfaces)
```yaml
services:
  web:
    ports:
      - "8080:80"         # VIOLATES INF-02 (0.0.0.0:8080)
      - "0.0.0.0:8080:80" # VIOLATES INF-02 (explicit)
```
### No Ports (Private Service)
```yaml
services:
  db:
    image: postgres:16-alpine
    # No ports section - fully private
```
## Ordering and Dependencies
```yaml
services:
  app:
    image: myapp
    networks:
      - vpc-public
    depends_on:
      - db
  db:
    image: postgres
    networks:
      - vpc-private
```
## Complete Example
```yaml
version: "3.8"

services:
  web:
    image: nginx:alpine
    container_name: lab02-web
    networks:
      vpc-public:
        ipv4_address: 10.0.1.10
    ports:
      - "127.0.0.1:8080:80"
    restart: unless-stopped

  app:
    image: myapp:latest
    container_name: lab02-app
    networks:
      vpc-public:
        ipv4_address: 10.0.1.20
      vpc-private:
        ipv4_address: 10.0.2.20
    ports:
      - "127.0.0.1:8081:8080"
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    container_name: lab02-db
    environment:
      POSTGRES_PASSWORD: secret
    networks:
      vpc-private:
        ipv4_address: 10.0.2.10
    volumes:
      - db-data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  db-data:

networks:
  vpc-public:
    name: lab02-vpc-public
    driver: bridge
    ipam:
      config:
        - subnet: 10.0.1.0/24
          gateway: 10.0.1.1
  vpc-private:
    name: lab02-vpc-private
    driver: bridge
    internal: true
    ipam:
      config:
        - subnet: 10.0.2.0/24
          gateway: 10.0.2.1
```
## Verification Commands
```bash
# Validate the configuration
docker compose -f docker-compose.yml config

# Show the generated networks
docker compose -f docker-compose.yml config | grep -A 20 "networks:"

# Create networks and containers without starting the services
docker compose -f docker-compose.yml up --no-deps --no-start

# Inspect the created network
docker network inspect lab02-vpc-public
```
## Troubleshooting
### Subnet Conflicts
```bash
# Check the subnets in use
docker network ls -q | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
```
Then pick a different CIDR in the compose file:
```yaml
ipam:
  config:
    - subnet: 10.0.10.0/24 # Use a different CIDR
```
### Containers Not Getting an IP
Remove the static IPs and let Docker assign addresses automatically:
```yaml
services:
  web:
    networks:
      - vpc-public # ipv4_address removed
```
## See Also
- [Tutorial: Deploy Containers](../tutorial/02-deploy-containers-networks.md)
- [Reference: Docker Network Commands](./docker-network-commands.md)


@@ -0,0 +1,179 @@
# Reference: Comandi Docker Network
Riferimento rapido per i comandi Docker network.
## Comandi Principali
### Creare una Rete
```bash
docker network create [OPTIONS] NETWORK
# Sintassi base
docker network create my-network
# Con subnet personalizzata
docker network create --subnet 10.0.1.0/24 --gateway 10.0.1.1 my-network
# Rete interna (isolata)
docker network create --internal my-internal-network
# Specifica driver
docker network create --driver bridge my-bridge-network
```
### Lista Reti
```bash
# Tutte le reti
docker network ls
# Con dettagli
docker network ls --no-trunc
# Solo reti custom
docker network ls --filter 'type=custom'
# Format output
docker network ls --format "table {{.Name}}\t{{.Driver}}\t{{.Scope}}"
```
### Ispezionare una Rete
```bash
# Output JSON completo
docker network inspect NETWORK
# Output specifico
docker network inspect NETWORK --format '{{.IPAM.Config}}'
docker network inspect NETWORK --format '{{.Driver}}'
docker network inspect NETWORK --format '{{.Containers}}'
```
### Collegare Container a Rete
```bash
# Collega container a rete
docker network connect NETWORK CONTAINER
# Con IP specifico
docker network connect NETWORK CONTAINER --ip 10.0.1.100
# Con alias DNS
docker network connect NETWORK CONTAINER --alias my-service
```
### Scollegare Container da Rete
```bash
# Scollega container
docker network disconnect NETWORK CONTAINER
# Forza (se in uso)
docker network disconnect -f NETWORK CONTAINER
```
### Rimuovere Reti
```bash
# Rimuovi rete specifica
docker network rm NETWORK
# Rimuovi piu reti
docker network rm NETWORK1 NETWORK2 NETWORK3
# Rimuovi reti non usate
docker network prune
# Rimuovi tutte le reti custom (attenzione!)
docker network ls -q | xargs docker network rm
```
## Common Options
| Option | Description | Example |
|--------|-------------|---------|
| `--driver` | Network driver | `--driver bridge` |
| `--subnet` | Subnet CIDR | `--subnet 10.0.1.0/24` |
| `--gateway` | Gateway IP | `--gateway 10.0.1.1` |
| `--internal` | Isolates the network from external access | `--internal` |
| `--attachable` | Allows standalone containers to attach | `--attachable` |
| `--ip-range` | IP range assigned to containers | `--ip-range 10.0.1.128/25` |
## Network Drivers
| Driver | Description | Use |
|--------|-------------|-----|
| `bridge` | Linux bridge (default) | Isolated networks on a single host |
| `overlay` | Swarm overlay | Multi-host networking |
| `host` | Host networking | No network isolation |
| `macvlan` | MACVLAN | Unique MAC address per container |
| `none` | No networking | Containers without a network |
## Output Format
### Format Templates
```bash
# Name and driver
docker network ls --format '{{.Name}}: {{.Driver}}'
# Subnet
docker network inspect NETWORK --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
# Containers with their IPs
docker network inspect NETWORK --format '{{range .Containers}}{{.Name}}: {{.IPv4Address}}{{end}}'
# Full JSON
docker network inspect NETWORK --format '{{json .}}'
```
### Available Placeholders
| Placeholder | Description |
|-------------|-------------|
| `{{.Name}}` | Network name |
| `{{.Id}}` | Network ID |
| `{{.Driver}}` | Driver |
| `{{.Scope}}` | Scope (local/swarm) |
| `{{.Internal}}` | Internal flag |
| `{{.IPAM.Config}}` | IPAM configuration |
| `{{.Containers}}` | Connected containers |
| `{{.Options}}` | Network options |
## Practical Examples
### Creating a VPC with Subnets
```bash
# Public subnet
docker network create --driver bridge \
  --subnet 10.0.1.0/24 \
  --gateway 10.0.1.1 \
  vpc-public
# Private subnet
docker network create --driver bridge \
  --subnet 10.0.2.0/24 \
  --gateway 10.0.2.1 \
  --internal \
  vpc-private
```
### Debugging Networks
```bash
# Show containers in a network
docker network inspect vpc-public --format '{{json .Containers}}' | jq -r '.[] | .Name'
# Check a container's IPs
docker inspect container --format '{{range $n, $c := .NetworkSettings.Networks}}{{$n}}: {{$c.IPAddress}}{{end}}'
# Find the networks a container is attached to
docker inspect container --format '{{range $n, $c := .NetworkSettings.Networks}}{{$n}} {{end}}'
```
## See Also
- [Tutorial: Creating VPC Networks](../tutorial/01-create-vpc-networks.md)
- [Reference: Compose Network Syntax](./compose-network-syntax.md)


@@ -0,0 +1,125 @@
# Reference: VPC ↔ Docker Network Mapping
A quick-reference table of the parallels between Docker networks and cloud VPCs.
## Main Parallels Table
| Docker Concept | AWS VPC Equivalent | Description |
|----------------|--------------------|-------------|
| Bridge network | VPC | Isolated virtual network |
| Subnet (10.0.x.0/24) | Subnet CIDR | IP segment within the VPC |
| Container | EC2 instance | Compute entity in the network |
| `--internal` flag | Private subnet (no IGW) | Isolation from the internet |
| `--gateway` | Subnet gateway | Default gateway of the subnet |
| Embedded DNS | Route 53 Resolver | Name resolution |
| `docker network connect` | Attach network interface | Attaching to a network |
| Port mapping (`8080:80`) | Security group + NAT | Access rules + NAT |
## Commands Side by Side
### VPC/Subnet Creation
| Local Operation | AWS Command |
|-----------------|-------------|
| `docker network create --driver bridge --subnet 10.0.1.0/24 vpc-main` | `aws ec2 create-vpc --cidr-block 10.0.0.0/16` |
| `--subnet 10.0.1.0/24 --gateway 10.0.1.1` | `aws ec2 create-subnet --vpc-id VPC_ID --cidr-block 10.0.1.0/24` |
| `--internal` | No route to Internet Gateway |
### Managing Networks
| Local Operation | AWS Command |
|-----------------|-------------|
| `docker network ls` | `aws ec2 describe-vpcs` |
| `docker network inspect vpc-main` | `aws ec2 describe-vpcs --vpc-ids VPC_ID` |
| `docker network rm vpc-main` | `aws ec2 delete-vpc --vpc-id VPC_ID` |
### Containers in a Network
| Local Operation | AWS Command |
|-----------------|-------------|
| `docker run --network vpc-main nginx` | `aws ec2 run-instances --subnet-id SUBNET_ID` |
| `docker network connect vpc-main container` | `aws ec2 attach-network-interface` |
| `docker network disconnect vpc-main container` | `aws ec2 detach-network-interface` |
## Standard CIDR Blocks
| Local Type | Cloud CIDR | Use |
|------------|------------|-----|
| `10.0.0.0/16` | `10.0.0.0/16` | Main VPC |
| `10.0.1.0/24` | `10.0.1.0/24` | Public subnet (1a) |
| `10.0.2.0/24` | `10.0.2.0/24` | Private subnet (1a) |
| `10.0.3.0/24` | `10.0.3.0/24` | Private subnet (1b) |
| `10.0.4.0/24` | `10.0.4.0/24` | Public subnet (1b) |
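Because all of these blocks sit on /24 boundaries, mapping an address to its tier is just a prefix check. The sketch below is a pure-bash illustration (no Docker needed); the role labels are this lab's own naming, not AWS resources.

```shell
# Hypothetical lookup: classify an IP by the lab's standard /24 subnets.
subnet_role() {
  case "$1" in
    10.0.1.*) echo "public-1a" ;;
    10.0.2.*) echo "private-1a" ;;
    10.0.3.*) echo "private-1b" ;;
    10.0.4.*) echo "public-1b" ;;
    *)        echo "unknown" ;;
  esac
}

subnet_role 10.0.2.17   # private-1a: the address falls in 10.0.2.0/24
```

Note that this string-prefix trick only works because the subnets are aligned on /24 boundaries; for arbitrary CIDR masks you would need real bitwise arithmetic.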
## Cloud Naming (PARA-02)
### Naming Pattern
```
[Role]-[Environment]-[Type]-[Zone]
Examples:
lab02-vpc-public (lab public VPC)
lab02-vpc-private (lab private VPC)
prod-vpc-main (production VPC)
dev-app-public-1a (dev public subnet, AZ 1a)
```
### Tagging Docker Networks
```bash
# Add metadata to networks
docker network create \
  --label env=development \
  --label tier=frontend \
  --label owner=lab02 \
  frontend-network
```
## Security Groups ↔ Docker Isolation
| AWS Security Group | Docker Equivalent |
|--------------------|--------------------|
| All traffic from SG | Containers in the same network |
| No ingress rules | `--internal` network |
| Specific port allow | Port mapping `127.0.0.1:PORT:CONTAINER` |
| SG reference rule | Multi-network container |
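The port-mapping row above is exactly what this lab's INF-02 tests look for. As a rough sketch of that classification (pure bash pattern matching; `check_binding` is a hypothetical helper name, and a real check should parse the rendered `docker compose config` rather than raw strings):

```shell
# Hypothetical INF-02 classifier for a single Compose port mapping string.
check_binding() {
  case "$1" in
    127.0.0.1:*:*) echo "compliant" ;;    # localhost-only binding
    0.0.0.0:*:*)   echo "violation" ;;    # explicit all-interfaces binding
    *:*:*)         echo "review" ;;       # some other host IP: review manually
    *:*)           echo "violation" ;;    # bare HOST:CONTAINER defaults to 0.0.0.0
    *)             echo "unpublished" ;;  # no mapping at all
  esac
}

check_binding "127.0.0.1:8080:80"   # compliant
check_binding "8080:80"             # violation
```

The key rule it encodes: a two-part mapping like `8080:80` publishes on 0.0.0.0 by default, so only three-part mappings pinned to `127.0.0.1` satisfy INF-02.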
## AWS Routing ↔ Docker Bridge
| AWS Route | Docker Bridge |
|-----------|---------------|
| Internet Gateway | Container host routing |
| NAT Gateway | Container port mapping |
| VPC Peering | `docker network connect` (shared) |
| Transit Gateway | Multi-network container (router) |
## Limitations
| Aspect | Local Docker | AWS Cloud |
|--------|--------------|-----------|
| Host scope | Single host | Multi-AZ, multi-region |
| External access | NAT/port mapping | Internet Gateway, NAT Gateway |
| DNS resolution | Embedded DNS | Route 53 |
| Network ACLs | Not available | Network ACLs available |
| Flow logs | Not available | VPC Flow Logs available |
## Useful Commands
```bash
# Check a network's subnet
docker network inspect vpc-public --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
# List containers with their IPs
docker ps -q | xargs docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}{{.Name}}'
# Simulate a multi-tier VPC topology
docker network create --subnet 10.0.1.0/24 public
docker network create --subnet 10.0.2.0/24 private
docker network create --subnet 10.0.3.0/24 data
```
## See Also
- [Explanation: Docker VPC Parallels](../explanation/docker-network-vpc-parallels.md)
- [How-To: Create Custom Network](../how-to-guides/create-custom-network.md)


@@ -0,0 +1,194 @@
#!/bin/bash
# Test 01: Network Creation Validation
# Validates Docker bridge network creation with custom subnets (VPC simulation)
# Usage: bash labs/lab-02-network/tests/01-network-creation-test.sh
set -euo pipefail
# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Get script directory
TEST_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$TEST_DIR/../.." && pwd)"
# Counter helpers to handle set -e
pass_count=0
fail_count=0
skip_count=0
inc_pass() { ((pass_count++)) || true; }
inc_fail() { ((fail_count++)) || true; }
inc_skip() { ((skip_count++)) || true; }
# Test helper functions
print_header() {
echo -e "${BLUE}═══════════════════════════════════════════════════════════════${NC}"
echo -e "${BLUE}$1${NC}"
echo -e "${BLUE}═══════════════════════════════════════════════════════════════${NC}"
}
print_test() {
echo -e "\n${BLUE}[TEST]${NC} $1"
}
print_pass() {
echo -e "${GREEN}[PASS]${NC} $1"
inc_pass
}
print_fail() {
echo -e "${RED}[FAIL]${NC} $1"
inc_fail
}
print_skip() {
echo -e "${YELLOW}[SKIP]${NC} $1"
inc_skip
}
# Cleanup function
cleanup() {
echo -e "\n${BLUE}[*] Cleaning up test networks...${NC}"
docker network rm test-vpc-public 2>/dev/null || true
docker network rm test-vpc-private 2>/dev/null || true
docker network rm test-net1 2>/dev/null || true
docker network rm test-net2 2>/dev/null || true
echo -e "${GREEN}[✓] Cleanup complete${NC}"
}
# Set trap for cleanup
trap cleanup EXIT
# Start testing
print_header "Lab 02 - Test 01: Network Creation Validation"
# Test 1: Verify Docker is available
print_test "Test 1: Verify Docker is available"
if command -v docker &> /dev/null; then
print_pass "Docker command is available"
docker --version
else
print_fail "Docker command not found. Please install Docker."
exit 1
fi
# Test 2: Create public network with custom subnet (VPC simulation)
print_test "Test 2: Create public network with custom subnet (10.0.1.0/24)"
if docker network create --driver bridge --subnet 10.0.1.0/24 --gateway 10.0.1.1 test-vpc-public &> /dev/null; then
print_pass "Created public network test-vpc-public with subnet 10.0.1.0/24"
docker network inspect test-vpc-public --format='Subnet: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
else
print_fail "Failed to create public network with custom subnet"
exit 1
fi
# Test 3: Create private network with --internal flag
print_test "Test 3: Create private network with --internal flag (10.0.2.0/24)"
if docker network create --driver bridge --internal --subnet 10.0.2.0/24 --gateway 10.0.2.1 test-vpc-private &> /dev/null; then
print_pass "Created private network test-vpc-private with --internal flag"
docker network inspect test-vpc-private --format='Internal: {{.Internal}}'
else
print_fail "Failed to create private network with --internal flag"
exit 1
fi
# Test 4: Verify networks appear in docker network ls
print_test "Test 4: Verify networks appear in docker network ls"
public_found=$(docker network ls --format '{{.Name}}' | grep -c "^test-vpc-public$" || true)
private_found=$(docker network ls --format '{{.Name}}' | grep -c "^test-vpc-private$" || true)
if [[ $public_found -eq 1 && $private_found -eq 1 ]]; then
print_pass "Both test-vpc-public and test-vpc-private found in network list"
docker network ls --filter "name=test-vpc-" --format "table {{.Name}}\t{{.Driver}}\t{{.Scope}}"
else
print_fail "Networks not found in list. public_found=$public_found, private_found=$private_found"
fi
# Test 5: Verify network inspection shows correct subnet
print_test "Test 5: Verify network inspection shows correct subnet configuration"
public_subnet=$(docker network inspect test-vpc-public --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null || echo "")
private_subnet=$(docker network inspect test-vpc-private --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null || echo "")
if [[ "$public_subnet" == "10.0.1.0/24" && "$private_subnet" == "10.0.2.0/24" ]]; then
print_pass "Network subnets correctly configured"
echo " Public: $public_subnet"
echo " Private: $private_subnet"
else
print_fail "Network subnets mismatch. public=$public_subnet (expected 10.0.1.0/24), private=$private_subnet (expected 10.0.2.0/24)"
fi
# Test 6: Verify private network has internal flag set
print_test "Test 6: Verify private network has --internal flag set"
internal_flag=$(docker network inspect test-vpc-private --format '{{.Internal}}' 2>/dev/null || echo "false")
if [[ "$internal_flag" == "true" ]]; then
print_pass "Private network correctly marked as internal (no external access)"
else
print_fail "Private network internal flag not set (value: $internal_flag)"
fi
# Test 7: Verify bridge driver is used
print_test "Test 7: Verify bridge driver is used for both networks"
public_driver=$(docker network inspect test-vpc-public --format '{{.Driver}}' 2>/dev/null || echo "")
private_driver=$(docker network inspect test-vpc-private --format '{{.Driver}}' 2>/dev/null || echo "")
if [[ "$public_driver" == "bridge" && "$private_driver" == "bridge" ]]; then
print_pass "Both networks use bridge driver"
else
print_fail "Driver mismatch. public=$public_driver, private=$private_driver (expected: bridge)"
fi
# Test 8: Check if docker-compose.yml exists (optional - may not exist yet in RED phase)
print_test "Test 8: Check if lab docker-compose.yml exists"
COMPOSE_FILE="$PROJECT_ROOT/labs/lab-02-network/docker-compose.yml"
if [[ -f "$COMPOSE_FILE" ]]; then
print_pass "docker-compose.yml found at $COMPOSE_FILE"
# Test 9: Verify docker-compose config is valid
print_test "Test 9: Validate docker-compose.yml syntax"
if docker compose -f "$COMPOSE_FILE" config &> /dev/null; then
print_pass "docker-compose.yml is valid YAML"
else
print_fail "docker-compose.yml has syntax errors"
fi
else
print_skip "docker-compose.yml not found yet (expected in RED phase - will be created in GREEN phase)"
fi
# Test 10: Verify networks can be removed
print_test "Test 10: Verify test networks can be removed"
# This will be handled by cleanup trap, but we verify the command works
if docker network rm test-vpc-public test-vpc-private &> /dev/null; then
print_pass "Test networks successfully removed"
# Recreate for cleanup trap
docker network create --driver bridge --subnet 10.0.1.0/24 --gateway 10.0.1.1 test-vpc-public &> /dev/null
docker network create --driver bridge --internal --subnet 10.0.2.0/24 --gateway 10.0.2.1 test-vpc-private &> /dev/null
else
print_fail "Failed to remove test networks"
fi
# Summary
print_header "Test Summary"
echo -e "Total tests run: $((pass_count + fail_count + skip_count))"
echo -e "${GREEN}Passed: $pass_count${NC}"
if [[ $fail_count -gt 0 ]]; then
echo -e "${RED}Failed: $fail_count${NC}"
fi
if [[ $skip_count -gt 0 ]]; then
echo -e "${YELLOW}Skipped: $skip_count${NC}"
fi
# Exit with error code if any tests failed
if [[ $fail_count -gt 0 ]]; then
echo -e "\n${RED}Some tests failed. Please review the output above.${NC}"
exit 1
else
echo -e "\n${GREEN}All tests passed! Network creation is working correctly.${NC}"
exit 0
fi


@@ -0,0 +1,260 @@
#!/bin/bash
# Test 02: Isolation Verification
# Validates network isolation between Docker bridge networks
# Usage: bash labs/lab-02-network/tests/02-isolation-verification-test.sh
set -euo pipefail
# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Get script directory
TEST_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Counter helpers
pass_count=0
fail_count=0
skip_count=0
inc_pass() { ((pass_count++)) || true; }
inc_fail() { ((fail_count++)) || true; }
inc_skip() { ((skip_count++)) || true; }
# Test helper functions
print_header() {
echo -e "${BLUE}═══════════════════════════════════════════════════════════════${NC}"
echo -e "${BLUE}$1${NC}"
echo -e "${BLUE}═══════════════════════════════════════════════════════════════${NC}"
}
print_test() {
echo -e "\n${BLUE}[TEST]${NC} $1"
}
print_pass() {
echo -e "${GREEN}[PASS]${NC} $1"
inc_pass
}
print_fail() {
echo -e "${RED}[FAIL]${NC} $1"
inc_fail
}
print_skip() {
echo -e "${YELLOW}[SKIP]${NC} $1"
inc_skip
}
print_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
# Cleanup function
cleanup() {
echo -e "\n${BLUE}[*] Cleaning up test containers and networks...${NC}"
# Stop and remove containers
for container in c1 c2 c3 c4; do
docker stop "$container" 2>/dev/null || true
docker rm "$container" 2>/dev/null || true
done
# Remove networks
for network in test-net1 test-net2 test-isolated-net; do
docker network rm "$network" 2>/dev/null || true
done
echo -e "${GREEN}[✓] Cleanup complete${NC}"
}
# Set trap for cleanup
trap cleanup EXIT
# Start testing
print_header "Lab 02 - Test 02: Network Isolation Verification"
# Test 1: Create two isolated networks
print_test "Test 1: Create two isolated bridge networks (10.0.1.0/24 and 10.0.2.0/24)"
if docker network create --driver bridge --subnet 10.0.1.0/24 --gateway 10.0.1.1 test-net1 &> /dev/null && \
docker network create --driver bridge --subnet 10.0.2.0/24 --gateway 10.0.2.1 test-net2 &> /dev/null; then
print_pass "Created two isolated networks successfully"
docker network ls --filter "name=test-net" --format "table {{.Name}}\t{{.Driver}}\t{{.Scope}}"
else
print_fail "Failed to create test networks"
exit 1
fi
# Test 2: Create containers in same network
print_test "Test 2: Create containers in the same network (test-net1)"
if docker run -d --name c1 --network test-net1 --hostname c1 alpine:3.19 sleep 3600 &> /dev/null && \
docker run -d --name c2 --network test-net1 --hostname c2 alpine:3.19 sleep 3600 &> /dev/null; then
print_pass "Created containers c1 and c2 in test-net1"
docker ps --filter "name=c[12]" --format "table {{.Names}}\t{{.Networks}}\t{{.Status}}"
else
print_fail "Failed to create containers in test-net1"
exit 1
fi
# Test 3: Containers in same network CAN communicate (ping should succeed)
print_test "Test 3: Containers in same network can communicate (ping test)"
if docker exec c1 ping -c 2 -W 1 c2 &> /dev/null; then
print_pass "c1 can successfully ping c2 (same-network communication works)"
docker exec c1 ping -c 2 -W 1 c2 | grep "packets transmitted"
else
print_fail "c1 cannot ping c2 (same-network communication should work)"
fi
# Test 4: Containers in same network can resolve by DNS name
print_test "Test 4: Containers in same network can resolve each other by DNS name"
if docker exec c1 nslookup c2 &> /dev/null; then
print_pass "DNS resolution works within same network"
docker exec c1 nslookup c2 | grep "Address" | head -1
else
print_fail "DNS resolution failed within same network"
fi
# Test 5: Create container in different network
print_test "Test 5: Create container c3 in different network (test-net2)"
if docker run -d --name c3 --network test-net2 --hostname c3 alpine:3.19 sleep 3600 &> /dev/null; then
print_pass "Created container c3 in isolated network test-net2"
print_info "c1 and c2 are in test-net1, c3 is in test-net2 (isolated)"
else
print_fail "Failed to create container c3 in test-net2"
exit 1
fi
# Test 6: Containers in DIFFERENT networks CANNOT communicate (isolation test)
print_test "Test 6: Containers in different networks CANNOT communicate (isolation verification)"
print_info "This test EXPECTS ping to FAIL (proves isolation works)"
if docker exec c1 ping -c 2 -W 1 c3 &> /dev/null; then
print_fail "c1 CAN ping c3 - ISOLATION FAILED! Networks are not isolated!"
print_fail "This is a security issue - containers should not reach across networks"
else
print_pass "c1 CANNOT ping c3 - Network isolation is working correctly!"
print_info "This is the expected behavior - isolation prevents cross-network communication"
fi
# Test 7: Cross-network DNS resolution should fail
print_test "Test 7: Cross-network DNS resolution should fail"
print_info "This test EXPECTS DNS lookup to FAIL (proves DNS isolation)"
if docker exec c1 nslookup c3 &> /dev/null; then
print_fail "c1 CAN resolve c3 by DNS - DNS isolation FAILED!"
else
print_pass "c1 CANNOT resolve c3 - DNS isolation is working correctly!"
fi
# Test 8: Create a container connected to both networks (multi-homed)
print_test "Test 8: Create multi-homed container c4 connected to BOTH networks"
if docker run -d --name c4 --network test-net1 alpine:3.19 sleep 3600 &> /dev/null && \
docker network connect test-net2 c4 &> /dev/null; then
print_pass "Created container c4 connected to both networks"
docker inspect c4 --format 'Networks: {{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}'
else
print_fail "Failed to create multi-homed container"
fi
# Test 9: Multi-homed container can reach containers in both networks
print_test "Test 9: Multi-homed container c4 can reach both c1 (net1) and c3 (net2)"
c4_to_c1_result=$(docker exec c4 ping -c 1 -W 1 c1 &> /dev/null && echo "OK" || echo "FAIL")
c4_to_c3_result=$(docker exec c4 ping -c 1 -W 1 c3 &> /dev/null && echo "OK" || echo "FAIL")
if [[ "$c4_to_c1_result" == "OK" && "$c4_to_c3_result" == "OK" ]]; then
print_pass "Multi-homed container can reach both networks (c4->c1: OK, c4->c3: OK)"
else
print_fail "Multi-homed container connectivity issue (c4->c1: $c4_to_c1_result, c4->c3: $c4_to_c3_result)"
fi
# Test 10: Verify isolation still works - c1 cannot ping c3 despite c4 being multi-homed
print_test "Test 10: Verify isolation - c1 still cannot reach c3 (despite c4 bridging networks)"
if docker exec c1 ping -c 1 -W 1 c3 &> /dev/null; then
print_fail "c1 CAN ping c3 - ISOLATION BROKEN! Multi-homing created a bridge!"
else
print_pass "Isolation maintained - c1 still cannot reach c3 (multi-homing doesn't break isolation)"
fi
# Test 11: Create isolated internal network
print_test "Test 11: Create internal network (no external access)"
if docker network create --driver bridge --internal --subnet 10.0.10.0/24 test-isolated-net &> /dev/null; then
print_pass "Created internal network test-isolated-net"
docker network inspect test-isolated-net --format 'Internal: {{.Internal}}'
else
print_fail "Failed to create internal network"
fi
# Test 12: Verify container in internal network cannot reach external internet
print_test "Test 12: Container in internal network cannot reach external internet"
docker run -d --name isolated-test --network test-isolated-net alpine:3.19 sleep 3600 &> /dev/null || true
if docker exec isolated-test ping -c 1 -W 1 8.8.8.8 &> /dev/null; then
print_fail "Container in internal network CAN reach internet - internal flag not working!"
else
print_pass "Container in internal network CANNOT reach internet (isolation works)"
print_info "This is expected behavior for --internal flag networks"
fi
# Cleanup isolated test container early
docker stop isolated-test &> /dev/null || true
docker rm isolated-test &> /dev/null || true
# Test 13: Verify network IP addresses don't overlap
print_test "Test 13: Verify network subnets are properly isolated (no IP overlap)"
net1_subnet=$(docker network inspect test-net1 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null || echo "")
net2_subnet=$(docker network inspect test-net2 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null || echo "")
if [[ "$net1_subnet" != "$net2_subnet" ]]; then
print_pass "Network subnets are different (no overlap)"
echo " test-net1: $net1_subnet"
echo " test-net2: $net2_subnet"
else
print_fail "Network subnets are identical - IP overlap will occur!"
fi
# Test 14: Verify containers have IPs from correct subnets
print_test "Test 14: Verify containers have IPs from their network's subnet"
c1_ip=$(docker inspect c1 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 2>/dev/null || echo "")
c3_ip=$(docker inspect c3 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 2>/dev/null || echo "")
c1_in_net1=$(echo "$c1_ip" | grep -q "^10.0.1." && echo "yes" || echo "no")
c3_in_net2=$(echo "$c3_ip" | grep -q "^10.0.2." && echo "yes" || echo "no")
if [[ "$c1_in_net1" == "yes" && "$c3_in_net2" == "yes" ]]; then
print_pass "Containers have IPs from correct subnets"
echo " c1 IP: $c1_ip (should be in 10.0.1.0/24)"
echo " c3 IP: $c3_ip (should be in 10.0.2.0/24)"
else
print_fail "Container IPs don't match expected subnets"
echo " c1 IP: $c1_ip (expected 10.0.1.x)"
echo " c3 IP: $c3_ip (expected 10.0.2.x)"
fi
# Summary
print_header "Test Summary"
echo -e "Total tests run: $((pass_count + fail_count + skip_count))"
echo -e "${GREEN}Passed: $pass_count${NC}"
if [[ $fail_count -gt 0 ]]; then
echo -e "${RED}Failed: $fail_count${NC}"
fi
if [[ $skip_count -gt 0 ]]; then
echo -e "${YELLOW}Skipped: $skip_count${NC}"
fi
# Isolation verification message
echo -e "\n${BLUE}[*] Network Isolation Summary${NC}"
echo -e "Same-network communication: ${GREEN}WORKS${NC} (expected)"
echo -e "Cross-network communication: ${GREEN}BLOCKED${NC} (isolation working)"
echo -e "DNS isolation: ${GREEN}ENFORCED${NC} (cross-network DNS blocked)"
echo -e "Internal network isolation: ${GREEN}ENFORCED${NC} (no external access)"
# Exit with error code if any tests failed
if [[ $fail_count -gt 0 ]]; then
echo -e "\n${RED}Some tests failed. Please review the output above.${NC}"
exit 1
else
echo -e "\n${GREEN}All tests passed! Network isolation is working correctly.${NC}"
exit 0
fi


@@ -0,0 +1,272 @@
#!/bin/bash
# Test 03: INF-02 Compliance Verification
# Validates INF-02 requirement: private networks must NOT expose ports on 0.0.0.0
# Usage: bash labs/lab-02-network/tests/03-inf02-compliance-test.sh
set -euo pipefail
# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Get script directory
TEST_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$TEST_DIR/../.." && pwd)"
# Counter helpers
pass_count=0
fail_count=0
skip_count=0
inc_pass() { ((pass_count++)) || true; }
inc_fail() { ((fail_count++)) || true; }
inc_skip() { ((skip_count++)) || true; }
# Test helper functions
print_header() {
echo -e "${BLUE}═══════════════════════════════════════════════════════════════${NC}"
echo -e "${BLUE}$1${NC}"
echo -e "${BLUE}═══════════════════════════════════════════════════════════════${NC}"
}
print_test() {
echo -e "\n${BLUE}[TEST]${NC} $1"
}
print_pass() {
echo -e "${GREEN}[PASS]${NC} $1"
inc_pass
}
print_fail() {
echo -e "${RED}[FAIL]${NC} $1"
inc_fail
}
print_skip() {
echo -e "${YELLOW}[SKIP]${NC} $1"
inc_skip
}
print_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
# INF-02 Requirement explanation
print_header "Lab 02 - Test 03: INF-02 Compliance Verification"
echo -e "${BLUE}INF-02 Requirement:${NC} Private networks must NOT expose ports on 0.0.0.0"
echo -e "${YELLOW}Allowed:${NC} 127.0.0.1 (localhost only) or no published ports"
echo -e "${RED}Forbidden:${NC} 0.0.0.0 (exposes to all network interfaces)"
echo -e ""
# Compose file path
COMPOSE_FILE="$PROJECT_ROOT/labs/lab-02-network/docker-compose.yml"
# Test 1: Verify docker-compose.yml exists
print_test "Test 1: Verify docker-compose.yml exists"
if [[ -f "$COMPOSE_FILE" ]]; then
print_pass "docker-compose.yml found at $COMPOSE_FILE"
COMPOSE_EXISTS=true
else
print_skip "docker-compose.yml not found at $COMPOSE_FILE"
print_info "This is expected in RED phase - file will be created in GREEN phase"
COMPOSE_EXISTS=false
# Skip remaining tests if compose file doesn't exist
print_header "Test Summary (Early Exit)"
echo -e "Total tests: $((pass_count + fail_count + skip_count))"
echo -e "${GREEN}Passed: $pass_count${NC}"
echo -e "${YELLOW}Skipped: $skip_count${NC}"
echo -e "${YELLOW}INF-02 compliance tests skipped - infrastructure not yet created${NC}"
exit 0
fi
# Test 2: Verify docker-compose.yml is valid YAML
print_test "Test 2: Validate docker-compose.yml syntax"
if docker compose -f "$COMPOSE_FILE" config &> /dev/null; then
print_pass "docker-compose.yml is valid YAML"
else
print_fail "docker-compose.yml has syntax errors"
print_info "Run 'docker compose -f docker-compose.yml config' to see errors"
fi
# Test 3: Check for 0.0.0.0 port bindings (CRITICAL - must not exist)
print_test "Test 3: Check for 0.0.0.0 port bindings (VIOLATES INF-02)"
print_info "Searching for pattern: 0.0.0.0:PORT"
ZERO_DETECTIONS=$(grep -n -E '0\.0\.0\.0:[0-9]+' "$COMPOSE_FILE" 2>/dev/null || echo "")
ZERO_COUNT=$(echo "$ZERO_DETECTIONS" | grep -c "0.0.0" || true)
if [[ -z "$ZERO_DETECTIONS" ]]; then
print_pass "No 0.0.0.0 port bindings found (COMPLIANT with INF-02)"
else
print_fail "Found $ZERO_COUNT occurrence(s) of 0.0.0.0 port bindings - INF-02 VIOLATION!"
echo "$ZERO_DETECTIONS" | while read -r line; do
echo -e "${RED} $line${NC}"
done
print_warning "0.0.0.0 exposes service on ALL network interfaces (security risk)"
print_info "Fix: Use '127.0.0.1:PORT:CONTAINER_PORT' for localhost-only access"
fi
# Test 4: Check for host:port format without explicit host (defaults to 0.0.0.0)
print_test "Test 4: Check for implicit 0.0.0.0 bindings (e.g., '8080:80' without host)"
print_info "Pattern: '- \"PORT:CONTAINER_PORT\"' (defaults to 0.0.0.0:PORT)"
# Look for port mappings without explicit host (e.g., "8080:80" instead of "127.0.0.1:8080:80")
IMPLICIT_ZERO=$(grep -n -E '^\s*-\s*"[0-9]+:[0-9]+' "$COMPOSE_FILE" 2>/dev/null || echo "")
IMPLICIT_ZERO_ALT=$(grep -A 5 -E '^\s*ports:\s*$' "$COMPOSE_FILE" | grep -E '^\s*-\s*[0-9]+:[0-9]+' || echo "")
if [[ -z "$IMPLICIT_ZERO" && -z "$IMPLICIT_ZERO_ALT" ]]; then
print_pass "No implicit 0.0.0.0 port bindings found"
else
if [[ -n "$IMPLICIT_ZERO" ]]; then
print_fail "Found implicit 0.0.0.0 bindings (format: 'PORT:CONTAINER')"
echo "$IMPLICIT_ZERO" | while read -r line; do
echo -e "${RED} $line${NC}"
done
fi
if [[ -n "$IMPLICIT_ZERO_ALT" ]]; then
print_fail "Found implicit 0.0.0.0 bindings (ports: section)"
echo "$IMPLICIT_ZERO_ALT" | while read -r line; do
echo -e "${RED} $line${NC}"
done
fi
print_warning "Port format 'PORT:CONTAINER' defaults to 0.0.0.0:PORT"
print_info "Fix: Use '127.0.0.1:PORT:CONTAINER_PORT' for localhost-only binding"
fi
# Test 5: Verify 127.0.0.1 bindings are used for private services
print_test "Test 5: Verify private services use 127.0.0.1 binding (localhost only)"
LOCALHOST_BINDINGS=$(grep -n -E '127\.0\.0\.1:[0-9]+' "$COMPOSE_FILE" 2>/dev/null || echo "")
LOCALHOST_COUNT=$(echo "$LOCALHOST_BINDINGS" | grep -c "127.0.0" || true)
if [[ -n "$LOCALHOST_BINDINGS" ]]; then
print_pass "Found $LOCALHOST_COUNT service(s) using 127.0.0.1 binding (secure)"
echo "$LOCALHOST_BINDINGS" | while read -r line; do
echo -e "${GREEN} $line${NC}"
done
else
print_skip "No 127.0.0.1 bindings found - services may have no published ports (acceptable)"
fi
# Test 6: Check for services with no published ports (most secure)
print_test "Test 6: Check for services with no published ports (fully private)"
print_info "Services with no 'ports:' section are fully internal (most secure)"
# Count services
TOTAL_SERVICES=$(docker compose -f "$COMPOSE_FILE" config --services 2>/dev/null | wc -l)
SERVICES_WITH_PORTS=$(grep -c -E '^\s*ports:\s*$' "$COMPOSE_FILE" || true)
if [[ $TOTAL_SERVICES -gt 0 ]]; then
PRIVATE_SERVICES=$((TOTAL_SERVICES - SERVICES_WITH_PORTS))
print_pass "Services analysis: $TOTAL_SERVICES total, $SERVICES_WITH_PORTS with exposed ports, $PRIVATE_SERVICES fully private"
print_info "Services with no published ports are accessible only within Docker networks"
else
print_skip "Could not count services"
fi
# Test 7: Verify network configuration uses custom bridge networks
print_test "Test 7: Verify custom bridge networks are defined (not default bridge)"
NETWORKS_SECTION=$(grep -A 10 "^networks:" "$COMPOSE_FILE" 2>/dev/null || echo "")
if [[ -n "$NETWORKS_SECTION" ]]; then
print_pass "Custom networks section found in docker-compose.yml"
echo "$NETWORKS_SECTION" | head -5
else
print_skip "No custom networks defined (services use default bridge)"
fi
# Test 8: Check for internal flag on private networks
print_test "Test 8: Check for 'internal: true' on private networks"
INTERNAL_NETWORKS=$(grep -B 5 -E 'internal:\s*true' "$COMPOSE_FILE" 2>/dev/null | grep -E '^\s+[a-z-]+:\s*$' || echo "")
if [[ -n "$INTERNAL_NETWORKS" ]]; then
print_pass "Found internal networks (no external access)"
echo "$INTERNAL_NETWORKS" | while read -r line; do
echo -e "${GREEN} $line${NC}"
done
else
print_skip "No internal networks found (acceptable - not all services need internal flag)"
fi
# Test 9: Verify no host networking mode
print_test "Test 9: Verify services don't use 'network_mode: host' (security risk)"
HOST_NETWORK=$(grep -E 'network_mode:\s*host' "$COMPOSE_FILE" 2>/dev/null || echo "")
if [[ -z "$HOST_NETWORK" ]]; then
print_pass "No services using host networking mode"
else
print_fail "Found 'network_mode: host' - VIOLATES isolation principles!"
echo "$HOST_NETWORK" | while read -r line; do
echo -e "${RED} $line${NC}"
done
print_warning "Host networking bypasses Docker network isolation"
fi
# Test 10: Generate INF-02 compliance report
print_test "Test 10: Generate INF-02 compliance summary"
# Collect all issues
TOTAL_ISSUES=0
if [[ $ZERO_COUNT -gt 0 ]]; then
((TOTAL_ISSUES += ZERO_COUNT)) || true
fi
if [[ -n "$IMPLICIT_ZERO" || -n "$IMPLICIT_ZERO_ALT" ]]; then
((TOTAL_ISSUES++)) || true
fi
if [[ -n "$HOST_NETWORK" ]]; then
((TOTAL_ISSUES++)) || true
fi
echo -e "\n${BLUE}[*] INF-02 Compliance Report${NC}"
echo "Compose file: $COMPOSE_FILE"
echo "Total services: $TOTAL_SERVICES"
echo "Services with exposed ports: $SERVICES_WITH_PORTS"
echo "Fully private services: $PRIVATE_SERVICES"
if [[ $TOTAL_ISSUES -eq 0 ]]; then
echo -e "\n${GREEN}[✓] INF-02 STATUS: COMPLIANT${NC}"
print_pass "No security violations found"
echo " - No 0.0.0.0 bindings"
echo " - No implicit 0.0.0.0 bindings"
echo " - No host networking mode"
echo " - $PRIVATE_SERVICES services fully private"
else
echo -e "\n${RED}[✗] INF-02 STATUS: NON-COMPLIANT${NC}"
print_fail "Found $TOTAL_ISSUES compliance issue(s)"
echo " - 0.0.0.0 bindings: $ZERO_COUNT"
echo " - Implicit bindings: $([[ -n "$IMPLICIT_ZERO" || -n "$IMPLICIT_ZERO_ALT" ]] && echo "yes" || echo "no")"
echo " - Host networking: $([[ -n "$HOST_NETWORK" ]] && echo "yes" || echo "no")"
fi
# Summary
print_header "Test Summary"
echo -e "Total tests run: $((pass_count + fail_count + skip_count))"
echo -e "${GREEN}Passed: $pass_count${NC}"
if [[ $fail_count -gt 0 ]]; then
echo -e "${RED}Failed: $fail_count${NC}"
fi
if [[ $skip_count -gt 0 ]]; then
echo -e "${YELLOW}Skipped: $skip_count${NC}"
fi
# Exit with appropriate code
if [[ $fail_count -gt 0 ]]; then
echo -e "\n${RED}INF-02 compliance tests FAILED${NC}"
echo -e "Please fix the violations above before deploying to production."
exit 1
elif [[ $TOTAL_ISSUES -gt 0 ]]; then
echo -e "\n${YELLOW}INF-02 compliance warnings detected${NC}"
echo -e "Consider fixing the issues above for better security posture."
exit 0
else
echo -e "\n${GREEN}All INF-02 compliance tests PASSED${NC}"
echo -e "Infrastructure is compliant with security requirements."
exit 0
fi


@@ -0,0 +1,241 @@
#!/bin/bash
# Test 04: Infrastructure Verification
# Verifies that docker-compose.yml infrastructure is correctly deployed
# Usage: bash labs/lab-02-network/tests/04-verify-infrastructure.sh
set -euo pipefail
# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
BOLD='\033[1m'
NC='\033[0m'
# Get script directory
TEST_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
LAB_DIR="$(cd "$TEST_DIR/.." && pwd)"
# Counter helpers
pass_count=0
fail_count=0
inc_pass() { ((pass_count++)) || true; }
inc_fail() { ((fail_count++)) || true; }
# Helper functions
print_header() {
echo -e "${BLUE}╔═══════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}${NC} ${BOLD}$1${NC}"
echo -e "${BLUE}╚═══════════════════════════════════════════════════════════════╝${NC}"
}
print_test() {
echo -e "\n${BLUE}[TEST]${NC} $1"
}
print_pass() {
echo -e " ${GREEN}[✓]${NC} $1"
inc_pass
}
print_fail() {
echo -e " ${RED}[✗]${NC} $1"
inc_fail
}
print_info() {
echo -e " ${BLUE}[i]${NC} $1"
}
# Main verification
print_header "Lab 02 Infrastructure Verification"
cd "$LAB_DIR"
# Test 1: docker-compose.yml exists
print_test "Verifying docker-compose.yml exists"
if [[ -f "docker-compose.yml" ]]; then
print_pass "docker-compose.yml found"
else
print_fail "docker-compose.yml not found"
exit 1
fi
# Test 2: docker-compose.yml is valid
print_test "Validating docker-compose.yml syntax"
if docker compose config &> /dev/null; then
print_pass "docker-compose.yml has valid syntax"
else
print_fail "docker-compose.yml has syntax errors"
exit 1
fi
# Test 3: Networks are defined
print_test "Verifying VPC networks are defined"
if docker compose config --format json 2>/dev/null | grep -q '"networks"'; then
print_pass "Networks section found in compose file"
# Check for vpc-public
if docker compose config --format json 2>/dev/null | grep -q '"vpc-public"'; then
print_pass " vpc-public network defined"
else
print_fail " vpc-public network NOT defined"
fi
# Check for vpc-private
if docker compose config --format json 2>/dev/null | grep -q '"vpc-private"'; then
print_pass " vpc-private network defined"
else
print_fail " vpc-private network NOT defined"
fi
else
print_fail "No networks defined"
fi
# Test 4: INF-02 compliance check
print_test "Checking INF-02 compliance (no 0.0.0.0 bindings)"
ZERO_BINDINGS=$(grep -c '0\.0\.0\.0:' docker-compose.yml 2>/dev/null || true)
ZERO_BINDINGS=${ZERO_BINDINGS:-0}
if [[ $ZERO_BINDINGS -eq 0 ]]; then
print_pass "No 0.0.0.0 port bindings found (INF-02 compliant)"
else
print_fail "Found $ZERO_BINDINGS 0.0.0.0 bindings - INF-02 VIOLATION"
fi
# Test 5: Check for 127.0.0.1 bindings
print_test "Checking for localhost-only bindings (127.0.0.1)"
LOCALHOST_BINDINGS=$(grep -c '127\.0\.0\.1:' docker-compose.yml 2>/dev/null || true)
LOCALHOST_BINDINGS=${LOCALHOST_BINDINGS:-0}
if [[ $LOCALHOST_BINDINGS -gt 0 ]]; then
print_pass "Found $LOCALHOST_BINDINGS localhost-only bindings (secure)"
else
print_info "No 127.0.0.1 bindings - services may have no published ports"
fi
# Test 6: Start services
print_test "Starting Docker Compose services"
if docker compose up -d &> /dev/null; then
print_pass "Services started successfully"
sleep 3 # Give services time to start
else
print_fail "Failed to start services"
exit 1
fi
# Test 7: Verify containers are running
print_test "Verifying containers are running"
RUNNING_CONTAINERS=$(docker compose ps --services | wc -l)
if [[ $RUNNING_CONTAINERS -ge 3 ]]; then
print_pass "Services running: $RUNNING_CONTAINERS containers"
docker compose ps
else
print_fail "Not enough containers running: $RUNNING_CONTAINERS (expected 3+)"
fi
# Test 8: Verify network creation
print_test "Verifying VPC networks were created"
if docker network ls --format '{{.Name}}' | grep -q "lab02-vpc-public"; then
print_pass " lab02-vpc-public network exists"
else
print_fail " lab02-vpc-public network NOT found"
fi
if docker network ls --format '{{.Name}}' | grep -q "lab02-vpc-private"; then
print_pass " lab02-vpc-private network exists"
else
print_fail " lab02-vpc-private network NOT found"
fi
# Test 9: Verify subnet configuration
print_test "Verifying subnet CIDR configuration"
PUBLIC_SUBNET=$(docker network inspect lab02-vpc-public --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null || true)
PRIVATE_SUBNET=$(docker network inspect lab02-vpc-private --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null || true)
if [[ "$PUBLIC_SUBNET" == "10.0.1.0/24" ]]; then
print_pass " Public subnet: $PUBLIC_SUBNET (correct)"
else
print_fail " Public subnet: $PUBLIC_SUBNET (expected 10.0.1.0/24)"
fi
if [[ "$PRIVATE_SUBNET" == "10.0.2.0/24" ]]; then
print_pass " Private subnet: $PRIVATE_SUBNET (correct)"
else
print_fail " Private subnet: $PRIVATE_SUBNET (expected 10.0.2.0/24)"
fi
# Test 10: Verify private network isolation
print_test "Verifying private network isolation flag"
INTERNAL_FLAG=$(docker network inspect lab02-vpc-private --format '{{.Internal}}' 2>/dev/null || true)
if [[ "$INTERNAL_FLAG" == "true" ]]; then
print_pass "Private network has internal=true flag (isolated)"
else
print_fail "Private network missing internal flag"
fi
# Test 11: Verify container network placement
print_test "Verifying container network placement"
if docker inspect lab02-web --format '{{range $net, $conf := .NetworkSettings.Networks}}{{$net}} {{end}}' 2>/dev/null | grep -q "lab02-vpc-public"; then
print_pass " lab02-web in vpc-public network"
else
print_fail " lab02-web not in vpc-public"
fi
if docker inspect lab02-db --format '{{range $net, $conf := .NetworkSettings.Networks}}{{$net}} {{end}}' 2>/dev/null | grep -q "lab02-vpc-private"; then
print_pass " lab02-db in vpc-private network"
else
print_fail " lab02-db not in vpc-private"
fi
# Test 12: Verify multi-homed container
print_test "Verifying multi-homed container (app in both networks)"
PUBLIC_IP=$(docker inspect lab02-app --format '{{range $net, $conf := .NetworkSettings.Networks}}{{if eq $net "lab02-vpc-public"}}{{$conf.IPAddress}}{{end}}{{end}}' 2>/dev/null || true)
PRIVATE_IP=$(docker inspect lab02-app --format '{{range $net, $conf := .NetworkSettings.Networks}}{{if eq $net "lab02-vpc-private"}}{{$conf.IPAddress}}{{end}}{{end}}' 2>/dev/null || true)
if [[ -n "$PUBLIC_IP" && -n "$PRIVATE_IP" ]]; then
print_pass "lab02-app is multi-homed (public: $PUBLIC_IP, private: $PRIVATE_IP)"
else
print_fail "lab02-app not properly connected to both networks"
fi
# Test 13: Verify web service accessibility
print_test "Verifying web service is accessible from localhost"
if curl -sf http://127.0.0.1:8080 &> /dev/null; then
print_pass "Web service responds on http://127.0.0.1:8080"
else
print_fail "Web service not accessible on http://127.0.0.1:8080"
fi
# Test 14: Verify database is NOT accessible from host
print_test "Verifying database is NOT accessible from host (private)"
if curl -sf http://127.0.0.1:5432 &> /dev/null; then
print_fail "Database is accessible from host - PRIVATE NETWORK COMPROMISED!"
else
print_pass "Database is NOT accessible from host (correct - isolated)"
fi
# Test 15: Verify isolation between networks
print_test "Verifying cross-network isolation (web cannot reach db)"
if docker exec lab02-web ping -c 1 -W 1 lab02-db &> /dev/null; then
print_fail "Web CAN reach database - ISOLATION FAILED!"
else
print_pass "Web CANNOT reach database - isolation working correctly"
fi
# Summary
print_header "Infrastructure Verification Summary"
echo -e "Tests run: $((pass_count + fail_count))"
echo -e "${GREEN}Passed: $pass_count${NC}"
if [[ $fail_count -gt 0 ]]; then
echo -e "${RED}Failed: $fail_count${NC}"
fi
if [[ $fail_count -eq 0 ]]; then
echo -e "\n${GREEN}${BOLD}✓ ALL INFRASTRUCTURE CHECKS PASSED${NC}"
echo -e "\nInfrastructure is correctly deployed and compliant!"
echo -e "You can now proceed with the tutorials."
exit 0
else
echo -e "\n${RED}Some infrastructure checks failed${NC}"
echo -e "Please review the failures above."
exit 1
fi


@@ -0,0 +1,325 @@
#!/bin/bash
# Final Verification: Lab 02 - Network & VPC
# Comprehensive end-to-end verification for students (double-check command)
# Usage: bash labs/lab-02-network/tests/99-final-verification.sh
set -euo pipefail
# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
CYAN='\033[0;36m'
BOLD='\033[1m'
NC='\033[0m'
# Get script directory
TEST_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$TEST_DIR/../.." && pwd)"
LAB_DIR="$PROJECT_ROOT/labs/lab-02-network"
# Counter helpers
pass_count=0
fail_count=0
warn_count=0
inc_pass() { ((pass_count++)) || true; }
inc_fail() { ((fail_count++)) || true; }
inc_warn() { ((warn_count++)) || true; }
# Helper functions
print_header() {
echo -e "${CYAN}╔═══════════════════════════════════════════════════════════════╗${NC}"
echo -e "${CYAN}${NC} ${BOLD}$1${NC}"
echo -e "${CYAN}╚═══════════════════════════════════════════════════════════════╝${NC}"
}
print_section() {
echo -e "\n${BLUE}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${BLUE}$1${NC}"
echo -e "${BLUE}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
}
print_pass() {
echo -e " ${GREEN}[✓]${NC} $1"
inc_pass
}
print_fail() {
echo -e " ${RED}[✗]${NC} $1"
inc_fail
}
print_warn() {
echo -e " ${YELLOW}[!]${NC} $1"
inc_warn
}
print_info() {
echo -e " ${BLUE}[i]${NC} $1"
}
# Main header
clear
print_header "Lab 02: Network & VPC - Final Verification"
echo ""
echo -e "This script verifies your entire Lab 02 implementation."
echo -e "Run this after completing all tutorials to double-check your work."
echo ""
# Verify Docker is available
print_section "0. Environment Check"
if ! command -v docker &> /dev/null; then
echo -e "${RED}ERROR: Docker is not installed or not in PATH${NC}"
exit 1
fi
echo -e "Docker version: $(docker --version | cut -d' ' -f3)"
echo -e "Docker Compose version: $(docker compose version | grep 'Docker Compose' | cut -d' ' -f4)"
# Test 1: Network Creation
print_section "1. Network Creation Verification"
COMPOSE_FILE="$LAB_DIR/docker-compose.yml"
if [[ ! -f "$COMPOSE_FILE" ]]; then
print_fail "docker-compose.yml not found at $COMPOSE_FILE"
print_info "Expected output of Tutorial 1"
FAIL_REASON="compose_missing"
else
print_pass "docker-compose.yml exists"
# Validate compose syntax
if docker compose -f "$COMPOSE_FILE" config &> /dev/null; then
print_pass "docker-compose.yml has valid syntax"
else
print_fail "docker-compose.yml has syntax errors"
print_info "Run: docker compose -f docker-compose.yml config"
FAIL_REASON="compose_invalid"
fi
fi
# Test 2: Network Topology
print_section "2. Network Topology Verification"
if [[ "${FAIL_REASON:-}" == "compose_missing" ]]; then
print_warn "Skipping network tests - compose file missing"
else
# Check for custom networks
NETWORKS=$(docker compose -f "$COMPOSE_FILE" config --format json 2>/dev/null | grep -c '"networks"' || true)
NETWORKS=${NETWORKS:-0}
if [[ $NETWORKS -gt 0 ]]; then
print_pass "Custom networks defined in docker-compose.yml"
# List networks
echo ""
print_info "Networks defined:"
docker compose -f "$COMPOSE_FILE" config --format json 2>/dev/null | \
grep -A 3 '"networks"' | grep -E '^\s+"[a-z]+"' | sed 's/.*"\([^"]*\)".*/ \1/' || echo " (unable to list)"
else
print_warn "No custom networks found - using default bridge"
fi
# Check for VPC-style naming (PARA-02 requirement)
VPC_NAMES=$(grep -c -E 'vpc-|subnet-|network-' "$COMPOSE_FILE" 2>/dev/null || true)
VPC_NAMES=${VPC_NAMES:-0}
if [[ $VPC_NAMES -gt 0 ]]; then
print_pass "Uses VPC-style naming convention (PARA-02 compliant)"
else
print_warn "VPC-style naming not found (recommended: vpc-main, subnet-public, etc.)"
fi
fi
# Test 3: INF-02 Compliance
print_section "3. INF-02 Security Compliance"
if [[ -f "$COMPOSE_FILE" ]]; then
# Check for 0.0.0.0 bindings
ZERO_BINDINGS=$(grep -c -E '0\.0\.0\.0:[0-9]+' "$COMPOSE_FILE" 2>/dev/null || true)
ZERO_BINDINGS=${ZERO_BINDINGS:-0}
if [[ $ZERO_BINDINGS -eq 0 ]]; then
print_pass "No 0.0.0.0 port bindings found (INF-02 compliant)"
else
print_fail "Found $ZERO_BINDINGS 0.0.0.0 port bindings - VIOLATES INF-02"
print_info "Private networks must not expose ports on 0.0.0.0"
FAIL_REASON="inf02_violation"
fi
# Check for localhost bindings
LOCALHOST_BINDINGS=$(grep -c -E '127\.0\.0\.1:[0-9]+' "$COMPOSE_FILE" 2>/dev/null || true)
LOCALHOST_BINDINGS=${LOCALHOST_BINDINGS:-0}
if [[ $LOCALHOST_BINDINGS -gt 0 ]]; then
print_pass "Found $LOCALHOST_BINDINGS service(s) with 127.0.0.1 binding (secure)"
else
print_info "No 127.0.0.1 bindings - services may have no published ports"
fi
# Check for host networking
HOST_NET=$(grep -c -E 'network_mode:\s*host' "$COMPOSE_FILE" 2>/dev/null || true)
HOST_NET=${HOST_NET:-0}
if [[ $HOST_NET -eq 0 ]]; then
print_pass "No services using host networking mode"
else
print_fail "Found services using 'network_mode: host' - security risk"
FAIL_REASON="host_networking"
fi
else
print_warn "Skipping INF-02 tests - compose file missing"
fi
# Test 4: Service Startup
print_section "4. Service Startup Verification"
if [[ -f "$COMPOSE_FILE" && "${FAIL_REASON:-}" != "compose_invalid" ]]; then
print_info "Attempting to start services..."
if docker compose -f "$COMPOSE_FILE" up -d &> /dev/null; then
print_pass "Services started successfully"
# List running services
echo ""
print_info "Running services:"
docker compose -f "$COMPOSE_FILE" ps --format "table {{.Name}}\t{{.Status}}\t{{.Ports}}" 2>/dev/null || \
docker compose -f "$COMPOSE_FILE" ps
# Cleanup
print_info "Stopping services..."
docker compose -f "$COMPOSE_FILE" down &> /dev/null
else
print_fail "Services failed to start"
print_info "Run: docker compose -f docker-compose.yml up"
print_info "Check logs: docker compose -f docker-compose.yml logs"
FAIL_REASON="services_failed"
fi
else
print_warn "Skipping service tests - compose file missing or invalid"
fi
# Test 5: Documentation Completeness
print_section "5. Documentation Completeness (Diátaxis Framework)"
DOC_COUNT=0
DOC_FILES=(
"$LAB_DIR/tutorial/01-create-networks.md"
"$LAB_DIR/tutorial/02-deploy-containers.md"
"$LAB_DIR/tutorial/03-verify-isolation.md"
"$LAB_DIR/how-to-guides/*.md"
"$LAB_DIR/reference/*.md"
"$LAB_DIR/explanation/*.md"
)
for file_pattern in "${DOC_FILES[@]}"; do
count=$(ls $file_pattern 2>/dev/null | wc -l || true)
DOC_COUNT=$((DOC_COUNT + count))
done
if [[ $DOC_COUNT -ge 10 ]]; then
print_pass "Documentation complete: $DOC_COUNT files found (Diátaxis Framework)"
echo ""
print_info "Documentation breakdown:"
echo -e " Tutorials: $(ls "$LAB_DIR/tutorial"/*.md 2>/dev/null | wc -l)"
echo -e " How-to Guides: $(ls "$LAB_DIR/how-to-guides"/*.md 2>/dev/null | wc -l)"
echo -e " Reference: $(ls "$LAB_DIR/reference"/*.md 2>/dev/null | wc -l)"
echo -e " Explanation: $(ls "$LAB_DIR/explanation"/*.md 2>/dev/null | wc -l)"
else
print_warn "Documentation incomplete: $DOC_COUNT files found (expected 10+)"
print_info "Complete all tutorials and documentation"
fi
# Test 6: Test Infrastructure
print_section "6. Test Infrastructure Verification"
TEST_FILES=(
"$TEST_DIR/01-network-creation-test.sh"
"$TEST_DIR/02-isolation-verification-test.sh"
"$TEST_DIR/03-inf02-compliance-test.sh"
"$TEST_DIR/run-all-tests.sh"
)
TESTS_FOUND=0
for test_file in "${TEST_FILES[@]}"; do
if [[ -f "$test_file" && -x "$test_file" ]]; then
((TESTS_FOUND++)) || true
fi
done
if [[ $TESTS_FOUND -eq ${#TEST_FILES[@]} ]]; then
print_pass "All test scripts present and executable"
elif [[ $TESTS_FOUND -gt 0 ]]; then
print_warn "Some test scripts missing: $TESTS_FOUND/${#TEST_FILES[@]} found"
else
print_fail "Test infrastructure not found"
fi
# Final Summary
print_section "Final Summary"
echo ""
echo -e " ${BOLD}Results:${NC}"
echo -e " ${GREEN}✓ Passed:${NC} $pass_count"
if [[ $fail_count -gt 0 ]]; then
echo -e " ${RED}✗ Failed:${NC} $fail_count"
fi
if [[ $warn_count -gt 0 ]]; then
echo -e " ${YELLOW}! Warnings:${NC} $warn_count"
fi
echo ""
# Overall verdict
if [[ $fail_count -eq 0 && $warn_count -eq 0 ]]; then
echo -e "${GREEN}${BOLD}╔═══════════════════════════════════════════════════════════════╗${NC}"
echo -e "${GREEN}${BOLD}║ ✓ ALL CHECKS PASSED ║${NC}"
echo -e "${GREEN}${BOLD}╚═══════════════════════════════════════════════════════════════╝${NC}"
echo ""
echo -e "${GREEN}Your Lab 02 implementation is complete and compliant!${NC}"
echo ""
echo -e "Next steps:"
echo -e " 1. Review the Explanation document to understand VPC parallels"
echo -e " 2. Proceed to Phase 4: Lab 03 - Compute & EC2"
echo ""
exit 0
elif [[ $fail_count -eq 0 ]]; then
echo -e "${YELLOW}${BOLD}╔═══════════════════════════════════════════════════════════════╗${NC}"
echo -e "${YELLOW}${BOLD}║ ! PASSED WITH WARNINGS ║${NC}"
echo -e "${YELLOW}${BOLD}╚═══════════════════════════════════════════════════════════════╝${NC}"
echo ""
echo -e "${YELLOW}Implementation is functional but has warnings.${NC}"
echo ""
echo -e "Recommendations:"
if [[ ${VPC_NAMES:-0} -eq 0 ]]; then
echo -e " - Consider using VPC-style naming (vpc-main, subnet-public, subnet-private)"
fi
if [[ $DOC_COUNT -lt 10 ]]; then
echo -e " - Complete all documentation files (Diátaxis Framework)"
fi
echo ""
exit 0
else
echo -e "${RED}${BOLD}╔═══════════════════════════════════════════════════════════════╗${NC}"
echo -e "${RED}${BOLD}║ ✗ VERIFICATION FAILED ║${NC}"
echo -e "${RED}${BOLD}╚═══════════════════════════════════════════════════════════════╝${NC}"
echo ""
echo -e "${RED}Some checks failed. Please review the issues above.${NC}"
echo ""
# Provide specific guidance
case "${FAIL_REASON:-}" in
compose_missing)
echo -e "${YELLOW}To fix:${NC} Complete Tutorial 1 to create docker-compose.yml"
;;
compose_invalid)
echo -e "${YELLOW}To fix:${NC} Validate and fix docker-compose.yml syntax"
echo -e " Run: docker compose -f docker-compose.yml config"
;;
inf02_violation)
echo -e "${YELLOW}To fix:${NC} Replace 0.0.0.0 bindings with 127.0.0.1"
echo -e " Change: '8080:80' → '127.0.0.1:8080:80'"
;;
services_failed)
echo -e "${YELLOW}To fix:${NC} Check service logs for errors"
echo -e " Run: docker compose -f docker-compose.yml logs"
;;
*)
echo -e "${YELLOW}To fix:${NC} Review the failed items above and complete the tutorials"
;;
esac
echo ""
echo -e "After fixing, run this verification again:"
echo -e " ${CYAN}bash labs/lab-02-network/tests/99-final-verification.sh${NC}"
echo ""
exit 1
fi


@@ -0,0 +1,196 @@
#!/bin/bash
# Quick Test: Fast Validation for Development
# Runs subset of critical tests for rapid feedback during development
# Usage: bash labs/lab-02-network/tests/quick-test.sh
set -euo pipefail
# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
CYAN='\033[0;36m'
BOLD='\033[1m'
NC='\033[0m'
# Get script directory
TEST_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$TEST_DIR/../.." && pwd)"
# Counter helpers
pass_count=0
fail_count=0
inc_pass() { ((pass_count++)) || true; }
inc_fail() { ((fail_count++)) || true; }
# Helper functions
print_header() {
echo -e "${CYAN}╔═══════════════════════════════════════════════════════════════╗${NC}"
echo -e "${CYAN}${NC} ${BOLD}$1${NC}"
echo -e "${CYAN}╚═══════════════════════════════════════════════════════════════╝${NC}"
}
print_test() {
echo -e "\n${BLUE}[TEST]${NC} $1"
}
print_pass() {
echo -e " ${GREEN}[✓]${NC} $1"
inc_pass
}
print_fail() {
echo -e " ${RED}[✗]${NC} $1"
inc_fail
}
print_info() {
echo -e " ${CYAN}[i]${NC} $1"
}
# Main header
clear
print_header "Lab 02: Quick Test (Fast Validation)"
echo ""
echo -e "Running critical tests only (< 30 seconds)"
echo -e "For full test suite, run: ${YELLOW}bash run-all-tests.sh${NC}"
echo ""
# Quick Test 1: Docker availability
print_test "Docker is available"
if command -v docker &> /dev/null; then
print_pass "Docker command found"
print_info "$(docker --version)"
else
print_fail "Docker not found"
exit 1
fi
# Quick Test 2: Docker Compose file exists
print_test "docker-compose.yml exists"
COMPOSE_FILE="$PROJECT_ROOT/labs/lab-02-network/docker-compose.yml"
if [[ -f "$COMPOSE_FILE" ]]; then
print_pass "docker-compose.yml found"
else
print_fail "docker-compose.yml not found (expected after Tutorial 1)"
print_info "This is OK if you're starting the lab"
fi
# Quick Test 3: Validate compose syntax (if file exists)
if [[ -f "$COMPOSE_FILE" ]]; then
print_test "docker-compose.yml has valid syntax"
if docker compose -f "$COMPOSE_FILE" config &> /dev/null; then
print_pass "Compose file is valid YAML"
else
print_fail "Compose file has syntax errors"
print_info "Run: docker compose -f docker-compose.yml config"
fi
# Quick Test 4: INF-02 compliance (no 0.0.0.0 bindings)
print_test "INF-02 compliance (no 0.0.0.0 bindings)"
ZERO_COUNT=$(grep -c -E '0\.0\.0\.0:[0-9]+' "$COMPOSE_FILE" 2>/dev/null || true)
ZERO_COUNT=${ZERO_COUNT:-0}
if [[ $ZERO_COUNT -eq 0 ]]; then
print_pass "No 0.0.0.0 bindings (secure)"
else
print_fail "Found $ZERO_COUNT 0.0.0.0 bindings (INF-02 violation)"
fi
fi
# Quick Test 5: Docker networks can be created
print_test "Docker network creation works"
if docker network create --driver bridge --subnet 10.0.99.0/24 quick-test-net &> /dev/null; then
print_pass "Can create bridge network with custom subnet"
docker network rm quick-test-net &> /dev/null
else
print_fail "Failed to create test network"
fi
# Quick Test 6: Network isolation works
print_test "Network isolation verification"
# Create two networks
if docker network create --driver bridge --subnet 10.0.98.0/24 quick-test-net1 &> /dev/null && \
docker network create --driver bridge --subnet 10.0.97.0/24 quick-test-net2 &> /dev/null; then
# Create test containers
if docker run -d --name qt-c1 --network quick-test-net1 alpine:3.19 sleep 60 &> /dev/null && \
docker run -d --name qt-c2 --network quick-test-net2 alpine:3.19 sleep 60 &> /dev/null; then
# Test cross-network isolation (should fail)
if docker exec qt-c1 ping -c 1 -W 1 qt-c2 &> /dev/null; then
print_fail "Cross-network communication works (isolation broken!)"
else
print_pass "Cross-network communication blocked (isolation works)"
fi
# Cleanup
docker stop qt-c1 qt-c2 &> /dev/null
docker rm qt-c1 qt-c2 &> /dev/null
else
print_fail "Failed to create test containers"
fi
# Cleanup networks
docker network rm quick-test-net1 quick-test-net2 &> /dev/null
else
print_fail "Failed to create test networks"
fi
# Quick Test 7: Test scripts exist
print_test "Test infrastructure present"
TEST_COUNT=0
if [[ -f "$TEST_DIR/01-network-creation-test.sh" ]]; then ((TEST_COUNT++)) || true; fi
if [[ -f "$TEST_DIR/02-isolation-verification-test.sh" ]]; then ((TEST_COUNT++)) || true; fi
if [[ -f "$TEST_DIR/03-inf02-compliance-test.sh" ]]; then ((TEST_COUNT++)) || true; fi
if [[ $TEST_COUNT -eq 3 ]]; then
print_pass "All test scripts present ($TEST_COUNT/3)"
else
print_fail "Some test scripts missing ($TEST_COUNT/3)"
fi
# Quick Test 8: Documentation exists
print_test "Documentation files present"
DOC_COUNT=0
if [[ -f "$TEST_DIR/../tutorial/01-create-networks.md" ]]; then ((DOC_COUNT++)) || true; fi
if [[ -f "$TEST_DIR/../tutorial/02-deploy-containers.md" ]]; then ((DOC_COUNT++)) || true; fi
if [[ -f "$TEST_DIR/../tutorial/03-verify-isolation.md" ]]; then ((DOC_COUNT++)) || true; fi
if [[ $DOC_COUNT -ge 1 ]]; then
print_pass "Documentation present ($DOC_COUNT tutorial files)"
else
print_info "No documentation yet (expected during development)"
fi
# Summary
print_header "Quick Test Summary"
echo -e "Tests run: ${BOLD}$((pass_count + fail_count))${NC}"
echo -e " ${GREEN}Passed:${NC} $pass_count"
if [[ $fail_count -gt 0 ]]; then
echo -e " ${RED}Failed:${NC} $fail_count"
fi
echo ""
# Verdict
if [[ $fail_count -eq 0 ]]; then
echo -e "${GREEN}${BOLD}✓ ALL QUICK TESTS PASSED${NC}"
echo ""
echo -e "Quick validation successful!"
echo ""
echo -e "Next steps:"
echo -e " 1. Run full test suite: ${CYAN}bash run-all-tests.sh${NC}"
echo -e " 2. Run final verification: ${CYAN}bash 99-final-verification.sh${NC}"
echo ""
exit 0
else
echo -e "${RED}${BOLD}✗ QUICK TESTS FAILED${NC}"
echo ""
echo -e "Some critical tests failed. Please review:"
echo -e " 1. Check Docker is running: ${CYAN}docker ps${NC}"
echo -e " 2. Verify compose file: ${CYAN}cd labs/lab-02-network && docker compose config${NC}"
echo -e " 3. Run full test suite for details: ${CYAN}bash run-all-tests.sh${NC}"
echo ""
exit 1
fi


@@ -0,0 +1,146 @@
#!/bin/bash
# Test Orchestration: Run All Tests
# Executes all Lab 02 test scripts with fail-fast behavior
# Usage: bash labs/lab-02-network/tests/run-all-tests.sh
set -euo pipefail
# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
BOLD='\033[1m'
CYAN='\033[0;36m'
NC='\033[0m'
# Get script directory
TEST_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Change to test directory
cd "$TEST_DIR"
# Counter helpers
pass_count=0
fail_count=0
skip_count=0
inc_pass() { ((pass_count++)) || true; }
inc_fail() { ((fail_count++)) || true; }
inc_skip() { ((skip_count++)) || true; }
# Print header
print_header() {
echo ""
echo -e "${BLUE}╔═══════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}${NC} ${BOLD}$1${NC}"
echo -e "${BLUE}╚═══════════════════════════════════════════════════════════════╝${NC}"
echo ""
}
# Print test header
print_test_header() {
echo -e "${BLUE}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${BLUE}Running:${NC} $1"
echo -e "${BLUE}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
}
# Main header
print_header "Lab 02: Network & VPC - Test Suite"
# Test array - all test scripts in order
declare -a tests=(
"01-network-creation-test.sh"
"02-isolation-verification-test.sh"
"03-inf02-compliance-test.sh"
)
# Track first failure
first_failure=""
# Run each test
for test_script in "${tests[@]}"; do
test_path="$TEST_DIR/$test_script"
# Check if test file exists
if [[ ! -f "$test_path" ]]; then
echo -e "${YELLOW}[SKIP]${NC} $test_script not found"
inc_skip
continue
fi
# Check if test file is executable
if [[ ! -x "$test_path" ]]; then
echo -e "${YELLOW}[SKIP]${NC} $test_script not executable"
inc_skip
continue
fi
print_test_header "$test_script"
# Run test and capture exit code
if bash "$test_path"; then
echo -e "${GREEN}[✓]${NC} $test_script passed"
inc_pass
echo ""
else
exit_code=$?
echo -e "${RED}[✗]${NC} $test_script failed (exit code: $exit_code)"
inc_fail
# Record first failure for summary
if [[ -z "$first_failure" ]]; then
first_failure="$test_script"
fi
# Fail-fast: stop on first failure
echo ""
echo -e "${RED}[FATAL]${NC} Test failed. Stopping test suite (fail-fast mode)."
echo ""
break
fi
done
# Print summary
print_header "Test Suite Summary"
echo -e "Total tests run: ${BOLD}$((pass_count + fail_count + skip_count))${NC}"
echo -e " ${GREEN}Passed:${NC} $pass_count"
if [[ $fail_count -gt 0 ]]; then
echo -e " ${RED}Failed:${NC} $fail_count"
fi
if [[ $skip_count -gt 0 ]]; then
echo -e " ${YELLOW}Skipped:${NC} $skip_count"
fi
echo ""
# Final verdict
if [[ $fail_count -eq 0 && $skip_count -eq 0 ]]; then
echo -e "${GREEN}${BOLD}✓ ALL TESTS PASSED${NC}"
echo ""
echo -e "Next step: Run final verification"
echo -e " ${CYAN}bash labs/lab-02-network/tests/99-final-verification.sh${NC}"
echo ""
exit 0
elif [[ $fail_count -eq 0 ]]; then
echo -e "${YELLOW}Some tests were skipped${NC}"
echo ""
echo -e "Note: Skipped tests are expected during RED phase (before implementation)."
echo -e " These tests will pass after completing GREEN phase (implementation)."
echo ""
exit 0
else
echo -e "${RED}${BOLD}✗ TESTS FAILED${NC}"
echo ""
echo -e "First failure: ${RED}$first_failure${NC}"
echo ""
echo -e "To debug:"
echo -e " 1. Run the failed test directly to see detailed output:"
echo -e " ${CYAN}bash labs/lab-02-network/tests/$first_failure${NC}"
echo ""
echo -e " 2. Check infrastructure setup:"
echo -e " ${CYAN}cd labs/lab-02-network && docker compose config${NC}"
echo ""
echo -e " 3. Review test logs for specific failures"
echo ""
exit 1
fi


@@ -0,0 +1,306 @@
# Tutorial: Creating VPC Networks with Docker Bridge Networks
In this tutorial you will learn to create Docker bridge networks that simulate the VPCs (Virtual Private Clouds) and subnets of cloud providers. You will see how to segment network traffic between containers using concepts that transfer directly to AWS, Azure, or GCP.
## Goal
Create two isolated networks that simulate a public subnet and a private subnet in a cloud VPC:
- **VPC Public Subnet**: `lab02-vpc-public` (10.0.1.0/24), for externally reachable services
- **VPC Private Subnet**: `lab02-vpc-private` (10.0.2.0/24), isolated, with no external access
## Prerequisites
- Docker Engine >= 24.0 installed and running
- Basic Docker commands (`docker run`, `docker ps`)
- Lab 01: IAM & Security completed (recommended)
---
## Step 1: Understand the VPC Concepts
Before creating any networks, let's clarify what we are simulating.
**Cloud concepts:**
- **VPC (Virtual Private Cloud)**: An isolated virtual network in the cloud
- **Subnet**: A segment of IP addresses within a VPC (e.g. 10.0.1.0/24)
- **Public subnet**: A subnet with a route to an internet gateway
- **Private subnet**: A subnet with no direct internet access
**Docker ↔ Cloud parallels:**
| Docker concept | AWS equivalent |
|-----------------|-----------|
| Bridge network | VPC |
| Subnet (10.0.x.0/24) | Subnet CIDR |
| `--internal` flag | Private subnet (no IGW) |
| Container | EC2 instance |
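When you pick CIDR blocks for the two subnets, they must not overlap, just as in a real VPC. A minimal sketch of an overlap check (assumption: every subnet in this lab uses a /24 mask, and the `overlaps_24` helper is ours, not a Docker command):

```shell
#!/usr/bin/env bash
# Sketch: detect colliding /24 CIDR blocks before running `docker network create`.
# Assumes /24 masks only, as used throughout this lab.

cidr24_prefix() {
  # 10.0.1.0/24 -> 10.0.1 (drop the host octet and the mask)
  local cidr="$1"
  echo "${cidr%.*/24}"
}

overlaps_24() {
  # Two /24 blocks overlap only if their first three octets match
  [[ "$(cidr24_prefix "$1")" == "$(cidr24_prefix "$2")" ]]
}

if overlaps_24 "10.0.1.0/24" "10.0.2.0/24"; then
  echo "CIDR collision: pick different subnets"
else
  echo "Subnets are disjoint: safe to create both networks"
fi
```

Docker would also refuse an overlapping `--subnet` at creation time; checking up front simply gives a clearer error.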
---
## Step 2: Create the Public Subnet
Let's create the first network, which simulates a public VPC subnet.
Run:
```bash
# Create the public subnet network with a custom CIDR
docker network create --driver bridge \
  --subnet 10.0.1.0/24 \
  --gateway 10.0.1.1 \
  lab02-vpc-public
```
Explanation:
- `--driver bridge`: Uses the bridge driver (host-local isolation)
- `--subnet 10.0.1.0/24`: CIDR block for this subnet
- `--gateway 10.0.1.1`: Default gateway for the subnet
- `lab02-vpc-public`: A name that follows cloud-style naming
Expected output (a 64-character hex network ID; yours will differ):
```
a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2
```
---
## Step 3: Verify the Public Network
Let's confirm the network was created correctly.
Run:
```bash
# List all networks
docker network ls | grep lab02
```
Expected:
```
lab02-vpc-public bridge local
```
Run:
```bash
# Inspect the network configuration
docker network inspect lab02-vpc-public
```
Expected (abridged output):
```json
{
  "Name": "lab02-vpc-public",
  "Driver": "bridge",
  "IPAM": {
    "Config": [
      {
        "Subnet": "10.0.1.0/24",
        "Gateway": "10.0.1.1"
      }
    ]
  },
  "Internal": false
}
```
Check that:
- [ ] the subnet is `10.0.1.0/24`
- [ ] the gateway is `10.0.1.1`
- [ ] `Internal` is `false` (public)
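When scripting this check (as the lab's tests do), it helps to pull out a single field instead of eyeballing the full JSON. A minimal sketch; the `is_cidr24` helper is ours, and the commented `docker network inspect` line assumes the network from Step 2 exists:

```shell
#!/usr/bin/env bash
# Validate that a string looks like an IPv4 /24 CIDR before trusting it.
is_cidr24() {
  [[ "$1" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}/24$ ]]
}

# Extract only the subnet via a Go template (requires the network to exist):
#   SUBNET=$(docker network inspect lab02-vpc-public \
#     --format '{{(index .IPAM.Config 0).Subnet}}')
SUBNET="10.0.1.0/24"   # stand-in value for illustration

if is_cidr24 "$SUBNET"; then
  echo "subnet looks valid: $SUBNET"
fi
```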
---
## Step 4: Create the Private Subnet
Now let's create an isolated network that simulates a private subnet.
Run:
```bash
# Create the private subnet network (isolated)
docker network create --driver bridge \
  --subnet 10.0.2.0/24 \
  --gateway 10.0.2.1 \
  --internal \
  lab02-vpc-private
```
Explanation:
- `--internal`: The critical flag! It isolates the network (no external access)
- A different subnet: `10.0.2.0/24` (no overlap with the public one)
- The same gateway pattern: `10.0.2.1`
Expected output (a 64-character hex network ID; yours will differ):
```
f1e2d3c4b5a6978869502a3b4c5d6e7f8091a2b3c4d5e6f70819253647586a7b
```
---
## Step 5: Verify the Private Subnet
Let's check that the private network is configured correctly.
Run:
```bash
# Inspect the private network
docker network inspect lab02-vpc-private
```
Expected:
```json
{
  "Name": "lab02-vpc-private",
  "Driver": "bridge",
  "IPAM": {
    "Config": [
      {
        "Subnet": "10.0.2.0/24",
        "Gateway": "10.0.2.1"
      }
    ]
  },
  "Internal": true
}
```
Check that:
- [ ] The subnet is `10.0.2.0/24`
- [ ] The gateway is `10.0.2.1`
- [ ] **`Internal` is `true`** (critical for isolation!)
---
## Step 6: Verify Both Networks
Let's look at a summary of both networks.
Run:
```bash
# List the lab02 networks
docker network ls --filter "name=lab02" --format "table {{.Name}}\t{{.Driver}}\t{{.Scope}}"
```
Expected:
```
NAME                DRIVER    SCOPE
lab02-vpc-private   bridge    local
lab02-vpc-public    bridge    local
```
Run:
```bash
# Check the CIDR blocks
docker network inspect lab02-vpc-public --format '{{.IPAM.Config}}'
docker network inspect lab02-vpc-private --format '{{.IPAM.Config}}'
```
---
## Step 7: Test with a Container (Optional)
Let's confirm the networks work by creating a test container.
Run:
```bash
# Create a container on the public network
docker run -d --name test-web \
  --network lab02-vpc-public \
  nginx:alpine
```
Run:
```bash
# Verify the container got an IP from the right subnet
docker inspect test-web --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
```
Expected (typically the first free address after the gateway):
```
10.0.1.2
```
(An address inside the 10.0.1.0/24 subnet: correct!)
Clean up the test:
```bash
docker stop test-web
docker rm test-web
```
---
## Parallels with AWS VPC
| Local operation | AWS VPC equivalent |
|-----------------|--------------------|
| `docker network create` | `aws ec2 create-vpc` |
| `--subnet 10.0.1.0/24` | `aws ec2 create-subnet --cidr-block` |
| `--internal` | No route to an Internet Gateway |
| `lab02-vpc-public` | `vpc-12345678` |
| Container on the network | EC2 instance in a subnet |
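The parallels above can be sketched with the AWS CLI. These commands are illustrative only: they require AWS credentials, are not run in this lab, and the `vpc-12345678` ID is a placeholder.

```shell
# Illustrative AWS CLI equivalents (placeholder IDs; not part of this lab)
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-12345678 \
  --cidr-block 10.0.1.0/24   # like: docker network create --subnet 10.0.1.0/24
aws ec2 create-subnet --vpc-id vpc-12345678 \
  --cidr-block 10.0.2.0/24   # "private" simply by adding no route to an IGW
```

Note the last parallel: in AWS a subnet is private because of its route table, while in Docker the `--internal` flag bakes the same property into the network itself.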
---
## Checklist
You have completed this tutorial when:
- [ ] The `lab02-vpc-public` network exists with subnet 10.0.1.0/24
- [ ] The `lab02-vpc-private` network exists with subnet 10.0.2.0/24
- [ ] The private network has `Internal: true`
- [ ] Both networks appear in `docker network ls`
- [ ] You understand the parallel between Docker bridge networks and VPCs
## Next Step
In the [next tutorial](./02-deploy-containers-networks.md) you will deploy containers into these networks with docker-compose.yml, building a multi-tier architecture (public web tier, private database).
---
## Troubleshooting
**Problem: `Error: network already exists`**
Solution:
```bash
# Remove the existing networks
docker network rm lab02-vpc-public lab02-vpc-private
# Or use different names
docker network create ... lab02-vpc-public-v2
```
**Problem: `Error: subnet conflicts with existing network`**
Solution:
```bash
# Check which subnets are in use
docker network ls -q | xargs docker network inspect --format '{{.IPAM.Config}}'
# Change the conflicting subnet
docker network create --subnet 10.0.10.0/24 ...
```
**Problem: the container does not get an IP from the right subnet**
Solution:
```bash
# Check the IPAM configuration
docker network inspect lab02-vpc-public --format '{{json .IPAM}}'
# Make sure the container is attached to the right network
docker inspect container --format '{{json .NetworkSettings.Networks}}'
```
**Problem: the `--internal` flag blocks all communication**
This is the correct behavior! Containers on internal networks:
- CANNOT reach the internet
- CANNOT be reached from the host
- CAN communicate with other containers on the same network
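All three behaviors can be observed with a throwaway busybox container. A sketch, assuming the `lab02-vpc-private` network from Step 4 exists and Docker is running:

```shell
# Start a disposable container on the internal network (removed on stop)
docker run -d --rm --name probe --network lab02-vpc-private busybox sleep 300

# 1) No internet: this should fail (no route out of an --internal network)
docker exec probe ping -c 1 -W 2 8.8.8.8 || echo "internet blocked (expected)"

# 2) Same-network traffic still works: a second container can ping "probe"
docker run --rm --network lab02-vpc-private busybox ping -c 1 probe

# Clean up
docker stop probe
```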
# Tutorial: Deploy Containers into VPC Networks
In this tutorial you will learn to deploy multi-tier containers with docker-compose.yml, placing each service in the appropriate VPC network following the principle of layered security.
## Goal
Build a multi-tier architecture with docker-compose:
- **Web server**: on the public network (reachable from localhost)
- **Database**: on the private network (isolated)
## Prerequisites
- Completed [Tutorial 1: Create VPC Networks](./01-create-vpc-networks.md)
- Docker Compose V2 installed
---
## Step 1: Enter the Lab Directory
Run:
```bash
cd ~/laboratori-cloud/labs/lab-02-network
```
---
## Step 2: Create docker-compose.yml
Create the `docker-compose.yml` file with the multi-tier configuration:
```yaml
services:
  # Web server - public network
  web:
    image: nginx:alpine
    container_name: lab02-web
    networks:
      vpc-public:
        ipv4_address: 10.0.1.10
    ports:
      - "127.0.0.1:8080:80"  # INF-02 compliant: localhost only
    restart: unless-stopped

  # Database - private network
  db:
    image: postgres:16-alpine
    container_name: lab02-db
    environment:
      POSTGRES_PASSWORD: lab02_password
      POSTGRES_DB: lab02_db
    networks:
      vpc-private:
        ipv4_address: 10.0.2.10
    # No published ports - private
    volumes:
      - db-data:/var/lib/postgresql/data
    restart: unless-stopped

  # Application - attached to both networks
  app:
    image: nginx:alpine
    container_name: lab02-app
    networks:
      vpc-public:
        ipv4_address: 10.0.1.20
      vpc-private:
        ipv4_address: 10.0.2.20
    ports:
      - "127.0.0.1:8081:80"
    restart: unless-stopped

networks:
  vpc-public:
    driver: bridge
    name: lab02-vpc-public
    ipam:
      driver: default
      config:
        - subnet: 10.0.1.0/24
          gateway: 10.0.1.1
  vpc-private:
    driver: bridge
    name: lab02-vpc-private
    internal: true
    ipam:
      driver: default
      config:
        - subnet: 10.0.2.0/24
          gateway: 10.0.2.1

volumes:
  db-data:
```
Save the file.
---
## Step 3: Validate the Configuration
Check that the YAML file is valid.
Run:
```bash
docker compose config
```
If it is valid you will see the fully resolved configuration; any errors are reported here.
---
## Step 4: Start the Services
Run:
```bash
docker compose up -d
```
Expected (the `db-data` volume may also be listed):
```
[+] Running 5/5
 ✔ Network lab02-vpc-public   Created
 ✔ Network lab02-vpc-private  Created
 ✔ Container lab02-db         Started
 ✔ Container lab02-web        Started
 ✔ Container lab02-app        Started
```
---
## Step 5: Check the Service Status
Run:
```bash
docker compose ps
```
Expected:
```
NAME        IMAGE                STATUS
lab02-app   nginx:alpine         Up
lab02-db    postgres:16-alpine   Up
lab02-web   nginx:alpine         Up
```
---
## Step 6: Verify Network Placement
Run:
```bash
# Check that the web container is on the public network
docker inspect lab02-web --format '{{range $name, $net := .NetworkSettings.Networks}}{{$net.IPAddress}} in {{$name}}{{end}}'
```
Expected (an IP inside the public subnet):
```
10.0.1.10 in lab02-vpc-public
```
Run:
```bash
# Check that the database is on the private network
docker inspect lab02-db --format '{{range $name, $net := .NetworkSettings.Networks}}{{$net.IPAddress}} in {{$name}}{{end}}'
```
Expected (an IP inside the private subnet):
```
10.0.2.10 in lab02-vpc-private
```
Run:
```bash
# Check that the app is on both networks (multi-homed)
docker inspect lab02-app --format '{{range $name, $net := .NetworkSettings.Networks}}{{$net.IPAddress}} in {{$name}} | {{end}}'
```
Expected (networks are listed in alphabetical order):
```
10.0.2.20 in lab02-vpc-private | 10.0.1.20 in lab02-vpc-public | 
```
---
## Step 7: Test Access
The web server and the app are reachable from localhost (INF-02 compliant).
Run:
```bash
# Test the web server
curl http://127.0.0.1:8080
```
Expected: an HTML response from nginx
Run:
```bash
# Test the app
curl http://127.0.0.1:8081
```
The database is NOT reachable from the host (correct: it is private).
---
## Step 8: Verify the Isolation
Check that the database really is isolated.
Run:
```bash
# The web container CANNOT reach the database (different networks)
docker exec lab02-web ping -c 2 lab02-db
```
Expected: FAILS (isolation is working)
Run:
```bash
# The app container CAN reach the database (attached to both networks)
docker exec lab02-app ping -c 2 lab02-db
```
Expected: SUCCEEDS (the app acts as a bridge)
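Ping only confirms network-layer reachability; an application-level check over the shared public network is more convincing. A sketch, relying on the `wget` that busybox provides in `nginx:alpine`:

```shell
# App fetches the web container's default page by DNS name
docker exec lab02-app wget -qO- http://lab02-web | head -n 4
```

If this prints the beginning of the nginx welcome page, both name resolution and TCP work across the public network.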
---
## Multi-Tier Architecture
```
        ┌──────────────────┐
        │ Host (127.0.0.1) │
        └────────┬─────────┘
                 │ 8080, 8081
        ┌────────▼─────────┐
        │    vpc-public    │
        │   10.0.1.0/24    │
        └────────┬─────────┘
        ┌────────┴────────┐
        │                 │
  ┌─────▼─────┐     ┌─────▼─────┐
  │    web    │     │    app    │
  │ 10.0.1.10 │     │ 10.0.1.20 │
  └───────────┘     └─────┬─────┘
                    ┌─────▼─────────┐
                    │  vpc-private  │
                    │  10.0.2.0/24  │
                    └─────┬─────────┘
                    ┌─────▼─────┐
                    │    db     │
                    │ 10.0.2.10 │
                    └───────────┘
```
---
## Checklist
You have completed this tutorial when:
- [ ] docker-compose.yml exists with 3 services
- [ ] All containers are running
- [ ] Web and app are reachable from localhost
- [ ] The database is on the private network (not reachable from the host)
- [ ] The app is attached to both networks (multi-homed)
## Next Step
In the [next tutorial](./03-verify-network-isolation.md) you will learn to verify and test the isolation between the networks.
---
## Troubleshooting
**Problem: port 8080 already in use**
Solution:
```yaml
# Change the port in the compose file
ports:
  - "127.0.0.1:9090:80"  # use 9090 instead of 8080
```
**Problem: the database does not get an IP**
Solution:
```bash
# Recreate the services and check the assigned networks
docker compose down
docker compose up -d
docker inspect lab02-db --format '{{json .NetworkSettings.Networks}}'
```
**Problem: `ERROR: for lab02-db Cannot create container`**
Solution:
```bash
# Clean up the volumes and recreate
docker compose down -v
docker compose up -d
```
**Problem: containers cannot communicate**
Solution:
```bash
# Check that they share a network
docker inspect lab02-web --format '{{json .NetworkSettings.Networks}}'
docker inspect lab02-db --format '{{json .NetworkSettings.Networks}}'
# They must share at least one network
```
# Tutorial: Verify VPC Network Isolation
In this tutorial you will learn to verify and test the isolation between Docker bridge networks, confirming that your multi-tier VPC architecture works correctly.
## Goal
Verify that:
- Containers on the same network can communicate
- Containers on different networks CANNOT communicate (isolation)
- DNS resolution works within the same network
- Multi-homed containers can reach both networks
## Prerequisites
- Completed [Tutorial 1](./01-create-vpc-networks.md) and [Tutorial 2](./02-deploy-containers-networks.md)
- Containers running: `docker compose ps` shows 3 active containers
---
## Step 1: Check the Running Containers
Run:
```bash
cd ~/laboratori-cloud/labs/lab-02-network
docker compose ps
```
Make sure lab02-web, lab02-app, and lab02-db are "Up".
If they are not:
```bash
docker compose up -d
```
---
## Step 2: Test Same-Network Communication
Containers on the same network must be able to talk to each other.
Run:
```bash
# Check: app can reach web (same public network)
docker exec lab02-app ping -c 2 lab02-web
```
Expected:
```
PING lab02-web (10.0.1.10): 56 data bytes
64 bytes from 10.0.1.10: seq=0 ttl=64 time=0.123 ms
64 bytes from 10.0.1.10: seq=1 ttl=64 time=0.045 ms
--- lab02-web ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
```
**SUCCESS** = communication works within the same network.
---
## Step 3: Test Cross-Network Communication (MUST FAIL)
Verify that the isolation works.
Run:
```bash
# Check: web CANNOT reach db (different networks)
docker exec lab02-web ping -c 2 lab02-db
```
Expected:
```
ping: bad address 'lab02-db'
```
Or:
```
PING lab02-db (10.0.2.10): 56 data bytes
--- lab02-db ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
```
**EXPECTED FAILURE** = isolation is working!
This is correct: `lab02-web` (10.0.1.10) cannot reach `lab02-db` (10.0.2.10) because they are on different networks.
---
## Step 4: Test the Multi-Homed Container
The `lab02-app` container is attached to both networks.
Run:
```bash
# Check: app can reach web (same public network)
docker exec lab02-app ping -c 2 lab02-web
```
Expected: SUCCESS
Run:
```bash
# Check: app can reach db (same private network)
docker exec lab02-app ping -c 2 lab02-db
```
Expected: SUCCESS
This shows that `lab02-app` can act as a bridge between the networks.
---
## Step 5: Test DNS Resolution
Verify that DNS works within the same network.
Run:
```bash
# DNS test within the same network
docker exec lab02-app nslookup lab02-web
```
Expected:
```
Name:      lab02-web
Address 1: 10.0.1.10
```
Run:
```bash
# Cross-network DNS test (should fail)
docker exec lab02-web nslookup lab02-db
```
Expected: it fails or returns an error
---
## Step 6: Test Access from the Host
Verify INF-02 compliance: private services are not reachable from the host.
Run:
```bash
# Test web access (public, but localhost only)
curl http://127.0.0.1:8080
```
Expected: SUCCESS (an HTML response)
Run:
```bash
# Test app access
curl http://127.0.0.1:8081
```
Expected: SUCCESS
Run:
```bash
# Test database access (private - this must NOT work)
curl http://127.0.0.1:5432
```
Expected:
```
curl: (7) Failed to connect to 127.0.0.1 port 5432 after X ms: Connection refused
```
This is correct! The database sits on the private network with no published ports.
---
## Step 7: Test Internet Isolation (Private Network)
The private network, created with the `--internal` flag, cannot reach the internet.
Run:
```bash
# The database cannot reach the internet
docker exec lab02-db ping -c 1 -W 1 8.8.8.8
```
Expected: FAILS (no route to the internet)
This matches the behavior of a private cloud subnet.
---
## Step 8: Run the Isolation Test Script
Use the lab's test script.
Run:
```bash
bash tests/02-isolation-verification-test.sh
```
The script automatically checks every isolation case.
---
## Connectivity Matrix
| Source | Target | Same network? | Result |
|--------|--------|---------------|--------|
| web (10.0.1.10) | app (10.0.1.20) | Yes (vpc-public) | SUCCESS |
| app (10.0.2.20) | db (10.0.2.10) | Yes (vpc-private) | SUCCESS |
| web (10.0.1.10) | db (10.0.2.10) | No | FAILS |
| db (10.0.2.10) | web (10.0.1.10) | No | FAILS |
| Host (127.0.0.1) | web (8080) | Port mapping | SUCCESS |
| Host (127.0.0.1) | db (5432) | No mapping | FAILS |
| db | Internet (8.8.8.8) | `--internal` flag | FAILS |
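The whole matrix can be checked in one pass with a small script. A sketch, assuming the three lab02 containers are running; `expect_ok` and `expect_fail` are helper names introduced here, not lab tooling:

```shell
#!/usr/bin/env bash
# Run each connectivity test and report whether it matched the expectation.
expect_ok()   { "$@" >/dev/null 2>&1 && echo "PASS: $*" || echo "FAIL: $*"; }
expect_fail() { "$@" >/dev/null 2>&1 && echo "FAIL (should be blocked): $*" || echo "PASS (blocked): $*"; }

expect_ok   docker exec lab02-app ping -c 1 -W 2 lab02-web
expect_ok   docker exec lab02-app ping -c 1 -W 2 lab02-db
expect_fail docker exec lab02-web ping -c 1 -W 2 lab02-db
expect_fail docker exec lab02-db  ping -c 1 -W 2 lab02-web
expect_ok   curl -sf -o /dev/null http://127.0.0.1:8080
expect_fail curl -sf -o /dev/null http://127.0.0.1:5432
expect_fail docker exec lab02-db ping -c 1 -W 2 8.8.8.8
```

Every line should print `PASS`; any `FAIL` points at the row of the matrix that is not behaving as designed.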
---
## Checklist
You have completed this tutorial when:
- [ ] Containers on the same network communicate
- [ ] Containers on different networks are isolated (ping fails)
- [ ] The multi-homed container reaches both networks
- [ ] DNS resolves names within the same network
- [ ] Private services are not reachable from the host
- [ ] The private network cannot reach the internet
## Next Steps
- Run the final verification: `bash tests/99-final-verification.sh`
- Read [Explanation: VPC Parallels](../explanation/docker-network-vpc-parallels.md)
- Move on to Lab 03: Compute & EC2
---
## Troubleshooting
**Problem: every ping works (no isolation)**
Cause: the containers may be on the default `bridge` network
Solution:
```bash
# Check the containers' networks
docker inspect lab02-web --format '{{json .NetworkSettings.Networks}}'
# It must show only lab02-vpc-public, not "bridge"
```
**Problem: `lab02-app` cannot reach anything**
Cause: the static IPs may be in conflict
Solution:
```bash
# Recreate without static IPs
docker compose down
# Remove the ipv4_address entries from the compose file
docker compose up -d
```
**Problem: DNS does not work**
Solution:
```bash
# Check the container's DNS configuration
docker exec lab02-app cat /etc/resolv.conf
# It should show 127.0.0.11 (Docker's embedded DNS)
```
**Problem: the web container does not start**
Solution:
```bash
# Check the logs
docker compose logs web
# Make sure the port is not already in use
ss -tlnp | grep 8080
```