laboratori-cloud/labs/lab-03-compute/explanation/compute-ec2-parallels.md
Luca Sacchi Ricciardi 23a9ffe443 feat(lab-03): complete Phase 4 - Compute & EC2 lab
Phase Plans (5 files):
- 04-RESEARCH.md: Domain research on Docker limits, healthchecks, EC2 parallels
- 04-VALIDATION.md: Success criteria and validation strategy
- 04-01-PLAN.md: Test infrastructure (RED phase)
- 04-02-PLAN.md: Diátaxis documentation
- 04-03-PLAN.md: Infrastructure implementation (GREEN phase)

Test Scripts (7 files, 1300+ lines):
- 01-resource-limits-test.sh: Validate INF-03 compliance
- 02-healthcheck-test.sh: Validate healthcheck configuration
- 03-enforcement-test.sh: Verify resource limits with docker stats
- 04-verify-infrastructure.sh: Infrastructure verification
- 99-final-verification.sh: End-to-end student verification
- run-all-tests.sh: Test orchestration with fail-fast
- quick-test.sh: Fast validation (<30s)

Documentation (11 files, 2500+ lines):
Tutorials (3):
- 01-set-resource-limits.md: EC2 instance types, Docker limits syntax
- 02-implement-healthchecks.md: ELB health check parallels
- 03-dependencies-with-health.md: depends_on with service_healthy

How-to Guides (4):
- check-resource-usage.md: docker stats monitoring
- test-limits-enforcement.md: Stress testing CPU/memory
- custom-healthcheck.md: HTTP, TCP, database healthchecks
- instance-type-mapping.md: Docker limits → EC2 mapping

Reference (3):
- compose-resources-syntax.md: Complete deploy.resources reference
- healthcheck-syntax.md: All healthcheck parameters
- ec2-instance-mapping.md: Instance type mapping table

Explanation (1):
- compute-ec2-parallels.md: Container=EC2, Limits=Instance Type, Healthcheck=ELB

Infrastructure:
- docker-compose.yml: 5 services (web, app, worker, db, stress-test)
  All services: INF-03 compliant (cpus + memory limits)
  All services: healthcheck configured
  EC2 parallels: t2.nano, t2.micro, t2.small, t2.medium, m5.large
- Dockerfile: Alpine 3.19 + stress tools + non-root user

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-03 15:16:58 +02:00


Explanation: Parallels Between Docker Compute and EC2

This document explores how Docker container resources simulate AWS EC2 instances. Understanding these parallels lets you apply the skills you build locally to real cloud environments.


What Is an EC2 Instance?

An EC2 (Elastic Compute Cloud) instance is a virtual machine in the cloud that:

  • Provides compute resources: CPU, memory, storage, network
  • Defines a type and size: instance types with different resource combinations
  • Drives costs: you pay for the resources you use (or reserve)

EC2 instances are the heart of compute infrastructure in AWS, used for:

  • Web servers and application servers
  • Databases (RDS, Aurora)
  • Batch processing
  • Container hosting (ECS, EKS)

The Fundamental Parallel

Docker Container = EC2 Instance

| Local | AWS Cloud |
|---|---|
| docker run | aws ec2 run-instances |
| Container with limits | EC2 instance type |
| cpus: '1' | 1 vCPU |
| memory: 2G | 2 GB RAM |
| docker stats | CloudWatch Metrics |
| Healthcheck | ELB health check |

Resource Limits = Instance Type

Local (Docker Compose):

deploy:
  resources:
    limits:
      cpus: '1'
      memory: 2G

Cloud (AWS CLI):

aws ec2 run-instances \
  --image-id ami-12345 \
  --instance-type t2.small
# t2.small = 1 vCPU, 2 GB RAM

Same result: 1 vCPU and 2 GB RAM of guaranteed resources.
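
As a sanity check, Docker records these limits on the container itself: `docker inspect` exposes the CPU limit as NanoCpus (1 CPU = 10⁹) and the memory limit in bytes, under `.HostConfig`. A minimal sketch of the conversion:

```shell
# Docker stores `cpus: '1'` as NanoCpus (1 CPU = 1e9) and `memory: 2G`
# as bytes (G is a binary unit here: 1G = 1024^3 bytes). These are the
# values `docker inspect` reports under .HostConfig for a running container.
cpus=1
mem_gb=2
nano_cpus=$(( cpus * 1000000000 ))
mem_bytes=$(( mem_gb * 1024 * 1024 * 1024 ))
echo "NanoCpus=${nano_cpus} Memory=${mem_bytes}"
```

You can compare these figures against a live container with `docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' <container>`.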


EC2 Instance Types in Depth

Instance Type Families

AWS offers several instance families for different workloads:

| Family | Prefix | Characteristics | Docker Parallel |
|---|---|---|---|
| Burstable | t2, t3 | Credit-based CPU, low cost | Small limits with bursts |
| General Purpose | m5, m6 | Balanced CPU/memory | Balanced limits |
| Compute Optimized | c5, c6 | High CPU ratio | High CPU, low memory |
| Memory Optimized | r5, r6 | High memory ratio | Low CPU, high memory |
| Storage Optimized | i3, i4 | Local NVMe SSD | Fast volumes |

Common Instance Types, Analyzed

T2 Nano - Microservices

Spec: 0.5 vCPU, 512 MB RAM. Cost: ~$0.006/hour. Use case: minimal microservices, background tasks.

Docker Parallel:

deploy:
  resources:
    limits:
      cpus: '0.5'
      memory: 512M

When to use it:

  • Services that use little CPU
  • Lightweight asynchronous tasks
  • Cheap development and testing

T2 Micro - Dev/Test

Spec: 1 vCPU, 1 GB RAM. Cost: ~$0.012/hour. Use case: development, testing, small websites.

Docker Parallel:

deploy:
  resources:
    limits:
      cpus: '1'
      memory: 1G

When to use it:

  • Development environments
  • Test automation
  • Low-traffic microservices

T2 Small - Web Servers

Spec: 1 vCPU, 2 GB RAM. Cost: ~$0.024/hour. Use case: web servers, API endpoints.

Docker Parallel:

deploy:
  resources:
    limits:
      cpus: '1'
      memory: 2G

When to use it:

  • Web servers (Nginx, Apache)
  • REST APIs
  • Reverse proxy containers

T2 Medium - Application Server

Spec: 2 vCPU, 4 GB RAM. Cost: ~$0.048/hour. Use case: application servers, caches.

Docker Parallel:

deploy:
  resources:
    limits:
      cpus: '2'
      memory: 4G

When to use it:

  • Application servers (Node.js, Python)
  • Cache servers (Redis)
  • Development databases

M5 Large - Production

Spec: 2 vCPU, 8 GB RAM. Cost: ~$0.096/hour. Use case: production applications.

Docker Parallel:

deploy:
  resources:
    limits:
      cpus: '2'
      memory: 8G

When to use it:

  • Production web applications
  • Services with in-memory caches
  • Production databases (non-critical)

Healthcheck Parallelism

Docker Healthcheck = ELB Health Check

| Local | AWS Cloud |
|---|---|
| healthcheck.test | Health check path/protocol |
| healthcheck.interval | Health check interval (30s) |
| healthcheck.timeout | Health check timeout (5s) |
| healthcheck.retries | Unhealthy threshold (2) |
| healthcheck.start_period | Grace period (none in ELB) |

Practical Example

Local (Docker Compose):

healthcheck:
  test: ["CMD", "wget", "--spider", "-q", "http://localhost/health"]
  interval: 30s
  timeout: 5s
  retries: 2

Cloud (ELB Target Group):

{
  "TargetGroup": {
    "HealthCheckProtocol": "HTTP",
    "HealthCheckPath": "/health",
    "HealthCheckIntervalSeconds": 30,
    "HealthCheckTimeoutSeconds": 5,
    "UnhealthyThresholdCount": 2,
    "HealthyThresholdCount": 2
  }
}

Same behavior:

  • HTTP check every 30 seconds
  • Timeout after 5 seconds
  • Unhealthy after 2 consecutive failures
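
The same arithmetic applies in both systems when estimating how long a dead service can go unnoticed. A small sketch, using the interval/timeout/retries values from the example above:

```shell
# Worst-case detection window after a service dies: the failure is first
# seen on the next scheduled probe, then (retries - 1) more probes must
# also fail, and the last probe can hang for up to `timeout` seconds.
interval=30
timeout=5
retries=2
window=$(( (retries - 1) * interval + timeout ))
echo "unhealthy at most ${window}s after the first failed probe"
```

With these defaults, both the Docker healthcheck and the ELB target group can take roughly half a minute past the first failure to mark the target unhealthy.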

Resource Monitoring Parallelism

Docker Stats = CloudWatch Metrics

| Local | AWS Cloud |
|---|---|
| docker stats | CloudWatch Metrics |
| CPU % | CPUUtilization |
| Mem usage | MemoryUtilization |
| Network I/O | NetworkIn/Out |
| Block I/O | DiskReadBytes/WriteBytes |

Monitoring Example

Local:

docker stats lab03-web --no-stream
# lab03-web: 0.01% CPU, 2.5MiB / 1GiB memory (0.24%)
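
For scripting (the local counterpart of querying a CloudWatch metric), the CPU column can be pulled out of the stats line with awk. A sketch over a sample line, assuming the column layout shown above:

```shell
# Extract the CPU percentage from a `docker stats --no-stream` style line.
# Assumed columns: NAME CPU% MEM-USAGE / LIMIT MEM% (a sample is inlined
# here so the snippet is self-contained).
line="lab03-web 0.01% 2.5MiB / 1GiB 0.24%"
cpu=$(echo "$line" | awk '{ sub("%", "", $2); print $2 }')
echo "CPU: ${cpu}%"
```

In a real script you would pipe `docker stats --no-stream` through the same awk program instead of the inlined sample.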

Cloud (CloudWatch):

aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-12345 \
  --start-time 2024-03-25T00:00:00Z \
  --end-time 2024-03-25T00:05:00Z \
  --period 60 \
  --statistics Average

Dependency Parallelism

Docker depends_on = ECS DependsOn

| Local | AWS Cloud |
|---|---|
| depends_on: service_healthy | dependsOn: condition=HEALTHY |
| depends_on: service_started | dependsOn: condition=START |

Multi-Tier Example

Local (Docker Compose):

services:
  web:
    depends_on:
      app:
        condition: service_healthy
  
  app:
    depends_on:
      db:
        condition: service_healthy
  
  db:
    # No dependencies

Cloud (ECS Task Definition):

{
  "containerDefinitions": [
    {
      "name": "web",
      "dependsOn": [
        {"containerName": "app", "condition": "HEALTHY"}
      ]
    },
    {
      "name": "app",
      "dependsOn": [
        {"containerName": "db", "condition": "HEALTHY"}
      ]
    },
    {
      "name": "db",
      "dependsOn": []
    }
  ]
}

Cost and Billing Parallelism

Docker = Local Resources (Free)

Locally, Docker resources are "free":

  • You pay for the host hardware (one-off)
  • There is no hourly cost per container
  • Limits exist for isolation, not billing

EC2 = Pay-Per-Use

In the cloud, you pay for:

  • Compute hours (or seconds, with Fargate)
  • Instance types (bigger = more expensive)
  • Reserved and spot instances can reduce costs

Cost mapping:

| Docker | EC2 | Cost/hour |
|---|---|---|
| 0.5 CPU, 512M | t2.nano | ~$0.006 |
| 1 CPU, 1G | t2.micro | ~$0.012 |
| 1 CPU, 2G | t2.small | ~$0.024 |
| 2 CPU, 4G | t2.medium | ~$0.048 |
| 2 CPU, 8G | m5.large | ~$0.096 |
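
Hourly rates make small numbers look harmless; a quick extrapolation to a full month (~730 hours) gives a better feel for what "always on" costs. A sketch using the on-demand rates from the table above (actual prices vary by region):

```shell
# Rough monthly cost at 24/7 usage: hourly on-demand rate x ~730 hours.
# Rates are the approximate figures from the mapping table above.
monthly() { awk -v r="$1" 'BEGIN { printf "%.2f\n", r * 730 }'; }

echo "t2.small: \$$(monthly 0.024)/month"
echo "m5.large: \$$(monthly 0.096)/month"
```

So even the "cheap" t2.small costs about $17.52/month if left running, which is why stopping unused instances matters in the cloud in a way it never does locally.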

Scaling Parallelism

Docker Compose Scale = EC2 Auto Scaling

Local (Horizontal Scaling):

web:
  deploy:
    replicas: 4
    resources:
      limits:
        cpus: '1'
        memory: 2G

Result: 4 containers, each with 1 CPU and 2 GB.
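
Before scaling locally, it is worth adding up the footprint: the host must be able to cover replicas × per-container limits. A sketch with the values from the compose snippet above:

```shell
# Total host resources claimed by the scaled service:
# replicas x per-container limits (values from the compose file above).
replicas=4
cpus_each=1
mem_gb_each=2
total_cpus=$(( replicas * cpus_each ))
total_mem_gb=$(( replicas * mem_gb_each ))
echo "total footprint: ${total_cpus} CPUs, ${total_mem_gb} GB"
```

The same sum determines Auto Scaling costs in the cloud: desired-capacity × the instance type's hourly rate.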

Cloud (Auto Scaling Group):

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-template LaunchTemplateId=lt-12345 \
  --min-size 2 \
  --max-size 4 \
  --desired-capacity 4

Result: 4 EC2 instances (t2.small)

The parallel:

  • Docker: replicas = number of containers
  • EC2: desired-capacity = number of instances
  • Both distribute load across multiple units

Key Differences

1. CPU Credits (Burstable Instances)

T2/T3 instances use CPU credits:

  • Each instance accumulates credits while idle
  • It spends credits while under load
  • No credits = degraded performance

Docker has NO credits:

  • The CPU limit is a hard cap
  • Performance never degrades (always available up to the limit)
  • No bursting beyond the limit
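
The credit model has simple arithmetic behind it: one CPU credit is one vCPU-minute, so an instance's sustainable baseline is its earn rate divided by 60. A sketch assuming AWS's published earn rates for t2.micro (6 credits/hour) and t2.small (12 credits/hour):

```shell
# T2 baseline CPU from the credit-earn rate: one CPU credit buys one
# vCPU-minute, so baseline fraction = credits earned per hour / 60.
# Earn rates below assume AWS's published t2.micro/t2.small figures.
baseline_pct() { awk -v c="$1" 'BEGIN { printf "%.0f\n", c / 60 * 100 }'; }

echo "t2.micro: $(baseline_pct 6)% of a vCPU sustained"
echo "t2.small: $(baseline_pct 12)% of a vCPU sustained"
```

This is the behavior a flat Docker limit cannot reproduce: a cpus: '1' container always gets up to a full CPU, while a t2.micro can only sustain about a tenth of one once its credits run out.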

2. Pricing Models

AWS offers:

  • On-Demand: pay per hour used
  • Reserved: a discount for a 1-3 year commitment
  • Spot: bid for unused capacity (up to 90% off)
  • Dedicated: a dedicated physical host

Docker:

  • No pricing model
  • Everything is flat within the host's cost

3. Availability Zones

AWS:

  • Instances spread across multiple AZs
  • Each AZ = a separate data center
  • Geographic high availability

Docker:

  • Single host (without Swarm/Kubernetes)
  • No AZ separation
  • Host failure = all containers down

Best Practices Transfer

From Local to Cloud

| Local Best Practice | Cloud Equivalent |
|---|---|
| Set resource limits | Choose the right instance type |
| Use healthchecks | Configure ELB health checks |
| depends_on: service_healthy | Use ECS dependsOn |
| Monitor with docker stats | Use CloudWatch alarms |
| Scale with replicas | Use Auto Scaling Groups |

Architectural Evolution

Local (Docker Compose):

services:
  web:
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 2G

Cloud (ECS/Fargate):

{
  "containerDefinitions": [{
    "name": "web",
    "cpu": 1024,
    "memory": 2048,
    "memoryReservation": 2048
  }]
}

Note: in ECS Fargate, CPU and memory are configured as:

  • cpu: 256, 512, 1024, 2048, 4096 (units: 1 vCPU = 1024)
  • memory: 512, 1024, 2048, ... (in MB)
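
Translating compose limits into those units is a fixed conversion: multiply vCPUs by 1024 and express memory in MB. A sketch matching the cpus: '1', memory: 2G service above:

```shell
# Compose limits -> ECS task definition units:
# 1 vCPU = 1024 cpu units; memory is expressed in MB (1 GB = 1024 MB).
vcpus=1
mem_gb=2
ecs_cpu=$(( vcpus * 1024 ))
ecs_mem=$(( mem_gb * 1024 ))
echo "cpu=${ecs_cpu} memory=${ecs_mem}"
```

These are exactly the "cpu": 1024 and "memory": 2048 values in the task definition above.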

Command Equivalents Table

| Operation | Local (Docker) | Cloud (AWS) |
|---|---|---|
| Deploy compute | docker compose up -d | aws ec2 run-instances |
| Check resources | docker inspect --format '{{.HostConfig}}' | aws ec2 describe-instance-types |
| Monitor usage | docker stats | aws cloudwatch get-metric-statistics |
| Set limits | deploy.resources.limits | --instance-type parameter |
| Check health | docker inspect --format '{{.State.Health.Status}}' | aws elbv2 describe-target-health |
| Scale out | docker compose up -d --scale web=4 | aws autoscaling set-desired-capacity |
| Stop compute | docker compose stop | aws ec2 stop-instances |
| Terminate | docker compose down | aws ec2 terminate-instances |

Conclusion

Docker container resources follow the same fundamental principles as EC2 instances: defining CPU and memory, monitoring usage, and running health checks to verify state.

When you work with EC2 in the cloud, remember:

  • Docker Container = EC2 Instance (unit of compute)
  • Resource Limits = Instance Type (size and power)
  • Healthcheck = ELB Health Check (state verification)
  • docker stats = CloudWatch Metrics (monitoring)
  • depends_on = ECS DependsOn (startup ordering)

By understanding these parallels, you will be able to design scalable cloud architectures using the skills you have built locally.


Further Reading