AWS with Terraform

Deploy the Internal Scanner to your AWS account using our Terraform module. This guide covers EKS deployment with automatic node scaling.

Prerequisites

Account Permissions

You need an AWS account with permissions to create:

  • EKS clusters and node groups
  • IAM roles and policies
  • Application Load Balancers
  • Security groups
  • Route53 records (optional)
  • ACM certificates

VPC Requirements

| Requirement | Details |
| --- | --- |
| VPC | Existing VPC with DNS support enabled |
| Subnets | Minimum 2 private subnets in different availability zones |
| NAT Gateway | Required for outbound internet access from private subnets |
| DNS | VPC DNS resolution and hostnames enabled |
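
The DNS requirements can be verified up front with a Terraform `check` block (available in Terraform >= 1.5). This is a sketch: the `aws_vpc` data source and its `enable_dns_support` / `enable_dns_hostnames` attributes come from the AWS provider, while the VPC ID is a placeholder you must replace:

```hcl
# Sketch: fail `terraform plan` early if the target VPC lacks DNS support.
data "aws_vpc" "target" {
  id = "vpc-xxxxxxxxx" # placeholder: your VPC ID
}

check "vpc_dns" {
  assert {
    condition = (
      data.aws_vpc.target.enable_dns_support &&
      data.aws_vpc.target.enable_dns_hostnames
    )
    error_message = "The VPC must have DNS resolution and DNS hostnames enabled."
  }
}
```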

Software Requirements

| Tool | Version | Purpose |
| --- | --- | --- |
| Terraform | >= 1.5.0 | Infrastructure provisioning |
| AWS CLI | >= 2.0 | AWS authentication |
| kubectl | >= 1.28 | Kubernetes management |
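
You can enforce the Terraform version requirement in your configuration so that older binaries fail fast. The AWS provider constraint below is an assumption (any recent 5.x release), not a module requirement stated in this guide:

```hcl
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0" # assumption: any recent 5.x provider release
    }
  }
}
```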

Checklist

Before proceeding, verify:

  • AWS account with required permissions
  • VPC with 2+ private subnets in different AZs
  • NAT Gateway configured for outbound internet
  • Terraform >= 1.5.0 installed
  • AWS CLI configured with credentials
  • Detectify credentials from UI (license key, API key, Docker credentials)

Step 1: Configure AWS Credentials

# Configure AWS CLI with your credentials
aws configure

# Verify access
aws sts get-caller-identity

Step 2: Create Terraform Configuration

Create a new directory for your deployment:

mkdir internal-scanner && cd internal-scanner

Create main.tf with the module configuration:

module "internal_scanner" {
  source = "git::https://github.com/detectify/internal-scanner-terraform.git?ref=v1.0.0"

  # Core configuration
  environment = "production"
  aws_region  = "eu-west-1"

  # Network configuration
  vpc_id             = "vpc-xxxxxxxxx"
  private_subnet_ids = ["subnet-aaaaa", "subnet-bbbbb"]
  alb_inbound_cidrs  = ["10.0.0.0/8"]

  # Scanner endpoint
  scanner_url = "scanner.internal.example.com"

  # Detectify credentials (from Settings → Internal Scanning)
  license_key = var.license_key
  api_key     = var.api_key

  tags = {
    Environment = "production"
    ManagedBy   = "terraform"
  }
}

variable "license_key" {
  description = "Detectify license key"
  type        = string
  sensitive   = true
}

variable "api_key" {
  description = "Detectify API key"
  type        = string
  sensitive   = true
}

output "scanner_url" {
  value = module.internal_scanner.scanner_url
}

output "kubeconfig_command" {
  value = module.internal_scanner.kubeconfig_command
}

Create terraform.tfvars with credentials from the Detectify UI (Settings → Internal Scanning):

license_key = "your-license-key"
api_key     = "your-api-key"
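
If you prefer not to keep credentials in a plaintext tfvars file, one option is to read them from AWS Secrets Manager. This is a sketch under assumptions: the secret name is hypothetical, and it is assumed to hold a JSON object with `license_key` and `api_key` fields:

```hcl
# Sketch: pull Detectify credentials from a Secrets Manager secret
# (hypothetical secret name) instead of terraform.tfvars.
data "aws_secretsmanager_secret_version" "detectify" {
  secret_id = "detectify/internal-scanner" # hypothetical secret name
}

locals {
  detectify_creds = jsondecode(
    data.aws_secretsmanager_secret_version.detectify.secret_string
  )
}

# Then, in the module block:
#   license_key = local.detectify_creds.license_key
#   api_key     = local.detectify_creds.api_key
```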

Step 3: Deploy Infrastructure

# Initialize Terraform
terraform init

# Preview changes
terraform plan

# Deploy
terraform apply

Deployment creates an EKS cluster with Auto Mode enabled, which automatically provisions and scales nodes based on workload.

Step 4: Configure kubectl Access

After deployment completes, configure kubectl:

# Get the kubeconfig command from Terraform output
terraform output kubeconfig_command

# Run the output command, e.g.:
aws eks update-kubeconfig --region eu-west-1 --name production-internal-scanning

# Verify access
kubectl get nodes

Step 5: Verify Deployment

Check that all components are running:

# View all pods
kubectl get pods -n scanner

# Expected output:
# NAME                      READY   STATUS    RESTARTS   AGE
# scan-scheduler-xxxxx      1/1     Running   0          5m
# scan-manager-xxxxx        1/1     Running   0          5m
# chrome-controller-xxxxx   1/1     Running   0          5m
# redis-xxxxx               1/1     Running   0          5m

Test the scanner endpoint (from within your VPC):

curl https://scanner.internal.example.com/health

# Expected: {"status": "ok"}

DNS Configuration

Option A: Automatic (Route53)

If you have a Route53 hosted zone, add these variables:

create_route53_record = true
route53_zone_id       = "Z1234567890ABC"

Option B: Manual DNS

If not using Route53, create a DNS record manually:

  1. Get the ALB DNS name:

    terraform output alb_dns_name
  2. Create a CNAME record in your DNS provider:

    scanner.internal.example.com → internal-xxxxx.eu-west-1.elb.amazonaws.com

Network Configuration

Allowing Scanner Access to Applications

The scanner needs network access to your internal applications. Update security groups to allow inbound traffic from the scanner's security group:

# Example: Allow scanner to access application
resource "aws_security_group_rule" "allow_scanner" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  source_security_group_id = module.internal_scanner.scanner_security_group_id
  security_group_id        = aws_security_group.application.id
}
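
If your applications listen on several ports, the same pattern extends with `for_each`. The port list and variable name here are illustrative, not part of the module:

```hcl
variable "app_ports" {
  description = "Ports the scanner should reach (illustrative defaults)"
  type        = set(number)
  default     = [80, 443, 8080]
}

# One ingress rule per application port, all sourced from the
# scanner's security group.
resource "aws_security_group_rule" "allow_scanner_ports" {
  for_each = var.app_ports

  type                     = "ingress"
  from_port                = each.value
  to_port                  = each.value
  protocol                 = "tcp"
  source_security_group_id = module.internal_scanner.scanner_security_group_id
  security_group_id        = aws_security_group.application.id
}
```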

Production Configuration

For production deployments, consider these additional settings:

module "internal_scanner" {
  # ... basic configuration ...

  # Scaling (see Scaling guide for capacity planning)
  scan_scheduler_replicas    = 3
  scan_manager_replicas      = 2
  chrome_controller_replicas = 1

  # Resource limits
  scan_scheduler_resources = {
    requests = {
      cpu    = "500m"
      memory = "512Mi"
    }
    limits = {
      cpu    = "2000m"
      memory = "2Gi"
    }
  }

  # Monitoring
  enable_cloudwatch_observability = true
  enable_prometheus               = true

  # Cluster access for your team
  cluster_admin_role_arns = [
    "arn:aws:iam::123456789012:role/DevOpsTeam"
  ]
}

Autoscaling Configuration

For dynamic workloads, enable Horizontal Pod Autoscaler:

module "internal_scanner" {
  # ... other configuration ...

  scan_scheduler_autoscaling = {
    enabled                           = true
    min_replicas                      = 2
    max_replicas                      = 10
    target_cpu_utilization_percentage = 70
  }

  scan_manager_autoscaling = {
    enabled                           = true
    min_replicas                      = 1
    max_replicas                      = 20
    target_cpu_utilization_percentage = 80
  }
}

How EKS Auto Mode Handles Scaling

You don’t need to pre-provision or size nodes manually. EKS Auto Mode automatically:

  1. Creates nodes on demand - When scan-worker pods are scheduled, Auto Mode provisions nodes
  2. Right-sizes instances - Selects appropriate EC2 instance types based on pod resource requests
  3. Scales horizontally - Creates multiple smaller nodes rather than one large node
  4. Scales to zero - Terminates unused nodes when scans complete

Example: For 20 concurrent scans needing ~8 vCPU / ~32 Gi total, Auto Mode might create:

  • m5.large nodes (2 vCPU / 8 Gi each), or
  • m5.xlarge nodes (4 vCPU / 16 Gi each)
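
The node counts behind that example can be sketched as back-of-the-envelope arithmetic in Terraform locals. This is illustrative only; Auto Mode makes the actual instance-type and placement decisions:

```hcl
# Illustrative sizing math for ~20 concurrent scans (~8 vCPU / ~32 Gi total).
locals {
  required_vcpu   = 8
  required_mem_gi = 32

  # m5.large: 2 vCPU / 8 Gi  → max(8/2, 32/8)  = 4 nodes
  # m5.xlarge: 4 vCPU / 16 Gi → max(8/4, 32/16) = 2 nodes
  m5_large_nodes  = max(ceil(local.required_vcpu / 2), ceil(local.required_mem_gi / 8))
  m5_xlarge_nodes = max(ceil(local.required_vcpu / 4), ceil(local.required_mem_gi / 16))
}
```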

Estimated Costs

Typical monthly costs based on deployment size:

| Deployment Size | Concurrent Scans | EC2 Estimate | Total Estimate |
| --- | --- | --- | --- |
| Minimal | 5 | ~$100/month | ~$220/month |
| Standard | 10-20 | ~$200/month | ~$320/month |
| Large | 50+ | ~$500/month | ~$620/month |

Base costs: EKS cluster ($70), ALB ($20), NAT Gateway (~$30). Costs vary by region and actual usage.

Updating Scanner Version

When a new version is available, update your Terraform module version:

module "internal_scanner" {
  source = "git::https://github.com/detectify/internal-scanner-terraform.git?ref=v1.1.0"
  # ...
}

Apply the update:

terraform init -upgrade
terraform apply

The Helm chart performs a rolling update with zero downtime.

Troubleshooting

ALB Not Created

# Check AWS Load Balancer Controller logs
kubectl logs -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller

Image Pull Errors

Verify your container registry credentials are configured correctly. Contact Detectify support if you’re unable to pull images.

CloudWatch Logs

If enable_cloudwatch_observability = true:

aws logs tail /aws/containerinsights/<cluster-name>/application --follow
