AWS with Terraform
Deploy the Internal Scanner to your AWS account using our Terraform module. This guide covers EKS deployment with automatic node scaling.
Prerequisites
Account Permissions
You need an AWS account with permissions to create:
- EKS clusters and node groups
- IAM roles and policies
- Application Load Balancers
- Security groups
- Route53 records (optional)
- ACM certificates
VPC Requirements
| Requirement | Details |
|---|---|
| VPC | Existing VPC with DNS support enabled |
| Subnets | Minimum 2 private subnets in different availability zones |
| NAT Gateway | Required for outbound internet access from private subnets |
| DNS | VPC DNS resolution and hostnames enabled |
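As a quick preflight, you can confirm the DNS attributes on your VPC with the AWS CLI. This is an optional sketch; the VPC ID below is a placeholder, and both calls should report `"Value": true`:

```shell
# Preflight sketch: confirm DNS support on the target VPC.
# Replace vpc-xxxxxxxxx with your own VPC ID before running.
VPC_ID="vpc-xxxxxxxxx"
aws ec2 describe-vpc-attribute --vpc-id "$VPC_ID" --attribute enableDnsSupport || true
aws ec2 describe-vpc-attribute --vpc-id "$VPC_ID" --attribute enableDnsHostnames || true
```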
Software Requirements
| Tool | Version | Purpose |
|---|---|---|
| Terraform | >= 1.5.0 | Infrastructure provisioning |
| AWS CLI | >= 2.0 | AWS authentication |
| kubectl | >= 1.28 | Kubernetes management |
Checklist
Before proceeding, verify:
- AWS account with required permissions
- VPC with 2+ private subnets in different AZs
- NAT Gateway configured for outbound internet
- Terraform >= 1.5.0 installed
- AWS CLI configured with credentials
- Detectify credentials from UI (license key, API key, Docker credentials)
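To catch version drift early, you can pin the Terraform version from the table above directly in your configuration. This is a suggested sketch; the AWS provider constraint shown is an assumption, not a module requirement:

```terraform
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0" # assumed constraint; adjust to your environment
    }
  }
}
```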
Step 1: Configure AWS Credentials
```shell
# Configure AWS CLI with your credentials
aws configure

# Verify access
aws sts get-caller-identity
```
Step 2: Create Terraform Configuration
Create a new directory for your deployment:
```shell
mkdir internal-scanner && cd internal-scanner
```
Create main.tf with the module configuration:
```terraform
module "internal_scanner" {
  source = "git::https://github.com/detectify/internal-scanner-terraform.git?ref=v1.0.0"

  # Core configuration
  environment = "production"
  aws_region  = "eu-west-1"

  # Network configuration
  vpc_id             = "vpc-xxxxxxxxx"
  private_subnet_ids = ["subnet-aaaaa", "subnet-bbbbb"]
  alb_inbound_cidrs  = ["10.0.0.0/8"]

  # Scanner endpoint
  scanner_url = "scanner.internal.example.com"

  # Detectify credentials (from Settings → Internal Scanning)
  license_key = var.license_key
  api_key     = var.api_key

  tags = {
    Environment = "production"
    ManagedBy   = "terraform"
  }
}

variable "license_key" {
  description = "Detectify license key"
  type        = string
  sensitive   = true
}

variable "api_key" {
  description = "Detectify API key"
  type        = string
  sensitive   = true
}

output "scanner_url" {
  value = module.internal_scanner.scanner_url
}

output "kubeconfig_command" {
  value = module.internal_scanner.kubeconfig_command
}
```
Create terraform.tfvars with credentials from the Detectify UI (Settings → Internal Scanning):
```terraform
license_key = "your-license-key"
api_key     = "your-api-key"
```
Step 3: Deploy Infrastructure
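If you prefer to keep credentials out of files entirely, Terraform also reads variables from `TF_VAR_`-prefixed environment variables, so the tfvars file can be skipped. The values below are placeholders:

```shell
# Placeholder values; substitute your real Detectify credentials
export TF_VAR_license_key="your-license-key"
export TF_VAR_api_key="your-api-key"
```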
```shell
# Initialize Terraform
terraform init

# Preview changes
terraform plan

# Deploy
terraform apply
```
Deployment creates an EKS cluster with Auto Mode enabled, which automatically provisions and scales nodes based on workload.
Step 4: Configure kubectl Access
After deployment completes, configure kubectl:
```shell
# Get the kubeconfig command from Terraform output
terraform output kubeconfig_command

# Run the output command, e.g.:
aws eks update-kubeconfig --region eu-west-1 --name production-internal-scanning

# Verify access
kubectl get nodes
```
Step 5: Verify Deployment
Check that all components are running:
```shell
# View all pods
kubectl get pods -n scanner

# Expected output:
# NAME                      READY   STATUS    RESTARTS   AGE
# scan-scheduler-xxxxx      1/1     Running   0          5m
# scan-manager-xxxxx        1/1     Running   0          5m
# chrome-controller-xxxxx   1/1     Running   0          5m
# redis-xxxxx               1/1     Running   0          5m
```
Test the scanner endpoint (from within your VPC):
```shell
curl https://scanner.internal.example.com/health
# Expected: {"status": "ok"}
```
DNS Configuration
Option A: Automatic (Route53)
If you have a Route53 hosted zone, add these variables:
```terraform
create_route53_record = true
route53_zone_id       = "Z1234567890ABC"
```
Option B: Manual DNS
If not using Route53, create a DNS record manually:
1. Get the ALB DNS name:
   ```shell
   terraform output alb_dns_name
   ```
2. Create a CNAME record in your DNS provider:
   scanner.internal.example.com → internal-xxxxx.eu-west-1.elb.amazonaws.com
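Once the record exists, you can verify resolution from inside the VPC. An optional sketch, using the example hostname from this guide:

```shell
# Resolve the scanner hostname; +short prints just the CNAME chain / IPs
SCANNER_HOST="scanner.internal.example.com"
dig +short "$SCANNER_HOST" || echo "record not resolvable yet"
```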
Network Configuration
Allowing Scanner Access to Applications
The scanner needs network access to your internal applications. Update security groups to allow inbound traffic from the scanner’s subnet:
```terraform
# Example: Allow scanner to access application
resource "aws_security_group_rule" "allow_scanner" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  source_security_group_id = module.internal_scanner.scanner_security_group_id
  security_group_id        = aws_security_group.application.id
}
```
Production Configuration
For production deployments, consider these additional settings:
```terraform
module "internal_scanner" {
  # ... basic configuration ...

  # Scaling (see Scaling guide for capacity planning)
  scan_scheduler_replicas    = 3
  scan_manager_replicas      = 2
  chrome_controller_replicas = 1

  # Resource limits
  scan_scheduler_resources = {
    requests = {
      cpu    = "500m"
      memory = "512Mi"
    }
    limits = {
      cpu    = "2000m"
      memory = "2Gi"
    }
  }

  # Monitoring
  enable_cloudwatch_observability = true
  enable_prometheus               = true

  # Cluster access for your team
  cluster_admin_role_arns = [
    "arn:aws:iam::123456789012:role/DevOpsTeam"
  ]
}
```
Autoscaling Configuration
For dynamic workloads, enable Horizontal Pod Autoscaler:
```terraform
module "internal_scanner" {
  # ... other configuration ...

  scan_scheduler_autoscaling = {
    enabled                           = true
    min_replicas                      = 2
    max_replicas                      = 10
    target_cpu_utilization_percentage = 70
  }

  scan_manager_autoscaling = {
    enabled                           = true
    min_replicas                      = 1
    max_replicas                      = 20
    target_cpu_utilization_percentage = 80
  }
}
```
How EKS Auto Mode Handles Scaling
You don’t need to pre-provision or size nodes manually. EKS Auto Mode automatically:
- Creates nodes on demand - When scan-worker pods are scheduled, Auto Mode provisions nodes
- Right-sizes instances - Selects appropriate EC2 instance types based on pod resource requests
- Scales horizontally - Creates multiple smaller nodes rather than one large node
- Scales to zero - Terminates unused nodes when scans complete
Example: For 20 concurrent scans needing ~8 vCPU / ~32 Gi total, Auto Mode might create:
- 4× `m5.large` nodes (2 vCPU / 8 Gi each), or
- 2× `m5.xlarge` nodes (4 vCPU / 16 Gi each)
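The node count in that example is just ceiling division of total demand by per-instance capacity; a quick illustration of the arithmetic, using the instance sizes above:

```shell
# ~8 vCPU of total pod requests, packed onto m5.large nodes (2 vCPU each)
TOTAL_VCPU=8
VCPU_PER_NODE=2
# Ceiling division: round up so partial nodes still count as a whole node
NODES=$(( (TOTAL_VCPU + VCPU_PER_NODE - 1) / VCPU_PER_NODE ))
echo "$NODES"   # 4 nodes
```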
Estimated Costs
Typical monthly costs based on deployment size:
| Deployment Size | Concurrent Scans | EC2 Estimate | Total Estimate |
|---|---|---|---|
| Minimal | 5 | ~$100/month | ~$220/month |
| Standard | 10-20 | ~$200/month | ~$320/month |
| Large | 50+ | ~$500/month | ~$620/month |
Base costs: EKS cluster (~$70), ALB (~$20), NAT Gateway (~$30), roughly $120/month before EC2; the total estimates above are the EC2 estimate plus this base. Costs vary by region and actual usage.
Updating Scanner Version
When a new version is available, update your Terraform module version:
```terraform
module "internal_scanner" {
  source = "git::https://github.com/detectify/internal-scanner-terraform.git?ref=v1.1.0"
  # ...
}
```
Apply the update:
```shell
terraform init -upgrade
terraform apply
```
The Helm chart performs a rolling update with zero downtime.
Troubleshooting
ALB Not Created
```shell
# Check AWS Load Balancer Controller logs
kubectl logs -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
```
Image Pull Errors
Verify your container registry credentials are configured correctly. Contact Detectify support if you’re unable to pull images.
CloudWatch Logs
If `enable_cloudwatch_observability = true`:
```shell
aws logs tail /aws/containerinsights/<cluster-name>/application --follow
```
Next Steps
- Configuration - Set up scan targets and integrate with Detectify
- Scaling - Detailed capacity planning
- Troubleshooting - General troubleshooting guide