
Reference

Technical reference for the Internal Scanner AWS deployment, including module variables, outputs, how EKS Auto Mode works, cost estimates, update procedures, and troubleshooting.

Module Variables Reference

Required Variables

| Variable | Type | Description |
| --- | --- | --- |
| `environment` | `string` | Environment name (lowercase alphanumeric, e.g., `production`) |
| `vpc_id` | `string` | VPC ID for cluster deployment |
| `private_subnet_ids` | `list(string)` | Minimum 2 private subnets in different AZs |
| `scanner_url` | `string` | Domain for scanner endpoint (e.g., `scanner.internal.example.com`) |
| `alb_inbound_cidrs` | `list(string)` | CIDR blocks allowed to access the scanner ALB |
| `license_key` | `string` | Detectify license key (sensitive) |
| `connector_api_key` | `string` | Connector API key (sensitive) |
| `registry_username` | `string` | Docker registry username (sensitive) |
| `registry_password` | `string` | Docker registry password (sensitive) |
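A minimal invocation that wires up every required variable might look like the following sketch. The VPC ID, subnet IDs, domain, and CIDR range are placeholders for illustration; the sensitive values are assumed to arrive as Terraform input variables (e.g., from a secrets manager) rather than being hard-coded.

```hcl
module "internal_scanner" {
  source  = "detectify/internal-scanning/aws"
  version = "1.1.0"

  environment = "production"
  vpc_id      = "vpc-0123456789abcdef0"

  # At least 2 private subnets in different AZs
  private_subnet_ids = ["subnet-0aaaa1111bbbb2222", "subnet-0cccc3333dddd4444"]

  scanner_url       = "scanner.internal.example.com"
  alb_inbound_cidrs = ["10.0.0.0/8"]

  # Sensitive values: pass in as variables, never commit to source control
  license_key       = var.license_key
  connector_api_key = var.connector_api_key
  registry_username = var.registry_username
  registry_password = var.registry_password
}
```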

Optional Variables

Core Configuration

| Variable | Default | Description |
| --- | --- | --- |
| `aws_region` | `"us-east-1"` | AWS region for deployment |
| `cluster_name_prefix` | `"internal-scanning"` | Prefix for EKS cluster name |
| `cluster_version` | `"1.35"` | Kubernetes version |
| `internal_scanning_version` | `"stable"` | Scanner image tag |
| `log_format` | `"json"` | Log output format (`"json"` or `"text"`) |

DNS & Certificate Configuration

| Variable | Default | Description |
| --- | --- | --- |
| `create_route53_record` | `false` | Create Route53 A record for scanner endpoint |
| `route53_zone_id` | `null` | Route53 zone ID for DNS record (required if `create_route53_record = true`) |
| `acm_validation_zone_id` | `null` | Public zone ID for ACM validation (defaults to `route53_zone_id`) |
| `create_acm_certificate` | `true` | Create ACM certificate (set `false` to bring your own) |
| `acm_certificate_arn` | `null` | Existing certificate ARN (required if `create_acm_certificate = false`) |
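If you already manage certificates outside the module, the bring-your-own-certificate path sketched below disables certificate creation and supplies an existing ARN instead; the ARN value is a placeholder.

```hcl
module "internal_scanner" {
  # ... required variables ...

  # Skip certificate creation and reuse an existing ACM certificate
  create_acm_certificate = false
  acm_certificate_arn    = "arn:aws:acm:us-east-1:123456789012:certificate/00000000-0000-0000-0000-000000000000"
}
```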

Scaling & Resources

| Variable | Default | Description |
| --- | --- | --- |
| `scan_scheduler_replicas` | `1` | Initial scheduler replicas |
| `scan_manager_replicas` | `1` | Initial manager replicas |
| `chrome_controller_replicas` | `1` | Chrome controller replicas |
| `redis_replicas` | `1` | Redis replicas |
| `redis_storage_size` | `"8Gi"` | Redis persistent volume size |
| `redis_storage_class` | `"ebs-gp3"` | Kubernetes StorageClass for the Redis PVC |
| `deploy_redis` | `true` | Deploy in-cluster Redis. Set `false` for managed Redis (e.g., ElastiCache) |
| `redis_url` | `"redis://redis:6379"` | Redis connection URL. Override when using external Redis |
| `enable_autoscaling` | `false` | Enable HPA for scheduler and manager |
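To use a managed Redis instead of the in-cluster deployment, disable `deploy_redis` and point `redis_url` at your endpoint. A sketch with a hypothetical ElastiCache endpoint (the hostname below is a placeholder):

```hcl
module "internal_scanner" {
  # ... required variables ...

  # Use an external Redis (e.g., ElastiCache) instead of the in-cluster one
  deploy_redis = false
  redis_url    = "redis://scanner-cache.abc123.use1.cache.amazonaws.com:6379"
}
```

The ElastiCache instance must be reachable from the cluster's private subnets; the in-cluster Redis settings (`redis_replicas`, `redis_storage_size`, `redis_storage_class`) are ignored when `deploy_redis = false`.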

Scan Configuration

| Variable | Default | Description |
| --- | --- | --- |
| `max_scan_duration_seconds` | `null` | Maximum scan duration in seconds; when `null`, the built-in default of 172800 (2 days) applies |
| `scheduled_scans_poll_interval_seconds` | `600` | How often to check for scheduled scans (min 60) |
| `completed_scans_poll_interval_seconds` | `60` | How often to check for completed scans (min 10) |

Observability

| Variable | Default | Description |
| --- | --- | --- |
| `enable_cloudwatch_observability` | `true` | Enable CloudWatch Container Insights addon |
| `enable_prometheus` | `false` | Deploy Prometheus monitoring stack |
| `prometheus_url` | `null` | Hostname for Prometheus UI (required if Prometheus enabled) |

Security & Encryption

| Variable | Default | Description |
| --- | --- | --- |
| `kms_key_arn` | `null` | Existing KMS key for EKS secrets encryption (creates new if `null`) |
| `kms_key_deletion_window` | `30` | Days before KMS key deletion on destroy (7-30) |
| `enable_cluster_creator_admin_permissions` | `true` | Grant Terraform identity cluster admin access |
| `cluster_admin_role_arns` | `[]` | Additional IAM role ARNs for cluster admin access |
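Granting cluster admin access to additional roles beyond the Terraform identity might look like this; the role ARN is a placeholder for a role in your own account.

```hcl
module "internal_scanner" {
  # ... required variables ...

  # Terraform identity keeps admin access; add further admin roles here
  enable_cluster_creator_admin_permissions = true
  cluster_admin_role_arns = [
    "arn:aws:iam::123456789012:role/platform-admins",
  ]
}
```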

Network

| Variable | Default | Description |
| --- | --- | --- |
| `cluster_endpoint_public_access` | `false` | Enable public access to EKS API endpoint (no VPN required). IAM authentication is still enforced |
| `cluster_endpoint_public_access_cidrs` | `["0.0.0.0/0"]` | CIDR blocks allowed to reach the public EKS API. Only applies when `cluster_endpoint_public_access = true`. Restrict to known IPs in production |
| `cluster_security_group_additional_rules` | `{}` | Additional security group rules for EKS API access (for VPN-based access) |

Advanced

| Variable | Default | Description |
| --- | --- | --- |
| `registry_server` | `"registry.detectify.com"` | Docker registry hostname |
| `helm_chart_version` | `null` (latest) | Pin Helm chart version |
| `helm_chart_path` | `null` | Local Helm chart path (overrides repository) |

Module Outputs

| Output | Description |
| --- | --- |
| `scanner_url` | Full HTTPS URL of the scanner endpoint |
| `cluster_endpoint` | EKS cluster API endpoint |
| `cluster_name` | EKS cluster name |
| `cluster_id` | EKS cluster ID |
| `cluster_certificate_authority_data` | Base64 CA certificate (sensitive) |
| `cluster_security_group_id` | Cluster security group ID |
| `cluster_primary_security_group_id` | EKS-managed cluster security group (shown as "Cluster security group" in the EKS console) |
| `cluster_oidc_issuer_url` | OIDC issuer URL for IRSA |
| `oidc_provider_arn` | OIDC provider ARN |
| `scanner_namespace` | Kubernetes namespace (`scanner`) |
| `alb_dns_name` | ALB hostname (for manual DNS setup) |
| `alb_zone_id` | ALB Route53 hosted zone ID |
| `acm_certificate_arn` | TLS certificate ARN |
| `acm_certificate_domain_validation_options` | DNS records for manual ACM validation |
| `kms_key_arn` | KMS key ARN for secrets encryption |
| `kms_key_id` | KMS key ID (only if created by module) |
| `kubeconfig_command` | Command to configure kubectl |
| `vpc_cni_role_arn` | VPC CNI addon IAM role ARN |
| `cloudwatch_observability_role_arn` | CloudWatch addon IAM role ARN |
| `alb_controller_role_arn` | ALB controller IAM role ARN |
| `prometheus_url` | Prometheus URL (if enabled) |
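When `create_route53_record = false`, the `alb_dns_name` and `alb_zone_id` outputs can feed a DNS record you manage yourself. A sketch using the standard AWS provider, assuming a hosted zone resource named `aws_route53_zone.internal` already exists in your configuration:

```hcl
resource "aws_route53_record" "scanner" {
  zone_id = aws_route53_zone.internal.zone_id
  name    = "scanner.internal.example.com"
  type    = "A"

  # Alias to the module-managed ALB instead of a fixed IP
  alias {
    name                   = module.internal_scanner.alb_dns_name
    zone_id                = module.internal_scanner.alb_zone_id
    evaluate_target_health = true
  }
}
```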

How EKS Auto Mode Handles Scaling

You don’t need to pre-provision or size nodes manually. EKS Auto Mode automatically:

  1. Creates nodes on demand - When scan-worker pods are scheduled, Auto Mode provisions nodes
  2. Right-sizes instances - Selects appropriate EC2 instance types based on pod resource requests
  3. Scales horizontally - Creates multiple smaller nodes rather than one large node
  4. Scales to zero - Terminates unused nodes when scans complete

Example: For 20 concurrent scans needing ~8 vCPU / ~32 Gi total, Auto Mode might create:

  • m5.large nodes (2 vCPU / 8 Gi each), or
  • m5.xlarge nodes (4 vCPU / 16 Gi each)

Estimated Costs

Typical monthly costs based on deployment size:

| Deployment Size | Concurrent Scans | EC2 Estimate | Total Estimate |
| --- | --- | --- | --- |
| Minimal | 5 | ~$100/month | ~$220/month |
| Standard | 10-20 | ~$200/month | ~$320/month |
| Large | 50+ | ~$500/month | ~$620/month |

Base monthly costs: EKS control plane (~$70), ALB (~$20), NAT Gateway (~$30, if used). Costs vary by region and actual usage.


Updating Scanner Version

When a new version is available, update your Terraform module version:

```hcl
module "internal_scanner" {
  source  = "detectify/internal-scanning/aws"
  version = "1.1.0"
  # ...
}
```

You can also pin a specific scanner image version:

```hcl
module "internal_scanner" {
  # ... other configuration ...
  internal_scanning_version = "2.0.0"
}
```

Apply the update:

```shell
terraform init -upgrade
terraform apply
```

The Helm chart performs a rolling update with zero downtime.


Troubleshooting

Terraform Timeout Connecting to EKS

Symptom: Terraform hangs or times out during kubernetes_* or helm_* resources.

Cause: Terraform cannot reach the EKS API endpoint from your network.

Solution A — No VPN: Enable the public EKS API endpoint:

```hcl
cluster_endpoint_public_access       = true
cluster_endpoint_public_access_cidrs = ["your-ip/32"]
```

Solution B — With VPN: Add security group rules to allow access via your VPN:

```hcl
cluster_security_group_additional_rules = {
  ingress_terraform = {
    description = "Allow Terraform access"
    protocol    = "tcp"
    from_port   = 443
    to_port     = 443
    type        = "ingress"
    cidr_blocks = ["your-vpn-cidr/32"]
  }
}
```

See EKS API Access for details.

ImagePullBackOff Errors

Symptom: Pods stuck in ImagePullBackOff or ErrImagePull status.

Cause: Invalid registry credentials or registry not accessible.

Diagnosis:

```shell
# Check pod events
kubectl describe pod -n scanner <pod-name>

# Look for errors like:
# "Failed to pull image: unauthorized"
# "Failed to pull image: connection refused"
```

Solution:

  1. Verify registry_username and registry_password are correct
  2. Check that your VPC has outbound internet access to the registry
  3. Verify credentials work: contact Detectify support if issues persist

ALB Not Created

Symptom: No load balancer appears after deployment.

Diagnosis:

```shell
# Check AWS Load Balancer Controller logs
kubectl logs -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
```

Common causes:

  • Missing IAM permissions for the controller
  • Subnet tags missing (kubernetes.io/role/internal-elb = 1)
  • Security group rules blocking controller

ACM Certificate Validation Failed

Symptom: Certificate stuck in “Pending validation” status.

Cause: Route53 zone used for validation is private, but ACM requires public DNS.

Solution: Use a public Route53 hosted zone for ACM DNS validation, even if your scanner endpoint uses a private zone.
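One way to express this split with the module's own variables is to keep the scanner's A record in the private zone while pointing ACM validation at a public zone; both zone IDs below are placeholders.

```hcl
module "internal_scanner" {
  # ... required variables ...

  create_route53_record  = true
  route53_zone_id        = "Z0PRIVATEZONEEXMPL" # private zone: scanner A record
  acm_validation_zone_id = "Z0PUBLICZONEEXMPLE" # public zone: ACM DNS validation
}
```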

Pods CrashLoopBackOff

Symptom: Pods repeatedly crash and restart.

Diagnosis:

```shell
# Check pod logs
kubectl logs -n scanner <pod-name> --previous

# Check events
kubectl get events -n scanner --sort-by='.lastTimestamp'
```

Common causes:

  • Invalid license key or connector API key
  • Network connectivity issues to Detectify platform
  • Resource limits too low

CloudWatch Logs

If enable_cloudwatch_observability = true:

```shell
aws logs tail /aws/containerinsights/<cluster-name>/application --follow
```
