Reference

Technical reference for the Internal Scanner AWS deployment, including how EKS Auto Mode works, cost estimates, update procedures, and troubleshooting.

How EKS Auto Mode Handles Scaling

You don’t need to pre-provision or size nodes manually. EKS Auto Mode automatically:

  1. Creates nodes on demand - When scan-worker pods are scheduled, Auto Mode provisions nodes
  2. Right-sizes instances - Selects appropriate EC2 instance types based on pod resource requests
  3. Scales horizontally - Creates multiple smaller nodes rather than one large node
  4. Scales to zero - Terminates unused nodes when scans complete

Example: For 20 concurrent scans needing ~8 vCPU / ~32 Gi total, Auto Mode might create:

  • m5.large nodes (2 vCPU / 8 Gi each), or
  • m5.xlarge nodes (4 vCPU / 16 Gi each)
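You can observe this behavior directly once a scan starts. A quick check, assuming kubectl is configured against the cluster:

```bash
# Watch nodes appear as scan-worker pods are scheduled, and disappear after scans finish
kubectl get nodes -w

# Show the EC2 instance type Auto Mode selected for each node
kubectl get nodes -L node.kubernetes.io/instance-type
```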

Estimated Costs

Typical monthly costs based on deployment size:

| Deployment Size | Concurrent Scans | EC2 Estimate | Total Estimate |
|-----------------|------------------|--------------|----------------|
| Minimal         | 5                | ~$100/month  | ~$220/month    |
| Standard        | 10-20            | ~$200/month  | ~$320/month    |
| Large           | 50+              | ~$500/month  | ~$620/month    |

Base costs: EKS cluster (~$70/month), ALB (~$20/month), NAT Gateway (~$30/month if used). Costs vary by region and actual usage.


Updating Scanner Version

When a new version is available, update your Terraform module version:

module "internal_scanner" { source = "detectify/detectify-internal-scanning/aws" version = "1.1.0" # ... }

You can also pin a specific scanner image version:

module "internal_scanner" { # ... other configuration ... internal_scanning_version = "2.0.0" }

Apply the update:

```bash
terraform init -upgrade
terraform apply
```

The Helm chart performs a rolling update with zero downtime.
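If you want to confirm the rollout, you can watch the workloads cycle. A sketch, assuming the chart installs into the scanner namespace; the actual deployment names come from the Helm chart, so the placeholder below must be filled in:

```bash
# List deployments managed by the chart, then watch one roll out to completion
kubectl get deployments -n scanner
kubectl rollout status deployment/<deployment-name> -n scanner
```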


Troubleshooting

Terraform Timeout Connecting to EKS

Symptom: Terraform hangs or times out while creating or updating kubernetes_* or helm_* resources.

Cause: Terraform cannot reach the EKS API endpoint from your network.

Solution: Add security group rules to allow access from your network:

```hcl
cluster_security_group_additional_rules = {
  ingress_terraform = {
    description = "Allow Terraform access"
    protocol    = "tcp"
    from_port   = 443
    to_port     = 443
    type        = "ingress"
    cidr_blocks = ["your-ip/32"]
  }
}
```
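To confirm the diagnosis before changing security groups, you can probe the cluster endpoint from the machine that runs Terraform. A quick check, assuming the AWS CLI is configured; any HTTP response (even 401/403) proves the endpoint is reachable, while a timeout matches the symptom above:

```bash
# Resolve the EKS API endpoint and probe it from where Terraform runs
ENDPOINT=$(aws eks describe-cluster --name <cluster-name> \
  --query 'cluster.endpoint' --output text)
curl -sk --max-time 10 "$ENDPOINT/version" || echo "endpoint unreachable"
```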

ImagePullBackOff Errors

Symptom: Pods stuck in ImagePullBackOff or ErrImagePull status.

Cause: Invalid registry credentials or registry not accessible.

Diagnosis:

```bash
# Check pod events
kubectl describe pod -n scanner <pod-name>

# Look for errors like:
#   "Failed to pull image: unauthorized"
#   "Failed to pull image: connection refused"
```

Solution:

  1. Verify registry_username and registry_password are correct
  2. Check that your VPC has outbound internet access to the registry
  3. Verify the credentials work outside the cluster (see the check below); contact Detectify support if issues persist
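To test the credentials outside the cluster, pull the exact image the pod uses from a machine with network access to the registry. A sketch; the image reference and registry host come from your own pod spec:

```bash
# Find the exact image reference the pod is trying to pull
kubectl get pod -n scanner <pod-name> \
  -o jsonpath='{.spec.containers[0].image}'

# Then try the same pull manually with the configured credentials
docker login <registry-host> -u <registry_username>
docker pull <image-reference>
```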

ALB Not Created

Symptom: No load balancer appears after deployment.

Diagnosis:

```bash
# Check AWS Load Balancer Controller logs
kubectl logs -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
```

Common causes:

  • Missing IAM permissions for the controller
  • Subnet tags missing (kubernetes.io/role/internal-elb = 1; see the check below)
  • Security group rules blocking controller
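To rule out the subnet-tag cause, you can list which subnets in the VPC actually carry the tag the controller looks for. Assumes the AWS CLI; <vpc-id> is your VPC's ID:

```bash
# List subnets tagged for internal load balancers; your private subnets should appear here
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=<vpc-id>" \
            "Name=tag:kubernetes.io/role/internal-elb,Values=1" \
  --query 'Subnets[].SubnetId'
```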

ACM Certificate Validation Failed

Symptom: Certificate stuck in “Pending validation” status.

Cause: Route53 zone used for validation is private, but ACM requires public DNS.

Solution: Use a public Route53 hosted zone for ACM DNS validation, even if your scanner endpoint uses a private zone.
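You can also inspect exactly which DNS records ACM is waiting on; the CNAMEs it reports must resolve on public DNS. Assumes the AWS CLI; <certificate-arn> is the ARN of the pending certificate:

```bash
# Show the validation records and their status for the pending certificate
aws acm describe-certificate --certificate-arn <certificate-arn> \
  --query 'Certificate.DomainValidationOptions'
```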

Pods CrashLoopBackOff

Symptom: Pods repeatedly crash and restart.

Diagnosis:

```bash
# Check pod logs
kubectl logs -n scanner <pod-name> --previous

# Check events
kubectl get events -n scanner --sort-by='.lastTimestamp'
```

Common causes:

  • Invalid license key or connector API key
  • Network connectivity issues to Detectify platform
  • Resource limits too low (see the check below)
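The container's last termination reason often points straight at the cause; for example, OOMKilled indicates memory limits set too low. A quick check:

```bash
# Why did the container last terminate? "OOMKilled" means memory limits are too low
kubectl get pod -n scanner <pod-name> \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```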

CloudWatch Logs

If enable_cloudwatch_observability = true:

```bash
aws logs tail /aws/containerinsights/<cluster-name>/application --follow
```
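To narrow the output to a single pod, you can add a filter pattern (log events from Container Insights include the pod name):

```bash
# Tail only log events that mention a specific pod
aws logs tail /aws/containerinsights/<cluster-name>/application --follow \
  --filter-pattern "<pod-name>"
```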
