# Reference
Technical reference for the Internal Scanner AWS deployment, including module variables, outputs, how EKS Auto Mode works, cost estimates, update procedures, and troubleshooting.
## Module Variables Reference

### Required Variables

| Variable | Type | Description |
|---|---|---|
| environment | string | Environment name (lowercase alphanumeric, e.g., production) |
| vpc_id | string | VPC ID for cluster deployment |
| private_subnet_ids | list(string) | Minimum 2 private subnets in different AZs |
| scanner_url | string | Domain for scanner endpoint (e.g., scanner.internal.example.com) |
| alb_inbound_cidrs | list(string) | CIDR blocks allowed to access the scanner ALB |
| license_key | string | Detectify license key (sensitive) |
| connector_api_key | string | Connector API key (sensitive) |
| registry_username | string | Docker registry username (sensitive) |
| registry_password | string | Docker registry password (sensitive) |
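As an illustration, a minimal module block that sets only the required variables might look like the sketch below. All IDs, CIDRs, and hostnames are placeholders; the sensitive values are passed through input variables rather than hardcoded.

```hcl
module "internal_scanner" {
  source = "detectify/internal-scanning/aws"

  environment        = "production"
  vpc_id             = "vpc-0abc1234"                      # placeholder VPC ID
  private_subnet_ids = ["subnet-0aaa1111", "subnet-0bbb2222"] # at least 2 AZs
  scanner_url        = "scanner.internal.example.com"
  alb_inbound_cidrs  = ["10.0.0.0/8"]                      # placeholder internal CIDR

  # Sensitive values; prefer TF_VAR_* environment variables or a secrets manager
  license_key       = var.license_key
  connector_api_key = var.connector_api_key
  registry_username = var.registry_username
  registry_password = var.registry_password
}
```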
### Optional Variables

#### Core Configuration

| Variable | Default | Description |
|---|---|---|
| aws_region | "us-east-1" | AWS region for deployment |
| cluster_name_prefix | "internal-scanning" | Prefix for EKS cluster name |
| cluster_version | "1.35" | Kubernetes version |
| internal_scanning_version | "stable" | Scanner image tag |
| log_format | "json" | Log output format ("json" or "text") |
#### DNS & Certificate Configuration

| Variable | Default | Description |
|---|---|---|
| create_route53_record | false | Create Route53 A record for scanner endpoint |
| route53_zone_id | null | Route53 zone ID for DNS record (required if create_route53_record = true) |
| acm_validation_zone_id | null | Public zone ID for ACM validation (defaults to route53_zone_id) |
| create_acm_certificate | true | Create ACM certificate (set false to bring your own) |
| acm_certificate_arn | null | Existing certificate ARN (required if create_acm_certificate = false) |
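For example, to bring your own certificate and have the module create the DNS record, a configuration fragment might look like this (the ARN and zone ID are placeholders):

```hcl
create_acm_certificate = false
acm_certificate_arn    = "arn:aws:acm:us-east-1:123456789012:certificate/abcd1234" # placeholder

create_route53_record = true
route53_zone_id       = "Z0123456789ABCDEF" # placeholder hosted zone ID
```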
#### Scaling & Resources

| Variable | Default | Description |
|---|---|---|
| scan_scheduler_replicas | 1 | Initial scheduler replicas |
| scan_manager_replicas | 1 | Initial manager replicas |
| chrome_controller_replicas | 1 | Chrome controller replicas |
| redis_replicas | 1 | Redis replicas |
| redis_storage_size | "8Gi" | Redis persistent volume size |
| redis_storage_class | "ebs-gp3" | Kubernetes StorageClass for the Redis PVC |
| deploy_redis | true | Deploy in-cluster Redis. Set false for managed Redis (e.g., ElastiCache) |
| redis_url | "redis://redis:6379" | Redis connection URL. Override when using external Redis |
| enable_autoscaling | false | Enable HPA for scheduler and manager |
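To switch from in-cluster Redis to a managed instance, disable the bundled deployment and point redis_url at your endpoint. The endpoint below is a placeholder ElastiCache hostname:

```hcl
deploy_redis = false
redis_url    = "redis://my-redis.abc123.use1.cache.amazonaws.com:6379" # placeholder endpoint
```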
#### Scan Configuration

| Variable | Default | Description |
|---|---|---|
| max_scan_duration_seconds | null (172800 = 2 days) | Maximum scan duration |
| scheduled_scans_poll_interval_seconds | 600 | How often to check for scheduled scans (min 60) |
| completed_scans_poll_interval_seconds | 60 | How often to check for completed scans (min 10) |
#### Observability

| Variable | Default | Description |
|---|---|---|
| enable_cloudwatch_observability | true | Enable CloudWatch container insights addon |
| enable_prometheus | false | Deploy Prometheus monitoring stack |
| prometheus_url | null | Hostname for Prometheus UI (required if Prometheus enabled) |
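Since prometheus_url is required whenever Prometheus is enabled, the two variables are typically set together. The hostname below is a placeholder:

```hcl
enable_prometheus = true
prometheus_url    = "prometheus.internal.example.com" # placeholder hostname
```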
#### Security & Encryption

| Variable | Default | Description |
|---|---|---|
| kms_key_arn | null | Existing KMS key for EKS secrets encryption (creates new if null) |
| kms_key_deletion_window | 30 | Days before KMS key deletion on destroy (7-30) |
| enable_cluster_creator_admin_permissions | true | Grant Terraform identity cluster admin access |
| cluster_admin_role_arns | [] | Additional IAM role ARNs for cluster admin access |
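For instance, to grant cluster admin access to an operations role in addition to the Terraform identity, you might pass something like this (the account ID and role name are placeholders):

```hcl
enable_cluster_creator_admin_permissions = true

cluster_admin_role_arns = [
  "arn:aws:iam::123456789012:role/PlatformAdmins", # placeholder role ARN
]
```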
#### Network

| Variable | Default | Description |
|---|---|---|
| cluster_endpoint_public_access | false | Enable public access to EKS API endpoint (no VPN required). IAM authentication still enforced |
| cluster_endpoint_public_access_cidrs | ["0.0.0.0/0"] | CIDR blocks allowed to reach the public EKS API. Only applies when cluster_endpoint_public_access = true. Restrict to known IPs in production |
| cluster_security_group_additional_rules | {} | Additional security group rules for EKS API access (for VPN-based access) |
#### Advanced

| Variable | Default | Description |
|---|---|---|
| registry_server | "registry.detectify.com" | Docker registry hostname |
| helm_chart_version | null (latest) | Pin Helm chart version |
| helm_chart_path | null | Local Helm chart path (overrides repository) |
## Module Outputs

| Output | Description |
|---|---|
| scanner_url | Full HTTPS URL of the scanner endpoint |
| cluster_endpoint | EKS cluster API endpoint |
| cluster_name | EKS cluster name |
| cluster_id | EKS cluster ID |
| cluster_certificate_authority_data | Base64 CA certificate (sensitive) |
| cluster_security_group_id | Cluster security group ID |
| cluster_primary_security_group_id | EKS-managed cluster security group (shown as “Cluster security group” in the EKS console) |
| cluster_oidc_issuer_url | OIDC issuer URL for IRSA |
| oidc_provider_arn | OIDC provider ARN |
| scanner_namespace | Kubernetes namespace (scanner) |
| alb_dns_name | ALB hostname (for manual DNS setup) |
| alb_zone_id | ALB Route53 hosted zone ID |
| acm_certificate_arn | TLS certificate ARN |
| acm_certificate_domain_validation_options | DNS records for manual ACM validation |
| kms_key_arn | KMS key ARN for secrets encryption |
| kms_key_id | KMS key ID (only if created by module) |
| kubeconfig_command | Command to configure kubectl |
| vpc_cni_role_arn | VPC CNI addon IAM role ARN |
| cloudwatch_observability_role_arn | CloudWatch addon IAM role ARN |
| alb_controller_role_arn | ALB controller IAM role ARN |
| prometheus_url | Prometheus URL (if enabled) |
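When create_route53_record is left at false, the alb_dns_name and alb_zone_id outputs can feed a DNS record you manage yourself. A sketch, assuming the module is instantiated as module "internal_scanner" and using a placeholder hosted zone ID:

```hcl
resource "aws_route53_record" "scanner" {
  zone_id = "Z0123456789ABCDEF"              # placeholder: your hosted zone
  name    = "scanner.internal.example.com"
  type    = "A"

  # Alias the record to the scanner ALB using the module's outputs
  alias {
    name                   = module.internal_scanner.alb_dns_name
    zone_id                = module.internal_scanner.alb_zone_id
    evaluate_target_health = true
  }
}
```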
## How EKS Auto Mode Handles Scaling

You don’t need to pre-provision or size nodes manually. EKS Auto Mode automatically:

- Creates nodes on demand: when scan-worker pods are scheduled, Auto Mode provisions nodes
- Right-sizes instances: selects appropriate EC2 instance types based on pod resource requests
- Scales horizontally: creates multiple smaller nodes rather than one large node
- Scales to zero: terminates unused nodes when scans complete
Example: For 20 concurrent scans needing ~8 vCPU / ~32 Gi total, Auto Mode might create:

- 4× m5.large nodes (2 vCPU / 8 Gi each), or
- 2× m5.xlarge nodes (4 vCPU / 16 Gi each)
## Estimated Costs
Typical monthly costs based on deployment size:
| Deployment Size | Concurrent Scans | EC2 Estimate | Total Estimate |
|---|---|---|---|
| Minimal | 5 | ~$100/month | ~$220/month |
| Standard | 10-20 | ~$200/month | ~$320/month |
| Large | 50+ | ~$500/month | ~$620/month |
Base costs: EKS cluster ($70), ALB ($20), NAT Gateway (~$30 if used). Costs vary by region and actual usage.
## Updating Scanner Version

When a new version is available, update your Terraform module version:

```hcl
module "internal_scanner" {
  source  = "detectify/internal-scanning/aws"
  version = "1.1.0"
  # ...
}
```

You can also pin a specific scanner image version:

```hcl
module "internal_scanner" {
  # ... other configuration ...
  internal_scanning_version = "2.0.0"
}
```

Apply the update:

```sh
terraform init -upgrade
terraform apply
```

The Helm chart performs a rolling update with zero downtime.
## Troubleshooting

### Terraform Timeout Connecting to EKS

Symptom: Terraform hangs or times out while applying kubernetes_* or helm_* resources.

Cause: Terraform cannot reach the EKS API endpoint from your network.

Solution A — No VPN: Enable the public EKS API endpoint:

```hcl
cluster_endpoint_public_access       = true
cluster_endpoint_public_access_cidrs = ["your-ip/32"]
```

Solution B — With VPN: Add security group rules to allow access via your VPN:

```hcl
cluster_security_group_additional_rules = {
  ingress_terraform = {
    description = "Allow Terraform access"
    protocol    = "tcp"
    from_port   = 443
    to_port     = 443
    type        = "ingress"
    cidr_blocks = ["your-vpn-cidr/32"]
  }
}
```

See EKS API Access for details.
### ImagePullBackOff Errors

Symptom: Pods stuck in ImagePullBackOff or ErrImagePull status.

Cause: Invalid registry credentials or registry not accessible.

Diagnosis:

```sh
# Check pod events
kubectl describe pod -n scanner <pod-name>

# Look for errors like:
# "Failed to pull image: unauthorized"
# "Failed to pull image: connection refused"
```

Solution:

- Verify registry_username and registry_password are correct
- Check that your VPC has outbound internet access to the registry
- Verify credentials work: contact Detectify support if issues persist
### ALB Not Created

Symptom: No load balancer appears after deployment.

Diagnosis:

```sh
# Check AWS Load Balancer Controller logs
kubectl logs -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
```

Common causes:

- Missing IAM permissions for the controller
- Missing subnet tags (kubernetes.io/role/internal-elb = 1)
- Security group rules blocking the controller
### ACM Certificate Validation Failed

Symptom: Certificate stuck in “Pending validation” status.

Cause: Route53 zone used for validation is private, but ACM requires public DNS.

Solution: Use a public Route53 hosted zone for ACM DNS validation, even if your scanner endpoint uses a private zone.
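Because acm_validation_zone_id is separate from route53_zone_id, the split can be expressed directly in the module configuration. A sketch with placeholder zone IDs:

```hcl
create_route53_record  = true
route53_zone_id        = "Z_PRIVATE_EXAMPLE" # placeholder: private zone holding the scanner record
acm_validation_zone_id = "Z_PUBLIC_EXAMPLE"  # placeholder: public zone used only for ACM validation
```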
### Pods CrashLoopBackOff

Symptom: Pods repeatedly crash and restart.

Diagnosis:

```sh
# Check pod logs
kubectl logs -n scanner <pod-name> --previous

# Check events
kubectl get events -n scanner --sort-by='.lastTimestamp'
```

Common causes:

- Invalid license key or connector API key
- Network connectivity issues to the Detectify platform
- Resource limits too low
### CloudWatch Logs

If enable_cloudwatch_observability = true:

```sh
aws logs tail /aws/containerinsights/<cluster-name>/application --follow
```

## Next Steps
- Terraform Deployment — Core deployment guide
- Configuration Options — DNS, networking, autoscaling, BYO certificates
- Secrets Management — Secure credential handling
- General Troubleshooting — More troubleshooting guides