Configuration Options
After deploying the Internal Scanner, configure DNS, networking, autoscaling, and monitoring to match your infrastructure requirements.
DNS Configuration
Option A: Automatic (Route53)
If you have Route53 hosted zones, add these variables to your module:
module "internal_scanner" {
# ... other configuration ...
# DNS configuration
create_route53_record = true
route53_zone_id = "ZXXXXXXXXPRIV" # Your private hosted zone ID for DNS A record
# ACM certificate configuration
create_acm_certificate = true
acm_validation_zone_id = "ZXXXXXXXXPUB" # Your public hosted zone ID for ACM validation
}Important: ACM certificate DNS validation requires a public hosted zone, even though the scanner endpoint uses a private zone for internal-only resolution.
Option B: Manual DNS
If not using Route53, create a DNS record manually:
-
Get the ALB DNS name:
terraform output alb_dns_name -
Create a CNAME record in your DNS provider:
scanner.internal.example.com → internal-xxxxx.eu-west-1.elb.amazonaws.com
Network Configuration
Allowing Scanner Access to Applications
The scanner needs network access to your internal applications. Update security groups to allow inbound traffic from the scanner’s subnet:
# Example: Allow scanner to access application
resource "aws_security_group_rule" "allow_scanner" {
type = "ingress"
from_port = 443
to_port = 443
protocol = "tcp"
source_security_group_id = module.internal_scanner.cluster_security_group_id
security_group_id = aws_security_group.application.id
}Running Terraform from Outside the VPC
If you run Terraform from outside your VPC (e.g., local machine via VPN, CI/CD pipeline), you need to allow access to the EKS API endpoint. Add this to your module configuration:
module "internal_scanner" {
# ... other configuration ...
# Allow Terraform to reach EKS API from your network
cluster_security_group_additional_rules = {
ingress_terraform = {
description = "Allow Terraform access to EKS API"
protocol = "tcp"
from_port = 443
to_port = 443
type = "ingress"
cidr_blocks = ["your-vpn-cidr/32"] # Your VPN or CI/CD IP range
}
}
}Without this rule, Terraform will timeout when trying to configure Kubernetes and Helm resources.
Terraform State Backend
For team deployments, store Terraform state in S3:
Create backend.tf:
terraform {
backend "s3" {
bucket = "your-terraform-state-bucket"
key = "internal-scanner/terraform.tfstate"
region = "eu-west-1"
encrypt = true
}
}High Availability Settings
module "internal_scanner" {
# ... basic configuration ...
# Scaling (see Scaling guide for capacity planning)
scan_scheduler_replicas = 3
scan_manager_replicas = 2
chrome_controller_replicas = 1
# Resource limits
scan_scheduler_resources = {
requests = {
cpu = "500m"
memory = "512Mi"
}
limits = {
cpu = "2000m"
memory = "2Gi"
}
}
# Optional: Use existing KMS key for secrets encryption
# kms_key_arn = "arn:aws:kms:eu-west-1:123456789012:key/..."
# Optional: Pin scanner image version (defaults to "latest")
# internal_scanning_version = "2.0.0"
}Autoscaling Configuration
For dynamic workloads, enable Horizontal Pod Autoscaler:
module "internal_scanner" {
# ... other configuration ...
# Enable autoscaling
enable_autoscaling = true
scan_scheduler_autoscaling = {
min_replicas = 2
max_replicas = 10
target_cpu_utilization_percentage = 70
target_memory_utilization_percentage = null
}
scan_manager_autoscaling = {
min_replicas = 1
max_replicas = 20
target_cpu_utilization_percentage = 80
target_memory_utilization_percentage = null
}
}Monitoring
The scanner supports two monitoring options: Amazon CloudWatch and Prometheus. You can enable one or both depending on your observability requirements.
CloudWatch Observability
CloudWatch integration provides logs and metrics through the Amazon CloudWatch Observability addon:
module "internal_scanner" {
# ... other configuration ...
enable_cloudwatch_observability = true
}When enabled, container logs are automatically sent to CloudWatch Logs and you can view them in the AWS Console or via CLI:
aws logs tail /aws/containerinsights/<cluster-name>/application --followPrometheus Monitoring
For more detailed metrics, you can deploy a Prometheus server within the EKS cluster. This deploys a complete monitoring stack that scrapes metrics from the scanner services.
module "internal_scanner" {
# ... other configuration ...
enable_prometheus = true
prometheus_url = "prometheus.internal.example.com" # Hostname for Prometheus UI
}Important: The prometheus_url is the hostname where you want to access the Prometheus UI - it’s not a URL to an existing Prometheus instance. The module will:
- Deploy Prometheus server in the cluster
- Create an ACM certificate for the hostname
- Expose Prometheus via an internal ALB
DNS Requirements: Similar to the scanner endpoint, you’ll need:
- A private hosted zone record pointing to the Prometheus ALB (for internal access)
- A public hosted zone for ACM certificate validation
If you already have Route53 configured for the scanner (create_route53_record = true), the module will automatically create the necessary DNS records for Prometheus using the same hosted zones.
Cluster Access Management
By default, the identity (IAM user or role) that creates the EKS cluster automatically receives admin permissions. You can configure additional roles to have cluster admin access.
Configuration Options
| Variable | Default | Description |
|---|---|---|
enable_cluster_creator_admin_permissions | true | Grants the identity running Terraform admin access to the cluster |
cluster_admin_role_arns | [] | Additional IAM role ARNs to grant cluster admin access |
Local Development
When deploying from your laptop or workstation, the default settings work well:
module "internal_scanner" {
# ... other configuration ...
# Default: your IAM identity automatically gets admin access
# enable_cluster_creator_admin_permissions = true (default)
}CI/CD Pipeline Deployment
When deploying via a CI/CD pipeline, you might want specific team roles to have access as well for troubleshooting purposes:
module "internal_scanner" {
# ... other configuration ...
# If pipeline agent deployed the cluster and you want to keep making changes via pipeline then set this to true
enable_cluster_creator_admin_permissions = true
# Grant access to specific team roles if needed
cluster_admin_role_arns = [
"arn:aws:iam::123456789012:role/DevOpsTeam",
"arn:aws:iam::123456789012:role/PlatformEngineers"
]
}Avoiding Duplicate Role Errors
Important: Do not add the same IAM identity to cluster_admin_role_arns that is already the cluster creator when enable_cluster_creator_admin_permissions = true.
For example, if you deploy from your laptop using the role arn:aws:iam::123456789012:role/MyRole:
# This will FAIL - duplicate admin entry
module "internal_scanner" {
enable_cluster_creator_admin_permissions = true # MyRole gets admin via this
cluster_admin_role_arns = [
"arn:aws:iam::123456789012:role/MyRole" # ERROR: MyRole already has admin
]
}Solution: Either set enable_cluster_creator_admin_permissions = false and explicitly list all admin roles, or don’t include your current role in cluster_admin_role_arns.
Next Steps
- Terraform Deployment - Core deployment guide
- Secrets Management - Secure credential handling
- Reference - Costs, updates, and troubleshooting
- Scaling - Detailed capacity planning