Configuration Options

After deploying the Internal Scanner, configure DNS, networking, autoscaling, and monitoring to match your infrastructure requirements.

DNS Configuration

Option A: Automatic (Route53)

If you have Route53 hosted zones, add these variables to your module:

module "internal_scanner" { # ... other configuration ... # DNS configuration create_route53_record = true route53_zone_id = "ZXXXXXXXXPRIV" # Your private hosted zone ID for DNS A record # ACM certificate configuration create_acm_certificate = true acm_validation_zone_id = "ZXXXXXXXXPUB" # Your public hosted zone ID for ACM validation }

Important: ACM certificate DNS validation requires a public hosted zone, even though the scanner endpoint uses a private zone for internal-only resolution.

Option B: Manual DNS

If not using Route53, create a DNS record manually:

  1. Get the ALB DNS name:

    terraform output alb_dns_name
  2. Create a CNAME record in your DNS provider:

    scanner.internal.example.com → internal-xxxxx.eu-west-1.elb.amazonaws.com
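
If your internal zone does live in Route53 but is managed outside this module, the same record can be created with a standalone resource instead. A minimal sketch, assuming a placeholder zone ID and that the module exposes the alb_dns_name output shown above:

# Hypothetical standalone record pointing at the scanner ALB
resource "aws_route53_record" "scanner" {
  zone_id = "ZXXXXXXXXPRIV" # your private hosted zone ID (placeholder)
  name    = "scanner.internal.example.com"
  type    = "CNAME"
  ttl     = 300
  records = [module.internal_scanner.alb_dns_name]
}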

Option C: Bring Your Own Certificate

If you already have a TLS certificate (e.g., a wildcard certificate managed centrally) or cannot use ACM DNS validation:

module "internal_scanner" { # ... other configuration ... # Use an existing ACM certificate create_acm_certificate = false acm_certificate_arn = "arn:aws:acm:eu-west-1:123456789012:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" }

If you create the ACM certificate via the module but manage DNS externally, use the acm_certificate_domain_validation_options output to get the CNAME records needed for certificate validation:

terraform output acm_certificate_domain_validation_options
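
The output is a list of validation records; with placeholder values it looks roughly like this:

[
  {
    domain_name           = "scanner.internal.example.com"
    resource_record_name  = "_xxxxxxxx.scanner.internal.example.com."
    resource_record_type  = "CNAME"
    resource_record_value = "_yyyyyyyy.acm-validations.aws."
  }
]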

Create the returned CNAME records in your DNS provider. The certificate will validate once the records propagate.


Network Configuration

Understanding alb_inbound_cidrs

The alb_inbound_cidrs variable controls which networks can reach the scanner endpoint (ALB). This should include:

  • Your VPC CIDR — so applications in the VPC can communicate with the scanner
  • VPN or corporate network CIDRs — for accessing the health endpoint and triggering scans from your workstation
  • CI/CD runner CIDRs — if triggering scans from pipelines
module "internal_scanner" { # ... other configuration ... alb_inbound_cidrs = [ "10.0.0.0/16", # VPC CIDR "172.16.0.0/12", # Corporate VPN ] }

Allowing Scanner Access to Applications

The scanner needs network access to your internal applications. Update their security groups to allow inbound traffic from the scanner's security group:

# Example: Allow scanner to access application
resource "aws_security_group_rule" "allow_scanner" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  source_security_group_id = module.internal_scanner.cluster_security_group_id
  security_group_id        = aws_security_group.application.id
}

EKS API Access for Terraform and kubectl

If you run Terraform or kubectl from outside the VPC, you need to allow access to the EKS cluster API endpoint. Choose the option that matches your network setup.

Option A: Public Endpoint (No VPN Required)

Enable the public EKS API endpoint so Terraform and kubectl can reach the cluster over the internet. This is the simplest option when you don’t have VPN connectivity to the VPC.

module "internal_scanner" { # ... other configuration ... # Enable public EKS API access cluster_endpoint_public_access = true # IMPORTANT: Restrict to your known IPs (don't leave as 0.0.0.0/0 in production) cluster_endpoint_public_access_cidrs = [ "203.0.113.0/24", # Office network "198.51.100.10/32", # CI/CD runner ] }

Security: Even with public access enabled, all API requests require valid AWS IAM credentials. The CIDR restriction adds a network-level layer on top of IAM authentication. Private access within the VPC is always enabled — internal traffic (EKS nodes, ALB controller) always uses the private endpoint.

Option B: Security Group Rules (VPN Required)

If you have VPN or direct network access to the VPC, keep the endpoint private and allow access via security group rules:

module "internal_scanner" { # ... other configuration ... # Allow Terraform to reach EKS API from your VPN network cluster_security_group_additional_rules = { ingress_terraform = { description = "Allow Terraform access to EKS API" protocol = "tcp" from_port = 443 to_port = 443 type = "ingress" cidr_blocks = ["10.0.0.0/8"] # Your VPN or CI/CD network CIDR } } }

Note: Both options can be used together. Without either option configured, Terraform will time out when trying to configure Kubernetes and Helm resources.
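
Once one of these options is applied, a quick way to verify access is to update your kubeconfig and list nodes (substitute your cluster name, as in the CloudWatch example later in this guide):

aws eks update-kubeconfig --name <cluster-name> --region eu-west-1
kubectl get nodes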


Redis Configuration

By default, the module deploys an in-cluster Redis instance with persistent storage using an ebs-gp3 StorageClass. Redis data survives pod restarts thanks to a PersistentVolumeClaim.

Using Managed Redis (e.g., ElastiCache)

If you prefer a managed Redis service, disable the in-cluster Redis and provide your connection URL:

module "internal_scanner" { # ... other configuration ... deploy_redis = false redis_url = "rediss://user:pass@my-redis.example.com:6379" # Use rediss:// for TLS }

Note: When deploy_redis is set to false, you must override redis_url — the module enforces this with a lifecycle precondition.
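
As a concrete illustration, here is a minimal ElastiCache sketch; the replication group, subnet group, and security group below are hypothetical and not managed by this module:

# Hypothetical managed Redis (ElastiCache) with TLS enabled
resource "aws_elasticache_replication_group" "scanner" {
  replication_group_id       = "scanner-redis"
  description                = "Redis for the Internal Scanner"
  engine                     = "redis"
  node_type                  = "cache.t4g.small"
  num_cache_clusters         = 2
  automatic_failover_enabled = true
  transit_encryption_enabled = true # required for rediss:// URLs
  subnet_group_name          = aws_elasticache_subnet_group.scanner.name # assumed to exist
  security_group_ids         = [aws_security_group.redis.id]             # assumed to exist
}

module "internal_scanner" {
  # ... other configuration ...

  deploy_redis = false
  redis_url    = "rediss://${aws_elasticache_replication_group.scanner.primary_endpoint_address}:6379"
}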

Customizing Redis Storage

To change the storage size or class for the in-cluster Redis:

module "internal_scanner" { # ... other configuration ... redis_storage_size = "16Gi" # Default: 8Gi redis_storage_class = "ebs-gp3" # Default: ebs-gp3 }

Terraform State Backend

For team deployments, store Terraform state in S3:

Create backend.tf:

terraform { backend "s3" { bucket = "your-terraform-state-bucket" key = "internal-scanner/terraform.tfstate" region = "eu-west-1" encrypt = true } }

High Availability Settings

module "internal_scanner" { # ... basic configuration ... # Scaling (see Scaling guide for capacity planning) scan_scheduler_replicas = 3 scan_manager_replicas = 2 chrome_controller_replicas = 1 # Resource limits scan_scheduler_resources = { requests = { cpu = "500m" memory = "512Mi" } limits = { cpu = "2000m" memory = "2Gi" } } # Optional: Use existing KMS key for secrets encryption # kms_key_arn = "arn:aws:kms:eu-west-1:123456789012:key/..." # Optional: Pin scanner image version (defaults to "stable") # internal_scanning_version = "2.0.0" }

Autoscaling Configuration

For dynamic workloads, enable Horizontal Pod Autoscaler:

module "internal_scanner" { # ... other configuration ... # Enable autoscaling enable_autoscaling = true scan_scheduler_autoscaling = { min_replicas = 2 max_replicas = 10 target_cpu_utilization_percentage = 70 target_memory_utilization_percentage = null } scan_manager_autoscaling = { min_replicas = 1 max_replicas = 20 target_cpu_utilization_percentage = 80 target_memory_utilization_percentage = null } }

Monitoring

The scanner supports two monitoring options: Amazon CloudWatch and Prometheus. You can enable one or both depending on your observability requirements.

CloudWatch Observability

CloudWatch integration provides logs and metrics through the Amazon CloudWatch Observability addon:

module "internal_scanner" { # ... other configuration ... enable_cloudwatch_observability = true }

When enabled, container logs are automatically sent to CloudWatch Logs and you can view them in the AWS Console or via CLI:

aws logs tail /aws/containerinsights/<cluster-name>/application --follow

Prometheus Monitoring

For more detailed metrics, you can deploy a Prometheus server within the EKS cluster. This deploys a complete monitoring stack that scrapes metrics from the scanner services.

module "internal_scanner" { # ... other configuration ... enable_prometheus = true prometheus_url = "prometheus.internal.example.com" # Hostname for Prometheus UI }

Important: The prometheus_url is the hostname where you want to access the Prometheus UI; it is not a URL to an existing Prometheus instance. The module will:

  1. Deploy Prometheus server in the cluster
  2. Create an ACM certificate for the hostname
  3. Expose Prometheus via an internal ALB

DNS Requirements: Similar to the scanner endpoint, you’ll need:

  • A private hosted zone record pointing to the Prometheus ALB (for internal access)
  • A public hosted zone for ACM certificate validation

If you already have Route53 configured for the scanner (create_route53_record = true), the module will automatically create the necessary DNS records for Prometheus using the same hosted zones.


Cluster Access Management

By default, the identity (IAM user or role) that creates the EKS cluster automatically receives admin permissions. You can configure additional roles to have cluster admin access.

Configuration Options

| Variable | Default | Description |
| --- | --- | --- |
| enable_cluster_creator_admin_permissions | true | Grants the identity running Terraform admin access to the cluster |
| cluster_admin_role_arns | [] | Additional IAM role ARNs to grant cluster admin access |

Local Development

When deploying from your laptop or workstation, the default settings work well:

module "internal_scanner" { # ... other configuration ... # Default: your IAM identity automatically gets admin access # enable_cluster_creator_admin_permissions = true (default) }

CI/CD Pipeline Deployment

When deploying via a CI/CD pipeline, you might also want to grant specific team roles access for troubleshooting:

module "internal_scanner" { # ... other configuration ... # If pipeline agent deployed the cluster and you want to keep making changes via pipeline then set this to true enable_cluster_creator_admin_permissions = true # Grant access to specific team roles if needed cluster_admin_role_arns = [ "arn:aws:iam::123456789012:role/DevOpsTeam", "arn:aws:iam::123456789012:role/PlatformEngineers" ] }

Avoiding Duplicate Role Errors

Important: Do not add the same IAM identity to cluster_admin_role_arns that is already the cluster creator when enable_cluster_creator_admin_permissions = true.

For example, if you deploy from your laptop using the role arn:aws:iam::123456789012:role/MyRole:

# This will FAIL - duplicate admin entry
module "internal_scanner" {
  enable_cluster_creator_admin_permissions = true # MyRole gets admin via this

  cluster_admin_role_arns = [
    "arn:aws:iam::123456789012:role/MyRole" # ERROR: MyRole already has admin
  ]
}

Solution: Either set enable_cluster_creator_admin_permissions = false and explicitly list all admin roles, or don’t include your current role in cluster_admin_role_arns.
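
For example, the explicit-list variant might look like this (role names are placeholders):

# This works - all admins listed explicitly, no duplicate entry
module "internal_scanner" {
  enable_cluster_creator_admin_permissions = false

  cluster_admin_role_arns = [
    "arn:aws:iam::123456789012:role/MyRole", # the role running Terraform
    "arn:aws:iam::123456789012:role/DevOpsTeam"
  ]
}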

