Configuration
The Terraform module has many options available. For a complete set of options, see the Detectify Terraform module inputs.
Namespace
The module deploys the scanner into a Kubernetes namespace you control.
| Variable | Default | Description |
|---|---|---|
| namespace | "scanner" | Namespace to deploy into. Rejects default, matching the underlying chart’s install-time guard. |
| create_namespace | true | Whether Terraform should create the namespace. Set to false if it’s managed out-of-band (e.g. by a platform team’s GitOps pipeline). |
For multi-tenant setups, deploy the module once per tenant with distinct name and namespace values. See Multi-tenant deployments for the broader picture.
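As a rough sketch, two per-tenant instances could look like this (the tenant names are purely illustrative, and each instance still needs the rest of the configuration shown elsewhere in this guide):
module "scanner_tenant_a" {
  # ... other configuration ...
  name      = "scanner-tenant-a"
  namespace = "scanner-tenant-a"
}

module "scanner_tenant_b" {
  # ... other configuration ...
  name      = "scanner-tenant-b"
  namespace = "scanner-tenant-b"
}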
Helm Chart Version
var.helm_chart_version defaults to "~> 2.0", pinning the underlying Helm chart to the 2.x major. You don’t need to set this variable explicitly for normal use — 2.x chart releases flow in automatically on terraform init -upgrade, and major-version bumps (which are always breaking) require an opt-in module release.
Earlier module versions left helm_chart_version unset, which pulled “latest” from the chart repo and silently broke existing deployments on every chart major release. Module 3.0 fixed that by pinning to "~> 2.0".
Override the default only when you need to lock to a specific patch or validate a pre-release chart:
module "internal_scanner" {
# ... other configuration ...
helm_chart_version = "2.0.0" # pin to a specific patch
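# To validate a pre-release chart instead, the same variable takes a full
# pre-release version string (the version below is purely illustrative):
# helm_chart_version = "2.1.0-rc.1"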
}
For details on what chart 2.x changed, see the chart 1.x → 2.0 migration guide — most of that is handled internally by the module, but it’s the reference if you’re debugging chart-level behaviour.
API Configuration
The scanner exposes a REST API that can be used, for example, to start jobs and fetch results without going through the Detectify UI.
Option A: Access API Directly via ALB
The simplest way to make the scanner API available is via an ALB, without TLS or a custom domain name.
module "internal_scanner" {
# ... other configuration ...
# API configuration
api_enabled = true
api_allowed_cidrs = [
"12.34.56.78/9", # replace with your CIDR
]
}
output "api_endpoint" {
value = module.internal_scanner.api_endpoint
}
Test the scanner REST API with:
terraform output api_endpoint
curl $(terraform output -raw api_endpoint)/health
Option B: Access API via Custom Domain and TLS
If you want to communicate with the scanner API via TLS, you need a public and a private hosted zone. Add these variables to your module:
module "internal_scanner" {
# ... other configuration ...
# DNS configuration
api_enabled = true
api_allowed_cidrs = [
"12.34.56.78/9",
]
api_domain = "scanner.example.com"
route53_private_zone_id = "ZXXXXXXXXPRIV"
route53_public_zone_id = "ZXXXXXXXXPUB"
}
The public hosted zone is only used to validate the ACM TLS certificate used by the load balancer.
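If you would rather not hardcode the zone IDs, they can be looked up with the aws_route53_zone data source. A sketch, assuming both zones are for example.com (adjust the names to your own domain):
data "aws_route53_zone" "public" {
  name = "example.com."
}

data "aws_route53_zone" "private" {
  name         = "example.com."
  private_zone = true
}

module "internal_scanner" {
  # ... other configuration ...
  api_domain              = "scanner.example.com"
  route53_private_zone_id = data.aws_route53_zone.private.zone_id
  route53_public_zone_id  = data.aws_route53_zone.public.zone_id
}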
Network Configuration
EKS Access
EKS can be managed either via a private endpoint only, or a private and public endpoint. Regardless of configuration, all requests still require valid IAM authentication.
To restrict EKS management to the internal network (private endpoint only):
module "internal_scanner" {
# ... other configuration ...
cluster_endpoint_public_access = false
}
Alternatively, limit public access to specific CIDRs:
module "internal_scanner" {
# ... other configuration ...
cluster_endpoint_public_access_cidrs = [
"172.16.0.0/12", # Corporate VPN
]
}
The EKS endpoint must be reachable from the machine deploying/managing the Detectify Terraform module.
Understanding api_allowed_cidrs
The api_allowed_cidrs variable controls which networks can reach the scanner API endpoint (ALB). This could include:
- VPN or corporate network CIDRs — for accessing the health endpoint and triggering scans from your workstation
- CI/CD runner CIDRs — if triggering scans from pipelines
module "internal_scanner" {
# ... other configuration ...
api_allowed_cidrs = [
"10.0.5.0/24", # CI/CD runners
"172.16.0.0/12", # Corporate VPN
]
}
Allowing Scanner Access to Applications
The scanner needs network access to your internal applications. Update security groups to allow inbound traffic from the scanner’s subnet:
# Example: Allow scanner to access application
resource "aws_security_group_rule" "allow_scanner" {
type = "ingress"
from_port = 443
to_port = 443
protocol = "tcp"
source_security_group_id = module.internal_scanner.cluster_security_group_id
security_group_id = aws_security_group.application.id
}
Redis Configuration
By default, the module deploys an in-cluster Redis instance with persistent storage.
Using Managed Redis (e.g., ElastiCache)
If you prefer a managed Redis service, disable the in-cluster Redis and provide your connection URL:
module "internal_scanner" {
# ... other configuration ...
deploy_redis = false
redis_url = "rediss://user:pass@my-redis.example.com:6379" # Use rediss:// for TLS
}
Note: When deploy_redis is set to false, you must override redis_url.
For example, using AWS managed Valkey:
module "internal_scanner" {
# ... other configuration ...
deploy_redis = false
redis_url = "rediss://${aws_elasticache_serverless_cache.valkey.endpoint[0].address}:6379"
}
resource "aws_elasticache_serverless_cache" "valkey" {
name = "detectify-internal-scanning"
engine = "valkey"
major_engine_version = "8"
security_group_ids = [aws_security_group.valkey.id]
subnet_ids = ["subnet-xxxxx", "subnet-yyyyy"]
}
resource "aws_security_group" "valkey" {
name = "detectify-valkey"
description = "Access to Valkey used by Detectify scanner"
vpc_id = "vpc-xxxxx"
}
resource "aws_vpc_security_group_ingress_rule" "from_eks" {
description = "Allow EKS access"
referenced_security_group_id = module.internal_scanner.cluster_primary_security_group_id
from_port = 6379
to_port = 6379
ip_protocol = "tcp"
security_group_id = aws_security_group.valkey.id
}
Monitoring
The scanner supports monitoring via Amazon CloudWatch.
CloudWatch Observability
CloudWatch integration provides logs and metrics through the Amazon CloudWatch Observability addon:
module "internal_scanner" {
# ... other configuration ...
enable_cloudwatch_observability = true # default is true
}
When enabled, container logs are automatically sent to CloudWatch Logs and you can view them in the AWS Console or via CLI:
aws logs tail /aws/containerinsights/<cluster-name>/application --follow
Cluster Access Management
By default, the identity (IAM user or role) that creates the EKS cluster automatically receives admin permissions. You can configure additional roles to have cluster admin access.
Configuration Options
| Variable | Default | Description |
|---|---|---|
| enable_cluster_creator_admin_permissions | true | Grants the identity running Terraform admin access to the cluster |
| cluster_admin_role_arns | [] | Additional IAM role ARNs to grant cluster admin access |
Local Development
When deploying from your laptop or workstation, the default settings work well:
module "internal_scanner" {
# ... other configuration ...
# Default: your IAM identity automatically gets admin access
# enable_cluster_creator_admin_permissions = true (default)
}
CI/CD Pipeline Deployment
When deploying via a CI/CD pipeline, you might want specific team roles to have access as well for troubleshooting purposes:
module "internal_scanner" {
# ... other configuration ...
# If the pipeline agent deployed the cluster and you want to keep making changes via the pipeline, keep this set to true
enable_cluster_creator_admin_permissions = true
# Grant access to specific team roles if needed
cluster_admin_role_arns = [
"arn:aws:iam::123456789012:role/DevOpsTeam",
"arn:aws:iam::123456789012:role/PlatformEngineers"
]
}
Avoiding Duplicate Role Errors
Important: When enable_cluster_creator_admin_permissions = true, do not also add the cluster creator’s IAM identity to cluster_admin_role_arns.
For example, if you deploy from your laptop using the role arn:aws:iam::123456789012:role/MyRole:
# This will FAIL - duplicate admin entry
module "internal_scanner" {
enable_cluster_creator_admin_permissions = true # MyRole gets admin via this
cluster_admin_role_arns = [
"arn:aws:iam::123456789012:role/MyRole" # ERROR: MyRole already has admin
]
}
Solution: Either set enable_cluster_creator_admin_permissions = false and explicitly list all admin roles, or don’t include your current role in cluster_admin_role_arns.
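A sketch of the first option, with every admin role listed explicitly (the ARNs are illustrative):
module "internal_scanner" {
  # ... other configuration ...
  # The creator identity is no longer granted admin implicitly...
  enable_cluster_creator_admin_permissions = false
  # ...so list every role that needs cluster admin access, including the
  # role you deploy with (illustrative ARNs)
  cluster_admin_role_arns = [
    "arn:aws:iam::123456789012:role/MyRole",
    "arn:aws:iam::123456789012:role/DevOpsTeam",
  ]
}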
Passing through extra Helm values
var.helm_values accepts a list of YAML strings that get appended to the module’s generated values. Anything in the chart’s values.yaml is fair game — use this to tune chart settings that aren’t exposed as first-class module variables without forking the module.
Values supplied here take precedence over the module’s defaults, so this is also the escape hatch when you need to override a module-set value.
module "internal_scanner" {
# ... other configuration ...
helm_values = [
file("${path.module}/chart-overrides.yaml"),
yamlencode({
priorityClass = {
value = 2000000
}
redis = {
persistence = {
size = "16Gi"
}
}
autoscaling = {
enabled = true
scanManager = {
maxReplicas = 40
targetCPUUtilizationPercentage = 70
}
}
}),
]
}
Common uses:
- Resource tuning — bump Redis PVC size, tweak autoscaling thresholds.
- PriorityClass — raise the scanner’s priority to protect it from eviction under cluster pressure.
- Ingress tweaks — additional NGINX annotations, specific ingress class names (see the sketch below).
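For instance, a sketch of the ingress case, assuming the chart exposes an ingress block with className and annotations keys (confirm them against the chart’s values.yaml before relying on them):
module "internal_scanner" {
  # ... other configuration ...
  helm_values = [
    yamlencode({
      ingress = {
        # Hypothetical chart keys; check the chart's values.yaml
        className = "internal-nginx"
        annotations = {
          "nginx.ingress.kubernetes.io/proxy-body-size" = "64m"
        }
      }
    }),
  ]
}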
Only values that exist in the chart’s values.yaml are honoured. Values you supply for unexposed fields (e.g. container-level nodeSelector, tolerations, or affinity) are silently ignored — the chart would need to plumb them through its deployment templates first. If you need one of those, open an issue on the chart repo.
For the full list of available chart values, see the Helm Chart Configuration reference.