
Configuration

The Terraform module has many options available. For a complete set of options, see the Detectify Terraform module inputs.

API Configuration

The scanner exposes a REST API that can be used to, for example, start jobs and fetch results without going through the Detectify UI.

Option A: Access API Directly via ALB

The simplest way to make the scanner API available is by using an ALB, without TLS or custom domain name.

```hcl
module "internal_scanner" {
  # ... other configuration ...

  # API configuration
  api_enabled = true
  api_allowed_cidrs = [
    "12.34.56.78/32", # replace with your CIDR
  ]
}

output "api_endpoint" {
  value = module.internal_scanner.api_endpoint
}
```

Test the scanner REST API with:

```shell
terraform output api_endpoint
curl "$(terraform output -raw api_endpoint)/health"
```

Option B: Access API via Custom Domain and TLS

If you want to communicate with the scanner API via TLS, you need a public and a private hosted zone. Add these variables to your module:

```hcl
module "internal_scanner" {
  # ... other configuration ...

  # DNS configuration
  api_enabled = true
  api_allowed_cidrs = [
    "12.34.56.78/32",
  ]
  api_domain              = "scanner.example.com"
  route53_private_zone_id = "ZXXXXXXXXPRIV"
  route53_public_zone_id  = "ZXXXXXXXXPUB"
}
```

The public hosted zone is only used to verify the ACM TLS certificate used by the load balancer.
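If you do not already have the two zones, a minimal sketch might look like the following. The zone name, VPC ID, and resource names here are placeholders, not values required by the module; in practice you would typically reference zones that already exist.

```hcl
# Hypothetical hosted zones; a zone with a vpc block is private,
# one without is public.
resource "aws_route53_zone" "public" {
  name = "example.com"
}

resource "aws_route53_zone" "private" {
  name = "example.com"

  vpc {
    vpc_id = "vpc-xxxxx" # VPC the scanner runs in
  }
}

module "internal_scanner" {
  # ... other configuration ...
  api_domain              = "scanner.example.com"
  route53_private_zone_id = aws_route53_zone.private.zone_id
  route53_public_zone_id  = aws_route53_zone.public.zone_id
}
```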

Network Configuration

EKS Access

EKS can be managed either via a private endpoint only, or a private and public endpoint. Regardless of configuration, all requests still require valid IAM authentication.

Make EKS management only possible from the internal network/private endpoint:

```hcl
module "internal_scanner" {
  # ... other configuration ...
  cluster_endpoint_public_access = false
}
```

Limit public access to specific CIDRs:

```hcl
module "internal_scanner" {
  # ... other configuration ...
  cluster_endpoint_public_access_cidrs = [
    "172.16.0.0/12", # Corporate VPN
  ]
}
```

The EKS endpoint must be reachable from the machine deploying/managing the Detectify Terraform module.
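For example, if Terraform runs from a workstation or CI runner outside the VPC, you can keep the public endpoint enabled but restrict it to that machine's egress IP. A sketch; the IP below is a placeholder:

```hcl
module "internal_scanner" {
  # ... other configuration ...
  cluster_endpoint_public_access = true
  cluster_endpoint_public_access_cidrs = [
    "198.51.100.7/32", # egress IP of the machine running terraform (placeholder)
  ]
}
```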

Understanding api_allowed_cidrs

The api_allowed_cidrs variable controls which networks can reach the scanner API endpoint (ALB). This could include:

  • VPN or corporate network CIDRs — for accessing the health endpoint and triggering scans from your workstation
  • CI/CD runner CIDRs — if triggering scans from pipelines
```hcl
module "internal_scanner" {
  # ... other configuration ...
  api_allowed_cidrs = [
    "10.0.5.0/24",   # CI/CD runners
    "172.16.0.0/12", # Corporate VPN
  ]
}
```

Allowing Scanner Access to Applications

The scanner needs network access to your internal applications. Update security groups to allow inbound traffic from the scanner’s subnet:

```hcl
# Example: Allow scanner to access application
resource "aws_security_group_rule" "allow_scanner" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  source_security_group_id = module.internal_scanner.cluster_security_group_id
  security_group_id        = aws_security_group.application.id
}
```
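If an application listens on several ports, a single `for_each` rule keeps the configuration compact. A sketch, assuming the same hypothetical `aws_security_group.application` and an illustrative port list:

```hcl
# Open several application ports to the scanner (hypothetical port list).
# for_each requires string keys; Terraform converts them to numbers
# for from_port/to_port.
resource "aws_security_group_rule" "allow_scanner_ports" {
  for_each = toset(["80", "443", "8080"])

  type                     = "ingress"
  from_port                = each.value
  to_port                  = each.value
  protocol                 = "tcp"
  source_security_group_id = module.internal_scanner.cluster_security_group_id
  security_group_id        = aws_security_group.application.id
}
```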

Redis Configuration

By default, the module deploys an in-cluster Redis instance with persistent storage.

Using Managed Redis (e.g., ElastiCache)

If you prefer a managed Redis service, disable the in-cluster Redis and provide your connection URL:

```hcl
module "internal_scanner" {
  # ... other configuration ...
  deploy_redis = false
  redis_url    = "rediss://user:pass@my-redis.example.com:6379" # Use rediss:// for TLS
}
```

Note: When deploy_redis is set to false, you must override redis_url.

For example, using AWS managed Valkey:

```hcl
module "internal_scanner" {
  # ... other configuration ...
  deploy_redis = false
  redis_url    = "rediss://${aws_elasticache_serverless_cache.valkey.endpoint[0].address}:6379"
}

resource "aws_elasticache_serverless_cache" "valkey" {
  name                 = "detectify-internal-scanning"
  engine               = "valkey"
  major_engine_version = "8"
  security_group_ids   = [aws_security_group.valkey.id]
  subnet_ids           = ["subnet-xxxxx", "subnet-yyyyy"]
}

resource "aws_security_group" "valkey" {
  name        = "detectify-valkey"
  description = "Access to Valkey used by Detectify scanner"
  vpc_id      = "vpc-xxxxx"
}

resource "aws_vpc_security_group_ingress_rule" "from_eks" {
  description                  = "Allow EKS access"
  referenced_security_group_id = module.internal_scanner.cluster_primary_security_group_id
  from_port                    = 6379
  to_port                      = 6379
  ip_protocol                  = "tcp"
  security_group_id            = aws_security_group.valkey.id
}
```

Monitoring

The scanner supports monitoring via Amazon CloudWatch.

CloudWatch Observability

CloudWatch integration provides logs and metrics through the Amazon CloudWatch Observability addon:

```hcl
module "internal_scanner" {
  # ... other configuration ...
  enable_cloudwatch_observability = true # default is true
}
```

When enabled, container logs are automatically sent to CloudWatch Logs and you can view them in the AWS Console or via CLI:

```shell
aws logs tail /aws/containerinsights/<cluster-name>/application --follow
```

Cluster Access Management

By default, the identity (IAM user or role) that creates the EKS cluster automatically receives admin permissions. You can configure additional roles to have cluster admin access.

Configuration Options

| Variable | Default | Description |
| --- | --- | --- |
| `enable_cluster_creator_admin_permissions` | `true` | Grants the identity running Terraform admin access to the cluster |
| `cluster_admin_role_arns` | `[]` | Additional IAM role ARNs to grant cluster admin access |

Local Development

When deploying from your laptop or workstation, the default settings work well:

```hcl
module "internal_scanner" {
  # ... other configuration ...

  # Default: your IAM identity automatically gets admin access
  # enable_cluster_creator_admin_permissions = true (default)
}
```

CI/CD Pipeline Deployment

When deploying via a CI/CD pipeline, you might want specific team roles to have access as well for troubleshooting purposes:

```hcl
module "internal_scanner" {
  # ... other configuration ...

  # If the pipeline agent deployed the cluster and you want to keep
  # making changes via the pipeline, set this to true
  enable_cluster_creator_admin_permissions = true

  # Grant access to specific team roles if needed
  cluster_admin_role_arns = [
    "arn:aws:iam::123456789012:role/DevOpsTeam",
    "arn:aws:iam::123456789012:role/PlatformEngineers",
  ]
}
```

Avoiding Duplicate Role Errors

Important: Do not add the same IAM identity to cluster_admin_role_arns that is already the cluster creator when enable_cluster_creator_admin_permissions = true.

For example, if you deploy from your laptop using the role arn:aws:iam::123456789012:role/MyRole:

```hcl
# This will FAIL - duplicate admin entry
module "internal_scanner" {
  enable_cluster_creator_admin_permissions = true # MyRole gets admin via this

  cluster_admin_role_arns = [
    "arn:aws:iam::123456789012:role/MyRole", # ERROR: MyRole already has admin
  ]
}
```

Solution: Either set enable_cluster_creator_admin_permissions = false and explicitly list all admin roles, or don’t include your current role in cluster_admin_role_arns.
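A sketch of the explicit-list variant of that fix, with the same placeholder role ARNs as above:

```hcl
module "internal_scanner" {
  # ... other configuration ...

  # Manage all admin access explicitly instead of via the cluster creator
  enable_cluster_creator_admin_permissions = false
  cluster_admin_role_arns = [
    "arn:aws:iam::123456789012:role/MyRole", # the deploying role, now listed explicitly
    "arn:aws:iam::123456789012:role/DevOpsTeam",
  ]
}
```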
