Troubleshooting & Operations

Monitor your scanner deployment, perform maintenance tasks, and resolve common issues.

Monitoring

Health Checks

Check running state of pods:

kubectl get pods -n scanner

If the Scanner API is enabled and exposed, you can monitor its health with:

# API health
curl https://scanner.internal.example.com/health

Viewing Logs

# Scan Scheduler logs
kubectl logs -n scanner -l app=scan-scheduler -f

# Scan Manager logs
kubectl logs -n scanner -l app=scan-manager -f

# Logs for all non-ephemeral scanner services
kubectl logs -n scanner -l app.kubernetes.io/instance=scanner -f

Resource Usage

Monitor resource consumption:

kubectl top pods -n scanner

Cloud-Specific Monitoring

For cloud-specific monitoring options, see the deployment guide for your cloud provider.

Maintenance

Updating Scanner Version

Scanner updates are managed through your Terraform configuration. When a new version is available, update your module version and apply:

terraform init -upgrade
terraform apply

The Helm chart performs a rolling update with zero downtime.

To do so, bump the module version in your configuration, or optionally pin a specific scanner image version:

module "internal_scanner" {
  source  = "detectify/internal-scanning/aws"
  version = "1.1.0" # Update module version

  # Optionally pin scanner image version (defaults to "stable")
  # internal_scanning_version = "2.0.0"
}

Restarting Components

# Restart all scanner components
kubectl rollout restart deployment -n scanner

# Restart specific component
kubectl rollout restart deployment/scan-scheduler -n scanner

Common Issues

Scanner Not Connecting to Detectify

Symptoms: Scanner shows as disconnected in the Detectify UI.

Steps to diagnose:

  1. Verify outbound internet access:

    # Attach a debug container to a scan scheduler pod and curl from it
    kubectl debug -it scan-scheduler-YOUR_POD_ID --image=curlimages/curl --target=scheduler -n scanner -- curl -I https://connector.detectify.com/status
  2. Check that the API token is configured correctly:

    kubectl get secret -n scanner scanner-config -o yaml
  3. Check scan-scheduler logs for connection errors:

    kubectl logs -n scanner -l app=scan-scheduler --tail=100
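A common gotcha with step 2: values under a Secret's .data are base64-encoded, so the raw YAML will not show the token as you entered it. A minimal sketch for decoding a field before comparing it (the key name api-token in the commented command is an assumption; check the keys your secret actually contains):

```shell
# Decode a base64-encoded Secret value as stored under .data
decode_secret_field() {
  printf '%s' "$1" | base64 -d
}

# Against the live secret (key name "api-token" is an assumption):
# kubectl get secret -n scanner scanner-config \
#   -o jsonpath='{.data.api-token}' | base64 -d
```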

Scans Failing

Symptoms: Scans start but fail to complete or report errors.

Steps to diagnose:

  1. Check scan manager logs for errors:

    kubectl logs -n scanner -l app=scan-manager --tail=100
  2. Verify network connectivity to target application:

    kubectl debug -it scan-manager-YOUR_POD_ID --image=curlimages/curl --target=manager -n scanner -- curl https://target-app.internal
  3. Check if scan-worker pods are being created:

    kubectl get pods -n scanner -w

Pods Not Starting

Symptoms: Pods stuck in Pending or CrashLoopBackOff state.

Steps to diagnose:

  1. Check pod status and events:

    kubectl describe pod -n scanner <pod-name>
  2. View pod logs:

    kubectl logs -n scanner <pod-name>
  3. Check node resources:

    kubectl top nodes

Common causes:

  • Image pull errors (check registry credentials)
  • Configuration errors (check secrets and configmaps)
  • Insufficient cluster resources (nodes need to scale up)
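These causes usually surface in the pod's events. A rough helper, assuming you have saved the `kubectl describe pod` output from step 1 to a file, that greps for the usual failure signatures:

```shell
# Grep saved `kubectl describe pod` output for common failure signatures
check_pod_events() {
  grep -E 'ErrImagePull|ImagePullBackOff|CreateContainerConfigError|FailedScheduling|OOMKilled' "$1" \
    || echo "no known failure signature found"
}

# Usage:
# kubectl describe pod -n scanner <pod-name> > /tmp/pod.txt
# check_pod_events /tmp/pod.txt
```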

High Resource Usage / OOMKilled

Symptoms: Pods being killed due to memory limits, slow performance.

Steps to diagnose:

  1. Monitor resource consumption:

    kubectl top pods -n scanner
  2. Check for OOMKilled events:

    kubectl get events -n scanner --field-selector reason=OOMKilled

Solution: Increase memory limits in your configuration or reduce concurrent scans.
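Before raising limits, it can help to see how close usage sits to the configured limit. A trivial sketch with illustrative numbers (take usage from `kubectl top pods` and the limit from the pod spec, both in Mi):

```shell
# Percentage of the memory limit currently used ($1 = usage in Mi, $2 = limit in Mi)
mem_pct() {
  echo $(( $1 * 100 / $2 ))
}

# e.g. a worker using 900Mi of a 1024Mi limit:
# mem_pct 900 1024   # sustained usage near the limit risks OOMKills under load
```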

Image Pull Errors

Symptoms: Pods stuck with ImagePullBackOff or ErrImagePull status.

Steps to diagnose:

  1. Check pod events for details:

    kubectl describe pod -n scanner <pod-name>
  2. Verify container registry credentials are configured:

    kubectl get secret -n scanner regcred -o yaml

Solution: Verify your Docker credentials from the Detectify UI are correctly configured. Contact Detectify support if you’re unable to pull images.

Getting Help

If you’re unable to resolve an issue:

  1. Collect diagnostic information:

    kubectl get pods -n scanner -o wide
    kubectl describe pods -n scanner
    kubectl logs -n scanner -l app.kubernetes.io/part-of=scanner --tail=200
    kubectl get events -n scanner --sort-by='.lastTimestamp'
  2. Contact Detectify support with the diagnostic output and a description of the issue.
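The commands from step 1 can be bundled into a small helper that writes everything to one timestamped directory, ready to attach to a support ticket; a sketch using the same namespace and labels as above:

```shell
# Collect scanner diagnostics into a timestamped directory and print its name
collect_diag() {
  out="scanner-diag-$(date +%Y%m%d-%H%M%S)"
  mkdir -p "$out"
  kubectl get pods -n scanner -o wide > "$out/pods.txt"
  kubectl describe pods -n scanner > "$out/describe.txt"
  kubectl logs -n scanner -l app.kubernetes.io/part-of=scanner --tail=200 > "$out/logs.txt"
  kubectl get events -n scanner --sort-by='.lastTimestamp' > "$out/events.txt"
  echo "$out"
}
```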