Use Cases
API Security Testing addresses several common security testing scenarios. Below are the primary use cases and how Detectify helps in each.
REST API Security Testing
Most modern applications expose REST APIs that handle sensitive operations such as authentication, data retrieval, and payment processing. These endpoints are often targeted by attackers because they provide direct access to backend systems.
API Security Testing helps you:
- Test every documented endpoint by scanning all paths defined in your OpenAPI spec
- Identify injection vulnerabilities such as SQL injection, NoSQL injection, command injection, and server-side template injection
- Detect authorization flaws including broken object-level authorization (BOLA) where users can access resources belonging to other users
- Find server-side request forgery (SSRF) where an attacker can make your server issue requests to internal services
- Verify input validation by sending malformed and unexpected data to each parameter
This is particularly valuable for APIs that are publicly accessible or exposed to third-party consumers, where the attack surface is larger and more exposed.
Regression Detection
As APIs evolve with new features and refactoring, previously fixed vulnerabilities can reappear. Regression detection catches these issues before they reach production.
By scheduling recurring scans, you can:
- Detect reintroduced vulnerabilities when code changes inadvertently undo security fixes
- Validate security controls after deployments to confirm that authentication, authorization, and input validation remain intact
- Expand test coverage over time through payload rotation, which tests different attack variations across scan runs
- Integrate with CI/CD pipelines to automatically trigger scans after deployments
Regression detection works best when scans are scheduled to run regularly, such as after each deployment or on a daily or weekly cadence.
Prompt Injection Testing for AI and LLM Endpoints
APIs that integrate large language models (LLMs) or AI services face a distinct class of vulnerabilities: prompt injection. In a prompt injection attack, a malicious user crafts input that manipulates the LLM’s behavior, potentially causing it to bypass safety controls, leak system prompts, or perform unauthorized actions.
Detectify tests AI-powered endpoints by:
- Sending prompt injection payloads designed to manipulate LLM behavior through API parameters and request bodies
- Testing combinatorial variations across a massive payload space (922 quintillion permutations) to cover diverse injection techniques
- Evaluating responses for signs of successful injection, such as unexpected output patterns, system prompt leakage, or safety control bypasses
This is critical for any API that passes user input to an LLM, whether directly through a chat endpoint or indirectly through features like search, summarization, or content generation.
When to Use API Security Testing
| Scenario | Recommended approach |
|---|---|
| Pre-release API validation | Run a scan before deploying new API versions |
| Continuous security monitoring | Schedule recurring scans on a daily or weekly basis |
| Post-deployment verification | Trigger scans after each deployment via CI/CD |
| AI/LLM endpoint security | Include LLM-facing endpoints in your OpenAPI spec and scan for prompt injection |
| Compliance requirements | Use scheduled scans and exportable reports to demonstrate ongoing security testing |