Skip to Content

OWASP LLM Top 10

What it is — The OWASP Top 10 for Large Language Model Applications identifies the most critical security risks specific to applications that integrate large language models (LLMs). Published by OWASP , the current version (2025) addresses risks from prompt injection and improper output handling to supply chain vulnerabilities in AI/ML deployments.

Appsec relevance — LLM applications are web applications with additional attack vectors. Traditional web security risks (injection, information disclosure, authentication bypass) still apply, and new risks emerge from the model integration layer.

How Detectify Supports the OWASP LLM Top 10:2025

CategoryWhat it coversDetectify CoverageCoverage
LLM01:2025  Prompt InjectionCrafted inputs manipulate the LLM into executing unintended actions or bypassing guardrailsTemplate injection tests CWE-1336  provide foundational coverage for server-side prompt injection patternsPartial
LLM02:2025  Sensitive Information DisclosureLLM inadvertently reveals confidential data such as PII, credentials, or proprietary information in responsesInformation exposure detection CWE-200  covers data leakage in web responsesPartial
LLM03:2025  Supply ChainVulnerabilities in third-party components, pre-trained models, or training data compromise application securityTechnology and component detection via Surface Monitoring, CVE-specific test modules generated by Alfred AIPartial
LLM05:2025  Improper Output HandlingLLM output is passed to downstream components without validation, enabling XSS, SSRF, or code executionXSS, SQL injection, and other injection tests detect improper handling of LLM output rendered in web contextsFull (web output contexts)
LLM07:2025  System Prompt LeakageSystem prompts or instructions are exposed to users, potentially revealing sensitive business logic, API keys, or internal configurationsInformation exposure detection CWE-200  identifies sensitive data leakage in web responsesPartial

AI/ML Tool Exposure Detection

Detectify includes dedicated tests for exposed and unauthenticated AI/ML tools, including Gradio, ComfyUI, llama.cpp, Open WebUI, Stable Diffusion WebUI, Jan.ai, Xinference, and others. These tests detect publicly accessible AI infrastructure that should not be exposed to the internet — a practical security risk that goes beyond the LLM Top 10 categories.

What Detectify Covers

Detectify addresses the web application security layer of LLM deployments. It detects injection vulnerabilities that could enable prompt injection, identifies improper output handling that leads to XSS or further injection, and discovers exposed AI/ML infrastructure. For categories where traditional DAST testing applies (LLM01, LLM02, LLM03, LLM05), Detectify provides partial to full coverage.

Categories related to model internals (data and model poisoning, excessive agency, system prompt leakage, misinformation, unbounded consumption) are outside the scope of DAST testing.

Complementary Tools You May Need

  • LLM-specific security testing tools — For prompt injection testing beyond template injection patterns
  • Model monitoring and observability — For detecting anomalous model behavior
  • Software composition analysis (SCA) — For comprehensive supply chain tracking of ML dependencies
  • Access control and IAM — For restricting model access and preventing model theft
  • Red teaming — For evaluating LLM-specific attack scenarios

References

Last updated on