OWASP ZAP: Strong for Beginners, Rarely a First Choice in Professional Pentesting

OWASP ZAP is one of the most well-known tools in web pentesting. Sooner or later, most people come across it. In training courses, labs, or early hands-on work, it is often one of the first tools used.

In professional pentesting, however, it tends to play a smaller role.

What is OWASP ZAP?

OWASP ZAP is an open source proxy for analyzing and manipulating HTTP and HTTPS traffic. Functionally, it combines several typical components of a web pentesting toolkit. It acts as an intercepting proxy, includes both passive and active scanning capabilities, can crawl applications via a spider, and provides fuzzing features. It also exposes an API for automation.

This makes ZAP capable of covering many common web pentesting tasks, at least on a basic level.
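ZAP's automation API is exposed as a JSON API over HTTP, with endpoints of the form /JSON/&lt;component&gt;/&lt;action-or-view&gt;/&lt;name&gt;/. The following sketch only builds those request URLs; the host, port, and API key are placeholders for a locally running ZAP instance, and each URL would then be fetched with any HTTP client:

```python
from urllib.parse import urlencode

# Placeholders: adjust for your own ZAP instance.
ZAP = "http://127.0.0.1:8080"
API_KEY = "changeme"  # hypothetical API key

def zap_url(component: str, kind: str, name: str, **params: str) -> str:
    """Build a ZAP JSON API URL, e.g. /JSON/spider/action/scan/."""
    query = urlencode({"apikey": API_KEY, **params})
    return f"{ZAP}/JSON/{component}/{kind}/{name}/?{query}"

# A typical automated sequence: spider the target, active-scan it,
# then read back the alerts ZAP has collected.
target = "https://example.com"
spider = zap_url("spider", "action", "scan", url=target)
ascan = zap_url("ascan", "action", "scan", url=target)
alerts = zap_url("core", "view", "alerts", baseurl=target)
```

Fetching each URL in sequence (and polling the corresponding status views between steps) is the usual pattern for unattended scans.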

Why ZAP is often the starting point

ZAP has several characteristics that make it especially appealing for beginners. The most obvious one is availability. As an open source tool, it can be used without licensing costs, which significantly lowers the barrier to entry.

Another factor is how quickly it produces results. A scan can be launched within minutes and immediately highlights common vulnerabilities. This helps build an initial understanding of web security issues.

Its usability also plays a role. Many features are accessible without extensive configuration, which is particularly useful in training environments or early experimentation.

Why ZAP is less common in professional environments

In professional pentesting, the focus shifts significantly. The emphasis is less on automated results and more on tailored, manual analysis.

Scanners still have their place, but they typically serve only as a starting point. The real work happens manually. In this area, ZAP often provides less value compared to specialized tools that are better optimized for workflow, reproducibility, and efficient manual testing.

Another aspect is the quality of findings. Automated results always require validation. In many projects, the effort of validating and triaging these findings outweighs the benefit.

Performance and usability can also become limiting factors. These issues are rarely noticeable in training scenarios but become more apparent in larger or more complex real-world applications.

ZAPScanner from binsec.tools

The ZAPScanner from binsec.tools follows a slightly different approach. It uses OWASP ZAP in a preconfigured way for standardized scans.

The focus is on pragmatic usage. Predefined configurations allow for quick and reproducible results. This is particularly useful for initial security assessments, simple automated checks, or training environments.

It is not intended to replace manual pentesting, but rather to provide a structured entry point into automated testing.
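binsec.tools does not publish its internal configuration, but the general idea of a preconfigured, reproducible ZAP run can be illustrated with ZAP's own Automation Framework, which describes a scan as a YAML plan. The target URL and report path below are placeholders:

```yaml
# Illustrative ZAP Automation Framework plan (values are placeholders)
env:
  contexts:
    - name: "Default Context"
      urls:
        - "https://example.com"
jobs:
  - type: spider            # crawl the application
  - type: passiveScan-wait  # let passive scanning finish
  - type: activeScan        # run active checks
  - type: report
    parameters:
      template: "traditional-html"
      reportDir: "/tmp/reports"
```

Such a plan can be run headlessly, which is what makes scans repeatable across assessments.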

Positioning in pentesting

ZAP is not a tool for deep analysis of complex applications. Its strength lies in making fundamental concepts tangible and delivering initial technical findings.

In practice, it is often used as a supporting tool. Typical use cases include initial crawling, quick checks, or training scenarios. For in-depth analysis, most pentesters rely on other tools and manual techniques.

Conclusion

OWASP ZAP is a solid tool for getting started in web pentesting. It helps users understand core concepts and produces quick results with minimal setup.

In professional environments, it is less commonly used as a primary tool. Manual analysis, experience, and specialized tooling take priority.

Used with the right expectations, ZAP is a useful addition to the toolkit. Expect more, and its limitations become apparent quickly.

OWASP Top 10 and CWE Top 25 – Two Perspectives on Software Weaknesses

In application security, two references appear particularly often: the OWASP Top 10 and the CWE Top 25 Most Dangerous Software Weaknesses. Both lists are frequently mentioned in security guidelines, training materials, and penetration testing reports and aim to highlight common security problems in software.

At first glance, both lists appear to describe the same thing: common weaknesses in software. In reality, they follow different approaches. While the OWASP Top 10 describes security risks in web applications, the CWE Top 25 lists concrete technical weaknesses in software in general.

The OWASP Top 10

The OWASP Top 10 is published by OWASP (the Open Worldwide Application Security Project, formerly the Open Web Application Security Project) and describes the most significant security risks for web applications.

Well-known categories include:

  • Broken Access Control
  • Cryptographic Failures
  • Injection
  • Security Misconfiguration

The categories are intentionally formulated at a relatively high level. They describe risk areas in web applications that arise from common weaknesses, potential attack vectors, and their resulting impact.

The OWASP Top 10 clearly focuses on web applications and web-based architectures. Many of its categories reflect typical problems found in modern web applications.

For this reason, the list is often used as a reference for web application security. Many organizations rely on it for secure development guidelines or security awareness training.

However, it is important to note that the OWASP Top 10 is not a testing methodology. It describes risks rather than specific testing procedures or technical checks.

The CWE Top 25

The Common Weakness Enumeration (CWE) is maintained by MITRE and represents a comprehensive classification system for software weaknesses.

From this collection, the CWE Top 25 Most Dangerous Software Weaknesses list is regularly derived.

Unlike the OWASP Top 10, the CWE Top 25 describes concrete technical weakness classes in code, for example:

  • Out-of-bounds Write (CWE-787)
  • Use After Free (CWE-416)
  • Improper Input Validation (CWE-20)

Many of these weaknesses originate directly in the code and often affect memory-unsafe programming languages or low-level system software.

In contrast to the OWASP Top 10, the CWE classification is not limited to web applications. It describes weaknesses in software in general and can therefore be applied to web applications, desktop software, system software, or embedded systems.

Risk vs. Technical Cause

The most important difference between the two lists lies in their level of abstraction.

The OWASP Top 10 describes security risks in web applications.
The CWE Top 25 describes concrete weaknesses in software code.

An OWASP category can therefore include several underlying weaknesses.

A simple example illustrates this relationship.
The risk category Injection can arise from different technical causes, such as insufficient input validation or insecure database queries. These causes can in turn be mapped to specific CWE identifiers.

OWASP therefore answers the question:

Which security risks occur most frequently in web applications?

The CWE classification, in contrast, addresses:

Which specific coding errors lead to these problems?

Comparison of OWASP Top 10 and CWE Top 25

There is no direct one-to-one mapping between the two lists. However, typical relationships can be illustrated. The following table shows a simplified comparison of commonly related issues.

OWASP Category → Typical Related CWE Weaknesses

  • Broken Access Control → CWE-284 Improper Access Control, CWE-862 Missing Authorization
  • Cryptographic Failures → CWE-327 Broken or Risky Crypto Algorithm, CWE-326 Inadequate Encryption Strength
  • Injection → CWE-89 SQL Injection, CWE-77 Command Injection, CWE-20 Improper Input Validation
  • Insecure Design → CWE-840 Business Logic Errors, CWE-602 Client-Side Enforcement of Server-Side Security
  • Security Misconfiguration → CWE-16 Configuration Errors
  • Vulnerable and Outdated Components → often indirectly via known CVEs with underlying CWEs
  • Identification and Authentication Failures → CWE-287 Improper Authentication, CWE-522 Insufficiently Protected Credentials
  • Software and Data Integrity Failures → CWE-494 Download of Code Without Integrity Check
  • Security Logging and Monitoring Failures → CWE-778 Insufficient Logging
  • Server-Side Request Forgery (SSRF) → CWE-918 Server-Side Request Forgery
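A simplified mapping like this can also be kept as a lookup table, e.g. for tagging scanner findings in a reporting pipeline. The dictionary below is an illustrative, non-authoritative subset:

```python
# Illustrative subset of a simplified OWASP-to-CWE mapping.
OWASP_TO_CWE = {
    "Broken Access Control": ["CWE-284", "CWE-862"],
    "Cryptographic Failures": ["CWE-327", "CWE-326"],
    "Injection": ["CWE-89", "CWE-77", "CWE-20"],
    "Security Misconfiguration": ["CWE-16"],
    "Server-Side Request Forgery (SSRF)": ["CWE-918"],
}

def related_cwes(owasp_category: str) -> list[str]:
    """Return typical CWE IDs for an OWASP category, or [] if unmapped."""
    return OWASP_TO_CWE.get(owasp_category, [])
```

Note that the reverse lookup is lossy: memory-safety weaknesses such as CWE-787 simply have no OWASP counterpart.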

At the same time, the CWE Top 25 includes several weaknesses that cannot be directly mapped to OWASP categories. These include classical memory-related issues such as:

  • CWE-787 Out-of-bounds Write
  • CWE-416 Use After Free
  • CWE-125 Out-of-bounds Read
  • CWE-190 Integer Overflow

Such weaknesses typically occur in system-level software rather than in typical web applications.

Relevance for Penetration Testing

For penetration testing, the OWASP Top 10 is a frequently used reference. The list highlights major security risks that are typically considered when testing web applications.

Some penetration testing reports structure their findings according to OWASP categories. More commonly, however, the categories are used to contextualize vulnerabilities or communicate risks.

The CWE classification often plays a complementary role in penetration testing. It helps to technically classify discovered vulnerabilities more precisely. Many vulnerability reports therefore include the corresponding CWE identifier.

A typical mapping may look like this:

OWASP risk
→ concrete vulnerability
→ corresponding CWE ID

Example:

Broken Access Control
→ missing authorization check
→ CWE-284 Improper Access Control

Such a mapping can facilitate both risk communication with stakeholders and the technical classification of a vulnerability. In practice, however, it is often performed only upon customer request. The actual added value of an additional classification usually remains limited.
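In a report pipeline, such a mapping chain is naturally expressed as a small finding record; the field names below are assumptions, not a fixed report schema:

```python
from dataclasses import dataclass

# Illustrative finding record carrying both classifications.
@dataclass
class Finding:
    owasp_category: str  # risk-level label for stakeholders
    description: str     # the concrete vulnerability
    cwe_id: str          # technical classification

    def chain(self) -> str:
        """Render the OWASP risk -> vulnerability -> CWE mapping."""
        return f"{self.owasp_category} -> {self.description} -> {self.cwe_id}"

finding = Finding(
    "Broken Access Control",
    "missing authorization check",
    "CWE-284 Improper Access Control",
)
```

Keeping both labels on one record lets the same finding serve risk communication and technical classification without duplicating data.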

PTES – Structure for Penetration Tests, but Not a Complete Standard

The Penetration Testing Execution Standard (PTES) describes a structured methodology for conducting penetration tests. The goal of the standard is to define the typical project phases of a penetration test and thereby create a transparent process from planning to reporting the results.

The standard emerged around 2010 as a community-driven initiative by security professionals. To this day, PTES is frequently referenced when discussing the general workflow of a penetration test. In practice, however, it is usually used more as a conceptual framework than as a complete technical methodology.

The PTES Phases

PTES divides a penetration test into seven typical project phases.

Pre-Engagement Interactions

In this phase, the organizational and legal framework of the engagement is defined. This includes in particular the scope, test objectives, communication channels, and the so-called rules of engagement.

Intelligence Gathering

This phase focuses on collecting information about the target environment. Examples include publicly available data, DNS information, subdomains, or indicators of the technologies in use.

Threat Modeling

Based on the collected information, potential attack scenarios are evaluated. The goal is to identify realistic attack paths and particularly critical systems.

Vulnerability Analysis

In this phase, potential vulnerabilities are identified. This is usually done through a combination of automated scans and manual analysis.

Exploitation

Identified vulnerabilities are then tested in practice. The objective is to determine whether and to what extent exploitation is possible.

Post-Exploitation

After successful access has been achieved, the potential impact is analyzed. This may include privilege escalation, access to sensitive data, or lateral movement within the network.

Reporting

At the end of the project, all findings are documented. The report describes the identified vulnerabilities, their potential impact, and possible remediation measures.

Taken together, these phases provide a meaningful structure for the workflow of a penetration testing project.

Critical Assessment

Despite its recognition, PTES is rarely used today as the sole methodological basis for penetration tests.

One important reason is the limited technical depth of the standard. While the defined phases describe the overall workflow of a penetration test, they provide only a few concrete testing procedures. Additional technical guides and internal methodologies are therefore typically required for practical execution.

Another limitation is that the standard has seen only limited development since its original publication. Some technical examples in PTES refer to platforms and tools that are now outdated. For instance, older Windows versions such as Windows XP or Windows 7 are mentioned as reference systems in the technical sections.

Modern IT architectures are also barely addressed in the original PTES. Topics such as cloud infrastructures, containerized platforms, or complex identity systems play only a minor role in the standard.

Furthermore, PTES is not a formally maintained industry standard with clearly defined governance. There is no regular update process by a standardization body. As a result, the standard does not evolve over time and does not adequately reflect current technological developments.

Requirements for a TISAX Penetration Test

TISAX (Trusted Information Security Assessment Exchange) is the industry-specific security standard of the automotive sector – developed by the VDA and operated by the ENX Association. It ensures that companies demonstrably meet a high level of information security and can reliably share this status with their partners.

As part of TISAX, the regular execution of penetration tests is a key component of the technical security verification process. They serve to practically test the effectiveness of implemented security measures and to identify vulnerabilities at an early stage. This ensures that companies do not merely comply with policies but are genuinely protected against real-world attacks.

In the VDA’s Information Security Assessment Catalogue 6.0.2 (ISA6 DE 6.0.2), penetration testing is explicitly mentioned in two sections. The first reference appears in section 5.2.6 – To what extent are IT systems and services technically reviewed (system and service audits)?

In general, system and service audits must be conducted, planned in advance, and coordinated with all relevant parties. Their results must be transparently documented, reported to management, and used as a basis for appropriate improvement measures. In addition, such audits should be performed regularly and risk-based by qualified professionals using appropriate tools – both from the internal network and from the Internet. After completion, an audit report should be prepared promptly. While penetration testing can help to fulfill these requirements, an explicit demand for such testing only appears in the additional requirements for systems with a high protection need:

Additional requirements for critical IT systems or services have been identified and fulfilled (e.g., service-specific tests and tools and/or penetration tests, risk-based testing intervals).

The next reference appears in section 5.3.1 – To what extent is information security considered in new or further developed IT systems?

In principle, information security requirements must be identified, considered, and verified through security acceptance testing during the planning, development, procurement, and modification of IT systems. Specifications should include security requirements, best practices, and fail-safety measures, and they must be reviewed before systems go live. The use of production data for testing should be avoided or protected by equivalent safeguards. For systems with a very high protection need, the catalogue again explicitly refers to penetration testing:

The security of software specifically developed for a particular purpose, or extensively customized software, is verified (e.g., by penetration testing) during commissioning, in the event of significant changes, and at regular intervals.

The requirements for penetration testing under section 5.2.6 are relatively unspecific. If no proprietary software development is carried out – either internally or by contractors – testing usually focuses on conducting an external penetration test to assess publicly accessible services (e.g., websites) and analyzing the internal network. In traditional IT environments, the primary focus is typically on the Active Directory, the firewall, and internally reachable systems.

The execution of extended penetration tests – such as those including social engineering, phishing, physical security assessments, or DDoS testing – is generally not necessary for typical, limited-scope environments.

NIS2 and Penetration Testing – Mandatory or Optional?

The new NIS2 Directive of the EU has been in force since early 2023. It no longer applies only to traditional critical infrastructure operators (KRITIS), but now covers a wide range of important entities, including:

  • medium-sized and large companies in energy, transportation, finance, and healthcare,
  • hosting providers, data centers, DNS service providers,
  • (almost) any tech company providing essential services.

The NIS2 Directive does not explicitly mandate penetration testing, but it requires measures that are hardly feasible or verifiable without it. Article 21 of the directive defines a central obligation:

Member States shall ensure that essential and important entities take appropriate and proportionate technical, operational and organisational measures to manage the risks posed to the security of network and information systems which those entities use for their operations or for the provision of their services, and to prevent or minimise the impact of incidents on recipients of their services and on other services.

While penetration testing is not explicitly mentioned, the directive clearly implies it – particularly through the requirement for regular testing of the effectiveness of measures, and the demand to follow the state of the art, which, in practice, includes conducting penetration tests.