A Web Application Hacker’s Toolkit - Part 2

Updated: Jun 21


Web Application Fuzzers (Burp Suite)






Although it is possible to perform a successful attack using only manual techniques, to become a truly accomplished web application hacker, you need to automate your attacks to enhance their speed and effectiveness. The fuzzing functions within an integrated testing suite typically provide the following capabilities:


  • Manually configured probing for common vulnerabilities. This function enables you to control precisely which attack strings are used and how they are incorporated into requests. You can then review the results to identify any unusual or anomalous responses that merit further investigation.


  • A set of built-in attack payloads and versatile functions to generate arbitrary payloads in user-defined ways — for example, based on malformed encoding, character substitution, brute force, and data retrieved in a previous attack.


  • Functions to save attack results and response data for use in reports or incorporation into further attacks.

  • Customizable functions for viewing and analyzing responses — for example, based on the appearance of specific expressions or the attack payload itself.

  • Functions for extracting useful data from the application’s responses — for example, by parsing the username and password fields in a My Details page. This can be useful when you are exploiting various vulnerabilities, including flaws in session handling and access controls.
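The response-analysis idea behind these fuzzing functions can be sketched in a few lines. This is an illustrative assumption of how anomaly triage might work, not Burp Intruder's actual heuristics: given a baseline (status code, body length) from a benign request, flag payload responses whose status changes or whose length deviates sharply.

```python
# Sketch of the response-analysis half of a fuzzer. The status-change and
# length-deviation thresholds are illustrative assumptions, not the actual
# logic used by Burp Intruder or any other tool.

def is_anomalous(baseline, response, length_tolerance=0.2):
    """Return True if a payload response differs notably from the baseline."""
    base_status, base_len = baseline
    status, length = response
    if status != base_status:          # e.g. 200 -> 500 suggests an error path
        return True
    if base_len == 0:
        return length != 0
    # Flag bodies whose size deviates by more than the tolerance fraction
    return abs(length - base_len) / base_len > length_tolerance

def triage(baseline, payload_responses):
    """Map each payload to whether its response merits manual review."""
    return {p: is_anomalous(baseline, r) for p, r in payload_responses.items()}
```

In practice you would record the baseline from an unmodified request, then run `triage` over the (status, length) pairs collected for each payload and inspect the flagged ones manually.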


Burp Suite

Web Vulnerability Scanners (Burp Suite)

Some integrated testing suites include functions to scan for common web application vulnerabilities. The scanning that is performed falls into two categories.


  • Passive scanning involves monitoring the requests and responses passing through the local proxy to identify vulnerabilities such as cleartext password submission, cookie misconfiguration, and cross-domain Referer leakage. You can perform this type of scanning noninvasively with any application that you visit with your browser. This feature is often useful when scoping out a penetration testing engagement. It gives you a feel for the application’s security posture in relation to these kinds of vulnerabilities.




  • Active scanning involves sending new requests to the target application to probe for common vulnerabilities such as cross-site scripting, HTTP header injection, and file path traversal. Like any other kind of active testing, this type of scanning is potentially dangerous and should be carried out only with the consent of the application owner.
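Two of the passive checks described above can be sketched as simple inspections of an intercepted request/response pair. This is an illustrative approximation, not Burp's actual issue-detection logic, and the finding strings are invented for the example:

```python
# Sketch of two passive checks applied to an intercepted request/response
# pair: cleartext password submission and cookie misconfiguration. The
# finding names are illustrative, not Burp's actual issue names.

def passive_checks(url, request_body, set_cookie_headers):
    findings = []
    # Cleartext password submission: credential fields sent over plain HTTP
    if url.startswith("http://") and "password=" in request_body:
        findings.append("cleartext password submission")
    # Cookie misconfiguration: missing Secure or HttpOnly attributes
    for header in set_cookie_headers:
        attrs = {a.strip().lower() for a in header.split(";")[1:]}
        cookie_name = header.split("=")[0]
        if "secure" not in attrs:
            findings.append(f"cookie without Secure flag: {cookie_name}")
        if "httponly" not in attrs:
            findings.append(f"cookie without HttpOnly flag: {cookie_name}")
    return findings
```

Because these checks only read traffic that is already passing through the proxy, they are noninvasive in exactly the sense described above: no new requests are sent to the target.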

The vulnerability scanners included within testing suites are more user-driven than the standalone scanners discussed later in this chapter. Instead of just providing a start URL and leaving the scanner to crawl and test the application, the user can guide the scanner around the application, control precisely which requests are scanned, and receive real-time feedback about individual requests. Here are some typical ways to use the scanning function within an integrated testing suite:

  • After manually mapping an application’s contents, you can select interesting areas of functionality within the site map and send these to be scanned. This lets you focus your available time on scanning the most critical areas and receive the results from these areas more quickly.

  • When manually testing individual requests, you can supplement your efforts by scanning each specific request as you are testing it. This gives you nearly instant feedback about common vulnerabilities for that request, which can guide and optimize your manual testing.

  • You can also use the automated spidering tool to crawl the entire application and then scan all the discovered content. This emulates the basic behavior of a standalone web scanner.

  • In Burp Suite, you can enable live scanning as you browse. This lets you guide the scanner’s coverage using your browser and receive quick feedback about each request you make, without needing to manually identify the requests you want to scan.


The results of live scanning as you browse with Burp Scanner

Although the scanners in integrated testing suites are designed to be used in a different way than standalone scanners, in some cases the core scanning engine is highly capable and compares favorably with those of the leading standalone scanners.


Manual Request Tools (Burp Suite)


The manual request component of the integrated testing suites provides the basic facility to issue a single request and view its response. Although simple, this function is often beneficial when you are probing a tentative vulnerability and need to reissue the same request manually several times, tweaking elements of the request to determine the effect on the application’s behavior. Of course, you could perform this task using a standalone tool such as Netcat, but having the function built into the suite means that you can quickly retrieve an interesting request from another component (proxy, spider, or fuzzer) for manual investigation. It also means that the manual request tool benefits from the various shared functions implemented within the suite, such as HTML rendering, support for upstream proxies and authentication, and automatic updating of the Content-Length header.


The following features are often implemented within manual request tools:

  • Integration with other suite components, and the ability to refer any request to and from other components for further investigation.

  • A history of all requests and responses, keeping a full record of all manual requests for further review, and enabling a previously modified request to be retrieved for further analysis.

  • A multitabbed interface, letting you work on several different items at once.

  • The ability to automatically follow redirections.
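To make the Netcat comparison concrete, here is a minimal sketch of issuing a single raw HTTP request over a plain socket, with the Content-Length header computed automatically from the edited body, which is exactly the convenience the suites provide. The host, path, and parameters are hypothetical placeholders:

```python
# Minimal sketch of a manual request tool: build a raw HTTP/1.1 request
# with an automatically updated Content-Length header, then send it over
# a plain socket, much as you would by hand with Netcat.
import socket

def build_raw_request(host, method, path, headers, body=""):
    """Assemble a raw request; Content-Length always matches the body."""
    payload = body.encode()
    lines = [f"{method} {path} HTTP/1.1", f"Host: {host}",
             f"Content-Length: {len(payload)}", "Connection: close"]
    lines += [f"{k}: {v}" for k, v in headers.items()]
    return ("\r\n".join(lines) + "\r\n\r\n").encode() + payload

def send_raw_request(host, port, method, path, headers, body=""):
    """Send the assembled request and return the raw response bytes."""
    raw = build_raw_request(host, method, path, headers, body)
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(raw)
        chunks = []
        while (chunk := sock.recv(4096)):
            chunks.append(chunk)
    return b"".join(chunks)
```

Tweaking the `body` string and reissuing the request with a correct Content-Length is the repetitive task that the suites' request repeater automates for you.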

Session Token Analyzer (Burp Suite)

Some testing suites include functions to analyze the randomness properties of session cookies and other tokens used within the application where there is a need for unpredictability. Burp Sequencer is a powerful tool that performs standard statistical tests for randomness on an arbitrarily sized sample of tokens and provides fine-grained results in an accessible format.

Burp Sequencer


Using Burp Sequencer to test the randomness properties of an application’s session token
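One statistical idea behind this kind of analysis can be sketched simply: estimate the Shannon entropy of the characters at each position across a sample of tokens. Low-entropy positions are fixed or biased and therefore predictable. Burp Sequencer applies a much larger battery of FIPS-style tests; this illustrates only the general principle:

```python
# Rough sketch of per-position entropy analysis over a token sample.
# This is a single illustrative test, not Burp Sequencer's actual suite.
import math
from collections import Counter

def position_entropy(tokens):
    """Per-position entropy in bits; low values reveal fixed or biased positions."""
    length = min(len(t) for t in tokens)
    entropies = []
    for i in range(length):
        counts = Counter(t[i] for t in tokens)
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return entropies
```

For example, a sample in which the first three characters never change yields zero bits of entropy at those positions, immediately showing that most of the token contributes nothing to its unpredictability.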

Shared Functions and Utilities

In addition to their core tool components, integrated test suites provide a wealth of other value-added features that address specific needs that arise when you are attacking a web application and that enable the other tools to work in unusual situations. The following features are implemented by the different suites:

  • Analysis of HTTP message structure, including parsing of headers and request parameters, and unpacking of common serialization formats.

  • Rendering of HTML content in responses as it would appear within the browser.

  • The ability to display and edit messages in text and hexadecimal form

  • Search functions within all requests and responses

  • Automatic updating of the HTTP Content-Length header following any manual editing of message contents.

  • Built-in encoders and decoders for various schemes, enabling quick analysis of application data in cookies and other parameters.

  • A function to compare two responses and highlight the differences

  • Features for automated content discovery and attack surface analysis

  • The ability to save to disk the current testing session and retrieve saved sessions

  • Support for upstream web proxies and SOCKS proxies, enabling you to chain together different tools or access an application via the proxy server used by your organization or ISP

  • Features to handle application sessions, login, and request tokens, allowing you to continue using manual and automated techniques when faced with unusual or highly defensive session-handling mechanisms.

  • In-tool support for HTTP authentication methods, enabling you to use all the suite’s features in environments where these are used, such as corporate LANs

  • Support for client SSL certificates, enabling you to attack applications that employ these

  • Handling of the more obscure features of HTTP, such as gzip content-encoding, chunked transfer encoding, and status 100 interim responses

  • Extensibility, enabling the built-in functionality to be modified and extended in arbitrary ways by third-party code

  • The ability to schedule common tasks, such as spidering and scanning, allowing you to start the working day asleep

  • Persistent configuration of tool options, enabling a particular setup to be resumed on the next execution of the suite

  • Platform independence, enabling the tools to run on all popular operating systems
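The built-in encoders and decoders mentioned above typically chain several schemes together. As a sketch, consider a cookie value that is URL-encoded Base64 wrapping a serialized string; the cookie format here is invented for illustration:

```python
# Sketch of the encode/decode chains that the suites' decoder tools
# automate: URL encoding wrapped around Base64. The cookie value format
# used in the usage example is hypothetical.
import base64
import urllib.parse

def decode_cookie(value):
    """Undo URL encoding, then Base64, to recover the underlying data."""
    return base64.b64decode(urllib.parse.unquote(value)).decode()

def encode_cookie(data):
    """Re-encode modified data the same way for replaying in a request."""
    return urllib.parse.quote(base64.b64encode(data.encode()).decode())
```

Being able to decode a parameter, tamper with the plaintext (say, changing `role=user` to `role=admin`), and re-encode it for replay is one of the most common uses of these utilities.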

Requests and responses can be analyzed into their HTTP structure and parameters

Testing Workflow (Burp Suite)

Figure 1.1 shows a typical workflow for using an integrated testing suite. The key steps involved in each element of the testing are described in detail throughout this book and are collated in the methodology. The workflow described here shows how the different components of the testing suite fit into that methodology.

In this workflow, you drive the overall testing process using your browser. As you browse the application via the intercepting proxy, the suite compiles two key repositories of information:

  • The proxy history records every request and response passing through the proxy.

  • The site map records all discovered items in a directory tree view of the target.

(Note that in both cases, the default display filters may hide from view some items that are not normally of interest when testing.)

As you browse the application, the testing suite typically performs passive spidering of discovered content. This updates the site map with all requests passing through the proxy. It also adds items that have been identified based on the contents of responses passing through the proxy (by parsing links, forms, scripts, and so on). After you have manually mapped the application’s visible content using your browser, you may additionally use the Spider and Content Discovery functions to actively probe the application for additional content. The outputs from these tools are also added to the site map.
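The link-parsing step of passive spidering can be sketched with the standard-library HTML parser. Real spiders also handle scripts, redirects, and many other sources of discovered content; this shows only the core idea of harvesting anchors and form actions into a site map:

```python
# Sketch of passive spidering's parsing step: extract links and form
# actions from response HTML and resolve them against the page URL.
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.found = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.found.add(urljoin(self.base_url, attrs["href"]))
        elif tag == "form" and "action" in attrs:
            self.found.add(urljoin(self.base_url, attrs["action"]))

def extract_links(base_url, html):
    """Return the set of absolute URLs discovered in a response body."""
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.found
```

Each URL discovered this way would be added to the site map as unrequested content, ready for later manual browsing or active spidering.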

Figure 1.1


A typical workflow for using an integrated testing suite

When the application’s content and functionality have been mapped, you can assess its attack surface. This is the set of functionality and requests that warrant closer inspection in an attempt to find and exploit vulnerabilities.

When testing for vulnerabilities, you typically select items from the proxy interception window, proxy history, or site map, and send these to other tools within the suite to perform specific tasks. As we have described, you can use the fuzzing tool to probe for input-based vulnerabilities and deliver other attacks such as harvesting sensitive information. You can use the vulnerability scanner to automatically check for common vulnerabilities, using both passive and active techniques. You can use the token analyzer tool to test the randomness properties of session cookies and other tokens. And you can use the request repeater to modify and reissue an individual request repeatedly to probe for vulnerabilities or exploit bugs you have already discovered. Often you will pass individual items back and forth between these different tools. For example, you may select an interesting item from a fuzzing attack, or an issue reported by the vulnerability scanner, and pass this to the request repeater to verify the vulnerability or refine an exploit.

For many types of vulnerabilities, you will typically need to go back to your browser to investigate an issue further, confirm whether an apparent vulnerability is genuine, or test a working exploit. For example, having found a cross-site scripting flaw using the vulnerability scanner or request repeater, you may paste the resulting URL back into your browser to confirm that your proof-of-concept exploit is executed. When testing possible access control bugs, you may view the results of particular requests in your current browser session to confirm the results within a specific user context. If you discover a SQL injection flaw that can be used to extract large amounts of information, you might revert to your browser as the most useful location to display the results.


You should not regard the workflow described here as in any way rigid or restrictive. In many situations, you may test for bugs by entering unexpected input directly into your browser or into the proxy interception window. Some bugs may be immediately evident in requests and responses without the need to involve any more attack-focused tools. You may bring in other tools for particular purposes. You also may combine the components of the testing suite in innovative ways that are not described here and that may not even have been envisioned by the tool’s author. Integrated testing suites are hugely powerful creations, with numerous interrelated features. The more creative you can be when using them, the more likely you are to discover the most obscure vulnerabilities.
