ffuf

not just a tool, but a pentester’s way of life.

About Fuzzing

After finishing a series of network scans on the target, we stumble upon a website running on one of the servers. A web server is an ideal target, since these services are commonly exposed to the public.

Imagine you find a website, maybe a production site or one still in development. After browsing around, you find no links to other pages and no clues pointing to anything more. So it looks like our only option is to fuzz the website. The term fuzzing refers to a testing technique that sends various types of input to an interface to study how it reacts.

Well, when it comes to fuzzing, I honestly don't know which section to put it in, because it sits between enumeration and exploitation. Treated as part of enumeration, fuzzing covers the probes we use to discover hidden directories or parameters in a web app. Treated as part of exploitation, it is the process of sending unexpected or malformed inputs to trigger vulnerabilities: the stuff that can cause crashes, reveal memory, or lead to code execution.

Okay, in this post I'll focus on fuzzing as part of enumeration: using these techniques to find hidden directories or parameters used by a website so they can be leveraged later.

Web servers usually don’t expose a directory of all available links and paths (unless badly configured), so we have to probe different URLs and see which ones return pages. For example, if we visit https://www.zed99.net/doesnotexist we’ll get an HTTP 404 “Not Found.” However, if we visit a page that exists — like /login — we’ll see the login page and get an HTTP 200 OK.
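We can see this status-code difference without a browser. As a quick sketch, curl can print just the HTTP code for each of the example URLs above:

curl -s -o /dev/null -w "%{http_code}\n" https://www.zed99.net/doesnotexist
curl -s -o /dev/null -w "%{http_code}\n" https://www.zed99.net/login

The first request should print 404 and the second 200, which is exactly the signal the fuzzing tools below automate.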

This is the basic idea behind web fuzzing for pages and directories. Still, we cannot do this manually, as it will take forever. This is why we have tools that do this automatically, efficiently, and very quickly. Such tools send hundreds of requests every second, study the response HTTP code, and determine whether the page exists or not. Thus, we can quickly determine what pages exist and then manually examine them to see their content.

To determine which pages exist, we need a wordlist containing commonly used words for web directories and pages, very similar to a password dictionary attack. We do not have to reinvent the wheel by creating these wordlists manually, as great efforts have already gone into crawling the web and determining the most commonly used words for each type of fuzzing. Some of the most commonly used wordlists can be found in the SecLists repository on GitHub, which categorizes wordlists by fuzzing type and even includes commonly used passwords.
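If you don't have SecLists yet, cloning the repository is the usual way to get it; the /opt/useful/seclists path used in the commands below is simply where my copy lives:

git clone https://github.com/danielmiessler/SecLists.git /opt/useful/seclists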

There are many tools and methods for directory and parameter fuzzing/brute-forcing. In this post we will mainly focus on ffuf, as it is one of the most common and reliable tools available for web fuzzing.

 

Directory Fuzzing

The goal is to find hidden paths (admin, backups, dev, old, staging…) and files (config.php, .env, backup.zip) that developers accidentally expose. This sets the groundwork for further enumeration (sensitive files, upload endpoints, endpoints with parameters to fuzz more deeply). I’ll use ffuf for this purpose.

As is customary after installing a tool, we run -h to see its command syntax and the flags used to tune it. As we can see from the -h output, the two main options are -w for the wordlist and -u for the URL. We can assign a wordlist to a keyword and then use that keyword wherever we want to fuzz. For example, we can pick our wordlist and assign the keyword FUZZ to it by appending :FUZZ to it:

ffuf -w [path/to/file]:FUZZ -u http://SERVER_IP:PORT/FUZZ

ffuf -w /opt/useful/seclists/Discovery/Web-Content/directory-list-2.3-small.txt:FUZZ <SNIP>
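One small habit of mine, noted here as a tip rather than a requirement: the directory-list-2.3 wordlists begin with a block of comment lines, and ffuf's -ic flag ignores wordlist comments so they are not sent as requests:

ffuf -w /opt/useful/seclists/Discovery/Web-Content/directory-list-2.3-small.txt:FUZZ -u http://SERVER_IP:PORT/FUZZ -ic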

Page/Extension Fuzzing

As I said, beyond finding hidden directories, fuzzing can help us discover files that developers or system administrators have accidentally exposed through misconfiguration.

For example, we found that we had access to /blog, but the directory returned an empty page, and we cannot manually locate any links or pages. So, we will once again utilize web fuzzing to see if the directory contains any hidden pages. However, before we start, we must find out what types of pages the website uses, like .html, .aspx, .php, or something else.
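A quick first look here is to pull the HTTP response headers, since the Server header (when present) names the web server software; curl's -I flag fetches the headers only:

curl -I http://SERVER_IP:PORT/blog/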

One common way to identify the extension is to determine the server type from those HTTP response headers and guess from there. For example, if the server is Apache, the extension may be .php; if it is IIS, it could be .asp or .aspx; and so on. This method is not very practical, though. So, we will again utilize ffuf to fuzz the extension, similar to how we fuzzed for directories. Instead of placing the FUZZ keyword where the directory name would be, we place it where the extension goes (indexFUZZ) and use a wordlist of common extensions. We can utilize the following wordlist in SecLists for extensions:

ffuf -w /opt/useful/seclists/Discovery/Web-Content/web-extensions.txt:FUZZ -u http://SERVER_IP:PORT/blog/indexFUZZ

Note: In the command above, the wordlist we chose already contains a dot (.), so we will not have to add the dot after “index” in our fuzzing.

We will now use the same keyword concept we've been using with ffuf: take .php as the extension, place our FUZZ keyword where the filename should be, and reuse the wordlist we used for fuzzing directories:

ffuf -w /opt/useful/seclists/Discovery/Web-Content/directory-list-2.3-small.txt:FUZZ -u http://SERVER_IP:PORT/blog/FUZZ.php
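If we had not already identified the extension, ffuf can also take multiple wordlists, each bound to its own keyword, and try the combinations; a sketch that fuzzes the page name and the extension at once (WORD and EXT are just keyword names I picked):

ffuf -w /opt/useful/seclists/Discovery/Web-Content/directory-list-2.3-small.txt:WORD -w /opt/useful/seclists/Discovery/Web-Content/web-extensions.txt:EXT -u http://SERVER_IP:PORT/blog/WORDEXT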

 

Recursive Fuzzing

So far, we have been fuzzing for directories, then going under these directories, and then fuzzing for files. However, if we had dozens of directories, each with their own subdirectories and files, this would take a very long time to complete. To be able to automate this, we will utilize what is known as recursive fuzzing.

When we scan recursively, ffuf automatically starts another scan under any newly identified directory, and keeps going until it has fuzzed the main website and all of its sub-directories.

Some websites have a big tree of sub-directories, like /login/user/content/uploads/…etc, which expands the scanning tree and may take a very long time to cover. This is why it is always advised to specify a depth for our recursive scan, so that it does not descend into directories deeper than that. Once we have fuzzed the first level of directories, we can pick the most interesting ones and run another, better-directed scan.

In ffuf, we can enable recursive scanning with the -recursion flag, and we can specify the depth with the -recursion-depth flag. If we specify -recursion-depth 1, it will only fuzz the main directories and their direct sub-directories; if any sub-sub-directories are identified (like /login/user), it will not fuzz them for pages. When using recursion in ffuf, we can specify our extension with -e .php. Finally, we will also add the -v flag to output full URLs; otherwise, it may be difficult to tell which .php file lies under which directory.

ffuf -w /opt/useful/seclists/Discovery/Web-Content/directory-list-2.3-small.txt:FUZZ -u http://SERVER_IP:PORT/FUZZ -recursion -recursion-depth 1 -e .php -v
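Recursive runs produce a lot of output, so I usually save the results to a file as well; ffuf's -o and -of flags handle this (the filename here is arbitrary):

ffuf -w /opt/useful/seclists/Discovery/Web-Content/directory-list-2.3-small.txt:FUZZ -u http://SERVER_IP:PORT/FUZZ -recursion -recursion-depth 1 -e .php -v -o recursive_scan.json -of json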

DNS/vHost Fuzzing

A sub-domain is any website served under another domain. For example, https://sub1.zed99.local is a sub-domain of zed99.local.

In this case, we are simply checking different hostnames to see if they exist, by checking whether each has a public DNS record that resolves to a working server IP.

ffuf -w /opt/useful/seclists/Discovery/DNS/subdomains-top1million-5000.txt:FUZZ -u https://FUZZ.zed99.local

As we saw in the above command, we were able to fuzz public sub-domains using public DNS records. However, when it comes to fuzzing sub-domains that do not have a public DNS record, or sub-domains of websites that are not public, we cannot use the same method. In this section, we will learn how to do that with vHost fuzzing.

The key difference between vHosts and sub-domains is that a vHost is basically a 'sub-domain' served on the same server and sharing the same IP, such that a single IP could be serving two or more different websites. vHosts may or may not have public DNS records.

Many websites have sub-domains that are not public and are not published in public DNS records; if we visit one in a browser, we fail to connect, because public DNS does not know its IP. So sub-domain fuzzing as shown above can only identify public sub-domains and will miss any that are not.

In short, a subdomain is an extension of a domain name in DNS. A vhost (virtual host) is the server configuration that defines how the server should handle requests for that domain.

ffuf -w /opt/useful/seclists/Discovery/DNS/subdomains-top1million-5000.txt:FUZZ -u http://zed99.local:PORT/ -H 'Host: FUZZ.zed99.local'
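When this scan flags a vHost, say admin.zed99.local, we can add a local hosts-file entry so the browser can reach it; SERVER_IP stands in for the target's address, as elsewhere in this post:

echo 'SERVER_IP admin.zed99.local' | sudo tee -a /etc/hosts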

Filtering Results in ffuf

So far, we have not applied any filtering to our ffuf scans, and the results are filtered by default on their HTTP code, which filters out 404 Not Found and keeps the rest. However, we can get many responses with code 200. In that case, we have to filter the results on another factor. ffuf provides options to match or filter a specific HTTP code, response size, or number of words.
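In practice, matching is done with the -m* flags and filtering with the -f* flags; for instance, the first run below keeps only 200 and 301 responses, while the second drops 403s:

ffuf -w /opt/useful/seclists/Discovery/Web-Content/directory-list-2.3-small.txt:FUZZ -u http://SERVER_IP:PORT/FUZZ -mc 200,301
ffuf -w /opt/useful/seclists/Discovery/Web-Content/directory-list-2.3-small.txt:FUZZ -u http://SERVER_IP:PORT/FUZZ -fc 403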

There's a filter I use often: -fs, which filters on HTTP response size and accepts a comma-separated list of sizes and ranges. For example, when I do directory fuzzing I get responses like 404, 403, 200, and 301, which create a lot of noise; so I observe a few initial requests (which usually reveal the response size of the "incorrect" results) and then use -fs to filter those out.

ffuf -w /opt/useful/seclists/Discovery/DNS/subdomains-top1million-5000.txt:FUZZ -u http://zed99.local:PORT/ -H 'Host: FUZZ.zed99.local' -fs 900
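ffuf can also do that baseline observation for us: the -ac flag auto-calibrates the filtering options from a few initial junk requests, which often replaces the manual -fs step (I still sanity-check its choices on odd targets):

ffuf -w /opt/useful/seclists/Discovery/DNS/subdomains-top1million-5000.txt:FUZZ -u http://zed99.local:PORT/ -H 'Host: FUZZ.zed99.local' -ac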

Parameter Fuzzing

Fuzzing parameters may expose unpublished parameters that are nevertheless publicly accessible. Such parameters tend to be less tested and less secured, so it is important to probe them for the web vulnerabilities discussed in other posts.

Similarly to how we have been fuzzing various parts of a website, we will use ffuf to enumerate parameters. Let us first start with fuzzing for GET requests, which are usually passed right after the URL, with a ? symbol, like:

ffuf -w /opt/useful/seclists/Discovery/Web-Content/burp-parameter-names.txt:FUZZ -u http://admin.zed99.local:PORT/admin/admin.php?FUZZ=key -fs xxx

The main difference between POST requests and GET requests is that POST requests are not passed with the URL and cannot simply be appended after a ? symbol. POST requests are passed in the data field within the HTTP request. To fuzz the data field with ffuf, we can use the -d flag. We also have to add -X POST to send POST requests:

ffuf -w /opt/useful/seclists/Discovery/Web-Content/burp-parameter-names.txt:FUZZ -u http://admin.zed99.local:PORT/admin/admin.php -X POST -d 'FUZZ=key' -H 'Content-Type: application/x-www-form-urlencoded' -fs xxx
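Once a parameter name pops out of the noise, I like to replay it by hand to inspect the full response; curl mirrors the ffuf request one-to-one (here 'param' is a placeholder for whichever name the fuzzing run surfaced):

curl -X POST -d 'param=key' -H 'Content-Type: application/x-www-form-urlencoded' http://admin.zed99.local:PORT/admin/admin.php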

Value Fuzzing

Value fuzzing is the technique of sending many different values/inputs (systematic or random) into a parameter, field, header, or API endpoint in order to find logic errors, crashes, overflows, XSS, SQLi, command injection, parsing bugs, and the like; to discover hidden states or unusual responses (e.g., HTTP 500, 502); and to detect differences in response time (timing leaks), among other things.

ffuf -w ids.txt:FUZZ -u http://admin.zed99.local:PORT/admin/admin.php -X POST -d 'id=FUZZ' -H 'Content-Type: application/x-www-form-urlencoded' -fs xxx
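For a numeric parameter like id there is rarely a ready-made wordlist, so we generate one ourselves; a one-liner like this builds the ids.txt used above (the 1-1000 range is just a guess at plausible IDs):

seq 1 1000 > ids.txt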