• Pick a bug, then Learn & Earn
  • IDOR (see the sketch below)
  • CSRF
  • Information Disclosure
  • More you will discover yourself :)
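
For IDOR specifically, a minimal sketch of the core test, assuming a hypothetical /api/users/<id> endpoint and a session cookie from your own test account (all names here are placeholders):

# Request sequential object IDs with YOUR session; a 200 on an ID
# belonging to another user suggests an IDOR
for id in 1001 1002 1003; do
  code=$(curl -s -o /dev/null -w "%{http_code}" \
      -H "Cookie: session=[your_session]" \
      "https://[domain]/api/users/$id")
  echo "$id -> $code"
done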

Discovering the IP space & Reverse DNS

# Get the ASN from websites like https://bgp.he.net/
# Find out the IP ranges that reside inside that ASN
whois -h whois.radb.net -- '-i origin [AS_Number]' | \
    grep -Eo "([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]+" | sort -u > ip_ranges.txt

# Resolve reverse DNS for the IP ranges
cat ip_ranges.txt | mapcidr -silent | dnsx -ptr -resp-only -o ptr_records.txt

# Discover related hosts via favicon hashes
cat urls.txt | favfreak -o output.txt

# Then search the resulting hash on Shodan (the hash may be negative):
http.favicon.hash:<hash>
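
The favicon hash can also be computed by hand; a minimal sketch, assuming python3 with the mmh3 and requests libraries installed (Shodan hashes the base64-encoded favicon bytes with MurmurHash3):

python3 -c 'import mmh3, requests, codecs; print(mmh3.hash(codecs.encode(requests.get("https://[domain]/favicon.ico").content, "base64")))'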

# Perform DNS brute force
puredns bruteforce best-dns-wordlist.txt example.com -r resolvers.txt -w dns_bf.txt
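
puredns expects a file of validated resolvers; one common way to build resolvers.txt, assuming the dnsvalidator tool is installed, is:

dnsvalidator -tL https://public-dns.info/nameservers.txt -threads 100 -o resolvers.txt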

Recursive Enumeration

#!/bin/bash

# Check if a subdomain list file is provided as an argument
if [ $# -ne 1 ]; then
  echo "Usage: $0 <subdomain_list_file>"
  exit 1
fi

subdomain_list="$1"

# Check if the subdomain list file exists
if [ ! -f "$subdomain_list" ]; then
  echo "Subdomain list file $subdomain_list not found."
  exit 1
fi

# Ensure the required commands are available
commands=("anew" "subfinder" "assetfinder" "amass" "findomain")
for cmd in "${commands[@]}"; do
  if ! command -v "$cmd" > /dev/null; then
    echo "Error: $cmd is not installed or not in the PATH."
    exit 1
  fi
done

# Loop through the ten most common parent domains (seen more than once) and enumerate each
for sub in $(cat "$subdomain_list" | rev | cut -d '.' -f 1-4 | rev | sort | uniq -c | sort -nr | awk '$1 > 1 {print $2}' | head -n 10); do
    subfinder -d "$sub" -silent -max-time 2 | anew -q passive_recursive.txt
    assetfinder --subs-only "$sub" | anew -q passive_recursive.txt
    amass enum -timeout 2 -passive -d "$sub" | anew -q passive_recursive.txt
    findomain --quiet -t "$sub" | anew -q passive_recursive.txt
done

Usage:

./your_script.sh subdomain_list_file.txt

Subdomain Enumeration

# Using findomain
findomain --target [domain] -u [target.txt]

# Using amass
amass enum -brute -passive -d [domain] | anew [target.txt]

# Using subfinder
subfinder -d [domain] | anew [target.txt]
subfinder -dL [target.txt] | anew [target.txt]

# Using assetfinder
assetfinder --subs-only [domain] | anew [target.txt]

# Using httpx and hakrawler
cat [target.txt] | httpx | hakrawler -subs | anew [target.txt]

# Using haktrails, httpx, and hakrawler
cat [target.txt] | haktrails subdomains | httpx | hakrawler | anew [other.txt]

# Using SecurityTrails API
curl -s --request GET --url \
    "https://api.securitytrails.com/v1/domain/[domain]/subdomains?apikey=[API_KEY]" | \
    jq -r '.subdomains[]' | \
    sed 's/$/.[domain]/; s/ //g' | \
    sort -u >> [subdom.txt]

# Using httpx and gospider
cat [target.txt] | httpx | gospider -o output -c 10 -d 1

# Using httpx and hakrawler
cat [target.txt] | httpx | hakrawler | anew [other.txt]

# Using waybackurls
cat [target.txt] | waybackurls | anew [other.txt]

# Using gau and gauplus
cat [target.txt] | gau | anew [other.txt]
cat [target.txt] | gauplus | anew [other.txt]
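
Archived URLs often point at forgotten files; a quick grep over the gau/waybackurls output for interesting extensions (the pattern list is just a starting point):

cat [other.txt] | grep -E '\.(bak|old|sql|zip|tar\.gz|env|conf)$' | sort -u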

# Using SecretFinder
cat [target.txt] | while read url; do
    python3 SecretFinder.py -i "$url" -o cli >> [other.txt]
done

# Using nuclei
nuclei -l [target.txt] >> [other.txt]
nuclei -u [URL] >> [other.txt]
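
In practice you will usually want to filter nuclei findings by severity and keep a structured result file; a minimal sketch using nuclei's standard flags:

nuclei -l [target.txt] -severity medium,high,critical -o nuclei_results.txt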

Scraping (JS/Source Code)

# Using katana
katana -list [target.txt] -jc | grep "\.js" | sort -u >> [other.txt]

# Web probing subdomains
cat subdomains.txt | httpx -random-agent -retries 2 -no-color -o probed_scrap.txt

# Crawling with gospider
gospider -S probed_scrap.txt \
    --js -t 50 -d 3 --sitemap --robots -w -r > gospider.txt

# Cleaning the output
sed -i '/^.\{2048\}./d' gospider.txt
cat gospider.txt | \
    grep -Eo 'https?://[^ ]+' | \
    sed 's/]$//' | \
    unfurl -u domains | \
    grep "\.example\.com$" | \
    sort -u > scrap_subs.txt

# Resolving target subdomains
puredns resolve scrap_subs.txt -w scrap_subs_resolved.txt -r resolvers.txt

Status Code

# Using httpx
cat [target.txt] | httpx -silent -status-code

# Using httprobe
cat [target.txt] | httprobe

# Using fff
cat [target.txt] | fff -d 1 -S -o Output
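
To keep only hosts that answer with particular status codes, httpx's match-code filter helps (a sketch; -mc matches on response status):

cat [target.txt] | httpx -silent -mc 200,301,302 -o alive.txt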

Other Stuff About URLs and HTML

# Using html-tool to extract tags, attributes, and comments
cat [urls.txt] | html-tool tags title a strong
find . -type f -name "*.html" | html-tool attribs src href
cat [urls.txt] | html-tool comments

# Using curl to retrieve verbose output
curl -vs [URL]

# Using gf for SQL injection and XSS
cat [urls.txt] | gf sqli
cat [urls.txt] | gf xss
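
gf only pattern-matches candidate URLs, so the hits still need validation; a hedged follow-up, assuming sqlmap and dalfox are installed:

cat [urls.txt] | gf sqli > sqli_candidates.txt
sqlmap -m sqli_candidates.txt --batch

cat [urls.txt] | gf xss | dalfox pipe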

Port Scan

# Using naabu to scan ports
naabu -list [scope.txt]
naabu -host [domain]

# Using rustscan
rustscan -a [domain]
rustscan -a [ip1,ip2,...]
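
rustscan can hand the open ports it finds straight to nmap; everything after -- is passed through as nmap arguments:

rustscan -a [domain] -- -sV -sC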

Content Discovery

# Using gobuster to discover directories and vhosts
# Note: -s (status whitelist) requires clearing the default blacklist with -b ""
gobuster dir -u [URL] -w [wordlist.txt] -t 5 -b "" -s 200,504 -v
gobuster dir -u [URL] -w [wordlist.txt]
gobuster vhost -u [URL] -w [wordlist.txt]

# Using dirsearch
dirsearch [-u|--url] target [-e|--extensions] extensions [options]
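
A concrete dirsearch invocation, assuming common extensions and excluding noisy status codes:

dirsearch -u https://[domain] -e php,html,js,zip -x 404,403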

Fuzzing

# Using ffuf to fuzz parameters
ffuf -w [params.txt] -u https://domain/script.php?FUZZ=test_value -fs [invalid_size]

# Using gobuster to fuzz
gobuster fuzz -u [URL]?FUZZ=test -w [wordlist.txt]
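
ffuf also works for vhost discovery by fuzzing the Host header; filter out the size of the default response:

ffuf -w [subdomains.txt] -u https://[domain] -H "Host: FUZZ.[domain]" -fs [default_size]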

Combining and Organizing Files

# Combine and filter duplicates
cat file1 file2 > file3
cat file1 file2 | anew file3
cat file1 >> file2

cat [target.txt] | sort | uniq >> [other.txt]

Remove (http|https) and Filter Duplicates

cat [urls.txt] | unfurl --unique domains >> [other.txt]
cat [urls.txt] | wordlistgen -fq >> [other.txt]

Ask These Questions to Yourself

What is the web framework/application of the website?

AEM, Apache, Cherrypy, Coldfusion, Django, Express, Flask, Laravel, Nginx, Rails, Spring, Symfony, Tomcat, Yii, Zend

How does the app pass data?

Query string: resource?parameter=value&param2=value
Method/route: /route/resource/sub-resource/…

How/where does the app talk about users?

How: UID, email, username, UUID
Where: cookies, API calls

Does the site have a unique threat model?

Examples: API keys, application data for doxing.

Does the site have multi-tenancy or user levels?

  • App is designed for multiple customers
  • App has multiple user levels:
      Admin (CMS/framework)
      Tenant/Account Admin
      Tenant/Account User
      Tenant/Account Viewer
      Unauthenticated functionality
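
Tenant isolation is worth probing directly; a minimal sketch, assuming two test accounts you control and a hypothetical tenant-scoped endpoint (all names are placeholders):

# Fetch a tenant-A resource with tenant B's session; a matching
# response suggests broken tenant isolation
curl -s -H "Cookie: session=[tenant_A_session]" "https://[domain]/api/tenants/A/invoices/42" > a.json
curl -s -H "Cookie: session=[tenant_B_session]" "https://[domain]/api/tenants/A/invoices/42" > b.json
diff a.json b.json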

Has there been past security research & known vulns? How does the app handle them?

  • Search Google: web framework + vuln type
  • ex: laravel xss | ex: apache sqli