crawley

The unix-way web crawler: crawls web pages and prints any link it can find.

features

  • fast html SAX-parser (powered by x/net/html)
  • js/css lexical parsers (powered by tdewolff/parse) - extract api endpoints from js code and url() properties
  • small (below 1500 SLOC), idiomatic, 100% test-covered codebase
  • grabs most useful resource urls (pics, videos, audios, forms, etc.)
  • found urls are streamed to stdout and guaranteed to be unique (with fragments omitted)
  • configurable scan depth (limited to the starting host and path; 0 by default)
  • can be polite - follows crawl rules and sitemaps from robots.txt
  • brute mode - scans html comments for urls (this can lead to bogus results)
  • makes use of the HTTP_PROXY / HTTPS_PROXY environment variables and handles proxy auth (use HTTP_PROXY="socks5://127.0.0.1:1080/" crawley for socks5; see the sketch after this list)
  • directory-only scan mode (aka fast-scan)
  • user-defined cookies, in curl-compatible format (i.e. -cookie "ONE=1; TWO=2" -cookie "ITS=ME" -cookie @cookie-file)
  • user-defined headers, same as curl: -header "ONE: 1" -header "TWO: 2" -header @headers-file
  • tag filter - specify which tags to crawl (single: -tag a -tag form, comma-separated: -tag a,form, or mixed)
  • url ignore - skip urls containing the given substrings (i.e.: -ignore logout)
  • subdomain support - allows depth crawling of subdomains as well (e.g. crawley http://some-test.site will also be able to crawl http://www.some-test.site)
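
The proxy support composes with the other flags; a minimal sketch, assuming a local socks5 proxy that requires credentials (all values here are illustrative):

# crawl through an authenticated socks5 proxy:
HTTP_PROXY="socks5://127.0.0.1:1080/" crawley -proxy-auth "user:password" http://some-test.site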

examples

# print all links from first page:
crawley http://some-test.site

# print all js files and api endpoints:
crawley -depth -1 -tag script -js http://some-test.site

# print all endpoints from js:
crawley -js http://some-test.site/app.js

# download all png images from site:
crawley -depth -1 -tag img http://some-test.site | grep '\.png$' | wget -i -

# fast directory traversal:
crawley -headless -delay 0 -depth -1 -dirs only http://some-test.site
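
The examples above compose with the cookie, header, robots and subdomain options; a hypothetical run (the host is the placeholder used throughout this README, and the cookie/header values are made up):

# authenticated deep crawl that respects robots.txt and follows subdomains:
crawley -depth -1 -subdomains -robots respect -cookie "SESSION=abc123" -header "X-API-Key: secret" http://some-test.site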

installation

  • binaries / deb / rpm for Linux, FreeBSD, macOS and Windows.
  • on Arch Linux, use your favourite AUR helper to install it, e.g. paru -S crawley-bin.
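
If you have the Go toolchain installed, building from source should also work; a sketch, assuming the command package lives under cmd/crawley in the repository:

go install github.com/s0rg/crawley/cmd/crawley@latest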

usage

crawley [flags] url

possible flags with default values:

-all
    scan all known sources (js/css/...)
-brute
    scan html comments
-cookie value
    extra cookies for request, can be used multiple times, accept files with '@'-prefix
-css
    scan css for urls
-delay duration
    per-request delay (0 - disable) (default 150ms)
-depth int
    scan depth (set -1 for unlimited)
-dirs string
    policy for non-resource urls: show / hide / only (default "show")
-header value
    extra headers for request, can be used multiple times, accept files with '@'-prefix
-headless
    disable pre-flight HEAD requests
-ignore value
    patterns (in urls) to be ignored in crawl process
-js
    scan js code for endpoints
-proxy-auth string
    credentials for proxy: user:password
-robots string
    policy for robots.txt: ignore / crawl / respect (default "ignore")
-silent
    suppress info and error messages in stderr
-skip-ssl
    skip ssl verification
-subdomains
    support subdomains (e.g. if www.domain.com found, recurse over it)
-tag value
    tags filter, single or comma-separated tag names
-timeout duration
    request timeout (min: 1 second, max: 10 minutes) (default 5s)
-user-agent string
    user-agent string
-version
    show version
-workers int
    number of workers (default - number of CPU cores)
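
To tie the flags together, a hypothetical invocation with illustrative values:

# polite, shallow scan with a custom identity and a fixed worker pool:
crawley -depth 2 -robots respect -delay 500ms -workers 8 -user-agent "my-crawler/1.0" http://some-test.site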

flags autocompletion

Crawley supports flag autocompletion in bash and zsh via complete:

complete -C "/full-path-to/bin/crawley" crawley
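
In zsh, the bash-style complete builtin is not available until bashcompinit is loaded, so a zsh setup would look roughly like this (a sketch, not from the original README):

autoload -U +X bashcompinit && bashcompinit
complete -C "/full-path-to/bin/crawley" crawley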

