PHP Class Arachnid\Crawler

This class crawls all unique internal links found on a given website, up to a specified maximum page depth. The library is based on an original blog post by Zeid Rashwani; Josh Lockhart adapted the post's code (with permission) for Composer and Packagist and updated the syntax to conform to the PSR-2 coding standard.
Project: codeguy/arachnid
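Typical usage follows the public API documented below: construct the crawler with a base URL and depth, traverse, then read the results. This is an illustrative sketch; the Composer package name is assumed from the project identifier above.

```php
<?php
// Illustrative usage; install the library first, e.g.
// `composer require codeguy/arachnid` (package name assumed
// from the project identifier).
require 'vendor/autoload.php';

use Arachnid\Crawler;

// Crawl up to 3 levels deep starting from the base URL.
$crawler = new Crawler('http://www.example.com', 3);
$crawler->traverse();

// Retrieve the collected links and their related data.
$links = $crawler->getLinks();
print_r($links);
```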

Protected Properties

Property Type Description
$baseUrl string The base URL from which the crawler begins crawling
$links array Array of links (and related data) found by the crawler
$maxDepth integer The max depth the crawler will crawl

Public Methods

Method Description
__construct ( string $baseUrl, integer $maxDepth = 3 ) Constructor
getLinks ( ) : array Get links (and related data) found by the crawler
traverse ( string $url = null ) Initiate the crawl

Protected Methods

Method Description
checkIfCrawlable ( string $uri ) : boolean Is a given URL crawlable?
checkIfExternal ( string $url ) : boolean Is URL external?
extractLinksInfo ( Crawler $crawler, string $url ) : array Extract link information from the given URL
extractTitleInfo ( Crawler $crawler, string $url ) Extract title information from the given URL
getPathFromUrl ( string $url ) : string Extract the relative path from a URL string
getScrapClient ( ) : Client Create and configure the Goutte client used for scraping
normalizeLink ( $uri ) : string Normalize link (remove hash, etc.)
traverseChildren ( array $childLinks, integer $depth ) Crawl child links
traverseSingle ( string $url, integer $depth ) Crawl single URL

Method Details

__construct() public method

Constructor
public __construct ( string $baseUrl, integer $maxDepth = 3 )
$baseUrl string
$maxDepth integer

checkIfCrawlable() protected method

Is a given URL crawlable?
protected checkIfCrawlable ( string $uri ) : boolean
$uri string
return boolean
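The exact rules are internal to the class, but a crawlability check of this kind typically rejects empty links and non-HTTP pseudo-schemes. The function below is an illustrative re-implementation; the patterns are assumptions, not the library's actual list.

```php
<?php
// Illustrative sketch; the real checkIfCrawlable() may use
// different patterns.
function checkIfCrawlableSketch(string $uri): bool
{
    if ($uri === '') {
        return false;
    }
    // Reject bare anchors and mailto:, tel:, javascript: pseudo-links.
    $stopPatterns = ['/^javascript:/', '/^mailto:/', '/^tel:/', '/^#/'];
    foreach ($stopPatterns as $pattern) {
        if (preg_match($pattern, $uri)) {
            return false;
        }
    }
    return true;
}

var_dump(checkIfCrawlableSketch('/about'));        // bool(true)
var_dump(checkIfCrawlableSketch('mailto:a@b.c'));  // bool(false)
```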

checkIfExternal() protected method

Is URL external?
protected checkIfExternal ( string $url ) : boolean
$url string An absolute URL (with scheme)
return boolean
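A URL is external when its host differs from the base URL's host; relative links have no host and are internal by definition. The sketch below illustrates that idea (the helper name is ours, not the library's).

```php
<?php
// Illustrative sketch of an external-URL check; the real method
// compares against the crawler's stored $baseUrl.
function checkIfExternalSketch(string $url, string $baseUrl): bool
{
    $base = parse_url($baseUrl, PHP_URL_HOST);
    $host = parse_url($url, PHP_URL_HOST);
    // Relative links have no host component and are internal.
    if ($host === null) {
        return false;
    }
    return strcasecmp($host, $base) !== 0;
}

var_dump(checkIfExternalSketch('http://other.com/x', 'http://example.com')); // bool(true)
var_dump(checkIfExternalSketch('/contact', 'http://example.com'));           // bool(false)
```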

extractLinksInfo() protected method

Extract link information from the given URL
protected extractLinksInfo ( Crawler $crawler, string $url ) : array
$crawler Symfony\Component\DomCrawler\Crawler
$url string
return array
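The real method walks a Symfony DomCrawler instance; the sketch below shows the same idea using PHP's built-in DOMDocument instead, collecting each anchor's href and text. The helper name and array shape are illustrative assumptions.

```php
<?php
// Illustrative sketch using the bundled DOM extension rather than
// the Symfony\Component\DomCrawler\Crawler the real method receives.
function extractLinksInfoSketch(string $html): array
{
    $doc = new DOMDocument();
    @$doc->loadHTML($html); // suppress warnings from imperfect HTML
    $links = [];
    foreach ($doc->getElementsByTagName('a') as $anchor) {
        $href = $anchor->getAttribute('href');
        if ($href === '') {
            continue;
        }
        // Record the anchor text alongside each discovered href.
        $links[$href] = ['text' => trim($anchor->textContent)];
    }
    return $links;
}

$html = '<a href="/about">About</a><a href="http://other.com">Other</a>';
print_r(extractLinksInfoSketch($html));
```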

extractTitleInfo() protected method

Extract title information from the given URL
protected extractTitleInfo ( Crawler $crawler, string $url )
$crawler Symfony\Component\DomCrawler\Crawler
$url string

getPathFromUrl() protected method

Extract the relative path from a URL string
protected getPathFromUrl ( string $url ) : string
$url string
return string
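One straightforward way to derive a relative path is to strip the base URL prefix, falling back to the URL's path component. This is a sketch under that assumption; the real method's edge-case behavior may differ.

```php
<?php
// Illustrative sketch; the helper name and fallback behavior are
// ours, not necessarily the library's.
function getPathFromUrlSketch(string $url, string $baseUrl): string
{
    // Strip the base URL prefix if present, leaving the relative path.
    if (strpos($url, $baseUrl) === 0) {
        $path = substr($url, strlen($baseUrl));
        return $path === '' ? '/' : $path;
    }
    // Otherwise fall back to the path component of the URL.
    return parse_url($url, PHP_URL_PATH) ?? '/';
}

echo getPathFromUrlSketch('http://example.com/about', 'http://example.com'); // /about
```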

getScrapClient() protected method

Create and configure the Goutte client used for scraping
protected getScrapClient ( ) : Client
return Goutte\Client

traverse() public method

Initiate the crawl
public traverse ( string $url = null )
$url string

traverseChildren() protected method

Crawl child links
protected traverseChildren ( array $childLinks, integer $depth )
$childLinks array
$depth integer

traverseSingle() protected method

Crawl single URL
protected traverseSingle ( string $url, integer $depth )
$url string
$depth integer
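traverse(), traverseSingle(), and traverseChildren() cooperate as a depth-limited recursive walk: each visited page's child links are crawled at depth + 1 until $maxDepth is reached. The sketch below mimics that pattern over a fake link graph standing in for fetched pages; all names and bodies are illustrative, not the library's code.

```php
<?php
// Illustrative depth-limited traversal. $graph stands in for links
// extracted from fetched pages; the real class fetches live HTML.
function traverseSingleSketch(
    string $url,
    int $depth,
    int $maxDepth,
    array $graph,
    array &$seen
): void {
    if ($depth > $maxDepth || isset($seen[$url])) {
        return; // stop past max depth; skip already-visited pages
    }
    $seen[$url] = $depth;
    // Equivalent of traverseChildren(): recurse one level deeper.
    foreach ($graph[$url] ?? [] as $child) {
        traverseSingleSketch($child, $depth + 1, $maxDepth, $graph, $seen);
    }
}

$graph = [
    '/'    => ['/a', '/b'],
    '/a'   => ['/a/1'],
    '/b'   => [],
    '/a/1' => ['/a/1/x'],
];

$seen = [];
traverseSingleSketch('/', 0, 2, $graph, $seen);
print_r(array_keys($seen)); // '/a/1/x' lies beyond depth 2 and is skipped
```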

Property Details

$baseUrl protected property

The base URL from which the crawler begins crawling
protected string $baseUrl
return string

$links protected property

Array of links (and related data) found by the crawler
protected array $links
return array

$maxDepth protected property

The max depth the crawler will crawl
protected int $maxDepth
return integer