Respecting robots.txt | crawler | Spatie


Respecting robots.txt
=====================

### On this page

1. [ Ignoring robots rules ](#content-ignoring-robots-rules)
2. [ Accepting nofollow links ](#content-accepting-nofollow-links)
3. [ Custom user agent ](#content-custom-user-agent)

By default, the crawler respects robots rules found in `robots.txt` files, robots meta tags, and `X-Robots-Tag` response headers. More information on the spec can be found at [robotstxt.org](http://www.robotstxt.org/).

Parsing robots data is done by the [spatie/robots-txt](https://github.com/spatie/robots-txt) package.
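To give a rough idea of what such a check involves, here is a minimal, hand-rolled sketch of a `Disallow` lookup. This is illustrative only and not the package's actual implementation, which also handles `Allow` rules, wildcard patterns, groups with multiple `User-agent` lines, meta tags, and headers.

```
// Illustrative sketch only: a simplified robots.txt "Disallow" check.
// The real spatie/robots-txt package covers far more of the spec.
function isAllowed(string $robotsTxt, string $userAgent, string $path): bool
{
    $applies = false;

    foreach (preg_split('/\R/', $robotsTxt) as $line) {
        // Strip comments and surrounding whitespace.
        $line = trim(preg_replace('/#.*$/', '', $line));

        if (stripos($line, 'User-agent:') === 0) {
            $agent = trim(substr($line, strlen('User-agent:')));
            $applies = $agent === '*' || strcasecmp($agent, $userAgent) === 0;
        } elseif ($applies && stripos($line, 'Disallow:') === 0) {
            $rule = trim(substr($line, strlen('Disallow:')));

            // An empty Disallow line allows everything for this group.
            if ($rule !== '' && strpos($path, $rule) === 0) {
                return false;
            }
        }
    }

    return true;
}

$robots = "User-agent: my-agent\nDisallow: /admin";

isAllowed($robots, 'my-agent', '/admin/settings');  // false: rule applies
isAllowed($robots, 'my-agent', '/blog');            // true
isAllowed($robots, 'other-bot', '/admin/settings'); // true: group does not match
```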

Ignoring robots rules
---------------------

You can disable all robots checks using the `ignoreRobots` method.

```
use Spatie\Crawler\Crawler;

Crawler::create('https://example.com')
    ->ignoreRobots()
    ->start();
```

You can re-enable robots checking after disabling it using the `respectRobots` method.

```
$crawler = Crawler::create('https://example.com')
    ->ignoreRobots();

// later...
$crawler->respectRobots();
```

Accepting nofollow links
------------------------

By default, the crawler rejects all links marked with `rel="nofollow"`. You can disable this check using the `followNofollow` method.

```
use Spatie\Crawler\Crawler;

Crawler::create('https://example.com')
    ->followNofollow()
    ->start();
```

You can re-enable nofollow rejection using the `rejectNofollowLinks` method.

```
$crawler = Crawler::create('https://example.com')
    ->followNofollow();

// later...
$crawler->rejectNofollowLinks();
```

Custom user agent
-----------------

The [user agent](/docs/crawler/v9/configuring-the-crawler/configuring-requests#user-agent) is also used when checking robots.txt rules. When you set a custom user agent, robots.txt rules specific to that agent will be respected. For example, if your robots.txt contains:

```
User-agent: my-agent
Disallow: /
```

A crawler identifying itself as `my-agent` will not crawl any pages on that site.
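Combined with the `setUserAgent` method described in [configuring requests](/docs/crawler/v9/configuring-the-crawler/configuring-requests#user-agent), a crawl that honors the rules above would look something like this:

```
use Spatie\Crawler\Crawler;

// With this user agent, the crawler matches the `my-agent` group in the
// robots.txt above and will therefore skip the entire site.
Crawler::create('https://example.com')
    ->setUserAgent('my-agent')
    ->start();
```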
