# Extractor settings

The plugin integrates with third-party scraping services to help bypass IP restrictions and blocks:

* [Scrapingdog](https://www.keywordrush.com/go/scrapingdog)
* ScrapeOwl
* [ScraperAPI](https://keywordrush.com/go/scraperapi)
* Crawlbase

{% hint style="warning" %}
These services are paid, but each typically includes about **1,000 free requests** per month.
{% endhint %}

### How to Route Requests Through a Scraping Service

#### 1. Add Your API Keys

1. In your WordPress admin, go to:\
   **Affiliate Egg → Settings → Extractor Settings**
2. Enter the API key for each provider you want to use.
3. Save changes.

You can enable **one or multiple providers** at the same time.\
Which provider is used for each URL is controlled by **routing rules** (see below).

<figure><img src="https://940736139-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-M5W1lOLsBsZnXap8X6J%2Fuploads%2FNhlmnK5si5flrx31hVsA%2Fimage.png?alt=media&#x26;token=cd39f2d2-3560-4179-9b09-fab895dcbbb5" alt="" width="563"><figcaption></figcaption></figure>

#### 2. Configure Routing Rules

Routing rules tell the plugin **which scraping service** to use for specific domains or URL patterns.

1. In **Affiliate Egg → Settings → Extractor Settings**, scroll to the **Routing rules** table.
2. Click **Add rule**.
3. Fill in the fields:
   * **Pattern** – the domain or URL pattern to match.
   * **Provider** – select the scraping service to use.
   * **Extra params (optional)** – additional query parameters for the provider API.
4. Save your changes.

Whenever a URL matches a rule, that request will be sent through the selected provider.

<figure><img src="https://940736139-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-M5W1lOLsBsZnXap8X6J%2Fuploads%2F8HkqWSwgaC7OFaSUYGlf%2Fimage.png?alt=media&#x26;token=322d8162-fb79-45ce-ad0c-7ea9748d5632" alt=""><figcaption></figcaption></figure>

**Pattern Examples**

You can **match domains** or more specific URL paths. Some common patterns:

* `example.com`\
  Matches the domain `example.com`. This is the **most common pattern** and is usually what you’ll use when creating a rule for a specific site.
* `*.example.com`\
  Matches any subdomain, e.g.:
  * `shop.example.com`
  * `de.example.com`
* `example.com/path/*`\
  Matches only URLs that start with `/path/`, e.g.:
  * `https://example.com/path/product-123`
  * `https://example.com/path/category/`
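
The plugin’s exact matching implementation isn’t documented, but shell-style wildcard matching captures the behavior described above. A minimal sketch (the `matches_pattern` helper is hypothetical, not part of the plugin):

```php
// Hypothetical sketch of wildcard pattern matching against a URL.
function matches_pattern(string $pattern, string $url): bool
{
    // Strip the scheme so patterns like "example.com/path/*" compare cleanly.
    $target = preg_replace('#^https?://#', '', $url);

    // A bare domain pattern matches that exact host (any path).
    if (strpos($pattern, '*') === false && strpos($pattern, '/') === false) {
        $host = explode('/', $target, 2)[0];
        return $host === $pattern;
    }

    // Otherwise use shell-style wildcard matching ("*" matches any run,
    // including slashes); the trailing "*" lets the pattern match any path.
    return fnmatch($pattern . '*', $target);
}
```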

**Additional Parameters**

Additional parameters are simply added to the provider’s API request as query parameters.

* Each provider has its **own parameter names and supported values**.
* Use these to enable features like geo-targeting, premium proxies, or JavaScript rendering.

Below are examples for ScraperAPI and Scrapingdog.

**ScraperAPI Parameters**

You can pass these in the **Extra params** field:

* `country_code=us`\
  Use US-based proxies (geo-targeting).
* `premium=true`\
  Use premium residential/mobile IPs.
* `ultra_premium=true`\
  Use the advanced bypass mechanism for harder sites.
* `render=true`\
  Enable JavaScript rendering for dynamic pages.

**Scrapingdog Parameters**

You can pass these in the **Extra params** field:

* `country=de`\
  Use German IPs (geo-targeting).
* `premium=true`\
  Use premium residential proxies.
* `dynamic=true`\
  Enable JavaScript rendering for dynamic pages.

**Combining Parameters**

You can combine multiple parameters using `&` just like a normal query string.

Example:

```
country_code=us&premium=true&render=true
```
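
Conceptually, the extra-params string is merged into the provider’s API request as ordinary query parameters. A simplified sketch (the endpoint, key, and target URL are placeholders; the plugin’s real request builder may differ):

```php
// Merge an "Extra params" string into a provider API request URL.
parse_str('country_code=us&premium=true&render=true', $extra);

$api_url = 'https://api.scraperapi.com/?' . http_build_query(array_merge(
    [
        'api_key' => 'YOUR_API_KEY',                    // placeholder
        'url'     => 'https://example.com/product-123', // page to scrape
    ],
    $extra
));
// $api_url now carries api_key, url, country_code, premium, and render.
```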

### Rule Priority and Order

Routing rules are evaluated **from top to bottom**:

1. The plugin checks the first rule.
2. If the URL matches the rule pattern, that rule’s provider (and parameters) are used.
3. If it doesn’t match, it moves to the next rule, and so on.

The **first matching rule wins**.

**Tips:**

* Put **more specific patterns (e.g. `example.com/path/*`) above** more general ones (e.g. `example.com`).
* If a URL does not match any rule, the request will be made **without a scraping provider**.
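
The evaluation above amounts to a first-match loop. A sketch in PHP (`route` and the rule array shape are illustrative, not the plugin’s internals):

```php
// First-match-wins evaluation of routing rules (illustrative sketch).
function route(array $rules, string $url): ?array
{
    foreach ($rules as $rule) {                 // table order, top to bottom
        if (fnmatch('*' . $rule['pattern'] . '*', $url)) {
            return $rule;                       // first matching rule wins
        }
    }
    return null;                                // no match: request goes direct
}

$rules = [
    ['pattern' => 'example.com/path/', 'provider' => 'scraperapi'],  // specific first
    ['pattern' => 'example.com',       'provider' => 'scrapingdog'], // general second
];
```

Because the more specific `example.com/path/` rule sits first, `https://example.com/path/p1` routes through ScraperAPI, while `https://example.com/about` falls through to the Scrapingdog rule.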

#### Custom Parameters for Scraping Services (Programmatically)

You can also append provider parameters in code via the `affegg_parse_url_{provider}` filters, which let you modify the provider request URL:

```php
// Add the premium parameter to Scrapingdog requests
add_filter('affegg_parse_url_scrapingdog', function ($url) {
    return add_query_arg('premium', 'true', $url);
});

// Add the country parameter to Scrapingdog requests
add_filter('affegg_parse_url_scrapingdog', function ($url) {
    return add_query_arg('country', 'au', $url);
});

// Add the country_code parameter to ScraperAPI requests
add_filter('affegg_parse_url_scraperapi', function ($url) {
    return add_query_arg('country_code', 'de', $url);
});
```
