Self-Service Crawler:
Subscriber may use the Crawler Service only to extract data from the Subscriber's domains to create searchable entries in indices, and Subscriber will obtain and maintain any consents necessary to use the Crawler with the domains. The Crawler is provided "as is" and "as available" and is not covered by the Service Level Agreement. Subscriber is responsible for any costs incurred, including infrastructure costs, associated with crawling the domains. Subscriber will have access to the crawler console, through which Subscriber may administer the Crawler. These Self-Service Crawler terms are incorporated into and form a part of Subscriber's Agreement with Algolia. Data extracted from the domains by the Crawler is Subscriber Data. In addition to any Plan limits, the Crawler is subject to the following technical limits:
| Crawler Data Limitation | Limit |
|---|---|
| Size per Resource* | 10 MB |
| Number of Resources crawled (per month) | 5 million (with a maximum of 500,000 non-HTML Resources or HTML pages requiring JavaScript rendering) |
| Number of crawling refreshes/recrawls (per day) | Manual: 100; Automatic: once |

*A Resource means an HTML or non-HTML document (e.g. PDF, DOC, JPEG).