For some use cases, you might want to scrape and save only unique or new URLs found during a crawl session. For example, if you are scraping a property listing site, you may only want to find new properties and save each one once.

We created this feature for exactly that use case.

How to use

Step 1

Go to Advanced settings

Step 2

Click the "Find & Save unique URLs once" option. Once this is enabled, only new URLs will be saved; any links that were previously crawled in a past session will be ignored.
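In effect, the feature deduplicates against everything crawled in earlier sessions. Here is a minimal sketch of that behavior; the names (`crawl_session`, `seen_urls`) are illustrative only, not the product's actual API:

```python
def crawl_session(found_urls, seen_urls):
    """Save only URLs not crawled in any previous session."""
    # Keep URLs that have never been seen before
    new_urls = [u for u in found_urls if u not in seen_urls]
    # Remember them so later sessions skip them
    seen_urls.update(new_urls)
    return new_urls

seen = set()  # persisted across sessions
# Session 1: every URL is new, so all are saved
session1 = crawl_session(["/listing/1", "/listing/2"], seen)
# Session 2: "/listing/2" was crawled before, so it is ignored
session2 = crawl_session(["/listing/2", "/listing/3"], seen)
```

Here `session1` contains both listings, while `session2` contains only the new `/listing/3`.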

Step 3

Once you start running sessions and want to download the data, click here.

Download by PDE: This will download all data for all extractors in that domain, e.g. "amazon.com"

Download by Extractor: This will download all data in all sessions within that extractor

Step 4

If you would like to reset and start again, use the reset option under the "More" button.
