We've done this by separating the concepts of selecting and doing something to a selection. Specifically, we've created tools like click, input, hover, etc. that you can combine with any selection, and with each other. This keeps much of the power you have with programming.
The same applies to the data structures that get created: because the tools combine freely, it's easy to express nested or even recursive lists.
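Here's a rough sketch of the idea in TypeScript (illustrative only, not our actual API), just to show how keeping selections and actions separate lets you combine them freely:

```typescript
// Illustrative only -- not ParseHub's real API. The point is that a selection
// and an action are independent pieces that can be combined freely.
type PageSelection = { css: string };
type Action = (el: Element) => void;

const click: Action = (el) => (el as HTMLElement).click();
const hover: Action = (el) =>
  el.dispatchEvent(new MouseEvent("mouseover", { bubbles: true }));

// Apply any action to every element matched by any selection.
function apply(sel: PageSelection, act: Action): void {
  document.querySelectorAll(sel.css).forEach(act);
}

apply({ css: ".load-more" }, click); // e.g. click every "load more" button
apply({ css: ".menu-item" }, hover); // or hover every menu item
```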
If you have any questions I'd love to answer them.
One question about this FAQ:
> Does ParseHub respect robots.txt?
> We're working on an admin panel to give webmasters full transparency and control. We'll have more info soon.
At the moment, ParseHub does not respect robots.txt. We do expect to add this, along with features for webmasters, in the future, but we haven't had the developer cycles for it yet.
Or it might be that in my keyboard layout, [ and ] require pressing RIGHT ALT + 8 and RIGHT ALT + 9, respectively.
PS: Indeed, changing the layout to en-US fixes the problem, but that's not a real solution.
Also, are you thinking of allowing it to run locally? (i.e. I have some websites that only work from my IPs)
Please note that the password will be accessible by ParseHub, since it needs to enter it on the web page.
Currently, we support local deployments only in our custom enterprise plan. That may change in the future.
One of the things that has been heavily marketed by other web scrapers is "crawling" as a separate feature.
With ParseHub, all the tools easily combine, so you don't need that distinction. You can use the navigate tool to jump to another page (see our interactive navigation tutorial in the extension for the details).
And you can combine multiple navigations to go as deep into the website structure as you like. For example, say you have a forum that links to subforums, which link to posts, which link to users. You can easily model the structure of such a site with a few navigation nodes (one from the forum to its subforums, another from each subforum to its posts, etc.). The result would be a big JSON (or CSV) dump of all the data on the forum, in the proper hierarchy.
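Purely as an illustration (the field names here are invented, not our actual output schema), the nested dump for that forum example could be shaped roughly like this:

```typescript
// Invented field names, for illustration only: one possible shape of the
// nested forum -> subforum -> post -> user dump described above.
const forumDump = {
  subforums: [
    {
      name: "General",
      posts: [
        {
          title: "Welcome thread",
          users: [{ username: "alice" }, { username: "bob" }],
        },
      ],
    },
  ],
};
```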
We've really tried to make our tools as general as possible. A side effect of the navigate tool is that you can use it to get "pagination" for free as well (another feature that's been heavily marketed).
> Easily turn websites into APIs or tables of data you can download in JSON or CSV
Do you need to download, or can you call these APIs from an application?
Is a Chrome extension in the works at all?
We want to show a sample immediately as a user changes what they extract. On a static website, this is fairly easy. You simply run what the user created on the currently visible page.
However, once interactivity is involved, you can no longer do that. The major problem is non-idempotent operations. Imagine a click that changes the DOM of a page, and now imagine re-running the sample on that same page. The re-run may no longer work, because the click could have changed the page in such a way that the extraction breaks (e.g. it deletes an element from the page).
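To make that concrete, here's an illustrative sketch in plain DOM TypeScript (nothing ParseHub-specific) of how a non-idempotent click breaks a re-run:

```typescript
// Illustrative only: a recorded step clicks a button that removes a banner.
document.querySelector<HTMLElement>(".dismiss-banner")?.click();

// The first run works, but the click mutated the DOM: the banner is gone.
// Replaying the same sample against the already-mutated page now fails,
// because the element the extraction depended on no longer exists.
const banner = document.querySelector(".banner"); // null on the re-run
```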
To solve this issue, we actually reset a "hidden tab" to the starting state of the page you're on, every time you re-run a sample. Unfortunately, it's not possible to create such hidden tabs in Chrome. We also mess with the cache to make sure this tab can be reset really quickly, something we couldn't find an API for in Chrome either.
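Conceptually, every sample run then looks something like this (just a sketch of the idea with a hypothetical restoreSnapshot helper, not our actual implementation):

```typescript
// Conceptual sketch only, not ParseHub's implementation. restoreSnapshot() is
// a hypothetical helper that reloads the hidden scratch tab back to the
// page's original starting state (served quickly thanks to caching tricks).
declare function restoreSnapshot(): Promise<Document>;

async function runSample<T>(extract: (doc: Document) => T): Promise<T> {
  const doc = await restoreSnapshot(); // fresh copy of the starting page
  return extract(doc);                 // run the user's extraction on it
}
```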
Hope that answers your question.
Not sure if it fits all your needs.