Web Scraping with Electron (jeffprod.com)
57 points by tazeg95 13 days ago | 52 comments





> Is there a better way to surf the web, retrieve the source code of the pages and extract data from them?

Yes, of course! To get the source code of a web site you don't need a browser and all its complexity. It makes me so sad how far we have come in terms of unnecessary complexity for simple tasks.

If you want to extract data from web pages without pulling in hundreds of megabytes for something like Electron, there are lots of scraping libraries out there. Python alone has at least two good options: Scrapy[1] and BeautifulSoup[2] (quick sketch below).

[1]: https://scrapy.org/

[2]: https://www.crummy.com/software/BeautifulSoup/
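
For a static page, the BeautifulSoup route is only a few lines. A rough sketch (the URL and CSS selector below are placeholders, not anything from the article):

    # Minimal sketch, assuming the page is static HTML.
    import requests
    from bs4 import BeautifulSoup

    html = requests.get("https://example.com/listing").text
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.select("a.item-title"):  # hypothetical selector
        print(link.get_text(strip=True), link["href"])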


This sounds nice, but many modern web pages use extensive client-side rendering. Sure, you can work around that without needing a full JS environment, but doing so is ad hoc and you wind up having to write complex code on a per-site basis.

I do a bunch of web-scraping for hobby shit, and I'd love to be able to not have to shell out to chromium for some sites, but unfortunately the modern web basically means you're stuck with it.


Also sites with some kind of 2FA / OAuth happening. This _looks_ like it would make it possible to log in manually and then start scraping.

Correct me if I’m wrong, but neither one supports JavaScript-rendered pages?

You’re right about the overhead, though; I’d stay miles away from Electron for scraping, but you’ll need more than a curl wrapper to properly fetch data in all shapes and sizes :) Headless Chromium does the trick in that regard.


With web scraping you typically don’t want the visuals anyway. JS-rendered applications are usually easier to scrape because the data they render is usually available somewhere in a more raw or canonical format.

Plenty of websites will only render the content fully after some JavaScript runs, so to properly scrape them you do indeed need a browser to process the JS. This includes text content.

JavaScript-rendered pages load JS which in turn calls some REST API to get data and uses that to render the content. The web scraper then stops scraping the HTML and instead calls and scrapes the REST API endpoint directly.
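
Roughly like this, as a sketch; the endpoint and field names are made up, you would read the real ones out of the browser's network tab:

    # Call the JSON endpoint the page's JS would call, instead of
    # rendering the page. Endpoint and fields are hypothetical.
    import requests

    resp = requests.get(
        "https://example.com/api/v1/items",   # hypothetical endpoint
        params={"page": 1},
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    for item in resp.json()["items"]:         # hypothetical field names
        print(item["name"], item["price"])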

Sure, but I meant building a portable app, for end users who are not coders, with a GUI, and for a dedicated purpose, like for example navigating on Facebook.

So I will edit the question to this: Is there a better way to code a portable application with a graphical user interface to scrape a given site?

Thanks for your comment.


Look up robotic process automation and visual web scraping. Web scraping without having to write code is a well-established field. Just not very popular with the HN crowd, for obvious reasons.

Some examples would be Scrapinghub's Portia system and the Kantu startup. There are also established players like UIPath and Visualwebripper.


You can access the html of the website and use regular expressions.

> You can access the html of the website and use regular expressions.

Yes, but using regular expressions is the last and least recommended solution; please read: https://stackoverflow.com/questions/3577641/how-do-you-parse...


If you read that link, it’s only not recommended because people don’t know how to use it. Regular expressions are powerful.

Read the link. Just wondering how you managed to interpret this:

> regular expressions is a waste of time when the aforementioned libraries already exist and do a much better job on this.

as this:

> it’s only not recommended because people don’t know how to use it


> https://stackoverflow.com/questions/3577641/how-do-you-parse...

It says "can make regex fail when not properly written" etc.

There are different circumstances where using a premade parsing library versus using raw regular expressions is going to make sense.

The answer is not binary.
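
To illustrate with a made-up snippet, here are both approaches side by side; the regex is fine while the markup stays exactly that rigid, and the parser keeps working when it doesn't:

    # Illustrative only: the HTML snippet is made up.
    import re
    from bs4 import BeautifulSoup

    html = '<li class="price">19.99</li><li class="price">4.50</li>'

    # Regex: fine while the markup stays exactly this rigid.
    print(re.findall(r'<li class="price">([\d.]+)</li>', html))

    # Parser: keeps working if attributes are reordered or tags nest.
    soup = BeautifulSoup(html, "html.parser")
    print([li.get_text() for li in soup.select("li.price")])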


I think that was the joke.

You're thinking of another post [0]. Not a joke either, really

[0] https://stackoverflow.com/questions/1732348/regex-match-open...


I can't not mention this infamous SO answer: https://stackoverflow.com/questions/1732348/regex-match-open...

> like for example navigating on Facebook.

What would you want to scrape there which is not against their ToS and a violation of user privacy in general?


I wouldn't mind a personal scraper that pulls down the family updates and pictures I want and puts them somewhere private where I can see them.

Would get rid of the clutter and keep FB from some amount of shenanigans with my browser.


I can’t believe there are people still defending this scummy company.

Facebook broke both legal and ethical “ToS” countless times and has no plan to stop.

Why do you consider what Facebook is doing as OK but a little web scraping for personal usage to be so bad?


As the saying goes: "two wrongs don't make a right." Facebook's ToS is still a ToS. If you want to scrape the data that they've collected, either risk your account due to it being against the ToS or collect the data yourself.

Good luck with that :) Any modern website requires a JavaScript interpreter on the client side, so unless you provide some sort of JavaScript interpretation (which can be messy), you'll be able to scrape only simple content with Scrapy/BS.

I mean, I guess the point is that it allows you to scrape data after it has been rendered by JS.

You can always use something like proxycrawl to scrape JavaScript-rendered pages without using Electron. And it's compatible with Scrapy.

You forgot curl in C/C++, which is the most advanced tool out there.

I wish there was an easy way to send commands to the console of a browser.

That would be all I need to satisfy all my browser automation tasks.

Without installing and learning any frameworks.

Say there was a Linux command 'SendToChromium' that would do that for Chromium. Then to navigate to some page, one could simply do this:

SendToChromium location.href="/somepage.html"

SendToChromium should return the output of the command. So to get the html of the current page, one would simply do:

SendToChromium document.body.innerHTML > theBody.html

Ideally the browser would listen for this type of command on a local port. So instead of needing a binary 'SendToChromium' one could simply start Chromium in listening mode:

chromium --listen 12345

And then talk to it via http:

curl 127.0.0.1:12345/execute?command=location.href="/somepage.html"


What you are describing is exactly what Puppeteer uses internally; you might want to explore their code base.

While not currently "easy", there exists the Chrome DevTools Protocol.[0] I'm not aware of a CLI utility that communicates with it, but it wouldn't be impossible to make one that fulfills what you're looking for. A second tool could then act as a REST proxy, if calling the commands via curl is really your jam.

I think you've given my weekend some purpose. Lemme see what I can pull together...

[0] https://chromedevtools.github.io/devtools-protocol/
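
FWIW, a bare-bones version of that CLI is mostly plumbing. A sketch in Python, assuming Chromium was started with --remote-debugging-port=9222 and that the requests and websocket-client packages are installed (the script name and everything else here is made up, not an existing tool):

    # send_to_chromium.py -- usage: python send_to_chromium.py 'document.title'
    import json, sys
    import requests
    import websocket  # pip install websocket-client

    expression = sys.argv[1]
    targets = requests.get("http://127.0.0.1:9222/json").json()
    page = next(t for t in targets if t["type"] == "page")  # first open tab

    ws = websocket.create_connection(page["webSocketDebuggerUrl"])
    ws.send(json.dumps({
        "id": 1,
        "method": "Runtime.evaluate",
        "params": {"expression": expression, "returnByValue": True},
    }))
    while True:
        msg = json.loads(ws.recv())
        if msg.get("id") == 1:  # skip any event notifications
            print(msg.get("result", {}).get("result", {}).get("value"))
            break
    ws.close()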


Yes, that might work. Maybe an even better approach is to use chromium-chromedriver.

I just got it working like this:

    apt install chromium-chromedriver
    chromedriver
This seems to create a service that listens on port 9515 for standardized commands to remote control a chromium instance. The commands seem to be specified by the W3C:

https://www.w3.org/TR/webdriver/

I got it to open a browser with this curl command:

    curl  -d '{ "desiredCapabilities": { "caps": { "nativeEvents": false, "browserName": "chrome", "version": "", "platform": "ANY" } } }'  http://localhost:9515/session
I have not yet figured out how to send JavaScript commands, though.
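
If it is speaking the W3C protocol, script execution should just be another POST to the session's execute/sync endpoint. A sketch using Python's requests (session handling is simplified, and the response shape differs a bit between the legacy and W3C dialects, so treat this as a starting point):

    import requests

    base = "http://localhost:9515"

    # Create a session (W3C-style capabilities; chromedriver also accepts
    # the legacy desiredCapabilities payload used above).
    sess = requests.post(f"{base}/session", json={
        "capabilities": {"alwaysMatch": {"browserName": "chrome"}},
    }).json()
    session_id = sess["value"]["sessionId"]  # legacy responses: sess["sessionId"]

    # Navigate, then run a script and read its return value.
    requests.post(f"{base}/session/{session_id}/url",
                  json={"url": "https://example.com"})
    result = requests.post(
        f"{base}/session/{session_id}/execute/sync",
        json={"script": "return document.title;", "args": []},
    ).json()
    print(result["value"])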

You can do this with Selenium pretty easily, and low level webdrivers support it too.

I would give low level webdrivers a go. But so far I have not even figured out how to install one for Chromium on Debian.

    apt install chromedriver
gives me:

    Package 'chromedriver' has no installation candidate
There is something called "chromium-chromedriver".

Let me try that ... one moment ...

OK. So I install it via:

    apt install chromium-chromedriver
Now according to the docs, this should create a browser:

    curl  -d '{ "desiredCapabilities": { "caps": { "nativeEvents": false, "browserName": "chrome", "version": "", "platform": "ANY" } } }'  http://localhost:9515/session
Ha! It works!

So chromedriver might be a solution!


I've written something[1] that can basically do this, though it's non-interactive.

The CLI interface is somewhat incomplete as-of-now, but it'd be fairly easy to add more comprehensive tools.

[1] https://github.com/fake-name/ChromeController


What would you do with this solution that can’t already be done with headless browsers, apart from looking at it?

It’s very much like that already: write a script, send it to the browser, let it do its thing, run JavaScript code if you want, and get the final rendered HTML and console output.


    What would you do with this solution
Automate the browser

    that can’t already be done
I did not say it can't be done now. But I don't want the overhead of Puppeteer or Selenium plus the client libraries to control them. These things are fricking monsters. And they will go out of fashion at some point. Simple JavaScript commands will not.

    with headless browsers
I don't want headless. I just want to automate.

> But I don't want the overhead of Puppeteer

I don't understand this objection. Puppeteer is the client library that you want, and it's not particularly massive. It's the plumbing to handle all of the stuff around starting and connecting to browsers, providing a tidy API to abstract the stuff that's annoying to use directly.

In fact, you can do almost anything you want using the existing interface that Chrome has built in:

  chromium --remote-debugging-port=9222
You can connect to it using e.g. the ws command-line tool, with the URL it spits out:

  ws ws://127.0.0.1:9222/devtools/browser/81b9b178-ece0-4953-8ea0-bce6ac31d89c
Then you can make it do things:

  {"id": 1, "method": "Target.createTarget", "params": {"url": "http://www.google.com"}}
It does what you want, over a web socket interface rather than HTTP.

That looks interesting.

Where does that ws tool come from? Is it this one? https://packages.debian.org/stretch/node-ws

How do you send JavaScript commands over that protocol?


You can run puppeteer with any chromium binary, including non-headless mode.

You can also just start chromium with debugging active `--remote-debugging-port=9222` and then connect to that port and send any dev console commands.


How do you send dev console commands to that port?

It uses the DevTools protocol; see here: https://chromedevtools.github.io/devtools-protocol/

Looks like it's a REST protocol.

You have to connect to a tab and then call e.g.

    Runtime.evaluate {expression: "console.log('test')"}

Also, you can apparently connect the Puppeteer client to a running Chrome instance [1]. That way you can use the nice and statically typed Puppeteer API and you should have pretty much zero overhead.

[1] https://github.com/GoogleChrome/puppeteer/issues/238


With ‘native messaging,’ you can have your program communicate with your extension, so the extension could then do everything that's available to it via the API.

Won't be surprised if an extension like this already exists.


Interesting but seems less powerful than my current setup:

- I have mitmproxy to capture the traffic / manipulate the traffic

- I have Chrome opened with Selenium/Capybara/chromedriver and using mitmproxy

- I then browse to the target pages, it records the selected requests and the selected responses

- It then replays the requests until they fail (with a delay)

I highly recommend mitmproxy, it's extremely powerful: capture traffic, send responses without hitting the server, block/hang requests, modify responses, modify requests/responses headers.
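
In case it helps anyone reproduce part of this: mitmproxy addons are plain Python. A minimal sketch that saves responses for one host to disk (the host filter, file names, and output directory are placeholders):

    # save_responses.py -- run with: mitmdump -s save_responses.py
    import hashlib
    from pathlib import Path

    from mitmproxy import http

    OUT = Path("captured")  # placeholder output directory
    OUT.mkdir(exist_ok=True)

    class SaveResponses:
        def response(self, flow: http.HTTPFlow) -> None:
            # Placeholder filter: keep only the traffic you care about.
            if flow.request.pretty_host.endswith("example.com"):
                name = hashlib.sha1(flow.request.url.encode()).hexdigest()
                (OUT / f"{name}.bin").write_bytes(flow.response.content or b"")

    addons = [SaveResponses()]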

Then higher level interfaces can be built on top, Selenium allows you to load Chrome extensions and execute Javascript on any page for instance. You can also manage many tabs at the same time.

I could make a blog post/demo if people are interested


A blog article would be nice. Sounds interesting, but I’m having a hard time understanding. If it’s replaying requests, how do you get it to do things like go to the next page and click through all of the paginated results?

In my case I can't do the pagination automatically so I have to fetch the pages myself to then have them replayed.

In most cases you would capture the request and change the "page=" parameter (either for an HTML page or an API).

You could also use Selenium to click on each "next page" link (sketch below). Could be parallelized with multiple tabs / windows.
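
As a rough Selenium sketch of that clicking approach (Selenium 4 API; the start URL and link text are placeholders, and JS-driven pagination would probably need explicit waits):

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.common.exceptions import NoSuchElementException

    driver = webdriver.Chrome()                       # assumes chromedriver is available
    driver.get("https://example.com/results?page=1")  # placeholder start URL

    while True:
        html = driver.page_source  # rendered HTML for the current page; parse it here
        try:
            driver.find_element(By.LINK_TEXT, "Next").click()  # placeholder link text
        except NoSuchElementException:
            break

    driver.quit()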

The only website that blocks me is Bloomberg because they detect mitmproxy (I didn't care enough to make mitmproxy harder to detect).

Another detail is that regular Chrome doesn't let you load pages with insecure certificates, while chromedriver allows that.

Anyway, I will write about all that; I already posted some code on my Twitter: https://twitter.com/localhostdotdev (which I will turn into a blog).


> I could make a blog post/demo if people are interested

Yes, please!


I'm going to plug my app that does scraping with Electron: https://github.com/CGamesPlay/chronicler

To the commenters who don't understand why this is necessary:

- It reliably loads linked resources in a WYSIWYG fashion, including embedded media and other things that have to be handled ad hoc when using something like BeautifulSoup.

- It handles resources loaded through JavaScript, including HTML5 History API changes.


Could you explain what Electron offers here over, for example, a browser plugin? I'm not that familiar with the limitations of the WebExtension APIs.

It looks like an interesting project, but only for a few selected sites. For more random browsing I believe I would be too security-conscious (https://electronjs.org/docs/tutorial/security) to allow it.

It might be unlikely that some random script on a random site will target Electron, but you never know.


Well in Chrome you can't hook into the network layer to record/replay requests like I've done here (you could fake it by overriding the ServiceWorker, possibly, but this feels brittle and I'm not sure if it's possible either). I'm not familiar with Firefox extensions but I'm given to understand you likely could implement this project as a Firefox extension.

Locking down Electron apps to be safe on the larger web is certainly one area where I think Electron could do a lot better. I think my project has followed all of the recommendations and should be safe, but I agree with you that it feels like a bigger attack surface. I personally wanted it to support offline browsing of documentation sites, which are generally pretty "safe" from that perspective.


Are there any other advantages over things like webdriver or puppeteer?

Not really.

I also have no idea what cheerio brings to the table here.

Seems like a hefty solution to web scraping


No, there aren't. Just use WebDriver.

One thing about Selenium and Puppeteer is that they trigger captchas on some websites, making scraping impossible. This would perhaps fix it?


