Chrome deals pretty well with garbage collection, so long as I'm careful to de-reference closed windows properly¹, and only uses a maximum of 150MB. PhantomJS eats up almost 6GB of memory before it's done, which makes it almost unusable on machines with less memory or on CI boxes. Travis is a no-go.
I'm hoping that running Chrome in headless mode will give our tests a nice speedup.
¹ Turns out even a closed popup window or iframe keeps a huge amount of memory hanging around. Who knew.
The DevTools Protocol is the primary API for headless Chrome, but we're excited about higher-level abstractions for manipulating the browser, like PhantomJS's & NightmareJS's APIs, as well. Plenty of details to work out, but hopefully sometime this year you'll get a drop-in solution that upgrades some of your testing from Phantom's older QtWebKit to the latest Chromium.
> >Is there interest on your side in adopting Chromium as a runtime? There's some existing documentation around the API and embedding, but admittedly, this would be some work.
> We are interested. But I am afraid not in the current state. Currently, PhantomJS heavily relies on Qt and QtWebKit. It's not that easy to adopt Chrome as a new runtime.
> But I think we could implement PhantomJS as a completely new (with the same API) project that will use Chrome - Phantomium!
We laypeople can only find these things out via word of mouth or by observationally testing our assumptions.
Why not the WebDriver protocol? That seems to be exactly what it's intended for…
*Yeah, I know PhantomJS is cooler these days, but Phantom doesn't support window height, so there's that.
The official Selenium Docker image uses the same technique to run headless Chrome / Firefox.
From there you can just run
docker run -d -P selenium/standalone-chrome
- Making sure all promises are fulfilled or rejected, so window objects don't get caught indefinitely in closure scope for any .then() or .catch() handler functions.
- Using WeakMaps as much as possible, when we have things that are tied to a particular window, like message listeners or response handlers in post-robot
- Manually clearing up any global references to windows when we destroy an xcomponent instance
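As a concrete sketch of the WeakMap point above (the function names here are illustrative, not post-robot's real API): keying per-window state on the window object itself means the entries become collectable as soon as nothing else references that window.

```javascript
// Map each window to its message listeners without pinning the window
// in memory: a WeakMap entry is collectable once its key (the window)
// has no other references.
const listenersByWindow = new WeakMap();

function addListener(win, handler) {
  // Lazily create the per-window listener list.
  if (!listenersByWindow.has(win)) {
    listenersByWindow.set(win, []);
  }
  listenersByWindow.get(win).push(handler);
}

function dispatch(win, message) {
  const handlers = listenersByWindow.get(win) || [];
  handlers.forEach(handler => handler(message));
}

// Usage: any object stands in for a window here.
const fakeWindow = {};
const received = [];
addListener(fakeWindow, msg => received.push(msg));
dispatch(fakeWindow, 'hello');
// Dropping every reference to fakeWindow now lets both the window and
// its handlers be garbage collected -- no manual cleanup needed.
```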
Finding the references was the tricky bit. A lot of the effort was finding a leaky test case, running it 100 times in succession, and deleting code until the memory graph was flat -- then figuring out what I'd just deleted that caused the leak.
The problem started manifesting as I added more and more tests -- so now I'm actually checking my tests' memory usage on the fly and failing if they cross a threshold. Hopefully that should avoid getting into this kind of sticky situation ever again.
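A minimal sketch of that kind of in-test memory guard, assuming a Node-based runner (the threshold is an arbitrary example value; in a browser you'd read something like `performance.memory.usedJSHeapSize` instead of `process.memoryUsage()`):

```javascript
// Fail fast if a test run leaks past a fixed heap budget.
// 200 MB is an arbitrary example threshold, not a recommendation.
const MEMORY_LIMIT_BYTES = 200 * 1024 * 1024;

function checkMemoryUsage() {
  const { heapUsed } = process.memoryUsage();
  if (heapUsed > MEMORY_LIMIT_BYTES) {
    throw new Error(
      `Heap usage ${Math.round(heapUsed / 1024 / 1024)}MB exceeds limit`);
  }
  return heapUsed;
}

// Call this after each test (e.g. in an afterEach hook) so the failing
// test is the one that pushed usage over the line.
const used = checkMemoryUsage();
```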
Memory usage is pretty high, a lot of heavy webpages result in crashes/hangs, there are many inconsistencies between the features available in the full version and in headless, the debugging protocol has different APIs that work on headless/non-headless on Linux or Windows, and so on.
Of the bugs I've submitted, some have been fixed in the upcoming M59, but other critical ones may take longer due to their backlog. I suppose for now (maybe until M61-62), full Chrome with xvfb or even PhantomJS are better options. When you realize that Chrome is about the same size (by LoC) as the Linux kernel, you can't help but wish for a leaner & faster headless browser.
There seems to be some work going on to build a purely headless Firefox as well. Great overall, as long as all the browsers try to follow the RemoteDebug initiative.
Run it with:
Chrome v55 brought a 30% memory savings; before that I used 1GB containers.
It's not perfect, but I'm definitely pushing high volume (multiple tabs, concurrent activity) across diverse sets of webpages, and I'm not having any significant stability issues.
- Are you worried about same-origin pollution if you run multiple tabs from the same origin in the same process? If so -> Extra process
- Do you have to take screenshots? You can only take screenshots of the tab that's in the foreground, so you have to activate it first to take the screenshot. This might fail if you have lots of tabs that trigger at roughly the same time.
You can see what I've built at https://urlscan.io btw.
A list of V8 flags can be found with:
I ended up writing a simple check using Go's net/http library to do basic performance profiling, but it doesn't measure DOM loading like the Chrome checks do. Such a bummer.
What I really want is an easy, cross-platform way to collect the network timings for each object like you get in Chrome's dev tools network waterfall graphs.
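For what it's worth, the DevTools Protocol exposes the same per-request numbers the waterfall is drawn from: each Network.responseReceived event carries a timing object with phase offsets in milliseconds. A sketch of turning one of those into waterfall-style phase durations (field names as I understand the protocol's ResourceTiming type, where -1 marks a phase that didn't happen):

```javascript
// Derive DevTools-waterfall-style phase durations from a DevTools
// Protocol ResourceTiming object (the `timing` field on a
// Network.responseReceived event). Offsets are milliseconds relative
// to the request start; -1 means the phase did not occur (e.g. a DNS
// cache hit skips the dns phase entirely).
function waterfallPhases(timing) {
  const span = (start, end) =>
    start >= 0 && end >= 0 ? end - start : 0;
  return {
    dns: span(timing.dnsStart, timing.dnsEnd),
    connect: span(timing.connectStart, timing.connectEnd),
    ssl: span(timing.sslStart, timing.sslEnd),
    send: span(timing.sendStart, timing.sendEnd),
    // Shown as "Waiting (TTFB)" in the DevTools UI.
    wait: span(timing.sendEnd, timing.receiveHeadersEnd),
  };
}

// Example with made-up numbers:
const phases = waterfallPhases({
  dnsStart: 0, dnsEnd: 12,
  connectStart: 12, connectEnd: 40,
  sslStart: 20, sslEnd: 40,
  sendStart: 40, sendEnd: 41,
  receiveHeadersEnd: 95,
});
// phases.dns === 12, phases.wait === 54
```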
Looks like this might make my project obsolete.
It's a PhantomJS (and other headless browser) web service. Using the site, you can quickly create different tests, scheduled tests, and chained tests, keep screenshots, create videos of multi-step tests, and have historical information on it all.
Can't say enough good things about the site.
Edit: also, there's a great Chrome extension that will record your mouse clicks and keyboard commands to make creating a test that much simpler.
They were just starting, but service was rather reliable, and their tech support was excellent (maybe because we were early customers). We used to run a bunch of automated tests for monitoring and compliance, archiving hourly screenshots over different builds for later comparison.
I agree with your sentiment regarding these types of tools. It has been a long time coming but this release and the tools available now are things I wish I had years ago when I was building my first company.
It's also worth noting that the 57+ series has a nice embedded viewer you can use to view the actual viewport via devtools.
Do you know if chromedp can access any of the timing measurements?
It dynamically generates TypeScript definitions for IntelliSense and type checking from their protocol.json files.
The VS Code Chrome debugger uses a fork of it.
I'm the author. /shamelessplug
This is the internal API that DevTools uses, and it's what's referred to as the "Chrome Debugging Protocol" (i.e., what chromedp speaks). Since 57+, the built-in DevTools UI displays whatever the active viewport Chrome "sees" using the screencast APIs. It's just a PNG that's updated every couple hundred milliseconds with the output of Chrome's headless renderer.
It's a binary that, I suppose, runs Chrome in headless mode and supports some command-line options like --screenshot to take screenshots, etc.
I'm having a hard time understanding why it's hanging on some runs, and how --timeout and --virtual-time-budget could help me with this.
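For what it's worth, my reading of those two flags (hedged -- exact semantics may vary by build): --timeout is a wall-clock cap, so Chrome emits whatever it has after N real milliseconds even if the page is still loading, while --virtual-time-budget fast-forwards the page's timers through N milliseconds of virtual time as fast as the CPU allows, which helps with pages that idle on setTimeout before rendering. Roughly:

```shell
# Wall-clock cap: give up and emit output after 10 real seconds,
# which should turn an indefinite hang into a bounded run.
chrome --headless --disable-gpu --screenshot --timeout=10000 https://example.com/

# Virtual time: burn through 5000ms of timer time as fast as the CPU
# allows, then capture -- deterministic, and usually much faster than
# waiting out the timers in real time.
chrome --headless --disable-gpu --screenshot --virtual-time-budget=5000 https://example.com/
```

The URL and flag values above are arbitrary examples.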
It will generate a project file you can use in your own solution. The generated protocol is customizable via Mustache templates, too.
Mostly, I'd like to know how the control of the virtual time system would be exposed. Would it be through the C++ API, or could it be made available through the debugging protocol?
But ultimately Selenium and webdriver.io will do a better job at this.
or wait for this issue: https://github.com/segmentio/nightmare/issues/224
Not headless, but ideally suited for automating complex websites (date controls, etc).
I'm trying to replace PhantomJS in my infrastructure with chromium. Not having to build my own chromium will be a very nice thing.
Limited height would be better/ok (something like the first 3000 pixels).
Low volume / can be slow (30 seconds would be ok).
Those news websites often have infinite scrolling.
- phantomJS (rendering sucked, tried every technique I could find to wait for JS to load)
- wkhtmltopdf (almost OK; generates a huge 30MB image with the full height, and no antialiasing, it seems)
- https://github.com/gen2brain/url2img (this was the best so far, uses Qt bindings but not the latest version)
- actually run a headless browser in DigitalOcean with xvfb-run and take a screenshot: I failed at this
What I didn't try was Selenium, because it seemed even harder.
How would you guys do it?
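One untested possibility, now that headless Chrome has shipped: the headless shell can size its window to a fixed height and screenshot without Xvfb, which would cover the "first 3000 pixels" case directly (flag names as documented for recent builds; the URL is a placeholder):

```shell
# Capture a fixed-height screenshot (first 3000px) with no Xvfb.
# --window-size takes width,height; --screenshot takes an output file.
chrome --headless --disable-gpu \
  --window-size=1280,3000 \
  --screenshot=news.png \
  https://example.com/
```

Infinite-scroll pages would just get cut off at the window height, which sounds like the desired behavior here.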
Server-side SCTP to the client (p2p over SCTP / data channels) would be cool.
Then I'm using simple-peer on top of that. There's also a library for UDP communications from the node process to the golang process.
I've not tested that yet but I can't really see a reason why it wouldn't work.
Headless Chrome architecture: https://docs.google.com/document/d/11zIkKkLBocofGgoTeeyibB2T...
Mailing list: https://groups.google.com/a/chromium.org/forum/#!forum/headl...
All of those links are on https://chromium.googlesource.com/chromium/src.git/+/master/...
The Chrome browser spends a decent amount of time on other steps, such as parsing HTML. I wonder how much time could be saved by not rendering pages into pixels.
(We should probably run under the real IE but just haven't been bothered.)
I tried googling around but didn't find much to say either way...
It will be great to use this headless Chromium on a Raspberry Pi to execute some routine web browser jobs.
Does it support the extensions installed on Chromium? Curious.
Running Version 59.0.3069.0 (Official Build) canary (64-bit)
and the API is available for free: https://github.com/letsvalidate/api
Practically speaking, software developers will use headless Chrome to automate testing of product functionality. Today, developers use systems like Selenium or PhantomJS to accomplish this, but maintaining these headless browser execution engines is a painful process. Adding headless support to Chrome means that developers can count on how their application will be presented by a given version of the Blink engine running within Chrome.
Please see bug: https://bugs.chromium.org/p/chromium/issues/detail?id=603559 for updates.
You can download the two tools wkhtmltopdf and wkhtmltoimage which use WebKit to generate pdfs/images.
Not sure whether all of wkhtmltopdf's functionality can be ported; it had patches to Qt/WebKit to enable that ... it will probably need API enhancements in Chrome. I don't have the time right now, but I registered http://crhtmltopdf.org a while ago hoping that I'd get around to it.
Producing nice semantic HTML is much harder, though also easy if you don't mind every word in a separate absolutely positioned div.
A lot of PDF reader software already contains empirically tuned routines to infer the text flow and generate text files (because the software needs to handle Select All and Copy), but they often produce bad results.
But if you just want to read a PDF on a remote machine over ssh, the easiest solution might be to just transfer the file and open it locally, or to use X forwarding and open the PDF with a graphical reader.
At the opposite end of the complexity scale, I have also used http://www.pdfsharp.com/PDFsharp/ to extract bits of text from PDFs. It's free, but you only get access to the raw PDF text with formatting codes. It works fine if you just want to grab a short string, but you've got your work cut out for you if you want to do anything more sophisticated.
It's based on WebKit.
(I've been using Prince for over a decade, rendering everything from prescription labels, packing slips, and receipts to resumes, books, and more. It's great.)