
Ask HN: How to take a screenshot of a complex website in 2017? - swah
Hey,

(I fail at this every two years; here we go again, now with your help.)

I want to take screenshots of a few news websites for a little fake-news project of mine, and most approaches return something completely different from what I see when I open the page in Chrome.

A limited height would be fine, even preferable (something like the first 3000 pixels), since these news sites often have infinite scrolling.

I've tried:

- PhantomJS (rendering was poor; I tried every technique I could find to wait for the JS to load)

- wkhtmltopdf (almost OK, but it generates a huge 30 MB image at the full page height, apparently with no antialiasing)

- https://github.com/gen2brain/url2img (the best so far; it uses Qt bindings, though not the latest version)

- actually running a real browser headlessly on DigitalOcean with xvfb-run and taking a screenshot: I failed at this

What I haven't tried is Selenium, because it seemed even harder.

How would you do it?
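One route the list above stops just short of is headless Chrome itself: as of Chrome 59 (mid-2017) it can render a page and write a PNG with no display server, which covers the xvfb-run case without xvfb. A minimal sketch, assuming a `google-chrome` binary is on PATH (the binary name varies by platform; `chromium-browser` or a full path may be needed); the fixed `--window-size` caps the capture at the first 3000 pixels, sidestepping the huge full-height images wkhtmltopdf produces:

```python
import shutil
import subprocess

def screenshot_cmd(url, out_png, width=1280, height=3000,
                   chrome="google-chrome"):
    """Build the headless-Chrome command line for a fixed-height capture.

    --headless renders without a display server, --screenshot writes a
    PNG, and --window-size limits the viewport so only the first
    `height` pixels of the page are captured.
    """
    return [
        chrome,
        "--headless",
        "--disable-gpu",
        "--window-size={},{}".format(width, height),
        "--screenshot={}".format(out_png),
        url,
    ]

if __name__ == "__main__":
    cmd = screenshot_cmd("http://www.estadao.com.br", "estadao.png")
    if shutil.which(cmd[0]):  # only run if Chrome is actually installed
        subprocess.run(cmd, check=True, timeout=120)
    else:
        print("Chrome not found; would run:", " ".join(cmd))
```

Note that JS-heavy news sites may still need a delay before capture; Chrome's bare `--screenshot` shoots as soon as the load event fires, so pages that hydrate late can come out half-rendered, and a DevTools-protocol client (or Selenium driving the same headless Chrome) gives finer control over when to shoot.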
======
swah
Here are the major news websites I'm testing on:

[http://www.estadao.com.br](http://www.estadao.com.br)

[http://www.folha.com.br](http://www.folha.com.br)

[http://g1.globo.com](http://g1.globo.com)

[http://www.bol.com.br](http://www.bol.com.br)

[http://edition.cnn.com](http://edition.cnn.com)

