In that context, it's dead-simple to use, and someone with very little experience should be able to get a working prototype in under 5 minutes.
For my use case, it's closer to "wget/curl with JS processing" than "automating a user's browsing experience". I don't particularly care which browser is doing the emulation; the ease of use of the API makes the biggest difference.
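As a rough illustration of that "wget/curl with JS processing" workflow, here's a minimal sketch using Puppeteer (the URL is just a placeholder):

```typescript
// Minimal sketch: fetch a page, let Chrome execute its JavaScript,
// and dump the rendered HTML. Roughly "curl with JS processing".
import puppeteer from "puppeteer";

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto("https://example.com", { waitUntil: "networkidle0" });

  // page.content() returns the DOM as rendered after scripts have run,
  // not the raw bytes curl/wget would see.
  console.log(await page.content());

  await browser.close();
})();
```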
It seems very similar to PhantomJS, but to be honest, it's more attractive from an ongoing-support standpoint simply because it's an official Chrome project.
If you have an existing web service, this appears suitable for actual production usage to deliver features like PDF invoices and receipts, on-demand exports to multiple file formats (PNG/SVG/PDF), and so on. That has quite different requirements compared to an automated testing framework.
headless-chrome would provide the functionality of a server-side microservice, rather than automated testing of UI/UX; there are already more appropriate projects for that.
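For instance, the PDF/PNG export piece of such a microservice can be sketched with Puppeteer's page.pdf() and page.screenshot() calls (the invoice URL and file names here are made up, and SVG export would need a different route):

```typescript
// Sketch of a server-side export worker: render a URL to PDF and PNG.
// "https://example.com/invoice/42" is a hypothetical endpoint.
import puppeteer from "puppeteer";

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto("https://example.com/invoice/42", { waitUntil: "networkidle0" });

  // PDF invoice/receipt...
  await page.pdf({ path: "invoice-42.pdf", format: "A4", printBackground: true });

  // ...and an on-demand PNG export of the same view.
  await page.screenshot({ path: "invoice-42.png", fullPage: true });

  await browser.close();
})();
```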
As someone with a bunch of tooling around the dev-tools API, it's a huge pain in the ass to not be able to tell what functions the remote browser supports. There's a version number in the protocol description, but it's literally never been incremented as far as I've seen.
It should be possible to get the running chrome version and to do feature detection over this protocol.
If it isn't, it's a bug.
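Concretely, you can at least read the browser and protocol versions over the DevTools HTTP endpoint. A minimal sketch, assuming Chrome was started with --remote-debugging-port=9222:

```typescript
// Query the DevTools HTTP endpoint for version info.
// Requires Node 18+ for the global fetch().
interface VersionInfo {
  Browser: string;            // e.g. "Chrome/120.0.6099.109"
  "Protocol-Version": string; // the rarely-incremented number in question
  "User-Agent": string;
  "WebKit-Version": string;
}

async function getBrowserVersion(host = "localhost", port = 9222): Promise<VersionInfo> {
  const res = await fetch(`http://${host}:${port}/json/version`);
  if (!res.ok) throw new Error(`DevTools endpoint returned ${res.status}`);
  return (await res.json()) as VersionInfo;
}

getBrowserVersion()
  .then((v) => console.log(`${v.Browser} speaks protocol ${v["Protocol-Version"]}`))
  .catch(console.error);
```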
For an example of an impossible task, try to retrieve request headers using Selenium. Browser automation gets more and more complicated the more cases you try to cover, and my impression is that WebDriver is definitely not enough. Who knows, perhaps some new version of WebDriver that I've never heard of will catch up once the functionality gets properly defined.
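To make the contrast concrete, here's a sketch of that same task over the DevTools protocol via Puppeteer, which exposes request headers directly (the URL is a placeholder):

```typescript
// Capture outgoing request headers as a page loads:
// something plain WebDriver does not expose.
import puppeteer from "puppeteer";

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Log every request's method, URL, and headers. Note: these are the
  // headers known to the protocol; some network-stack-added headers
  // may not appear.
  page.on("request", (request) => {
    console.log(request.method(), request.url());
    console.log(request.headers());
  });

  await page.goto("https://example.com");
  await browser.close();
})();
```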
Given that the Selenium leadership is apparently uninterested in improvements, and given its many limitations, trying to improve things there is more effort than it's worth.
Basically, they're still stuck on the idea that they're ONLY for emulating user-available input (and that the dev-tools don't exist).
In reality, there is tremendous interest in more complex programmatic interfaces, but apparently they're unwilling to see that, and are instead only interested in their implementation's "user-only" ideological purity.
Selenium can't do that itself, but Selenium can drive a browser that uses a proxy, and you can retrieve everything about the request from that. It's a lot more challenging if you're testing things that use SSL, but it's not impossible with a decent proxy app (e.g. Charles).
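A rough sketch of that setup with selenium-webdriver for Node; the proxy address is an assumption, and for SSL traffic you'd also need to trust the proxy's CA certificate:

```typescript
// Route Selenium's browser through an intercepting proxy
// (Charles, mitmproxy, etc.). "localhost:8888" is an assumed address.
import { Builder } from "selenium-webdriver";
import * as proxy from "selenium-webdriver/proxy";

(async () => {
  const driver = await new Builder()
    .forBrowser("chrome")
    .setProxy(proxy.manual({ http: "localhost:8888", https: "localhost:8888" }))
    .build();

  // Selenium still only sees the page; the proxy records the full
  // request/response transactions, headers included.
  await driver.get("https://example.com");
  await driver.quit();
})();
```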
Due to the nature of browser plugins, they could only access information that the browser made available. Also, the devs have consistently been of the opinion that they only care to simulate the end-user experience. (End users only care about the webpage's presentation, not which headers were present per transaction.) Although I suspect those early restrictions influenced that viewpoint.
It wasn't until much later that browsers started implementing their own automation channels: Chrome's Automation Extension and Debugging Protocol, Firefox's Marionette, etc. At the same time, browsers started putting additional security measures around plugins, making it even more difficult to have consistent features across Selenium's drivers.
Which is why WebDriver became an open specification instead of various driver implementations. I believe Microsoft was the first to implement their own driver, InternetExplorerDriver, for IE7+. Then came ChromeDriver (powered by the Chrome Automation Extension) and GeckoDriver (which translates to Firefox's Marionette driver), and SafariDriver is now baked into Safari 10.
I think for most developers ("devs", "qa", "scrapers", etc.) there's very little appeal in moving away from Selenium, because it would require maintaining multiple test suites. It gives consistent results and just works. If you want lower-level information, it's fairly simple to either 1) just use a CLI client (curl, wget, etc.), 2) use a library (libcurl, Requests in Python, Net::HTTP in Ruby, etc.), or 3) set up a proxy server. I do all of the above; each has its own downsides: clients and libs don't do any rendering themselves, and proxies tend to rewrite transactions (e.g. strip compression and add/remove/alter headers like 'Content-Encoding: gzip').
But your point resonated with me: there's just so much code and so many automation assets already written. Exploring new tools usually happens when a new project starts, rather than by entirely recoding an existing one. For existing code bases, will anything change unless contributors from the community write a parser to translate those suites to the Puppeteer API?