Since the semantics of 1.1 and 2.0 are the same (aside from push), you can choose whichever encoding suits your requirements and constraints. In fact, maybe we should rename HTTP/1.1 to "HTTP/2.0 Textual Encoding" or something so that people can feel like they're not missing out on anything. It worked for USB 2.0.
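For the curious, that choice is typically made via ALPN during the TLS handshake - the client offers both encodings and the server picks one. A minimal Python sketch (the hostname is just a placeholder):

    import socket
    import ssl

    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])  # prefer HTTP/2, fall back to 1.1

    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            # Same request semantics either way; only the wire encoding differs.
            print("negotiated:", tls.selected_alpn_protocol())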
I've never understood this sentiment. It's not like we can know what is passing through the boxes by watching the blinkenlights - this isn't paper tape anymore.
You already have to use tcpdump to pick the bits apart - and since the traffic is likely encrypted, you have to set up the appropriate certificates and keys if you want to decipher arbitrary traffic anyway.
If it's your own app, you know which requests are going where and can hex-dump them any way you choose.
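E.g., a sketch of that workflow for your own app, assuming Python 3.8+ (host and key-log path are placeholders): run tcpdump alongside it, point Wireshark's TLS key log preference at the file, and you can decrypt and dissect the capture at your leisure.

    import socket
    import ssl

    ctx = ssl.create_default_context()
    ctx.keylog_filename = "/tmp/tls-keys.log"  # Wireshark reads secrets from here

    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
            # hex-dump the reply however you like
            print(tls.recv(256).hex(" "))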
The whole "its binary so its bad" thing is a complete non-issue. I've heard people complaining "netcat doesn't work anymore" but there's no reason why netcat2 can't be used to talk to arbitrary HTTP/2.0 endpoints.
Sorry, I lost some mates in the SOAP wars of the late 90s and I'm still bitter.
Personally, my issue with HTTP/2 is that it seems to be designed on the assumption that Google-scale is the common or even only use case, and that optimizing Google's bandwidth use is the sole purpose of HTTP.
To me, the real reason is that we want to get to the point where all traffic is secure, and since the cost of establishing a secure connection is high, we should do more with the secure connections we've got before letting them drop!