The gRPC-Web _protocol_ supports streaming - it supports everything that standard gRPC does.* The client libraries that operate in web browsers don't support client streaming, because every major browser currently buffers request bodies.
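To make the browser limitation concrete, here's a minimal sketch (illustrative only, not connect-web's API; the URL is made up). Streaming a request body from `fetch` requires a `ReadableStream` body plus `duplex: "half"`, which at the time of writing only Chromium supports, and only over HTTP/2 — every other browser either rejects the stream or buffers it fully before sending, so the "stream" has ended before the server sees byte one:

```typescript
// Build a streamed request body. In most browsers this body is
// buffered in memory in its entirety before the request goes out.
const body = new ReadableStream<Uint8Array>({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("first message"));
    controller.close();
  },
});

// `duplex: "half"` is required for stream bodies, but only Chromium
// honors it (and only over HTTP/2) — hence no client streaming on the web.
const request = new Request("https://api.example.com/upload", {
  method: "POST",
  body,
  // @ts-ignore -- `duplex` is not yet in every TypeScript DOM lib
  duplex: "half",
});
```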
Right now, you're correct - the easiest way for another server to speak the Connect protocol is to use connect-go. We're planning to support more backend languages soon: Node.js is up next, with the JVM maybe afterwards. We'd also love to contribute (time, code, review, whatever) to any other backend implementations - the protocol is not that complex. File an issue on one of our projects or ping us in our Slack; it'll probably be an easy sell.
Envoy support is complicated. The honest truth is that we haven't looked into it all that much because it feels rude - Envoy's architecture requires first-party plugins to live in the main repo, and it feels impolite to attempt a C++ code dump for our new project. Maybe once Connect takes the world by storm :) Even if we did do that, a Connect-to-gRPC translation layer would only work for binary payloads - converting JSON to binary protobuf requires the schemas, and redeploying proxies every time the schema changes is an operational nightmare.
* Worth noting that gRPC-Web doesn't really have a specification. There's a narrative description of how gRPC-Web differs from standard gRPC, but they specifically mention that the protocol is defined by the reference implementation in Envoy.
Random thoughts: I personally would not mind requiring Protobuf payloads so the proxy portion can stay dumb — keeping proxies up to date sounds no fun. Losing inspectability (not having JSON) is a minus but not a deal breaker. Requiring servers to use a new library is a big lift for people, and hard to sell if what you have works. Having a proxy you can drop in in the meantime, while getting client streaming at the same time, seems nice. In any case, having the shiny TypeScript-friendly libraries that can easily switch between gRPC-Web and Connect is nice, so thanks for that.
For what it’s worth, our servers are C++ and use Arrow Flight, which is why I’m even interested in client streaming at all (its DoPut method has a streaming argument). My current solution is to add a regular gRPC method that behaves similarly to DoPut but is a unary call for browser clients. Not terrible, but it’d be nice to not have to do that.
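A sketch of the client side of that fallback (everything here is hypothetical — the function name and the 4-byte length-prefix framing are made up for illustration, not Arrow Flight's actual wire format). Instead of streaming record batches to DoPut, the browser collects them and packs them into one payload for the unary call:

```typescript
// Pack a sequence of serialized record batches into a single unary
// request body: each batch is prefixed with its 4-byte big-endian length,
// so the server can split them back apart.
function packBatches(batches: Uint8Array[]): Uint8Array {
  const total = batches.reduce((n, b) => n + 4 + b.byteLength, 0);
  const out = new Uint8Array(total);
  const view = new DataView(out.buffer);
  let offset = 0;
  for (const batch of batches) {
    view.setUint32(offset, batch.byteLength); // big-endian length prefix
    out.set(batch, offset + 4);
    offset += 4 + batch.byteLength;
  }
  return out;
}
```

The server-side unary handler then loops over the prefixes and feeds each batch through the same code path the real DoPut uses.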
I may well take you up on filing an issue/pinging y’all on Slack.