gRPC lets you use other data exchange formats or IDLs as well, such as FlatBuffers. However, the protobuf codegen experience is where we have spent most of our time and energy.
I see two sides to this - on one hand, there are folks who want a 'contract first' development experience, in which the service contracts are defined first and the business logic is implemented later. gRPC lends itself to this model very well. Admittedly, this is also the way services are developed at Google.
On the other hand, there are folks who want a model whereby you evolve a service and generate the specs from that service. Currently, this is not the experience that gRPC is optimized for. Time and effort are the main barriers to making this work well alongside a contract-first experience. FWIW: I believe both models have merit and it really depends on what you want to adopt as the source of truth for how your services interact. Ideally, gRPC would be good at both.
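To make the contract-first flow concrete, here is a minimal sketch of what "the contract comes first" looks like in practice: a `.proto` file (service, message names, and package below are all illustrative) is written before any business logic exists, and both client and server stubs are generated from it.

```protobuf
// greeter.proto -- illustrative contract, written before any implementation.
syntax = "proto3";

package demo;

// The contract is the source of truth: both client code and the
// server interface are generated from this service definition.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```

The spec-from-code model described above would run the other direction: evolve the handler, then derive something like this file from it.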
In contrast, I have been using NSwag to generate C# client code with far greater comfort, and it already seems to support multiple TypeScript clients.
40+ top contributors and I have decided to fork Swagger Codegen to maintain a community-driven version called OpenAPI Generator with a better governance structure to move the project forward. There are now 10+ core team members and contributors with proper rights to merge PRs, so I think we have better PR management in OpenAPI Generator. Please refer to the Q&A for the reasons behind the fork.
For TypeScript generators, we've recently added the TypeScript Axios client generator, and there's an ongoing project to consolidate the TypeScript generators into one. Please check these out and let us know if you have any feedback.
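For anyone who wants to try it, a typical invocation of the TypeScript Axios generator looks roughly like this (the spec path and output directory are placeholders; run `openapi-generator help generate` to see the exact flags in your version):

```shell
# Generate a TypeScript client using the axios HTTP library
# from an OpenAPI spec (petstore.yaml is a placeholder).
openapi-generator generate \
  -i petstore.yaml \
  -g typescript-axios \
  -o ./generated-client
```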
We hope you will find OpenAPI Generator useful in your projects.
Also, I think I'm missing something:
>which you don’t really need with json+REST anyways
why don't you need server stubs for json+REST?
Check out the main router, it's open source and handles a decent 6k msg/sec on a Raspberry Pi: https://crossbar.io/
The best thing about it is that you don't have to write the message schemas in advance, only the function signatures.
However, if you have a low number of requests, or big requests, REST is going to be faster.
But as usual, it depends on your implementation and your constraints. After all, Facebook uses polling, if I recall correctly.
All in all, if your configuration allows it, it's way, way easier and more flexible to use than MQTT.
Compared to RabbitMQ, it's not as fast. You can't beat years of optimized Erlang and field testing by the Fortune 500. Yet RabbitMQ is very low level: you need to set up queues and consumers, and if you need RPC with a return value you will add manual logic on top of it. Don't get me started if you want to load-balance consumers. Real-life AMQP is hard.
By comparison, Crossbar offers a great out-of-the-box experience.
All in all, I'd say the sweet spot for the tech is between the Arduino/Raspberry Pi and an average website/company microservice architecture. If you have very small hardware, MQTT could fit in the tiny space, and if you have 100 message highways interconnecting your data centers around the world, you may want AMQP. Between those, crossbar.io is great.