> It's used by many companies and projects
No one cares; they would care if you had contributions from many parties, which indicates longevity. Do tell in that case.
I do think your project is interesting, it's just the title doesn't do it justice.
I have been working as a programmer for 20 years and have contributed to some popular projects; if you're interested, please check them on GitHub. You can see my contributions at https://github.com/kevwan: 946 contributions in the last year.
Would you please give me some suggestions on the title? Thank you very much!
> Cloud Native Computing Foundation (CNCF) serves as the vendor-neutral home for many of the fastest-growing open source projects, including Kubernetes, Prometheus, and Envoy.
Do Erlang and OTP count by this definition?
It would be a fun project, btw, to detect GitHub bot accounts and projects that obviously paid to get stars.
What is it trying to show?
What I read from the graph:
If my framework is beego and I want to get 30k rps, I would need to limit my data to processing times in the 0ms bucket. I think? Or maybe gin will allow for 7.5k rps if my processing time is 500ms?
Like, I think I can get to similar information to what you are trying to show if I read backwards, upside down, and rotated 90 degrees.
I wish Go just got something like Lisp macros that could work at compile time only. Running macros at run time isn't really the big deal; generating code in the same language and context as the normal code is the big deal.
Adding generics really doesn't help much here.
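To make the gap concrete, here's a minimal Go sketch (illustrative, not from go-zero): generics cover value-level abstraction fine, but anything that needs to generate new declarations still goes through an external tool invoked via `go generate`, outside the language itself.

```go
package main

import "fmt"

// Generics handle value-level abstraction like this:
type Ordered interface {
	~int | ~int64 | ~float64 | ~string
}

// Min works for any ordered type, no code generation needed.
func Min[T Ordered](a, b T) T {
	if a < b {
		return a
	}
	return b
}

// But generating new types, methods, or boilerplate at compile
// time isn't expressible in the language itself; you shell out
// to an external generator via a //go:generate directive, which
// is exactly the "not in the same language and context" problem
// described above.

func main() {
	fmt.Println(Min(3, 5))         // 3
	fmt.Println(Min("go", "zero")) // go
}
```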
1) Go library/main that wraps gRPC handlers to build a server, like Dropwizard or TwitterServer?
2) A CLI tool "goctl" that does code generation for setting up the code layout (server and client API stubs), as well as some Docker/k8s functionality for deploying.
I'm not into Go so can't comment on aspects like the performance benchmark.
However, the project documentation is quite interesting if you'd like to see how the production environment of the company this was presumably built for is set up: CI/CD, logging, etc.
It would be more helpful if the architecture diagram showed the roles of the different pieces better. For example, it isn't immediately clear which component performs authentication so that the JWT can then be used for service authorization. So the question is how much the framework is tied to assumptions about a particular production setup.
go-zero is easy to use with Docker/Kubernetes; we built a command-line tool to generate the Dockerfile and Kubernetes deployment files. It's easy to use on ECS instances as well: because Go packages everything into a single executable binary, you can use tools like supervisord to manage it easily.
With the name go-zero, I was trying to convey two things:
1. go from zero with microservices development
2. go back to zero when you hit something hard to fix; perhaps you're thinking about the problem itself in the wrong way
Also, go-zero consists of 3 main parts:
1. an API gateway, along with a newly created, simple API IDL to describe APIs
2. zRPC, with microservices governance built on top of gRPC
3. goctl, a command line tool to make microservices development much easier
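For reference, the API IDL in point 1 looks roughly like this (a sketch based on go-zero's README; the handler and route names are illustrative):

```
type Request {
	Name string `path:"name"`
}

type Response {
	Message string `json:"message"`
}

service greet-api {
	@handler GreetHandler
	get /greet/:name(Request) returns (Response)
}
```

goctl can then generate the server layout and handler stubs from such a file.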
Not sure if I answered your question?
I look forward to trying to replicate something similar for my own understanding. Any architecture docs will be gratefully received.
For example: most people just want to receive infinite HTTP requests and route them to custom business logic, automatically scaling as needed. So why can't I just push a button and make that happen? AWS ECS has a command-line client now that attempts to do that, but it's clunky and unreliable. And K8s doesn't have anything like that; it's all hand-crafted YAML files and mountains of custom cluster configuration. I just want to run some code at scale. And we've implemented that (scaling web requests/business logic) a million times by now. But it's never really gotten significantly easier.
Docker is a good example of the amount of work you need to pour into a technical solution to make it a game-changing user solution. Docker is like 50 different technical solutions rolled up into one user-friendly command-line tool and a backend daemon. You could do most of what Docker does in LXC, but it was painful. Docker made it easy. That's what we need for running services at scale in general. Docker Swarm was created basically to scratch that itch, but it still hasn't really taken off. I'm hopeful it does one day.
It never will; that ship sailed a long time ago. Docker the company is struggling to survive after making wrong bets multiple times. The "enterprise" part, including Swarm, was sold off to Mirantis after years of neglect, and got some much-needed investment and bug fixes. Nonetheless, Docker Swarm is basically abandonware with a terrible reputation and no viable future.
Hashicorp's Nomad is much better placed to compete with Kubernetes. Shameless plug of my article on the matter:
A few weird API decisions, but I found the developer workflow to be better.
I got rid of some weird allocation behaviours I couldn't explain with Nomad (the real motivator to get off Nomad).
I also don't have to install anything.
Can you provide a source for the terrible reputation?
Kubernetes gets some abstractions right, such as pods, StatefulSets, ingresses, and deployments.
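As a point of reference, the Deployment abstraction singled out above can be this small (a minimal sketch; the names and image are made up):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greet
spec:
  replicas: 3
  selector:
    matchLabels:
      app: greet
  template:
    metadata:
      labels:
        app: greet
    spec:
      containers:
        - name: greet
          image: example/greet:latest
          ports:
            - containerPort: 8080
```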
What we now need is an abstraction over endpoints, routes, and request/response types/schemas (to do automatic marshalling), plus ID-space sharding like Vitess, or logical server identifiers as part of the GUID indicating which server the data belongs to. A microservice becomes a bucket of routes/RPC calls, not hardcoded to one particular service.
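The "bucket of typed routes with automatic marshalling" idea can be sketched in a few lines of Go with generics (a hypothetical API of my own invention, not any existing framework's): each route is registered with concrete request/response types, and the JSON (un)marshalling happens in one generic wrapper.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

type GreetReq struct {
	Name string `json:"name"`
}

type GreetResp struct {
	Greeting string `json:"greeting"`
}

// Handle registers a route whose request/response types are
// (un)marshalled automatically; the business logic only sees
// typed values, never http.Request or raw bytes.
func Handle[Req, Resp any](mux *http.ServeMux, pattern string, fn func(Req) (Resp, error)) {
	mux.HandleFunc(pattern, func(w http.ResponseWriter, r *http.Request) {
		var req Req
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		resp, err := fn(req)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(resp)
	})
}

func main() {
	mux := http.NewServeMux()
	// Type parameters are inferred from the handler's signature.
	Handle(mux, "/greet", func(req GreetReq) (GreetResp, error) {
		return GreetResp{Greeting: "hello " + req.Name}, nil
	})

	// Exercise the route against an in-process test server.
	srv := httptest.NewServer(mux)
	defer srv.Close()
	body, _ := json.Marshal(GreetReq{Name: "world"})
	resp, _ := http.Post(srv.URL+"/greet", "application/json", bytes.NewReader(body))
	var out GreetResp
	json.NewDecoder(resp.Body).Decode(&out)
	fmt.Println(out.Greeting) // hello world
}
```

The point is that routing, schema, and marshalling live in the abstraction, so a service really does reduce to a bag of typed functions.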
Envoy, Linkerd, and Hystrix look like a nightmare to configure correctly. I think the configuration behind them could be more elegant. The software itself is probably fine, but it's a nightmare to get the right data structure that does what you want; hence the mountains of YAML configuration you mentioned.
I don't want to care if it's the storage layer or the application layer - it should all be scalable.
I want to be able to set a maximum budget in financial terms and an open-ended autoscaling ceiling.