Originally, I was going to complain that this is more a critique of the gRPC ecosystem than of the protocol itself.
IMO, readability of generated code is largely a non-concern for the vast majority of use cases. And if anything, it's more a criticism of the codegen tool. Same with the complaints about the HTTP server used with Go.
However, I totally agree with criticisms of the enum naming conventions. It's an abomination and super leaky. Made worse by the fact it's part of the official(?) style guide https://protobuf.dev/programming-guides/style/#enums
To be fair, the ecosystem is kind of inextricably tied to the protocol. I’m not aware of any other production grade Go gRPC implementations besides the official one.
But gRPC isn't limited to Go. Criticizing gRPC as a whole for the HTTP library used with Go isn't valid. However, it's fair to take issue with the choice of HTTP library used by the most popular Go codegen tool.
> IMO, readability of generated code, is largely a non concern for the vast majority use cases
Completely disagree. More often than I'd like, I've had to read generated code from various codegen tools, either to figure out what it was doing (so I could fix my usage where I was making bad assumptions about the generated interface) or to figure out why the generated code wasn't doing what it was supposed to (because it was buggy). All code eventually needs to be read by someone, even code that's generated on the fly during the build process.
I read the generated code quite often, and each time it boggles my mind who in the world came up with that crap. The readability and code quality are seriously awful, and that's a valid criticism. When the generated code is indeed buggy, it's a double whammy.
However, it is also true that a lot of devs don't read it or simply don't care, so I would argue it is mostly a non-issue in practice, contrary to what the author of the article suggests. My life is certainly not affected by ugly generated code.
Also worth mentioning: when I wrote code generators in the past, albeit less complex ones, it was rarely the common case that made the generated code ugly, but rather the coverage of a gazillion corner cases.
Can the generated code be 2-4% faster? Sure. Is anyone updating the code generator for that? Well, if you feel the small gain is worth the pain of touching a fairly complicated generator that already produces monstrous code, patch it, test it, and file a PR. Point is, none of the proto maintainers will lift a finger for 2% better.
In that case, I would imagine you would struggle with any clients generated via an IDL. The same "issue" occurs with openapi/swagger generated clients.
If you're not working on whatever is responsible for generating the code, you're not supposed to have to look under the hood. The whole purpose is to abstract away the implementation details, it's contract driven development. If you find yourself frequently needing to read the underlying code to figure out what's going on, the problem isn't with the tool, it's elsewhere.
>In that case, I would imagine you would struggle with any clients generated via an IDL. The same "issue" occurs with openapi/swagger generated clients.
Sometimes. Only sometimes. And that doesn't mean it's not a problem there either.
Abstractions that completely hide what they wrap can claim "no need to look under the hood", but generated RPC code fails miserably there when something fails or changes, particularly during code generation (where you usually can't even get partial data or intermediate state, just "it crashed in this core loop that is executed tens of thousands of times, good luck").
And on this front, protobuf and gRPC are rather bad. Breaking changes are somewhat frequent, almost never have a breaking semver change so they surprise everyone, and for some languages (e.g. Go) they cannot coexist with the previous versions.
Figuring out what broke, why, and how to work around or fix or just live with it is made vastly more difficult by unreadable generated code.
Warren Buffett files documents with the SEC to disclose his positions if they are held by Berkshire Hathaway.
FWIW, what deepfuckingvalue is doing is fine by me as long as he isn’t telling a private group to buy a shitload of calls because he is going to move the price of GME by tweeting about it.
These posts are not evidence to support an argument; they're conspiracy-fueled speculation that is worth less than zero.
Superstonk is just r/conspiracy but exclusively focused on Wall Street.
If the price immediately drops 99.97% with no intervening orders on the tape, there was definitely a technical glitch.
There is no situation where every market maker simultaneously pulls the bid and lets the price fall 99.97%. Arbitrageurs would've had a field day, since BRK.A would've been priced lower than BRK.B; it would've been scooped up and resold once the valuations of the two share classes returned to parity.
The market participants that are market making BRK.A have plenty of capital, plus they’re contractually obligated to provide a bid and an ask while the market is open, no exceptions. They would be holding shares as part of their market making activities and have no reason to let the bid fall 99.97%.
The LME unwinding nickel trades a few years back was actually a bad thing: one market participant was extremely short and didn't have to post margin to cover their position when a short squeeze happened and some trades were unwound. That was an actual example of an exchange helping out a major customer by unwinding trades; this BRK.A situation is not that.
We had this feature operational against Huggingface models (pre-RLHF advancements) but didn't launch it because it was very brittle. ChatGPT (gpt-3.5-turbo) works reasonably well, but GPT4 can handle complexity much better.
More importantly, the set of formulas we support is not always one-to-one with Excel. We try to keep them the same, but sometimes improvements and additions are needed. So we have to feed in a pretty complex prompt with a bunch of context (like source and destination types, schemas, validations, etc.). With GPT4, we can do this with less explanation and higher accuracy.