Stripe runs a ton of Mongo replication clusters and uses home-grown proxy services on top of Mongo that manage and control where data lives, so the services don't have to think about that side of things. I'm not sure what changes have been made to Mongo itself, but for the most part it's standard Mongo 4.
I love protobufs as a type-safe way of defining messages and providing auto-generated clients across languages.
I can't stand gRPC. It's such a Google-developed product and protocol that trying to use it in a simpler system (i.e. everyone else's) is frustrating at best and infuriating at worst. Everything is custom and different from what you expect to deal with, when at its core it is still just HTTP.
Something like Twirp (https://github.com/twitchtv/twirp) is so much better. Use existing transports and protocols, then everything else Just Works.
Yeah, and the code gen (for Go at least) very clearly assumes you're using a monorepo, and how dare you think of doing anything else, you monster.
E.g. there's a type registry, which means you can't ever have the same proto type compiled by two different configs (it'll panic at import time). In a monorepo that's (potentially) fine, but for the rest of the world it means libraries can't embed the generated code that they rely on (if the spec is shared), which means they can't customize it (no perf/size/etc tradeoff possible), can't depend on different versions of codegen or .proto files (despite code clearly needing specific versions, and breaking changes to the generated code are somewhat common), can't have convenience plugins for things that would benefit from it, etc.
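A quick sketch of that conflict, assuming the google.golang.org/protobuf module (the registry behavior is real; the local-registry detour is just to show the error instead of a crash): every generated package registers its .proto file path into the process-wide protoregistry.GlobalFiles in init(), and a duplicate registration there panics by default (only tunable via the GOLANG_PROTOBUF_REGISTRATION_CONFLICT env var). Doing the same registration twice against a local registry surfaces the identical conflict as an error:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/reflect/protoregistry"
	"google.golang.org/protobuf/types/known/durationpb"
)

func main() {
	// Generated packages do this against protoregistry.GlobalFiles in their
	// init(); a second compiled copy of the same .proto (e.g. vendored by two
	// different libraries) hits the conflict there and panics by default.
	var files protoregistry.Files
	fd := durationpb.File_google_protobuf_duration_proto

	fmt.Println(files.RegisterFile(fd)) // <nil>
	fmt.Println(files.RegisterFile(fd)) // duplicate registration: returns a conflict error
}
```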
And all of this to support... an almost-completely-unused text protocol. And `google.protobuf.Any` auto-return-value-typing, but tbh I think that's simply a bad feature, and it would be better modeled as a per-Any-deserialize-call registry, where you can do whatever the heck you like (or not use it, and just `.UnmarshalTo(&out)` with the correct type).
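For what it's worth, that explicit style already exists in Go's anypb package; here's a minimal sketch (the Duration payload is just an arbitrary stand-in) where the caller states the expected type instead of having the registry pick one:

```go
package main

import (
	"fmt"
	"time"

	"google.golang.org/protobuf/types/known/anypb"
	"google.golang.org/protobuf/types/known/durationpb"
)

func main() {
	// Sender side: pack a concrete message into an Any.
	packed, err := anypb.New(durationpb.New(3 * time.Second))
	if err != nil {
		panic(err)
	}

	// Receiver side: unpack into an explicitly chosen type. Unlike
	// UnmarshalNew, the type URL isn't resolved through the global registry;
	// a mismatch simply comes back as an error.
	var d durationpb.Duration
	if err := packed.UnmarshalTo(&d); err != nil {
		panic(err)
	}
	fmt.Println(d.AsDuration()) // 3s
}
```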
---
What really gets my goat here is that none of this makes sense at all for a protocol. The whole point of having a language-and-implementation-agnostic binary protocol is to not be dependent on specific codegen / languages / etc, but per above the whole Go protobuf ecosystem is rigidly locked in at all times, and nearly every change is required to be a breaking change. And if you make that breaking change in a new Go module version, like you should, you immediately break anyone who uses two of them at once, so it must also always be a semver-violating breaking change.
I used protobuf extensively with Kafka and I remember having to be quite particular about how the proto files/packages were arranged so as to avoid naming and versioning conflicts.
We never generated Go code from it, but it took a bit of fine-tuning to get generated code that felt at least somewhat ergonomic for Ruby and TypeScript. It usually involved using some language-specific alternative to protoc, because the code generated by protoc itself was practically unreadable. IIRC, in the case of TypeScript I had to write a script that messed around with the directory structure so you could use sensible import paths and aliases, because TS itself wasn't discovering them automatically.
That's stuff you can work with and solve technically. Initial faff but it's one and done. The worst problem I had with it was protobuf3 stating every field is optional by default, and the company I worked at basically developed a custom schema registry setup that declared every field as required by default, with a custom type to mark a field as optional. It turned literally every modification to a protobuf definition into a breaking change and, what's worse, it wasn't done end to end and the failures were silent, so you'd end up with missing data everywhere without knowing it for weeks.
>the failures were silent, so you'd end up with missing data everywhere without knowing it for weeks.
This is the main reason I think protobuf's "zero values are simply not communicated" is fundamentally wrong. Missing data is one of the easiest flaws to miss, and it tends to cause problems far away from the source of the flaw, in both time and space, which makes it extremely hard to notice and fix.
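To make that concrete, here's a minimal sketch (using the well-known Duration message purely as a stand-in for any proto3 message with implicit-presence scalar fields): a field deliberately set to zero and a field that was never touched serialize to identical bytes, so the receiver has no way to tell them apart:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/durationpb"
)

func main() {
	neverSet := &durationpb.Duration{}
	explicitlyZero := &durationpb.Duration{Seconds: 0, Nanos: 0}

	a, _ := proto.Marshal(neverSet)
	b, _ := proto.Marshal(explicitlyZero)

	// Both encode to zero bytes: implicit-presence scalar fields at their
	// zero value are simply omitted from the wire format.
	fmt.Println(len(a), len(b)) // 0 0
}
```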
I get the arguments in its favor. I get the arguments in favor of "everything is optional by default". But presence is utterly critical in detecting flaws like this, and it can't always be addressed in a backwards-compatible way by application code. E.g. in proto's case, it's not possible because that data does not exist, and adding it would change the binary data. Even binary-compatible workarounds like "add a field with a presence fieldset" aren't usable because that unrecognized field will be silently ignored by older consumers, so you're right back to where you started.
It needs to exist from day 1 or you're shooting your users in the feet.
> E.g. there's a type registry, which means you can't ever have the same proto type compiled by two different configs (it'll panic at import time). In a monorepo that's (potentially) fine, but for the rest of the world it means libraries can't embed the generated code that they rely on (if the spec is shared), which means they can't customize it (no perf/size/etc tradeoff possible), can't depend on different versions of codegen or .proto files (despite code clearly needing specific versions, and breaking changes to the generated code are somewhat common), can't have convenience plugins for things that would benefit from it, etc.
I actually forgot about this when writing the article. This is a major pain in the ass in both Go and Python, and basically forces you to ensure that no two services have the same file called "api/users/service.proto". There have been multiple instances where we literally had to rename a proto file to something like reponame_service.proto to avoid this limitation.
Yet ESDF is objectively better because it frees up Q, A, Z, and W to easily be used for other keybinds. The number of people I've seen stretch their pinky down to Left-CTRL for crouch boggles my mind. With ESDF, crouch is A!
Yes, this is a very tiny hill to die on, but it's my hill!
If I were to switch to ESDF, I would have only one key to the right of my index finger in each row, as opposed to two. (I have a split keyboard.) You could argue that I would have two keys to the left of my pinky finger, but I find that reach more difficult.
I would also have a harder time orienting my fingers without looking, because I would no longer have the differently-shaped keys under my pinky to let me know I was in the right place. (The indentation that indicates home position on my F key is subtle, and easy to miss when I'm focused on the action in a game.)
I have large hands. Left control is the natural resting place of my pinky, while reaching A from an ESDF rest requires I adjust my whole hand to scrunch my fingers closer.
ESDF is also more natural to touch-typers as it’s basically the same as the left hand “home position” of ASDF. And it’s easy to locate without looking, since the F key usually has a raised marker on most keyboards.
I touch type, but I've always felt WASD was more "natural" or comfortable for gaming. I've tried ESDF but can't get used to it... For me I think it really is that the edge of the keyboard helps anchor and give a solid reference for where the hand is, and those keys at the edge will be easily hittable by everyone. Also, a lot of times my pinky awkwardly rests in between Shift and Z.
As an ESDF gamer (blame Tribes) I agree, but I've run into issues with (IIRC) E, Shift, and Space on keyboards that aren't n-key rollover, though it's been over a decade since I last owned one. So I think that's the reason for WASD.
> Yet ESDF is objectively better [...] crouch is A!
I think you've forgotten the hardware limitations (rollover/ghosting) that usually made it objectively worse instead, where "A" often wouldn't behave the same as ctrl or shift when multiple keys are being held down.
Not being able to strafe while holding crouch is a handicap.
There actually was a time when computer scientists at Atari studied user interface stuff with game performance in mind. All that ended years before WASD, but it’s worth pointing out that WASD came about as part of a competitive process. It’s not like anyone’s being forced to use something that doesn’t work for them (except on consoles).
> stretch their pinky down to Left-CTRL for crouch boggles my mind. With ESDF, crouch is A!
Because those are the only two conceivable choices…
The point was that there is a way to do "objective" study of Human-Computer Interaction, with game controllers as well as everything else, and then there are random opinions offered as "objective" for no reason other than people not caring about what "objective" means.
But you're right about those controllers being somewhat shit. In fairness to the amazing Atari research labs that were used as part of a perhaps poor comparison, they had nothing to do with the design of those controllers and weren't around long enough to really affect much on the hardware side before their iteration of Atari died. For a couple of years Atari had some amazing researchers, but they would have benefitted from some amazing (or merely competent) senior management as well.
Ehhh, sort of. I think people copied it from Thresh (a famous Quake player). It was better than the default, and "good enough" for competition. So a lot of people simply copied it and never thought about it again, especially after games started making it the default.
I don't believe that competitive players are unwilling to experiment with this stuff to get an edge. On the other hand, if someone has polled today's competitive gamers to see what they're doing with their keybindings, those results weren't published in this article. The author might ascribe more importance to the defaults than is really appropriate.
An issue with ESDF that WASD dodged was that some keyboards had groupings that could only register a limited number of simultaneous key-downs. I believe ESDF was in one group, but WASD spanned two. So on those keyboards you could get ignored inputs specifically when things were getting frantic.
I've never experienced that myself. Instead, I've been ESDF fo' lyfe because I had so many keybinds in HL1 multiplayer that they surrounded those keys on all sides.
Input ghosting. It happens when they don’t use a 1-to-1 switch matrix. There's probably some truth to what you’re saying; I could see the column/row inputs being different.
Disagree. If I put my pinky on A for ESDF then I have to rotate my wrist enough such that pressing E is uncomfortable. The natural resting place for my pinky with WASD is on shift, not tab. CTRL is trivial to reach. Tab is more awkward.
Funny, I haven't seen ESDF mentioned in ages. I used to use ESDF exclusively. I eventually gave it up, though, just due to needing to remap it in every damn game. Plus some games wouldn't let me remap, which was even more annoying.
It is incredible how many AAA releases have an abysmal rebinding experience, like they gave the job to the intern and nobody attempted to try it once: tutorials that hard-code the controls, special modes (driving, UI) that only respect the original bindings, etc.
Shift can be another interesting key to hit. Many non-US keyboards have an extra key between Shift and Z (or whatever letter takes the place of Z in a particular region). Not only are you reaching further to hit the Shift key, but the target is not much larger than a normal key.
In countries like Canada, life is even more fun. Most keyboards are of the US variety, yet many laptops come with the multi-lingual variant. Better get used to two keyboard layouts if you're using WASD with shift/control.
I've been using ESDF (well, .oeu now that I'm using the Dvorak layout, but details) since I was using Azerty keyboards. A (Q on Azerty) for jump, C (J on Dvorak) for crouch, Z (; on Dvorak) for walk, and I've got enough reach for plenty of other keys.
Got started doing that when I played Bungie's Oni, which required editing an INI file to rebind keys.
The Crowdstrike issue is causing the Azure issue, as I understand it: lots of machines using that software all updated to the blue screen feature around 6 PM, apparently.
I get what the author is trying to say, but there's a fundamental difference between "complex" and "complicated".
Life is complex (and often quite complicated). Most actual solutions that people need are solving complex problems. You can't really solve complex problems with simple software, you'll just end up building a complex (and often complicated) web of simple solutions.
Our job as software engineers is to prevent the software from getting complicated, managing the complexity such that it's able to morph as the users' needs change over time.
To fit the article, adding a grill to a car would be complicating the car, not making it more complex.
> Most actual solutions that people need are solving complex problems.
My general experience has been the problems people want me to solve are not the actual problems that they need to solve.
The problems that need to be solved have very simple solutions.
The purported solutions to the purported problems are usually a way to avoid doing the simple solutions because it is "inconvenient" to people who want the other solution.
Maybe I'm jaded. Maybe I've had a bad lot. But it's happened enough, and in enough disparate instances, for me to think this is just humanity being humanity.
Edit to add a caveat: this applies generally, i.e. to most people most of the time. I feel like I cast a bit of a wide net with how this comment was phrased.
I'm of the opinion that nothing is ever simple. We live in a fascinatingly complex universe, and the closer you look the more information you can see packed inside everything.
Simple is just a name we attach to domains of well understood, well managed complexity.
Effective altruism never had solid footing to begin with. It was nothing more than a ploy to let rich people feel good about being rich (or at least, to convince them that they don't have to feel guilty about everyone they've stepped on over the years).
If these people were actually altruistic, they would be giving away their wealth today.
Effective altruism generally encourages donating 10%+ of income, which is quite a bit more than the average American gives. It’s hard for me to see how it isn’t doing more good than harm, even with all the buying-castles nonsense.