I love to see people advocating for better protocols and standards, but given the title I expected the author to present something better in the sense of supporting the same or more use cases with better efficiency and/or ergonomics, and I don't think protobuf does that.
Protobuf has advantages, but its strict schema requirement means it's missing support for a ton of use cases where JSON thrives.
A much stronger argument could be made for CBOR as a replacement for JSON for most use cases. CBOR has the same schema flexibility as JSON but has a more concise encoding.
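To make the conciseness claim concrete, here's a minimal sketch (the crates and the toy struct are my choices, not the parent's) comparing encoded sizes of the same value with serde_json and ciborium:

use serde::Serialize;

#[derive(Serialize)]
struct Reading {
    sensor: String,
    temp_c: i64,
    ok: bool,
}

fn main() {
    let r = Reading { sensor: "kitchen".into(), temp_c: 21, ok: true };

    let json = serde_json::to_vec(&r).unwrap();

    let mut cbor = Vec::new();
    ciborium::ser::into_writer(&r, &mut cbor).unwrap();

    // CBOR replaces quotes, colons, and braces with compact binary
    // type/length headers, so the same self-describing data is smaller.
    println!("json: {} bytes, cbor: {} bytes", json.len(), cbor.len());
}

Both encodings carry the field names in the payload, so you keep JSON's "no schema needed" property; the savings come purely from the framing.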
I think the strict schema of Protobuf might be one of the major improvements, as most APIs don't publish a JSON schema? I've always had to use ajv or superstruct to make sure payloads match a schema; Protobuf doesn't need that (supposedly).
One limitation of proto3 schemas is that they don't allow required fields. That makes it easier to remove a field in a later version in a backwards-compatible way, but sometimes fields really are required, and the message doesn't make any sense without them. Ideally, IMO, a message missing those fields would fail to parse. But with protobuf you instead get a default value, which can cause subtle bugs.
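For example, here's a minimal sketch of that silent-default behavior using the Rust prost crate (my choice of library; the struct is hypothetical):

use prost::Message;

#[derive(Clone, PartialEq, Message)]
struct Payment {
    // Required in spirit, but proto3 has no way to say so.
    #[prost(string, tag = "1")]
    account_id: String,
    #[prost(int64, tag = "2")]
    amount_cents: i64,
}

fn main() {
    // An empty buffer is a perfectly valid proto3 message:
    // every field is simply absent.
    let p = Payment::decode(&b""[..]).expect("decodes without error");

    // No parse failure; we silently get "" and 0 instead.
    assert_eq!(p.account_id, "");
    assert_eq!(p.amount_cents, 0);
}

A consumer that trusts account_id to be meaningful never gets the error it would need to notice the problem.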
I suppose I should publish this, but a Rust WASM module that just binds [ciborium] into JS only took me ~100 LoC. (By this I mean that it effectively provides a "cbor_load" function to JS which returns JS objects; I mention this just b/c I think some people have the impression that WASM can't interact with JS except by serializing stuff to/from bytestrings and/or JSON, which isn't really the whole story now that we have refs.)
But yes, a native implementation would save me the trouble!
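For anyone curious what such a binding can look like, a minimal sketch; wasm-bindgen and serde-wasm-bindgen are my assumed crates, since the parent only names ciborium:

use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn cbor_load(bytes: &[u8]) -> Result<JsValue, JsError> {
    // Decode into ciborium's dynamic Value...
    let value: ciborium::value::Value = ciborium::de::from_reader(bytes)
        .map_err(|e| JsError::new(&e.to_string()))?;
    // ...then hand JS a real object graph (via externref), not a byte
    // string or a JSON string it has to re-parse.
    serde_wasm_bindgen::to_value(&value).map_err(|e| JsError::new(&e.to_string()))
}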
Jujutsu has a command called absorb which is helpful for this sort of workflow: it pushes changes from the current commit into the closest ancestor commits that last modified the affected lines (so each hunk may be merged into a different commit).
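So the whole "commit a fixup, then rebase" dance becomes a single command run on your working-copy changes:

jj absorb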
This seems very similar to how I work by default. I sort of think in terms of "keyframes" and "frames", or "commits" and "fixes to commits."
Whenever I sit down to code with a purpose, I'll make a branch for that purpose:
git checkout -b wip/[desc]
When I make changes that I think will be a "keyframe" commit, I use:
git add .
git commit -m "wip: desc of chunk" (like maybe "wip: readme")
if I make refinements, I'll do:
git add .
git commit --amend
and when I make a new "keyframe commit":
git commit -m "wip: [desc 2]"
and still amend fixes.
Occasionally I'll make a change that I know fixes something earlier (i.e. an earlier "keyframe" commit) but that I won't remember later, so I'll commit it right away:
git add .
git commit -m "fixup: wip desc, enough to describe which keyframe commit should be amended"
at the end I'll do a git rebase -i main and see something like:
123 wip: add readme (it's already had a number of amends made to it)
456 wip: add Makefile (also has had amendments)
789 wip: add server (ditto)
876 fixup: readme stuff
098 fixup: more readme
543 fixup: makefile
and I'll use git rebase -i to mark the good commits as reword and put the fixups right under the ones they edit, ending up with something like the list below. Then I'll have a nice history to fast-forward into main.
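The edited todo from the example above would look roughly like this (reword and fixup are git's own todo commands):

reword 123 wip: add readme
fixup 876 fixup: readme stuff
fixup 098 fixup: more readme
reword 456 wip: add Makefile
fixup 543 fixup: makefile
reword 789 wip: add server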
I think you might be aware given the specific words you use but for the benefit of others:
git commit --fixup lets you attach new commits to previous hashes you specify, and then git rebase can automatically (or semi-manually, depending on settings) squash them.
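Concretely, using the example hashes above:

git commit --fixup=123
git rebase -i --autosquash main

The first creates a commit titled "fixup! wip: add readme"; the second places it right after 123 in the todo, already marked as fixup.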
They are quite different methods, as the respective implementations show. IME autofixup finds the relevant commit successfully more often. There's no reason you can't use both, of course; I would always check the result of either before actually doing the rebase.
Analog clocks mostly don't have the problem the author is complaining about, since most minute hands move once per second, and you can easily see (depending on your eyesight and distance to the clock) that the minute is partially consumed.
I agree that this is a downside of digital clocks that don't show seconds, though whether the best fix is rounding instead of averaging is hard to say.
How is "land which is too cold to use now might be useable on a warmer world" and "human beings aren't as tolerant to higher heat as we once thought" the 'exact opposite' of each other?
This seems to be a misuse of the term “endure”. I spent most of my childhood in a place that frequently got hotter than the numbers quoted in this article.
But was it hotter at the relative humidity numbers they state in the article (50% and 100%)?
I also grew up in a place that routinely saw triple-digit Fahrenheit temps, but the RH was < 20%. RH has a huge impact on the evaporative cooling capacity of the human body.