
I love to see people advocating for better protocols and standards, but given the title I expected the author to present something better in the sense of supporting the same or more use cases with better efficiency and/or ergonomics. I don't think protobuf does that.

Protobuf has advantages, but its strict schema requirement leaves it missing support for a ton of use cases where JSON thrives.

A much stronger argument could be made for CBOR as a replacement for JSON for most use cases. CBOR has the same schema flexibility as JSON but has a more concise encoding.
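As a small illustration of the size difference, here's the same one-key document hand-encoded per RFC 8949:

    {"a": 1}
    JSON: 7b 22 61 22 3a 31 7d     (7 bytes)
    CBOR: a1 61 61 01              (4 bytes: map of 1 pair, 1-char text key "a", uint 1)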


I think the strict schema of Protobuf might be one of the major improvements, as most APIs don't publish a JSON schema? I've always had to use ajv or superstruct to make sure payloads match a schema; Protobuf doesn't need that (supposedly).
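With JSON that validation is an extra step you have to wire up yourself, e.g. roughly like this with the ajv-cli package (a sketch; the file names are made up):

    $ npm install -g ajv-cli
    $ ajv validate -s payload.schema.json -d payload.json
    # prints "payload.json valid", or the validation errors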


One limitation of proto3 schemas is that they don't allow required fields. That makes it easier to remove a field in a later version in a backwards-compatible way, but sometimes fields really are required, and the message doesn't make any sense without them. Ideally, IMO, a message missing those fields would fail to parse. But with protobuf, you instead get a default value, which can cause subtle bugs.
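You can see the failure mode with protoc's text decoder (a sketch; the message and field names are made up):

    $ cat > payment.proto <<'EOF'
    syntax = "proto3";
    message Payment {
      uint64 user_id = 1;  // proto3 offers no way to mark this required
      uint64 amount = 2;
    }
    EOF
    $ printf '' | protoc --decode=Payment payment.proto

No output and no error: the empty payload "parses" successfully, and user_id and amount silently read back as 0.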


Okay, this is a definite issue: you're still stuck validating inputs/outputs.


We need browsers to support CBOR APIs… and it shouldn't be that hard, as they all have internal implementations now.


I suppose I should publish this, but a WASM module in Rust which just binds [ciborium] into JS only took me ~100 LoC. (And by this I mean that it effectively provides a "cbor_load" function to JS, which returns JS objects; I mention this just b/c I think some people have the impression that WASM can't interact with JS except by serializing stuff to/from bytestrings and/or JSON, which isn't really the whole story now with refs.)

But yes, a native implementation would save me the trouble!

[ciborium]: a Rust CBOR library; https://docs.rs/ciborium/latest/ciborium/


Jujutsu has a command that's helpful for this sort of workflow called absorb, which pushes changes from the current commit into the most recent ancestor commits that modified the same lines. (Each hunk may be absorbed into a different commit.)
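If the fixes sit in the working-copy commit, it's a single command (a sketch; I believe newer jj versions also accept a --from revset when the changes live elsewhere):

    $ jj absorb    # hunks in @ get squashed into the nearest mutable ancestors that last touched those lines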


This seems very similar to how I work by default. I sort of think in terms of "keyframes" and "frames", or "commits" and "fixes to commits."

Whenever I sit down to code with a purpose, I'll make a branch for that purpose:

    git checkout -b wip/[desc]

When I make changes that I think will be a "keyframe" commit, I use:

    git add .
    git commit -m "wip: desc of chunk"

(like maybe "wip: readme")

if I make refinements, I'll do:

    git add .
    git commit --amend

and when I make a new "keyframe" commit:

    git commit -m "wip: [desc 2]"

and still amend fixes.

Occasionally I'll make a change that I know fixes something earlier (i.e. an earlier "keyframe" commit) but that I won't remember later. Then I'll do:

    git add .
    git commit -m "fixup: wip desc, enough to describe which keyframe commit should be amended"

at the end I'll do a git rebase -i main and see something like:

    123 wip: add readme (it's already had a number of amends made to it)
    456 wip: add Makefile (also has had amendments)
    789 wip: add server (ditto)
    876 fixup: readme stuff
    098 fixup: more readme
    543 fixup: makefile

and I'll edit the todo list: reword the good commits, and move each fixup right under the commit it edits. Then I'll have a nice history to fast-forward into main.
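The reordered todo list ends up looking something like this (with the hashes from above):

    reword 123 wip: add readme
    fixup 876 fixup: readme stuff
    fixup 098 fixup: more readme
    reword 456 wip: add Makefile
    fixup 543 fixup: makefile
    reword 789 wip: add server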


I think you might be aware, given the specific words you use, but for the benefit of others:

`git commit --fixup` lets you attach new commits to previous commits you specify, and rebase can then automatically (or semi-manually, depending on settings) squash them.
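For example (a sketch, with a placeholder hash):

    $ git commit --fixup=abc1234         # makes a commit titled "fixup! <subject of abc1234>"
    $ git rebase -i --autosquash main    # moves it under abc1234 and marks it as fixup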


You can combine this with the `:/<text>` syntax [0] for matching the most recent commit with a given text in the commit message, e.g.

    $ git commit frobnicator/ -m "refactor the frobnicator"

    [ more work ]

    $ git commit eschaton/ -m "immanentize the eschaton"

    [ oops, missed a typo ]

    $ git commit frobnicator/ --fixup :/frobnic
[0]: https://stackoverflow.com/a/52039150


Thanks, I am, but I always found it easier to just give the new commit a name I know how to squash rather than type in a SHA.

The other post about being able to do it on a substring match sounds way more ergonomic, though. I'll have to try that!


git-absorb (https://github.com/tummychow/git-absorb) does a bit more, figuring out the exact changes that should be fixed up.
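Usage is roughly (per its README, with main as the example target branch):

    $ git add -u                        # stage your fixes
    $ git absorb                        # create fixup! commits aimed at the commits that last touched each hunk
    $ git rebase -i --autosquash main   # fold them in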


git-autofixup is better and easier to install: https://github.com/torbiak/git-autofixup


Could you elaborate on how it is better?


They use quite different methods, explained in the respective repos. IME autofixup finds the relevant commit successfully more often. There's no reason you can't use both, of course. I would always check the results of either before actually doing the rebase.


jj absorb does the same, as far as I understand.


Yes, totally useful compared to the base git commands.

And also: melding the "changed twice" (or thrice...) mutations into a single commit is a brilliant isolation of a subtle, common pattern.


git-absorb does exist [1]. It seems to be inspired by a Mercurial subcommand of the same name. It's also available in most distro repos.

[1] https://github.com/tummychow/git-absorb


Analog clocks mostly don't have the problem the author is complaining about, since most minute hands advance a small step once per second, and you can easily see (depending on your eyesight and distance to the clock) that the minute is partially consumed.

I agree, though, that this is a downside of digital clocks that don't show seconds, although whether the best fix is to round to the nearest minute instead of truncating is hard to say.


It wasn’t that long ago that an article making the exact opposite point was at the top of this site.


Got a link?


Can’t find it. The title was along the lines of "global warming might result in a net increase in usable land."


How is "land which is too cold to use now might be useable on a warmer world" and "human beings aren't as tolerant to higher heat as we once thought" the 'exact opposite' of each other?


Well, it might!

It might also be necessary for many people to migrate to cooler areas than where they currently live.


While I suspect this is probably not true, it is plausible, given how much land there is in the northernmost regions of the world.


This seems to be a misuse of the term “endure”. I spent most of my childhood in a place that frequently got hotter than the numbers quoted in this article.


But was it hotter at the relative humidity numbers they state in the article (50% and 100%)?

I also grew up in a place that routinely saw triple-digit Fahrenheit temps, but the RH was < 20%. RH has a huge impact on the evaporative cooling capacity of the human body.


50% yes. It also got to 100% humidity occasionally, which was definitely worse than the hot dry days.


Likely not in a 100% humidity environment.


Places that routinely get such heat (I grew up in one, also) are normally pretty dry. They are talking about 31C *wet bulb*.
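At 100% humidity the wet-bulb temperature is just the air temperature, but at 50% RH a 31C wet bulb corresponds to roughly a 40C day. Stull's empirical approximation (a rough formula, near sea level; awk used only for the arithmetic) shows the ballpark:

    $ awk 'BEGIN {
        T = 40; RH = 50    # 40C dry bulb at 50% relative humidity
        tw = T * atan2(0.151977 * sqrt(RH + 8.313659), 1)
        tw += atan2(T + RH, 1) - atan2(RH - 1.676331, 1)
        tw += 0.00391838 * RH ^ 1.5 * atan2(0.023101 * RH, 1) - 4.686035
        printf "wet bulb = %.1fC\n", tw    # prints ~30.9, right at the 31C threshold
    }'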


Judging from Twitter comments, I wonder if it’s due to a high number of chargebacks.

