Hacker News | dwattttt's comments

You _still_ need a consistent way to talk about values; IPC systems tackle the same problems under the names marshalling and de/serialisation. They just tend to take much more conservative approaches to exactly this kind of problem (you don't have to care about integer endianness if integers are expressed as strings).
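A minimal Python sketch of that endianness point: a binary encoding must commit to a byte order, while a text encoding such as JSON ships the digits and sidesteps the question entirely.

```python
import json
import struct

value = 305419896  # 0x12345678

# Binary IPC must pick a byte order explicitly; the same integer has
# different byte sequences under little- vs big-endian packing.
little = struct.pack("<I", value)
big = struct.pack(">I", value)
assert little == bytes.fromhex("78563412")
assert big == bytes.fromhex("12345678")

# A text encoding carries the integer as the digits "305419896" on
# every platform, so endianness never enters the picture.
wire = json.dumps({"value": value})
assert json.loads(wire)["value"] == value
```

The cost of the conservative option is size and parse time, which is the trade-off these systems accept.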

> Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution ...

As much as I'm not a fan of the status quo, the argument for changing "Possible" back to "Permissible" isn't particularly solid; the rest of the paragraph makes it clear that it's not an exhaustive list of behaviours, merely examples from a "range".


Yes, I agree that there are better solutions, this one is just minimal.

The "range" clearly indicates that you can do something that's semantically in-between the options provided.

What current compilers do is nowhere near that range.


I think it's time we had the quantum computing talk: https://www.smbc-comics.com/comic/the-talk-3

I was really hoping it would be executing Go in kernel.


You should produce a key per device, and produce a backup key that is safely stored & not used anywhere.

You can recover from losing all your devices via your break-glass backup key, and you limit the blast radius of "my key got stolen" (or the more likely "I screwed up and pushed my key somewhere public") from rotating all your keys down to rotating a single device's key.
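A sketch of that layout with standard ssh-keygen invocations (file names and comments are illustrative): one key per device plus a backup key that is generated once and then stored offline.

```shell
# One ed25519 key per device, plus a break-glass backup key that is
# never loaded into an agent (names are illustrative).
ssh-keygen -t ed25519 -f ./id_ed25519_laptop -C "laptop" -N "" -q
ssh-keygen -t ed25519 -f ./id_ed25519_phone  -C "phone"  -N "" -q
ssh-keygen -t ed25519 -f ./id_ed25519_backup -C "backup" -N "" -q

# Enrol all three public keys everywhere; only the device keys are
# ever used day to day.
cat id_ed25519_laptop.pub id_ed25519_phone.pub id_ed25519_backup.pub \
    > authorized_keys
```

Revoking a stolen laptop key then means deleting one line from each service's enrolment, not re-keying everything.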


... which is completely nonviable if you connect to more than a single service.

I agree that you should use a different key per device, but when you connect to over a dozen different services/machines it quickly becomes a serious chore to add another key. Have fun spending an hour enrolling your new device, provided you can even remember every single place it should be enrolled with.


SSH certificates solve this issue.

AFAIK there is no equivalent for Passkeys.


Unfortunately SSH certificates have really poor uptake in practice, and it's essentially unheard of to have a personal CA instead of a per-company CA.

But yes, having a single long-living "primary key" everyone can trust which you'd use to generate short-living per-device "secondary keys" would indeed be the ideal solution.
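That model is exactly what OpenSSH certificates implement; a hedged sketch with illustrative names (the CA key is the long-lived "primary", signed device keys are the short-lived "secondaries"):

```shell
# Long-lived CA key, kept offline; short-lived per-device key.
ssh-keygen -t ed25519 -f ./user_ca   -N "" -q
ssh-keygen -t ed25519 -f ./id_laptop -N "" -q

# Sign the device key: identity "alice-laptop", principal "alice",
# valid for 90 days. Produces id_laptop-cert.pub.
ssh-keygen -s ./user_ca -I "alice-laptop" -n alice -V +90d ./id_laptop.pub

# Each server trusts the CA once, via sshd_config:
#   TrustedUserCAKeys /etc/ssh/user_ca.pub
# and then accepts any unexpired certificate the CA signed.
```

Enrolling a new device becomes one signing operation instead of touching every service.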


The interface to the gaming system might be strictly data, but you've got power supplied to the cartridge; you can still put whatever hardware you can fit inside that cartridge.


More broadly: processor design has been optimised around C-style antics for a long time. Trying to steer the generated code away from that could well defeat processor tricks in such a way that the result is _slower_ than if you stuck with the "looks terrible but is expected & optimised" status quo.


Reminds me of Fortran compilers recognising the naive three-nested-loops matrix multiplication and optimising it to something sensible.


> If a non-polymorphic, non-inline function may have its address taken (as a function pointer), either because it is exported out of the crate or the crate takes a function pointer to it, generate a shim that uses -Zcallconv=legacy and immediately tail-calls the real implementation. This is necessary to preserve function pointer equality.

If the legacy shim tail calls the Rust-calling-convention function, won't that prevent it from fixing any return value differences in the calling convention?


Yes. People tend to forget about the return half of the calling convention though, so it's an understandable oversight.


From memory (and I am not a lawyer), in civil cases the court is allowed to draw a negative inference from a refusal to provide a password, whereas in criminal cases I think that is not allowed.

Edit: spelling


> "we were trial migrating your whole file system … consistency checking it … reporting back to us whether the upgrade was 100 percent clean, then rolling it back"

Actually doing the migration then undoing it is very much not a dry run. I assume they included an "if anything goes wrong, restore everything from iCloud" check or something, because otherwise an unexpected flaw in migration/rollback could mean data loss.


One key feature of APFS was that its metadata locations didn't overlap with any HFS+ metadata locations, so the two file systems could co-exist on the same drive. So I presume they created the APFS metadata inside HFS+'s unallocated space; if the migration failed, HFS+ would happily overwrite the APFS metadata and continue on without any impact.
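A toy Python model of that trick (block numbers and tags are invented for illustration): the new file system's structures are trial-written only into blocks the old file system considers free, so abandoning the migration needs no explicit rollback.

```python
# disk maps block number -> owner tag; HFS+ owns its allocated blocks.
disk = {}
hfs_blocks = {0, 1, 2}
for b in hfs_blocks:
    disk[b] = "hfs+"

# Trial-write APFS metadata only into blocks HFS+ sees as free.
free_blocks = [b for b in range(3, 10)]
apfs_blocks = free_blocks[:3]
for b in apfs_blocks:
    disk[b] = "apfs"

# If the migration is abandoned, HFS+ carries on untouched; any later
# HFS+ allocation simply overwrites the stale APFS blocks.
assert all(disk[b] == "hfs+" for b in hfs_blocks)
assert all(disk[b] == "apfs" for b in apfs_blocks)
```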


From the linked article:

«Namely, in iOS 10.1 and 102 (sic!), metadata for APFS was test written and the superblock header was created but not actually written out. The file data remained untouched for safety and crash protection and the user remained in HFS+.»

