I guess the point of having a tag in the shape of a card is to prevent a thief from throwing away the obvious AirTag; a card-shaped tag may look like an ordinary bank card and be kept in the wallet longer.
So it's like the humans-vs-robots war has started? Robots ask humans questions to verify they are not robots; humans mark content as robot-generated to filter it out.
Thank you so much! I didn't know I needed it! It still doesn't help much with seeing which window is currently active (Sequoia), but it makes the overall experience easier.
In my experience, highly portable C is cleaner and easier to understand and maintain than C which riddles abstract logic with dependencies on the specific parameters of the abstract machine.
Sometimes the latter is a win, but not if that is your default modus operandi.
Another issue is that machine-specific code that assumes compiler and machine characteristics often has outright undefined behavior, because it fails to distinguish between "this type is guaranteed to be 32 bits" and "this type is guaranteed to wrap around to a negative value", or assumes things like "if we shift this value 32 bits or more, we get zero, so we are okay".
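To make the distinction concrete, here's a minimal sketch (my illustration, not code from the thread) of how "this type is 32 bits" and "the shift behaves sensibly" come apart in C:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t v = 1;
        unsigned n = 32;

        /* v << n is undefined behavior, not "zero": shifting a 32-bit
           value by 32 or more bits is UB even for unsigned types. On x86
           the hardware masks the count, so it often yields 1 at runtime,
           but the compiler may assume the case simply never happens.
           Likewise, int32_t being exactly 32 bits does NOT mean signed
           overflow wraps to a negative value; that is UB too. */

        uint32_t wide = (uint32_t)((uint64_t)v << n); /* widen, shift, truncate: 0 */
        uint32_t safe = (n >= 32) ? 0 : (v << n);     /* explicit guard: 0 */

        printf("wide=%u safe=%u\n", wide, safe);
        return 0;
    }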
There are programmers who don't make these mistakes, but those are exactly the ones who tend to reach for portable coding.
Yep, I remember when I tried coding for some ATmega, I kept wondering "how big are int and unsigned int?" and wanted the type names to always include the size, like uint8. But then there's the char type, which would have to become char8, which looks even crazier.
You'd define architecture-specific typedefs to deal with these cases in a portable way. The C standard already has types like int_fast8_t that are similar in principle.
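For instance, a minimal sketch assuming a C99 toolchain with <stdint.h> (which AVR toolchains like avr-gcc provide); the adc_sample_t typedef is a hypothetical name, just for illustration:

    #include <stdint.h>
    #include <stdio.h>

    /* A project-level typedef pins a domain concept down once,
       instead of scattering raw machine types through the logic. */
    typedef uint16_t adc_sample_t;

    int main(void) {
        /* On an ATmega, int is 16 bits; on a desktop it's usually 32.
           The stdint.h names make the width assumption explicit:
           uint8_t is exactly 8 bits on any conforming target, and
           int_fast8_t is at least 8 bits, whatever is fastest here. */
        printf("int=%zu uint8_t=%zu int_fast8_t=%zu adc_sample_t=%zu (bytes)\n",
               sizeof(int), sizeof(uint8_t), sizeof(int_fast8_t),
               sizeof(adc_sample_t));
        return 0;
    }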
See, why would you need an "architecture-specific typedef" in order to represent the day of the month, or the number of arguments to main, "in a portable way"?
int does it in a portable way already.
int is architecture-specific too, and it's been "muddled" plenty due to backward-compatibility concerns. Using typedefs throughout would be a cleaner choice if we were starting from scratch.
The original meaning of "byte" was a variable number of bits representing a character, packed into a larger word that reflected the machine's internal structure. The IBM STRETCH machines could vary the number of bits per character. Originally this was only 1-6 bits [1], because the designers didn't see much need for 8-bit characters and it would have forced them to choose 64-bit words, when 60-bit words were faster and cheaper. A few months later they had a change of heart, after considering how addressing interacted with memory paging [2], and added support for 8-bit bytes and 64-bit words for futureproofing, which became dominant with the System/360.
The moment you feel the need to skip letters due to their propensity for errors should also be the moment you realise you're doing something wrong, though. It's kind of fine if you want a case-insensitive encoding scheme, but it's kind of nasty for human-first purposes (e.g. in source code).
> The moment you feel the need to skip letters due to propensity for errors should also be the moment you realise you're doing something wrong, though.
When you think end-to-end for a whole system and do a cost-benefit analysis and find that skipping some letters helps, why wouldn't you do it?
But I'm guessing you have thought of this? Are you making a different argument? Does it survive contact with system-level thinking under a utilitarian calculus?
Designing good codes for people isn't just about reducing transcription errors in the abstract. It can have real-world impacts on businesses and lives.
Safety engineering is often considered boring until it's your tax money on the line or it hits close to home (e.g. your sibling's best friend dies in a transportation-related accident). For example, pointing and calling [1] is a simple habit that increases safety at only a small (even insignificant) cost in time.
I started off by saying that using the digits 0-9a-v was "a bit extreme", which was a pretty blatant euphemism: I think that's a terrible idea.
Visually ambiguous symbols are a well-known problem, and choosing your alphabet carefully to avoid ambiguity is a tried and true way to make that sort of thing less terrible. My point was, rather, that the moment you suggest changing the alphabet you're using to avoid ambiguity should also be the moment you wonder whether using such a large number base is a good idea to begin with.
In the context of the original discussion around using larger bytes, the fact that we're even having a discussion about skipping ambiguous symbols is an argument against 10-bit bytes. The ergonomics of actually writing the damned things is just plain poor. Forget skipping o, O, l and I: 5-bit nibbles are just a bad idea no matter what symbols you use, and this is a good enough reason to prefer either 9-bit bytes (three octal digits) or 12-bit bytes (four octal or three hex digits).
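For what it's worth, the letter-skipping trick does exist in the wild: Crockford's Base32 drops I, L, O and U precisely to dodge transcription errors. A small illustrative sketch (my code, assuming the two-5-bit-nibbles-per-10-bit-byte scheme discussed above):

    #include <stdint.h>
    #include <stdio.h>

    /* Plain base-32 digits 0-9a-v: includes the confusable l and o. */
    static const char plain32[]     = "0123456789abcdefghijklmnopqrstuv";
    /* Crockford's Base32: skips I, L, O, U to avoid transcription errors. */
    static const char crockford32[] = "0123456789ABCDEFGHJKMNPQRSTVWXYZ";

    /* Print one 10-bit "byte" as two 5-bit nibbles in a given alphabet. */
    static void print10(uint16_t v, const char *alphabet) {
        putchar(alphabet[(v >> 5) & 31]);
        putchar(alphabet[v & 31]);
        putchar('\n');
    }

    int main(void) {
        uint16_t v = 725;          /* an arbitrary 10-bit value */
        print10(v, plain32);       /* prints "ml"; note the ambiguous l */
        print10(v, crockford32);   /* prints "PN" */
        return 0;
    }

Even then, per the point above, a nicer alphabet only papers over the underlying ergonomics problem of a 32-symbol base.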
OK, I agree with your point; I should have gotten the numbers from ChatGPT and just put them in the comment in my own words. I was just too lazy to calculate how much we'd actually gain with 10-bit bytes.