The things complained about in the article are not a minimal reflection of how computers work.
Take the "wobbly types" for example. It would have been more "minimal" to have types tied directly to their sizes instead of having short, int, long, etc.
There isn't any reason that compilers on the same platform have to disagree on the layout of the same basic type, but they do.
The complaints about parsing header files could potentially be solved by an IDL that compiles to C header files and FFI definitions for other languages. It could even be a subset of C that is easier to parse. But nothing like that has ever caught on.
There were many different types of computers back then; some even had 36-bit word sizes. I don't think there was any clear winner like amd64 that they could have prioritized: 16- and 32-bit machines both existed in significant numbers, and so on.
> in C it's hard to say reliably define this int is 64-bit wide
That isn't really a problem any more (since C99): you can define it as uint64_t.
But we have a ton of existing APIs that are defined in terms of the wobbly types, so we're somewhat stuck with them. And even new APIs use the wobbly types, because their authors didn't reach for the fixed-width ones for whatever reason.
But that is far from the only issue.
128-bit ints are definitely still a problem, though: you don't even get agreement between different compilers on the same OS on the same hardware.
FWIW, the crABI project within Rust is trying to improve on some parts of this. But it is still built on (a subset of) the environment's C ABI, and it doesn't fix all the problems.
Sharding is often not easy. Depending on the application, it may add significant complexity to the application. For example, what do you do if you have data related to multiple customers? How do you handle customers of significantly different sizes?
And that is assuming you have a solution for things like balancing, and routing to the correct shard.
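For a sense of where the complexity shows up, here's a minimal hash-routing sketch (the function name and shard count are made up for illustration). It answers "which shard?" for a single customer, but the hard cases mentioned above, cross-customer data and very unevenly sized customers, are exactly what it doesn't address:

```python
import hashlib

NUM_SHARDS = 4  # hypothetical; changing this moves almost every key

def shard_for(customer_id: str) -> int:
    """Deterministically map a customer id to a shard index.

    All of one customer's rows land on one shard, so single-customer
    queries stay local. A report spanning two customers may hit two
    shards, and a customer that outgrows its shard needs special
    handling -- the routing function alone solves neither problem.
    """
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

print(shard_for("customer-42"), shard_for("customer-43"))
```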
Presumably sharding is a lot easier than trying to debug lockups in an individual Postgres process? It's well understood; we've been doing it as an industry for 30+ years.
echo $NUM_PAGES | sudo tee /proc/sys/vm/nr_hugepages
I've always found it odd that there isn't a standard command that writes stdin to a file without also echoing it to stdout, or that tee doesn't have an option to suppress writing to stdout.
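The two common workarounds (file paths here are just for illustration): silence tee yourself, or have a shell do the redirection so that, under sudo, the open() happens with elevated privileges.

```shell
# tee still writes the file; its stdout is discarded.
echo hello | tee /tmp/out.txt > /dev/null

# Or let a subshell do the redirection (with sudo: sudo sh -c '...').
sh -c 'echo hello > /tmp/out2.txt'

cat /tmp/out.txt
```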
If my memory serves me right, it was meant to be "copy convert", but "cc" was already taken by the C compiler, so they went to the next letter of the alphabet, hence "dd". Thanks for listening to my TED talk of useless and probably false information :)
According to Dennis Ritchie, the name is an allusion to the DD statement found in IBM's Job Control Language (JCL), where DD is short for data definition https://en.wikipedia.org/wiki/Dd_%28Unix%29#History
I've always thought that there should be a `cat -o output-file` flag for that. GNU coreutils has a myriad of useless flags yet is missing the one actually useful flag, lol.
- a word processor
- a spreadsheet application
- presentation software

This doesn't look like it has any of these.