Hacker News — Joker_vD's comments

It's small, but not unnoticeable... depending on the exact size of the workload and the amount of computation per element. In fact, for huge arrays it may be beneficial to have structs packed if that leads to less memory traffic.

[0] https://jordivillar.com/blog/memory-alignment

[1] https://lemire.me/blog/2012/05/31/data-alignment-for-speed-m...

[2] https://lemire.me/blog/2025/07/14/dot-product-on-misaligned-...


Do I understand it correctly that the logic is that if timestamp B is above timestamp A, but the difference is more than half of the unsigned range, B is considered to happen before A?

Yes. When the timestamps wrap it's fundamentally ambiguous, but this will be correct unless the timestamps are very far apart (and the failure mode is more benign: a really long time difference being considered shorter is better than all time differences being considered zero after the timestamp wraps).

They meant something like "arbitrary", in its "without any good/justifiable reason" sense. The word "random" is also used in this sense, especially when talking about human-made decisions.

Every time I see yet another dotfile-management solution I just can't help but wonder: maybe it's the dotfiles that are the problem?

That ship sailed 30 years ago.

There are still I/O libraries that play with read/write buffers really fast & dirty, which C standard explicitly allows for with its "However, output shall not be directly followed by input without an intervening call to the fflush function or to a file positioning function (fseek, fsetpos, or rewind), and input shall not be directly followed by output without an intervening call to a file positioning function, unless the input operation encounters end-of-file" wording.

There is. For example, four months ago [0] they accidentally stumbled upon an explicitly documented quirk of the SQLite file format.

[0] https://news.ycombinator.com/item?id=45101854


I stumbled on the lock page myself when I was experimenting with writing a SQLite VFS. It's been years since I abandoned the project, so I don't recall much, including why I was using the sqlitePager, but I do recall the lock page being one of the first things I found: I needed to skip sending page 262145 (with 4096-byte pages) to the pager when attempting to write the metadata for a 1 TB database.

I'm surprised they didn't have any extreme tests with a lot of data that would've found this earlier. Though achieving the reliability and test coverage of sqlite is a tough task. Does make the beta label very appropriate.


> programs could have their output connected to all sorts of sinks (terminal, file, GUI, web content) without carrying baggage related to those sinks' behaviors.

We already have this. The TTY itself is not very special at all. It's just that the applications, traditionally, decide that they should special-case the writing to TTYs (because those, presumably, are human-oriented and should have as little batching as possible). But you, as an application developer, can simply not do this, you know.


Automatically changing behavior by testing if the output sink is a TTY is traditionally considered an anti-pattern by those with enough time and hair loss spent at the terminal. It's one of those things where there are definitely occasions where it's useful, but it's overused and can end up frustrating people more than it helps, like when they're attempting to replicate a workflow in a script.[1] A classic example of "just because you can do something doesn't mean you should do it".

I don't know how it works today, but IIRC colorization by GNU ls(1) used to require an explicit option, --color, typically added through an alias in default interactive shell configs, rather than ls automatically enabling it by default when detecting a TTY.

Explicit is generally better than implicit unless you're reasonably sure you're the last layer in the software stack interacting with the user. For shell utilities this is almost never the case, even when 99% of usage is from interactive shells. For example, `git` automatically invokes a pager when it detects output is to a TTY; this is endlessly frustrating to me because most of the time I'd prefer it dumped everything to the screen so I could more easily scroll using my GUI terminal window, as well as retain the output in the scroll buffer for later reference. Git does have the -P option to disable this behavior, but IMHO it has the proper defaults reversed; usually I just end up piping to cat because that's easier to remember than bespoke option arguments for frilly anti-features.

[1] Oftentimes it forces people to use a framework like expect(1), which runs child programs under another pseudo-TTY, just to replicate the interactive behavior.


> I don't know how it works today, but IIRC colorization by GNU ls(1) used to require an explicit option, --color, typically added through an alias in default interactive shell configs, rather than ls automatically enabling it by default when detecting a TTY.

It works exactly like this today. Plus, lots of software has added support for NO_COLOR these days.

> For example, `git` automatically invokes a pager when it detects output is to a TTY; this is endlessly frustrating to me because most of the time I'd prefer it dumped everything to the screen so I could more easily scroll using my GUI terminal window.

Set your pager to cat? That's what I personally do; never really liked this built-in convention either.
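For reference, both the one-off and the persistent variants are supported by git itself (a config sketch, not the only way to do it):

```shell
# One-off: disable the pager for a single invocation
git -P log            # or: git --no-pager log

# Persistent: make cat the "pager" so output always goes straight
# to the terminal and stays in the scrollback buffer
git config --global core.pager cat
```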


ls is usually aliased to "ls --color=auto" in the bashrc that comes with your distribution.

Auto means to enable if output is a terminal. I think this is reasonable. The default is no color, ever.


> I am not sure if there is a better way to find the fastest code path besides "measure on the target system", which of course comes with its own challenges.

Yeah, and it's incredibly frustrating because there is almost zero theory on how to write performant code. Will caching things in memory be faster than re-requesting them over network? Who knows! Sometimes it won't! But you can't predict what those times will be beforehand which turns this whole field into pure black magic instead of anything remotely similar to engineering or science, since theoretical knowledge has no relation to reality.

At my last job we had one of the weirdest "memory access is slo-o-o-ow" scenarios I've ever seen (and it would reproduce pretty reliably... after about 20 hours of the service's continuous execution): somehow, due to peculiarities of the GC and the Linux physical memory manager, almost all of the working set of our application would end up on a single physical DDR stick, as opposed to being evenly spread across the four sticks the server actually had. Since a single memory stick literally can't cope with such high data throughput, the performance tanked. And it took us quite some time to figure out what the hell was going on, because nothing came up on the perf graphs or metrics or whatever: it's just that almost everything in the application's userspace became slower. No, the CPU is definitely not throttled; it's actually 20–30% idle. No, there is almost zero disk activity, and the network is fine. Huh?!


You do have NUMA to control memory placement, it's not that easy to use though: https://blog.rwth-aachen.de/itc-events/files/2021/02/13-open...
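Assuming the imbalance shows up across NUMA nodes (which roughly map to memory controllers), the coarse-grained tool is numactl — no application changes needed (the jar name below is a placeholder):

```shell
# Spread the process's pages round-robin across all NUMA nodes
# instead of letting first-touch pile them onto one:
numactl --interleave=all java -jar service.jar

# Inspect where an existing process's memory actually landed:
numastat -p <pid>
```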

Well, we didn't, for obvious reasons, patch JVM to manually finagle with physical memory allocation — which it probably wouldn't be able to do anyway, being run in a container.

> Enough "Market for all the things" already..

See, there are two major flavours of pro-market attitudes. The first one is "if we allow many independent individuals to try their own approaches to a problem, let the people with a "better" approach personally profit from it handsomely, make them compete against each other in an environment with objective-ish judgement of "what is better" instead of "impress the (inevitably corruptible) officials to be judged victorious and awarded the fortunes", and also manually guard and regulate against several universally known ways to sabotage such competition, then we'll be able to channel human ingenuity into solving difficult technical problems while also rewarding those who come up with (and implement) such solutions, with low overseeing overhead". Of course, such an attitude isn't, strictly speaking, "pro-market"; it's been around since ancient times. Hell, the USSR of all places had this attitude in spades until about the 70s or so.

The second one is "Nah, we don't have to try and think about anything ourselves, just let people fend for themselves, they'll figure it out, and it won't have any unforeseen bad side effects, why would it; markets are magical like that!" Yeah, about that...


Right, a market is a small tool of larger systems. That's fine, hard to get right, but it can make systems better. Type two just seems to be cargo-culted everywhere...

I think it's actually not that hard to get right (or at least "right enough"), as evidenced by the fact that markets have successfully run the entire global economy for thousands of years with no central oversight and almost no regulation.

Market failures do happen, and when they do it can be helpful to have an external force step in to nudge the market back onto the rails. But even without such interventions they work remarkably well on balance.


> markets have successfully run the entire global economy for thousands of years

Markets, as they are understood today, are more like 300 years old, even less in some places. The bulk of the world economy has been subsistence farming for most of human history, with some communal mutual help (based on favours and mutual indebtedness) thrown in.

> with no central oversight and almost no regulation

Lol what? E.g. Roman Republic (and later empire) tightly regulated its markets, especially food trade, during all of its existence.


The industrial revolution multiplied the size of the global economy by several orders of magnitude, but it didn't create it. International trade has been happening on a smaller scale since large-scale civilized society existed. And I said "almost no regulation", not "no regulation". Probably something on the order of 99% of the regulations we have today wouldn't have even been feasible to enforce a couple hundred years ago, yet markets still functioned just fine on the whole.
