That only really applies at a small scale. At some point you either stop logging into them, or do it just to run some automation. I can't remember the last time I did something non-trivial in a remote terminal. (Apart from my home server, which has everything I want.)
This completely depends on the system architecture of your company and your job role, scale has nothing to do with it. There are so many giant Unix shops out there with people herding them day in, day out.
You get backported security fixes in release channels as well. Unless anything changed recently, there's no explicit guarantee around them, but the core packages typically land just as fast or faster than on other distros. Keep in mind, though, that more esoteric software with a small number of users and auto-update disabled may lag a bit.
Maybe I'm missing something, but the only branch in github:nixos/nixpkgs I can see receiving fixes is the 24.05 branch, which gets fixes backported from unstable. The last commit I can see on the 23.11 branch is about 3 months old.
This would imply only 9 months of security patches before I would need to upgrade the server. That is of course a far less risky process with NixOS, so perhaps that's OK, but it is a lot more work than the 5 years you get for free with Ubuntu/Debian.
It's more like 7 months of patches. Release (n-1) gets EOL'd 1 month after release (n), and releases are 6 months apart (in May and November). So 23.11 would've been EOL in July 2024.
And since a release happens every 6 months, while you do have an extra month's window, you still have to upgrade... every 6 months.
You're correct, it's 9 months max; there's no LTS equivalent. The updates are much less risky, but in a business context you'd really need to discuss the decision. (I'd still prefer it that way.)
Updating on NixOS is so much less painful that doing an update every 6 months is actually viable. Also, the updated system already contains large parts of your config and can easily be tested in a VM.
Almost all of the changes flip an official setting. Those stay around for a long time and get a proper deprecation notice when they go away, so you won't be surprised. Replacing systemd-minimal with the full version may potentially cause some edge case issues, but it's the same package with more features enabled, so I wouldn't really expect any.
Nothing will break when the package gets updated as long as you keep to your specific release - backported changes are backwards compatible.
It's totally different in practice. Copilot is sparkling autocomplete in comparison. Cursor suggests changes outside of your current position and has the context of what's happened before. It also lets you prompt for changes you want implemented, but that's something you have to ask for explicitly. Without prompting for actions, Cursor's suggestions while editing are scarily good most of the time.
You don't need special models. The current ones can do decent code review as long as:
- you define exactly what you want out of the review - spell out that you want low abstraction, straightforward code, limit indirection (and a hundred other rules)
- provide enough code context for the review, which is tricky to do without effectively sending the whole project
But the models are already capable. Using them effectively is the tricky part.
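As a sketch of the first point, one way to make the review instructions explicit is to assemble the prompt from a fixed rule list plus the code under review. The rule wording, function name, and delimiters here are illustrative assumptions, not from any particular tool:

```python
# Sketch: build an explicit code-review prompt from a rule list plus code context.
# Any LLM client could consume the resulting string; none is assumed here.
REVIEW_RULES = [
    "Prefer low abstraction: flag wrappers that add no behavior.",
    "Limit indirection: flag call chains deeper than two layers.",
    "Prefer straightforward control flow over clever one-liners.",
]

def build_review_prompt(code: str, filename: str) -> str:
    """Combine fixed review rules with the code under review."""
    rules = "\n".join(f"- {r}" for r in REVIEW_RULES)
    return (
        "Review the code below. Apply ONLY these rules:\n"
        f"{rules}\n\n"
        f"--- {filename} ---\n{code}--- end ---\n"
        "For each violation, cite the line and the rule it breaks."
    )

prompt = build_review_prompt("def f(x):\n    return x + 1\n", "math_utils.py")
print(prompt)
```

The second point (context) is the part this doesn't solve: deciding which files besides `math_utils.py` to inline is where it gets tricky.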
Only if the automation is cheaper and more efficient, and that improvement/saving gets passed on to the (un)loading fees, then to the companies doing the transport, then to the marketplace middlemen, and then to the buyers/sellers.
In my opinion, port costs never go down with automation. If anything, they go up when automation is deployed: it essentially means unmanned reach stackers, more cranes, and eventually a new TOS (Terminal Operating System) to recoup the investment.
This is an interesting document (PDF) with the port performance index for 2023. See page 11.
https://documents1.worldbank.org/curated/en/0990603241145396...
Not a general-purpose one really, but it is a document management system. It's aimed at incoming mail. You get automatic OCR and learned classification / tagging / date finding.
And "docker compose up" is the easiest way to deploy things these days in general. That's got nothing to do with this software specifically.
It was a Bluetooth issue years ago. Now it's only an Apple issue where it can't use a more decent codec. On Linux you can choose the mSBC codec and get decent two-way quality on a modern headset.
Regardless of codecs, don't all Bluetooth headsets switch to mono I/O when the microphone connects? I find that to be a much bigger quality hit than the encoding.
Using the same headset on both Windows and Linux leads to a very different experience. Windows works fine; Linux has the same issue described here for macOS.
mSBC is still crappy quality compared to what I get on my smartphone. Still, Linux lets me easily use the AAC or aptX codec and a separate mic if I want.
I don't understand why desktop OSes can't have something better, when pairing my Bose headset with my smartphone seems to use a higher-quality codec profile.
The macOS utility Audio MIDI Setup allows you to pick separate devices for this and it also lets you separate the device for system sounds from the device for other sounds.
It's a licensing issue. The borderland between the headset and headphone profiles is rife with licensing land-mines - developers have flipped the table and rage-quit the issue, and this technical debt has been shipped.
(Disclaimer: I make headset/headphone firmware for a major competitor and deal with this issue every single week...)
This honestly sounds like a problem I would expect to be solved by white-labelled AliExpress junk products, whose manufacturers can just ignore the licensing issues entirely, because they’re able to hide their IP violations behind reselling through endless shell companies.
But I guess it isn’t solved by that. Why isn’t it?
You can't hide from the fact that you have to get your chips from somewhere, and those chips have to run some software, and if you are going to just copy others' software, you inherit their technical debt too - unless you invest in fixing their bugs - and what white-label AliExpress junk product provider has the time for that?
idk man, AirPods do switch to an AAC variant called AAC-ELD for bidirectional audio, but that's still compressed to hell. Better than SBC, but not as good as unidirectional AAC.
I had high hopes for BLE Audio but that seems to be stalled
> Now it's only an Apple issue where it can't use a more decent codec.
Isn't this a Mac-specific issue? I have some recollection in my head that Mac OS uses a terrible codec for bidirectional bluetooth audio, but iOS uses a good one.
It's sometimes still an issue on Windows, but besides advertising the headset audio device, most good headsets will create one or more additional audio devices that support high-quality input and output.
Modifying the existing journal really sounds like the wrong solution. Just rotate the journal ("journalctl --rotate") and throw out the archived file with the accidental PII. Journal files are not great for long-term storage or search anyway. If you really want to preserve that one, you can export the old file and filter it manually: https://www.freedesktop.org/wiki/Software/systemd/export/
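To sketch the filtering step: in the export format, entries are blocks of FIELD=value lines separated by a blank line, so a filtered copy can be produced with a simple text pass. This toy version handles text fields only (real exports can also contain length-prefixed binary fields, which it does not parse), and the PII pattern is just an example:

```python
# Sketch: filter "journalctl -o export" output, dropping whole entries
# that match a PII pattern. Text fields only; binary fields in the
# export format use a length-prefixed encoding this does not handle.
import re

PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g. a US SSN; illustrative

def filter_export(text: str) -> str:
    kept = []
    for entry in text.split("\n\n"):  # entries are blank-line separated
        if not entry.strip():
            continue
        if PII_PATTERN.search(entry):
            continue  # drop the entire entry containing PII
        kept.append(entry)
    return "\n\n".join(kept) + "\n\n" if kept else ""

sample = (
    "MESSAGE=user logged in\nPRIORITY=6\n\n"
    "MESSAGE=ssn 123-45-6789 leaked\nPRIORITY=6\n\n"
)
filtered = filter_export(sample)
```

The filtered stream could then be fed back with `systemd-journal-remote` or just archived as text.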
In what situations is it a harder problem than this?
I'm a core developer of Crystal.
Looks like something went very wrong there. The GC may not be super optimised, but it's still practical.
I have never heard about such drastic performance issues. And I'm aware of quite a few companies who use Crystal in heavy production loads for exactly the web server + db use case without such issue reports.
So I'd suggest the root cause might be something other than the GC implementation.
It is using Boehm/libgc. A simple webserver should not show the described behavior. The GC is not incremental, though, so a big heap would cause long collection pauses. But that is typically not the case for the described use case. Likely the issue is something that involves more allocations than necessary.
There is work in libgc to allow incremental collection, but it's not yet ready for Crystal's needs (or at least it wasn't the last time I investigated).
Conceptually, I think the correct time to do garbage collection is when your web server process is idle.
My Crystal implementation of idle-time garbage collection is here: https://github.com/compumike/idle-gc though please note that its idle detection mechanism only works for single-threaded Crystal programs.
An analogy: imagine a single employee (thread) operating a convenience store. If there are customers waiting in the checkout line (latency-sensitive requests), the employee should prioritize serving the customers ASAP! But once the line is empty (the thread is idle), that might be a good time to start sweeping.
Right now, with automatic garbage collection, the employee only decides to start sweeping the entire store while in the middle of serving a customer! (Because that's when mallocs are happening, which may trigger automatic GC.) Pretty ridiculous!
With idle-time GC, the sweeping happens entirely or mostly while there are no customers waiting. This may not show latency improvements in an artificial benchmark where the system is running flat out with a full request queue. But in the real world, it changes GC from something that happens 100% of the time in the middle of serving a request (because that's when mallocs happen and trigger automatic GC) to something that only rarely or never happens while a request is being served.
Even better would be to combine idle-time GC with incremental GC, so that the employee could put down the broom when a new customer arrives without finishing sweeping the entire store. :)
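The scheduling idea carries over to any GC'd runtime. Here's a toy sketch in Python rather than Crystal (Python's collector is very different from Boehm's, so this illustrates only the "collect when the line is empty" policy, not the linked implementation): disable automatic collection, drain the request queue, and collect only when it's empty.

```python
# Toy sketch of idle-time GC scheduling. Illustrative only: Python's
# cyclic collector differs from Crystal's Boehm GC, so this shows the
# scheduling policy, not the real idle-gc implementation.
import gc
from collections import deque

gc.disable()  # no automatic collection in the middle of serving a request

def serve(request: str) -> str:
    # Latency-sensitive work happens here with automatic GC switched off.
    return f"handled {request}"

def run(queue: deque) -> list:
    responses = []
    while queue:
        responses.append(serve(queue.popleft()))
        if not queue:      # the checkout line is empty: start sweeping
            gc.collect()   # full collection during idle time
    return responses

results = run(deque(["a", "b", "c"]))
```

A real server would also need the idle *detection* part (e.g. a timer after the last request), which is where the single-threaded caveat in the linked project comes from.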