
To be fair, "old hardware" is kind of frozen in time by definition. DSL is what it is. If you want to play with hardware with 32mb of ram, it does what it does. It isn't really an OS you want to "use" or "expand upon". I would not suggest to connect it to the Internet or expect any kind of updates, security or otherwise.

The main case for it at this point is retro computing. Most retro computing enthusiasts run era correct OS (MacOS7-9, Mac OS X, Win9x, DOS, AIX/IRIX/SunOS/HPUX, BeOS, Amiga 3x, etc). Those are not getting security updates either.

I put it in the same category as Haiku, Visopsys, AROS and ReactOS, fun toy for older computers. Not very relevant as day-to-day. I still have and expand a collection of live CDs for the P3/PR era laptop. Again, those don't get security updates, but are fun to explore.

Personally, I am more into Linux window managers (and AwesomeWM maintainer) to recreate the interesting concepts from those OS rather than rice 90s silicon. However I really enjoyed using a Pentium1 laptop full time for a few months in university in the late 00's just to prove a point. But for that I compiled my own OS rather than use a distro. If you want to get the most out of these machine, that's the way.


> To be fair, "old hardware" is kind of frozen in time by definition.

No, not at all.

It's a continuously-moving baseline, because it's relative to now. And where "old" begins is a judgement call.

So, for instance, one useful definition is "not capable of usefully running a contemporary OS."

Since all current mainstream Linuxes (Ubuntu, Fedora, even Arch, etc.) are 64-bit, that implies a 32-bit machine. One with a reasonable amount of RAM for the time, say a gigabyte or two, but which can't be upgraded. Intel Atom chips were mostly 32-bit until a decade and a bit ago. Core Solo was quite quick but 32-bit only.

Some early 64-bit chips have 32-bit firmware and so can't boot a 64-bit OS.

So there is a moving baseline of machines that can't take >=4GB RAM, can only boot a 32-bit OS, maybe have 1 CPU core, but were made in the 1st decade of this century and remain fully-functional, with wifi etc.

DSL isn't very useful on such kit, and if it works, it's insecure.

So, no, it's not frozen in time, and no, a never-updated 20YO snapshot isn't very useful.


Haiku has 3D support on some AMD GPUs.


There was a (government) public investigation / shaming campaign a few years ago into the construction industry in Montreal, Canada.

One of the people who testified in exchange for not being jailed was "Mr. 3%", a member of the office that approves (public) construction projects. He took 3% of each project's total cost (in bribes) from the top 30 contractors in exchange for making sure their permit requests never got stuck in these artificial rejection loops.

The bureaucracy, just like the old taxi industry, is about keeping smaller players out, not enforcing the codes.


> The bureaucracy, just like the old taxi industry, is about keeping smaller players out, not enforcing the codes.

See my post above; it is also about feeding the bureaucracy itself.

Some places (like where I live) actually go driving around looking for construction taking place so they can issue "stop work" orders and force you to get permits.

They use these permits to ask for more federal/provincial money, since they are 'growing'.

They also grow the bureaucracy machine, which allows them to hire more people to drive around looking for lumber in your yard, to make you buy more permits...


Honestly paying 3% to make sure your shit gets approved sounds great. It's kind of like dealing with a hooker instead of a wife -- at least you know exactly what it's going to cost after you leave.

More corruption at this point would almost be a good thing. As things stand there's basically no personal incentive for inspectors to actually approve projects.


It was 3% as long as you were part of the conspiracy, and it was made extra painful for everybody else. What the conspirators wanted out of this was to control the number of players so they could limit supply and control rates. Paying people to do nothing while you wait for permits is how you bankrupt small contractors.


Would you rather have `.rpm`s built by someone on the command line using `rpmbuild` or `mock`, or something built by a CI with proper bootstrapping and "nearer"-to-reproducible builds? Also, by cutting corners on the CI, you risk introducing mild ABI problems which won't crash, but cause instabilities and potential security vulnerabilities. If they say they needed time to do it properly, I respect that.


I don't know how RHEL packages are built exactly, since I'm not an employee, but mock is what the Fedora infrastructure runs under koji for builds. I don't see how "a CI" can make the process "nearer to reproducible builds", whatever that means. I'm not aware of serious efforts on reproducible builds for Fedora/EL, in contrast to Debian, though there has been talk of it. You obviously should never use rpmbuild for binary rpms outside a chroot.

The Alma infrastructure appears to be in their GitHub space, though I don't know anything about it. What's wrong with it?


For the record, I am not saying something is wrong with Alma. I am saying if Rocky says they need time to get it right, then I see this as a good sign.

About `mock`, the problem is the ABI. If you don't build the packages in the "perfect" order, the ABI degrades over time. For example, some libraries might accidentally add something in the middle of a struct. The API is 100% compatible and it will run without any warning, but applications using the libraries provided by those packages will now read fields at the wrong offsets. A boolean might now point to an integer or something like that. If you don't have the tooling to detect this and don't have the tooling to ensure you build packages in the right order (and rebuild when needed), then you will eventually get some of these problems, mostly on point releases. To solve this, the "trivial" way is to follow the RHEL build ordering, which requires some tools. The "correct" way is to use `libabigail`, `libsolv`, `libdnf` and other binary tooling and keep track of these things.
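
To make that concrete, here is a minimal, hypothetical C++ sketch (the struct and field names are invented for illustration; they don't come from any real package):

```cpp
#include <cstddef>
#include <cstdio>

// libfoo 1.0 -- the layout applications were compiled against.
struct FooContext {
    int  id;       // offset 0
    bool enabled;  // offset 4
};

// libfoo 1.0.1 -- a field was added in the *middle* instead of at the end.
// The API is unchanged, so everything still compiles and links, but a binary
// built against 1.0 still reads `enabled` at offset 4 and now gets the low
// byte of `flags` instead. Nothing crashes; the value is silently wrong.
struct FooContextNew {
    int  id;       // offset 0
    int  flags;    // offset 4 (new field)
    bool enabled;  // offset 8
};

int main() {
    // The disagreement below is exactly the kind of silent ABI break that
    // tools like libabigail's abipkgdiff are meant to catch.
    std::printf("old 'enabled' offset: %zu\n", offsetof(FooContext, enabled));
    std::printf("new 'enabled' offset: %zu\n", offsetof(FooContextNew, enabled));
    return 0;
}
```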

There are more of these little papercuts left and right that you get when you build a RHEL clone. You can always cut corners and manually build everything, but you will pay back the time you save in outages. Red Hat has the test suite; the clones only have a small part of it, so they have to be extra careful.


So, the claim is that RHEL has broken packaging which doesn't reflect ABI changes, and that somehow means you have to reverse engineer its build mechanism from srpms to accommodate it being non-deterministic? Care to give an example? I don't remember ever seeing one. (Of course package maintainers for EPEL, for instance, should use abipkgdiff. I don't remember what the status of automating that is in Fedora.)


>> I am saying if Rocky says they need time to get it right, then I see this as a good sign.

I've worked in software engineering for a long time. Sometimes delays are a good thing, a sign of waiting for quality. And sometimes they're a sign that things went wrong: that poor decisions were made, or implementation was slow due to junior people, etc.

Interpreting 'need time to get this right' as a positive, and worse, as somehow a negative on Alma, who beat them to the finish line with an identical product, doesn't make sense.


If that's the case, why in your example is someone running rpmbuild doing it so much faster than an automated CI system? If your reasoning is that it will be faster in the future, fine. But I doubt it. Criticizing Alma because it was first sounds like FUD.


Qt (the legacy QtWidgets variant) is quite productive for quick and dirty GUIs. You can do 95% of basic dialogs in QtDesigner and then connect the signals to your code and vice versa. The code can be Python or C++. Widgets can also be mapped 1:1 to SQLite tables without any external libraries, which is handy for basic forms or data apps.
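
As a rough illustration of how little code a basic dialog takes, here is a minimal C++ sketch built by hand (in practice the layout would come from a QtDesigner .ui file; the widget names are made up):

```cpp
#include <QApplication>
#include <QDialog>
#include <QLineEdit>
#include <QMessageBox>
#include <QPushButton>
#include <QVBoxLayout>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);

    // A tiny dialog: one text field and one button.
    QDialog dialog;
    auto *name = new QLineEdit(&dialog);
    auto *ok = new QPushButton("OK", &dialog);
    auto *layout = new QVBoxLayout(&dialog);
    layout->addWidget(name);
    layout->addWidget(ok);

    // The "connect the signals to your code" step.
    QObject::connect(ok, &QPushButton::clicked, [&dialog, name] {
        QMessageBox::information(&dialog, "Hello", "Hi, " + name->text());
    });

    dialog.show();
    return app.exec();
}
```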

QML also has GUI design tools, but it takes much longer to learn and isn't very good at simple GUIs. However it's better when you need to deploy on Android (iOS requires the paid version).

One thing to keep in mind is the LGPL3 license. If the app isn't distributed, then it's fine, but if it is distributed, it has to be dynamically linked, with some way of swapping the .so/.dylib/.dll.


Qt is fantastic. I've used it with Python via the semi-commercial PyQt and the open-source PySide, which are largely mutually compatible. The example projects that come with it are a great way to get started and the documentation is world class. I found it much easier to work with than the native Windows options 10 years ago, but haven't really looked at anything from MS since.


I've often found Qt GUIs to be bloated and slow. Not sure whether that's brought about by Qt inherent overheads or bad design by those using Qt.


That may be so, but I've used very performant software that used Qt. The desktop version of Google Earth was a Qt application and was amazing. The whole KDE desktop is Qt, as are Ableton Live, Mathematica, and several Adobe and Autodesk apps. If a Qt app has performance issues it's unlikely to be down to the UI layer, unless they're doing something particularly weird. Having said that I have no experience of QML or how it performs.


AppImages are only as self-contained as the author put effort into making them self-contained. There are also upper limits to how self-contained they can be. While some terminal and bitmap-only X11 apps can be compiled as static binaries, anything that depends on system libraries needs to be compiled against an older version of glibc. The best examples are libGL (GLX or EGL) for hardware 3D acceleration and libvdpau for hardware media decoding. You can't just bundle those; you have to use the system ones. Using any system library forces you to use glibc (AppImages don't work on Alpine). For OpenSSL and a few other libs, you usually want to use the system one and have a built-in fallback, because of security concerns.

Making perfect AppImages is often possible, but the automated tooling isn't smart enough. A proper AppImage (this one is by me) looks like this: https://github.com/Elv13/reclaimail/blob/master/docker-edito... . Obviously this doesn't scale very well to projects with 300 dependencies like Digikam. My NeoVim AppImage linked above "really, really" bundles all dependencies and compiles your NeoVim config to LuaJIT bytecode. It's 3.9 MB compared to the upstream one, which is 15 MB without any config. Note that 0.7 MB of that 3.9 is the spellcheck dictionary, 0.4 my enormous config, 0.5 the AppImage overhead and 0.7 all the legacy plugins still written in Vimscript.


Interesting. Yeah, it seems like the solution for digikam in my case was something like unpacking the AppImage and recompiling something, and I've not had the time to mess with it. I hope the maintainer can release a fixed image at some point. I really like that app!


Wouldn't another option be to use the system package [0] or is the version too old? Or use the Flatpak [1] (or NIH-flatpak [2]), which is probably a better fit than AppImage for GUI programs that sit on top of a heavy toolkit.

[0] https://packages.ubuntu.com/search?keywords=digikam

[1] https://flathub.org/apps/details/org.kde.digikam

[2] https://snapcraft.io/digikam


A PC is ideal as long as there is only one. Once you start adding machines, it is time to move to a rack. Otherwise you end up with a giant ball of wires. Racks have rails like drawers, so the units are easy to service. That's not the case with a heap of ATX towers. Also, your "collection" remains self-contained and doesn't sprawl. Past some scale, you start to also make use of other rack-mountable accessories like UPSes, PDUs, patch panels and switches. In the past, you had to move pretty early because older consumer machines like the Pentium 4 could only do so much, so you needed many of them even for a basic setup like a LAMP server or a render/compile cluster.


> Otherwise you end up with a giant ball of wires.

Let's be honest, it's frighteningly easy for a rack to turn into a giant ball of wires too. See r/cablecore for examples.


It's r/cablegore


That's what I get for typing on my phone.


Just a little note that we (the AwesomeWM team) have had stable APIs since 4.0, and releases no longer break the API. We have also reached 90% code/behavior coverage (from 0% in the pre-4.0 days).

From a maintainer PoV, this is often a pain since AwesomeWM exposes most of its internal guts, but with enough compatibility code and Android-style API levels, we still manage compatibility pretty well.


I know this thread is almost 20h old. But I started using AwesomeWM maybe 8 years ago, decided on a look and config and haven't changed it since. Every computer I use has the same setup; it is indeed awesome! Thank you so much for your time and dedication, it's a pleasure to use.


Yeah, if you started using `git-master` around ~7 years ago, it hasn't broken ever since. The last breaking release was v4.0 in 2016, but it was in development for years before that in the git-master branch. 3.5->4.0 was a pretty nasty upgrade process since we had to nuke a lot of the unsustainable things one last time (https://awesomewm.org/apidoc/documentation/17-porting-tips.m...).

Glad it works for you!


I love using AwesomeWM, thanks a lot for making it!


> you can't provide a dedicated GUI for them

You could expose enough GTK bits to provide an event loop to the LGI Lua library. It's GObject Introspection for Lua. Since you already use these libs, it would not make Ardour any bigger.

I am not saying it's a great idea to mix GUI and realtime DSP in the same thread, but it could be supported if you see some demand there.


I've previously written a long-ish article about the role of Lua/scripting in general in the context of a libre & open-source DAW:

https://discourse.ardour.org/t/is-open-source-a-diversion-fr...

There's really no technological reason for not allowing Lua to create GUIs within Ardour. It's more a question of whether or not we actually want to. Either way, you would not be mixing GUI and realtime code - the architecture of Ardour doesn't allow that.


Nice article Paul. It is motivating me to take another look at Ardour. I have also worked on some very large audio/video authoring tools. When we made the new lighting tools at Dreamworks, you could only create the UI using the scripting system. I am not sure if that discipline is still observed, but it was a good way to make sure that there wasn't a 2nd class citizen status given to extending the UI.


Thanks. Thing is, in an open source system, there are no 2nd class citizens caused by the "eat your own dogfood" rules (or lack thereof). You want to change the GUI in Ardour? The code is all right there.

The question I was raising (which I think you understand) is whether most users care that this is possible if it can't be done without a rebuild (compiling).


Right. In the case of the studio tools, artists could extend the application but not touch the core. Much like Ardour, having to build the application from source was more complex and who wants to make their users do that?


I've been reading all of your comments in this thread and the links provided carefully, thank you for all the great work on Ardour and the degree of forethought that goes into it, it really shows in the final product.


VxWorks claimed that title. Unfortunately, Linux never was the "only OS with an installed base covering several planets".

And if you've never heard of VxWorks or QNX, that's the point. They just work.


> A process which doesn't exist cannot hold memory

Not quite. Some leaks are across processes. If your process talks to a local daemon and causes it to hold memory, then quitting the client process won't necessarily free it. In a similar way, some applications are multi-process and keep a background process active even when you quit, to "start faster next time" (or to act as spyware). This includes some infamous things like the Apple updater that came with iTunes on Windows. It's also possible to cause SHM-enabled caches to leak quite easily. Finally, the kernel caches as much as it can (file system content, libraries, etc.) in case it is reused. That caching can push "real process memory" into swap.

So quitting a process does not always restore the total amount of available memory.
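
The SHM case is easy to demonstrate. Here is a hedged C++ sketch (the segment name and size are arbitrary, and it assumes a Linux/POSIX system): a named shared-memory segment created like this stays allocated in /dev/shm after the process exits, until something calls shm_unlink.

```cpp
#include <cstddef>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    constexpr size_t kSize = 64 * 1024 * 1024;  // 64 MiB, arbitrary

    // Create (or open) a named shared-memory segment and size it.
    int fd = shm_open("/example_cache", O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return 1;
    ftruncate(fd, kSize);

    // Map it and touch every page so the memory is actually committed.
    void *mem = mmap(nullptr, kSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem != MAP_FAILED)
        std::memset(mem, 0xAB, kSize);

    // The process exits here, but the segment lives on (visible in
    // /dev/shm) until some process calls shm_unlink("/example_cache").
    // Quitting the "client" did not give the memory back.
    return 0;
}
```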

