If you would like to contribute, feel free to drop by the forums, and say hi!
I guess it doesn't "matter" to consumers buying a product, but it is unfortunate how confusing the naming schemes are versus the underlying architecture. The R9/R8 cards were similarly a mixture of GCN 1, 2, 3, and even TeraScale.
(similarly, the RX 5xx/4xx series has GCN 1 and GCN 3 cards mixed in, in addition to the actual GCN 4 chips)
Except for the ones that aren't. And it's not even chronological -- the HD 7510, with a TeraScale 2 chip, was released after some Southern Islands cards.
And then there's the Radeon HD 7790, which has a GCN 2 Sea Islands chip.
I became aware of this because my "R9 280X" turned out to be essentially an upclocked HD 7970, using a Tahiti/Southern Islands chip. The GPU was fine; it's just a nuisance to sift through what's what...
Trying to discern whether my GPU is better supported by radeon or amdgpu gives me headaches.
The latter issue (software) would be mitigated significantly in my case if it had a good hypervisor and a WINE port, but the former issue seems nigh-insurmountable given the amount of hardware around today. It's really sad because it's a big reason that we don't have more experimentation in the Desktop OS space.
Unfortunately I never did find supported hardware and clear out enough hard drive space to install it (despite shuffling my disks around and losing my last copy of important files). Perhaps I could try running it in a VM, but why not use the outside OS at that point?
Personally, I am not a fan of the reliance on ported-from-Unix packages, especially considering that there is currently no autoremove for the mountains of dependencies they bring with them. If Haiku had containerization this could be mitigated more easily, but as far as I'm aware it doesn't.
> Perhaps I could try running it in a VM, but why not use the outside OS at that point?
My feeling as well.
Also, performance can be much better in Qt software. I remember a Qt 5-based video player playing 4K video in software on an old Celeron, while Linux on the same hardware couldn't manage it even with GLX/DRI/XV acceleration.
Obligatory defense: I refuse to believe this snipe extends to KDE on a rolling distro (e.g., Arch or Manjaro), where it's objectively a better UX than Windows.
(I stress rolling since new versions of DEs seem paradoxically better than old versions, so the versions that point-release distros like Ubuntu ship are always crashier and buggier.)
In part I guess it's because there is no Linus-enforced requirement to have sane specifications, rather than only code. In fact, they seem to encourage having Linux-specific drivers to the detriment of other OSes. Having non-upstream "generic" drivers is not only heavily frowned upon but also rather hard due to the constant breaking API changes. And for upstreamed drivers, you basically have to make your kernel code all but Linux-specific in order to have any chance; generic code or code that doesn't use all the specific Linux infrastructure is basically forbidden everywhere except staging/.
The lack of a stable API/ABI should in theory prevent "write-once-and-forget" drivers, as well as discourage proprietary ones, but in practice we have both. I would really like to see what the world would look like if there were one stable, open, popular API for drivers.
I.e., imagine they simply enforced that, in order to submit a driver upstream, you have to actually document it, so that problems can be fixed in the future.
The PC (more or less accidentally) started out as standardized hardware. Everyone could write an OS for it. Then things like sound cards, network cards, advanced video were added on top, but standardization here was incomplete to nonexistent.
All kinds of programs had their own drivers -- games shipped sound-card drivers, for example. I even had to install a printer driver in WordPerfect 5.1, a word processor under DOS.
Windows centralized these drivers. Games could talk to the sound API, word processors to the printer API. Hardware vendors could stop writing drivers for each program. But note a fundamental shift here: no more standard hardware spec, only a standard driver spec.
Microsoft didn't ask for this, but didn't fight it too hard either. History accidentally gave them a moat, and they liked it. They could easily have pushed hardware specs harder, like they did for USB with HID and Mass Storage.
There is no sane need for individual printer drivers, for example. A standard way to push a bitmap, to read some specs (DPI), and to control buttons, toner levels, and lights gets you most of the way there. We would have one stable, high-quality printer driver per OS instead of the crappy mess we have now. In fact, IPP gets us partially there. But you need a big player pushing for it, and Microsoft has everyone working only for them right now, so why would they change anything?
Linux wrote its own drivers until it got big enough to get some love from some big hardware vendors. But I see no way for it to push hardware standardization.
But today Linux has the mantle. In fact a shitton of the Linuxisms that we have today in the embedded space (e.g. "BSP"s) come directly from Microsoft.
Linux is constantly breaking these internal APIs for a reason: to keep adding new (hardware) features, while preventing each driver from implementing large amounts of would-be-common code each in their own non-standard and slightly broken way.
That's why some types of drivers tend to be huge on Windows: they're offering functionality that the (relatively stable) driver API doesn't support. This approach requires drivers with user interfaces. Good luck doing that cross-platform.
What you're asking for is highly impractical.
One of the joys of the BeOS kernel is that it is optimised to be extremely responsive to interactive, graphical workloads.
The Linux kernel is not.
The problem is that the Linux GUI stack is largely corrupt beyond recovery:
- the UI is largely awful. Most distributions adopt GNOME or resort to writing their own DE if they want to achieve a particular UX end-to-end.
- the graphics stack is largely awful. X11 is bad, and Wayland is less bad, but I want to see something that doesn't leave me wanting if it's the "be-all and end-all" of modernized UI stacks.
- the audio stack is bad. PulseAudio should be replaced, IMO, though PipeWire may fix this.
- the graphics toolkits are bad. GTK still has the proverbial thumbnail picker missing, and Qt has commercial licensing issues.
- packaging is a mess. There are at least 3 "new" packaging formats (AppImage, Flatpak, Snap), and countless traditional packaging formats. It's a nightmare for independent software vendors, who only need to produce 1 (maybe 2) binaries per any other platform.
- binary compatibility is capital-B BAD. Windows 10 can run GUI binaries compiled for Windows 95, while Ubuntu 8.04 binaries will refuse to run on Ubuntu 20.04 unless they were statically compiled. "Don't break the userspace" begins and ends at the Linux kernel - literally everything above it, including the core GUI libraries and the C runtime itself, has broken old userspace usages.
The problem is not reworking a scheduler, it's reworking large parts of the desktop stack and the culture surrounding it. At this point, I'd surrender Linux to the servers and wait another 10 years for another OS to have its Year of the "X" Desktop.
That doesn't matter because you can recompile all the crap you want.
Even if you can theoretically recompile, compiling old unmaintained code on a new distribution is a huge pain. Broken or removed API calls in every external library you depend on will need to be fixed manually. And that's assuming the library still exists, and that you don't have to port your software to an entirely new one, which can mean restructuring or refactoring the whole codebase.
That's not trivial. You need a programmer to do that, which is outside of reach of most average users.
Huge pain? You clearly know next to nothing about it. Most patches are not that big.
What it gets them is relentless hostility, contempt, and eventually deciding that it's easier to give up.
If you knew anything about Linux history you'd know that, but I guess smart-alec retorts are easier than learning.
Compared with certain other unified hobby OSes out there written in C++ or Rust, they don't come anywhere close to being usable. Not even a complex browser or office app running. They're still stuck in a VM, and yet they're celebrated here, even though this OS can run all of that on real hardware after installation.
If RISC-V is not modern hardware, then I don’t know what is.
How well does Haiku work on bare iron nowadays? Any tested laptops -- or better, any repository of tested brands/models, or of tested hardware in general?
My biggest issue running Haiku on bare metal has persisted across hardware and form factors, though: stability. I can't seem to keep a machine running for more than a few days without a kernel panic or a frozen GUI. I haven't tried the latest snapshot on my "newest" acquisition, a Haswell-era HP EliteDesk castoff from work. It's my project for this coming weekend, and I'm looking forward to seeing how long I can keep it running.
Now that I think about it, it definitely could be USB related on those other machines too. I've used the same cheap USB KVM for years and it has polling issues in Windows 10, I didn't even consider that might be the source of my issues elsewhere. I'll try it without the KVM and see what happens. Thanks!
Note that wireless only works if you do a full install; for some reason the live CD can't use the wireless driver.
So Linux is Minix, then? Where does it say that Haiku is BeOS?
I'm not claiming that such a port would be easy, but that the Raspberry Pi represents a relatively stable platform. Rather than having people turn away when they discover that component X of their PC is unsupported, they have a path where they can boot the image, try it out, and learn how to develop software for it. While this probably wouldn't address the issue of kernel/driver development, it may help to address application development.
The problem, as others have said, is the lack of dedicated manpower. This is one of those situations where I wish I were a developer; I'd dive right in and try to get Haiku to daily-driver stability on the platform. There is work being done on porting to ARM already, and my hope is that as the x86/x86-64 platform matures, focus can shift to ARM, and to the Pi specifically.
The future, rather than the past.