However, the last time I looked into this, GNUstep still had quite a way to go to catch up to macOS Mojave's version of Cocoa. At the risk of going off topic, I still don't understand why GNUstep never seemed to reach critical mass despite all these years of development. I was just a kid during the mid-1990s, but KDE and GNOME took off around this time while GNUstep kept pressing on. Here are my thoughts: maybe in 1995 and 1996 the OpenStep API wasn't attractive to developers in a world where Windows completely dominated the desktop, but that doesn't explain the KDE developers' decision to use Qt instead of GNUstep (or the GNOME developers' decision to use GTK+). There may also have been a lot of uncertainty in 1997, following Apple's purchase of NeXT, about the future of OpenStep. Those anxieties were alleviated sometime during 2001-2003, when Mac OS X was released and started gaining popularity, but by then KDE and GNOME (and their respective underlying GUI toolkits) were firmly established in the Linux desktop ecosystem. To this day they remain the dominant Linux desktops, and many less popular desktops still rely on Qt or GTK.
— System-wide UI consistency, especially in the fine details
— Singular UI/UX vision eliminates points of confusion and everything-is-a-compromise choices for third party developers
— Navigable by non-experts, even when things turn to shit
— Nominally "perfect" hardware support
— Robust colorimetry
— Millisecond audio latency
I use Linux extensively, but I'm not aching to leave macOS as my primary platform. Yes, they did start to go a bit loopy and off-track around Mac OS X 10.7–10.9, but from my vantage point it has been all uphill again since Mavericks.
The idea of running Mac apps in a Wine-like compatibility layer sounds worse than anything Apple could ever do to macOS. Yes, I like free, and yes, I like personal choice. But I value my time, and, sorry to proponents of other platforms, macOS just values my time more than any other platform does.
The Achilles heel of the open source community stems from the lack of a unifying vision and a top-down approach. This has major advantages as well as major disadvantages, but it nonetheless shapes the result. And depending on your priorities, the advantages might outweigh the disadvantages or vice versa.
And to be fair, it's equally true in the opposite direction. Open source succeeds in many areas, like robustness, hardware compatibility, longevity, and transparency/deep trust. That's why it has utterly conquered the server market. These areas of success are not accidental or arbitrary.
However, the lack of unifying vision is not an Achilles heel of the open source community. It's the essence of it. I like how any two installations of two distributions are never the same, and this broad ecosystem just works, tumbles, fights and creates new stuff.
I created a compression algorithm for my graduation project. It was a novel approach, but it performed worse for some technical reasons. One of my professors asked why I didn't compare it to a similar algorithm instead of plain old Zip. I said that our algorithm had never been tried before, and it was pretty novel, so we had no equal to compare against.
Then another of my professors replied to the one who had asked: "There's no need to find an equal to compare against. They did something new and untried. This is research."
Equally, Darling doesn't need to surpass macOS to be useful, because "this is research" too. Maybe they will learn something useful from this endeavor and incorporate it into Linux or their future development career.
This Katamari Damacy nature of GNU, Linux and free software community makes it so powerful and unique. Let Microsoft, Apple and Google have a top-down approach and others play as they feel and everyone improves the world as they wish.
The lack of unifying vision in open source can be both an Achilles heel and its most valuable asset.
The lack of a community-led ethos in macOS can be both its Achilles heel and its most valuable asset.
When read in full, we're on the same page. OTOH, it seems I didn't disagree with you on anything either. So the two comments can be nicely summed up as looks at the same lawn from opposing vantage points, since I use Linux more often than macOS (although I have a MacBook Pro that I use pretty regularly).
If my tone sounds a bit harsh, sorry for that. English is not my mother tongue. Also, if you can tell me where it's rude(ish), I can work on it.
> — Singular UI/UX vision eliminates points of confusion and everything-is-a-compromise choices for third party developers
I mean, first-party GNOME applications are as consistent as first-party Apple macOS applications, and third-party macOS applications are as heavily customized as third-party applications one might run on GNOME. I'm not sure what you base your experience on, but this is not really any more of an issue on GNOME for those who value consistency above all else.
> — Properly accessible by non-experts, even when things turn to shit
I'd like an example; I'm not really sure what you're talking about here. In my experience, when things go wrong on macOS, you're pretty much SOL until the broken feature is either fixed, reimplemented, or removed in a subsequent major release of the operating system. And Apple doesn't respond to bug reports any more than other desktop OS vendors do, including open source ones.
> — Nominally "perfect" hardware support
The graphics drivers on macOS are very poor. Apple's decision to neglect and then subsequently abandon OpenGL is pure laziness, and their implementation was already bad when they were still maintaining it. Furthermore, hardware compatibility with random gizmos on macOS leaves a lot to be desired in comparison to Linux, in my experience, though I guess your mileage may vary.
> — Robust colorimetry
colord works just fine for me; no amount of software is going to profile your monitors for you, though.
> — Millisecond audio latency
I'm pretty sure CoreAudio frames are not less than 44 samples.
What I will give Apple credit for is a great set of graphics manipulation libraries which make it simple to use high quality scaling and manipulation algorithms, and making sure it's a conscious choice to use cheaper, faster ones, and their implementation of seamless suspend-to-disk with full disk encryption is admirable (though honestly, dispensable at the end of the day). Their shaping and font rendering libraries are almost as good as Harfbuzz and FreeType 2 (though I think they've just started using FreeType at least on some platforms). Accessibility features are also pretty good, and depending on what disability you have, it's a tossup between GNOME and macOS.
— I mean, first-party GNOME applications are as consistent as
Straight-up disagree. It all depends what your threshold of UI/UX consistency is. Unfortunately most open source enthusiasts have a low threshold. It is difficult to convey the importance of a thousand subtle details, each one impossibly trivial, but the sum total moves mountains.
— I'd like an example, not really sure what you're talking about here.
An example: If my aging father's Mac turns to shit, he can hold down R, boot up the computer from the recovery disk and restore from a Time Machine backup that he himself set up with no assistance required from anyone.
— The graphics drivers on macOS are very poor.
That's really a marginal opinion. Maybe if you're an OpenGL developer. Apple provides an excellent framework (Metal) which works brilliantly. Yes it would be great if Apple delivered first-class support for Vulkan, but complaining about OpenGL is last decade's problem.
— colord works just fine for me
It's increasingly robust at handling the basics, yes.
The Metal drivers are more stable, but the shader compilers generate slow code, just like the old Apple OpenGL drivers (the ones they wrote for the Intel GPUs). OpenGL issues are not "last decade's problem": I have on several occasions needed to write workarounds in WebGL shaders to prevent the NVIDIA drivers from crashing the whole windowing system (and every application with it) on macOS. On the same hardware, the drivers on Linux are faster, more stable, and more featureful, and they also include Vulkan implementations.
> An example: If my aging father's Mac turns to shit, he can hold down R, boot up the computer from the recovery disk and restore from a Time Machine backup that he himself set up with no assistance required from anyone.
That's not a macOS feature though, that's an Apple PC firmware feature. I'll grant that AFAIK no vendor who ships a Linux distribution by default has a durable recovery partition, but there's nothing about macOS itself which makes that easier.
That you see a distinction here is telling. The end user doesn't see the distinction.
As for all of your complaints about the OpenGL drivers, that's an issue for developers, not end users. Yes, maybe there would be more and faster 3D apps if the video driver situation were better, but also maybe not? Either way, this is all irrelevant, because the fundamental argument here is about the impetus for end users to remain on a platform. Being able to eke out an extra 20% performance in your Linux app on the same hardware isn't going to shift people.
On a side note, could the people who are down-voting microcolonel please stop? This is a discussion of ideas, and his/her ideas are being expressed with valid form and structure.
I shipped an application with WebGL under the impression that it would not need to be custom tested on a decade's worth of MacBooks Pro, but received a report later that a user had lost data because the graphics driver restarted the windowing system when he opened my webpage.
If an honest, non-malicious webpage can cause your windowing system to restart, that is an end-user problem. Even Apple themselves don't bother to test their official websites on more than one generation of Mac, why should everyone who uses this now half-decade-old web API have to buy $10,000+ worth of equipment, some of it with old versions of the OS, because they can't trust the vendor of the hardware to maintain the drivers?
And this is on top of the fact that, generally speaking, on a given piece of hardware, the application will run dramatically more slowly on macOS than on Windows or Linux.
> That you see a distinction here is telling. The end user doesn't see the distinction.
I tend to agree, but in this case there's an important distinction for us to make, even if the end users are generally unaware. In the case of recovering intentional backups on a fully-functioning computer, Apple has done a good job of making that straightforward on their laptops, if you attribute that to "macOS", then you miss the point that a) any vendor could offer the same thing, even if they don't, and it has nothing to do with macOS and b) improving "Linux" won't make a recovery partition suddenly appear on your computer. Furthermore, Apple's advantage here only applies to functioning computers. Apple makes it extremely difficult to recover data from damaged devices, and in the case of the iPhone, they literally censor any mention of it being possible from the forums, and lie straight to the faces of their customers. When something gets a little bit wet, Apple will tell you that you should have bought iCloud, and that your data are gone forever.
— Apple makes it extremely easy for anyone to start backing up their computer in a way that covers an array of scenarios from accidental or malicious deletions to full-on disaster recovery.
— It has everything to do with MacOS because the creation of backups happens within a MacOS environment and the restoring of backups happens within a MacOS environment. The only "firmware" aspect is the (relatively) simple boot-time keyboard triggers.
— Apple's solution to damaged devices is having a comprehensive backup strategy. If your plan is to recover data from a damaged device, you've failed before you begin. Apple doesn't offer first-party data recovery services, but there are plenty of third party services to handle disaster recovery situations. To the extent that they make it difficult for you to recover data from a damaged device, it's because they do robust on-disk encryption.
— Yes, any vendor "could" offer the same thing. That's exactly the point: they could, but most don't.
Apple says this is the case, but it's not actually the case. Repairing a device is how you recover data from it. I know it's preferable that people make an effort to protect their data, but in the real world, approximately zero people back up anything.
But as a practical matter I don't disagree. If you failed to maintain comprehensive backups (or you have suffered a rare double-disaster) then it's great that these hardware repair experts exist.
For example, Linux Mint comes with Timeshift. I honestly haven't paid much attention to what people provide, as I've mostly used rsync historically or, more recently, zfs send/syncoid.
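For anyone curious what the rsync approach looks like, here is a minimal sketch run against scratch directories so it is safe to try as-is; real use would point the source at your home directory and the destination at a mounted backup disk:

```shell
# Mirror one directory into another with rsync: -a preserves permissions,
# times and symlinks; --delete makes the destination an exact mirror.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "important notes" > "$src/notes.txt"
rsync -a --delete "$src/" "$dst/"
ls "$dst"   # notes.txt
rm -rf "$src" "$dst"
```

Note the trailing slashes: `"$src/"` means "the contents of src", which is almost always what you want for a mirror.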
That's true in the basic sense, and my impression is that KDE is better than Gnome, but Mac OS is ahead of both in the overall consistency of interaction with the application.
All three platforms have a set of human interface guidelines.
I use KDE, and the core applications are equal with Mac OS for consistency of interaction. The difference is the niche applications, where I think a Mac OS developer puts a bit more effort into following the guidelines, whereas the KDE developer adds an additional feature or customization.
On Mac, you get that insane latency without having to put a moment's thought into any part of your architecture. I get that insane latency even if I never bothered to learn why latency matters.
"Just works" is more than ease of use. It's productivity. It shows respect to your mental load and mental priorities. And it is worth money to anyone whose time is valuable.
The plus to CoreAudio is that any system it runs on has probably been designed so that a non-technical user can get something like the low-latency you're describing by default. The minus is that hardware probably costs at least $800, and CoreAudio doesn't support things like the RPI.
The plus to ALSA is that it runs on things like RPI, with the minus that non-technical users probably won't get round-trip latency below 5 ms without paying someone Mac-level money for hardware designed especially for Linux audio.
Edit: and it's possible to go much lower, for example look at the linux-based Bela: https://bela.io/about#why-latency-matters
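To put numbers on those round-trip figures, the latency follows directly from the buffer configuration; a back-of-envelope sketch (the period sizes below are illustrative defaults, not measurements of any particular system):

```python
# Round-trip audio latency implied by a period/buffer configuration.
def roundtrip_ms(frames_per_period, periods, sample_rate):
    """Buffered audio waits periods * frames on the way in and again on
    the way out, so the round trip is twice the one-way figure."""
    one_way_ms = periods * frames_per_period / sample_rate * 1000
    return 2 * one_way_ms

# A conservative default: 1024-frame periods, 2 periods, 44.1 kHz.
print(roundtrip_ms(1024, 2, 44100))  # ~92.9 ms
# A tuned low-latency setup: 64-frame periods, 2 periods, 48 kHz.
print(roundtrip_ms(64, 2, 48000))    # ~5.3 ms
```

Which is roughly why sub-5 ms round trips demand both very small buffers and a scheduler that can service them reliably.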
CoreAudio is actually pretty good.
Without the linux-rt patchset, jack pipelines do overrun even when running jack at 10ms. Linux has quite the latency spikes.
It's possible to demonstrate this fairly quickly by running cyclictest from rt-tests.
I wasn't arguing that, I was saying that "millisecond latency" involves frames smaller than (samplerate / 1000) samples, completely ignoring overhead.
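Spelled out, since the numbers involved are small:

```python
# One millisecond of audio at a given sample rate is sample_rate / 1000
# samples, so a "millisecond latency" pipeline must process frames smaller
# than that, before counting any driver or scheduling overhead.
for rate in (44100, 48000, 96000):
    print(rate, rate / 1000)  # samples per millisecond: 44.1, 48.0, 96.0
```

At 44.1 kHz that works out to 44.1 samples, hence the 44-sample figure above.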
> Without the linux-rt patchset, jack pipelines do overrun even when running jack at 10ms. Linux has quite the latency spikes.
Are we talking about sub-10ms latency? Sure, that's a different matter, and Linux's default schedulers could stand to improve support for mixed-realtime.
Yes. Even for the audio case, having to run jack at 10ms is already a lot, particularly since Linux is not the only source of latency.
>Sure, that's a different matter, and Linux's default schedulers could stand to improve support for mixed-realtime.
Even SCHED_FIFO (where preemption is immediate, and cpu isn't released until the high priority program yields it itself) suffers from latency spikes; it is not a scheduler issue but an overall Linux design issue.
cyclictest from rt-tests will easily highlight that. Try leaving cyclictest --smp -p98 -m running in the background. After a while, you'll notice the entirely unacceptable max latency readings. All the test does is set an alarm so that the task becomes runnable (which means it should run immediately due to SCHED_FIFO) and check the difference between the alarm time and the current time.
Mainline kernel is effectively unusable for anything that requires low latencies such as audio work, as it spends too much time running non-preemptable code in supervisor mode. Linux-rt improves this situation radically, but the monolithic design simply isn't suitable for this; microkernel multiserver systems are a much better fit.
Incidentally, refer to seL4 for a system that has a guarantee in the form of a formal proof of worst case execution time.
With the decline of Apple hardware, this is less true, and modern Linux hardware support is, at worst, no worse than what other OSes provide, in my experience.
> Navigable by non-experts, even when things turn to shit
I think the "Closed Box Philosophy" of Apple either defines or disproves this: Non-experts can't be tripped up by having to fix things... because nobody outside of Apple can fix things, eh? Can't screw up something you're not allowed to do.
> The achilles heel of the open source community stems from the lack of a unifying vision and a top-down approach.
I will still take a perceived regression in hardware from Apple over anything else on the market today. They are that far ahead. Still the best touchpad and connectivity support, which are the two most important factors in my book. Also still the most aesthetically pleasing and best-known brand of laptop in the world. The latter point is not so important in my book, but I do believe it helps contribute to the resale value of the Mac, which again is class-leading among mass-produced laptops.
I get the appeal of the Apple Universe: It Just Works, everything is crafted, etc. The problem with that idea is that it's been eroded from two directions: Apple's own incompetence at making hardware and software which Just Works, and everyone else catching up to Apple at lower price points and, as I said before, while offering more meaningful choice than Apple has since the days of the Apple II.
I remember when I needed ndiswrapper to use WiFi on a laptop under Linux. I remember when I needed a bizarre Frankenstein pseudo-FTP setup to access NTFS partitions on Linux. I remember when USB didn't exist and you needed device drivers for every single thing. Those days are gone. Macs not having to deal with those things is no longer a competitive advantage.
These days, Macs are only really special in that they tie you to the Apple universe. I'm not interested in being tied to a corporation like that.
Also, if you care a lot about looks then I'd recommend checking out Razer, or aluminum chassis notebooks.
> Singular UI/UX vision eliminates points of confusion and everything-is-a-compromise choices for third party developers
There are consistent themes across both GTK+ and Qt. Pretty much all my apps share a similar theme and UI. There are Mac Apps as well that deviate from whatever "standardization" you're referring to.
It is no longer 2004. A lot of Mac users are also more than capable of running a few shell commands when needed. There are tons of web UIs and graphical apps to manage a Linux OS, from hardware, users, configuration, etc. However, a properly configured Linux OS won't need a lot of interaction on the frontend.
> Nominally "perfect" hardware support
Linux has way better hardware support than Mac. I'm not sure what you're referring to here.
> Robust colorimetry
Agreed, this is one area that needs some improvement, but last I checked it is pretty well supported.
> Millisecond audio latency
And then you open an app that’s neither GTK+ nor Qt, and you’re back to square one.
Besides, there’s more to UI than “theme”. I have ^w mapped to delete word on OS X. One line in one config file. It works universally in every text box on the system.
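For reference, that one config file is Cocoa's text-system key bindings map; a sketch of what it looks like (the selector name is one of the standard NSResponder editing actions):

```
/* ~/Library/KeyBindings/DefaultKeyBinding.dict */
{
    "^w" = "deleteWordBackward:";
}
```

Because every native text view goes through the same text system, a binding defined here applies everywhere at once.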
I tried setting that up in Linux. Eventually got it working through some gnome setting or some such. In some apps. Firefox didn’t respect it and wanted its own setting iirc. Then it would forget it every few months and trying to delete a word while I was typing would unconditionally close the browser. Fucking ridiculous stuff like that abounded. Life’s too short.
> Pretty much all my apps share a similar theme and UI. There are Mac Apps as well that deviate from whatever "standardization" you're referring to.
A few (very few anymore, in my experience) might deviate in terms of widget styling. Essentially none ignore system wide keybindings, or fail to integrate with system wide services, etc. In terms of inconsistency it’s night and day vs what I’ve had to put up with from Linux desktops.
The same is true on every OS: if you open a java app on macos, it's not going to look right. Do you blame apple for that, or the developers of the app?
> In some apps. Firefox didn’t respect it and wanted its own setting iirc.
Will firefox respect it on macos? It's not a good example anyway, because firefox is not native; firefox is the electron of the 00's. Even when firefox tries to emulate the native theme it screws up; I had to turn off my dark theme just to get text areas with visible text.
I’d blame the app, because in the case of OS X there is a single consistent set of UI components that effectively everything uses, and being gratuitously incompatible is on the app.
In the case of Linux there is only a hodge-podge bazaar of gratuitously incompatible UI kits that every third app disagrees on which to use, so it’s hard to blame any single dev for the universally frustrating shitshow that results.
As an aside, the only java app I’ve interacted with in years (seriously, how often do these even come up any more?), IntelliJ, actually sunk the effort into looking and feeling right. Like I said, it’s night and day on this stuff versus what I experience in Linux on the regular.
> Will firefox respect it on macos?
> It's not a good example anyway because firefox is not native, firefox is the electron of the 00's
Electron apps also get this 100% right on OS X.
I honestly don't believe that the inconsistency in colors or GUI elements that exists on a typical linux system is a meaningful barrier to anyone.
A typical desktop ships with mostly qt or mostly gtk apps that share the same look and feel, plus a browser that looks and works like the user is used to across platforms.
In a sense, the browser is actually the single most important app for the majority of users, and it's more useful for it to be consistent with expectations than consistent with the desktop.
I think power users are more apt to be put off by small differences in much used keybindings but they are the minority.
While consistency may seem like an impossible task, there are really four big camps: Chrome, Firefox, GNOME, and KDE.
This seems like a small enough group that getting everyone to agree on a common way to communicate the desired keybindings to all of them ought to be tractable.
I think this is a worthwhile endeavor we ought to pursue.
As someone who was part of the GNUstep scene and contributed a tiny bit of code, I think the reason is obvious: GNOME was getting the big corporate investment, and KDE also had a stable foundation somehow. As you say, the "Linux desktop" very quickly came down to a GNOME/KDE duopoly. Plus, GNUstep was written in Objective-C, which was outside a lot of devs' comfort zone at the time. So GNUstep only managed to attract a few passionate hobbyists, but no more.
Whereas GNUstep was associated with an utterly failed platform.
But subsequent to the NeXT buyout and as MacOS increasingly proved its geek credentials, you can see how the community yardstick has progressively shifted in its direction. Had the serious push for a Linux desktop begun ten years ago and not 25 years ago, GNUstep might well have been the victor.
They've added theming support to GNUstep, but that's really lipstick on a pig. Then again, I can't blame them too much; the project is mostly quiet and doesn't attract as much attention as the other popular Linux toolkits.
In fairness, they've made strides since then. The Foundation library seems to be aiming for compatibility with a macOS release from several years ago, at least still more recent than the early-to-mid 90s.
Edit: It looks like they already have Foundation working.
Since what Apple products do has proven adequate for a large number of people, and you are no less adaptable than they are, you know you can adapt, too.
Having once chosen to adapt to what Apple has chosen to offer, you find it easier each time, until it becomes wholly unconscious. Each time Apple takes away something you had used, you might momentarily balk at the "upgrade", but always acquiesce in the end.
[Edit] So, the appeal of Mac emulation is very limited, because it starts out with tinkering.
The things that are resistant to tinkering in MacOS are the UI and how you do stuff. Those are infinitely flexible in Linux, but heavily standardised on the Mac. However when it comes to getting useful stuff done, the Mac has a wealth of tinker-y toys waiting to do your bidding.
The amount of "tinkering" I have done on my Ubuntu PC was limited to changing background and reducing icon size to fit my monitor better.
I'm not going to apologise for having better performance and first class containers.
Outliers need not apply.
Tinkering is endlessly playing with the window manager configs, changing desktop environments, getting your system "just so", switching this (e.g. audio framework) for that, etc.
But not completely resistant. For example, there are multiple tiling window managers for os x.
I won't say that tinkering is totally productive, but 1) personally my tinkering on Linux has led to technology development and I'm the type of guy who benefits from that anyway, and 2) I believe the pain you leave behind in lost productivity is at least offset by 25% instant upgrade-related stress for each Apple device in your household or sphere of personal work activity.
Even just constrained to hardware, the horror stories about new Apple devices alone made me doubt my HW upgrade plans. By itself that was enough to make me wonder if I was about to throw away thousands.
Unsurprisingly I felt like there were things Apple could do to make this all better, but like you said, they cannot be made to do what you like :-)
Many old Mac users were very unhappy about the upgrade from Classic to OS X. "File extensions? Non-spatial Finder? A command line? What is this bullshit, and why does it run so slowly on my top of the line 400 MHz Power Mac G4..."
But eventually they adapted. At present, Apple is "boiling the frog" on turning Mac into something closer to an iPad Pro with a keyboard (Marzipan brings iOS UI style to desktop; mandatory app notarization prevents running non-approved software; etc.) Despite the grumbles, Mac users will acquiesce here as well and just get on with their work.
Talking to Apple enthusiasts is really tough. It's almost as if you are speaking a different language. Of course, at the end of the day a computer is just a tool. They are happy with what they can do with their machine, and I am happy with mine. But they have a hard time understanding why I see their system as limiting.
Personally I like to tinker and personalise only a subset of tools I use to get things done (iTerm, tmux, vim...) and have good defaults on the rest.
> Personally I like to tinker and personalise only a subset
> of tools I use to get things done (iTerm, tmux, vim...)
> and have good defaults on the rest.
Our lives, especially with the high workload we still have to cope with, don't allow us to choose many things to do when not working.
If you have a family with kids, you lose every tiny bit of freedom to "waste your time" on tinkering and get easily annoyed when things just don't work out of the box.
I like the possibility to customize everything on linux, but I also hate how the linux world can't provide the standardization and clarity I am used to from OSX.
How on earth is there still no terminal emulator like iTerm2 on linux????!
On OSX, I miss the possibility from linux to customize everything. I hate how Apple always tries to lock me into its golden cage and imposes its way of thinking on me.
80% of the time I can totally agree with the Apple way, but there is the 20% when I could throw that MBP against the wall with full force.
Ok, I didn't want to start some OSX vs. Linux debate here; it's just an example to show the love-hate relationship with both.
It's great there is some effort put into connecting these worlds. The way our economy works is the reason we don't have the computers and OSes we really want to have and sadly only the open source world will change this.
> How on earth is there still no terminal emulator like iTerm2 on linux????!
* User accessible features like the tmux integration (look at this: https://www.iterm2.com/documentation-tmux-integration.html)
Just to mention a few advantages.
Of course most of the awesome apps like tmux just need you to learn a bunch of new commands but I can't imagine myself doing that a lot over my whole lifetime.
Some things I just want to use without a steep learning curve and apps like iTerm2 prove that this is possible.
I have used the shell a lot for many years, but I'm far from using it the way I could imagine it in the 21st century (the shell is still the superior interface for computers, in my opinion, but I'm afraid this topic hasn't really gotten much attention or many innovative approaches in the last few decades).
For example, I don't know of a terminal on linux that both works as a drop-down terminal (quake-like) and supports inline images.
Or split screen & password manager.
That was the REPL of non-UNIX graphical workstations of yore, and inline images were naturally part of it.
Many users are not college students discovering the world of UNIX and customization and tinker happy -- they're people that want to get something (not "how the computer works" related) done.
I think your post highlights exactly that: a buy-in to the brand so strong as to not even try and look elsewhere when issues arise, just adapt and accept.
But, I also love to tinker and automate, and MacOS gives me a full Unix environment to play with.
The IBM PC was the exception here, and it only grew thanks to IBM's mistake of not protecting their BIOS as well as they thought they had.
I'm a long-time Apple user, going back to the Mac SE. The reason I switched to OSX was that, effectively, I was getting a Unix OS that ran MS Office and came with a really nice suite of built-in programmes that 'just worked'. It was the best of both worlds.
I'm a tinkerer and even today MacOS still allows me to tinker quite a lot. But there's no requirement to tinker. If I want to actually get work done, I can use it in 'It just works' mode.
The appeal of Darling isn't bringing macOS to Linux, the appeal is breaking down the walls put up by shortsighted developers who decided not to bother porting the apps they developed for macOS over to a different platform because "every developer I know uses macOS".
Linux users who would want something like Darling aren't aspiring to be macOS users, they are Linux users who want/need to use software written by macOS users who didn't put in the effort to port their software.
I was a strict Windows user for a long time, and I remember the days of reinstalling Windows every few weeks or so to get performance back. Microsoft seems to acknowledge this (but not fix it!) with the feature of resetting your PC.
I stopped using linux when the system completely broke after updates (on a slow connection) and I spent way too much time trying to fix it. (This was last year, on Ubuntu.)
When I was younger I'd just deal with them, but now I just use OS X.
There are countless utilities for tinkering with your Mac setup, and the best and most tinker-y terminal for any platform is a mac-only app (iterm2). It's just that macOS starts out at a far higher usability level without tinkering, and comes with lots of basic stuff working that no amount of tinkering will ever give you with linux (like actually being able to find files on your computer; good luck doing that with linux).
And of course even if your premise were true, there are plenty of reasons people would like to run macOS apps under linux:
- lots of good software is mac only
- automated testing with macs is a pain and expensive, doing at least some with linux boxes would be a pretty decent win.
I was with you right up until you made the cheap shot at Linux. That was as unnecessary as it was untrue.
> (like you can actually find files on your computer, good luck doing that with linux)
Linux is no harder to find files on than OS X. They have the same CLI tools, and they both have desktop environments that support file indexing and rapid searching (eg Spotlight). And on Linux that all gets installed by default with the desktop environment - just as it does on OS X. So they really aren't all that different.
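For what it's worth, the most basic case really is identical on both: a sketch using POSIX `find`, which ships on Linux and OS X alike.

```shell
# Find all PDFs under the home directory modified in the last 7 days;
# POSIX find works the same on Linux and OS X, so this is portable as-is.
find "$HOME" -type f -name '*.pdf' -mtime -7 2>/dev/null
```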
> lots of good software is mac only
I agree. But there is lots of good software on Linux too. In all my years of running Linux and OS X, the only Mac-only application I've missed on other platforms is Logic. But even there, we're talking 15 years ago, and Linux has come a long, long way since then in terms of the quality of DAWs available on it.
Here are some concrete examples:
- macOS has essentially a single set of efficient and consistent keybindings that works everywhere. Command line and GUI work essentially the same: I can use emacs-style navigation with C-a C-e C-f C-b etc. everywhere, and I can copy with Cmd-C and paste with Cmd-V in my terminal. The geniuses who created the first mainstream Linux GUI paradigms decided to copy Windows and go with Control as the main key modifier, creating a set of clashing keybindings for GUI and console.
- file history (If I messed something up in my keynote presentation, I can easily compare previous versions and restore what's needed)
- Cmd-? allows you to access any menu item quickly by search, how do I do that on Linux?
- finding files (see below)
- MacOS can recover from memory pressure fine. How do I get my linux machine not to effectively crash if I run an app that happens to use too much memory (technically it just swaps itself to death; in practice a reset is the only remotely timely way to recover)? I've tried any amount of tweaking, but it turns out that you can't turn off overcommit and swap completely; even if you have lots of memory, things will just break randomly (chrome for example).
> Linux is no harder to find files on than it is on OS X.
Can you point me at something on linux that comes anywhere close to spotlight/mdfind?
One of the reasons spotlight works well is file system integration; to the best of my knowledge nothing on linux does that.
It's trivial to open a file by recency or contents or type (or tags or ...) from the open dialogue of any application on macOS, how do I do that on linux?
If I want to find all mp4 videos of 1000x1000 resolution that I modified within the last week, or all jpeg files with sRGB color profile, I can do so instantaneously with mdfind.
That is not to say that linux isn't more ergonomic for certain things, but in my experience they tend to be mostly limited to things only programmers would care about (/proc is the number one thing I miss on a mac; some commandline utilities are also nicer on linux, but you can normally install them easily enough on macs as well).
For what it's worth: I'm using both linux and macOS daily and am both productive and reasonably expert with both.
Re. file history: use real version control or a filesystem with this functionality (I imagine that ZFS would). Better to just use Git.
Re. Cmd-?: no alternative exists that I know of. However, menu bars are usually less prevalent on software written for traditional Unix than on a Mac.
Re. Finding files: it appears GNOME does this OOTB with the search bar. Otherwise, use search in Nautilus like you would in Finder.
Re. Memory pressure: search for “swappiness Linux” in a search engine. Do you prefer maximum available memory or responsiveness? Linux gives you the choice here.
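To make that concrete, the knob in question lives in /proc (Linux-only, so this is a sketch of the trade-off rather than a recommendation):

```shell
# Current swappiness: lower values tell the kernel to prefer dropping
# page cache over swapping anonymous pages out; 60 is the common default.
cat /proc/sys/vm/swappiness
# Changing it at runtime needs root, e.g.: sysctl vm.swappiness=10
```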
Recency, contents and type can all be sorted with Nautilus on GNOME. I’m not aware of a way to find very specific files that meet your search query quite like you mentioned (although I’m sure they exist). I would pipe file into grep personally, but that’s probably too rudimentary for what you want.
How would this help with my example? Git is terrible for managing non-text files and has zero support for browsing such files interactively. It also doesn't work at the file level and is pretty much unusable for anyone who isn't a developer.
> Re. keybindings: you can change your default GTK bindings to Emacs-style if you want to.
Yeah, but that doesn't really work; it just makes the whole mess even worse (oops, you can no longer select everything with a shortcut; webapps and other toolkits don't care; etc).
> Do you prefer maximum available memory or responsiveness? Linux gives you the choice here.
It doesn't. I want responsiveness, but I can't have it. Turning off overcommit (and swap) does improve responsiveness but is not a viable option for a desktop system, apps will just break if you turn off overcommit completely.
> Recency, contents and type can all be sorted with Nautilus on GNOME.
For me this works neither reliably nor with acceptable performance (unsurprising since a proper version needs FS integration).
Back when I got my first SSD I ran Linux without a swap file/partition. I did this partly because it was only a 60GB SSD so I wanted to conserve space. I also did it because I didn't want to shorten the life of the SSD (this was back when such a thing was a concern), and I ran that set up for years on a pretty modest 8GB RAM with KDE installed (ie not just a lightweight tiling WM).
> unsurprising since a proper version needs FS integration
It really doesn't, and apfs (your file system on OSX) doesn't even do this. In fact it's probably better that your meta-data indexer isn't embedded into your file system driver, because you're just going to slow down file system operations - which matters a lot on UNIX-based platforms because they do lots of file system operations.
A far better approach is to have your indexer run as a separate process that monitors file writes (you can still have a kernel hook for that if you wish); you can then catalogue your files without interrupting your normal file system operations. You can also add more granularity, like a separate database per home directory (which would be much harder to do securely if you were embedding that code into the fs driver, without going down the route of having multiple tanks à la ZFS). It also makes it much easier to optimize your meta-data db, since you can now dump everything into an RDBMS rather than attach it to the space-constrained inodes.
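A toy sketch of that decoupled design (GNU find's `-printf` assumed): a walker process dumps per-file metadata into a flat index, and queries then hit the index instead of the live file system:

```shell
# "Indexer": walk the tree once, recording path, mtime and size per file.
find "${DIR:-.}" -type f -printf '%p\t%T@\t%s\n' > /tmp/fileindex.tsv

# "Query": all indexed files larger than 1 MiB, answered from the index
# alone, without touching the file system again.
awk -F'\t' '$3 > 1048576 {print $1}' /tmp/fileindex.tsv
```

A real indexer would use an RDBMS and a change-notification hook instead of periodic rescans, but the separation of concerns is the same.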
For what it's worth, this is another area I have first hand experience with because I've written a few hobby file systems over the years. Nothing serious nor performant; just myself messing around with a few ideas. But it's still earned me a greater appreciation for the design decisions behind the file systems we do commonly use.
You can turn off swap if you don't need hibernate, and from memory even turning off overcommit used to be OK-ish (of course most software written for linux doesn't try to deal with failing malloc requests gracefully, because there's no point since it never happens in the default configuration). You end up with a noticeably snappier system. However, this no longer seems to work in practice. Try turning off overcommit completely and see how long it takes Chrome to crash even if you have a lot of available memory.
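For reference, the setting being discussed (Linux-only; flipping it to strict mode is exactly the experiment described above):

```shell
# vm.overcommit_memory: 0 = heuristic overcommit (default),
# 1 = always grant allocations, 2 = strict accounting ("off").
cat /proc/sys/vm/overcommit_memory
# Strict mode would be enabled (as root) with something like:
#   sysctl vm.overcommit_memory=2 vm.overcommit_ratio=100
```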
> A far better approach is to have your indexer run as a separate process that monitors file writes (you can still have a kernel hook for that if you wish)
This is how spotlight works though, no? It's a separate process that gets notified by the kernel on file system changes and then indexes them (that's what I meant by FS integration). I agree that you don't want to synchronously update all indexing meta info on FS operations, because everything will grind to a halt if you do that. But you still want OS support such as reliable notification and extended FS attributes to store things like "this was downloaded from here" or tagging. I don't think there is anything particularly magical about this (spotlight is 15 years old tech and linux has had xattr support in all major file systems for ages), but in practice xattrs end up pretty much useless on linux because next to nothing uses them (baloo probably does), and as far as I'm aware there is no robust file system change notification API (you can use inotify for some stuff, but it's limited in various ways). I'd love to be wrong about this though.
I think the situation is better on macOS, but it might just be that spotlight is more polished and there is no fundamental difficulty in writing the same for linux these days.
Do people still hibernate? I thought these days suspending was a solved problem.
> Try turning off overcommit completely and see how long it takes Chrome to crash even if you have a lot of available memory.
I thought the point of this discussion was talking about sane defaults? Of course if you're going to mess with kernel parameters then you run the risk of getting undesired behaviour. It's no different to when we used to tweak the BIOS in the 90s. So I'm not going to disagree with you there. But what are you actually proving aside how easy it is to break things if you mess with core settings that are designed for experts?
> This is how spotlight works though, no? It's a separate process gets notified by the kernel on file system changes and then indexes them (that's what I meant with FS integration).
That's not file system integration though. What you were actually describing was a completely different behaviour. Moreover, you claimed that Spotlight works differently from other tools of its ilk, and that is also untrue.
> (spotlight is 15 years old tech and linux has had xattr support in all major file systems for ages)
Again, you don't want that information in the file system table. Storing every little bit of information like that in xattr would slow down standard file system operations. What you actually want to do is store that information in a separate RDBMS (eg sqlite3, MySQL/MariaDB, etc). To be honest even something like Redis might work as long as it has a persistent backup.
> as far as I'm aware there is no robust file system change notification API (you can use inotify for some stuff, but it's limited in various ways). I'd love to be wrong about this though.
I've not spent a great amount of time with inotify but from my limited exposure I do recall it wasn't great with nested hierarchies. There's probably some better ways that I don't know of but this is a particular problem I've not needed to solve before so I'm as in the dark as you are.
> I think the situation is better on macOS, but it might just be that spotlight is more polished and there is no fundamental difficulty in writing the same for linux these days.
Honestly, I think the perceived differences are all imaginary. Like wine tasting when you're told one bottle is expensive and another is moderately priced - lots of people will start to imagine deeper flavours in the more expensive bottle even if those flavours don't exist. So much of our perceptions are based on expectations rather than experiences and I think that's what's happening here because I've used both Krunner and Spotlight and my honest impression is that they're both much the same.
I should probably have phrased this differently; "kernel file system layer integration", maybe. The relevant (and presently, I believe, lacking) part in linux would be VFS. It also relies on applications making consistent use of xattrs for some functionality, something that also does not happen on linux.
> Moreover, you claimed that Spotlight works differently from other tools of it's ilk and that is also untrue. [...] Honestly, I think the perceived differences are all imaginary.
Right. I'm not a file system expert, but I'm increasingly wondering if your confident pronouncements are backed up by sufficient knowledge of what you're talking about. Spotlight is implemented with major kernel support in the form of fsevents. This allows the user space portions of it to receive fairly reliable and timely notification of file system changes efficiently. This is a key ingredient to make it work as well as it does.
(see e.g. https://eclecticlight.co/2017/09/12/watching-macos-file-syst...)
Now the thing is, linux doesn't have a direct equivalent (or at least if it now has, it's a pretty recent thing, more than a decade after spotlight).
Quoting from lkml (https://lkml.org/lkml/2016/12/20/312)
> Other operating systems have a scalable way of watching changes on a large file system. Windows has USN Journal, macOS has FSEvents and BSD has kevents.
> The only way in Linux to monitor file system namei events (e.g. create/delete/move) is the recursive inotify watch way and this method scales very poorly for large enough directory trees.
> But what are you actually proving aside how easy it is to break things if you mess with core settings that are designed for experts?
Let me try again: with default settings my high-spec linux box ground to an unusable state (and no, I'm not making it up) frequently enough that I got sick of it. So, contrary to what you (somewhat rudely) continue to imply, I'm not some bozo who randomly screwed around with system settings he didn't grok on a whim and then started whining after everything broke.
> Again, you don't want that information in the file system table.
Yeah, you do, because that way it stays around when you copy, move or archive the file. You probably only want to do that with a few select metainfo fields (like the examples I gave earlier: download origin info and user-supplied tags), but that's exactly what macOS does. Also, whilst I agree that storing search indexes and everything directly in the file system is probably not ideal, there is historical precedent of systems that did exactly that, fairly successfully from what I hear (BeOS/BFS).
P.S. Maybe a more productive direction: what is your recommended way of setting up baloo or some other linux indexer for running mdfind-style commandline queries (I don't want KDE or Gnome, and I think baloosearch vs mdfind is also easier to compare directly)?
But that's not how any of those other services work - including Spotlight.
> I'm not a file system expert, but I'm increasingly wondering if your confident pronouncements are backed up by sufficient knowledge what you're talking about.
I appreciate your frustration, but the problem here is that you keep conflating multiple different technologies and not understanding the distinction I'm trying to make between each of them. I admit I'm not the best at explaining complex technologies (though I wouldn't say the stuff we're talking about is particularly complex), so maybe this conversation is better served by you doing some independent research, because there is clearly a language gap between what I'm trying to describe and what you're apparently reading.
But the crux of it is you seem to think Spotlight stores all of its data in the file system itself and is unique in that regard. That isn't true on both counts:
1. Spotlight will use a separate database - not xattr - to store its indexes.
2. Every tool akin to Spotlight (including Krunner) does the same
There is the caveat that some of the searchable parameters in Spotlight obviously would be in the file system as well as Spotlight's database - which might be where you're getting confused? But not everything you described would be in xattr, and Spotlight itself wouldn't be running slow file system scans to return its results when it could instead use a local cached database (as I described above) with indexed fields against several parameters rather than just the inode number (which I'll get into later).
You also seem to think that inotify and/or fsevents count as "file system integration". They do not. They are completely separate APIs. Whether they're backed by a kernel syscall is completely beside the point, because they're not part of the file system ABI. Thus they're not actually tied to the file system itself (ie Spotlight can then work against any file system rather than just apfs).
> Let me try again: with default settings my high-spec linux box ground to an unusable state (and no, I'm not making it up) frequently enough that I got sick of it. So, contrary to what you (somewhat rudely) continue to imply, I'm not some bozo who randomly screwed around with system settings he didn't grok on a whim and then started whining after everything broke.
But you are overcommitting resources to virtual machines and then moaning when it grinds to a halt. Which isn't any better than tinkering with kernel parameters and making the same complaints.
> Yeah, you do because that way it stays around when you copy, move or archive the file.
That's what fsevents is for ;)
By the way, even the file system doesn't index files by file name or path. Every file system object (files, directories, TTYs, etc) on UNIX and Linux is just an inode. So even the file name and path are just metadata stored against the inode. The kernel itself doesn't understand file names; it just passes inode numbers around, and your file system driver will return metadata such as the file name - if requested - to the calling userspace tool. That's how it works at a low level, even though file names and paths feel like first-class parameters in the userspace tools we use.
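You can see this directly with a hard link (GNU `stat` flags shown; macOS's `stat` uses `-f` instead):

```shell
# Hard-link a file: both directory entries point at the same inode,
# showing that the "name" is metadata referring to the inode, not part of it.
tmp=$(mktemp -d)
touch "$tmp/data"
ln "$tmp/data" "$tmp/alias"
stat -c '%i' "$tmp/data" "$tmp/alias"   # prints the same inode number twice
```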
The reason you don't want too much metadata in the file system itself (eg xattr) is because it slows down file system operations. In fact many GUI platforms intentionally store extended attributes in hidden files (technically just dot-prefixed, because there isn't actually a "hidden" attribute on UNIX) for that reason. Partly that reason, anyway - the other part is that not all filesystems support xattr. Which is actually another reason Spotlight wouldn't want to use xattr.
> Also, whilst I agree that storing search indexes and everything directly in the file system is probably not ideal, there is historical precedent of a systems that did exactly that, fairly successfully from what i hear (BeOS/BFS).
I did run BeOS but I can't remember much about BFS so I'm not going to comment on that specifically, however the other systems were split between two camps:
1. They either stored extended attributes in hidden files or directories - such as .Directory (KDE), .DS_Store (OSX), desktop.ini (Windows) - or
2. instead of a traditional file system layout they had what is ostensibly a fully fledged RDBMS. Those tended to be exclusive to mainframes, but Microsoft was experimenting with a similar approach with WinFS in Longhorn (I think it was called?). However it was eventually canned due to its shitty performance.
That's at least the historical precedent of storing super-detailed meta-information. Historically, the stuff that appeared to be stored as xattr was often just read from the file data itself (eg image sizes might be read from the JPEG headers). In fact in the 90s it was common for some platforms to identify the type of a file by literally reading the first few bytes of that file (eg does it have a pkzip header?), and some CLI tools still do this: `file` does exactly that, and `grep` reads the first 1000 (exact number escapes me) bytes and, if there is a null byte (0x00), assumes the file is binary rather than text and prints a "binary file matches" message instead of the matching lines.
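The pkzip check is easy to reproduce by hand (a sketch; real `file` consults a whole database of magic numbers, not just this one):

```shell
# A zip archive starts with the magic bytes "PK\x03\x04"; checking the
# first two bytes is enough to take a guess, the way `file` does.
printf 'PK\003\004payload' > /tmp/demo.bin
[ "$(head -c 2 /tmp/demo.bin)" = "PK" ] && echo "looks like a zip"
```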
As an aside, one of the hobby file systems I wrote was along the lines of (2) too. It used vanilla MySQL/MariaDB as the back end, because one of its features was that you could then connect to a remote filesystem via a simple MySQL connection string. It was a pretty fun project and I'd gotten all the read operations working, but there were a few bugs with the write operations that I never fully solved, and I eventually lost interest when I started working on other projects.
> P.S. maybe a more productive direction: what is you recommended way for setting up some baloo or some other linux indexer for running mdfind-style commandline queries with it (I don't want KDE or Gnome and I think baloosearch vs mdfind is also easier to compare directly)?
Honestly, I don't know. I might not like Windows much as a platform but I do really like the explorer.exe shell as a UI paradigm. So I tend to gravitate towards KDE on Linux (plus I think the KDE team have done a great job refining that paradigm in ways that Microsoft has failed to). Krunner has always "just worked" for me, so I haven't spent any energy looking for ways to replace it. However I'm sure there will be some guides online about setting up runners (is that what they're called?) on Linux, given the diversity of its ecosystem.
Honestly, I find Mac OS keybindings and keyboard layout the worst thing about using Macs. Yes it might make some sense but when literally every other platform on the planet follows the same standard apart from Apple, it then makes Apple the ugly stepsister regardless of how rational it might be on paper.
I mean if you only ever use Macs then I guess you might like it, but for anyone who swaps between systems (or even just wants to use a non-mac keyboard) it can be very annoying.
> file history
You can have that in Linux
> Cmd-? (how do I do that on Linux?)
*shrugs* Maybe you can't. But that's just one feature. As a counterargument I could list a dozen things that are in Linux that aren't in OSX. Like up-to-date core utils and proper package management. Which are just about the two most important things on a dev machine - far more important than Cmd-?. And sure, you could install GNU core utils via brew, but none of that is part of the default OS X build - which matters because the whole basis of your argument was that OSX has better defaults.
Ultimately though I don't see the point in nitpicking each OS - feature by feature.
> finding files
I'd already disagreed with this in my previous post after you made that claim earlier
> MacOS can recover from memory pressure fine. How do I get my linux machine not to effectively crash if I run an app that happens to use too much memory (technically it just swaps itself to death, in practice reset is the only remotely timely way to recover)? I've tried any amount of tweaking, but turns out that you can't turn off overcommit and swap completely, even if have lots of memory, things will just break randomly (chrome for example).
The problem there is the application. However Linux will just kill the last process that over allocates memory. If you're getting the kind of symptoms you've described then you've either fiddled with your swap file settings (so not running defaults) and/or you're running Linux on some pretty awesome spinning disks while comparing it to nice fast SSDs on OSX. Either way, you're not comparing like for like.
> Can you point me at something on linux that comes anywhere close to spotlight/mdfind?
There's loads. Krunner, for example, has all the same features as Spotlight plus supports plugins to extend it. For example I can run math calculations in it - which I haven't yet worked out how to do in Spotlight.
> For what it's worth: I'm using both linux and macOS daily and am both productive and reasonably expert with both.
But do you actually use desktop Linux on modern hardware? Or are you just running Linux on a few servers and guessing about the desktop experience. I ask because your comments were valid about 10 or 15 years ago but really aren't the case any longer.
> That is not to say that linux isn't more ergonomic for certain things, but in my experience they tend to be mostly limited to things only programmers would care about
This I do wholeheartedly agree with.
You can use lvm or zfs snapshots, but that's not what I'm talking about – I'm talking about in-app browsable history of things like documents or presentations.
> Krunner, for example, has all the same features as Spotlight
Last I checked it used Baloo to do the actual indexing. The list of high-priority features/bugs on the project site https://community.kde.org/Baloo ("Baloo crashes a lot in various places" etc.) and a quick google make it look like it remains alpha software at best, and I'm also pretty sure it doesn't have an equally reliable index update mechanism. The most important thing about spotlight for me is that it can search file names and content (filtered by type if necessary) fast, reliably and up to date. But you can also do types of searches that, as far as I'm aware, none of the linux utilities can do.
E.g. show me all the items I downloaded from a google.com domain:
mdfind "kMDItemWhereFroms == '*google.com*'"
mdfind -0 "kMDItemWhereFroms == '*google.com*'" | xargs -0 -n1 mdls -name "kMDItemWhereFroms" | sort | uniq -c | sort -n
You literally just type what you want calculated, e.g. `sin(pi/4)`.
> As a counterargument I could list a dozen things that are in Linux that aren't in OSX. Like up to date core utils and proper package management.
nix. By my lights the only proper package management for any OS. Works fine under both linux and macOS (and will also trivially supply you with up to date coreutils).
> But do you actually use desktop Linux on modern hardware?
I have been using (well-specced) linux desktops for most of my work for a long time.
> The problem there is the application. However Linux will just kill the last process that over allocates memory.
I don't think that's how it works. The whole point of having a proper OS (rather than, say, DOS) is that a misbehaving app won't just bring down everything else. Also, if you have a process that wildly allocates memory, by default linux will start off swapping like mad, making your computer effectively unusable (and yeah, in fact my linux desktop does have an SSD and many times as much RAM as my macbook, so if I'm not comparing like to like, my linux station is the one with the more powerful hardware). And even if it runs out of swap it doesn't just kill the last process; it uses a more complex scoring algorithm, which has a good chance of killing something you didn't want to be killed.
I got that. It's still just some application UI wrapped around a CoW file system. Maybe a better way of saying your point is "doing the same on Linux lacks a lot of polish" - which is true. But that's what happens when Linux has to support a multitude of file systems while Apple can control every aspect of their ecosystem.
> Last I checked it used Baloo to do the actual indexing,
Possibly? Krunner has always "just worked" for me so I've never bothered to look under its hood.
Regarding the bugs you found: well, I'd argue that you should expect to read bugs on a bug tracker, given that's the point of bug trackers. It does feel like what you're basically doing now is the equivalent of reading a 1-star review of a product (eg on Amazon) and claiming it doesn't work by proxy, while ignoring all the 5-star reviews from people who haven't had any issues. It's a heavily biased way to hold a discussion, and if we're both honest, Macs haven't been without their fair share of bad publicity either. So is it really worth our time cherry-picking all the negative things when you and I both know that they're the exception rather than the norm?
> You literally just type what you want calculated, e.g. `sin(pi/4)`.
Handy to know. I suspected it would have been possible but I kept prefixing the formula with `=` which Spotlight didn't like.
> nix. By my lights the only proper package management for any OS. Works fine under both linux and macOS (and will also trivially supply you with up to date coreutils).
My point is you shouldn't have to install a 3rd party package manager. That's the bare minimum a modern OS should provide out of the box.
> I have been using (well-specced) linux desktops for most of my work for a long time.
I struggle to believe that given the descriptions of faults that you've been discussing. Though you have also said you've tinkered with the "swappiness" parameters (plus more) so I guess it's possible that you are running current hardware but have inadvertently tweaked Linux into performing terribly? Or maybe you're just exaggerating all these problems to make a point (much like your "look, I've found a bug on a bug tracker" comment above).
Either way, if the problems were as prevalent and severe as you keep describing then you and I - and millions of other techies for that matter - wouldn't be running Linux.
> The whole point of having a proper OS (rather than say DOS) is that misbehaving app won't just bring down everything else.
"Proper OS" is such a flaky term, and what you described isn't even the "whole point" of running an OS. But that's a whole other tangent. More importantly, Linux doesn't do what you're accusing it of doing. Thus your statement is simply untrue in a multitude of ways.
> Also if you have a process that wildly allocates memory, by default linux will start off swapping like mad
It's actually a great deal more complicated than that. It depends on the size of your swap file, what applications you have open and their current running state (ie can they be paged). It depends on whether your cache is non-zero and it also depends on the kernel parameters you define.
> And even if it runs out of swap it doesn't just kill the last process, it uses a more complex scoring algorithm which has a good chance of killing something you didn't want to be killed.
Depends on the version of Linux (the kernel) you're running. Older kernels will just kill the last requester. Newer kernels do have a scoring algorithm, but it's really not that complex at all (if memory serves, it's ostensibly just a percentage×10 figure of used memory).
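The score is actually visible per process (Linux-only), and userspace can bias it:

```shell
# The kernel's current OOM "badness" score for this shell; higher
# scores get killed first when memory runs out.
cat /proc/self/oom_score
# oom_score_adj (-1000..1000) lets you protect or sacrifice a process;
# -1000 exempts it from the OOM killer entirely.
cat /proc/self/oom_score_adj
```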
So? In terms of usability impact I still consider it a major feature (that no amount of tweaking will get you on Linux).
> My point is you shouldn't have to install a 3rd party package manager.
But macOS has a "package manager" – it's called the App Store. You and I may not think it sufficient for our (developer) needs, but we're not representative users. And for normal users, and even myself, it offers very useful functionality over what they'd get from the typical native linux package manager. You can trivially reinstall everything on a different machine with a different OS version (as long as it's not super ancient), and it works – no "DLL hell", because everything is essentially self-contained. And since software is tied to your account, there is no need for crufty apt queries in the hope of getting out a list of packages you can back up for reinstall elsewhere or after a clean upgrade. Ubuntu has tried to establish a clone in the Snap Store, but no one I know seems to use it and I haven't tried it myself, so I don't know how compelling it is.
And I need to install a 3rd party package manager on (non-NixOS) linux distros anyway, because IMO apt, yum etc. fundamentally suck and nix is the only thing that doesn't. Funnily enough, the only really compelling UX argument for linux instead of macOS for developers I can think of apart from /proc is that with NixOS you can codify your complete machine setup in a single nice config file, making it super easy to replicate, backup or inspect.
> Or maybe you're just exaggerating all these problems to make a point [...] More importantly Linux doesn't do what you're accusing it of doing.
It's a bit annoying to be told that what I'm saying literally can't be true. It is, and I didn't tweak any sysctl params or the swap setup before I got tired of my machine grinding to a halt and me having to reset it. I can assure you it's entirely possible to have a high-grade desktop with an SSD and have linux fall over, swapping endlessly without even being able to move the cursor anymore. Of course this doesn't happen in "everyday" usage, otherwise no one would be running linux, but it's not that hard to trigger if you're running VMs, a few browsers and dev tooling that can potentially consume large amounts of memory very quickly. I've moved away from having to use these tools (and also tweaked my machine), so it hasn't been a problem of late, but I ran into it with completely stock ubuntu.
Some time ago I encountered issues similar to what you mention in your posts. I solved it by selecting the "Deadline" I/O scheduler when I built my kernel.
Hopefully this helps you solve the issue :)
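For anyone who'd rather not rebuild their kernel: on most distros you can switch the I/O scheduler at runtime through sysfs. A sketch, assuming the disk is /dev/sda and the kernel was built with deadline available:

```shell
# See which schedulers are available; the active one is in brackets
cat /sys/block/sda/queue/scheduler

# Switch to deadline for this boot only (on older kernels, add
# elevator=deadline to the kernel command line to make it persistent)
echo deadline | sudo tee /sys/block/sda/queue/scheduler
```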
It's best not to use such firm definitives when what you actually mean is "more tweaking than a typical user would be bothered with". :)
> But macOS has a "package manager" – it's called App Store. You and I may not think it sufficient for our (developer) needs, but we're not representative users.
You're seriously going to defend the App Store?! The App Store isn't just garbage for developers, it's garbage for everyone because it misses so many non-developer productivity tools too. It doesn't even have Chrome or Firefox in it.
> And for normal users and even myself it offers very useful functionality over what they'd get out of the typical native linux package manager
Sorry but I'm not buying that argument. You claim to be a "normal user" then talk about messing around with kernel parameters in Linux. I really don't think you're making any fair and balanced arguments on this topic at all.
> You can trivially reinstall everything on a different machine with a different os version (as long as it's not super ancient), and it works – no "DLL" hell, because everything is essentially self-contained.
I guess if you compare the App Store to manually loading software on Windows - literally the worst platform ever created for managing installed software - then the App Store would look good. But likewise, if you compare heart surgery to a lobotomy, then heart surgery would look less invasive too. This is why I don't think it's productive to compare solutions to the worst examples.
> And since software is tied to your account, there is no need for crufty apt queries in the hope of getting a list of packages you can back up for reinstall elsewhere or after a clean upgrade.
It's a pity that the App Store offers so little software that you end up falling back to 3rd party package managers. So on OSX you not only need to run the same "crufty [package manager] queries", you also need to install the package manager itself too.
> And I need to install a 3rd party package manager on (non-NixOS) linux distros anyway, because IMO apt, yum etc. fundamentally suck and nix is the only thing that doesn't.
A moment ago you were claiming to be a "normal user". Normal users don't install nix :) tbh I'm not the biggest fan of apt; yum is better, but I do really like pacman. However, claiming apt and yum suck while also praising the OSX App Store is just weird.
> I can assure you it's entirely possible to have a high grade desktop with SSD and have linux fall over swapping endlessly without even being able to move the cursor anymore. Of course this doesn't happen in "everyday" usage, otherwise no one would be running linux, but it's not that hard to trigger if you're running VMs, a few browsers and dev tooling stuff that can potentially consume large amounts of memory very quickly.
Right, I get you now. That context helps. Your previous description just said you were running a browser and sounded like it was happening every day (so basically you were exaggerating by leaving key details out when describing the root cause). The problem there is that you're not just overcommitting on memory but also overcommitting on CPU resources too. That latter part matters because swapping can be CPU expensive too. Hence your system grinding to a halt.
Also I still think you're to blame a little there, because if you're running VMs then you should be setting their thresholds to a level that doesn't overcommit your system's resources (bearing in mind these tools aren't the stuff that "normal users" would be using either). It's like opening a bottle of wine and pouring yourself 4 glasses, then complaining that the bottle is empty and you couldn't squeeze out a 5th glass (can you tell I'm drinking wine at the moment hehe?). You only have a finite amount of system resources, so you can't really complain if you intentionally overcommit them.
Yup, flawed as it is, I find it much more useful than apt. If I'm wearing a dev hat and am forbidden from using anything to manage software installs other than one of apt or the App Store (no nix!), I'd rather have apt. But for my non-dev apps (you know, even people who tweak kernel parameters have non-programming related apps they want to use from time to time ;), the App Store is obviously more useful.
> However claiming apt and yum suck when also praising the OSX App Store is just weird.
Why? Both fill different needs, and the App Store solves problems that are useful to me acceptably well (making it easy to install up-to-date software I want, upgrade it, and remember what I have on a per-account, not per-machine, basis).
Yum and apt, on the other hand, don't (they don't have the up-to-date software I want, they don't give me what I consider a decent way to manage the same or similar setups on multiple machines, etc.). I basically install everything I can with nix instead.
> So now on OSX you not only need to run the same "crufty [package manager] queries" on OSX but you also need to install the package manager itself too.
Unlike apt/yum nix offers good ways to do this – no cruftiness involved. E.g. you can just write a small file with what you want and you'll get it, on any machine.
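For the curious, the "small file" might look something like this (a hypothetical sketch; the package names are examples, and you'd install the lot with `nix-env -if packages.nix`):

```nix
# packages.nix – a declarative list of packages to install
with import <nixpkgs> {};
[ git ripgrep jq htop ]
```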
> You only have a finite amount of system resources, so you can't really complain if you intentionally overcommit them.
That's not what happened; my VMs were capped at reasonable limits. I used to run some tools for various reasons that could in some scenarios eat a lot of RAM fairly suddenly (I don't think the system was anywhere close to overloaded CPU-wise most of the time, but I can't vouch that I remember this right anymore).
Either way, I don't think the whole OS falling over because one app wants to consume too much memory, and the OS has decided to never say no, is reasonable. And it's not something I can recall ever happening to me with any other OS (in recent years, at least; I don't want to think back to ancient Windows days).
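For what it's worth, the "never say no" behaviour is Linux's default memory overcommit policy, and it is tunable. A hedged sketch of the strict setting (not a blanket recommendation; strict mode makes some software that relies on lazy allocation fail to start):

```shell
# vm.overcommit_memory=2 makes the kernel refuse allocations beyond
# swap + overcommit_ratio% of physical RAM, so a runaway app gets a
# failed malloc() up front instead of swapping the machine to death
sudo sysctl vm.overcommit_memory=2
sudo sysctl vm.overcommit_ratio=80
```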
I take your multi-machine point, but the above just depends on what repository you're pointing at (e.g. stable, testing, etc.) and which Linux distro you're running. You can't really blame apt for being out of date if you're running Debian. Nor could you blame apt for delivering buggy packages if you're running the testing repos on Ubuntu.
It's the same package manager, just different end points.
> Unlike apt/yum nix offers good ways to do this – no cruftiness involved. E.g. you can just write a small file with what you want and you'll get it, on any machine.
Technically you can do that with any package manager - given that's the core point of a package manager :P
I've not used nix (read a little about it but never taken the time to try it) so I can't comment on how much easier it makes hosting custom repositories than apt or yum, but it's not actually hard to do in those two either. Plus you could always compile your own .deb or RPM and install it like a standalone installer (MSI et al).
I've got nothing against nix though. In fact, weirdly, I think you're underselling nix by focusing on the points you have rather than its major differences from traditional package management.
> That's not what happened; my VMs were capped at reasonable limits. I used to run some tools for various reasons that could in some scenarios eat a lot of RAM fairly suddenly (I don't think the system was anywhere close to overloaded CPU-wise most of the time, but I can't vouch that I remember this right anymore).
The problem with overcommitting is that the limits might seem reasonable under normal workloads, but when you do end up with an empty bucket you have no safe way to recover from that. Or at least not with desktop virtualisation solutions like VirtualBox. ESXi et al will handle such situations more gracefully because they're designed to overcommit during off-peak workloads.
That said, I don't know how long ago it was you last did this but a few years ago VirtualBox did add a CPU execution cap in the guest config. IIRC it defaults to 100% but if you're running multiple guests and/or running heavy applications on the host while also running heavy guest VMs then it's worth dropping the CPU execution cap down so the guest cannot lock up the host.
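That setting can be flipped from the command line too. A sketch; "myvm" is a placeholder for your guest's name:

```shell
# Cap the guest at 50% of each allotted host core so a busy VM
# cannot starve the host (the GUI equivalent lives under
# Settings > System > Processor > Execution Cap)
VBoxManage modifyvm "myvm" --cpuexecutioncap 50
```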
> Either way, I don't think the whole OS falling over because one app wants to consume too much memory and the OS has decided to never say no is reasonable.
I think your expectation here is a little unreasonable, to be honest. You cannot drain the host of free system memory and idle CPUs and then expect the host to gracefully recover. It's like trying to douse a fire with an empty bucket. I honestly can't see how OSX would perform any different to Linux in that regard. So you were probably using different virtualisation technologies on OSX (VMWare perhaps?) that handle guests more responsibly.
Thank you for showing this to me.
What would the results of a successful or unsuccessful test mean? Either might be a consequence of the different environment.
I have no difficulty locating files. They stay right where I put them. If storage did not keep growing it might bother me that they never fade away.
All that said, a few Mac owners, and even some former owners, might have a use for emulation, and I would never begrudge it to them.
For example, let's say that I'm writing some cross-platform open source software. And let's say that I am developing this software on a Windows or Linux Box.
Now, let's say that for whatever reason, I don't have and can't afford a Mac (a scenario like this is more common than you would think, especially in developing countries...)
OK, so now how do I compile/run/test the Mac version of my software -- without having that Mac?
That's why your software is so important.
Anyway, if you can get this fully operational (GUI and everything), you'll solve that problem for that group of software developers...
If you're successful, if you prevail... Apple shouldn't sue you... they should help you, because additional software for their platform ultimately benefits their platform.
Also, judging by the source, you've done an amazing amount of work so far... I hope you can find the additional developers/contributors you need to take this thing to completion...
Why wouldn't you just virtualize, same as any other OS? If you're not virtualizing on Mac hardware then there are a few minor extra hoops to jump through, but it's still less work compared to a hackintosh. Performance can be janky if you don't dedicate a video card to it, but something perfectly adequate like a Radeon 560 seems readily available for $60-80 now. There are a few bits of Mac specific hardware these days like the T-series chips, but not even all supported Macs have that by a long shot.
There are other working solutions, JetBrains seems to do alright with Java runtimes.
Rely on some developer(s) to spend 10,000 hours of their time for free developing a Mac emulation layer, or go on eBay and buy a $600 used Mac to test on...
> OK, so now how do I compile/run/test the Mac version of my software -- without having that Mac?
You don't. Darling is not a suitable replacement for testing on a real Mac in exactly the same way WINE is not a suitable replacement for testing Windows software. A developer who thinks so is misguided.
Virtualising macOS would be more suitable, although the only way to do so without violating macOS' licence conditions is to run the virtualised OS on a Mac anyway. Even then, virtualised macOS lacks hardware accelerated graphics support, limiting the testing of GUI apps at least.
> Apple shouldn't sue you
There's no reason they would. Nothing of Apple's seems to be infringed, as the system seems only capable of running anything that Apple's open source Darwin OS can.
Even if the Darling project reimplemented some of Apple's proprietary frameworks, this could be done based on Apple's open source releases of things such as Core Foundation, etc. The reimplementations of things like AppKit, when done in the future, could even possibly be based on something like GNUstep — which would give that project a well-needed shot in the arm, to say the least.
> additional software for their platform ultimately benefits their platform
Firstly, any well-written cross-platform software is easily ported to macOS. This is most evident with software intended for FreeBSD but is equally possible with software that originates on Linux; see Homebrew, MacPorts, etc. for the plethora of utilities that began life on Linux but have since become cross-platform, or software made by GNU that is typically cross-platform by design.
Secondly, Apple has been down the cross-platform road before, and none of its developers wanted a bar of it. Apps that are developed on other systems but not tested properly on macOS are always heavily criticised by macOS users as feeling foreign and un-Mac-like.
Back when Mac OS X was shiny and new, Apple offered three major platforms for developers: Cocoa, their C and Objective-C APIs inherited from NeXTSTEP; Carbon, their C and C++ APIs inherited from Classic Mac OS; and Java, a cross-platform offering to entice developers from other platforms, particularly Linux.
Apple deprecated their own Java in 2010 because (A) people really disliked using Java apps, even though Apple's implementation of the JVM was performant and had native support for Cocoa-style controls and (B) nobody was using it, with major preference going to Apple's own Cocoa and Carbon APIs.
Apple, and its userbase, prefer apps that are made with love and care _on_ Macs and _for_ Macs/iOS devices/Apple Watches/Apple TVs, respecting those platform conventions by being developed and tested on them.
Disagree. Darling is not quite there (yet), but wine could reasonably be used to test dev builds, so long as a native Windows version was trialed before release.
It didn't go well.
If you're already porting to Linux, you're already dealing with a UNIX executable, which makes it at least slightly easier to port to MacOS.
There are also virtualization services available where you can rent dev time, and decent and upgradeable Macs are not that expensive second hand.
I dev on MacOS all the time, and have the same issue with Windows. These days I try to write everything as a PWA to start so I especially don't have to deal with extreme UI/UX pains.
Splurge on a $1/hour or $20/month account with
Its usefulness is entirely limited to the implementation of the core library frameworks such as Core Image, Audio, MIDI, Animation, Data, etc., which will be very, very difficult, I think, while maintaining FOSS status.
If you need Mac or Windows, your best bet is not to move to Linux in the first place.
If you do mostly development or scientific computing, then you’ve probably dealt with more pain on those platforms, that melts away on Linux.
Best tool for job wins.
As for making the front-page of HN, the HN crowd probably consists disproportionately of technology enthusiasts who find interest in technology beyond its immediate usefulness.
Also, Darling desperately needs a re-implementation of Apple's CoreCrypto.
Like, Darling could already be a polished product.
Which makes it rather pointless, for now. Practically all non-GUI software that runs on Darwin can be made to run on Linux too, and thereby even on Cygwin. What I would much rather see is library support for some BSD standard functions that are missing from Linux. Trying to migrate software that uses funopen(3) to Linux gives me an ulcer.
Still insanely cool though.
Mac backwards compatibility isn't as bad as some people say—I have a decade-old program that still works in Mojave, for instance—but Windows is a lot better.
The platform with awful backwards compatibility is iOS. And there, you don't even have the option to dual boot or downgrade. The fact that no one cares says something about how much we value mobile software...
I have. I had an iPod Touch for a couple of years as a teenager, but then left iOS until I got an iPhone in college. Once I had the iPhone, I decided to go through my purchase history and re-download the apps I'd used on my iPod, for nostalgia's sake if nothing else.
To my dismay, exceedingly few of the old apps worked, and most of those that did had major graphical glitches. (Not including apps which had been updated by the developer more recently, of course.)
Link them to me and I'll try them out. I've never had that experience.
* Tap Tap Revenge Classic / 2.5 / Dance
* Roland 2
There were definitely way more, but I don't remember which ones, and now that I'm on iOS 12 I can't test any of them (all 32 bit). These are the three I specifically remember not working.
Hah, guess the reason I've never noticed any compat issues is they removed the apps they broke compat with.
I've used Linux on the desktop exclusively ~'95-'07. Switched to MacOS after that. Tried out the mythical "Linux on the desktop" in 2018, was amused to find that iTerm2 has grown so much, and that it's hard to find a good replacement for it under Linux. Of all the things I'd expect it to handle well... 8)
Regarding Hammerspoon (which reminds me very much of what you can do with AppleScript), you can certainly do many of the examples (https://www.hammerspoon.org/go/#spoons) with modern-day frameworks like xdg, dbus, libevent, inotify. My feeling is that (for instance) Python's package index (PyPI) has pretty comprehensive libraries which allow you to use it as a glue language for all those interfaces. I would not be surprised if that even feels slimmer and more powerful than Lua in the end, but it's a matter of taste. As always.