Darling – macOS Translation Layer for Linux (darlinghq.org)
300 points by goranmoomin on April 28, 2019 | 174 comments



What would be amazing is if Darling could join forces with the long-running GNUstep project to provide access to the Cocoa APIs. This would provide a means to run Cocoa apps on Linux, which would make Linux a more attractive operating system for prospective Mac converts and might also increase the popularity of GNUstep. Also, with many Mac users expressing anxieties about the future of the Mac given Apple's business decisions under Tim Cook, a combination of Darling and GNUstep would give Mac users an option to keep using the apps they have invested in, just in case Apple neglects the Mac or makes radical decisions that negatively impact Mac users.

However, the last time I looked into this matter, GNUstep still had quite a way to go to catch up with macOS Mojave's version of Cocoa. At the risk of going off topic, I still don't understand why GNUstep never seems to have reached critical mass despite all these years of development. I was just a kid during the mid-1990s, but KDE and GNOME took off around this time while GNUstep kept pressing on. Here are my thoughts: in 1995 and 1996 the OpenStep API might not have been attractive to developers in a world where Windows completely dominated the desktop, but this doesn't explain the decision of the KDE developers to use Qt instead of GNUstep (or the decision of the GNOME developers to use GTK+). There may also have been much uncertainty in 1997, following Apple's purchase of NeXT, about the future of OpenStep. Such anxieties would have been alleviated sometime during 2001-2003 when Mac OS X was released and started gaining popularity, but by then KDE and GNOME (and their respective underlying GUI toolkits) were firmly established in the Linux desktop ecosystem. To this day they remain the dominant Linux desktops, and many less popular desktops still rely on Qt or GTK.


The problem with replacing MacOS is that its key value lies in aspects that the open source community has yet to succeed in, or is structurally unfit to succeed in.[0][1]

— System-wide UI consistency, especially in the fine details

— Singular UI/UX vision eliminates points of confusion and everything-is-a-compromise choices for third party developers

— Navigable by non-experts, even when things turn to shit

— Nominally "perfect" hardware support

— Robust colorimetry

— Millisecond audio latency

I use Linux extensively, but I'm not aching to leave macOS as my primary platform. Yes, they did start to go a bit loopy and off-track around Mac OS X 10.7–10.9, but from my vantage point it has been all uphill again since Mavericks.

The idea of running Mac apps in a Wine-like compatibility layer sounds worse than anything Apple could ever do to macOS. Yes, I like free. And yes, I like personal choice. But I value my time, and, sorry to proponents of other platforms, macOS just values my time more than any other platform does.

________

[0] The Achilles heel of the open source community stems from the lack of a unifying vision and a top-down approach. This has major advantages as well as major disadvantages, but it nonetheless affects the result. And depending on your priorities, the advantages might outweigh the disadvantages or vice versa.

[1] And to be fair, it's equally true in the opposite direction. Open source succeeds in many aspects like robustness, hardware compatibility, longevity and transparency/deep trust, which is why it has utterly dominated the server market. These areas of success are not accidental or arbitrary.


As a long-time (10+ years) Linux and macOS user, I accept most of what you say, and macOS is an extremely time-efficient platform for getting work done.

However, the lack of unifying vision is not an Achilles heel of the open source community. It's the essence of it. I like how any two installations of two distributions are never the same, and this broad ecosystem just works, tumbles, fights and creates new stuff.

I created a compression algorithm [0] for my graduation project. It was a novel approach, but it performed worse for some technical reasons. One of my professors asked why I didn't compare it to a similar algorithm instead of plain old Zip. I said that our algorithm had never been tried before, and it was pretty novel, so we had nothing comparable to measure against.

Then one of my other professors replied to the professor who had asked the question: "There's no need to find an equal to compare against. They did something new and untried. This is research."

Equally, Darling doesn't need to surpass macOS to be useful, because "this is research" too. Maybe they will learn something useful from this endeavor and incorporate it into Linux or their future development career.

This Katamari Damacy nature of the GNU, Linux and free software community is what makes it so powerful and unique. Let Microsoft, Apple and Google take the top-down approach while others play as they like, and everyone improves the world as they wish.

[0]: https://www.academia.edu/20575315/Lossless_Text_Compression_...


I don't disagree with anything you've said, and I don't think that my post—if read in full—disagrees with it either.

The lack of unifying vision in open source can be both an Achilles heel and its most valuable asset.

The lack of a community-led ethos in macOS can be both its Achilles heel and its most valuable asset.


Honestly, I only missed your last footnote. Possibly I got distracted by something. My fault.

When read in full, we're on the same page. OTOH, it seems I also didn't disagree with you on anything. So the two comments can be nicely summed up as two looks at the same lawn from opposing vantage points, since I use Linux more often than macOS (albeit I have a MacBook Pro that I use pretty regularly).

If my tone sounds a bit harsh, sorry for that. English is not my mother tongue. Also, if you can tell me where it's rude(ish), I can work on it.


— Internationalization


— Accessibility (deaf, blind etc)


> — System-wide UI consistency, especially in the fine details

> — Singular UI/UX vision eliminates points of confusion and everything-is-a-compromise choices for third party developers

I mean, first-party GNOME applications are as consistent as first-party Apple macOS applications, and third-party macOS applications are as highly customized as third-party applications one might run on GNOME. I'm not sure what you base your experience on, but for those prioritizing consistency over all else, this is not really any more of an issue on one platform than the other.

> — Properly accessible by non-experts, even when things turn to shit

I'd like an example; I'm not really sure what you're talking about here. In my experience, when things go wrong on macOS, you're pretty much SOL until the broken feature is either fixed, reimplemented, or removed in a subsequent major release of the operating system. Apple doesn't respond to bug reports any more than other desktop OS vendors do, including open source ones.

> — Nominally "perfect" hardware support

The graphics drivers on macOS are very poor. Apple's decision to neglect and then subsequently abandon OpenGL is pure laziness, and their implementation was already bad when they were still maintaining it. Furthermore, hardware compatibility with random gizmos on macOS leaves a lot to be desired in comparison to Linux, in my experience, though I guess your mileage may vary.

> — Robust colorimetry

colord works just fine for me; no amount of software is going to profile your monitors for you, though.

> — Millisecond audio latency

I'm pretty sure CoreAudio frames are not less than 44 samples.

What I will give Apple credit for is a great set of graphics manipulation libraries, which make it simple to use high-quality scaling and manipulation algorithms and make using the cheaper, faster ones a conscious choice. Their implementation of seamless suspend-to-disk with full disk encryption is also admirable (though honestly, dispensable at the end of the day). Their shaping and font rendering libraries are almost as good as HarfBuzz and FreeType 2 (though I think they've just started using FreeType at least on some platforms). Accessibility features are also pretty good, and depending on what disability you have, it's a tossup between GNOME and macOS.


All those points are fine, but they're all yes-buts.

— I mean, first-party GNOME applications are as consistent as

Straight-up disagree. It all depends on what your threshold for UI/UX consistency is. Unfortunately most open source enthusiasts have a low threshold. It is difficult to convey the importance of a thousand subtle details, each one impossibly trivial, but the sum total moves mountains.

— I'd like an example, not really sure what you're talking about here.

An example: If my aging father's Mac turns to shit, he can hold down Cmd-R, boot the computer from the recovery partition and restore from a Time Machine backup that he himself set up, with no assistance required from anyone.

— The graphics drivers on macOS are very poor.

That's really a marginal opinion. Maybe if you're an OpenGL developer. Apple provides an excellent framework (Metal) which works brilliantly. Yes it would be great if Apple delivered first-class support for Vulkan, but complaining about OpenGL is last decade's problem.

— colord works just fine for me

It's increasingly robust at handling the basics, yes.


> That's really a marginal opinion. Maybe if you're an OpenGL developer. Apple provides an excellent framework (Metal) which works brilliantly. Yes it would be great if Apple delivered first-class support for Vulkan, but complaining about OpenGL is last decade's problem.

The Metal drivers are more stable, but the shader compilers generate slow code, just like the old Apple OpenGL drivers (the ones they wrote for the Intel GPUs). OpenGL issues are not "last decade's problem", I have on several occasions needed to write workarounds in WebGL shaders to prevent the NVIDIA drivers from crashing the whole windowing system (and every application with it) on macOS. On the same hardware, the drivers on Linux are faster, more stable, and more featureful and also include implementations of Vulkan.

> An example: If my aging father's Mac turns to shit, he can hold down Cmd-R, boot the computer from the recovery partition and restore from a Time Machine backup that he himself set up, with no assistance required from anyone.

That's not a macOS feature though, that's an Apple PC firmware feature. I'll grant that AFAIK no vendor who ships a Linux distribution by default has a durable recovery partition, but there's nothing about macOS itself which makes that easier.


> That's not a macOS feature though, that's an Apple PC firmware feature.

That you see a distinction here is telling. The end user doesn't see the distinction.

As for all of your complaints about the OpenGL drivers, that's an issue for developers, not end users. Yes, maybe there might be more 3D apps and faster 3D apps if the video driver situation were better, but also maybe not? Either way, this is all irrelevant, because the fundamental argument here is about the impetus for end users to remain on a platform. The possibility of eking out an extra 20% performance in your Linux app on the same hardware isn't going to shift people.

-------------------

On a side note, could the people who are down-voting microcolonel please stop? This is a discussion of ideas, and his/her ideas are being expressed with valid form and structure.


> As for all of your complaints about the OpenGL drivers, that's an issue for developers, not end users.

I shipped an application with WebGL under the impression that it would not need to be custom tested on a decade's worth of MacBooks Pro, but received a report later that a user had lost data because the graphics driver restarted the windowing system when he opened my webpage.

If an honest, non-malicious webpage can cause your windowing system to restart, that is an end-user problem. Even Apple themselves don't bother to test their official websites on more than one generation of Mac, why should everyone who uses this now half-decade-old web API have to buy $10,000+ worth of equipment, some of it with old versions of the OS, because they can't trust the vendor of the hardware to maintain the drivers?

And this is on top of the fact that, generally speaking, on a given piece of hardware, the application will run dramatically more slowly on macOS than on Windows or Linux.

> That you see a distinction here is telling. The end user doesn't see the distinction.

I tend to agree, but in this case there's an important distinction for us to make, even if end users are generally unaware of it. In the case of recovering intentional backups on a fully-functioning computer, Apple has done a good job of making that straightforward on their laptops. But if you attribute that to "macOS", then you miss the point that a) any vendor could offer the same thing, even if they don't, and it has nothing to do with macOS, and b) improving "Linux" won't make a recovery partition suddenly appear on your computer. Furthermore, Apple's advantage here only applies to functioning computers. Apple makes it extremely difficult to recover data from damaged devices, and in the case of the iPhone, they literally censor any mention of it being possible from the forums, and lie straight to the faces of their customers. When something gets a little bit wet, Apple will tell you that you should have bought iCloud, and that your data are gone forever.


Woah there buddy.

— Apple makes it extremely easy for anyone to start backing up their computer in a way that covers an array of scenarios from accidental or malicious deletions to full-on disaster recovery.

— It has everything to do with MacOS, because backups are both created and restored within a MacOS environment. The only "firmware" aspect is the (relatively) simple boot-time keyboard triggers.

— Apple's solution to damaged devices is having a comprehensive backup strategy. If your plan is to recover data from a damaged device, you've failed before you begin. Apple doesn't offer first-party data recovery services, but there are plenty of third party services to handle disaster recovery situations. To the extent that they make it difficult for you to recover data from a damaged device, it's because they do robust on-disk encryption.

— Yes, any vendor "could" offer the same thing. They could. Most don't. That's exactly the point.


> If your plan is to recover data from a damaged device, you've failed before you begin.

Apple says this is the case, but it's not actually the case. Repairing a device is how you recover data from it. I know it's preferable that people make an effort to protect their data, but in the real world, approximately zero people back up anything.


What I mean is that you've failed in principle. You shouldn't ever risk being in a position where access to your data may become contingent upon the skills of a repairer wielding a soldering iron. If maintaining comprehensive backups isn't boneheadedly simple and robust, your vendor is selling junk.

But as a practical matter I don't disagree. If you failed to maintain comprehensive backups (or you have suffered a rare double-disaster) then it's great that these hardware repair experts exist.


Basically the point is that either distro or hardware providers can already offer easy-to-use backup functionality.

For example, Linux Mint comes with Timeshift. I honestly haven't paid much attention to what people provide, as I've mostly used rsync historically, or more recently zfs send/syncoid.
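For anyone curious, the rsync flavour of this can be a one-liner; a minimal sketch, with hypothetical paths and dataset names (/mnt/backup, tank/home, backup/home):

  # mirror a home directory onto a mounted backup drive,
  # deleting files that no longer exist in the source
  rsync -a --delete /home/alice/ /mnt/backup/alice/

  # or with zfs: snapshot, then send only the delta since the previous snapshot
  zfs snapshot tank/home@today
  zfs send -i tank/home@yesterday tank/home@today | zfs receive backup/home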


> I mean, first-party GNOME applications are as consistent as first-party Apple macOS applications

That's true in the basic sense, and my impression is that KDE is better than Gnome, but Mac OS is ahead of both in the overall consistency of interaction with applications.

All three platforms have a set of human interface guidelines:

https://developer.apple.com/design/human-interface-guideline...

https://hig.kde.org/

https://developer.gnome.org/hig/stable/

I use KDE, and the core applications are equal with Mac OS for consistency of interaction. The difference is the niche applications, where I think a Mac OS developer puts a bit more effort into following the guidelines, whereas the KDE developer adds an additional feature or customization.


CoreAudio round trip latency can be under 5ms.


And maybe you can get something comparable on Linux. Maybe? Do I have to think about which distro I use? Do I have to think about which audio hardware I use? Do I have to think about which audio apps I use?

On a Mac, you get that insanely low latency without having to put a moment's thought into any part of your architecture. I get it even if I never bothered to learn why latency matters.

"Just works" is more than ease of use. It's productivity. It shows respect to your mental load and mental priorities. And it is worth money to anyone whose time is valuable.


Ubuntu Studio just works.


I bet somebody has a Gnu/Linux rig using ALSA with well below 5ms round-trip latency, too.

The plus to CoreAudio is that any system it runs on has probably been designed so that a non-technical user can get something like the low-latency you're describing by default. The minus is that hardware probably costs at least $800, and CoreAudio doesn't support things like the RPI.

The plus to ALSA is that it runs on things like RPI, with the minus that non-technical users probably won't get round-trip latency below 5 ms without paying someone Mac-level money for hardware designed especially for Linux audio.


Yeah, 5ms isn't that tough to achieve. You need decent hardware for sure, but hell, you can get PulseAudio of all things to under 10ms.


It's easy to get latency this low with ALSA+JACK.

Edit: and it's possible to go much lower, for example look at the linux-based Bela: https://bela.io/about#why-latency-matters
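For reference, a typical low-latency JACK invocation looks something like this (hw:0 is an assumed device name; check yours with aplay -l):

  # 64 frames/period x 2 periods at 48 kHz ≈ 2.7 ms of buffering latency
  jackd -d alsa -d hw:0 -r 48000 -p 64 -n 2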


My Linux DAW regularly trumps that.


>I'm pretty sure CoreAudio frames are not less than 44 samples.

CoreAudio is actually pretty good.

Without the linux-rt patchset, jack pipelines do overrun even when running jack at 10ms. Linux has quite the latency spikes.

It's possible to demonstrate this fairly quickly by running cyclictest from rt-tests.


> CoreAudio is actually pretty good.

I wasn't arguing that, I was saying that "millisecond latency" involves frames smaller than (samplerate / 1000) samples, completely ignoring overhead.
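To make the arithmetic concrete (my numbers, not the parent's):

  1 ms at 44.1 kHz = 44100 / 1000 = 44.1 samples
  a 64-sample buffer = 64 / 44100 ≈ 1.45 ms, before any driver or hardware overhead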

> Without the linux-rt patchset, jack pipelines do overrun even when running jack at 10ms. Linux has quite the latency spikes.

Are we talking about sub-10ms latency? Sure, that's a different matter, and Linux's default schedulers could stand to improve support for mixed-realtime.


>Are we talking about sub-10ms latency?

Yes. Even for the audio case, having to run jack at 10ms is already a lot, particularly since Linux is not the only source of latency.

>Sure, that's a different matter, and Linux's default schedulers could stand to improve support for mixed-realtime.

Even SCHED_FIFO (where preemption is immediate, and the CPU isn't released until the high-priority program itself yields it) suffers from latency spikes; it is not a scheduler issue but an overall Linux design issue.

cyclictest from rt-tests will easily highlight that. Try leaving cyclictest --smp -p98 -m running in the background. After a while, you'll notice the entirely unacceptable max latency readings. All the test does is set an alarm so that the task becomes runnable (which means it should run immediately due to SCHED_FIFO) and check the difference between the alarm time and the current time.
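For anyone following along, cyclictest ships with the rt-tests package, and its output is shaped roughly like this (figures illustrative, in microseconds; the Max column is the one to watch):

  $ cyclictest --smp -p98 -m
  T: 0 ( 8017) P:98 I:1000 C: 981234 Min: 2 Act: 4 Avg: 3 Max: 14831
  T: 1 ( 8018) P:98 I:1500 C: 654123 Min: 2 Act: 5 Avg: 4 Max: 9210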

The mainline kernel is effectively unusable for anything that requires low latencies, such as audio work, as it spends too much time running non-preemptable code in supervisor mode. linux-rt improves this situation radically, but the monolithic design simply isn't suitable for this; microkernel multiserver systems are a much better fit.

Incidentally, refer to seL4 for a system that has a guarantee in the form of a formal proof of worst case execution time.


> Nominally "perfect" hardware support

With the decline of Apple hardware, this is less true, and modern Linux hardware support is, at worst, no worse than what other OSes provide, in my experience.

> Navigable by non-experts, even when things turn to shit

I think the "Closed Box Philosophy" of Apple either defines or disproves this: Non-experts can't be tripped up by having to fix things... because nobody outside of Apple can fix things, eh? Can't screw up something you're not allowed to do.

> The achilles heel of the open source community stems from the lack of a unifying vision and a top-down approach.

This means that I can keep running Window Maker and nobody can take it away from me. In a broader sense, I'm not afraid to upgrade for fear of losing some part of my workflow due to a high-level decision taking an option away from me. I can use Emacs even if everyone else around me is using Vim, I can use qutebrowser even if nobody knows what the fuck a "Cute Browser" even is, I can write in Ruby even if the Current Dogma is JavaScript.


> With the decline of Apple hardware, this is less true, and modern Linux hardware support is, at worst, no worse than what other OSes provide, in my experience.

I will still take a perceived regression in hardware from Apple over anything else on the market today. They are that far ahead. Still the best touchpad and connectivity support, which are the two most important factors in my book. Also still the most aesthetically pleasing and best-known brand of laptop in the world. The latter point is not so important in my book, but I do believe it helps contribute to the resale value of the Mac, which again is class-leading among mass-produced laptops.


The butterfly-switch MacBook Pro keyboards prove it isn't just a perceived regression.

I get the appeal of the Apple Universe: It Just Works, everything is crafted, etc. The problem with that idea is that it's been eroded from two directions: Apple's own incompetence at making hardware and software which Just Works, and everyone else catching up to Apple at lower price points and, as I said before, while offering more meaningful choice than Apple has since the days of the Apple II.

I remember when I needed ndiswrapper to use WiFi on a laptop under Linux. I remember when I needed a bizarre Frankenstein pseudo-FTP setup to access NTFS partitions on Linux. I remember when USB didn't exist and you needed device drivers for every single thing. Those days are gone. Macs not having to deal with those things is no longer a competitive advantage.

These days, Macs are only really special in that they tie you to the Apple universe. I'm not interested in being tied to a corporation like that.


This is entirely subjective. Aesthetically pleasing and "well known" may be factors for you in a purchase decision, but I personally prefer more open hardware and choice in my purchases, so that I can fine-tune each component, avoid being at least a year behind on modern hardware, and get better performance than what a Mac offers for a fraction of the cost.

Also, if you care a lot about looks then I'd recommend checking out Razer, or aluminum chassis notebooks:

https://www.reinisfischer.com/top-20-aluminium-chassis-noteb...


> System-wide UI consistency, especially in the fine details

> Singular UI/UX vision eliminates points of confusion and everything-is-a-compromise choices for third party developers

There are consistent themes across both GTK+ and Qt. Pretty much all my apps share a similar theme and UI. There are Mac Apps as well that deviate from whatever "standardization" you're referring to.

> Navigable by non-experts, even when things turn to shit

It is no longer 2004. A lot of Mac users are also more than capable of running a few shell commands when needed. There are tons of web UIs and graphical apps to manage a Linux OS, from hardware, users, configuration, etc. However, a properly configured Linux OS won't need a lot of interaction on the frontend.

> Nominally "perfect" hardware support

Linux has way better hardware support than Mac. I'm not sure what you're referring to here.

> Robust colorimetry

Agreed, this is one area that needs some improvement, but last I checked it is pretty well supported:

https://wiki.archlinux.org/index.php/ICC_profiles

> Millisecond audio latency

Already exists:

https://wiki.archlinux.org/index.php/Professional_audio


> There are consistent themes across both GTK+ and Qt.

And then you open an app that’s neither GTK+ nor Qt, and you’re back to square one.

Besides, there’s more to UI than “theme”. I have ^w mapped to delete word on OS X. One line in one config file. It works universally in every text box on the system.
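(For the curious: the config file in question is the Cocoa text system's DefaultKeyBinding.dict, and the one-liner presumably looks something like the following, where ^ means Control and deleteWordBackward: is the standard text-editing action:)

  mkdir -p ~/Library/KeyBindings
  echo '{ "^w" = "deleteWordBackward:"; }' > ~/Library/KeyBindings/DefaultKeyBinding.dict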

I tried setting that up in Linux. Eventually I got it working through some GNOME setting or some such, in some apps. Firefox didn’t respect it and wanted its own setting, IIRC. Then it would forget it every few months, and trying to delete a word while I was typing would unconditionally close the browser. Fucking ridiculous stuff like that abounded. Life’s too short.

> Pretty much all my apps share a similar theme and UI. There are Mac Apps as well that deviate from whatever "standardization" you're referring to.

A few (very few anymore, in my experience) might deviate in terms of widget styling. Essentially none ignore system-wide keybindings, or fail to integrate with system-wide services, etc. In terms of inconsistency it’s night and day vs what I’ve had to put up with from Linux desktops.


> And then you open an app that’s neither GTK+ nor Qt, and you’re back to square one.

The same is true on every OS. If you open a Java app on macOS, it's not going to look right; do you blame Apple for that or the developers of the app?

> In some apps. Firefox didn’t respect it and wanted its own setting iirc.

Will Firefox respect it on macOS? It's not a good example anyway, because Firefox is not native; Firefox is the Electron of the '00s. Even when Firefox tries to emulate the native theme it screws up; I had to turn off my dark theme just to get text areas with visible text.


> The same is true on every OS, if you open a java app on macos it's not going to look right, do you blame apple for that or the developers of the app?

I’d blame the app, because in the case of OS X there is a single consistent set of UI components that effectively everything uses, and being gratuitously incompatible is on the app.

In the case of Linux there is only a hodge-podge bazaar of gratuitously incompatible UI kits that every third app disagrees on which to use, so it’s hard to blame any single dev for the universally frustrating shitshow that results.

As an aside, the only Java app I’ve interacted with in years (seriously, how often do these even come up any more?), IntelliJ, actually put in the effort to look and feel right. Like I said, it’s night and day on this stuff versus what I experience in Linux on the regular.

> Will firefox respect it on macos?

Yes.

> It's not a good example anyway because firefox is not native, firefox is the electron of the 00's

Electron apps also get this 100% right on OS X.


JetBrains had over a quarter of a million paying users in 2017. I'm not sure how much the average paying user spends, but presumably they gross somewhere between $37M and $125M. More money means more polish. Logically, we need to invest in what is important to us.


There are really mostly two: GTK and Qt, and when themed with similar themes they aren't different enough to matter much.


All are exceptions that prove the rule.


The majority of users use virtually no hotkeys. A truly shocking portion don't even know how to copy and paste with their keyboard. Weep when you watch people who don't even know how shift keys work type capitals by hitting Caps Lock, the letter, then Caps Lock again.

I don't honestly believe that the inconsistency in colors or GUI elements that exists on a typical Linux system is a meaningful barrier to anyone.

A typical desktop ships with mostly qt or mostly gtk apps that share the same look and feel + a browser that looks and works like the user is used to working across platforms.

In a sense the browser is actually the single most important app for the majority of users, and it's more useful for it to be consistent with expectations than consistent with the desktop.

I think power users are more apt to be put off by small differences in much-used keybindings, but they are the minority.

While consistency may seem like an impossible task, there are really four big camps: Chrome, Firefox, GNOME and KDE.

This seems like a small enough group that getting everyone to agree on a common way to communicate the desired keybindings to all of them ought to be tractable.

I think this is a worthwhile endeavor we ought to pursue.


> At the risk of going off topic, I still don't understand why GNUstep never seemed to have reached critical mass despite all these years of development.

As someone who was part of the GNUstep scene and contributed a tiny bit of code, I think the reason is obvious: GNOME was getting the big corporate investment, and KDE also had a stable foundation somehow. As you say, the "Linux desktop" very quickly came down to a GNOME/KDE duopoly. Plus, GNUstep was written in Objective-C, which was outside a lot of devs' comfort zone at that time. So GNUstep only managed to attract a few passionate hobbyists but no more.


I think it was even simpler than that: during the inflection point between GNUstep and Gnome/KDE, the mainstream market was dominated by Windows. It was clear that the open source community saw Windows as the yardstick for a successful Linux desktop.

Whereas GNUstep was associated with an utterly failed platform.

But subsequent to the NeXT buyout and as MacOS increasingly proved its geek credentials, you can see how the community yardstick has progressively shifted in its direction. Had the serious push for a Linux desktop begun ten years ago and not 25 years ago, GNUstep might well have been the victor.


Every year or so I take a look at GNUStep just to see what's going on. Crazy it hasn't taken off.


Unfortunately - call me superficial - I can't really get past its crusty look.


I'm with you on that, and it seems to be symptomatic of the project's refusal to step away from the legacy of mimicking OPENSTEP. They've made 1995 look like a goal towards which to aspire.

They've added theming support to GNUSTEP, but that's really lipstick on a pig. Then again, I can't blame them too much, the project is mostly quiet and doesn't attract as much attention as the other popular Linux toolkits.


Next to Material Design and Fluent Design, OPENSTEP is fossilizing.


GNUSTEP, you mean. OPENSTEP is the fossil GNUSTEP wants to be.

In fairness, they've made strides since then. The Foundation library seems to be aiming for compatibility with a macOS release from several years ago, at least still more recent than the early-to-mid 90s.


Has anyone tried building libobjc on Darling? I feel like that is probably something that they'd need to do first.

Edit: It looks like they already have Foundation working.


The main attraction of Apple products is the certainty that, no matter what you do, they cannot be made to do what you would like. The consequence is that you feel no urge to spend time tinkering to get there, and instead adapt yourself to what it actually does. This reclaims all the time and attention that you would have spent on tinkering.

Since what Apple products do has proven adequate for a large number of people, and you are no less adaptable than they are, you know you can adapt, too.

Having once chosen to adapt to what Apple has chosen to offer, you find it easier each time, until it becomes wholly unconscious. Each time Apple takes away something you had used, you might momentarily balk at the "upgrade", but always acquiesce in the end.

[Edit] So, the appeal of Mac emulation is very limited, because it starts out with tinkering.


I don't really get this; MacOS is super tinker-y. You can script multi-app workflows with Automator, set up folder actions to magically transmute files (I have one that insta-shrinks the images in PDFs for me and puts them in an output directory), cast magic spells on selected data using custom services, and use the full range of Unix commands for text processing. You have Perl and Python right there.

The things that are resistant to tinkering in MacOS are the UI and how you do stuff. Those are infinitely flexible in Linux, but heavily standardised on the Mac. However when it comes to getting useful stuff done, the Mac has a wealth of tinker-y toys waiting to do your bidding.


I think you have proven your parent's point by drawing the line between useful and time wasting tinkering.


This whole thread saddens me.

The amount of "tinkering" I have done on my Ubuntu PC was limited to changing background and reducing icon size to fit my monitor better.

I'm not going to apologise for having better performance and first class containers.


There is certainly a fair bit of extremity measuring going on in this thread, and I'm as guilty as anybody. The fact is, a default install of MacOS or any of the mainstream Linux distros is a fine system just as it is.


>The amount of "tinkering" I have done on my Ubuntu PC was limited to changing background and reducing icon size to fit my monitor better.

Outliers need not apply.


Those are not tinkering things -- those are productive uses.

Tinkering is endlessly playing with the window manager configs, changing desktop environments, getting your system "just so", switching this (e.g. audio framework) for that, etc.


> The things that are resistant to tinkering in MacOS are the UI

But not completely resistant. For example, there are multiple tiling window managers for os x.


My friend uses one of those, and it takes over a second for a new window to find its place. I can’t see what productivity benefits you would get from that. It looks pretty hacky compared to any decent X11 WM with tiling functionality.


I tried adjusting to the Apple way from 2010 to 2018. By the end of it I realized that "tinkering" has an equivalent in the Apple world, and it is called "upgrading." These two terms are almost synonymous in the pain they cause. By the end of my journey I had a firm "one major version upgrade per iOS device" rule and a desktop OS upgrade experience that was driving me crazy, because one random CAD app wouldn't support the new OS, while another app would no longer support the mainstream-old OS.

I won't say that tinkering is totally productive, but 1) personally my tinkering on Linux has led to technology development and I'm the type of guy who benefits from that anyway, and 2) I believe the pain you leave behind in lost productivity is at least offset by 25% instant upgrade-related stress for each Apple device in your household or sphere of personal work activity.

Even just constrained to hardware, the horror stories about new Apple devices alone made me doubt my HW upgrade plans. By itself that was enough to make me wonder if I was about to throw away thousands.

Unsurprisingly I felt like there were things Apple could do to make this all better, but like you said, they cannot be made to do what you like :-)


.... ish.

I'm a long-time Apple user, going back to the Mac SE. The reason I switched to OSX was, effectively, that I was getting a Unix OS that ran MS Office and came with a really nice suite of built-in programmes that 'just worked'. It was the best of both worlds.

I'm a tinkerer and even today MacOS still allows me to tinker quite a lot. But there's no requirement to tinker. If I want to actually get work done, I can use it in 'It just works' mode.


As a Mac user since 2000 and through the initial pains of the OS X transition, this is on the mark.

Many old Mac users were very unhappy about the upgrade from Classic to OS X. "File extensions? Non-spatial Finder? A command line? What is this bullshit, and why does it run so slowly on my top of the line 400 MHz Power Mac G4..."

But eventually they adapted. At present, Apple is "boiling the frog" on turning Mac into something closer to an iPad Pro with a keyboard (Marzipan brings iOS UI style to desktop; mandatory app notarization prevents running non-approved software; etc.) Despite the grumbles, Mac users will acquiesce here as well and just get on with their work.


Thanks for putting into words what I had suspected for so long. Apple is really good at identifying what the vast majority of people (aka normal users) want to do with their computers, and then they optimize those workflows. But if you want to do something more exotic, you are usually out of luck.

Talking to Apple enthusiasts is really tough. It's almost as if you are speaking a different language. Of course, at the end of the day a computer is just a tool. They are happy with what they can do with their machine and I am happy with mine. But they have a hard time understanding why I see their system as limiting.


Many Mac users come from Linux and Windows (of course there are migrations in other directions too), and most developers on MacOS also use Linux daily. So I'd wager most of them understand what you mean, but disagree.

Personally I like to tinker and personalise only a subset of tools I use to get things done (iTerm, tmux, vim...) and have good defaults on the rest.


You are talking about developers. I am talking about regular non-IT people. IT people know that they can just SSH into a Linux server or that they can run Linux in a VM or a container. Regular people don't know these things.

  > Personally I like to tinker and personalise only a subset
  > of tools I use to get things done (iTerm, tmux, vim...)
  > and have good defaults on the rest.
If these things are good enough out of the box for you, that's fine. But it also shows that you and I have different mindsets about what we expect from our digital work environments. If you spend a lot of time in front of computers, it is worthwhile to adapt the systems to your needs.


IT people need to spend more time with their non-IT moms. It really does ground you to reality more.


Others, like myself, already did enough tinkering during our university days and now would rather focus on getting stuff done.


Exactly. And you don't have to say this and sound arrogant.

Our lives, especially with the high workload we still have to cope with, don't allow us to choose many things to do when not working.

If you have a family with kids, you lose every tiny bit of freedom to "waste your time" on tinkering and get easily annoyed when things just don't work out of the box.

I like the possibility of customizing everything on Linux, but I also hate how the Linux world can't provide the standardization and clarity I am used to from OSX.

How on earth is there still no terminal emulator like iTerm2 on linux????!

On OSX, I miss the possibility I have on Linux to customize everything. I hate how Apple always tries to lock me into its golden cage and imposes its way of thinking on me.

80% of the time I can totally agree with the Apple way, but there is the 20% when I could throw that MBP against the wall with full force.

OK, I didn't want to start some OSX vs. Linux debate here; it's just an example to show the love-hate relationship with both.

It's great there is some effort put into connecting these worlds. The way our economy works is the reason we don't have the computers and OSes we really want, and sadly only the open source world will change this.


  > How on earth is there still no terminal emulator like iTerm2 on linux????!
Genuinely curious: What does iTerm2 do better than any other terminal emulator on Linux?


* The gap between the window manager and the terminal is not as wide as with most terminal emulators I know in the Linux world (copy & paste, drag & drop, search function, mouse support, easy (!) image rendering support out of the box...)

* User-accessible features like the tmux integration (look at this: https://www.iterm2.com/documentation-tmux-integration.html — and see the snippet after this list)

Just to mention a few advantages.
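As far as I know, the tmux integration mentioned above boils down to tmux's control mode:

  tmux -CC          # start a new session; windows/splits appear as native iTerm2 tabs and panes
  tmux -CC attach   # reattach to an existing session the same way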

Of course most of the awesome apps like tmux just need you to learn a bunch of new commands but I can't imagine myself doing that a lot over my whole lifetime.

Some things I just want to use without a steep learning curve and apps like iTerm2 prove that this is possible.

I have used the shell a lot for many years, but I'm far from using it the way I could imagine it in the 21st century (the shell is still the superior interface for computers in my opinion, but I'm afraid this topic hasn't really gotten much attention or many innovative approaches in the last decades).


It's probably because it has a lot of features that are not unique, but you won't find all of them in a single terminal emulator on Linux.

For example, I don't know of a terminal on Linux that both works as a drop-down terminal (Quake-like) and supports inline images.

Or split screen & password manager.


For the split screen functionality, I’d recommend tmux or Terminator. If you try w3m on xterm, you’ll notice that inline images can work. Could someone enlighten me as to why inline images are useful in the first place?


Imagine something like Jupyter notebooks as a shell.

That was the REPL of the non-UNIX graphical workstations of yore, and inline images were naturally part of it.


One thing I have not found (admittedly I have not searched much) is the ability to detect patterns printed on screen and launch triggers. E.g.: if something prints "created a job XXX" on screen, I can color this text blue and make it a URL pointing to example.com/XXX.


Look for "Triggers" under the "Advanced" tab of your profile. You just need to enter a regex and specify an action.
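For the parent's example, the trigger definition would look roughly like this (the regex and URL are placeholders; a second trigger with the "Highlight Text" action can do the coloring):

  Regular Expression: created a job (\w+)
  Action:             Make Hyperlink
  Parameter:          https://example.com/\1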


This Apple "Upgrade" is an almost perfect example of real-world Doublethink as George Orwell defined it in the novel 1984.

https://en.wikipedia.org/wiki/Doublethink


I also noted the Orwellian tone. We live in troubling times when the value of tinkering is lost to the tinkerers themselves.


Tinkering was never important to real hackers. It's the script kiddie analogue to hacking.


No, it's a real example of "focus on what matters" and of dropping support for bike-shedding (all too popular with tinkerers).

Many users are not tinker-happy college students discovering the world of UNIX and customization -- they're people who want to get something done (something not related to "how the computer works").


I disagree entirely with that. I think Apple sold people the idea that what they provide is really cool and 'just works' and that you want to be in their 'ecosystem.' It worked, people bought the allure and then just stopped looking at other solutions because Apple said they don't need to.

I think your post highlights exactly that: a buy-in to the brand so strong as to not even try and look elsewhere when issues arise, just adapt and accept.


How do you explain people like me, who used desktop Linux for years but then moved to the Mac? I fully understand the benefits of Linux and MacOS. I compared them and made an informed choice around the time I stopped being a student and started making my living as a professional writer. I've got deadlines and word counts, and I can't afford to waste time fiddling with my computer. I want to be able to turn it on, even after updates, and start working.

But, I also love to tinker and automate, and MacOS gives me a full Unix environment to play with.


So, in the end, disagree, or agree? The head spins.


Amiga, Atari and other 8/16 bit platforms also did just work, it wasn't Apple selling the idea.

The IBM PC was the exception here, and it only grew thanks to IBM's mistake of not protecting their BIOS as well as they thought they had.


Do you really believe that? That people are so incapable of independent thought that they will subserviently obey the wishes and demands of Apple?

Come on.


ps. I used to have an iPhone, now I have an Android phone.


Considering how many developer-focussed apps were written for macOS first or macOS only because of the massive adoption of MacBooks by developers after the rise of iOS apps, this is an extremely narrow perspective.

The appeal of Darling isn't bringing macOS to Linux, the appeal is breaking down the walls put up by shortsighted developers who decided not to bother porting the apps they developed for macOS over to a different platform because "every developer I know uses macOS".

Linux users who would want something like Darling aren't aspiring to be macOS users, they are Linux users who want/need to use software written by macOS users who didn't put in the effort to port their software.


Insightful comment. I call this 'working on the car more than we drive it'. You see that a lot in tech. It's fine as a hobby, but not for production. You do need both, as the tinkering helps us master concepts and systems and find new, better ways which end up becoming best practice in the stable, production systems. You just have to know when to tinker and when not to.


I disagree; Apple products are popular because they just work (for most people).

I was a strict Windows user for a long time, and I remember the days of reinstalling Windows every few weeks or so to get performance back. Microsoft seems to acknowledge this (but not fix it!) with the feature of resetting your PC.

I stopped using Linux when the system completely broke after updates (on a slow connection) and I spent way too much time trying to fix it. (This was last year, on Ubuntu.)

When I was younger I'd just deal with them, but now I just use OS X.


> So, the appeal of Mac emulation is very limited, because it starts out with tinkering.

There are countless utilities for tinkering with your Mac setup, and the best and most tinker-y terminal for any platform is a Mac-only app (iTerm2). It's just that macOS starts out at a far higher usability level without tinkering and comes with lots of basic stuff working that no amount of tinkering will ever give you with Linux (like being able to actually find files on your computer; good luck doing that with Linux).

And of course even if your premise were true, there are plenty of reasons people would like to run macOS apps under linux:

- lots of good software is mac only

- automated testing with Macs is a pain and expensive; doing at least some of it with Linux boxes would be a pretty decent win.


> There are countless utilities for tinkering with your Mac setup, and the best and most tinker-y terminal for any platform is a Mac-only app (iTerm2). It's just that macOS starts out at a far higher usability level without tinkering and comes with lots of basic stuff working that no amount of tinkering will ever give you with Linux

I was with you right up until you made the cheap shot at Linux. That was as unnecessary as it was untrue.

> (like being able to actually find files on your computer; good luck doing that with Linux)

Linux is no harder to find files on than OS X. They have the same CLI tools, and they both have desktop environments that support file indexing and rapid searching (eg Spotlight). And on Linux that all gets installed by default with the desktop environment - just as it does on OS X. So they really aren't all that different.

> lots of good software is mac only

I agree. But there is also lots of good software on Linux too. In all my years of running Linux and OS X the only Mac-only application I've missed on other platforms is Logic. But even there, we're talking 15 years ago and Linux has come a long, long way since in terms of the quality of DAWs available on it.


> I was with you right up until you made the cheap shot at Linux. That was as unnecessary as it was untrue.

Here are some concrete examples:

- macOS has essentially a single set of efficient and consistent keybindings that works everywhere. The command line and GUI work essentially the same: I can use Emacs-style navigation with C-a C-e C-f C-b etc. everywhere, and I can copy with Cmd-C and paste with Cmd-V in my terminal. The geniuses who created the first mainstream Linux GUI paradigms decided to copy Windows and go with Control as the main key modifier, creating a set of clashing keybindings for GUI and console.

- file history (If I messed something up in my keynote presentation, I can easily compare previous versions and restore what's needed)

- Cmd-? allows you to access any menu item quickly by search; how do I do that on Linux?

- finding files (see below)

- MacOS can recover from memory pressure fine. How do I get my Linux machine not to effectively crash if I run an app that happens to use too much memory (technically it just swaps itself to death; in practice a reset is the only remotely timely way to recover)? I've tried any amount of tweaking, but it turns out that you can't turn off overcommit and swap completely; even if you have lots of memory, things will just break randomly (Chrome, for example).

> Linux is no harder to find files on than it is on OS X.

Can you point me at something on Linux that comes anywhere close to Spotlight/mdfind?

One of the reasons Spotlight works well is file system integration; to the best of my knowledge nothing on Linux does that.

It's trivial to open a file by recency or contents or type (or tags or ...) from the open dialogue of any application on macOS; how do I do that on Linux?

If I want to find all mp4 videos of 1000x1000 resolution that I modified within the last week, or all jpeg files with sRGB color profile, I can do so instantaneously with mdfind.
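For the curious, that search is a raw Spotlight query along these lines (attribute names can be checked with mdls on a sample file; I'm less sure of the exact relative-date syntax):

  mdfind 'kMDItemPixelWidth == 1000 && kMDItemPixelHeight == 1000 && kMDItemContentType == "public.mpeg-4" && kMDItemFSContentChangeDate >= $time.today(-7)'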

That is not to say that linux isn't more ergonomic for certain things, but in my experience they tend to be mostly limited to things only programmers would care about (/proc is the number one thing I miss on a mac; some commandline utilities are also nicer on linux, but you can normally install them easily enough on macs as well).

For what it's worth: I'm using both linux and macOS daily and am both productive and reasonably expert with both.


Re. keybindings: you can change your default GTK bindings to Emacs-style if you want to. Whether the super/Windows/Command key should be used for window management or application controls is still up for debate; it always bothers me whenever I have to use a Mac that some browser bindings are already taken by the WM/OS because Cmd is shared between apps and the OS. There are very few bindings that you can’t change on Linux or another non-Mac Unix-like.
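(Concretely, on GNOME 3 that's a single setting:)

  # tell GTK apps to use Emacs-style editing keys (C-a, C-e, C-w, ...)
  gsettings set org.gnome.desktop.interface gtk-key-theme "Emacs"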

Re. file history: use real version control or a filesystem with this functionality (I imagine that ZFS would). Better to just use Git.

Re. Cmd-?: no alternative exists that I know of. However, menu bars are usually less prevalent on software written for traditional Unix than on a Mac.

Re. Finding files: GNOME does this with the search bar OOTB it appears. Otherwise, use search in Nautilus like you would in Finder.

Re. Memory pressure: search for “Linux swappiness” in a search engine. Do you prefer maximum available memory or responsiveness? Linux gives you the choice here.
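(For reference, the knob in question:)

  # lower the kernel's eagerness to swap (default is 60); takes effect immediately
  sudo sysctl vm.swappiness=10
  # persist it across reboots
  echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf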

Recency, contents and type can all be sorted with Nautilus on GNOME. I’m not aware of a way to find very specific files that meet your search query quite like you mentioned (although I’m sure they exist). I would pipe file into grep personally, but that’s probably too rudimentary for what you want.


> Better to just use Git.

How would this help with my example? Git is terrible for managing non-text files and has zero support for browsing such files interactively. It also doesn't work at the file level and is pretty much unusable for anyone who isn't a developer.

> Re. keybindings: you can change your default GTK bindings to Emacs-style if you want to.

Yeah, but that doesn't really work; it just makes the whole mess even worse (oops, you can no longer select everything with a shortcut; webapps and other toolkits don't care; etc.).

> Do you prefer maximum available memory or responsiveness? Linux gives you the choice here.

It doesn't. I want responsiveness, but I can't have it. Turning off overcommit (and swap) does improve responsiveness, but it is not a viable option for a desktop system; apps will just break if you turn off overcommit completely.

> Recency, contents and type can all be sorted with Nautilus on GNOME.

For me this works neither reliably nor with acceptable performance (unsurprising since a proper version needs FS integration).


> Turning off overcommit (and swap) does improve responsiveness but is not a viable option for a desktop system

Back when I got my first SSD I ran Linux without a swap file/partition. I did this partly because it was only a 60GB SSD, so I wanted to conserve space. I also did it because I didn't want to shorten the life of the SSD with constant swap writes (this was back when such a thing was a concern), and I ran that setup for years on a pretty modest 8GB of RAM with KDE installed (ie not just a lightweight tiling WM).

> unsurprising since a proper version needs FS integration

It really doesn't, and APFS (your file system in OSX) doesn't even do this. In fact it's probably better that your metadata indexer isn't embedded in your file system driver, because you're just going to slow down file system operations - which matters a lot on UNIX-based platforms because they do lots of file system operations.

A far better approach is to have your indexer run as a separate process that monitors file writes (you can still have a kernel hook for that if you wish); you can then catalogue your files without interrupting your normal file system operations. You can also add more granularity, like a separate database per home directory (which would be much harder to do securely if you were embedding that code into the fs driver, without going down the route of having multiple tanks à la ZFS). It also makes it much easier to optimize your metadata db, since you can now dump everything into an RDBMS rather than attaching it to the space-constrained inodes.

For what it's worth, this is another area I have first hand experience with because I've written a few hobby file systems over the years. Nothing serious nor performant; just myself messing around with a few ideas. But it's still earned me a greater appreciation for the design decisions behind the file systems we do commonly use.


> Back when I got my first SSD I ran Linux without a swap file/partition

You can turn off swap if you don't need hibernate, and from memory even turning off overcommit used to be OK-ish (of course most software written for linux doesn't try to deal with failing malloc requests gracefully, because there's no point since it never happens in the default configuration). You end up with a noticeably snappier system. However, this no longer seems to work in practice. Try turning off overcommit completely and see how long it takes Chrome to crash even if you have a lot of available memory.
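(For anyone who wants to reproduce this, the settings being discussed are:)

  # never overcommit: the commit limit becomes swap + overcommit_ratio% of RAM
  sudo sysctl vm.overcommit_memory=2
  sudo sysctl vm.overcommit_ratio=100
  # and disable swap entirely
  sudo swapoff -a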

> A far better approach is to have your indexer run as a separate process that monitors file writes (you can still have a kernel hook for that if you wish)

This is how Spotlight works though, no? It's a separate process that gets notified by the kernel on file system changes and then indexes them (that's what I meant by FS integration). I agree that you don't want to synchronously update all indexing metadata on FS operations, because everything will grind to a halt if you do. But you still want OS support, such as reliable notification and extended FS attributes, to store things like "this was downloaded from here" or tags. I don't think there is anything particularly magical about this (Spotlight is 15-year-old tech, and Linux has had xattr support in all major file systems for ages), but in practice xattrs end up pretty much useless on Linux because next to nothing uses them (baloo probably does) and, as far as I'm aware, there is no robust file system change notification API (you can use inotify for some stuff, but it's limited in various ways). I'd love to be wrong about this though.
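(Linux xattrs themselves are easy to poke at by hand; the attribute name here is made up for illustration:)

  # attach a "where did this come from" note to a file, then read it back
  setfattr -n user.origin-url -v "https://example.com/report.pdf" report.pdf
  getfattr -d report.pdf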

I think the situation is better on macOS, but it might just be that spotlight is more polished and there is no fundamental difficulty in writing the same for linux these days.


> You can turn off swap if you don't need hibernate

Do people still hibernate? I thought these days suspending was a solved problem.

> Try turning off overcommit completely and see how long it takes Chrome to crash even if you have a lot of available memory.

I thought the point of this discussion was talking about sane defaults? Of course if you're going to mess with kernel parameters then you run the risk of getting undesired behaviour. It's no different to when we used to tweak the BIOS in the 90s. So I'm not going to disagree with you there. But what are you actually proving, aside from how easy it is to break things if you mess with core settings that are designed for experts?

> This is how spotlight works though, no? It's a separate process gets notified by the kernel on file system changes and then indexes them (that's what I meant with FS integration).

That's not file system integration though. What you were actually describing was completely different behaviour. Moreover, you claimed that Spotlight works differently from other tools of its ilk, and that is also untrue.

> (spotlight is 15 years old tech and linux has had xattr support in all major file systems for ages)

Again, you don't want that information in the file system table. Storing every little bit of information like that in xattr would slow down standard file system operations. What you actually want to do is store that information in a separate RDBMS (eg sqlite3, MySQL/MariaDB, etc). To be honest even something like Redis might work as long as it has a persistent backup.

> as far as I'm aware there is no robust file system change notification API (you can use inotify for some stuff, but it's limited in various ways). I'd love to be wrong about this though.

I've not spent a great amount of time with inotify, but from my limited exposure I do recall it wasn't great with nested hierarchies. There are probably better ways that I don't know of, but this is a particular problem I've not needed to solve before, so I'm as in the dark as you are.
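From what little I remember, inotify-tools will recurse for you, but only by registering a watch on every directory in the tree, which is exactly why it copes badly with big nested hierarchies (a sketch):

    # -r walks the tree and adds one inotify watch per directory it finds
    inotifywait -m -r -e create,delete,moved_to,moved_from ~/Projects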

> I think the situation is better on macOS, but it might just be that spotlight is more polished and there is no fundamental difficulty in writing the same for linux these days.

Honestly, I think the perceived differences are all imaginary. Like wine tasting when you're told one bottle is expensive and another is moderately priced - lots of people will start to imagine deeper flavours in the more expensive bottle even if those flavours don't exist. So much of our perceptions are based on expectations rather than experiences and I think that's what's happening here because I've used both Krunner and Spotlight and my honest impression is that they're both much the same.


> That's not file system integration though. What you were actually describing was a completely different behaviour.

I should probably have phrased this differently, "kernel file system layer integration" maybe. The relevant (and presently, I believe, lacking) part on Linux would be the VFS. It also relies on applications making consistent use of xattrs for some functionality, something that also does not happen on Linux.

> Moreover, you claimed that Spotlight works differently from other tools of it's ilk and that is also untrue. [...] Honestly, I think the perceived differences are all imaginary.

Right. I'm not a file system expert, but I'm increasingly wondering whether your confident pronouncements are backed by sufficient knowledge of what you're talking about. Spotlight is implemented with major kernel support in the form of fsevents. This allows its user-space portions to receive fairly reliable and timely notification of file system changes, efficiently. This is a key ingredient in making it work as well as it does.

(see e.g. https://eclecticlight.co/2017/09/12/watching-macos-file-syst...)

Now the thing is, linux doesn't have a direct equivalent (or at least if it now has, it's a pretty recent thing, more than a decade after spotlight).

Quoting from lkml (https://lkml.org/lkml/2016/12/20/312)

    Other operating systems have a scalable way of watching changes on
    a large file system. Windows has USN Journal, macOS has FSEvents
    and BSD has kevents.

    The only way in Linux to monitor file system namei events
    (e.g. create/delete/move) is the recursive inotify watch way and
    this method scales very poorly for large enough directory trees.
In light of this apparent disparity, can you provide more detail on how Linux-based indexers work just the same and just as well as Spotlight on macOS? What's the equivalent of fsevents they're all using?

> But what are you actually proving, aside from how easy it is to break things if you mess with core settings that are designed for experts?

Let me try again: with default settings my high-spec Linux box ground to an unusable state (and no, I'm not making it up) frequently enough that I got sick of it. So, contrary to what you (somewhat rudely) continue to imply, I'm not some bozo who randomly screwed around with system settings he didn't grok on a whim and then started whining after everything broke.

> Again, you don't want that information in the file system table.

Yeah, you do, because that way it stays around when you copy, move or archive the file. You probably only want to do that with a few select meta-info fields (like the examples I gave earlier: download origin info and user-supplied tags), but that's exactly what macOS does. Also, whilst I agree that storing search indexes and everything directly in the file system is probably not ideal, there is historical precedent for systems that did exactly that, fairly successfully from what I hear (BeOS/BFS).

P.S. maybe a more productive direction: what is your recommended way of setting up baloo or some other Linux indexer for running mdfind-style command-line queries (I don't want KDE or GNOME, and I think baloosearch vs mdfind is also easier to compare directly)?


> I should probably have phrased this differently, "kernel file system layer integration" maybe. The relevant (and presently, I believe, lacking part) in linux would be VFS. It also relies on applications making consistent use of xattrs for some functionality, something that does also not happen on linux.

But that's not how any of those other services work - including Spotlight.

> I'm not a file system expert, but I'm increasingly wondering if your confident pronouncements are backed up by sufficient knowledge what you're talking about.

I appreciate your frustration, but the problem here is that you keep conflating multiple different technologies and missing the distinction I'm trying to make between them. I admit I'm not the best at explaining complex technologies (though I wouldn't say the stuff we're talking about is particularly complex), so maybe this conversation is better left for you to do some independent research, because there is clearly a language gap between what I'm trying to describe and what you're apparently reading.

But the crux of it is you seem to think Spotlight stores all of its data in the file system itself and is unique in that regard. That isn't true on either count:

1. Spotlight will use a separate database - not xattr - to store its indexes.

2. Every tool akin to Spotlight (including Krunner) does the same.

There is the caveat that some of the searchable parameters in Spotlight obviously would be in the file system as well as Spotlight's database - which might be where you're getting confused? But not everything you described would be in xattr, and Spotlight itself wouldn't be running slow file system scans to return its results when it could instead use a local cached database (as I described above) with indexed fields against several parameters rather than just the inode number (which I'll get into later).

You also seem to think that inotify and/or fsevents count as "file system integration". They do not. They are completely separate APIs. Whether they're backed by a kernel syscall is completely beside the point, because they're not part of the file system ABI. Thus they're not actually tied to the file system itself (ie Spotlight can work against any file system rather than just APFS).

> Let me try again: with default settings my high-spec Linux box ground to an unusable state (and no, I'm not making it up) frequently enough that I got sick of it. So, contrary to what you (somewhat rudely) continue to imply, I'm not some bozo who randomly screwed around with system settings he didn't grok on a whim and then started whining after everything broke.

But you are overcommitting resources to virtual machines and then moaning when it grinds to a halt. Which isn't any better than tinkering with kernel parameters and making the same complaints.

> Yeah, you do because that way it stays around when you copy, move or archive the file.

That's what fsevents is for ;)

By the way, even the file system doesn't index files by file name or path. Every file system object (files, directories, TTYs, etc) on UNIX and Linux is just an inode, so even the file name and path are just metadata stored against the inode. The kernel itself doesn't understand file names; it just passes inode numbers around, and your file system driver will return metadata such as the file name if requested by the calling userspace tool. That's how it works at a low level, even though file names and paths feel like first-class parameters in the userspace tools we use.
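You can see this from userspace, for what it's worth (the path is just an example):

    ls -i /etc/hostname                                 # prints the inode number, then the name
    stat -c 'inode=%i links=%h name=%n' /etc/hostname   # same inode, plus the hard link count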

The reason you don't want too much metadata in the file system itself (eg xattr) is that it slows down file system operations. In fact many GUI platforms intentionally store extended attributes in hidden files (technically just dot-prefixed, because there isn't actually a "hidden" attribute on UNIX) for that reason. Partly that reason, anyway - the other part is that not all filesystems support xattr. Which is actually another reason Spotlight wouldn't want to use xattr.

> Also, whilst I agree that storing search indexes and everything directly in the file system is probably not ideal, there is historical precedent of a systems that did exactly that, fairly successfully from what i hear (BeOS/BFS).

I did run BeOS, but I can't remember much about BFS so I'm not going to comment on that specifically; however, the other systems were split between two camps:

1. They either stored extended attributes in hidden files or directories - such as .directory (KDE), .DS_Store (OSX), desktop.ini (Windows) - or

2. instead of a traditional file system layout they have what is ostensibly a fully-fledged RDBMS. Those tended to be exclusive to mainframes, but Microsoft experimented with a similar approach with WinFS in Longhorn (I think it was called?). However, it was eventually canned due to its shitty performance.

That's at least the historical precedent for storing super-detailed meta-information. Historically, the stuff that appeared to be stored as xattr was often just read from the file data itself (eg image sizes might be read from the JPEG headers). In fact in the 90s it was common for some platforms to identify the type of a file by literally reading the first few bytes of that file (eg does it have a pkzip header?), and some CLI tools still do this: `file` does exactly that, and `grep` reads the first 1000 or so bytes (the exact number escapes me) and, if there is a null byte (0x00), assumes the file is binary rather than text and says so instead of printing matches.
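Easy enough to verify from a shell (the file names here are made up, and grep's exact wording may vary by version):

    file mystery.bin             # identifies the type from magic bytes, not the name
    printf 'foo\0bar' > blob     # put a NUL byte early in a file...
    grep foo blob                # ...and grep reports "Binary file blob matches"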

As an aside, one of the hobby file systems I wrote was along the lines of (2) too. It used vanilla MySQL/MariaDB as the back end, because one of its features was that you could connect to a remote filesystem via a simple MySQL connection string. It was a pretty fun project and I'd gotten all the read operations working, but there were a few bugs with the write operations that I never fully solved, and I eventually lost interest when I started working on other projects.

> P.S. maybe a more productive direction: what is you recommended way for setting up some baloo or some other linux indexer for running mdfind-style commandline queries with it (I don't want KDE or Gnome and I think baloosearch vs mdfind is also easier to compare directly)?

Honestly I don't know. I might not like Windows much as a platform, but I do really like the explorer.exe shell as a UI paradigm, so I tend to gravitate towards KDE on Linux (plus I think the KDE team has done a great job refining that paradigm in ways that Microsoft has failed to). Krunner has always "just worked" for me, so I haven't spent any energy looking for ways to replace it. However, I'm sure there will be some guides online about setting up runners (is that what they're called?) on Linux, given the diversity of its ecosystem.


I don't mind OSX - honestly there are bits of it I do genuinely like. But it's blinkered fanboyism that really does the platform harm.

> keybindings

Honestly, I find Mac OS keybindings and the keyboard layout the worst thing about using Macs. Yes, it might make some sense, but when literally every other platform on the planet follows the same standard apart from Apple, that makes Apple the ugly stepsister regardless of how rational it might be on paper.

I mean if you only ever use Macs then I guess you might like it, but for anyone who swaps between systems (or even just wants to use a non-mac keyboard) it can be very annoying.

> file history

You can have that in Linux

> Cmd-? (how do I do that on Linux?)

*shrugs* Maybe you can't. But that's just one feature. As a counterargument I could list a dozen things that are in Linux that aren't in OSX, like up-to-date core utils and proper package management - which are just about the two most important things on a dev machine, far more important than Cmd-?. And sure, you could install GNU core utils via brew, but none of that is part of the default OSX build - which matters because the whole basis of your argument was that OSX has better defaults.

Ultimately though I don't see the point in nitpicking each OS - feature by feature.

> finding files

I'd already disagreed with this in my previous post, after you made that claim earlier.

> MacOS can recover from memory pressure fine. How do I get my linux machine not to effectively crash if I run an app that happens to use too much memory (technically it just swaps itself to death; in practice reset is the only remotely timely way to recover)? I've tried any amount of tweaking, but it turns out that you can't turn off overcommit and swap completely; even if you have lots of memory, things will just break randomly (chrome for example).

The problem there is the application. However, Linux will just kill the last process that over-allocates memory. If you're getting the kind of symptoms you've described then you've either fiddled with your swap file settings (so not running defaults) and/or you're running Linux on spinning disks while comparing it to nice fast SSDs on OSX. Either way, you're not comparing like for like.

> Can you point me at something on linux that comes anywhere close to spotlight/mdfind?

There's loads. Krunner, for example, has all the same features as Spotlight plus supports plugins to extend it. For example I can run math calculations in it - which I haven't yet worked out how to do in Spotlight.

> For what it's worth: I'm using both linux and macOS daily and am both productive and reasonably expert with both.

But do you actually use desktop Linux on modern hardware? Or are you just running Linux on a few servers and guessing about the desktop experience? I ask because your comments were valid about 10 or 15 years ago but really aren't the case any longer.

> That is not to say that linux isn't more ergonomic for certain things, but in my experience they tend to be mostly limited to things only programmers would care about

This I do wholeheartedly agree with.


> > file history

> You can have that in Linux

You can use lvm or zfs snapshots, but that's not what I'm talking about – I'm talking about in-app browsable history of things like documents or presentations.

> Krunner, for example, has all the same features as Spotlight

Last I checked it used Baloo to do the actual indexing. The list of high-priority features/bugs on the project site https://community.kde.org/Baloo ("Baloo crashes a lot in various places" etc.) and a quick google make it look like it remains alpha software at best, and I'm also pretty sure it doesn't have an equally reliable index-update mechanism. The most important thing about Spotlight for me is that it can search file names and content (filtered by type if necessary) fast, reliably and up to date. But you can also do types of searches that, as far as I'm aware, none of the Linux utilities can do.

E.g. show me all the items I downloaded from a google.com domain:

    mdfind "kMDItemWhereFroms == '*google.com*'"
If I want to see the download sources sorted by frequency I can add

    mdfind -0 "kMDItemWhereFroms == '*google.com*'" | xargs -0 -n1 mdls -name "kMDItemWhereFroms" | sort | uniq -c | sort -n
> For example I can run math calculations in it - which I haven't yet worked out how to do in Spotlight.

You literally just type what you want calculated, e.g. `sin(pi/4)`.

> As a counterargument I could list a dozen things that are in Linux that aren't in OSX. Like up to date core utils and proper package management.

nix. By my lights the only proper package management for any OS. Works fine under both linux and macOS (and will also trivially supply you with up to date coreutils).
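For instance (assuming nix is already installed):

    nix-env -iA nixpkgs.coreutils    # current GNU coreutils, on Linux or macOS alike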

> But do you actually use desktop Linux on modern hardware?

I have been using (well-specced) linux desktops for most of my work for a long time.

> The problem there is the application. However Linux will just kill the last process that over allocates memory.

I don't think that's how it works. The whole point of having a proper OS (rather than, say, DOS) is that a misbehaving app won't just bring down everything else. Also, if you have a process that wildly allocates memory, by default Linux will start swapping like mad, making your computer effectively unusable (and yeah, in fact my Linux desktop does have an SSD and many times as much RAM as my MacBook, so if I'm not comparing like to like, my Linux station is the one with much more powerful hardware). And even if it runs out of swap it doesn't just kill the last process; it uses a more complex scoring algorithm which has a good chance of killing something you didn't want killed.
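The scoring is visible from userspace, for what it's worth (the PID below is hypothetical):

    cat /proc/self/oom_score                          # the kernel's current "badness" for this shell
    echo -1000 | sudo tee /proc/1234/oom_score_adj    # -1000 exempts a process from the OOM killer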


> You can use lvm or zfs snapshots, but that's not what I'm talking about – I'm talking about in-app browsable history of things like documents or presentations.

I got that. It's still just some application UI wrapped around a CoW file system. Maybe a better way of putting your point is "doing the same on Linux lacks a lot of polish" - which is true. But that's what happens when Linux has to support a multitude of file systems while Apple can control every aspect of their ecosystem.

> Last I checked it used Baloo to do the actual indexing,

Possibly? Krunner has always "just worked" for me, so I've never bothered to look under its hood.

Regarding the bug you found, well, I'd argue that you should expect to read bugs on a bug tracker, given that's the point of bug trackers. It does feel like what you're basically doing now is the equivalent of reading a 1-star review of a product (eg on Amazon) and claiming it doesn't work by proxy, while ignoring all the 5-star reviews from people who haven't had any issues. It's a heavily biased way to hold a discussion and, if we're both honest, Macs haven't been without their fair share of bad publicity either. So is it really worth our time cherry-picking all the negative things when you and I both know that they're the exception rather than the norm?

> You literally just type what you want calculated, e.g. `sin(pi/4)`.

Handy to know. I suspected it would have been possible but I kept prefixing the formula with `=` which Spotlight didn't like.

> nix. By my lights the only proper package management for any OS. Works fine under both linux and macOS (and will also trivially supply you with up to date coreutils).

My point is you shouldn't have to install a 3rd party package manager. That's the bare minimum a modern OS should provide out of the box.

> I have been using (well-specced) linux desktops for most of my work for a long time.

I struggle to believe that given the descriptions of faults that you've been discussing. Though you have also said you've tinkered with the "swappiness" parameters (plus more) so I guess it's possible that you are running current hardware but have inadvertently tweaked Linux into performing terribly? Or maybe you're just exaggerating all these problems to make a point (much like your "look, I've found a bug on a bug tracker" comment above).

Either way, if the problems were as prevalent and severe as you keep describing then you and I - and millions of other techies for that matter - wouldn't be running Linux.

> The whole point of having a proper OS (rather than say DOS) is that misbehaving app won't just bring down everything else.

"Proper OS" is such a flakey term and what you described isn't even the "whole point" of running an OS. But that's a whole other tangent. More importantly Linux doesn't do what you're accusing it of doing. Thus your statement is simply untrue in a multitude of ways.

> Also if you have a process that wildly allocates memory, by default linux will start off swapping like mad

It's actually a great deal more complicated than that. It depends on the size of your swap file, what applications you have open and their current running state (ie whether they can be paged). It depends on whether your cache is non-zero, and it also depends on the kernel parameters you define.
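The main knob is vm.swappiness, for anyone who wants to experiment (a sketch; higher values make the kernel swap more eagerly):

    sysctl vm.swappiness            # most distros default this to 60
    sudo sysctl vm.swappiness=10    # prefer dropping page cache over swapping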

> And even if it runs out of swap it doesn't just kill the last process, it uses a more complex scoring algorithm which has a good chance of killing something you didn't want to be killed.

Depends on the version of Linux (the kernel) you're running. Older kernels would just kill the last requester. Newer kernels do have a scoring algorithm, but it's really not that complex at all (if memory serves, it's ostensibly just a percentage-times-ten figure of used memory).


> It's still just some application UI wrapped around a CoW file system.

So? In terms of usability impact I still consider it a major feature (that no amount of tweaking will get you on Linux).

> My point is you shouldn't have to install a 3rd party package manager.

But macOS has a "package manager" – it's called the App Store. You and I may not think it sufficient for our (developer) needs, but we're not representative users. And for normal users, and even myself, it offers very useful functionality over what they'd get from the typical native Linux package manager. You can trivially reinstall everything on a different machine with a different OS version (as long as it's not super ancient), and it works – no "DLL hell", because everything is essentially self-contained. And since software is tied to your account, there is no need for crufty apt queries in the hope of getting out a list of packages you can back up for reinstall elsewhere or after a clean upgrade. Ubuntu has tried to establish a clone in the Snap Store, but no one I know seems to use it and I haven't tried it myself, so I don't know how compelling it is.
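For reference, the crufty route I mean on Debian/Ubuntu is something like this (a sketch):

    apt-mark showmanual > packages.txt               # dump the manually installed package names
    xargs -a packages.txt sudo apt-get install -y    # replay the list on the new machine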

And I need to install a 3rd-party package manager on (non-NixOS) Linux distros anyway, because IMO apt, yum etc. fundamentally suck and nix is the only thing that doesn't. Funnily enough, the only really compelling UX argument for Linux instead of macOS for developers that I can think of, apart from /proc, is that with NixOS you can codify your complete machine setup in a single nice config file, making it super easy to replicate, back up or inspect.

> Or maybe you're just exaggerating all these problems to make a point [...] More importantly Linux doesn't do what you're accusing it of doing.

It's a bit annoying to be told that what I'm saying literally can't be true. It is, and I didn't tweak any sysctl params or the swap setup before I got tired of my machine grinding to a halt and having to reset it. I can assure you it's entirely possible to have a high-grade desktop with an SSD and have Linux fall over swapping endlessly without even being able to move the cursor anymore. Of course this doesn't happen in "everyday" usage, otherwise no one would be running Linux, but it's not that hard to trigger if you're running VMs, a few browsers and dev tooling that can potentially consume large amounts of memory very quickly. I've moved away from having to use these tools (and also tweaked my machine) so it hasn't been a problem of late, but I ran into it with completely stock Ubuntu.


Drive-by poster here but I was wondering if you had spent any time looking into what IO scheduler you're using on Linux?

Some time ago I encountered issues similar to what you mention in your posts. I solved them by selecting the "Deadline" IO scheduler when I built my kernel.
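These days you shouldn't even need a kernel rebuild; on most distros the scheduler can be switched at runtime (sda is an example device, and on newer multi-queue kernels the name is mq-deadline):

    cat /sys/block/sda/queue/scheduler                       # available schedulers, [current] in brackets
    echo deadline | sudo tee /sys/block/sda/queue/scheduler  # switch for this device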

Hopefully this helps you solve the issue :)

~K


> (that no amount of tweaking will get you on Linux).

It's best not to use firm definitives like that when what you actually mean is "more tweaking than a typical user would be bothered with". :)

> But macOS has a "package manager" – it's called App Store. You and I may not think it sufficient for our (developer) needs, but we're not representative users.

You're seriously going to defend the App Store?! The App Store isn't just garbage for developers; it's garbage for everyone, because it's missing so many non-developer productivity tools too. It doesn't even have Chrome or Firefox in it.

> And for normal users and even myself it offers very useful functionality over what they'd get out of the typical native linux package manger

Sorry, but I'm not buying that argument. You claim to be a "normal user", then talk about messing around with kernel parameters in Linux. I really don't think you're making any fair and balanced arguments on this topic at all.

> You can trivially reinstall everything on a different machine with a different os version (as long as it's not super ancient), and it works – no "DLL" hell, because everything is essentially self-contained.

I guess if you compare the App Store to manually loading software on Windows - literally the worst platform ever created for managing installed software - then the App Store would look good. But likewise, if you compare heart surgery to a lobotomy then heart surgery would look less invasive too. This is why I don't think it's productive to compare solutions to the worst examples.

> And since software is tied to your account, there is no need for crufty apt queries in the hope of getting out a list of packages you can back up for reinstall elsewhere or after a clean upgrade.

It's a pity that the App Store offers so little software that you end up falling back to 3rd-party package managers. So on OSX you not only need to run the same "crufty [package manager] queries", you also need to install the package manager itself.

> And I need to install a 3rd party package manager on (non-NixOS) linux distros anyway, because IMO apt, yum etc. fundamentally suck and nix is the only thing that doesn't.

A moment ago you were claiming to be a "normal user". Normal users don't install nix :) Tbh I'm not the biggest fan of apt; yum is better, but I do really like pacman. However, claiming apt and yum suck while also praising the OSX App Store is just weird.

> I can assure you it's entirely possible to have a high-grade desktop with an SSD and have Linux fall over swapping endlessly without even being able to move the cursor anymore. Of course this doesn't happen in "everyday" usage, otherwise no one would be running Linux, but it's not that hard to trigger if you're running VMs, a few browsers and dev tooling that can potentially consume large amounts of memory very quickly.

Right, I get you now. That context helps. Your previous description just said you were running a browser and sounded like it was happening every day (so basically you were exaggerating by leaving key details out when describing the root cause). The problem there is that you're not just overcommitting memory but overcommitting CPU resources too. That latter part matters because swapping can be CPU-expensive as well. Hence why your system was grinding to a halt.

Also, I still think you're a little to blame there, because if you're running VMs then you should be setting their thresholds to a level that doesn't overcommit your system's resources (bearing in mind these tools aren't the stuff that "normal users" would be using either). It's like opening a bottle of wine and pouring yourself 4 glasses, then complaining that the bottle is empty and you couldn't squeeze out a 5th glass (can you tell I'm drinking wine at the moment, hehe?). You only have a finite amount of system resources, so you can't really complain if you intentionally overcommit them.


> You're seriously going to defend the App Store?!

Yup, flawed as it is, I find it much more useful than apt. If I'm wearing a dev hat and were forbidden from using anything to manage software installs other than one of apt or the App Store (no nix!), I'd rather have apt. But for my non-dev apps (you know, even people who tweak kernel parameters have non-programming-related apps they want to use from time to time ;), the App Store is obviously more useful.

> However claiming apt and yum suck when also praising the OSX App Store is just weird.

Why? Both fill different needs, and the App Store solves problems that matter to me acceptably well (making it easy to install up-to-date software I want, upgrade it, and remember what I have on a per-account rather than per-machine basis).

Yum and apt, on the other hand don't (they don't have up-to-date software I want, they don't give me what I consider a decent way to manage the same or similar setups on multiple machines etc.). I basically install everything I can with nix instead.

> So now on OSX you not only need to run the same "crufty [package manager] queries" on OSX but you also need to install the package manager itself too.

Unlike apt/yum nix offers good ways to do this – no cruftiness involved. E.g. you can just write a small file with what you want and you'll get it, on any machine.
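Something like this, for instance (a minimal sketch; the package names are just examples):

    # declare the tools you want in a tiny file...
    cat > shell.nix <<'EOF'
    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell { buildInputs = [ pkgs.ripgrep pkgs.jq ]; }
    EOF
    nix-shell    # ...and get an environment with exactly those tools, on any machine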

> You only have a finite amount of system resources, so you can't really complain if you intentionally overcommit them.

That's not what happened; my VMs were capped at reasonable limits. I used to run some tools for various reasons that could, in some scenarios, eat a lot of RAM fairly suddenly (I don't think the system was anywhere close to overloaded CPU-wise most of the time, but I can't vouch that I remember this right anymore).

Either way, I don't think the whole OS falling over because one app wants to consume too much memory and the OS has decided to never say no is reasonable. And it's not something I can recall ever happening to me with any other OS (in recent years, I don't want to think back to ancient windows days).


> they don't have up-to-date software I want

I take your multi-machine point but the above just depends on what repository you're pointing at (eg stable, testing, etc) and which Linux distro you're running. You can't really blame apt for being out of date if you're running Debian. And nor could you blame apt for delivering buggy packages if you're running the testing repos on Ubuntu.

It's the same package manager, just different end points.

> Unlike apt/yum nix offers good ways to do this – no cruftiness involved. E.g. you can just write a small file with what you want and you'll get it, on any machine.

Technically you can do that with any package manager - given that's the core point of a package manager :P

I've not used nix (read a little about it but never taken the time to try it), so I can't comment on how much easier it makes custom repositories compared to hosting your own apt or yum repo, but it's not actually hard to do in those two either. Plus you could always compile your own .deb or RPM and install it like a standalone installer (MSI et al).

I've got nothing against nix though. In fact, weirdly, I think you're underselling nix by focusing on the points you have rather than on its major differences from traditional package management.

> That's not what happened; my VMs were capped at reasonable limits. I used to run some tools for various reasons that could, in some scenarios, eat a lot of RAM fairly suddenly (I don't think the system was anywhere close to overloaded CPU-wise most of the time, but I can't vouch that I remember this right anymore).

The problem with overcommitting is that the limits might seem reasonable under normal workloads, but when you do end up with an empty bucket you have no safe way to recover. Or at least not with desktop virtualisation solutions like VirtualBox. ESXi et al will handle such situations more gracefully because they're designed to overcommit during off-peak workloads.

That said, I don't know how long ago you last did this, but a few years ago VirtualBox added a CPU execution cap in the guest config. IIRC it defaults to 100%, but if you're running multiple guests and/or running heavy applications on the host while also running heavy guest VMs, then it's worth dropping the CPU execution cap down so the guest cannot lock up the host.
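It's also scriptable, if memory serves ("devbox" being a hypothetical VM name):

    VBoxManage modifyvm "devbox" --cpuexecutioncap 50    # guest may use at most 50% of a host core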

> Either way, I don't think the whole OS falling over because one app wants to consume too much memory and the OS has decided to never say no is reasonable.

I think your expectation here is a little unreasonable, to be honest. You cannot drain the host of free system memory and idle CPUs and then expect the host to recover gracefully. It's like trying to douse a fire with an empty bucket. I honestly can't see how OSX would perform any differently to Linux in that regard. So you were probably using different virtualisation technologies on OSX (VMware perhaps?) that handle guests more responsibly.


I would pay a decent amount just to get Omnifocus working on windows and linux machines.


Omni has just released a beta web app, which you will have to pay a decent amount for (it's a subscription service).

https://test.omnifocus.com


You have earned yourself a virtual high five, and a redeemable beer if we ever meet in person :)

Thank you for showing this to me.


The facts remain that (1) even Free-Software developers rarely tinker with their Macs, (2) what can be achieved by tinkering is extremely limited, and (3) vanishingly few who don't own a Mac have the slightest idea of, or interest in, what MacOS apps they might (someday) be able to run.

What would the results of a successful or unsuccessful test mean? Either might be a consequence of the different environment.

I have no difficulty locating files. They stay right where I put them. If storage did not keep growing it might bother me that they never fade away.

All that said, a few Mac owners, and even some former owners, might have a use for emulation, and I would never begrudge it to them.


What a bunch of nonsense.


Something like this is sorely needed by the open-source, not-for-profit, shoestring-resources, cross-platform development community...

For example, let's say that I'm writing some cross-platform open source software. And let's say that I am developing this software on a Windows or Linux Box.

Now, let's say that for whatever reason, I don't have and can't afford a Mac (a scenario like this is more common than you would think, especially in developing countries...)

OK, so now how do I compile/run/test the Mac version of my software -- without having that Mac?

?

That's why your software is so important.

Anyway, if you can get this fully operational (GUI and everything), you'll solve that problem for that group of software developers...

If you're successful, if you prevail... Apple shouldn't sue you... they should help you, because additional software for their platform ultimately benefits their platform.

Also, judging by the source, you've done an amazing amount of work so far... I hope you can find the additional developers/contributors you need to take this thing to completion...


>I don't have and can't afford a Mac

Why wouldn't you just virtualize, same as any other OS? If you're not virtualizing on Mac hardware then there are a few minor extra hoops to jump through, but it's still less work than a hackintosh. Performance can be janky if you don't dedicate a video card to it, but something perfectly adequate like a Radeon 560 seems readily available for $60-80 now. There are a few bits of Mac-specific hardware these days, like the T-series chips, but not even all supported Macs have those by a long shot.


Doing so is not legal in most countries; the macOS T&C/EULA/etc. forbid running it on non-Apple hardware. This is a large barrier to entry that a lot of people don't want to even approach.


In developing countries piracy is rampant and nobody really cares much about EULAs.


Worth noting that outside of this bubble, macOS has less than 10% market share. Not every software needs to run everywhere.

There are other working solutions, JetBrains seems to do alright with Java runtimes.


> Now, let's say that for whatever reason, I don't have and can't afford a Mac.

Rely on some developer(s) spending 10,000 hours of their time for free developing a Mac emulation layer, or go on eBay and buy a $600 used Mac to test on...


The nice thing about the former is that the cost is shared across all consumers, whereas the latter is incurred by every user. The latter approach may still cost less for now (or forever), but it's something to keep in mind.


It may still be too expensive for devs in (very) developing countries or whatever, but I'm pretty sure there are (paid) remote desktop solutions for this. Also you can snag an old mac mini for well under $600, where I am. Like $200-300. It won't be good but that's probably better for testing anyway.


I offer a service that provides you with your own macOS VM, running on real Mac hardware. https://zeromac.com It's billed by the hour, so if you just need to quickly test something, it's pretty cheap. Buying an older mac may be more economical though if you need to develop software long term.


This is super interesting. Is it your full time job? What level of success are you having with your idea?


I run it alongside my regular consulting work, so I suppose it's not my full time job. It's also a fairly new business so I'm not too sure how successful it will be yet.


While your point is taken... I write and contribute to free/open source tools. I can afford a Mac, but will not buy one because there's zero incentive for me. While I'm not sure this project would help, if free or very low-cost proper emulation existed, I'd use it.


The logical option C is just to not support Mac, unless your target market is one disproportionately likely to use Macs, like developers or artists in the US.


It may also be a matter of principle. I won't buy Macs, even though they are sexy and I have the money for them, because I don't agree with Apple's practices. My theoretical customers shouldn't suffer for it though.


Or use one of the CI providers that will test your stuff on macOS (alongside Windows, FreeBSD, ...). Possibly for free, if your stuff is open source. I use that for Windows, which I don't personally use at all, and it finds bugs and problems all the time. I wish someone would provide free (!) CI for some rarer OSes and CPU architectures too, but I can dream.


You can't fully test things this way. Not enough to make a serious release anyway. You can easily pass all tests while not being able to even start the app. You need at least one person with the real hardware to do testing.


This really depends on the tests you're running.


True. But in the context of "tests in a CI environment which costs you less than a MacBook", I don't believe anyone would go to the level that avoids those issues. It would take a lot of work and resources to replicate a real run in a clean, real environment.


Again, this depends on the tests you’re running and how you’ve set up your CI environment.


Unfortunately MacOS testing is usually extremely slow due to limited numbers of machines to run tests on.


> For example, let's say that I'm writing some cross-platform open source software

> OK, so now how do I compile/run/test the Mac version of my software -- without having that Mac?

You don't. Darling is not a suitable replacement for testing on a real Mac in exactly the same way WINE is not a suitable replacement for testing Windows software. A developer who thinks so is misguided.

Virtualising macOS would be more suitable, although the only way to do so without violating macOS' licence conditions is to run the virtualised OS on a Mac anyway. Even then, virtualised macOS lacks hardware accelerated graphics support, limiting the testing of GUI apps at least.

> Apple shouldn't sue you

There's no reason they would. Nothing of Apple's seems to be infringed, as the system seems only capable of running anything that Apple's open source Darwin OS can.

Even if the Darling project reimplemented some of Apple's proprietary frameworks, this could be done based on Apple's open source releases of things such as Core Foundation, etc. The reimplementations of things like AppKit, when done in the future, could even possibly be based on something like GNUstep — which would give that project a well-needed shot in the arm, to say the least.

> additional software for their platform ultimately benefits their platform

Firstly, any well-written cross-platform software is easily ported to macOS. This is most evident with software intended for FreeBSD but is equally possible with software that originates on Linux; see Homebrew, MacPorts, etc. for the plethora of utilities that began life on Linux but have since become cross-platform, or software made by GNU that is typically cross-platform by design.

Secondly, Apple has been down the cross-platform road before, and none of its developers wanted a bar of it. Apps that are developed on other systems but not tested properly on macOS are always heavily criticised by macOS users as feeling foreign and un-Mac-like.

Back when Mac OS X was shiny and new, Apple offered three major platforms for developers: Cocoa, their C and Objective-C APIs inherited from NeXTSTEP; Carbon, their C and C++ APIs inherited from Classic Mac OS; and Java, a cross-platform offering to entice developers from other platforms, particularly Linux.

Apple deprecated their own Java in 2010 because (A) people really disliked using Java apps, even though Apple's implementation of the JVM was performant and had native support for Cocoa-style controls and (B) nobody was using it, with major preference going to Apple's own Cocoa and Carbon APIs.

Apple, and its userbase, prefer apps that are made with love and care _on_ Macs and _for_ Macs/iOS devices/Apple Watches/Apple TVs, respecting those platform conventions by being developed and tested on them.


> You don't. Darling is not a suitable replacement for testing on a real Mac in exactly the same way WINE is not a suitable replacement for testing Windows software.

Disagree. Darling is not quite there (yet), but Wine could reasonably be used to test dev builds, so long as a native Windows version was trialled before release.


I tried this with a C++ class: "surely I can just recompile the final project somewhere towards the end of the class and everything will totally work!"

It didn't go well.


You don't need to recompile, you can just give them the windows binary you compiled with wine. (Assuming it works.)


I stand corrected!


Generally, different platform builds are handled by different developers - the codebase has to be modularized to be ported from Windows to, say, Linux, for a start; same with the UI.

If you're already porting to Linux, you're already dealing with a UNIX executable, which makes it at least slightly easier to port to MacOS.

There are also virtualization services available where you can rent dev time, and decent and upgradeable Macs are not that expensive second hand.

I dev on MacOS all the time, and have the same issue with Windows. These days I try to write everything as a PWA to start so I especially don't have to deal with extreme UI/UX pains.


>OK, so now how do I compile/run/test the Mac version of my software -- without having that Mac?

Splurge on a $1/hour or $20/month account with

https://www.macincloud.com/

?


In case anyone is curious, you have to prepay $30 for the PAYG plan, and you get non-admin access to the slowest Mac Mini you've ever used in your whole life. It does technically work to build iOS and macOS apps, though.


I wish I had the time to contribute to this. Getting Logic running on linux would let me ditch the Apple hardware.

Its usefulness is entirely limited to the implementation of the core library frameworks such as Core Image, Audio, MIDI, Animation, Data, etc... which will be very, very difficult, I think, while maintaining FOSS status.


I wish Ardour had some of the features Logic does (most significantly flex time), as well as a nicer UI. Literally Logic is the only thing keeping me on macOS.


That, Final Cut, and the occasional need to use Xcode.


Bitwig has a Linux version now.


It always has



Linux audio latency really isn't great, having another layer in between your app and the kernel probably won't help.


It's actually quite good, or at least it's better than Windows 10 on the same hardware for me, even when using wine (Apple is still the best here, obviously).


It's actually fine and has been for a long, long while. PulseAudio kind of sucked when it first came out.


Even with jack?


This extra layer (and wine) are always going to be garbage.

If you need Mac or Windows, your best bet is not to move to Linux in the first place.

If you do mostly development or scientific computing, then you've probably dealt with more pain on those platforms - pain that melts away on Linux.

Best tool for job wins.


Maloader[0] is a similar (smaller, easier to understand) project that runs in userland. It's a nice code base to read and there's this nice presentation about it[1].

[0] https://github.com/shinh/maloader [1] http://shinh.skr.jp/slide/ldmac/000.html


I'm puzzled - won't most console apps already cross-compile without much modification? Isn't that the basis of brew/etc? The GUI piece seems like the key, but without it I'm failing to understand why this is significant.


It says that it "does not yet" run GUI applications, so it seems like they plan on it. Additionally, if no one ever talked about software like this before completion, they'd have a pretty hard time getting enough developers interested to ever reach completion.

As for making the front-page of HN, the HN crowd probably consists disproportionately of technology enthusiasts who find interest in technology beyond its immediate usefulness.


Well, this runs _binaries_ that were compiled for macOS. Yes, practically speaking, the only binaries worth running on macOS tend to be open-source utilities that can be compiled for Linux anyway. But it is a fundamental difference in implementation that makes this project truly exciting.


If you look at [0], you'll see that non-GUI APIs still need to be developed as a foundation for the project to continue. Implicit is that at some point in the future a GUI may be available. [0] also states that Apple's toolchain can be run, which is promising.

Also, Darling desperately needs a re-implementation of Apple’s CoreCrypto [1].

[0] https://www.darlinghq.org/project-status/

[1] https://www.darlinghq.org/developer-zone/low-hanging-fruit/


Projects usually walk before they run.


I wonder where humanity would be if people (including me), instead of arguing on forums, actually did something useful :)

Like, Darling could already be a polished product.


> At this point, does not yet run macOS application with a GUI

Which makes it rather pointless, for now. Practically all non-GUI software that runs on Darwin can be made to run on Linux too, and thereby even on Cygwin. What I would much rather see is library support for some BSD standard functions that are missing from Linux. Trying to migrate software that uses funopen(3) to Linux gives me an ulcer (glibc's closest analogue, fopencookie(3), has a different interface).


Will this enable Linux servers to build iOS apps?


How are iOS and macOS the same thing? They're not even running on the same architecture.


He means running Xcode to produce iOS binaries.


Then that's different. This won't work; Xcode is heavily integrated with macOS-only libraries and, as far as I'm aware, it requires GUI interaction at some point to produce the iOS binary.


I've been experimenting with using this to run legacy software, and while it seems to work great with command-line tools, as soon as you progress onto GUI stuff you quickly run into missing functions and libs.

Still insanely cool though.


Especially for macOS, where you can expect apps to stop working after a few major OS versions, this is really nice. Meanwhile, Microsoft maintains an amazing degree of compatibility. I haven't tried, but I have a feeling Office 97 works in Windows 10.


Traditionally that was the case, but older applications have been breaking in recent versions of Windows. And Windows and macOS both have hacks in place to fix older versions of popular applications (MS Office, Adobe suite, etc.).


While Windows's backwards compatibility is sadly getting worse, it's also still pretty incredible. I can run lots of obscure Windows programs that are 2+ decades old!

Mac backwards compatibility isn't as bad as some people say—I have a decade-old program that still works in Mojave, for instance—but Windows is a lot better.

The platform with awful backwards compatibility is iOS. And there, you don't even have the option to dual boot or downgrade. The fact that no one cares says something about how much we value mobile software...


Honestly, the only time iOS has broken backwards compatibility is with the 32->64-bit transition. And (not to minimize that; it is quite major) that's pretty much the only time it's happened.


Have you ever actually tried to run early iOS apps on iOS 10?

I have. I had an iPod Touch for a couple of years as a teenager, but then left iOS until I got an iPhone in college. Once I had the iPhone, I decided to go through my purchase history and re-download the apps I'd used on my iPod, for nostalgia's sake if nothing else.

To my dismay, exceedingly few of the old apps worked, and most of those that did had major graphical glitches. (Not including apps which had been updated by the developer more recently, of course.)


> To my dismay, exceedingly few of the old apps worked, and most of those that did had major graphical glitches

Link them to me and I'll try them out. I've never had that experience.


I took a look, and it seems like a lot of them were removed from the App Store. If you happen to have any of these in your purchase history:

* Convertbot

* Tap Tap Revenge Classic / 2.5 / Dance

* Roland 2

There were definitely way more, but I don't remember which ones, and now that I'm on iOS 12 I can't test any of them (all 32 bit). These are the three I specifically remember not working.


Hmm, those seem to be gone and I never downloaded them before.

Hah, guess the reason I've never noticed any compat issues is they removed the apps they broke compat with.


*Rolando 2, with an "o" at the end. I hate catching typos after the edit window has closed.


for reference a link to the previous conversation is here:

https://news.ycombinator.com/item?id=12854895


The comments here sound like they are from Android users who go on about how great Android is because they can spend hours tweaking their phones, rooting them and sideloading apps. LOL. "Made to do what you like." Dudes, just get Windows 10, install WSL and tweak away.


I really want something like Hammerspoon, but for Linux


Surely there's something equivalent...?

I used Linux on the desktop exclusively ~95-07 and switched to macOS after that. I tried the mythical "Linux on the desktop" again in 2018 and was amused to find that iTerm2 has grown so much, while it's hard to find a good replacement for it under Linux. Of all the things I'd expect it to handle well... 8)


One of the most sophisticated Linux terminal emulators I've worked with is Konsole, the KDE terminal. However, I never worked long enough with iTerm2 to learn all its features. My feeling is that there is a growing crowd of Linux users who go for "minimalism", like tiling WMs and minimal terminal emulators, using tools like tmux or zsh to streamline their experience.

Regarding Hammerspoon (which reminds me very much of what you can do with AppleScript), you can certainly do many of the examples (https://www.hammerspoon.org/go/#spoons) with modern-day frameworks like xdg, dbus, libevent and inotify. My feeling is that (for instance) Python's PyPI has pretty comprehensive libraries which allow you to use it as a glue language for all those interfaces. I would not be surprised if that even feels slimmer and more powerful than Lua in the end, but it's a matter of taste. As always.



