
The main attraction of Apple products is the certainty that, no matter what you do, they cannot be made to do what you would like. The consequence is that you feel no urge to spend time tinkering to get there, and instead adapt yourself to what it actually does. This reclaims all the time and attention that you would have spent on tinkering.

Since what Apple products do has proven adequate for a large number of people, and you are no less adaptable than they are, you know you can adapt, too.

Having once chosen to adapt to what Apple has chosen to offer, you find it easier each time, until it becomes wholly unconscious. Each time Apple takes away something you had used, you might momentarily balk at the "upgrade", but always acquiesce in the end.

[Edit] So, the appeal of Mac emulation is very limited, because it starts out with tinkering.




I don't really get this, MacOS is super tinker-y. You can script multi-app workflows with Automator, set up folder actions to magically transmute files (I have one that insta-shrinks the images in PDFs for me and puts them in an output directory), cast magic spells on selected data using custom services, and use the full range of Unix commands for text processing. You have Perl and Python right there.
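For anyone curious what such a folder action can look like under the hood, here's a rough stand-in in plain Python. The Ghostscript invocation and the fallback behaviour are illustrative assumptions, not the parent's actual setup:

```python
#!/usr/bin/env python3
"""Rough sketch of a PDF-shrinking folder action. On macOS you would
attach something like this via Automator; the Ghostscript call and
paths are illustrative assumptions, not the parent poster's setup."""
import shutil
import subprocess
from pathlib import Path


def shrink_pdfs(in_dir, out_dir):
    """Shrink every PDF in in_dir into out_dir; returns output paths."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    done = []
    for pdf in sorted(Path(in_dir).glob("*.pdf")):
        target = out / pdf.name
        try:
            # /ebook downsamples embedded images to roughly 150 dpi
            subprocess.run(
                ["gs", "-sDEVICE=pdfwrite", "-dPDFSETTINGS=/ebook",
                 "-dBATCH", "-dNOPAUSE", "-o", str(target), str(pdf)],
                check=True, capture_output=True)
        except (OSError, subprocess.CalledProcessError):
            # Ghostscript missing or unhappy: pass the file through as-is
            shutil.copy2(pdf, target)
        done.append(str(target))
    return done
```

Hooked up to a folder action (or a cron job, or inotify on Linux), this is the kind of tinkering that pays for itself.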

The things that are resistant to tinkering in MacOS are the UI and how you do stuff. Those are infinitely flexible in Linux, but heavily standardised on the Mac. However when it comes to getting useful stuff done, the Mac has a wealth of tinker-y toys waiting to do your bidding.


I think you have proven your parent's point by drawing the line between useful and time wasting tinkering.


This whole thread saddens me.

The amount of "tinkering" I have done on my Ubuntu PC was limited to changing background and reducing icon size to fit my monitor better.

I'm not going to apologise for having better performance and first class containers.


There is certainly a fair bit of extremity measuring going on in this thread, and I'm as guilty as anybody. The fact is a default install of MacOS or any of the mainstream Linux distros is a fine system just as it is.


>The amount of "tinkering" I have done on my Ubuntu PC was limited to changing background and reducing icon size to fit my monitor better.

Outliers need not apply.


Those are not tinkering things -- those are productive uses.

Tinkering is endlessly playing with the window manager configs, changing desktop environments, getting your system "just so", switching this (e.g. audio framework) for that, etc.


> The things that are resistant to tinkering in MacOS are the UI

But not completely resistant. For example, there are multiple tiling window managers for os x.


My friend uses one of those and it takes over a second for a new window to find its place. I can’t see what productivity benefits you would get from that. It looks pretty hacky compared to any decent X11 WM with tiling functionality.


I tried the adjustment to the Apple way from 2010 to 2018. By the end of it I realized that "tinkering" has an equivalent in the Apple world and it is called "upgrading." These two terms are almost synonymous in the pain which they will cause. By the end of my journey I had a firm "one major version upgrade per iOS device" rule and a desktop OS upgrade experience that was driving me crazy, because random CAD app wouldn't support the new OS, but another app wouldn't support the mainstream-old OS anymore.

I won't say that tinkering is totally productive, but 1) personally my tinkering on Linux has led to technology development and I'm the type of guy who benefits from that anyway, and 2) I believe the pain you leave behind in lost productivity is at least offset by 25% instant upgrade-related stress for each Apple device in your household or sphere of personal work activity.

Even just constrained to hardware, the horror stories about new Apple devices alone made me doubt my HW upgrade plans. By itself that was enough to make me wonder if I was about to throw away thousands.

Unsurprisingly I felt like there were things Apple could do to make this all better, but like you said, they cannot be made to do what you like :-)


.... ish.

I'm a long time Apple user from the Mac SE. The reason I switched to OSX was, effectively, that I was getting a Unix OS that ran MS Office and came with a really nice suite of built-in programmes that 'just worked'. It was the best of both worlds.

I'm a tinkerer and even today MacOS still allows me to tinker quite a lot. But there's no requirement to tinker. If I want to actually get work done, I can use it in 'It just works' mode.


As a Mac user since 2000 and through the initial pains of the OS X transition, this is on the mark.

Many old Mac users were very unhappy about the upgrade from Classic to OS X. "File extensions? Non-spatial Finder? A command line? What is this bullshit, and why does it run so slowly on my top of the line 400 MHz Power Mac G4..."

But eventually they adapted. At present, Apple is "boiling the frog" on turning Mac into something closer to an iPad Pro with a keyboard (Marzipan brings iOS UI style to desktop; mandatory app notarization prevents running non-approved software; etc.) Despite the grumbles, Mac users will acquiesce here as well and just get on with their work.


Thanks for putting into words what I had suspected for so long. Apple is really good at identifying what the vast majority of people (aka normal users) want to do with their computer and then they optimize these workflows. But if you want to do something more exotic you are usually out of luck.

Talking to Apple enthusiasts is really tough. It's almost as if you are speaking a different language. Of course, at the end of the day a computer is just a tool. They are happy with what they can do with their machine and I am happy with mine. But they have a hard time understanding why I see their system as limiting.


Many Mac users come from Linux and Windows (of course there are migrations in other directions too), most developers on MacOS also use Linux daily. So I'd wager most of them understand what you mean, but disagree.

Personally I like to tinker and personalise only a subset of tools I use to get things done (iTerm, tmux, vim...) and have good defaults on the rest.


You are talking about developers. I am talking about regular non-IT people. IT people know that they can just SSH into a Linux server or that they can run Linux in a VM or a container. Regular people don't know these things.

  > Personally I like to tinker and personalise only a subset
  > of tools I use to get things done (iTerm, tmux, vim...)
  > and have good defaults on the rest.
If these things are good enough out of the box for you, that's fine. But it also shows that you and I have different mindsets about what we expect from our digital work environments. If you spend a lot of time in front of computers, it is worthwhile to adapt the systems to your needs.


IT people need to spend more time with their non-IT moms. It really does ground you to reality more.


Others, like myself, have already done enough tinkering during our university days and would now rather focus on getting stuff done.


Exactly. And you don't have to say this and sound arrogant.

Our lives, especially with the high workload we still have to cope with, don't allow us to choose many things to do when not working.

If you have a family with kids you lose every tiny bit of freedom to "waste your time" on tinkering and get easily annoyed when things just don't work out of the box.

I like the possibility to customize everything on linux but I also hate how the linux world can't provide the standardization and clarity I am used to from OSX.

How on earth is there still no terminal emulator like iTerm2 on linux????!

Coming from linux, I miss the possibility to customize everything when on OSX. I hate how Apple always tries to lock me into its golden cage and imposes its way of thinking on me.

80% of the time I can totally agree with the apple way, but there is the other 20% when I could throw that MBP against the wall with full force.

Ok, I didn't want to start an OSX vs. Linux debate here; it's just an example to show the love-hate relationship with both.

It's great there is some effort put into connecting these worlds. The way our economy works is the reason we don't have the computers and OSes we really want to have and sadly only the open source world will change this.


  > How on earth is there still no terminal emulator like iTerm2 on linux????!
Genuinely curious: What does iTerm2 do better than any other terminal emulator on Linux?


* The gap between the window manager and the terminal is not as wide as with most terminal emulators I know in the linux world (copy&paste, drag&drop, search function, mouse support, easy (!) image rendering support out of the box...)

* User accessible features like the tmux integration (look at this: https://www.iterm2.com/documentation-tmux-integration.html)

Just to mention a few advantages.

Of course most of the awesome apps like tmux just need you to learn a bunch of new commands but I can't imagine myself doing that a lot over my whole lifetime.

Some things I just want to use without a steep learning curve and apps like iTerm2 prove that this is possible.

I've used the shell a lot for many years but I'm far from using it the way I could imagine it in the 21st century (the shell is still the superior interface for computers in my opinion, but I'm afraid this topic didn't really get much attention & innovative approaches in the last decades).


It's probably because it has a lot of features that are not unique, but you won't find all of them in a single terminal emulator on Linux.

For example, I don't know of a terminal on linux that both works as a drop down terminal (quake-like) & supports inline images.

Or split screen & password manager.


For the split screen functionality, I’d recommend tmux or Terminator. If you try w3m on xterm, you’ll notice that inline images can work. Could someone enlighten me as to why inline images are useful in the first place?
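For what it's worth, the tmux route takes only a couple of lines of config to feel close to iTerm2's splits (the `|`/`-` bindings are a common convention, not tmux defaults):

```
# ~/.tmux.conf -- iTerm2-style pane splitting
bind | split-window -h -c "#{pane_current_path}"   # split with a vertical divider
bind - split-window -v -c "#{pane_current_path}"   # split with a horizontal divider
bind h select-pane -L                              # vim-style pane movement
bind l select-pane -R
```

The `-c "#{pane_current_path}"` part keeps the new pane in the current working directory, which is one of the niceties iTerm2 users tend to expect.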


Imagine something like Jupyter Notebooks as a shell.

That was the REPL of non-UNIX graphical workstations of yore, and inline images were naturally a part of it.


One thing I have not found (admittedly I have not searched much) is the ability to detect patterns printed on screen and launch triggers. E.g.: if something prints "created a job XXX" on screen, I can color this text blue and make it a URL pointing to example.com/XXX


Look for "Triggers" under the "Advanced" tab of your profile. You just need to enter a regex and specify an action.
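As a concrete sketch of such a trigger (the job-ID pattern and URL are made up to match the grandparent's example):

```
Regular Expression:  created a job (\w+)
Action:              Make Hyperlink
Parameter:           https://example.com/\1
```

iTerm2 substitutes the capture group into the parameter, so the printed job ID becomes a clickable link.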


This Apple "Upgrade" is an almost perfect example of real world Doublethink as George Orwell defined it in the novel 1984.

https://en.wikipedia.org/wiki/Doublethink


I also noted the Orwellian tone. We live in troubling times when the value of tinkering is lost to the tinkerers themselves.


Tinkering was never important to real hackers. It's the script kiddie analogue to hacking.


No, it's a real example of "focus on what matters" and dropping support for bike-shedding (all too popular with tinkerers).

Many users are not college students discovering the world of UNIX and customization and tinker happy -- they're people that want to get something (not "how the computer works" related) done.


I disagree entirely with that. I think Apple sold people the idea that what they provide is really cool and 'just works' and that you want to be in their 'ecosystem.' It worked, people bought the allure and then just stopped looking at other solutions because Apple said they don't need to.

I think your post highlights exactly that: a buy-in to the brand so strong as to not even try and look elsewhere when issues arise, just adapt and accept.


How do you explain people like me that used desktop Linux for years but then moved to Mac? I fully understand the benefits of Linux and MacOS. I compared them and made an informed choice around the time I stopped being a student and started making my living as a professional writer. I've got deadlines and word counts and I can't afford to waste time fiddling with my computer. I want to be able to turn it on, even after updates, and start working.

But, I also love to tinker and automate, and MacOS gives me a full Unix environment to play with.


So, in the end, disagree, or agree? The head spins.


Amiga, Atari and other 8/16 bit platforms also did just work, it wasn't Apple selling the idea.

The IBM PC was the exception here, and it only grew thanks to IBM's mistake of not protecting their BIOS as well as they thought they had.


Do you really believe that? That people are so incapable of independent thought that they will subserviently obey the wishes and demands of Apple?

Come on.


ps. I used to have an iPhone, now I have an Android phone.


Considering how many developer-focussed apps were written for macOS first/only because of the massive adoption of macbooks by developers after the rise of iOS apps, this is an extremely narrow perspective.

The appeal of Darling isn't bringing macOS to Linux, the appeal is breaking down the walls put up by shortsighted developers who decided not to bother porting the apps they developed for macOS over to a different platform because "every developer I know uses macOS".

Linux users who would want something like Darling aren't aspiring to be macOS users, they are Linux users who want/need to use software written by macOS users who didn't put in the effort to port their software.


Insightful comment. I call this 'working on the car more than we drive it'. You see that a lot in tech. It's fine as a hobby, but not for production. You do need both, as the tinkering helps us to master concepts and systems and find new, better ways which end up becoming best practice in the stable, production systems. You just have to know when to tinker and when not to.


I disagree, Apple products are popular because they just work (for most people).

I was a strict Windows user for a long time, and I remember the days of reinstalling Windows every few weeks or so to get performance back. Microsoft seems to acknowledge this (but not fix!) with the feature of resetting your PC.

I stopped using linux when the system completely broke after updates (on a slow connection) and spent way too much time trying to fix it. (This was last year on Ubuntu)

When I was younger I'd just deal with them, but now I just use OS X.


> So, the appeal of Mac emulation is very limited, because it starts out with tinkering.

There are countless utilities for tinkering with your Mac setup, and the best and most tinker-y terminal for any platform is a mac-only app (iTerm2). It's just that macOS starts out with a far higher usability level without tinkering and comes with lots of basic stuff working that no amount of tinkering will ever give you with linux (like you can actually find files on your computer, good luck doing that with linux).

And of course even if your premise were true, there are plenty of reasons people would like to run macOS apps under linux:

- lots of good software is mac only

- automated testing with macs is a pain and expensive, doing at least some with linux boxes would be a pretty decent win.


> There are countless utilities for tinkering with your Mac setup, and the best and most tinker-y terminal for any platform is a mac-only app (iTerm2). It's just that macOS starts out with a far higher usability level without tinkering and comes with lots of basic stuff working that no amount of tinkering will ever give you with linux

I was with you right up until you made the cheap shot at Linux. That was as unnecessary as it was untrue.

> (like you can actually find files on your computer, good luck doing that with linux)

Linux is no harder to find files on than OS X. They have the same CLI tools, and they both have desktop environments that support file indexing and rapid searching (eg Spotlight). And on Linux that all gets installed by default with the desktop environment - just as it does on OS X. So they really aren't all that different.

> lots of good software is mac only

I agree. But there is also lots of good software on Linux too. In all my years of running Linux and OS X the only Mac-only application I've missed on other platforms is Logic. But even there, we're talking 15 years ago and Linux has come a long, long way since in terms of the quality of DAWs available on it.


> I was with you right up until you made the cheap shot at Linux. That was as unnecessary as it was untrue.

Here are some concrete examples:

- macOS has essentially a single set of efficient and consistent keybindings that works everywhere. Command line and GUI work essentially the same, i can use emacs style navigation with C-a C-e C-f C-b etc. everywhere. I can copy with Cmd-C and paste with Cmd-V in my terminal. The geniuses who created the first mainstream Linux GUI paradigms decided to copy windows and go with Control as the main key modifier to create a set of clashing keybindings for GUI and console.

- file history (If I messed something up in my keynote presentation, I can easily compare previous versions and restore what's needed)

- Cmd-? allows you to access any menu item quickly by search, how do I do that on Linux?

- finding files (see below)

- MacOS can recover from memory pressure fine. How do I get my linux machine not to effectively crash if I run an app that happens to use too much memory (technically it just swaps itself to death, in practice reset is the only remotely timely way to recover)? I've tried any amount of tweaking, but turns out that you can't turn off overcommit and swap completely, even if have lots of memory, things will just break randomly (chrome for example).

> Linux is no harder to find files on than it is on OS X.

Can you point me at something on linux that comes anywhere close to spotlight/mdfind?

One of the reasons spotlight works well is file system integration; to the best of my knowledge nothing on linux does that.

It's trivial to open a file by recency or contents or type (or tags or ...) from the open dialogue of any application on macOS, how do I do that on linux?

If I want to find all mp4 videos of 1000x1000 resolution that I modified within the last week, or all jpeg files with sRGB color profile, I can do so instantaneously with mdfind.

That is not to say that linux isn't more ergonomic for certain things, but in my experience they tend to be mostly limited to things only programmers would care about (/proc is the number one thing I miss on a mac; some commandline utilities are also nicer on linux, but you can normally install them easily enough on macs as well).

For what it's worth: I'm using both linux and macOS daily and am both productive and reasonably expert with both.


Re. keybindings: you can change your default GTK bindings to Emacs-style if you want to. Whether the super/Windows/Command key should be used for window management or application controls is still up for debate; it always bothers me whenever I have to use a Mac that some browser bindings are already taken by the WM/OS because Cmd is shared between apps and the OS. There are very few bindings that you can’t change on Linux or another non-Mac Unix-like.
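For reference, the GTK side of that is a one-line setting (GTK 3 shown; Qt apps and web content won't honour it, which is part of the inconsistency being described):

```
# ~/.config/gtk-3.0/settings.ini
[Settings]
gtk-key-theme-name = Emacs
```

With this in place, C-a/C-e/C-f/C-b readline-style navigation works in GTK text fields, at the cost of shadowing some default shortcuts.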

Re. file history: use real version control or a filesystem with this functionality (I imagine that ZFS would). Better to just use Git.

Re. Cmd-?: no alternative exists that I know of. However, menu bars are usually less prevalent on software written for traditional Unix than on a Mac.

Re. Finding files: GNOME appears to do this with the search bar OOTB. Otherwise, use search in Nautilus like you would in Finder.

Re. Memory pressure: search for “swappiness Linux” in a search engine. Do you prefer maximum available memory or responsiveness? Linux gives you the choice here.
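Concretely, those are sysctl knobs; a sketch of the relevant settings (values are illustrative, not a recommendation, and their effectiveness for desktop responsiveness is exactly what's being debated in this thread):

```
# /etc/sysctl.d/99-memory.conf -- illustrative values
vm.swappiness = 10        # 0-100: lower values make the kernel swap less eagerly
vm.overcommit_memory = 2  # 2 = strict accounting; 0 is the heuristic default
vm.overcommit_ratio = 90  # with mode 2: commit limit = swap + 90% of RAM
```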

Recency, contents and type can all be sorted with Nautilus on GNOME. I’m not aware of a way to find very specific files that meet your search query quite like you mentioned (although I’m sure they exist). I would pipe file into grep personally, but that’s probably too rudimentary for what you want.
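A rough index-free equivalent of the mdfind query from upthread, using only `find` and `file` (the directory argument is an assumption; it rescans the tree on every run, which is precisely the gap an indexer fills):

```shell
#!/bin/sh
# JPEG files modified within the last week, with file(1) confirming
# the actual content type rather than trusting the extension.
recent_jpegs() {
    find "$1" -name '*.jpg' -mtime -7 -exec file {} + | grep 'JPEG image data'
}
```

Usage: `recent_jpegs ~/Pictures`. Filtering on resolution or colour profile would need a further pass through something like ImageMagick's `identify`, which is where mdfind's pre-extracted metadata starts to win.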


> Better to just use Git.

How would this help with my example? Git is terrible for managing non-text files and has zero support for browsing such files interactively. It also doesn't work at the file level and is pretty much unusable for anyone who isn't a developer.

> Re. keybindings: you can change your default GTK bindings to Emacs-style if you want to.

Yeah, but that doesn't really work, it just makes the whole mess even worse (oops, no longer can select everything with a shortcut, webapps, other toolkits don't care etc).

> Do you prefer maximum available memory or responsiveness? Linux gives you the choice here.

It doesn't. I want responsiveness, but I can't have it. Turning off overcommit (and swap) does improve responsiveness but is not a viable option for a desktop system, apps will just break if you turn off overcommit completely.

> Recency, contents and type can all be sorted with Nautilus on GNOME.

For me this works neither reliably nor with acceptable performance (unsurprising since a proper version needs FS integration).


> Turning off overcommit (and swap) does improve responsiveness but is not a viable option for a desktop system

Back when I got my first SSD I ran Linux without a swap file/partition. I did this partly because it was only a 60GB SSD so I wanted to conserve space. I also did it because I didn't want to shorten the life of the SSD with swap writes (this was back when such a thing was a concern), and I ran that set up for years on a pretty modest 8GB RAM with KDE installed (ie not just a lightweight tiling WM).

> unsurprising since a proper version needs FS integration

It really doesn't and apfs (your file system in OSX) doesn't even do this. In fact it's probably better that your meta-data indexer isn't embedded into your file system driver because you're just going to slow down file system operations - which matters a lot on UNIX-based platforms because they do lots of file system operations.

A far better approach is to have your indexer run as a separate process that monitors file writes (you can still have a kernel hook for that if you wish) thus you can then catalogue your files without interrupting your normal file system operations. You can also add more granularity like separate database per home directory (which would be much harder to do securely if you were embedding that code into the fs driver without then going down the route of having multiple tanks ala ZFS). It also makes it much easier to optimize your meta-data db since you can now dump everything into a RDBMS rather than attached to the space constrained inodes.
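A minimal sketch of that separate-process-plus-RDBMS design (nothing here is Spotlight's actual schema; it just shows why querying a side-band index beats rescanning the tree):

```python
import os
import sqlite3


def build_index(root, db_path=":memory:"):
    """Walk root once and catalogue per-file metadata in SQLite.
    A real indexer would update incrementally from write notifications
    rather than rescanning; this shows only the query side of the design."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS files
                   (path TEXT PRIMARY KEY, name TEXT, ext TEXT,
                    size INTEGER, mtime REAL)""")
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            con.execute(
                "INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?, ?)",
                (path, name, os.path.splitext(name)[1], st.st_size,
                 st.st_mtime))
    # indexed fields make "all .mp4 modified this week" a cheap lookup
    con.execute("CREATE INDEX IF NOT EXISTS by_ext ON files(ext)")
    con.execute("CREATE INDEX IF NOT EXISTS by_mtime ON files(mtime)")
    con.commit()
    return con


def find_files(con, ext, modified_after):
    """Answer an mdfind-style query from the index alone, no disk walk."""
    rows = con.execute(
        "SELECT path FROM files WHERE ext = ? AND mtime >= ?",
        (ext, modified_after))
    return [r[0] for r in rows]
```

Because the database lives outside the file system driver, ordinary reads and writes pay nothing for the indexing, which is the design point being argued here.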

For what it's worth, this is another area I have first hand experience with because I've written a few hobby file systems over the years. Nothing serious nor performant; just myself messing around with a few ideas. But it's still earned me a greater appreciation for the design decisions behind the file systems we do commonly use.


> Back when I got my first SSD I ran Linux without a swap file/partition

You can turn off swap if you don't need hibernate, and from memory even turning off overcommit used to be OK-ish (of course most software written for linux doesn't try to deal with failing malloc requests gracefully, because there's no point since it never happens in the default configuration). You end up with a noticeably snappier system. However, this no longer seems to work in practice. Try turning off overcommit completely and see how long it takes Chrome to crash even if you have a lot of available memory.

> A far better approach is to have your indexer run as a separate process that monitors file writes (you can still have a kernel hook for that if you wish)

This is how spotlight works though, no? It's a separate process that gets notified by the kernel on file system changes and then indexes them (that's what I meant by FS integration). I agree that you don't want to synchronously update all indexing meta info on FS operations because everything will grind to a halt if you do that. But you still want OS support such as reliable notification and extended FS attributes to store things like "this was downloaded from here" or tagging. I don't think there is anything particularly magical about this (spotlight is 15 years old tech and linux has had xattr support in all major file systems for ages) but in practice xattrs end up pretty much useless in linux because next to nothing uses them (baloo probably does) and as far as I'm aware there is no robust file system change notification API (you can use inotify for some stuff, but it's limited in various ways). I'd love to be wrong about this though.
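Absent an fsevents-style API, the portable fallback an indexer is left with looks roughly like this snapshot diffing (a sketch; real tools layer inotify watches on top where they can, at the scaling cost discussed downthread):

```python
import os


def snapshot(root):
    """Map path -> mtime for every file under root (one full rescan)."""
    snap = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                snap[path] = os.stat(path).st_mtime
            except FileNotFoundError:
                pass  # file vanished mid-walk; skip it
    return snap


def diff_snapshots(old, new):
    """Return (created, modified, deleted) paths between two snapshots."""
    created = sorted(set(new) - set(old))
    deleted = sorted(set(old) - set(new))
    modified = sorted(p for p in set(new) & set(old) if new[p] != old[p])
    return created, modified, deleted
```

Polling like this is O(tree size) per pass, which is exactly why a kernel-side change journal matters for large file systems.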

I think the situation is better on macOS, but it might just be that spotlight is more polished and there is no fundamental difficulty in writing the same for linux these days.


> You can turn off swap if you don't need hibernate

Do people still hibernate? I thought these days suspending was a solved problem.

> Try turning off overcommit completely and see how long it takes Chrome to crash even if you have a lot of available memory.

I thought the point of this discussion was talking about sane defaults? Of course if you're going to mess with kernel parameters then you run the risk of getting undesired behaviour. It's no different to when we used to tweak the BIOS in the 90s. So I'm not going to disagree with you there. But what are you actually proving aside how easy it is to break things if you mess with core settings that are designed for experts?

> This is how spotlight works though, no? It's a separate process that gets notified by the kernel on file system changes and then indexes them (that's what I meant by FS integration).

That's not file system integration though. What you were actually describing was completely different behaviour. Moreover, you claimed that Spotlight works differently from other tools of its ilk, and that is also untrue.

> (spotlight is 15 years old tech and linux has had xattr support in all major file systems for ages)

Again, you don't want that information in the file system table. Storing every little bit of information like that in xattr would slow down standard file system operations. What you actually want to do is store that information in a separate RDBMS (eg sqlite3, MySQL/MariaDB, etc). To be honest even something like Redis might work as long as it has a persistent backup.

> as far as I'm aware there is no robust file system change notification API (you can use inotify for some stuff, but it's limited in various ways). I'd love to be wrong about this though.

I've not spent a great amount of time with inotify but from my limited exposure I do recall it wasn't great with nested hierarchies. There's probably some better ways that I don't know of but this is a particular problem I've not needed to solve before so I'm as in the dark as you are.

> I think the situation is better on macOS, but it might just be that spotlight is more polished and there is no fundamental difficulty in writing the same for linux these days.

Honestly, I think the perceived differences are all imaginary. Like wine tasting when you're told one bottle is expensive and another is moderately priced - lots of people will start to imagine deeper flavours in the more expensive bottle even if those flavours don't exist. So much of our perceptions are based on expectations rather than experiences and I think that's what's happening here because I've used both Krunner and Spotlight and my honest impression is that they're both much the same.


> That's not file system integration though. What you were actually describing was a completely different behaviour.

I should probably have phrased this differently, "kernel file system layer integration" maybe. The relevant (and presently, I believe, lacking) part in linux would be VFS. It also relies on applications making consistent use of xattrs for some functionality, something that also does not happen on linux.

> Moreover, you claimed that Spotlight works differently from other tools of it's ilk and that is also untrue. [...] Honestly, I think the perceived differences are all imaginary.

Right. I'm not a file system expert, but I'm increasingly wondering if your confident pronouncements are backed up by sufficient knowledge of what you're talking about. Spotlight is implemented with major kernel support in the form of fsevents. This allows its user space portions to receive fairly reliable and timely notification of file system changes efficiently. This is a key ingredient in making it work as well as it does.

(see e.g. https://eclecticlight.co/2017/09/12/watching-macos-file-syst...)

Now the thing is, linux doesn't have a direct equivalent (or at least if it now has, it's a pretty recent thing, more than a decade after spotlight).

Quoting from lkml (https://lkml.org/lkml/2016/12/20/312)

    Other operating systems have a scalable way of watching changes on
    a large file system. Windows has USN Journal, macOS has FSEvents
    and BSD has kevents.

    The only way in Linux to monitor file system namei events
    (e.g. create/delete/move) is the recursive inotify watch way and
    this method scales very poorly for large enough directory trees.
In light of this apparent disparity, can you provide more detail on how linux based indexers work just the same and just as well as spotlight on macOS? What's the equivalent to fsevents they're all using?

> But what are you actually proving aside how easy it is to break things if you mess with core settings that are designed for experts?

Let me try again: with default settings my high spec linux box ground to an unusable state (and no, I'm not making it up) frequently enough that I got sick of it. So, contrary to what you (somewhat rudely) continue to imply, I'm not some bozo who randomly screwed around with system settings he didn't grok on a whim and then started whining after everything broke.

> Again, you don't want that information in the file system table.

Yeah, you do, because that way it stays around when you copy, move or archive the file. You probably only want to do that with a few select metainfo fields (like the examples I gave earlier: download origin info and user supplied tags), but that's exactly what macOS does. Also, whilst I agree that storing search indexes and everything directly in the file system is probably not ideal, there is historical precedent of systems that did exactly that, fairly successfully from what I hear (BeOS/BFS).

P.S. maybe a more productive direction: what is your recommended way of setting up baloo or some other linux indexer for running mdfind-style commandline queries (I don't want KDE or Gnome, and I think baloosearch vs mdfind is also easier to compare directly)?


> I should probably have phrased this differently, "kernel file system layer integration" maybe. The relevant (and presently, I believe, lacking part) in linux would be VFS. It also relies on applications making consistent use of xattrs for some functionality, something that does also not happen on linux.

But that's not how any of those services work - including Spotlight.

> I'm not a file system expert, but I'm increasingly wondering if your confident pronouncements are backed up by sufficient knowledge what you're talking about.

I appreciate your frustration but the problem here is that you keep conflating multiple different technologies and not understanding the distinction I'm trying to make between each of them. I admit I'm not the best at explaining complex technologies (though I wouldn't say the stuff we're talking about is particularly complex) so maybe this conversation is better left to yourself to do some independent research because there is clearly a language gap between what I'm trying to describe and what you're apparently reading.

But the crux of it is you seem to think Spotlight stores all of its data in the file system itself and is unique in that regard. That isn't true on either count:

1. Spotlight will use a separate database - not xattr - to store its indexes.

2. Every tool akin to Spotlight (including Krunner) does the same

There is the caveat that some of the searchable parameters in Spotlight would obviously be in the file system as well as in Spotlight's database - which might be where you're getting confused? But not everything you described would be xattr, and Spotlight itself wouldn't be running slow file system scans to return its results when it could instead use a local cached database (as I described above) with indexed fields against several parameters rather than just the inode number (which I'll get into later).

You also seem to think that inotify and/or fsevents count as "file system integration". They do not. They are completely separate APIs. Whether they're backed by a kernel syscall is completely beside the point because they're not part of the file system ABI. Thus they're not actually tied to the file system itself (ie Spotlight can work against any file system rather than just apfs).

> Let me try again: with default settings my high-spec linux box ground to an unusable state (and no, I'm not making it up) frequently enough that I got sick of it. So unlike what you (somewhat rudely) continue to imply, I'm not some bozo who randomly screwed around with system settings he didn't grok on a whim and then started whining after everything broke.

But you are overcommitting resources to virtual machines and then moaning when the system grinds to a halt. Which isn't any better than tinkering with kernel parameters and making the same complaints.

> Yeah, you do because that way it stays around when you copy, move or archive the file.

That's what fsevents is for ;)

By the way, even the file system doesn't index files by file name or path. Every file system object (files, directories, TTYs, etc) on UNIX and Linux is just an inode. So even the file name and path are just metadata stored against the inode. The kernel itself doesn't understand file names; it just passes inode numbers around, and your file system driver returns metadata such as the file name - if requested - to the calling userspace tool. That's how it works at a low level, even though file names and paths feel like first class parameters in the userspace tools we use.
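To make that concrete, here's a minimal sketch (Python picked arbitrarily) showing that two hard-linked names resolve to the same inode, i.e. the name really is just metadata pointing at the inode:

```python
import os
import tempfile

# Sketch: a file name is just metadata pointing at an inode.
# Two hard links to the same file share one inode number.
d = tempfile.mkdtemp()
a = os.path.join(d, "a.txt")
b = os.path.join(d, "b.txt")

with open(a, "w") as f:
    f.write("hello")

os.link(a, b)  # create a second name (hard link) for the same inode

print(os.stat(a).st_ino == os.stat(b).st_ino)  # True: one inode, two names
```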

The reason you don't want too much metadata in the file system itself (eg xattr) is that it slows down file system operations. In fact many GUI platforms intentionally store extended attributes in hidden files (technically just dot-prefixed, because there isn't actually a "hidden" attribute on UNIX) for that reason. Partly that reason anyway - the other part is that not all filesystems support xattr. Which is actually another reason Spotlight wouldn't want to use xattr.

> Also, whilst I agree that storing search indexes and everything directly in the file system is probably not ideal, there is historical precedent of systems that did exactly that, fairly successfully from what I hear (BeOS/BFS).

I did run BeOS but I can't remember much about BFS so I'm not going to comment on that specifically, however the other systems were split between two camps:

1. They either stored extended attributes in hidden files or directories - such as .Directory (KDE), .DS_Store (OSX), desktop.ini (Windows) - or

2. instead of a traditional file system layout they had what was ostensibly a fully fledged RDBMS. Those tended to be exclusive to mainframes, but Microsoft experimented with a similar approach with WinFS in Longhorn (I think it was called?). However it was eventually canned due to its shitty performance.

That's at least the historical precedent for storing super detailed meta-information. Historically, the stuff that appeared to be stored as xattr was often just read from the file data itself (eg image sizes might be read from the JPEG headers). In fact in the 90s it was common for platforms to identify the type of a file by literally reading its first few bytes (eg does it have a pkzip header?), and some CLI tools still do this: `file` does exactly that, and `grep` reads the first 1000 or so bytes (exact number escapes me) and, if there is a null byte (0x00), assumes the file is binary rather than text and prints a "Binary file matches" message instead of the matching lines.
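A quick sketch of those two magic-byte heuristics. The pkzip signature is well documented; the 1024-byte probe size is just an assumption here, since the exact number grep reads escapes me too:

```python
# Sketch of the "read the first few bytes" trick described above.
# The pkzip local-file-header signature is b"PK\x03\x04"; grep-style
# binary detection looks for a NUL byte near the start of the data.

def looks_like_zip(data: bytes) -> bool:
    return data[:4] == b"PK\x03\x04"

def looks_binary(data: bytes, probe: int = 1024) -> bool:
    # assumed probe size; the real tools use their own buffer lengths
    return b"\x00" in data[:probe]

print(looks_like_zip(b"PK\x03\x04rest-of-archive"))  # True
print(looks_binary(b"plain text, no null bytes"))    # False
```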

As an aside, one of the hobby file systems I wrote was along the lines of (2) too. It used vanilla MySQL/MariaDB as the back end, because one of its features was that you could connect to a remote filesystem via a simple MySQL connection string. It was a pretty fun project and I'd gotten all the read operations working, but there were a few bugs in the write operations that I never fully solved, and I eventually lost interest when I started working on other projects.

> P.S. maybe a more productive direction: what is your recommended way of setting up baloo or some other linux indexer for running mdfind-style commandline queries (I don't want KDE or Gnome, and I think baloosearch vs mdfind is also easier to compare directly)?

Honestly, I don't know. I might not like Windows much as a platform but I do really like the explorer.exe shell as a UI paradigm, so I tend to gravitate towards KDE on Linux (plus I think the KDE team have done a great job of refining that paradigm in ways that Microsoft has failed to). Krunner has always "just worked" for me, so I haven't spent any energy looking for ways to replace it. However I'm sure there will be some guides online about setting up runners (is that what they're called?) on Linux, given the diversity of its ecosystem.


I don't mind OSX - honestly there are bits of it I do genuinely like. But it's blinkered fanboyism that really does the platform harm.

> keybindings

Honestly, I find Mac OS keybindings and keyboard layout the worst thing about using Macs. Yes, it might make some sense, but when literally every other platform on the planet follows the same standard apart from Apple, that makes Apple the ugly stepsister regardless of how rational it might be on paper.

I mean if you only ever use Macs then I guess you might like it, but for anyone who swaps between systems (or even just wants to use a non-mac keyboard) it can be very annoying.

> file history

You can have that in Linux

> Cmd-? (how do I do that on Linux?)

*shrugs* maybe you can't. But that's just one feature. As a counterargument I could list a dozen things that are in Linux that aren't in OSX. Like up-to-date core utils and proper package management. Which are just about the two most important things on a dev machine - far more important than Cmd-?. And sure, you could install GNU core utils via brew, but none of that is part of the default OS X build - which matters because the whole basis of your argument was that OSX has better defaults.

Ultimately though I don't see the point in nitpicking each OS - feature by feature.

> finding files

I'd already disputed this in my previous post after you made that claim earlier.

> MacOS can recover from memory pressure fine. How do I get my linux machine not to effectively crash if I run an app that happens to use too much memory (technically it just swaps itself to death, in practice reset is the only remotely timely way to recover)? I've tried any amount of tweaking, but turns out that you can't turn off overcommit and swap completely, even if have lots of memory, things will just break randomly (chrome for example).

The problem there is the application. However, Linux will just kill the last process that over-allocates memory. If you're getting the kind of symptoms you've described then you've either fiddled with your swap settings (so not running defaults) and/or you're running Linux on spinning disks while comparing it to nice fast SSDs on OSX. Either way, you're not comparing like for like.

> Can you point me at something on linux that comes anywhere close to spotlight/mdfind?

There's loads. Krunner, for example, has all the same features as Spotlight plus supports plugins to extend it. For example I can run math calculations in it - which I haven't yet worked out how to do in Spotlight.

> For what it's worth: I'm using both linux and macOS daily and am both productive and reasonably expert with both.

But do you actually use desktop Linux on modern hardware? Or are you just running Linux on a few servers and guessing about the desktop experience? I ask because your comments were valid about 10 or 15 years ago but really aren't the case any longer.

> That is not to say that linux isn't more ergonomic for certain things, but in my experience they tend to be mostly limited to things only programmers would care about

This I do wholeheartedly agree with.


> > file history
>
> You can have that in Linux

You can use lvm or zfs snapshots, but that's not what I'm talking about – I'm talking about in-app browsable history of things like documents or presentations.

> Krunner, for example, has all the same features as Spotlight

Last I checked it used Baloo to do the actual indexing. The list of high priority features/bugs on the project site https://community.kde.org/Baloo ("Baloo crashes a lot in various places" etc.) and a quick google make it look like it remains alpha software at best, and I'm also pretty sure it doesn't have an equally reliable index update mechanism. The most important thing about Spotlight for me is that it can search file names and content (filtered by type if necessary) fast, reliably and up to date. But you can also do types of searches that, as far as I'm aware, none of the linux utilities can do.

E.g. show me all the items I downloaded from a google.com domain:

    mdfind "kMDItemWhereFroms == '*google.com*'"
If I want to see the filetypes that were downloaded sorted by frequency I can do

    mdfind -0 "kMDItemWhereFroms == '*google.com*'" | xargs -0 -n1 mdls -name "kMDItemContentType" | sort | uniq -c | sort -n
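For what it's worth, the counting stage of that pipeline (`sort | uniq -c | sort -n`) is easy to reproduce on the Linux side once some indexer hands you the origin list; a rough sketch with hypothetical data standing in for the mdls output:

```python
from collections import Counter

# Hypothetical download-origin values, standing in for the mdls output
# (mdfind/mdls are macOS-only; this only mimics the counting stage).
origins = [
    "https://google.com/a.pdf",
    "https://google.com/b.zip",
    "https://google.com/c.pdf",
]

# take the extension of each URL and count occurrences
suffixes = [o.rsplit(".", 1)[-1] for o in origins]
for count, ext in sorted((n, e) for e, n in Counter(suffixes).items()):
    print(count, ext)  # like `sort | uniq -c | sort -n`
```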
> For example I can run math calculations in it - which I haven't yet worked out how to do in Spotlight.

You literally just type what you want calculated, e.g. `sin(pi/4)`.

> As a counterargument I could list a dozen things that are in Linux that aren't in OSX. Like up to date core utils and proper package management.

nix. By my lights the only proper package management for any OS. Works fine under both linux and macOS (and will also trivially supply you with up to date coreutils).

> But do you actually use desktop Linux on modern hardware?

I have been using (well-specced) linux desktops for most of my work for a long time.

> The problem there is the application. However Linux will just kill the last process that over allocates memory.

I don't think that's how it works. The whole point of having a proper OS (rather than, say, DOS) is that a misbehaving app won't just bring down everything else. Also, if you have a process that wildly allocates memory, by default linux will start swapping like mad, making your computer effectively unusable (and yeah, in fact my linux desktop does have an SSD and many times as much RAM as my macbook, so if I'm not comparing like to like, my linux station is the one with much more powerful hardware). And even when it runs out of swap it doesn't just kill the last process; it uses a more complex scoring algorithm which has a good chance of killing something you didn't want killed.


> You can use lvm or zfs snapshots, but that's not what I'm talking about – I'm talking about in-app browsable history of things like documents or presentations.

I got that. It's still just some application UI wrapped around a CoW file system. Maybe a better way of putting your point is "doing the same on Linux lacks a lot of polish" - which is true. But that's what happens when Linux has to support a multitude of file systems while Apple can control every aspect of their ecosystem.

> Last I checked it used Baloo to do the actual indexing,

Possibly? Krunner has always "just worked" for me so I've never bothered to look under its hood.

Regarding the bug you found, well, I'd argue that you should expect to read bugs on a bug tracker, given that's the point of bug trackers. What you're basically doing now is the equivalent of reading a 1-star review of a product (eg on Amazon) and claiming it doesn't work by proxy, while ignoring all the 5-star reviews from people who haven't had any issues. It's a heavily biased way to hold a discussion, and if we're both honest, Macs haven't been without their fair share of bad publicity either. So is it really worth our time cherry-picking all the negative things when you and I both know they're the exception rather than the norm?

> You literally just type what you want calculated, e.g. `sin(pi/4)`.

Handy to know. I suspected it would have been possible but I kept prefixing the formula with `=` which Spotlight didn't like.

> nix. By my lights the only proper package management for any OS. Works fine under both linux and macOS (and will also trivially supply you with up to date coreutils).

My point is you shouldn't have to install a 3rd party package manager. That's the bare minimum a modern OS should provide out of the box.

> I have been using (well-specced) linux desktops for most of my work for a long time.

I struggle to believe that, given the faults you've been describing. Though you have also said you've tinkered with the "swappiness" parameters (plus more), so I guess it's possible that you are running current hardware but have inadvertently tweaked Linux into performing terribly. Or maybe you're just exaggerating all these problems to make a point (much like your "look, I've found a bug on a bug tracker" comment above).

Either way, if the problems were as prevalent and severe as you keep describing then you and I - and millions of other techies for that matter - wouldn't be running Linux.

> The whole point of having a proper OS (rather than say DOS) is that misbehaving app won't just bring down everything else.

"Proper OS" is such a flakey term and what you described isn't even the "whole point" of running an OS. But that's a whole other tangent. More importantly Linux doesn't do what you're accusing it of doing. Thus your statement is simply untrue in a multitude of ways.

> Also if you have a process that wildly allocates memory, by default linux will start off swapping like mad

It's actually a great deal more complicated than that. It depends on the size of your swap, what applications you have open and their current running state (ie whether they can be paged). It depends on whether your cache is non-zero, and it also depends on the kernel parameters you define.

> And even if it runs out of swap it doesn't just kill the last process, it uses a more complex scoring algorithm which has a good chance of killing something you didn't want to be killed.

Depends on the version of Linux (the kernel) you're running. Older kernels will just kill the last requester. Newer kernels do have a scoring algorithm, but it's really not that complex at all (if memory serves, it's ostensibly just a percentage*10 figure of used memory).
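A hedged sketch of that scoring idea. The real kernel heuristic also counts swap and page-table pages and is more involved; this only shows the "share of memory scaled to ~1000 (percentage*10), plus the userspace oom_score_adj knob" shape:

```python
def oom_badness(rss_pages: int, total_pages: int, oom_score_adj: int = 0) -> int:
    # Rough shape of the modern OOM killer score: the task's share of
    # memory scaled to 0..1000, plus the userspace adjustment.
    # Higher score = more likely to be killed. (Simplified sketch.)
    points = rss_pages * 1000 // total_pages
    return max(0, points + oom_score_adj)

print(oom_badness(250, 1000))         # 250: a task using a quarter of memory
print(oom_badness(500, 1000, -1000))  # 0: protected via oom_score_adj
```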


> It's still just some application UI wrapped around a CoW file system.

So? In terms of usability impact I still consider it a major feature (that no amount of tweaking will get you on Linux).

> My point is you shouldn't have to install a 3rd party package manager.

But macOS has a "package manager" – it's called the App Store. You and I may not think it sufficient for our (developer) needs, but we're not representative users. And for normal users, and even myself, it offers very useful functionality over what they'd get from the typical native linux package manager. You can trivially reinstall everything on a different machine with a different os version (as long as it's not super ancient), and it works – no "DLL hell", because everything is essentially self-contained. And since software is tied to your account, there is no need for crufty apt queries in the hope of getting out a list of packages you can back up for reinstall elsewhere or after a clean upgrade. Ubuntu has tried to establish a clone in the Snap Store, but no one I know seems to use it and I haven't tried it myself, so I don't know how compelling it is.

And I need to install a 3rd party package manager on (non-NixOS) linux distros anyway, because IMO apt, yum etc. fundamentally suck and nix is the only thing that doesn't. Funnily enough, the only really compelling UX argument for linux instead of macOS for developers I can think of apart from /proc is that with NixOS you can codify your complete machine setup in a single nice config file, making it super easy to replicate, backup or inspect.

> Or maybe you're just exaggerating all these problems to make a point [...] More importantly Linux doesn't do what you're accusing it of doing.

It's a bit annoying to be told that what I'm saying literally can't be true. It is, and I didn't tweak any sysctl params or the swap setup before I got tired of my machine grinding to a halt and having to reset it. I can assure you it's entirely possible to have a high grade desktop with an SSD and have linux fall over swapping endlessly without even being able to move the cursor anymore. Of course this doesn't happen in "everyday" usage, otherwise no one would be running linux, but it's not that hard to trigger if you're running VMs, a few browsers and dev tooling that can potentially consume large amounts of memory very quickly. I've moved away from having to use these tools (and also tweaked my machine) so it hasn't been a problem of late, but I ran into it with completely stock ubuntu.


Drive-by poster here but I was wondering if you had spent any time looking into what IO scheduler you're using on Linux?

Some time ago I encountered issues similar to what you mention in your posts. I solved it by selecting the "Deadline" IO scheduler when I built my kernel.

Hopefully this helps you solve the issue :)

~K


> (that no amount of tweaking will get you on Linux).

It's best not to use firm definitives like that when what you actually mean is "more tweaking than a typical user would be bothered with". :)

> But macOS has a "package manager" – it's called App Store. You and I may not think it sufficient for our (developer) needs, but we're not representative users.

You're seriously going to defend the App Store?! The App Store isn't just garbage for developers, it's garbage for everyone because it misses so many non-developer productivity tools too. It doesn't even have Chrome nor Firefox in it.

> And for normal users and even myself it offers very useful functionality over what they'd get out of the typical native linux package manger

Sorry but I'm not buying that argument. You claim to be a "normal user" then talk about messing around with kernel parameters in Linux. I really don't think you're making any fair and balanced arguments on this topic at all.

> You can trivially reinstall everything on a different machine with a different os version (as long as it's not super ancient), and it works – no "DLL" hell, because everything is essentially self-contained.

I guess if you compare the App Store to manually loading software on Windows - literally the worst platform ever created for managing installed software - then the App Store would look good. But likewise if you compare a heart surgery to a lobotomy then heart surgery would look less invasive too. This is why I don't think it's productive to compare solutions to the worst examples.

> And since software is tied to your account, there is no need for crufty apt queries in the hope to get a list of packages out you can backup for reinstall elsewhere or after clean upgrade.

It's a pity that the App Store offers so little software that you end up falling back to 3rd party package managers. So on OSX you not only need to run the same "crufty [package manager] queries" but you also need to install the package manager itself.

> And I need to install a 3rd party package manager on (non-NixOS) linux distros anyway, because IMO apt, yum etc. fundamentally suck and nix is the only thing that doesn't.

A moment ago you were claiming to be a "normal user". Normal users don't install nix :) tbh I'm not the biggest fan of apt, and yum is better, but I do really like pacman. However, claiming apt and yum suck while also praising the OSX App Store is just weird.

> I can assure you it's entirely possible to have a high grade desktop with an SSD and have linux fall over swapping endlessly without even being able to move the cursor anymore. Of course this doesn't happen in "everyday" usage, otherwise no one would be running linux, but it's not that hard to trigger if you're running VMs, a few browsers and dev tooling that can potentially consume large amounts of memory very quickly.

Right, I get you now. That context helps. Your previous description just said you were running a browser, and it sounded like this was happening every day (so basically you were exaggerating by leaving key details out when describing the root cause). The problem there is that you're not just overcommitting memory but also overcommitting CPU resources too. That latter part matters because swapping can be CPU expensive too. Hence why your system is grinding to a halt.

Also, I still think you're a little to blame there, because if you're running VMs then you should be setting their thresholds to a level that doesn't overcommit your system's resources (bearing in mind these tools aren't the stuff that "normal users" would be using either). It's like opening a bottle of wine and pouring yourself 4 glasses, then complaining that the bottle is empty and you couldn't squeeze out a 5th glass (can you tell I'm drinking wine at the moment, hehe?). You only have a finite amount of system resources, so you can't really complain if you intentionally overcommit them.


> You're seriously going to defend the App Store?!

Yup, flawed as it is, I find it much more useful than apt. If I'm wearing a dev hat and were forbidden from using anything to manage software installs other than one of apt or the App Store (no nix!), I'd rather have apt. But for my non-dev apps (you know, even people who tweak kernel parameters have non-programming apps they want to use from time to time ;), the App Store is obviously more useful.

> However claiming apt and yum suck when also praising the OSX App Store is just weird.

Why? Both fill different needs, and the App Store solves problems that are useful to me acceptably well (making it easy to install up-to-date software I want, upgrade it, and remember what I have on a per-account, not per-machine, basis).

Yum and apt, on the other hand, don't (they don't have up-to-date software I want, they don't give me what I consider a decent way to manage the same or similar setups on multiple machines, etc.). I basically install everything I can with nix instead.

> So now on OSX you not only need to run the same "crufty [package manager] queries" on OSX but you also need to install the package manager itself too.

Unlike apt/yum nix offers good ways to do this – no cruftiness involved. E.g. you can just write a small file with what you want and you'll get it, on any machine.

> You only have a finite amount of system resources so you cant really complain if you intentionally over commit them.

That's not what happened; my VMs were capped at reasonable limits. I used to run some tools for various reasons that could in some scenarios eat a lot of ram fairly suddenly (I don't think the system was anywhere close to overloaded CPU-wise most of the time, but I can't vouch that I remember this right anymore).

Either way, I don't think the whole OS falling over because one app wants to consume too much memory and the OS has decided to never say no is reasonable. And it's not something I can recall ever happening to me with any other OS (in recent years, I don't want to think back to ancient windows days).


> they don't have up-to-date software I want

I take your multi-machine point, but the above just depends on which repository you're pointing at (eg stable, testing, etc) and which Linux distro you're running. You can't really blame apt for being out of date if you're running Debian stable. And nor could you blame apt for delivering buggy packages if you're running the testing repos on Ubuntu.

It's the same package manager, just different end points.
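As an illustration (the paths and release names here are the standard Debian ones, shown as an example rather than a recommendation), switching end points is just a matter of editing sources.list:

```
# /etc/apt/sources.list - same apt binary, different end points
deb http://deb.debian.org/debian stable main
# swap "stable" for "testing" to trade stability for freshness:
# deb http://deb.debian.org/debian testing main
```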

> Unlike apt/yum nix offers good ways to do this – no cruftiness involved. E.g. you can just write a small file with what you want and you'll get it, on any machine.

Technically you can do that with any package manager - given that's the core point of a package manager :P

I've not used nix (read a little about it but never taken the time to try it) so I can't comment on how much easier it makes hosting custom repositories than running your own apt or yum repo, but it's not actually hard to do in those two either. Plus you could always compile your own .deb or RPM and install it like a standalone installer (MSI et al).

I've got nothing against nix though. In fact, weirdly, I think you're underselling nix by focusing on the points you have rather than on its major differences from traditional package management.

> That's not what happened; my VMs were capped at reasonable limits. I used to run some tools for various reasons that could in some scenarios eat a lot of ram fairly suddenly (I don't think the system was anywhere close to overloaded CPU-wise most of the time, but I can't vouch that I remember this right anymore).

The problem with overcommitting is that the limits might seem reasonable under normal workloads, but when you do end up with an empty bucket you have no safe way to recover. Or at least not with desktop virtualisation solutions like VirtualBox. ESXi et al will handle such situations more gracefully because they're designed to overcommit during off-peak workloads.

That said, I don't know how long ago you last did this, but a few years ago VirtualBox added a CPU execution cap to the guest config. IIRC it defaults to 100%, but if you're running multiple guests and/or heavy applications on the host alongside heavy guest VMs, it's worth dropping the CPU execution cap so the guest cannot lock up the host.

> Either way, I don't think the whole OS falling over because one app wants to consume too much memory and the OS has decided to never say no is reasonable.

I think your expectation here is a little unreasonable, to be honest. You cannot drain the host of free system memory and idle CPU then expect the host to gracefully recover. It's like trying to douse a fire with an empty bucket. I honestly can't see how OSX would perform any differently to Linux in that regard. So you were probably using different virtualisation technologies on OSX (VMWare perhaps?) that handle guests more responsibly.


I would pay a decent amount just to get Omnifocus working on windows and linux machines.


Omni has just released a beta web app, which you will have to pay a decent amount for (it's a subscription service).

https://test.omnifocus.com


You have earned yourself a virtual high five, and a redeemable beer if we ever meet in person :)

Thank you for showing this to me.


The facts remain that (1) even Free-Software developers rarely tinker with their Macs, (2) what can be achieved by tinkering is extremely limited, and (3) vanishingly few who don't own a Mac have the slightest idea of, or interest in, what MacOS apps they might (someday) be able to run.

What would the results of a successful or unsuccessful test mean? Either might be a consequence of the different environment.

I have no difficulty locating files. They stay right where I put them. If storage did not keep growing it might bother me that they never fade away.

All that said, a few Mac owners, and even some former owners, might have a use for emulation, and I would never begrudge it to them.


What a bunch of nonsense.



