MacOS Catalina: Slow by Design? (macromates.com)
2031 points by jrk 11 months ago | 997 comments

It seems like there is a lot of confusion here as to whether this is real or not. I've been able to confirm the behavior in the post by:

- Using a new, random executable. Even echo $rand_int will work. Edit: What I mean here is generate your rand int beforehand and statically include it in your script.

- Using a fresh filename too. Just throw a rand int at the end there. e.g. /tmp/test4329.sh

I MITMd myself while recording the network traffic and, sure enough, there is a request to ocsp.apple.com with a hash in the URL path and a bunch of binary data in the response body. Unsure what it is yet but the URL suggests it is generating a cert for the binary and checking it. See: https://en.wikipedia.org/wiki/Online_Certificate_Status_Prot...

Here's the URL I saw:


Edit2: Anyone know what this hash format is? It's not quite base64, nor is it multiple base64 strings separated with '+'s but it seems similar...
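For what it's worth: OCSP supports GET requests where the path segment is the URL-encoding of the base64 of the DER-encoded OCSPRequest (RFC 6960, Appendix A), which would explain a string that looks almost-but-not-quite like base64. A sketch for decoding a captured segment; the SEG value below is just a stand-in, paste your own:

```shell
# SEG is a stand-in; paste the path segment from your own capture.
SEG='MAMCAQU%3D'
printf '%s' "$SEG" \
  | sed -e 's/%2B/+/g' -e 's/%2F/\//g' -e 's/%3D/=/g' \
  | base64 -d > /tmp/ocsp-req.der      # -d is --decode (older macOS: -D)
openssl asn1parse -inform DER -in /tmp/ocsp-req.der   # dump the ASN.1 structure
```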

Edit3: Here is the exact filename and file I used: https://gist.github.com/UsmannK/abb4b239c98ee45bdfcc5b284bf0...

Edit4 (final one probably...): On subsequent attempts I'm only seeing a request to https://api.apple-cloudkit.com and not the OCSP one anymore. Curiously, there are no headers at all. It is just checking for connectivity.

Here's some shell script that uses a random file name and has friendlier output.

  RAND_FILE="/tmp/test-$RANDOM.sh"  # pick a fresh, random path each run
  time_helper() { /usr/bin/time $RAND_FILE 2>&1 | tail -1 | awk '{print $1}'; }  # this just returns the real run time
  echo $'#!/bin/sh\necho Hello' $RANDOM > $RAND_FILE && chmod a+x $RAND_FILE;
  echo "Testing $RAND_FILE";
  echo "execution time #1: $(time_helper) seconds";
  echo "execution time #2: $(time_helper) seconds";
Introducing a network delay makes the effect much more obvious. Normally I see a delay of about 0.1 seconds, but after using the Xcode Network Link Conditioner (pf rules) to add 500 ms of latency to everything, the delay shoots way up to ~2 seconds.

example output:

  Testing /tmp/test-24411.sh
  execution time #1: 2.32 seconds
  execution time #2: 0.00 seconds
With the Developer Tools permission granted, both executions report "0.0 seconds".
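For reference, the Network Link Conditioner drives pf's dummynet under the hood, so you can reportedly get the same effect from the command line. A sketch (macOS-only, needs sudo, and the exact pf syntax may vary by release):

```shell
# Sketch: ~500 ms of extra one-way delay on all outbound TCP via dummynet.
RULE='dummynet out proto tcp from any to any pipe 1'
sudo dnctl pipe 1 config delay 500   # configure pipe 1 with a 500 ms delay
echo "$RULE" | sudo pfctl -f -       # load the rule from stdin
sudo pfctl -e                        # enable pf
# ...rerun the timing test above, then undo everything:
sudo pfctl -d
sudo dnctl -q flush
```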

I tried just blocking "api.apple-cloudkit.com" with /etc/hosts. This reduces the delay but doesn't eliminate it. A connection attempt is still made every time. (I don't recommend making this change permanent. Just give your terminal app the "Developer Tools" permission instead.)
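If you want to see what the hosts-file experiment looks like before touching the real file, here's a sketch against a scratch copy (the real change would need sudo, and per the above, the Developer Tools permission is the better fix anyway):

```shell
# Work on a scratch copy of /etc/hosts rather than the real file.
cp /etc/hosts /tmp/hosts.test
printf '0.0.0.0 api.apple-cloudkit.com\n' >> /tmp/hosts.test
grep 'apple-cloudkit' /tmp/hosts.test   # confirm the entry landed
```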

After blocking that domain I can see that tccd and syspolicyd are logging some error messages to the console related to the failed connection. I don't recommend blocking because my guess is that'll put syspolicyd/tccd in some unexpected state and they'll repeatedly keep trying to make requests.

Try this for watching security related console log messages:

  sudo log stream --debug --info --predicate "processImagePath contains 'tccd' OR processImagePath contains 'syspolicyd' OR processImagePath Contains[c] 'taskgated' OR processImagePath contains 'trustd' OR eventMessage Contains[c] 'malware' OR senderImagePath Contains[c] 'security' "
syspolicyd explicitly logs when it makes the network request.

   syspolicyd: cloudkit record fetch: https://api.apple-cloudkit.com/database/1/com.apple.gk.ticket-delivery/production/public/records/lookup, 2/2/23de35......
(you need to enable private logging to see that url)

Enabling private logging is fairly annoying these days, unfortunately. (Interestingly, if macOS thinks you're AppleInternal, it will make it just as annoying to disable private logging…)

Wait a sec... I recognize that name. I only know how to enable private logging thanks to your detailed and informative blog post! Seriously, it's one of my favorite macOS things I've read in a while. I loved the step-by-step walkthrough using gdb you showed.

Though just today I saw that apparently an enterprise policy config can enable private logging in 10.15.3+ without having to disable SIP. https://georgegarside.com/blog/macos/sierra-console-private/
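For reference, the payload in that kind of profile is reportedly tiny. A sketch of the relevant dictionary (key names per the linked post; the PayloadIdentifier/UUID are placeholders, and this isn't verified against every macOS release):

```xml
<!-- Sketch of a logging payload that re-enables private data in the
     unified log; install as a configuration profile. -->
<dict>
    <key>PayloadType</key>
    <string>com.apple.system.logging</string>
    <key>PayloadIdentifier</key>
    <string>com.example.logging.private-data</string>
    <key>PayloadUUID</key>
    <string>11111111-2222-3333-4444-555555555555</string>
    <key>PayloadVersion</key>
    <integer>1</integer>
    <key>Enable-Private-Data</key>
    <true/>
</dict>
```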

For reference for others: this is the blog post by OP on enabling private logging in Catalina. check it out! https://saagarjha.com/blog/2019/09/29/making-os-log-public-o...

I’m glad you appreciated it, but I think it also happened to be some of the fastest-to-deteriorate advice I’ve given :) I should go back and revisit this, as on my system I currently have it stuck in a state where it unconditionally enables private data logging at boot (which means my crash logs have personal information in them unless I remember to turn it off with the workaround I’ve been using until now…)

Huh this is crazy. 2 seconds is way slow and this shouldn't involve any network activity. Seems like a real problem.

They added artificial network latency, just as they describe; that is the reason for the delay. It was made artificially long on purpose.

It’s not an unreasonable delay on a slow 3G hotspot. The problem is having performance tied to network speed, and suffering overall slow performance because your network happens to be slow.

Have I written anything that contradicts that? I simply pointed out that in the example the delay was artificial, and that it was definitely due to the network, not something other than the network, as the comment suggested.

It's called lockdown for a reason. Apple was just the very first to implement centralized binary blacklisting and revocation. They call it notarization.

Problem is that they did it unannounced. There must be some really weird stuff going on in those managers' heads. How can they possibly think they'll get away with that?

There were announcements about notarization around WWDC last year. They didn't get a lot of media traction, however; there were specific pages detailing what's required from a developer and some basic details on how it would work.

From April 10, 2019: https://developer.apple.com/news/?id=04102019a


For each and every shell or perl script that I create and use privately? No, certainly not.

Command line apps aren't affected by Notarization.

If you're compiling something yourself, the compiler won't put a quarantine bit on it and it will execute fine. Same with homebrew/friends.

Scripts don't need to be signed. There is something else going on here.

Seems that in fact, even though scripts aren't signed, if you don't have Developer Tools enabled for a given terminal, scripts are hashed and checked against known-bad digests.

not a big deal, assuming no data is kept.

Also I wonder what it looks like if a script is deemed bad...

There was nothing "unannounced" about it. Notarization was introduced at WWDC 2018 and announced as required at WWDC 2019. Every macOS developer should have been aware of this requirement. It was a special project for my apps.

I believe the concern here is that this is affecting not just macOS developers, but all developers who use macOS. That's an important distinction.

Developers who use macOS as a shiny GNU/Linux replacement are only getting what they deserve; they should have supported Linux OEMs to start with.

Those that show up at FOSDEM, carrying their beloved MacBooks and iPads while pretending to be into FOSS.

I use Apple devices knowing what they are for, not as a replacement for something else.

Sadly it's not the "shiny"... it's the fact that Mac OS has a GUI that works.

Been using Linux since the days you installed Slackware from floppies and recompiled your kernel to get drivers. The command line has always been bliss, but no one has managed to come up with a usable and consistent GUI yet.

Btw does sleep work on linux laptops these days? How's hi dpi support?

Sleep has been working on my last ~10 laptops and desktops, it's a non-issue at this point unless you have brand new exotic hardware. I did have a motherboard issue on a first-gen Ryzen that required a bios update to get it working.

hi-dpi works very nicely if you use GTK or Qt. For the other apps, it really depends how they are implemented. For me it has been working better than Windows.

These are strawman arguments. Give Ubuntu 20.04 a try and you'll see stuff pretty much just works on any common hardware. You can even use Slackware and get everything working with a bit of fiddling.

MacOS is a very nice OS but it isn't FOSS and it isn't more capable at this point, it's just a personal preference. Pretending otherwise is disingenuous.

> you'll see stuff pretty much just works

The problem is the "pretty much" part.

We all know what that means in practice. That's why OSX is popular.

I switched my AI workstation to Ubuntu 20 last week, and the experience was fast and great. I can now run docker containers with cuda, use PyCharm to coordinate everything and have code completion as if the code was local, even if it's executing on a docker worker node in our data center.

200% scaling on my 4K screen looks great, wifi, network, sleep, gpu all worked out of the box. And the IDE behaves exactly like on OS X.

The only thing I disliked was the default Ubuntu color scheme, but that was easy enough to change.

Then you cannot possibly have used MacOS. There are plenty of flaky edges that actually don't work very well.

Fucking multiple desktop shit.

My MacBook Pro can't even remember the order of my monitors when it goes to sleep, or between reboots. Even Linux can do that.

OSX can only guarantee that everything works because apple controls both the hardware and software.

Windows can only guarantee that everything works because they have a monopoly and therefore hardware vendors have to support windows.

Most laptops don't ship with linux/are never tested with linux, so it's never going to work flawlessly on all possible hardware configurations. It's just not possible.

It does however, 'pretty much' work on most hardware.

And if you buy a machine from a vendor that actually supports/pre-installs/tests linux, all of the hardware will work out of the box.

It's that "pretty much" that's the debate.

I recently switched from macOS to Ubuntu 19.10 and then 20.04 as my daily driver and it's way flakier and has far more random app crashes than macOS.

That said, the system is fast, the UX is way further along than I expected -- in some ways it's got a better UX than macOS. It's way, way faster at nearly everything.

my point is that if you want to do better than 'pretty much', you should buy a machine from an OEM that actually supports linux

If you're installing it on a random windows laptop, you're never going to get better than 'pretty much', because the OEM doesn't support linux or test their hardware with linux.

Sway does HiDPI nicely as well, so you don't even have to use the Gnome/KDE pair.

When was the last time you gave KDE a try? I just switched from using a tiling window manager and was impressed by how much stuff "just works" and the degree of customizability.

>the degree of customizability.

That's part of the problem. Customizability is good, but in return you get inconsistency that you can't fix. And even if all the system default apps look the same (they still look horrible, in my opinion), 90% of third-party apps look and feel different. You can hardly name a Linux (Qt or GTK) app that could be called elegant, or at least thought through, UI-wise. Almost all applications still look like they were built to be used on some factory terminal.

Last time I used KDE for a significant amount of time, something was distracting. Then I realized what it was: the "system tray" icons were erasing themselves, getting redrawn one by one, and readjusting their position with each redraw. Distracting as hell when you're trying to concentrate on the code in a nearby window.

Mind, that was in 2013, and hopefully KDE has improved since then. Perhaps it has even reached the level KDE 3 was at? It's been downhill from there.

Btw, I switched to Macs from running Linux with KDE as my desktop of choice full time.

Sleep usually works, assuming you get a laptop that's known to work with Linux. The Arch wiki is good for this.

HiDPI is hit and miss. Some applications work, some (especially Java) break badly. Expect to need manual, fragile configuration. You also cannot set scaling per-screen, so you're SOL if you have heterogeneous monitors.

Personally I use Windows. I check back in Linux every few months, but WSL seems to be improving far faster than native Linux is, so there's not much reason to use it anymore.

Even once HiDPI works, assuming that happens, by that point I'll have HDR and VRR as requirements... and I have no confidence that those will work anytime soon.

Some people at my work use Linux laptops. Judging by the Linux Slack channel: no, sleep doesn't work reliably yet, external monitor support is terrible, and touchpads still suck. No idea about HiDPI, but I doubt it works reliably.

Whenever you bring anything like this up though you'll just get a load of "When was the last time you tried it? It works perfectly for me" replies. Linux users don't want to admit its flaws.

It's pretty difficult to acknowledge a supposed flaw pointed out by a guy who knows a guy who uses Linux when you have never hit it yourself.

I used Linux at work for years. Sleep just works, external monitor also just works. HiDPI was rough at the start but works fine now.

Touchpads do kind of suck. I generally really dislike the default mouse acceleration. Font rendering is still so so if you don't have a HiDPI screen and the most popular desktop environments are still kind of terrible.

But sleep definitely does work.

> Whenever you bring anything like this up though you'll just get a load of "When was the last time you tried it? It works perfectly for me" replies. Linux users don't want to admit its flaws.

Are you implying that those users are lying?

I'm sure sleep does work reliably for them.

'Does sleep work on linux' is a fallacious question to begin with, because sleep working/not working depends on the hardware.

On some configurations it works flawlessly, on others it doesn't. Therefore you will always have some people saying it works, and others saying it doesn't. FWIW, my current laptop is a machine that ships with linux (system76 darter pro) and sleep works 100% reliably.

In my experience, when sleep doesn't work reliably, it's usually due to buggy firmware behaviour because most vendors don't care about supporting anything other than windows.

Along those lines, since most OEMs don't ship/test linux, it's simply not possible for every single hardware configuration to work flawlessly with linux.

Sleep has worked fine for many years, but the Hibernate button should be renamed to "Crash now, please, and again on the next restart".

You know, many things have changed since the days Slackware was installed from floppies. Even Macs have gotten working virtual memory in the meantime.

It is hard to improve things when everyone is on other platforms.

I am mostly on Windows devices, and use an aging GNU/Linux netbook for travelling.

In what concerns this Asus 1215B, everything works, with the exception that the open source AMD drivers were a downgrade from the binary blobs (OpenGL 4.1 => OpenGL 3.3 without video hardware decoding).

However I still kept it around, because although I don't target GNU/Linux as part of my work, I wanted to give Asus the message that selling GNU/Linux laptops might be a relevant business.

Eventually when it dies, I will be Windows/Android and occasionally macOS only user/developer, but I am not using any of these platforms to emulate GNU/Linux, I use them for their own value.

If you want a good experience with Linux on an ultrabook, you need to buy hardware designed for Linux. System76 or Purism are my recommendations. I don’t trust Dell.

This is the only way to do it.

The kernel devs or distros can't possibly support every hardware combination and BIOS bug for each hardware manufacturer.

For Windows, the hardware manufacturers have a reason to make the drivers bug-free: it's where they make most of their money, and Microsoft has the capacity to help them get things fixed if needed.

This doesn't exist for Linux unfortunately, unless you buy a laptop where Linux is fully supported (and you use the supported distro and kernel version most likely).

I have to say the main culprit for issues is usually power saving. I assume that's because ACPI is often badly implemented and power saving requires a lot of separate components to function together, to specification. Likely one doesn't, and the laptop comes out of sleep with the touchpad not working, or something worse.

> Btw does sleep work on linux laptops these days? How's hi dpi support?

Both work out of the box with Ubuntu 18.04 running Gnome on a Thinkpad x1 carbon.

But having to flip a few switches is a funny excuse to handcuff yourself to OSX and the hardware required to run it.

I've partially switched from MacOS X to Linux now that Wayland and PipeWire are reaching a mostly functional state, and am quite happy with it.

It took me maybe 150 hours to do the switch though during quarantine, and I still haven't managed to be able to properly connect to SMB at work...

What problem do you have connecting to SMB?

It's one of the things that work better for me on Linux than on MacOS (no problem with browsing shares, no disappearing shares, no problem with non-normalized unicode filenames).

It just doesn't connect/mount at all. Last time I tried to debug it, it was caused by a too-old SMB protocol version being used on the Windows side.

On MacOSX, I just click on connect to server, and it works for me "as is".

On MacOS, I get randomly appearing and disappearing servers in the sidebar (they disappear usually when I need them) and "cannot be opened because the original item cannot be found" for already mounted shares. It also keeps permanently mounted "photos" share on my home NAS and bad things happen when I try force unmounting it (but if it disappears because I'm not connected to my home network, that's ok for some reason). This got especially bad in Mojave and Catalina; there was a period of time (10.15.0 - 10.15.2) when I had to restart Finder if I wanted to mount share that was previously unmounted.

That never happened with Linux. What did happen was that there was a period of time on some distributions (circa Fedora 28-30?) when SMB1 discovery didn't work because SMB1 was disabled entirely. This was a security mitigation (EternalBlue/WannaCry/NotPetya), and Microsoft is doing the same in Windows 2016/2019/10[1][2]. In general, using SMB2/3 is a good idea anyway; Linux distributions/Samba eventually re-enabled SMB1 only for client-side discovery, and you can still enable SMB1 entirely if you need it for some reason - do you still have Windows 2003 someplace?

[1] https://blogs.technet.microsoft.com/josebda/2015/04/21/the-d... [2] https://techcommunity.microsoft.com/t5/storage-at-microsoft/...

> Last time I tried to debug it, this was caused due to a too old samba protocol version being used on the Windows side

IIRC, the only SMB version that would be considered too old is SMBv1 (which I'd hope they are not using on the Windows side... it's quite insecure and is deprecated by Microsoft).
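On the Linux client side, the accepted dialect range is just an smb.conf setting; a hedged sketch (keep the minimum at SMB2 unless you truly must talk to an SMB1-only box, for the reasons above):

```ini
# /etc/samba/smb.conf - client-side dialect range (sketch)
[global]
    client min protocol = SMB2
    client max protocol = SMB3
```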

I'm on Linux now, very interested in using Wayland+Pipewire, but still stuck on Xorg. What distro are you using?

I was considering building a Wayland/Pipewire Desktop software stack from scratch since my distro doesn't support them yet. I have become partial to experimenting with new software this way because it allows me to switch back to my known-good distro software without rebooting (most things I care about preserving the state of exist in the console anyway).

If it is relatively supported in a specific distro, I'm sort of interested in trying it.

I use Arch with Sway.

What if using macOS enables me to be a more effective FOSS contributor? What if I think that FOSDEM actually has many participants who aren't really into free software?

Then they are in the wrong spot to start with, and really didn't get the message of what FOSDEM is all about.

It is a bit hard to be an aspiring FOSS contributor given the foundations those contributions are built upon.

Those same Apple-loving users would be laughed at if they demoed any of their stuff at FOSDEM on Windows instead.

Yet, there is hardly any difference between those corporations going all the way back to their origins.

Somehow, after NeXTSTEP's adoption as OS X, NeXT and Apple's proprietary behaviour was forgotten and everything excused, because "hey, they are shipping a UNIX clone"!

> What if using macOS enables me to be a more effective FOSS contributor?

How would that work? When you build a house on rented ground the house may seem to be yours but it can always be taken away from you.

I’m familiar with macOS and contribute to a number of FOSS projects from it. I’m less productive on other platforms.

In that case you'd do both yourself and those who depend on you for your contributions a favour by taking some of that time to get acquainted with alternative platforms seeing as how Apple seems to be on a course which will make it harder and harder to use their platform for this purpose. Like the Boy Scouts (used to) say, "Be Prepared!". Install a (few) Linux/BSD distribution(s) in a VM and try using those for a while to get a feel of the platform and its strengths/weaknesses so you have somewhere to land when the time comes.

I do use Linux for some of my work, especially when I’m working with ELF binaries. Just not as comfortable with it.

Your analogy isn't the best. This is like someone renting construction equipment to build a house on land they own, and finding out that the construction equipment phones home to the owners about how it's being used.

developer who uses MacOS != MacOS developer. I couldn't care less about what is announced at WWDC

First? Windows SmartScreen has checked for malicious binaries since Windows 8.

> Problem is, that they did it unannounced.

No, the entire thing is the problem. Windows 10 can still open applications that were compiled in 1994, and it doesn't make it less secure.

Once you start something, it's hard to stop it.

Every software place I've worked gives a special urgency to security stuff.

And even if features don't come out regularly, security updates do. This is more of that.

Isn't this what bloom filters are for?
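It could be, at least for the known-bad-digest side: a filter shipped to clients would let most lookups be answered locally, with the network only consulted on a possible hit. A toy sketch in portable sh (nothing to do with Apple's actual implementation):

```shell
# Toy Bloom filter in portable sh: two salted cksum hashes per item,
# membership recorded as space-delimited slot numbers. "maybe" can
# false-positive but never false-negative, so "not listed" answers
# need no network round trip at all.
M=64                                   # filter size in slots
SET=" "                                # occupied slots, space-delimited

h() { printf '%s%s' "$2" "$1" | cksum | awk -v m="$M" '{print $1 % m}'; }

add() { SET="${SET}$(h "$1" a) $(h "$1" b) "; }

maybe() {
  case "$SET" in *" $(h "$1" a) "*)
    case "$SET" in *" $(h "$1" b) "*) return 0 ;; esac ;;
  esac
  return 1
}

add deadbeef                           # pretend this digest is blocklisted
maybe deadbeef && echo "maybe listed: do the slow network lookup"
maybe cafef00d || echo "definitely not listed: run it, skip the network"
```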

>Apple was just the very first to implement centralized binary blacklisting

No, AV vendors did it for decades. In a more efficient way though.

Not sure it’s more efficient given how sluggish most AV software used to make my machine...

Not as bad as Catalina

OCSP is Online Certificate Status Protocol, generally used for checking the revocation status of certificates. You used to be able to turn it off in keychain access, but that ability went away in recent macOS releases.

Ah, Apple. When you can no longer innovate, just start removing features and call it simplicity...

Another way to look at it is that Apple is making it harder to run the system in an insecure fashion. You may not agree with that decision, but I certainly appreciate how Apple is looking out for the safety and security of the user.

Tangent: as much as some developers hate that the only way to distribute apps for the iPhone is through the App Store, as a user I consider that walled garden of apps to be a real security benefit. When John Gruber says “If you must use Zoom or simply want to use it, I highly recommend using it on your iPad and iPhone only. The iOS version is sandboxed and reviewed by the App Store.” There’s a reason why he can say things like that and it’s because Apple draws a hard line in the sand that not everyone will be happy with.

> Another way to look at it is that Apple is making it harder to run the system in an insecure fashion. You may not agree with that decision, but I certainly appreciate how Apple is looking out for the safety and security of the user.

"Those who give up freedom for security deserve neither."

(Yes, I know the original intent was slightly different, but that old saying has gotten a lot more vivid recently, as companies are increasingly using the excuse of security to further their own interests and control over their users.)

The ability to control exactly what millions of people can or cannot run on "their" computers is an authoritarian wet dream. People may think Apple's interests align with theirs --- but that is not a certainty. How many times have you been stopped from doing what you wanted to because of Apple? It might not be a lot so far, but can you break free from that relationship when/if it does turn against you?

The quote isn't at all relevant to technical decisions though. Eg, there is enforcement that a program can't arbitrarily access any RAM it likes on the same machine. That is trading freedom for security and it is a good trade. And there isn't really an argument against gatekeeping software - users as a body don't have time to verify that the software they use is secure. I'd be shocked if the median web developer even reads up on all the CVEs for their preferred libraries. Gatekeepers are an overwhelmingly good idea for typical don't-care everyday users.

The issue is if it becomes practically impossible to move away from Apple to an alternative. Given that they have a pretty typical market share in absolute terms that doesn't seem like a risk right now. They don't even hold an absolute majority in what I assume is their strongest market, the US, let alone globally.

Of course it's relevant! Software is a form of expression. Apple controls what types of expression are allowed on your phone.

A developer made a game depicting bad practices at FoxConn. Apple removed it for "Objectionable Content"[1]. How is this inherently different from Apple saying you can't use your iPhone to read a certain book?

Apple's restrictions also make it easy for authoritarian governments to ban software they dislike: https://news.ycombinator.com/item?id=21210678

[1] https://www.theverge.com/2012/10/12/3495466/apple-bans-anoth...

It is identical, and if I considered my phone to be primarily a research platform I'd be really upset. I got really upset with YouTube mucking around curating what videos they allow on their platform because I want to choose my own videos.

But ultimately I own an iPhone because I need a GPS map, SIM card and web browser on the go. Apple doesn't exercise any creative control over those things. Apart from that they explicitly sell a highly curated platform. I expect them to make decisions I don't agree with; that is what curators do. That is the service they sell so I'm not going to complain.

If someone used that walled garden approach on my PC I'd be furious. On my phone, I give them hundreds of dollars for the privilege. If I were going to get upset about freedom and phones, which is reasonable, I have a loooong list of problems before I get to Apple's security model - starting with government interception of messages and moving down to having my name attached to my SIM card. Apple's activities don't really rate, and they have better incentives than Google.

PS. I'm not arguing against phones being scary. Look at the COVID tracking apps that some companies and governments are bringing out that might become mandatory one day. Or the way the US is known to use phone GPS to target drone strikes. Phones are terrifying. Apple's curating/censorship/what have you really doesn't rate on my threat model when dealing with a phone.

> If someone used that walled garden approach on my PC I'd be furious.

As this article shows, Apple is slowly moving in that direction for their PCs. They aren't going to be satisfied with locking down their phones only.

Are they really moving in that direction, though?

An App Store from which you can download software with confidence is a pretty sensible first step for most users.

Complementing that with a Notarization service for apps that can't live in the App Store, while still giving both users and developers confidence that the user is installing the "real" app, and not something malicious, seems like a pretty sensible way to protect most users outside the App Store.

And if all else fails, there are ways to allow running that un-Notarized, non-App Store app that you're sure you trust.

None of that seems like something that inherently means to take away your ability to run what you want on your PC, it just sounds like a common sense approach to giving your users confidence in what they run, and guiding them to do so safely by default, while allowing overrides as needed.

Are these ALSO things that Apple could use to lock down your PC completely?

Sure... but then, why bother with any of it if that was the intent?

They already have Mac App Store, and they already have the infrastructure to deal with a "whitelist only" approach, so why bother with this Notarization and Gatekeeper stuff at all?

Don't get me wrong, there's plenty of room to criticize Apple for their implementation. They are clearly figuring out some of this as they go, and trying to find a proper balance. That isn't easy, despite how many people make it out like it is.

Give the average user too many prompts or chances to override security, and they will do that, every time, without thinking it through.

On the other hand, bury the overrides too deeply, and risk making things miserable for the developers and power users who need to use your platform freely.

So far, I see only evidence that Apple is trying to find that balance, but no evidence that they intend to lock the entire platform down entirely.

Are they doing it perfectly? Clearly not. But I think if we're being honest, no other platform has either. I appreciate Apple's approach the most so far, but time will tell if they are able to figure this balance out or if another platform will at some point.

> They already have Mac App Store, and they already have the infrastructure to deal with a "whitelist only" approach, so why bother with this Notarization and Gatekeeper stuff at all?

Change management. For the same reason eBay had to backtrack changing their background color and then do it again, slowly.

That's certainly possible.

But as someone who has been using Macs on and off for about 10 years now, I've heard people shout that Apple was locking down Macs from the moment the App Store was created on iOS (and long before it came to MacOS). So far, that hasn't happened.

Is it possible this is the next step in a 10+ year plan to "boil the frog slowly"? Of course! I'm not sure how they would accomplish this without also losing the developers they need to keep both MacOS and iOS viable platforms for users, but I guess if they just don't care and want to lock everything down, this could certainly be one more step towards their long-term nefarious goal.

But it also still seems like a reasonable step towards making their platform more trusted and secure for the average user while continuing to give devs and power users control.

So far, I see no evidence for the former, and enough evidence for the latter, that I'm not too worried.

Last time I checked, they force you to use the Safari engine for your web browser on iOS. Also, having a curated app store doesn't mean they have to disallow any other means of installing software. It's even OK if they say: you installed other software, no support for you. But making it impossible is a money grab.

Not at all, you are always free to buy computers, phones and tablets from other vendors.

Don't go buy Apple and then cry in the corner that you aren't getting the right set of toys to play with.

I use Apple devices and fully support not having random apps uploading my stuff into the world.

Sure, you can buy whatever you want, you aren't living in a dictatorial country. Sadly enough, most people can't say this. Therefore it is important for you to fight decisions like this. If something doesn't exist, it cannot be abused by some regime.

I am going to say something very cynical now, if the reader doesn't like that, he should tune out now. But I guess Apple can't wait to have that special China deal. ^_^

Except Apple isn't a dictatorial country, and there are other computer vendors to choose from.

Apple isn't the Mafia, paying personal visits and giving advice to buy Apple computers, or else accidents happen.

Buying an Apple computer is a conscious decision.

I love how many people around here make their decisions, and then feel entitled to complain and point the finger at big corporations, as if these corporations are the only ones to blame and they, poor souls, were misled.

Multinationals are not countries, but they operate in multiple countries, and their actions can influence the people in those countries. If Apple makes it possible to stop certain software from being installed, then China can abuse that mechanism.

And I am entitled to complain about big corporations. That is the beauty of living in a free country, and even if I weren't free to complain about them, I still would.

I rather see them all burn today than tomorrow.

Buying a house and suddenly getting your water cut off because the county "doesn't feel like it" is also similarly a "conscious" decision, and similarly bites you only some time after you bought something.

You might say that's illegal, and I'd recommend thinking about why it has become that way. Things are deemed important to everyday life, and suddenly they aren't fair game.

Which fails again as an example, because legality is not the same thing.

It's can vs. can't, which is perfectly comparable; in both cases you can't know what you get until afterwards, which is not acceptable. When the freedom to use your own devices is in question, it needs to be addressed.

Shifting the blame onto the victims by saying they should have known the county can do that, is just sheltering yourself from the uncomfortable truth.

I don't want to feel like I'm being taken advantage of either, believe me. It's just better to fight back than let it roll over you.

When they force their proprietary standards on everyone else... https://news.ycombinator.com/item?id=23250831

Apple was the first major HEIC adopter, but it’s not really something proprietary they came up with: https://en.wikipedia.org/wiki/High_Efficiency_Image_File_For...

I agree. I'd take your point on gatekeepers being a good idea further.

Gatekeepers are a good idea for even experts. There's a reason it's still in your best interest to use battle tested crypto libraries instead of writing your own, even if you're a security expert. The reason stands that it's possible for experts to make mistakes, which is why auditing is so important.

Now for this to hold, we need to assume Apple has done a good job with their notarization system, and that it's regularly audited to ensure it's not causing too many issues.

In this case, I trust Apple isn't doing these things to make developers life harder. They're doing it because it's incredibly difficult to make something both ergonomic for experts (developers) and secure/safe for non-experts (average end-users), and they would rather ship something less-than-perfect for developers if it's going to help non-developers.

So keep a Linux box if you want. Don't shit on people for using a mac.

I can use macOS, Windows 10, and any Linux distribution I want without having to pick one. That's freedom. I have choices. I choose all of the above in my personal setup. I'll fight to keep my free software but, at the same time, you can pry Logic on the Mac from my cold, dead hands. I've been using it for 15 years and I am not going to stop now. Use the best/preferred tool for the job you have to do.

I expelled Apple from my life 5 years ago and couldn't be happier. Before that, I'd been using their stuff for longer than you. I was quite close to the company for a time, covering them as a journalist full time. I have 3 Linux boxes and a Windows box. I shit on Apple from great height. Their entire ethos has been lost, and they don't make anything easier. My folks continue to use them, and my father's business life has been nearly ruined by their CONSTANT updating of the OS and ending of support. He's almost 80, he's not going to learn anything new, but he hit one button accidentally when it prompted him, and now he's been updated to god knows what newer-yet-still-unsupported version of their OS and his email client stopped working and his legitimately paid-for iTunes music stopped working. Apple has not only contempt for its users, it has contempt for its developers and fans. It treats them all like morons.

I thought this was computing for the masses.

The original quote from Franklin was about liberty, not freedom. A subtle but vitally important distinction, as freedom requires security where liberty does not. If you sacrifice freedom for security you still at least have security, as in a despotism, but if you sacrifice security for freedom you have neither. Conversely, if you sacrifice liberty for security you have less liberty without any increase in security, resulting in a net loss.

This is perhaps, strangely enough, the most contentious comment I have placed on HN. Last night when the comment was fresh it was quickly up voted at least 7 times. This morning I awoke to the comment down voted back to its original 1 karma. I am unclear as to how this comment is so polarized.

Here is the Franklin quote (I encourage you to read the whole article): https://www.washingtonpost.com/news/volokh-conspiracy/wp/201...

I always thought the two words are synonyms. (That belief somehow survived decades of philosophical reading, media, and more than a few moral/political philosophy courses.) Here in Australia, liberty sounds like a USA word. We talk of civil liberties etc, but not liberty on its own like that. That sounds 18th C and/or estadounidense.

Your distinction sounds like (what I learnt as) Berlin's negative and positive liberty:

"Negative liberty is the absence of obstacles, barriers or constraints. One has negative liberty to the extent that actions are available to one in this negative sense. Positive liberty is the possibility of acting — or the fact of acting — in such a way as to take control of one's life and realize one's fundamental purposes. While negative liberty is usually attributed to individual agents, positive liberty is sometimes attributed to collectivities, or to individuals considered primarily as members of given collectivities."

"The idea of distinguishing between a negative and a positive sense of the term ‘liberty’ goes back at least to Kant, and was examined and defended in depth by Isaiah Berlin in the 1950s and ’60s."


That article goes on:

"Many authors prefer to talk of positive and negative freedom. This is only a difference of style, and the terms ‘liberty’ and ‘freedom’ are normally used interchangeably by political and social philosophers. Although some attempts have been made to distinguish between liberty and freedom (Pitkin 1988; Williams 2001; Dworkin 2011), generally speaking these have not caught on."

Ah that's what I thought!

Also, referring to your other comment, if a "despot can do whatever he wants to you or to your family", like disappear you in the night, and it's not a loss of security, I'm not sure what you mean by 'security'.

In despotism, you do not have security either - the despot can do whatever he wants to you or to your family.

That is a loss of freedom, not security. Compare that to living entirely on your own in the wilderness where you will enjoy maximal freedom with no security from people or nature or starvation.

That distinction is why, in history, non-civilized people found civilization abhorrent and why other people would choose to live under a despot as opposed to living on their own. In the ancient world people were not friendly to the idea of abandoning freedoms for class distinctions, but once they had it they were not willing to sacrifice personal security or quality-of-life increases for the risk of death and starvation.

That is why people claim freedom isn't free: many people, even now, are frequently ready to abandon freedoms for increased security, as opposed to making the extra effort required to increase both.

That’s not close to the original quote. And it was just Ben Franklin politicking, not the word of god.

No one cares, it's the concept that matters. This is on the same tier as saying "haha hey buddy looks like you typed 'there' instead of 'their' haha #rekt".

> No one cares, it's the concept that matters. This is on the same tier as saying "haha hey buddy looks like you typed 'there' instead of 'their' haha #rekt".

While the content / concept is the main point, facts matter. Even if it is ancillary to the intended message. Why suffer misinformation no matter how small?

Another way to look at it is that Apple is moving towards a future where all software for the mac must be purchased from the app store.

Bubye Apple, my next machine will likely be a Dell Ubuntu.

Yeah, this is the future I've been foreseeing for years. Every new OS update just ever so slightly decreases your ability to control what software is on your device, and how you can use it.

For example, you used to be able to back up your purchased iOS apps to your computer, and restore them from your computer. In one iOS update (9 IIRC?), they removed the ability to back up the apps from your phone. In a later iOS/iTunes update, they removed the ability to restore backed up apps from your computer, making your existing backed-up apps useless, if you still had them.

Now, the only way to keep your software on your iPhone indefinitely is to never delete it, and never reformat your phone. Oh, and never update iOS, because they will break backwards compatibility with apps you already have. For any app that is no longer supported by the developer, you're just out of luck (and I have purchased MANY such apps, being an iPhone user since 2009).

> making your existing backed-up apps useless, if you still had them.

This isn't true. You can still install existing IPAs you have saved in the past by syncing it with Finder. You can also just AirDrop an IPA to your iOS device to install it.

> Now, the only way to keep your software on your iPhone indefinitely is to never delete it, and never reformat your phone.

You can still back up IPA installers by downloading them with Apple Configurator 2. https://ios.gadgethacks.com/how-to/download-ipa-files-for-io...

I can't seem to find documentation about AirDrop installation of .ipa backups I have. Also that Apple Configurator 2 process appears to force me to update the apps before they are backed up (I have automatic updates turned off because of how often app updates tend to be regressions rather than improvements)... Also, how do I "sync it with Finder"? (what is "it"?)

If I may ask, why do you still persist with apple products then? Sounds like masochism from here...

I have no intention of buying more at this point. The last was the iPhone 8 in 2017. No clue yet what I'll do in the future for a smartphone, because I don't see Android as an option at all. Hopefully this iPhone 8 lasts forever :)

Personally I find smartphones less and less useful. I use them mostly to stay in touch with people or to read articles online, and I do all my work from a laptop anyway. I used to buy flagship Android phones but I realized that it's wasted money. Now I have a 200€ Samsung phone, it works fine, yesterday it fell and the screen glass broke a bit, I couldn't care less.

If I keep going at this rate, I think I will quit smartphones within a few years.

Get a server or some hosting, load it with whatever you need - mail, web, cloudy things, media, communications etc - and use a portable terminal to access it when on the move. That portable terminal can be a phone with a browser or some future device which is more tailored to this type of application. With the current generation of SoC, Wasm and a capable browser (Firefox Nightly Preview is shaping up nicely) this setup is a viable replacement for most 'apps'. One of the advantages of such a setup is that those 'apps' do no get to track your every move - that is, as long as that capability is not built into the browser at some stage (persistent web workers etc).

iPhone SE is iPhone 8 on steroids.

This is sort of an ecosystem pattern.

The first Xbox was offline; subsequent Xboxes were more intrusive.

The first Windows PCs were offline; now they have become spy ("telemetry") machines.

Apple has reined itself in (a bit), but they just as stubbornly put business decisions above user wants.

Mine is already about to be a Linux workstation since, in addition to all the developer hostility the past few years, Catalina essentially killed off Mac gaming (something like 75% of Mac games are 32 bit? or something?). Prior to that it was merely a joke, but it was nice to have an occasional game to play. Now? Nope, Apple Store and recently updated game code or GTFO

Dell Ubuntu is not a good choice, they don’t provide proper drivers and their support has zero knowledge about Linux

Ubuntu phones home a lot too.

motd-news, apport, snaps, whoopsie, kerneloops, ubuntu-report, unattended-upgrades, ...

> Dell Ubuntu

A casual Manjaro or Arch rolling distro with the AUR is a better choice.

The problem is that there is more than one market here. There is a general market where people love the vendor looking after their security and doing things for them, and there is a pro/hacker market where people want to control things themselves and don't want a lot of this stuff.

This. Yes the option of a walled garden is a great thing and I wouldn't recommend anything but an Apple device to my non-technical relatives. But if Apple also wants to make the $$ that comes from selling "pro" gear, they need to stop relentlessly consumerizing and turning OS X into iOS. I don't think they realize the level of ill will they are engendering in the developer/pro market.

Perhaps it's time for a "Pro" and "Home" Mac OS.

I've been doing software development on macOS/OS X for quite some time now and the consumerization aspects don't bother me. I install almost everything I need via Homebrew, from software libraries to desktops apps, and the fact that there's an App Store isn't particularly relevant (although I do use it for consumer apps now and then).

I'm trying to think of how macOS is so different from 10/20 years ago. What's missing? What can I not do now? Maybe my brain has just been consumerized and I forgot something important.

I was going to switch to Linux 10 years ago when people were talking about the iOSification of OS X back then, but that never happened.

Do you write much system-level software? I feel like Apple's changes don't affect the Xcode crowd much - but under the hood, things are slowly getting worse for command-line developers.

How about when Apple removed /usr/include in its entirety from Mojave? Or when they decided to make the root filesystem read-only? Or when they removed the ability to permanently disable the "only run verified apps" option? Or when they even made that the default in the first place?

How about when they stopped supporting or updating the MacOS X11 server, which doesn't have proper GPU support and probably never will?

How about when Apple replaced gcc with a thin wrapper around clang, so that /usr/bin/gcc generates identical code to /usr/bin/clang? Or how they froze all GNU tools (including bash) at the last-released GPLv2 version, just so that they could retain the option to lock you out from modifying your OS install?

How about the fact that Apple has officially deprecated Python on MacOS?

How about the increasingly slow filesystem access? Not a big deal for app users, but terrible for shell-scripts and system software kind of generally.

How about when Apple removed the ESC key from two generations of Macbook Pro? And also how they replaced the function keys with a touchbar?

Did you know that Apple will soon be using zsh for /bin/sh? Without much regard to how many shell scripts have a #!/bin/sh hashbang and some bashisms in them? You can call those scripts buggy or poorly designed if you want - but they're plentiful and widespread, and will be broken so that Apple can steer clear of GPLv3 code. All so that they can block you from modifying your OS installation.
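As a quick, hedged illustration of the bashism problem, here is a one-liner you can run to see whether a given /bin/sh accepts the non-POSIX `[[ ]]` construct; which branch you hit depends on what your /bin/sh actually is:

```shell
# Sketch: probe whether this system's /bin/sh tolerates "[[ ]]",
# a bash/zsh extension that is not part of POSIX sh.
# (dash, the /bin/sh on Debian/Ubuntu, rejects it; bash and zsh accept it.)
if /bin/sh -c '[[ 1 == 1 ]]' 2>/dev/null; then
  echo "/bin/sh here accepts this bashism"
else
  echo "/bin/sh here is stricter; scripts relying on bashisms will break"
fi
```

A script that happens to work under one /bin/sh can fail under another, which is exactly the kind of silent breakage at issue here.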

MacOS was a Unix nerd's dream 10 years ago. It was fast, reliable, and it had a good terminal paired with amazing hardware and software that "just worked". Over time, everything that attracted me to the platform has slowly eroded. I stopped buying or recommending Macbooks in 2016, and only use one now because my employer is an Apple shop.

> Did you know that Apple will soon be using zsh for /bin/sh? Without much regard to how many shell scripts have a #!/bin/sh hashbang and some bashisms in them? You can call those scripts buggy or poorly designed if you want - but they're plentiful and widespread, and will be broken so that Apple can steer clear of GPLv3 code. All so that they can block you from modifying your OS installation. MacOS was a Unix nerd's dream 10 years ago

Yep. Sorry. I’m struggling to connect “Unix nerd” to “thinks /bin/sh and /bin/bash are the same”, especially as that’s very much a Linux distro created problem, and (the clue’s in the name) Linux Is Not UNix.

Interesting analysis, thanks for sharing.

Command-line apps installed via Homebrew don't go through Gatekeeper/notarization though.

I don't know why ppl seem to think they do...

What am I missing? I'm on the latest Catalina and, for me, anything installed via home-brew / scripts/c++/python/rust I write and run/compile myself, just run.

I also don't see any time difference between my apps on Linux and macOS.

I use iTerm2 with Full Disk Access, and it's specified as a dev tool in Privacy.

What am I missing that's a big problem here?

Maybe you're failing to foresee the future step in Apple's strategy, which will make it harder, if not impossible, to run something like Homebrew? As far as I know there is no such thing on (non-jailbroken) iOS. Apple seems to be steering macOS in that direction: a curated platform instead of a general-purpose computing device.

You realize Apple employs engineers right? The same engineers who use homebrew for their own job? If they go down that route, it's likely they'll need to support something like homebrew or similar.

Honestly, it wouldn't surprise me if it just meant that distributing a package via Homebrew requires signing the package, much like any other package manager. Yes, you can get something similar with checksums, but they don't provide any way to authenticate the distributor.

Is it friction? Hell yeah. A pain? Yes. Is it purely bad? No. Does it have positives? Some. It's not black and white.

If they do that, I am gone. Parent mentioned that they feared that though 10 years ago and it never really happened.

Apple seems to be trying to walk a line with MacOS and keep all of its user bases happy, but it's a hard line to walk.

Agree with you completely.

I would move to Arch or Debian.

That said, how can they lock it down? You need macOS open to develop apps for their other devices.

They can’t get rid of homebrew et al, as they’d lose their iOS developers! Don’t you agree?

The fact they explicitly have a “Dev tool” category you can use here says a lot about their approach being open for power users.

By writing system level macOS software, although I think you mean old style POSIX UNIX stuff.

Here is the thing: even with NeXTSTEP, UNIX support was never the main attraction; NeXTSTEP was used for its Objective-C tooling and frameworks, like Renderman and Improv.

The UNIX stuff was just a shortcut to a quick ramp-up for their OS development and, just like Microsoft with Windows NT 3.1, a tick in the box when selling to the government.

Their famous commercial against Sun hardly touches on UNIX-like development.


You aren't going to see a CLI on that NeXTSTEP screen.

Just like the SDK is all about Objective-C related stuff, even the device drivers were written in Objective-C.


The only fools here are those that keep giving their money to corporations instead of supporting Linux OEMs, as Microsoft cleverly discovered.

In fact, had A/UX not been discontinued, or had Microsoft seriously supported their POSIX personality, Linux would never have taken off, as the same crowd would be happily using those systems.

I feel everything you say, and still don't see a better alternative. They're just too good at the hardware and integration.

Methinks you don't grok how Apple uses the term "Pro"

It comes in Space Gray?


No - it's for people who want to Get Stuff Done™ and not worry about all the crap under the hood.

Why can’t they have their walled garden App Store and also allow me to install other app stores?

It’s an authoritarian usurpation of the spirit of property rights. I should be able to decide for myself what software to run on my hardware, Apple HQ’s opinion should be irrelevant.

Why would any developer even want to release their app in walled garden when they can do whatever they want by releasing elsewhere?

Analogue question in the linux world: Why would anyone get something in the debian package repository, when they can just release their package on their website? Because it gets added support, a bigger reach and a safer and easier installation for users?

There are special people: maintainers. They collect software from the world and package it for Debian. They are often different from the original developers. The original developers might not even know that their software was repackaged. This is possible because of free-software licenses. Apple couldn't do that even if they wanted to: proprietary software typically does not allow redistribution.

Good point, it wouldn't work that way with proprietary software.

Usually on the walled garden they get paid.

On macOS, they do. On a phone, if you want to side load, there’s the option of Android.

Wouldn't a sandboxed Zoom downloaded directly from them be equally secure?

> Wouldn't a sandboxed Zoom downloaded directly from them be equally secure?

More relevantly, wouldn't a sandboxed Zoom downloaded from Apple's store be equally secure even if you could install different apps from developers you trust more outside of the store?

Apple’s rejected a huge number of App updates for security reasons. It’s not a huge benefit, but it does exist.

And also allowed a jailbreak app in the iOS App Store. Yes, it only happened once (that I know of), but it still shows you can't really be oblivious to their practices.

So out of the millions of apps on the App Store, they slipped up once? Sounds like a really good success rate.

That's just the one jailbreak that ended up in the news. There have been many other bad things that were pulled.

>been many other of bad things that have been pulled

A jailbreak app making it to the app store being bad, and "apple's walled gardens are bad", are fundamentally incompatible.

Apple can be bad at doing what they claim to be doing and also be doing the wrong things. The nice way this works is that Apple curates a bunch of software they think is safe, and I can run whatever I want on my device. The worst of both worlds is that I can't run what I want, but sometimes malicious things get through Apple's checks.

Jailbreak apps are bad for Apple. Walled gardens are bad for users. It's not complicated.

I, a user, am extremely appreciative of Apple's walled garden. I've never once had to worry that the app I'm downloading is crammed full of malware because I trust that Apple's processes are robust and will work well in 99.999% of all circumstances.

A walled garden is not the same as a curated app store. You could have the same benefit if apple would allow non-app-store apps to be installed after flipping a switch, tethering with a Mac or some other voodoo.

Apple does give you the ability to install non-app-store apps (some without tethering), e.g. sideloading or enterprise certificates, although I agree it's not as easy as flipping a switch.

They should also provide a way to downgrade iOS via Xcode for those with a dev account, but that's another story.

People who are precious about security never obtain apps that aren't generally approved and vetted by professionals anyway. Forcing this decision onto everybody is just going to push the people who want a free and open platform into places you don't want them. The benefits of openness don't go away just because Apple said so.

We get Zoom; we used to install Java (remember when it was bundled with crapware in the hope you'd forget to uncheck a checkbox?). Companies routinely strong-armed users into getting malware. And I doubt popular game mods are all that rigorously reviewed by security experts, but they are quite popular with tech people.

App Store policies are a poor replacement for collective action, of course, but let's not pretend we can just become immune to hostile software by sheer force of will.

I care about security, but that doesn't preclude me from jailbreaking my iphone and running dozens of tweaks that haven't been "vetted by professionals", along with sideloaded apps that haven't been through Apple's vetting process either.

My MacBook runs homebrew which currently lists 84 packages installed plus their dependencies, very few of which will have been professionally vetted, and of the 127 apps in my /Applications folder only a third of them came from the Mac App Store, and I would estimate that a quarter of the others aren't even signed with a paid developer certificate.

I want the apps that I get from Apple directly to be safe. I want to know that when I put my faith in the App Store that I'm not lulling myself into a false sense of security. I want my parents and girlfriend, who are not technical people, to have that same sense of security without them having to learn entire programming languages to vet source code themselves.

The benefits of closed systems don't go away just because you say so.

Yes, but would a typical user know or care if the app they downloaded from a web site was sandboxed and would otherwise have been approved by the App Store if it was submitted there? And if not, how could someone like John Gruber make that claim of safety on anything other than iPhone and iPad? Taking the Zoom example on a parent thread above, look at what happens when you’re installing a Zoom client on the Mac without the strict enforcements of the iOS App Store: https://news.ycombinator.com/item?id=22736608

This just doesn't seem like a terribly difficult problem. Web browsers have figured it out. Any webpage that isn't served over SSL says "Not Secure" right at the top.

I can think of a dozen ways which the OS could prominently display "Not Secure" for non-sandboxed applications, in a way that wouldn't preclude or hinder users from using such applications if they really wanted to.

I wonder what's a decent way to do this with a CLI app

I don’t really understand this argument. Apple has long been heralded for its safety and security. It’s why in three decades of owning Macs we’ve never installed antivirus software.

What is the point of all this security these days? What are they protecting us from?

Who is this Gruber person you quote and why is he relevant here?

He's the person who made the markdown format, which you've used as your username.

Other than that, he's mostly known for writing and talking about Apple.

> He's the person who made the markdown format, which you've used as your username.

That's news to me. My username is my name plus down (I use up for work-related accounts, and down for leisure).

> Other than that, he's mostly known for writing and talking about Apple.

Ahh, ok thanks.

if gruber wants to dictate what i run on my computer maybe he can pay for my computer instead of me.

Honestly I'm trying to think of a reason you would WANT to disable OCSP, I'm having enough problems thinking of more than 2 developers I know who can actually articulate how it works enough to evaluate this. Not that it's complicated—it's just mostly invisible.

Even when OCSP is a problem, generally you're more worried about issuing a new certificate than an immediate workaround. What are you going to do, ask all your customers to go into keychain access to work around your problem?

This behavior of slowing down appears to be because Apple is apparently making these HTTPS connections synchronously (probably unnecessarily), and you'd only be potentially harming yourself by disabling OCSP.

Though, I am often frustrated FLOSS desktops and Windows don't allow the behavior I want—maybe this is just cultural.

How about the fact that it's totally ineffective? OCSP is pointless if you "soft fail" when the OCSP server can't be reached. [1]

This is why Chrome disabled OCSP by default all the way back in the 2012-2013 era. Not to mention the performance cost of making all HTTPS connections wait for an OCSP lookup. [2]

[1]: https://www.imperialviolet.org/2012/02/05/crlsets.html

[2]: https://arstechnica.com/information-technology/2012/02/googl...

That's why there's OCSP stapling and OCSP must staple. Ever seen an nginx server fail HTTPS connection exactly once after rotating the certificate? That's nginx lazily fetching the OCSP response from upstream for stapling purposes.

Notarization has a similar "stapling" workflow as well.
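For context, the nginx side of that behavior is only a few directives (a sketch; the chain path below is a placeholder, and these directives only take effect in a TLS-enabled server context):

```nginx
# Sketch: OCSP stapling in nginx (placeholder paths).
ssl_stapling on;             # fetch and cache the OCSP response, staple it into handshakes
ssl_stapling_verify on;      # verify the OCSP response before stapling it
ssl_trusted_certificate /etc/ssl/chain.pem;   # issuer chain used for that verification
resolver 1.1.1.1;            # nginx needs a resolver to reach the CA's OCSP responder
```

Because the response is fetched lazily, the very first handshake after a certificate rotation can go out without a staple, which matches the hiccup described above.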

Well, security starts from the user. If you're not mindful of what websites you visit, or what files/apps you download and run, there's no OCSP or anything else there to save you.

OCSP enabled or not, you're still one website click away from being pwned to oblivion, giving full control to the hacker – which, of course, is inevitable to an extent, since bugs always find their way into software.

So why not make it easy to disable?

Well, are you going to manually look up certificate revocations yourself? This necessarily requires a network lookup—you can't just glance at the certificate. What's the benefit of disabling this functionality that actively alerts you to revocations?

> Well, security starts from the user. If you're not mindful of what websites you visit, or what files/apps you download and run, there's no OCSP or anything else there to save you.

Sure, but we're discussing good-faith security here. Presumably if people complain about a missing feature they can envision using it. The scenario here is not visiting a shady website and doing something stupid; the scenario here is something like a man-in-the-middle attack using a revoked certificate, which would by definition be difficult for the end-user to detect.

> So why not make it easy to disable?

Because then people would disable it for no discernable good effect.

I mean let me be clear, if you're a security researcher you can just modify your own HTTP stack, run a VM, control the hardware, whatever. This isn't a blocker to investigating HTTPS reactions sans OCSP—this is about denying secure connections when they've publicly revoked the cert used to sign the connection. The only reason this is even considered a discrete feature is that most people have never written an OCSP request in order to then trust an HTTPS server—you're just opening yourself up to be misled without even realizing this (and this goes for most of my very network-stack-aware coworkers).

If you're in a browser, you want the browser to be using best practice security, which necessarily includes OCSP. If you know what you're doing this is trivial to bypass.

Feature-removal has been the most aggravating part of my Mac life for the past several years. Admittedly I tend to use unusual features, but it's just another PITA when they go away.

Not sure they have removed anything; rather, they've added something.

What happens if you edit /private/etc/hosts to point ocsp.apple.com to 0.0.0.0 and flush the DNS cache?

This seems like an interesting line of inquiry.

AIUI doing what you said would permit the network request to proceed, and it would fail because nothing is listening on port 80. [1] We already know that the phone-home bails out when there's no network connection, so perhaps that code also bails out on connection failure?

Alternatively, is there some way to make DNS lookup itself fail for ocsp.apple.com?

Last resort, if we know how to fake the response, running a dummy server listening on localhost would be faster than allowing the request to go over the internet.

[1] Empirically, `curl 0.0.0.0` yields a connection failure. I know that 0.0.0.0 is used in a listening context to mean "listen on all interfaces", but tbh I don't really know what it means in a sending context. Maybe someone can educate me?

Sending to 0.0.0.0 will fail immediately. This differs from sending to 127.0.0.1, which may connect to a server on the local machine.

> Sending to 0.0.0.0 will fail immediately.

Right, and as far as we know that exception might be caught in the same way as "your computer doesn't have any network connection at all" is caught. Or would those be likely to generate the same exception? Either way, there's a chance that it would result in exec gracefully and quickly skipping the blocking phone-home, isn't there?

0.0.0.0 is non-routable and generally only valid as a src, not a dest.

I think it is fairly likely that your system would not work at all.
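
Tying the hosts-file and dummy-server ideas above together, a sketch (untested on my part; the hosts entry is a config fragment, and the flush/listen commands are macOS-specific and need root):

```
# /etc/hosts: send OCSP traffic to the local machine
127.0.0.1 ocsp.apple.com
```

Then flush the cache with `sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder`, and run something like `sudo python3 -m http.server 80 --bind 127.0.0.1` as the localhost dummy server. Note this answers with HTTP 404s rather than valid OCSP responses, so the question above (whether the failure path is handled as gracefully as the no-network path) still applies.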

I believe it's just Base64 encoded DER information, based on the code that seems to be similar: https://github.com/apple-open-source-mirror/Security/blob/70...

Yes, that base64 decodes to:

  OCSP Request Data:
    Version: 1 (0x0)
    Requestor List:
        Certificate ID:
          Hash Algorithm: sha1
          Issuer Name Hash: 3381D1EFDB68B085214D2EEFAF8C4A69643C2A6C
          Issuer Key Hash: 5717EDA2CFDC7C98A110E0FCBE872D2CF2E31754
          Serial Number: 7D86ED91E10A66C2
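
For anyone who wants to reproduce that decode: assuming `openssl` is available, something like this works (the blob itself is elided here, so `$B64` is a placeholder for the base64 from the URL path):

```shell
# decode_ocsp_req: base64-decode an OCSP request blob (like the one in the
# ocsp.apple.com URL path) and pretty-print it with OpenSSL.
decode_ocsp_req() {
  printf '%s' "$1" | base64 -d > /tmp/req.der
  openssl ocsp -reqin /tmp/req.der -req_text
}

# Usage (blob elided): decode_ocsp_req "$B64"
```

If the blob came straight out of a URL, it may need %-unescaping first.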

I can't edit anymore but it seems like the OCSP link could potentially be a red herring just checking the cert for the next request to https://api.apple-cloudkit.com/. It's worth looking further!

I'm surprised nobody mentioned that Windows Defender does something very similar (checking for never-seen-before binaries at runtime, uploading them to Microsoft servers, then running them there) : https://news.ycombinator.com/item?id=21180019

God, this shit makes me laugh. Why are they doing this?

But from Edit2: Your hash is some sort of base64

     let str = 

Then we see weird gaps in the alphabet used; not actually so weird, because not every character will be used in every string:

     Prelude Data.List> map head $  group $ sort $ str
If we fill these in, then:

      Prelude Data.List> let xs = "+0123456789=ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz"
      Prelude Data.List> length xs
So base64 with some non-standard symbols. I don't know what standard base64 is supposed to look like, to be honest, so perhaps it is standard base64. The = is definitely padding.

It decodes cleanly as base64.
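
To pin down the alphabet question above: standard base64 uses '+' and '/', while the URL-safe variant (base64url) swaps in '-' and '_'; '=' is padding in both, so a single string containing both '+' and '_' would match neither variant exactly. A quick illustration in Python (my addition, just to show the two alphabets):

```python
import base64

# Bytes chosen to hit the top of the 6-bit index space, where the two
# alphabets differ: indices 62 and 63 map to '+'/'/' (standard) vs
# '-'/'_' (URL-safe).
data = b"\xfb\xef\xff"
print(base64.b64encode(data).decode())          # standard:  ++//
print(base64.urlsafe_b64encode(data).decode())  # URL-safe:  --__
```

Since the blob reportedly decodes cleanly with plain base64, the '_' in the collected alphabet may simply come from a different sample string than the '+'.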

Does this mean you can't run a custom shell script without an internet connection?

If the connection fails it goes ahead and grants permission.

This isn't specific to the article, but another place that can be interesting for looking at system activity on macOS is the Console.


Let's assume that sending network packets to verify the trustworthiness of commands is a good idea. (It may not be, but that's a different discussion.) If you have a modern OS with sufficient virtualization and containerization and indirection, you could optimistically let the commands run, and not commit the side effects of the command until you get back a result. Create little write-logged mini-branches of your file system, and only actually pause when someone else wants to inspect your side effects. By then, an asynchronous check should have gotten back to you.
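
As a toy sketch of that idea (entirely hypothetical, nothing like the real macOS machinery): copy the working directory into a scratch "branch", run the task there, and only swap the branch into place once the trust check returns. `check_trust` is a stub standing in for the asynchronous phone-home.

```python
import hashlib
import os
import shutil
import tempfile

def check_trust(executable_bytes: bytes) -> bool:
    # Stand-in for the asynchronous notarization lookup: a real check would
    # ask a remote service about this digest. This sketch accepts everything.
    digest = hashlib.sha256(executable_bytes).hexdigest()
    return True

def run_optimistically(workdir: str, task, executable_bytes: bytes) -> bool:
    """Run `task` against a copy of `workdir`; commit its side effects only
    if the trust check passes. The staging dir is created next to workdir so
    os.rename never has to cross filesystems."""
    staging = tempfile.mkdtemp(prefix="fs-branch-", dir=os.path.dirname(workdir))
    scratch = os.path.join(staging, "fs")
    try:
        shutil.copytree(workdir, scratch)   # "branch" the file system
        task(scratch)                       # side effects land in the branch
        if not check_trust(executable_bytes):
            return False                    # verdict bad: branch is discarded
        backup = workdir + ".old"           # verdict OK: commit by swapping
        os.rename(workdir, backup)
        os.rename(scratch, workdir)
        shutil.rmtree(backup)
        return True
    finally:
        shutil.rmtree(staging, ignore_errors=True)
```

A real implementation would of course use copy-on-write snapshots (APFS can do these) rather than a full copy, but the commit/discard shape is the same.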

Were you able to MITM the api.apple-cloudkit.com connection? I tried with MITMProxy but ran into a client error, which made me think they were doing cert pinning.

If you did get it to work could you paste the logs somewhere?

Yes but it looks like there is no actual session, at least for shell scripts that don't have an app bundle ID. There is just an HTTP CONNECT, TLS negotiation, then nothing.

> a degraded user experience, as the first time a user runs a new executable, Apple delays execution while waiting for a reply from their server.

The way to avoid this behavior is to staple the notarization ticket to your bundle (or dmg/pkg), i.e. "/usr/bin/stapler staple <path>." Otherwise, Gatekeeper will fetch the ticket and staple it for the user on the first run.

(I'm the author of xcnotary [1], a tool to make notarization way less painful, including uploading to Apple/polling for completion/stapling/troubleshooting various code signing issues.)

[1] https://github.com/akeru-inc/xcnotary

Xcode (the UI) is able to bypass GateKeeper checks for things it builds.

The "Developer Tool" pane in System Prefs, Security, Privacy is the same power. Drag anything into that list you'd like to grant the same privilege (such as xcodebuild). This is inherited by child processes as well.

The point of this is to avoid malware packing bits of Xcode with itself and silently compiling itself on the target machine, thus bypassing system security policy.

Reminds me of the AV exception folder our corporate IT created for developers. Soon absolutely everything developers needed or created was installed into that folder. Applications, IDEs, you name it.

Guilty as charged. I try to keep it to an absolute minimum, like the Docker data-dir and my IDE. With that I can at least use my machine.

Otherwise this macOS notarisation, along with possible CPU heating issues from left-side Thunderbolt usage and corporate AV scanning, makes my machine next to useless.

Putting Terminal (and your favorite text editor) in this category and in "Full Disk Access" will change your life.

How does "Full Disk Access" help?

You can browse Time Machine backup directory trees from the CLI again.

Yes, falling victim to ransomware is definitely lifechanging if you don’t have good backups.

That is a non-sequitur.

It's not; they are stating that if you bypass these security checks, you open the machine up to ransomware.

Better not turn it on at all, to be extra safe.

So since these permissions apply to process trees, what happens if you put launchd in there?

The computer will probably hang while it tries to solve the chicken-egg problem.

Isn't launchd Mac's ‘init’? I.e. run before anything else.

Yes, and that's the point — everything you run will theoretically inherit the permission from it.

Can you advise on how to make the "Developer Tool" pane in "System Prefs, Security, Privacy" appear if it is not present? Can't find a way: https://stackoverflow.com/questions/60176405/macos-catalina-...

Thanks for the link. Tried it, but that did not work.

GateKeeper only triggers the check for things downloaded from the internet. IOW, it checks if your binary has a quarantine flag attached via an extended attribute.

That is not correct starting with Catalina.

How do I get a "Developer Tool" pane in System Prefs? Do I have to install Xcode? I would really rather not.

This is life-changing. Thank you!

What did you notice?

> The way to avoid this behavior is to staple the notarization ticket to your bundle (or dmg/pkg)

Maybe in some cases, but the article says "even if you write a one line shell script and run it in a terminal, you will get a delay!"

Shell scripts don't come in bundles. I don't think this kind of stapling is possible for them? I don't think it'd be reasonable to expect users to do this anyway.

The Gatekeeper behavior is specific to running things from Finder (not Terminal), and only if you downloaded it via a browser that sets the com.apple.quarantine xattr.

Two posts from Apple dev support (Cmd+F "eskimo") describe this in more detail.



I recently learned that `xattr -cr path/to/my.app` solves the “this App is damaged would you like to move it to the trash” you get when you copy an app from one Mac to another.

That might be the Windows-iest feature of OSX I've ever heard of.

It seems macOS is going downhill fast these days.

No, it’s just that they’re becoming more popular. When you become a popular desktop OS, governments and militaries want to start using it which comes with some strange requirements. It also means that you can’t rely on “obscurity” to provide any sort of security, where before you could overlook some things.

Can you cite any sources for your claim that these things are being implemented to satisfy government/military requirements?


I don’t know why the grand OP is downvoted. DoD requirements literally require a timeout setting for screensavers to begin locking. This has caught systems which have a race condition where you can move your mouse quickly and gain desktop access before the lock engages.

The long-term effects come from the changes to the development security model required to remain productive and profitable (it took MSFT a few OOB hotfixes and service packs to fix the example above; look at when GNOME, KDE, xscreensaver, etc. introduced that feature).

> This has caught systems which have a race condition where you can move your mouse quickly and gain desktop access before it locks.

I fail to see how this is a race condition rather than how a screensaver is supposed to work?

Because it’s not; that’s why I pointed to the xscreensaver feature implementation. Lock time is separate from screensaver activation time, which is separate from energy-saving activation time.

What defines when a locking screen saver is “locked”? 10m? Or 10m1s? You are making assumptions and that is what DISA spells out. Which forces the OS design to change in subtle ways. Like xattrs on files as great grand op was alluding to.

Does that provide clarity into how development security models evolve over the lifetime of an application?

What would that mean?

It would appear to mean it's a hacky, over-technical solution to a problem that shouldn't exist in the first place, as copying things from one computer to another should just work™. This is one place where macOS used to shine and seems to be increasingly falling behind in.

> The Gatekeeper behavior is specific to running things from Finder (not Terminal), and only if you downloaded it via a browser that sets the com.apple.quarantine xattr.

The article says the described problem isn't limited in this way:

> This is not just for files downloaded from the internet, nor is it only when you launch them via Finder, this is everything. So even if you write a one line shell script and run it in a terminal, you will get a delay!

If you read the comments of the article and do your own testing, you will find that reality appears to be more complicated than the article suggests. Users have shown using both timing and wireshark that the shell scripts do not appear to be triggering notarization checks.

Quinn The Eskimo at Apple's forums is a 10x support engineer, his posts have helped me fix dozens of problems.

Unless somebody took over his name he’s been at Apple for almost 25 years, and was already being interviewed as such 20 years ago (http://preserve.mactech.com/articles/mactech/Vol.16/16.06/Ju...)

His site (http://www.quinn.echidna.id.au/Quinn/WWW/) supports its claim “I'm not a great believer in web” :-)

It's interesting to see a time when Apple seemed to allow employees to have side projects…

He needs to be, because Apple Developer Technical Support is chronically understaffed.

This is the way things worked prior to Catalina but is no longer the case.

I mean, when I’m developing in a compiled language with the workflow edit code -> compile -> run (with forced stapling), changing it to edit code -> compile -> staple -> run doesn’t make it any less slow...

An update: flat out denying network access to syspolicyd using Little Snitch could cut down on the delay. (Yes, syspolicyd does send a network request to apple-cloudkit.com for every single new executable. Denying its access to apple-cloudkit.com only isn't sufficient either since it falls back to IP address directly.) Note that this might not be a great idea, and it still has nonzero cost — a network request has to be made and denied by Little Snitch.

Here's my benchmarking script:

  tmpfile=$(mktemp /tmp/test.XXXXXX)
  cat >$tmpfile <<EOF
  #!/bin/sh
  echo $RANDOM  # Use a different script each time in case it makes a difference.
  EOF
  chmod +x $tmpfile
  setopt xtrace
  time ( $tmpfile )
  time ( $tmpfile )
  unsetopt xtrace
  rm -f $tmpfile
If your local terminal emulator is immune thanks to "Developer Tools" access (interestingly, toggling it off doesn't bring back the delay for some reason), you should be able to reproduce the delay over ssh.

I can repro this locally as well. It's interesting that this is inconsistent with Apple's docs on when Gatekeeper should be firing, as running stuff locally without distributing/downloading is somewhat out of scope for notarization.

Reached out about this to Apple dev support, hope to get more insight.

> interestingly, toggling it off doesn't bring back the delay for some reason

Noticed the same; it should come back if you disable it and reboot.

Notarization/stapling/etc. is for distribution only, not generally part of your dev workflow.

But TFA and my personal experience do point to a noticeable delay after each recompile in dev workflows, and TFA claims this is due to notarization checks... So I guess I’m confused and you’re talking about something else?

How does macOS distinguish a dev workflow from a normal workflow?

When you use Xcode you have different compilation options.

I'm confused. Does macOS send the executable to Apple servers, or just the hash?

Just the hash.

The way to avoid this behavior is to not buy a machine from a company that actively hates its users.

In our company many of us have similar issues. I have always loved OSX but this time it is driving me crazy. I thought the issue was some sort of company antivirus/firewall, or it could even be a combination of that and this issue (maybe my VPN + path to the company firewall is what magnifies the issue in this post). The thing is that some commands take 1 second, others take 2 minutes or even more. Actually, some commands slow down the computer until they are finished (more likely, until they just decide to start).

For example, I can run "terraform apply" and it could take up to 5 minutes to start, leaving my computer almost unusable until it runs. The weird thing is that this only happens sometimes. In some cases, I restart the laptop and it starts working a little bit faster, but the issue comes back after some time.

It's already been a few months since I started running every command from a VM in a remote location, since I am tired of waiting for my commands to start.

I have a macbook air from 2013 which never had this issue.

Any easy fix that I could test? Disconnecting from the internet is not an option. Disabling SIP could be tried, but I think I already did and didn't seem to fix it, plus it is not a good idea for a company laptop.

Don't we have some sort of hosts file or firewall that we can use to block or fake the connectivity to apple servers?

IIRC the big thing that changed with 10.15 for CLI applications is that BSD-userland processes (i.e. ones that don't go through all the macOS Frameworks, but just call libc syscall wrappers like fopen(2)) now also deal with sandboxing, since the BSD syscall ABI is now reimplemented in terms of macOS security capabilities.

Certain BSD-syscall-ABI operations like fopen(2) and readdir(2) are now not-so-fast by default, because the OS has to do a synchronous check of the individual process binary's capabilities before letting the syscall through. But POSIX utilities were written to assume that these operations were fast-ish, and therefore they do tons of them, rather than doing any sort of batching.

That means that any CLI process that "walks" the filesystem is going to generate huge amounts of security-subsystem request traffic; which seemingly bottlenecks the security subsystem (OS-wide!); and so slows down the caller process and any other concurrent processes/threads that need capabilities-grants of their own.

To find a fix, it's important to understand the problem in fine detail. So: the CLI process has a set of process-local capabilities (kernel tokens/handles); and whenever it tries to do something, it first tries to use these. If it turns out none of those existing capabilities let it perform the operation, then it has to request that the kernel look at it, build a firewall-like "capabilities-rules program" from the collected information, and run it, to determine whether it should grant the process that capability. (This means that anything that already has capabilities granted from its code-signed capabilities manifest doesn't need to sit around waiting for this capabilities-ruleset program to be built and run. Unless the app's capabilities manifest didn't grant the specific capability it's trying to use.)

Unlike macOS app-bundles, regular (i.e. freshly-compiled) BSD-userland executable binaries don't have a capabilities manifest of their own, so they don't start with any process-local capabilities. (You can embed one into them, but the process has to be "capabilities-aware" to actually make use of it, so e.g. GNU coreutils from Homebrew isn't gonna be helped by this. Oh, and it won't kick in if the program isn't also code-signed, IIRC.)

But all processes inherit their capabilities from their runtime ancestors, so there's a simple fix, for the case of running CLI software interactively: grant your terminal emulator the capabilities you need through Preferences. In this case, the "Full Disk Access" capability. Then, since all your CLI processes have your terminal emulator as a runtime ancestor-process, they will all inherit that capability, and thus not need to spend time requesting it from the security subsystem.

Note that this doesn't apply to BSD-userland executable binaries which run as LaunchDaemons, since those aren't being spawned by your terminal emulator. Those either need to learn to use capabilities for real; or, at least, they need to get exec(2)ed by a shim binary that knows how.


tl;dr: I had this problem (slowness in numerous CLI apps, most obvious as `brew upgrade` suddenly taking forever) after upgrading to 10.15 as well. Granting "Full Disk Access" to iTerm fixed it for me.

> IIRC the big thing that changed with 10.15 for CLI applications is that BSD-userland processes (i.e. ones that don't go through all the macOS Frameworks, but just call libc syscall wrappers like fopen(2)) now also deal with sandboxing, since the BSD syscall ABI is now reimplemented in terms of macOS security capabilities.

Is this actually new in macOS 10.15? I seem to recall this being a thing ever since sandboxing was a thing, even all the way back to when it was called Seatbelt.

> That means that any CLI process that "walks" the filesystem is going to generate huge amounts of sandboxd traffic, which bottlenecks sandboxd and so slows down the caller process.

Is this not implemented in the kernel as an extension? I thought the checks went through MAC framework hooks. Doesn't sandboxd just log access violations when told to do so by the Sandbox kernel extension?

> Unlike macOS app-bundles, regular BSD-userland executable binaries don't have a capabilities manifest of their own, so they don't start with any process-local capabilities (with some interesting exceptions, that I think involve the binary being embedded in the directory-structure of a system framework, where the binary inherits its capabilities from the enclosing framework.)

I am fairly sure you can just embed a profile in a section of your app's binary and call the sandboxing Mach call with that…

> I seem to recall this being a thing ever since sandboxing was a thing, even all the way back to when it was called Seatbelt.

Maybe you're right; I'm not sure when they actually put the Seatbelt/TrustedBSD interpreter inline in the BSD syscall code-path. What I do know is that, until 10.15, Apple tried to ensure that the BSD-userland libc-syscall codepath retained mostly the same behavioral guarantees as it did before they updated it, in terms of worst-case time-complexities of syscalls. Not sure whether that was using a short-circuit path that went around Seatbelt or used a "mini-Seatbelt" fast path; or whether it was by hard-coding a pre-compiled MAC ruleset for libc calls that only relied upon the filesystem flag-bits, and so never had to do anything blocking during evaluation.

Certainly, even as of 10.12, BSD-userland processes weren't immune to being exec(2)-blocked by the quarantine xattr. But that may have been a partial implementation (e.g. exec(2) going through the MAC system while other syscalls don't.) It's kind of opaque from the outside. It was at least "more than nothing", though I'm not sure if it was "everything."

One thing that is clear is that, until 10.15, BSD processes with no capabilities manifest, still had the pretty much exactly the same default set of privileges that they had before capabilities, which means "almost everything" (and therefore they almost never needed to actually hit up the security system for more grants.) I guess all Apple really needed to have done in 10.15 to "break BSD", was to introduce some more capabilities, and then not put them in the default/implicit manifest.

I suppose what actually happened in 10.15 can be determined easily-enough from the OSS code that's been released. :)

> Is this not implemented in the kernel as an extension? // I am fairly sure you can just embed a profile in a section of your app's binary and call the sandboxing Mach call with that…

Yeah, sorry, you're right; updated my assertions above. I'm not a kernel dev; I've just picked up my understanding of this stuff from running head-first into it while trying to do other things!

It's a new behavior that doing 'find ~' will trigger a macOS (GUI) permissions warning dialog when `find` tries to access your photos directory, contacts file, etc.

That is new, but I believe the groundwork for that was mostly laid in 10.14 and is also mostly in the kernel.

Why would sandboxing be slower?

They are definitely doing something way too slow.

Apple replaced the very simple (i.e. function fits in a cache line; inputs fit in a single dword) BSD user/group/other filesystem privileges system, with a Lisp interpreter (or maybe compiler? not sure) executing some security DSL[1][2].

[1] https://wiki.mozilla.org/Sandbox/OS_X_Rule_Set

[2] https://reverse.put.as/wp-content/uploads/2011/09/Apple-Sand...

This capabilities-ruleset interpreter is what Apple uses the term "Gatekeeper" to refer to, mostly. It had already been put in charge of authorizing most Cocoa-land system interactions as of 10.12. But the capabilities-ruleset interpreter wasn't in the code-path for any BSD-land code until 10.15.

A capabilities-ruleset "program" for this interpreter can be very simple (and thus quick to execute), or arbitrarily complex. In terms of how complex a ruleset can get—i.e. what the interpreter's runtime allows it to take into consideration in a single grant evaluation—it knows about all the filesystem bitflags BSD used to, plus Gatekeeper-level grants (e.g. the things you do in Preferences; the "com.apple.quarantine" xattr), plus external system-level capabilities "hotfixes" (i.e. the same sort of "rewrite the deployed code after the fact" fixes that GPU makers deploy to make games run better, but for security instead of performance), plus some stuff (that I don't honestly know too much about) that can require it to contact Apple's servers during the ruleset execution. Much of this stuff can be cached between grant requests, but some of it will inevitably have to hit the disk (or the network!) for a lookup—in the middle of a blocking syscall.

I'm not sure whether it's the implementation (an in-kernel VM doesn't imply slowness; see eBPF) or the particular checks that need to be done, but either way, it adds up to a bit of synchronous slowness per call.

The real killer that makes you notice the problem, though, isn't the per-call overhead, but rather that the whole security subsystem seems to now have an OS-wide concurrency bottleneck in it for some reason. I'm not sure where it is, exactly; the "happy path" for capabilities-grants shouldn't make any Mach IPC calls at all. But it's bottlenecked anyway. (Maybe there's Mach IPC for audit logging?)

The security framework was pretty obviously structured to expect that applications would only send it O(1) capability-grant requests, since the idiomatic thing to do when writing a macOS Cocoa-userland application, if you want to work with a directory's contents, is to get a capability on a whole directory-tree from a folder-picker, and then use that capability to interact with the files.

Under such an approach, the sandbox system would never be asked too many questions at a time, and so you'd never really end up in a situation where the security system is going to be bottlenecked for very long. You'd mostly notice it as increased post-reboot startup latency, not as latency under regular steady-state use.

Under an approach where you've got many concurrent BSD "filesystem walker" processes, each spamming individual fopen(2)-triggered capability requests into the security system, though, a failure-to-scale becomes very apparent. Individual capabilities-grant requests go from taking 0.1s to resolve, to sometimes over 30s. (It's very much like the kind of process-inbox bottlenecks you see in Erlang, that are solved by using process pools or ETS tables.)

Either Apple should have rethought the IPC architecture of sandboxing in 10.15, but forgot/deprioritized this; or they should have made their BSD libc transparently handle "push down" of capabilities to descendent requests, but forgot/deprioritized that.

The Scheme interpreter only runs when compiling a sandbox. It's compiled into a simple non-Turing-complete bytecode, and that's what's consulted on every syscall. This has been the case since… 10.5 or something. It's always been on the path for BSD code. And Cocoa operations lower to BSD syscalls anyway. There's no system for them to get a "capability" for a directory tree; on the contrary, file descriptors ought to be able to serve as capabilities, but the Sandbox kext stupidly computes the full path for every file that's accessed before matching it against a bunch of regexes. This too has been the case as long as Sandbox has existed.

There is a bunch of new stuff in 10.15, mostly involving binary execs (and I don't understand all of it), but I'm pretty sure it doesn't match what you're describing.

> Lisp interpreter (or maybe compiler? not sure)

I believe it is actually a Scheme dialect, and I would be very surprised if it is not compiled to some internal representation upon load.

> This capabilities-ruleset interpreter is what Apple uses the term "Gatekeeper" to refer to, mostly.

I am fairly sure Gatekeeper is mostly just Quarantine and other bits that prevent the execution of random things you download from the internet.

In the Apple Sandbox Guide v1.0 [1], it mentions Dionysus Blazakis' paper [2] presented at Blackhat DC 2011.

In the latter, Apple's sandbox rule set (custom profiles) is called SBPL - Sandbox Profile Language - and is described as a "Scheme embedded domain specific language".

It's evaluated by libSandbox, which contains TinyScheme! [3]

From what I could understand, the Scheme interpreter generates a blob suitable for passing to the kernel.


[1] https://reverse.put.as/wp-content/uploads/2011/09/Apple-Sand...

[2] https://media.blackhat.com/bh-dc-11/Blazakis/BlackHat_DC_201...

[3] http://tinyscheme.sourceforge.net/home.html
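
For a feel of what SBPL actually looks like, here is a toy profile based on the syntax described in [1] and [2] (a config-fragment sketch; I haven't verified it against current macOS, and `sandbox-exec(1)` is deprecated, though still shipped):

```scheme
;; toy.sb: allow everything except network access
(version 1)
(allow default)
(deny network*)
```

Running a process under it with `sandbox-exec -f toy.sb /bin/ls` should work, while anything that opens a socket should get an operation-not-permitted style error.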

That sounds about right. I was doing some work in this area very recently, which found a couple of methods to bypass sandboxing entirely, but somewhat humorously the issues did not require me to have any understanding of how the lower levels of this worked ;)

Blazakis' paper is a fascinating investigative/exploratory work, delving deep into the sandbox mechanism. I learned more than I wanted to know!

Yeah, it's on my reading list :)

> Much of this stuff can be cached between grant requests, but some of it will inevitably have to hit the disk (or the network!) for a lookup—in the middle of a blocking syscall.

Running any kind of I/O during a capability check is a broken design.

There is no reason to hit the disk (it should be preloaded), much less the network (such a design will never work if offline).

A command like `terraform` shouldn't trigger the check because the quarantine system is bypassed altogether when you download and extract an archive. Maybe this is a red herring and your initial gut inkling is correct.

Try sampling the process as it starts; I doubt your issue is the one shown here.

> For example, I can run "terraform apply" and it could take up to 5 minutes to start, leaving my computer almost unusable until it runs.

On a clean Catalina install this does not happen. Does “terraform version” have the same delay? If not, check your remote configuration - maybe run with TF_LOG=trace. Terraform Cloud will definitely highlight the inherent performance problems of using a VPN.

It is worth noting that `terraform version` connects to HashiCorp’s own checkpoint service by default so this may not be the best test.

Maybe: docker run -i -t -v "$(pwd)":/project hashicorp/terraform:light apply /project/thing.tf (if your project's Terraform version is the latest)?
