Catalina is checking notarization of unsigned executables (lapcatsoftware.com)
350 points by robenkleene | 169 comments





Why don't companies come out and tell people what they're doing these days? Telemetry is getting to the point where people such as doctors and lawyers might be violating the law by using a modern computer. And people in the defense industry? Doesn't Apple employ thousands of forns? Who's audited their data systems and ensured that this stuff stays private?

Much easier and better to just stop using it all and move to a system like Linux or BSD. 99% of people do everything in a browser these days anyhow.


If only moving to Linux were an option for everyone.

The other day I tried for the 100th time to move to Linux. I installed a recent build of a maintained, popular distribution (no, it doesn't matter which one - I have tried them all), on hardware that is famous for its Linux support.

Everything worked for a day and a half, then the sound just fucking died. No input or output.

I get millions of people use Linux daily, and are happy with it -- I'm genuinely grateful that's a thing. I would love to also use Linux, but I really don't have the time to diagnose why it broke yet again.

Any suggestions for people stuck on macOS? I guess I could block all Apple domains in my DNS resolver? Other than app updates, I can't think of anything that would stop working. That still sounds less painful than trying to deal with Linux's atrocious UX.


I had sound drivers die on me on famous-brand laptops under Windows.

I had OSX lock up or lose any display on MBPs with NVidia chips.

On my wife's old Windows desktop I had to plug in a USB audio dongle because of audio glitches.

Some of it is sloppy drivers, some, faulty or poorly designed hardware.

"Sound just died" is, unfortunately, not specific to Linux in any way.


Sure, though I am describing my latest attempt in like 15 years of using Linux on and off. At some point "every OS has its problems" just stopped being true for me.

Linux really sucks for anything other than servers. I hate to say that because I badly wish it weren't true, but it is.


How could it be any different actually?

A Linux conference is usually focused on the Linux kernel, drivers, filesystems, networking, more or less everything POSIXy.

If you want to learn about improvements at the UI level, there are XDG, GUADEC, and Akademy, each focused on its own silo, and other parts of the stack or UI tooling don't have any conference at all.

Meanwhile WWDC, Google IO, BUILD / Ignite are about all levels of the stack.


I've had good luck with System76 & PopOS; they put in the extra effort to make sure Linux just works on the hardware they sell and will respond to any tech support issues. I recently switched back to running Debian desktop on an older system and have run into some intermittent sound issues that are frustrating, so I can relate, but I haven't had that kind of issue with any of the System76 laptops I've used.

Vagrant.

https://www.vagrantup.com/downloads.html

Spin up a Linux box in macOS and ssh into it directly. It is a true joy if you are comfortable working with text files (programming, admin, focused writing, etc.)

It will default to using VirtualBox as the underlying virtualization. That works a treat and hides all the GUI madness of VirtualBox.

However, if you open up VirtualBox then you can interact with the host you just created with “vagrant up” just fine, including using a graphical environment.
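
Roughly, the whole flow looks like this (a minimal sketch, assuming Vagrant and VirtualBox are already installed; "ubuntu/bionic64" is just an example box name):

    mkdir linux-box && cd linux-box
    vagrant init ubuntu/bionic64   # writes a Vagrantfile into the current directory
    vagrant up                     # downloads the box image and boots the VM
    vagrant ssh                    # opens a shell inside the VM
    vagrant halt                   # shuts the VM down when you're done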


Personally I prefer VMware Fusion for Linux virtualization. There are a number of tweaks that it comes with, and I never really feel like anything is impossible. It may be poorly documented at times (custom networking, say) but it's all possible, and it handles retina really well.

I've never used that before, but it sounds a bit like Docker? As in, it's got a VM in the background and I can interact with it?

The authors of both Vagrant and Docker give answers to Should I use Vagrant or Docker for creating an isolated environment?[1] on StackOverflow.

[1] https://stackoverflow.com/questions/16647069/should-i-use-va...


Technology aside, I would describe it as being a bit more like an AWS instance, except it is running on your local machine.

Odd, personally I haven't had those issues in about a decade with my custom desktop builds.

But in general, if you want things to just work (tm), use System76 or some other Linux-native vendor.


Had various Linux distributions running on bare metal on my MacBook Pro 2011 for most of its life. It has barely had sound issues in that time. My Bluetooth headphones work best with my Linux machine, Windows 10 won't let me use both microphone and headphones together in a call. Absolutely atrocious, Windows.

Do you have an obscure sound card or something? With consumer-grade hardware I have rarely had compatibility issues. Well, yes, recently with USB wifi adaptors.


I got a purism laptop that has carefully chosen hardware. All the hardware has blob-free drivers.

It has worked very well for me. I originally installed qubes years ago, but it was all the security of vm/containers with 1/10th of the convenience. I switched to arch, it was a completely painless install and that's what I have now.

(hardware-wise it is more of the same - standardized screws on the case, 19v power adapter with standard barrel jack, socketed standard memory, m.2, sata)


Yeah, I bought something similar to that. The entire machine can run off the Linux kernel with zero proprietary drivers. Unfortunately even that didn't help me with my bad luck so far.

I have decided to give up trying Linux, at least for a few years.


Yeah I have a reload_alsa.sh script which reboots my audio. Basically the only problem I have. It's a shame, I know, but I still love my linux box.
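
For anyone curious, such a script can be as short as this (a sketch only, not the author's actual script; assumes PulseAudio on top of ALSA on a Debian-style system):

    #!/bin/sh
    # reload_alsa.sh (hypothetical reconstruction): restart the userspace
    # sound server and force-reload the ALSA kernel modules
    pulseaudio --kill
    sudo alsa force-reload
    pulseaudio --start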

> Everything worked for a day and a half, then the sound just fucking died. No input or output.

my condolences.


[flagged]


I mean, sound not working is pretty user hostile too. But yeah, I have come to the conclusion Linux isn't right for me. Sadly neither is Windows, and macOS is rapidly getting worse.

Do you remember when computers were fun?


This answer misses the point in a really big way.

Until my mother can use Linux, almost nobody will.


Multiple mothers in my family use Linux. The volume and nature of tech support questions are often not OS-specific. Last issue I dealt with was a dusty DVD Drive that needed to be replaced.

Lots of mothers already use Linux. Yours may be special in this respect but that does not imply much about anybody else.

> Why don't companies come out and tell people what they're doing these days?

It's a mystery. I'd certainly be much more willing to buy a machine if it came with good documentation. Back in the 1980's, they (Apple and others) used to include complete schematics for their computers.


It would be nice if my laptops bluetooth and audio would work > 99% of the time. Right now its a crapshoot

I don't know about BSD, but even "mainstream" Linux (i.e. Ubuntu and the like) has telemetry now. This sort of spyware is everywhere. I think Windows 10 was the first to really normalise such behaviour on the desktop, and all the others just followed along.

> where people such as doctors and lawyers might be violating the law by using a modern computer

That reminds me of a story I heard not long ago --- a company wanted more defense against malware, so it signed up for a "security solution" from one of the big vendors and got it installed on all the company's machines. A developer doing network tracing then discovered that it was phoning home on every executable being run, and further digging revealed that it was periodically uploading file hashes and sometimes actual files --- not just the executables being run but other random files --- to the security vendor's servers. The reaction was "oh hell no!", and they immediately terminated the service and removed the product from all their machines.


Does Debian do any telemetry besides popcon (popularity-contest)? I'd be real surprised.

I haven't seen much with Debian. They do update checks and NTP pools, but those are not telemetry.

arch didn't seem to do anything.

heck, even pfSense phones home. Last I remember, there was some data file it downloaded each time that they used for metrics.

ubuntu phones home a lot.


ubuntu has lots.

huge privacy settings pane with legalese, motd-news phones home, snapd continues to reinstall itself and use resources, whoopsie and kerneloops phone home. amazon app, apport, ubuntu-report, unattended-upgrades...

I haven't tried 20.04 yet, don't know if it is worse or better.
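
For reference, muting most of those is scriptable (a sketch, assuming 18.04-era package names; double-check them on your release before purging):

    sudo sed -i 's/^ENABLED=1/ENABLED=0/' /etc/default/motd-news   # stop motd-news fetches
    sudo systemctl disable --now motd-news.timer
    sudo apt purge -y whoopsie kerneloops apport ubuntu-report     # crash/metrics reporters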


Sorry, what's a forn?

Foreign national, probably.

Yep.

The acronym used on files is often NOFORN == no foreign nationals.

>"99% of people do everything in a browser these days anyhow."

This exaggeration is clearly absurd.


Are you certain about this? I think I was being conservative.

It's easy for us tech nerds in our little gadget bubbles to suppose that everybody is like us. But most people are simple browser users, and Office 365 and Google Docs have all but killed off office software on the desktop for many users.


On the contrary, it's easy in our tech bubble to assume that everybody else uses a computer just for mail, netflix and spreadsheets, when in reality most people have niche needs. It's just that there are many niches. E.g. I know people from scientific circles who use CAS software I've never even heard of, my sister is an architect and needs to use CAD software. YouTube video authors often use advanced video editing software. Musicians use audio editing software. Publishers use Adobe InDesign. Then there are gamers. This "not geek => mainly spreadsheet user" stereotype is really strange.

And basically everybody I know personally complains about the UX of anything web-based, so don't even think about putting CAD, CAS or InDesign into the browser.


> But most people are simple browser users, and Office 365 and Google Docs have all but killed off office software on the desktop for many users.

In reality, I see most people use desktop software instead of the browser (in some cases without using the internet at all) to do their work --- think CAD, Adobe, DAW software, Excel, video production software. Even on mobile/tablets, Office can be used where no internet connection is available.

I seriously doubt that users would spend all their time in a browser window other than for consumption purposes like social media and video sites. The idea of 99% of people doing everything in the browser seems questionable to me and some data about this would be helpful here.

Apart from people in computer science departments, I also doubt that people would find it easier to move to Linux, BSD or the galaxy of other distros.


> Office 365 and Google Docs have all but killed off office software on the desktop for many users.

That hasn't been my experience at all. While those tools are definitely used - especially for collaboration - most people on my company's Office 365 subscription are downloading and using the full products for their daily work. This is true in both very large companies and the (non-tech) startup I work at now.


Honestly, my experience with collaboration in Office 365 last year was pretty bad. At least for slightly technical use cases, there are many better solutions.

I work in an office that mostly uses Office, but doesn't use the web version for everything. Microsoft never fully implements everything when they make a substitute, so you're always faced with a case-by-case choice as to which one. And the software people use includes things that aren't part of the basic apps, like Project, PowerPoint, Visio, I don't know what else.

Why do you think so?

The ball is actually not in my court. The original 99% claim should be somehow substantiated, and it is not. On a practical note, desktop software is being used by countless professionals. There is a nearly infinite amount of those tools in countless areas. The number of small businesses is insane as well, and you can hardly find one without some old PC/laptop running some of their desktop software. None of that would exist if there was no market. The 99% claim does not really fit into that picture.

"Telmetry" has been co-opted by the tin foil hat wearing privacy keyboard warriors.

It's akin to the people spending tens of thousands of dollars on disaster prep.

The only people who lose are the end-users of software, who are forced to use crappy software.


You mean, like all the software that has been written since companies got into the habit of embedding telemetry in everything? That crappy software?

Telemetry has a specific use-case: taking measurements in a place you can't go. What industry employs it for nowadays is much closer to spyware, in the sense that you can get so much of it done without producing a noticeable effect for the user in terms of how much work their computer is actually doing. So what if you spin through a couple rounds of telemetry gathering while the user's process is blocked, am I right? Not like they're using it. /s


I guess the list of things keeping me off catalina (and, by extension, new Mac hardware) just got one item longer.

I recently bought a new System76 laptop as a stopgap, but it might end up becoming permanent. Kind of a sad end for 25+ years of Mac use.


My 16” is a huge disappointment. After swearing off Intel PCs following a disastrous X1 Carbon, I switched back to a 2013 15” until this month. I figured that after six months the bugs would be ironed out. Wrong. I’m seeing two major glitches that have MacRumors threads dozens of pages long: 1) with an external display connected, dGPU utilization shoots up to 20W at idle. (The rest of the machine draws well under 10W at idle.) That wouldn’t be a big deal if the CPU and GPU didn’t share a tight 70W power budget. 2) when connected to an external monitor or dock—I’ve tried two different TB3 docks—the machine kernel panics regularly, usually waking up from sleep.

I’m torn. I don’t want to return the machine because everything else is crap. At least the 16” works well as a laptop so long as you don’t plug anything into the ports. But Apple’s QA has seriously gone down the toilet ever since Steve Jobs died. Clearly him throwing staplers at people was the glue holding Apple together.


> when connected to an external monitor or dock—I’ve tried two different TB3 docks—the machine kernel panics regularly, usually waking up from sleep.

I had a similar problem, and it turned out that the dock needed a driver. I don't think I've installed a driver for an external device since I switched to Macs years ago, so it never occurred to me that something like a dock would need a driver.

But it turns out that once I installed the vendor's driver, the problems all went away. I'm not sure whose fault that is.


I've had neverending problems with sleep mode on the Macs I've owned (2012 Mac Mini, now 2016 MacBook usually docked) - never really worked out the issue other than entirely disabling sleep when connected to power.

Since 10.15.4, my 16" started having kernel panics while waking up from sleep. Disabling Power Nap seems to mitigate this.

While this is an awful stopgap solution, at least I can get back to work.


Fixed in the latest beta.

I’m on the latest beta and it still kernel panics for me with the same message.

My partner is a graphic designer who loathed the 13" MacBook that her work got her. She finally got an upgrade to a 16", i9, 64 gigs of RAM.

It runs Adobe software like total shit.

I think it's something to do with Catalina + accessing files in Google Drive File Stream + Adobe.

It runs illustrator horribly.

It's basically the saddest thing I've ever seen.

I think I'll get her a 17" XPS for Christmas this year.


For what it's worth, I have _never_ had good luck with "big vendor" software (like Adobe) and using any sort of synced cloud-based filestore. I have had untold issues with things and as soon as I moved files local, everything magically went away. Might try that!

You can also set the relevant folders to keep the files always available offline.

Any third-party antivirus or other corporate compliance-ware? Google FS and AV don't work so well together.

I have both of the issues you describe.

Unfortunately, while the kernel panics will most likely be fixed eventually (10.15.4 is a complete shitshow, even by Catalina standards), it seems the dGPU is actually working as designed with the high idle power draw. If you search for “navi multiple monitor power draw” you can find reports of desktop AMD cards that predate the 16” MacBook Pro that exhibit the exact same behaviour. It’s something to do with memory clocks and mismatched resolutions/refresh rates between monitors, and I very much doubt it will ever be addressed via software (if it even can be).

Very annoying as it causes the fans to spin up audibly when you put it under the slightest stress.

Like you I don’t know what to do. I’m able to return it due to the extended return window they have currently, but I have absolutely no intent of switching to Windows or Linux.


> If you search for “navi multiple monitor power draw” you can find reports of desktop AMD cards that predate the 16” MacBook Pro that exhibit the exact same behaviour. It’s something to do with memory clocks and mismatched resolutions/refresh rates between monitors, and I very much doubt it will ever be addressed via software (if it even can be).

At least when operating in clamshell mode with one external monitor, I can get the power usage to drop from 20W down to 5W by using SwitchResX and dropping the refresh rate from 59.88 to 56.88 Hz. When I do that, even light WebGL work doesn’t cause it to exceed 7-8 watts.

It sounds like some workaround for the special case of a single external monitor with the internal display closed isn’t kicking in like it’s supposed to.


From the 60+ page MacRumours forum thread about the issue, a lot of people don't have the issue at all in clamshell mode (even without the SwitchResX hackery).

My understanding of the issue is that the card has variable memory clocks to save power. However, to avoid visual distortion/tearing, the clocks can only be changed during the monitor's v-blank. However, when you have multiple monitors, presumably you would need extra circuitry or at least some mechanism to ensure each monitor is in sync, or to detect when the blanking intervals match when using monitors with different refresh rates. I don't have a strong knowledge of this sort of thing, so I don't know how exactly this is achieved, but in this case AMD has "solved" the problem by simply running the memory clocks at full tilt 100% of the time, thereby avoiding the need to precisely time changes in speed.


Right. The problem shouldn’t happen at all in clamshell mode because there is just one monitor. But it seems like in certain configurations macOS or the driver gets confused.

I was gnashing my teeth over exactly this last night — 26 years on a Mac for me:

https://wincent.com/blog/grieving-for-apple


What are the other problems with Catalina for you? I ask because every time there is an OS X update, someone posts this exact sentiment, but then over a few months the issues get resolved. Please don’t interpret this as an attack; I am genuinely curious and want to see if Apple ends up fixing things.

I myself have a maxed-out 16” MacBook Pro, and for the first few weeks after the upgrade it was literally unusable because routine user input would result in the entire system locking up. I suspect it was actually this issue but, thankfully, the issue is now resolved.


Not a Catalina issue per se, but the big problem with Apple nowadays is:

1. Upgrades are not optional. The system will relentlessly nag me until I upgrade even if I don't want to upgrade.

2. Upgrades are crapshoots. An Apple upgrade nowadays is as likely to break things as it is to fix things.

3. Upgrades are difficult and sometimes impossible to revert. If an upgrade breaks something, I'm just screwed.

So I'm still running Mavericks. It works. It's reliable. It does everything I need it to do. And I can count on that still being the case tomorrow. If I upgrade, all bets are off.


Although I sympathize, this is one issue, not three. And hardly anything unique to Apple.

It’s not unique to Apple, but that didn’t used to be the case. I’ve used a Mac for almost 15 years. My previous Macs had issues, no doubt. Got bit by the peeling anti glare issue on my 2013 MBP 15” (after 5+ years of heavy use). But I’ve had maybe two kernel panics in all that time. With my new 16”, I’ve had half a dozen in two weeks. Waking from sleep used to be the basic functionality that “just worked” on Apple machines and where Windows and Linux laptops struggled. That was the benefit you got in return for spending extra on closed hardware platform.

It seems pretty unique to Apple in my experience. I have an ancient Android tablet. I don't even know how old the OS is on it. It never nags me to upgrade. My Linux boxes never nag me to upgrade. When I do upgrade, things mostly keep working, and if they don't it's pretty easy to roll things back.

Android never getting updates is not a feature!

I much prefer no updates over broken updates, especially when they are forced on me. Stable bugs are better than a never ending stream of new ones.

How do you roll back an upgrade in Android and whatever Linux distribution you're using?

I've never had to roll back a Linux upgrade but if I had to I could always restore from a backup. You can sometimes do that with MacOS, but some MacOS upgrades come with firmware upgrades which are one-way and prevent earlier versions from booting. And all iOS upgrades are one-way.

I don't know about Android. I only have one Android device. It is so old I don't even remember how old it is and I've only ever upgraded it once. It still works like a charm for all the things I need it to do.


Perhaps LVM snapshots are an option for Linux rollback?
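
Something like this, for example (a sketch, assuming a root LV named "root" in a volume group named "vg0" with free extents available):

    # snapshot the root LV before upgrading
    sudo lvcreate --snapshot --size 10G --name root_pre_upgrade /dev/vg0/root
    # ...run the upgrade; if it goes badly, merge the snapshot back:
    sudo lvconvert --merge /dev/vg0/root_pre_upgrade
    # a merge of an in-use root volume completes on the next reboot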

I'm in a similar boat. The first few weeks with my 16" MBP were pretty bad. Everything seems resolved now, except the issue with the discrete GPU kicking in when an external monitor is plugged in. This, in turn, causes the fans to spin up (which is annoying when trying to code).

My 2017 13" MBP (without discreet GPU) was barely usable when powering my 4K monitor but at least it was quiet. It makes me think that the more modern integrated Intel GPU in the 16" should be enough to power my monitor without fan noise. Sadly, Apple has decided I can't have that option.


I installed Catalina on my iMac several days ago and ImageCapture still has bugs! Although I can now select multiple photos to import from my iPhone 11, ImageCapture will not delete the photos after import. Previous to that, ImageCapture on Catalina would not import more than 10 photos without reporting an error. At least they fixed that bug.

I’m curious, is there a reason to use image capture when you can just AirDrop the images? Does Image Capture give you the HEIC format file or something?

Surely it's faster?

It requires finding a (USB 2.0) cable, connecting both devices, approving trust, and then opening an App and selecting and waiting.

AirDrop requires touch based selection and sharing on the iOS device and the transfer is very quick and if it’s your Mac the files go straight to your downloads folder.

So it’s more streamlined overall with AirDrop.

I had a colleague who used to do a similar dance with Image Capture. He had no idea he could AirDrop photos even though he airdropped files from Finder to others all the time.

Personally I just have them all sync via iCloud Photo Library.


The trust approval either never happens or it happened once and I don't remember. I also find it's a quicker interface than Airdrop, which is slow and also lengthened by my having to turn on bluetooth and then turn it off in the settings at the end. File transfer is much quicker.

Finding a USB cable though, sometimes that does take a search and a wee bit of cursing!


There's also the issue of losing all your data if you've enabled Secure Boot (which is the default) and the T2 chip fails.

https://www.youtube.com/watch?v=6dwqxsDHkKQ


I've heard bad things about system76 laptop build quality (desktop I think is top notch). Has this improved at all?

System76 uses rebranded Sager and Clevo laptops; they don’t actually build their own. So you can probably get some more (and more diverse) reviews by searching for the model of the actual manufacturer.

How does the keyboard and trackpad for that System76 laptop feel? I've been looking at those quite a bit and am seriously considering one to replace my 2014 MacBook Air.

One thing that always turned me off of Windows was that I would be in the Control Panel or the command line within 5 minutes of using any system to fix a preference; with OSX it was refreshing not to have to do the equivalent in System Preferences or the terminal.

This is no longer true. It is a very similar and annoying experience for me.

I use OSX, Windows and various versions of Linux.

The browser is the real platform at this point and is the shared experience between all three.


With issues like this and the 4000-series Ryzen mobile processors, top-specced MacBook Pros are very noticeably slower than $1k alternatives.

I see the above comment heavily downvoted but I'm specifically looking at 4000 series Ryzen laptops as my 1st move away from MacBook Pros. Such incredible CPUs really make the decision a bit more acceptable. The laptop I'm eyeing is near $1000.

Out of curiosity - what are you looking at? I'm also starting to think about replacing my 2016 MBP and quite honestly, the current MacBook Air/MacBook Pro line doesn't really appeal to me.

Lenovo has a couple of AMD-based ThinkPads that are looking pretty appealing, although they come in around $1500 with the configuration I'd want.


Because I need a fast CPU for builds I'm looking at ASUS "gaming" laptops. The ASUS TUF Gaming A15 in particular is under $1000 before taxes here and has a good Ryzen. There are even better gaming laptops from ASUS that are thinner, but I think the A15 is a good 'workhorse'.

The problem is those are second-class citizens for most vendors; there's some shady Intel lock-in that prevents AMD from getting prime place in the lineup. So you'll get a great CPU but second-rate screens, trackpads, etc.

So it would seem but the CPU might still be worth it, even if I connect it to an external display. We'll see...

What are you eyeing?

The ASUS TUF Gaming A15. It's a bit flashy but for home office it works.

I upgraded my 2015 MBP 13" to Catalina and happily continue the Mac stint I started in 2006.

What does df -h in the terminal say?

It'll show the total of each partition (the normal one, and the new read-only system partition) on its own line, thereby giving a false total. E.g., a single 100GB disk with a 50GB normal partition and a 50GB system partition will show a capacity of 100GB for both partitions, which would imply a 200GB disk.
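
Illustratively, it looks something like this (made-up numbers, inode columns trimmed; on Catalina the Data volume is mounted at /System/Volumes/Data, and both APFS volumes report the size of the shared container):

    $ df -h
    Filesystem     Size   Used  Avail  Capacity  Mounted on
    /dev/disk1s5  100Gi   11Gi   39Gi       22%  /
    /dev/disk1s1  100Gi   50Gi   39Gi       57%  /System/Volumes/Data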

Small things like this just make me completely lose faith in Catalina.

EDIT: Other "fun" things I noticed within half an hour:

a. Text search in PDF no longer works

b. I can't create anything under /

c. I have to use synthetic.conf to map paths from / to my real partition, but the parser of synthetic.conf is very particular to tabs/spaces unlike any other /etc/ file format

d. Xcode wants to ask for my password to debug every single time I reboot and debug a C++ app. This is incredibly incredibly incredibly incredibly annoying.

Safari is faster in general use. But that's so far the only good point.

I'll keep it on an SSD for App Store submissions and keep my machine on an older, decent version, thanks.


> Text search in PDF no longer works

In Safari or in general? I have only noticed the former.

> I have to use synthetic.conf to map paths from / to my real partition, but the parser of synthetic.conf is very particular to tabs/spaces unlike any other /etc/ file format

You may already know this, but man synthetic.conf will explain that you must use tabs.
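
For example, an entry like this (illustrative names; the two fields must be separated by an actual tab character, and the target is written without a leading slash):

    # /etc/synthetic.conf -- creates /data as a symlink to /Users/me/data
    data	Users/me/data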

> Xcode wants to ask for my password to debug every single time I reboot and debug a C++ app. This is incredibly incredibly incredibly incredibly annoying.

I can only offer you my condolences as I cruise by with SIP off.


I keep a copy of Skim around for any time I want to do a proper search of a document.

> a. Text search in PDF no longer works

Oh is this macOS? I'd just assumed all the PDFs I've tried to search for the past while have been poorly formatted with the text as images, but that makes more sense.

> I'll keep it on a SSD for App Store submissions and keep my machine on an older decent version thanks

FYI it's pretty easy to integrate binary upload to App Store Connect on the CLI of your CI system.
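
E.g., something along these lines with altool, which ships with Xcode (a sketch; the file path and credentials are placeholders):

    xcrun altool --upload-app --type osx --file MyApp.pkg \
        --username "appleid@example.com" --password "@keychain:AC_PASSWORD"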


Check out `DevToolsSecurity` on the command line. Could help with d. :)
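
If I'm reading its man page right, it's a one-liner that enables developer mode, after which the debugger stops prompting for authorization (run once):

    sudo DevToolsSecurity -enable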

Don't worry, Apple will change this. Big companies always make sure that customers just barely find their products acceptable. This was just a test to see if they could get away with it.

This must be a blacklist, since it doesn't block my own random scripts which it has never seen before.

If it's a global blacklist on apple servers, it should instead be downloaded to the client, and be a local blacklist.

Too big? Use a bloom filter. Now you only end up keeping less than one byte per blacklisted item. Update the bloom filter with an autoupdater. Any positive hit you can check against the server, just in case it's a false positive.


Doesn't a blacklist also work only until the malware authors figure out how to randomize 8 junk bytes every time they serve an executable?

That's the crazy thing about this. There are already obfuscation techniques against hash blacklists, so what is this even for? There's no earthly way Apple security engineers didn't know that. So what is actually happening?

My guess is that it's strictly for banning app store apps that they pull from the app store, but would like also to cripple retroactively on installed machines. But that doesn't explain why it had to run against random shell scripts? This is all still confusing. We don't have all the info.


Which they already do.

The k-anonymity scheme used by the haveibeenpwned api seems like a good fit here.
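
For those unfamiliar: hash locally, send only a short prefix, and do the final match client-side, so the server never learns the full hash. A sketch of the flow, assuming bash and curl:

    HASH=$(printf '%s' "$SECRET" | shasum -a 1 | awk '{print toupper($1)}')
    PREFIX=${HASH:0:5}    # only these 5 hex chars ever leave the machine
    SUFFIX=${HASH:5}
    # prints 1 if the suffix appears in the returned bucket, 0 otherwise
    curl -s "https://api.pwnedpasswords.com/range/$PREFIX" | grep -c "^$SUFFIX"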

Bloom filters are probability-based and come with inaccuracy problems. If you're going to double-check with Apple anyway, what does a bloom filter solve compared to the current response caching after querying Apple? How will you protect the locally cached blacklist from being tampered with?

Bloom filters have probabilistic false positives, which makes them perfect for blacklisting. A negative means that the program can be run immediately, because it is guaranteed to not be on the list. A positive needs to be double-checked, though.

So you're telling me that every time I install a program in OSX, it pings apple to let them know what program I'm installing, my IP address, my location, and my OS version?

Sounds very Orwellian for a privacy focussed company...


No, that's not what he's telling you. You're getting ahead of yourself with your question. He's telling us that macOS will consult with Apple regarding the "fingerprint" of an executable when you run it.

But the fingerprint of photoshop is the same for everyone. If apple knows what the fingerprint of photoshop is (which they could easily find out), now they have a giant list of who installed photoshop and when, and from which IP address, and which IP location.

That data would be a wet dream for some IP lawyer looking for pirate copies of software...


The Photoshop binary is signed (presumably; it's been years since I last ran it), so this check would NOT be conducted.

Edit: What I should have said is that the binary is signed, notarized, and the notarization stapled to it, as described here: https://developer.apple.com/documentation/xcode/notarizing_m...


The author of the article mentioned explicitly that he signed a binary, and the check still occurred.

Did he staple the notarization though? They are 2 separate steps.

I understand the privacy concern. We don't know if they store/log anything from the request, or even what it contains in itself besides the "fingerprint". I'm personally certain that Apple is not in cahoots with Big Software to put a squeeze on users in exchange for small money. It's not their business, and it's not something they are required by law to make their business.

> I'm personally certain that Apple is not in cahoots with Big Software to put a squeeze on users in exchange for small money

Well, this is the problem people have, I think - that it comes down to good intentions on Apple's part, no matter how trustworthy they are deemed to be.


With this logic you might as well not try at all. You have to trust Intel, your bios/UEFI, Apple/Microsoft, all the various builds of software closed and open source alike .. at some point you need to trust someone.

how is that a justification to heap on more "required" trust into a system?

Feels like somebody could flesh out this argument in terms of accidental vs necessary complexity, but in terms of how much you need to trust the other party.

Few would accept the argument "This code is already very complex, why do you have a problem with doubling the complexity?" on its own merits, so why is it sensible in terms of trust?


My personal wet dream is that I can download any shady Photoshop torrent and Apple will block the ones that have trojans baked in. Given that Apple won't even open up their infrastructure to US law enforcement, I can't see them teaming up with IP lawyers anytime soon.

This is a valid question from someone. Why downvote? It's a question, not a statement.

Of course, if they say it's for your privacy, it's fine, and everything is all right. We have won the victory over ourselves. We love Apple.

No, they say it's for your security, and we can surmise that's the actual intention in Apple's case, but I definitely understand the privacy concerns that come with this method.

The article mentioned at the beginning is already discussed here: https://news.ycombinator.com/item?id=23273247

There is so much confusion here. The OP and most others are missing one of the biggest points: Look at the packet trace. There is _no data_, not even a hash, being sent. It's a TLS negotiation and then the connection ends. I have to suspect it's a bug...

I'm not sure what you're seeing, but that's not what I'm seeing. When I Wireshark both app notarization and script notarization, I see 2 packets of encrypted Application Data sent to Apple (567 and 101 bytes), and 1 packet of Application Data (varying length) returned from Apple, in each case. What do you see when you trace a regular app notarization check?

This is odd, my proxy doesn't seem to show this. I will try to load my root cert into Wireshark and check.

Edit: Checked and double checked: When I run a new shell script, syspolicyd just makes a connection with no application data


I'd recommend trying this: Download a notarized Mac app, delete any stapled notarization ticket (.app/Contents/CodeResources), and then trace the launch. What do you see, and does the system let you open the app? Does it say it checked for malware?

Ah I see, looks like we're not running quite the same experiment. I suspect that anything including an app bundle ID is going to see some more interesting traffic.

Don't suspect, test. ;-)

I'm running both experiments. I've tested and compared script notarization to app notarization.

You're getting apparently unusual results with script notarization. So the natural next step would be to compare against app notarization.


Agh, I think it was cert pinning. Looks like the connection is terminated if you're snooping. I see the same results as you now. Thanks!

In prior article "slow by design", this was reported to Apple and the bug was closed that it works like that by design.

I did see the previous article (another comment of mine should be easy to find on its HN post). Do you know how to find the issue that was referenced? There was an ID given but I have no clue what tracker that was on.

It's probably Radar, which is Apple's internal issue tracker, which isn't public (you can see issues you submitted, but nothing else). Sometimes people cross-post issues they submit to http://openradar.appspot.com, so you might be able to find it there.

The issue is FB7674490, but it is not on OpenRadar. It looks like OpenRadar is not an Apple service, and issues appear there only if the author (who has access to the issue in Apple's system) submits them there.

Open Radar is community-maintained, and fairly poorly at that these days :(

FB numbers come from Feedback Assistant, which are similarly non-public and feed into Radar through a convoluted process.

That ID was likely for feedbackassistant.apple.com. However, those reports are not public, so you can't see them unless you're the one that reported them though. Knowing the ID is still useful for things like emailing Apple people and complaining about their declining software quality ;).

Looks like these issues are only visible to the originator, so we have to trust the author about it. Perhaps the author could post it on OpenRadar.

Any communication is data! There are tracking pixels that return 404! Why? Because once you've hit their endpoint, it did the job.

The TLS negotiation is enough to send quite enough info.


Is the handshake all that is needed to verify? Is the data you're expecting sent during the authentication phase of the handshake?

The Objective-C format strings in the URL would imply that the hash is sent as a path parameter.

That's a string found in the disassembly of syspolicyd (I've found it as well). However, the actual URL you see in the TCP logs has no path whatsoever.

"no data", just a TLS handshake. Of course information can flow! You could put a hash of the executable in a ClientHello extension, and if the server says "i don't know it to be malware" it can finalize the TLS connection normally.

I have a feeling that there is already a system out there which does something like that.

I can't reproduce the exact test specified in the article:

  $ echo $'#!/bin/sh\necho Hello' > /tmp/test.sh && chmod a+x /tmp/test.sh
  $ time /tmp/test.sh && time /tmp/test.sh
  Hello
  
  real 0m0.016s
  user 0m0.002s
  sys 0m0.010s
  Hello
  
  real 0m0.006s
  user 0m0.002s
  sys 0m0.004s
I don't believe the 0.01s difference is long enough, and it could easily be explained by filesystem caching. The article says:

> Some people try to explain away the delay, e.g., "I would put the 300 vs 5 ms down to filesystem caching", but such hand waving doesn't stand up to further scrutiny.

...but does not provide any "further scrutiny", so for me, Occam's razor applies.


A few ideas:

1. confirm the checks are enabled:

    spctl --status
2. Make sure your terminal/shell/etc. aren't already exempted under System Preferences > Security & Privacy > Developer Tools.

3. If you already ran something that could generate a check in the last minute, the connection is still open. Most of the overhead people are recording is negotiation/handshake. If you're fairly close to the server, it seems plausible your observed time could be enough for the communication minus the negotiation. You can open Console.app and search `process:syspolicyd` in the device log to see the entries for the negotiation; wait for it to terminate.

4. Try removing and re-creating a new file as in the test you did before and check it a little more directly:

    spctl --assess -v --ignore-cache --no-cache /tmp/test.sh
If it's working, you should see a log entry with the text "summary for task success" in it with a detailed breakdown of the request (times taken per phase, bytes sent/received, etc).

I don't have a system to check this on, but Apple seriously named an option "asses"?

Ha, no. I'm the ass, here. Fixed :)

It is real; browse the previous thread on this topic: https://news.ycombinator.com/item?id=23273247

It pushed me to buy Little Snitch to block it, so I guess somebody won out.


There's no such "filesystem caching" phenomenon on macOS Mojave and earlier, so that theory falls apart rather quickly. There's also the consistent difference between timing results with internet connection enabled vs disabled.

I'm not sure why you couldn't reproduce the delay. There are several possibilities I can imagine, but these could only be proved or disproved by more testing. In any case, many people have reproduced the delay, on close to "factory default" Mac installs.


> You can verify that there's an online check by taking packet traces

It has to be said: if Apple published their source code, we wouldn't have to rely on reverse-engineering and speculation.

Little Snitch is a godsend for users of macOS, be they voluntary users or forced into using it because their employer won't allow them to use anything else.

I've found Objective-See's LuLu to be even better in catching every application and obscure binary daemon that ever makes a connection to the network and allowing me to set a temp or permanent block/allow rule for it - and it's FOSS. Try it out:

https://objective-see.com/products/lulu.html


I didn't know about LuLu, thanks. I'm trying it out now and it's not been too bothersome (so far), which is always nice. I used Little Snitch for a few years but not only was it always in my face with the pop-up dialog (poor UI choice) but because it uses plist files for its database it became incredibly slow. Had to give up on it.

I tried Murus/Vallum[1] too for a while but it was still a bit too complicated and demanding. There's still a space for a firewall with a decent UI available.

Won't be a Mac for me after this machine though.

[1] https://www.vallumfirewall.com/


I also hadn't heard of LuLu, great suggestion!

What's the exact data being sent?

There is a lot of arguing going on here, but what is the data?


Seems to be a hash of the executable.

I am not sure what the whole point of this notarization thing is. It would be great (ahem, let's say so) if there were a big and closed list of executables, but every shell / ruby / perl / python script can do many funny things, and you cannot notarize them all. Often, as in bash, by design. So?

Does anyone know if adding the Apple domain in question to piHole (or your hosts file pointing to 0.0.0.0) will suppress these checks?

If you're on MacOS, you want Little Snitch installed at all times, even if you have a pihole: a process that is denied network access is far stronger than juping DNS requests.

It appears as if OS X will revert to using an IP address if a domain lookup fails according to the thread we had on this issue yesterday.

It's a great tool for mass surveillance, since Apple gets a database of all the apps the user has run on their device (but no worries, Google has the same).

For your security ;)


> great tool for mass-surveillance

It’s not really that great for mass-surveillance. It “phones home” only on first run. And it doesn’t look like it sends data about your identity, device or location. They can have your IP address, but another mechanism would be needed to relate that to you. (I doubt they are logging the IP address anyway. The only purpose would be to surveil you, but if they wanted to do that they would surely use a more capable mechanism. That makes keeping the IP addresses a burden and a risk.)


>It “phones home” only on first run.

Do we know that? The cache may well have an expiration date. Does the cache stay after an OS upgrade? I suspect further research will discover it's not just 'first time'.

>They can have your IP address but another mechanism would be needed to relate that to you.... if they wanted to do that they would surely use a more capable mechanism.

The app profile would already tell a great deal. Once enough of these 'non-capable' surveillance mechanisms add up, you end up with a very capable surveillance mechanism.


So why mass-surveil Apple computer users this way?

My understanding is the only thing exclusive to Apple systems is developing Apple apps. But anything related on a platform like iOS is already going to be public knowledge.

Also the latest Mac hardware seems to go through great pains to make sure the primary storage device is both encrypted, unrecoverable if keys are unknown (T2 security chip), and non-removable (SSD soldered to board). So why would they make this back door for surveillance?


None of those measures you described counteract mass surveillance.

late edit (2): added 3 notes including performance impact observation

I'm concerned about this behavior (both from privacy and performance perspectives), but I'm also not (quite) convinced this is working as described/implied here.

Before I get started: If you poke at this, open Console.app first. You can see recently logged "assessment" checks in "Mac Analytics Data" with the search "process:syspolicyd". You can use the same search to watch log messages (including all of the TLS negotiation etc.) for the checks in the device log.
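
If you prefer a terminal, the same thing can be watched with the log CLI:

    # live-stream syspolicyd's log messages while you run the tests below
    log stream --info --predicate 'process == "syspolicyd"'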

The part that seems weird is that, if it is transmitting a hash (which seems possible/logical) the caching behavior doesn't appear to care or respect it?

The article suggests the following test:

    echo $'#!/bin/sh\necho Hello' > /tmp/test.sh && chmod a+x /tmp/test.sh
    time /tmp/test.sh && time /tmp/test.sh
I tried this test and got real runtimes of 0m0.289s and 0m0.006s. Then, I changed the file:

    echo $'#!/bin/sh\necho Hellok' > /tmp/test.sh && chmod a+x /tmp/test.sh
When I re-ran the script, both runs are under 10ms. The content changed, but it didn't bother re-checking. I wrote the original script to a new file path:

    echo $'#!/bin/sh\necho Hello' > /tmp/test2.sh && chmod a+x /tmp/test2.sh
This ran with runtimes similar to the original (0m0.232s and 0m0.006s). Same content, new path, new check. Here too, if it cares about the hash, it either isn't bothering to use it for caching decisions or the hash includes the path.

Then I tried rming the file, writing it again, and running it. Once again, it checks on the first request. I think this suggests it may be caching the result by inode? The author said they saw new checks after saving changes in TextEdit--I don't know much about TextEdit, but I'd guess it is doing atomic write/rename here.

Other random details I noticed:

1. it holds the connection open for a minute, presumably to minimize connection overhead for executions that'll generate many checks. My first checks were all in the 280-300ms range, but I tried one additional check within the minute and it only took 72ms. Making multiple requests in less than a minute may make it harder to notice

2. The device log has a "summary for task success" entry with pretty precise timing details on all parts of the request.

3. On my system, each of these attempts produces a "os_unix.c:43353: (2) open(/var/db/DetachedSignatures) - No such file or directory" error in the log from the libsqlite3 subsystem after the response comes back.

4. The "Mac Analytics Data" log entry for each request has a good summary that looks like:

    assessment denied for test.sh
    com.apple.message.domain: com.apple.security.assessment.outcome2
    com.apple.message.signature2: bundle:UNBUNDLED
    com.apple.message.signature3: test.sh
    com.apple.message.signature5: UNKNOWN
    com.apple.message.signature4: 1
    com.apple.message.signature: denied:no usable signature
    SenderMachUUID: ...snip...
5. When I add Terminal to the Developer Tools exemption on the privacy tab it does appear to kill the check. I'm not sure if there's genuine protection this check provides at some level, but I'll be considering adding either Terminal or at least some specific build tools to the exemptions I add on a new system. (There's a command-line sketch for this after the list.)

6. After adding the Developer Tools exemption, if you have the app open, it'll ask if it can quit it for you. I took the hint and restarted Terminal. It'll do the same thing when you remove it from the list. But I didn't see the checks actually return until I rebooted. Also, my system froze during reboot. Hopefully a coincidence. :)

7. To put a better number on how this performance impact can compound for the kinds of builds I do all of the time, I ran `nix-build ci.nix` in the local directory for one of my projects before and after enabling the Developer Tools exemption for ~/.nix-profile/bin/nix. The run took 1m22s before, 45.5s after.

8. Looks like this is the same check as is run by `spctl --assess -v <path>` (at least, per the Console.app logs). That may make it easier to play with.
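
Re: item 5, the exemption can apparently also be toggled from the command line (a sketch; I believe this is what the Security & Privacy pane drives, but verify against `man spctl`):

    sudo spctl developer-mode enable-terminal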


> I think this suggests it may be caching the result by inode?

You may very well be right. I used TextEdit simply because it was easiest for me to guarantee a new notarization check every time, but I don't know the exact criteria that macOS uses to identify an executable as "the same". There's probably some combination of path and/or inode in addition to the hash.


I'm still trying to understand what the problem here is.

Nobody seems to have an issue with checking this for apps -- it's a good security feature to protect from malware, right? And which everyone knows about? And it only happens the first time you run something, so it's not a performance issue in everday usage.

And the article even states that there seems to be a valid reason for checking shell scripts, because they can be used to compile malware.

The original complaint was about slowness, but how often do you run something for the first time? The only scenario in which I can imagine this would become a practical performance problem is if, somehow, you have an app that spawns new shell scripts all day long to execute, every few seconds, and a really flaky internet connection. Or new shell scripts hundreds of times a second, even with a good internet connection.

Is that something anyone ever needs to do? Programs can run shell commands directly, without a file, so it seems unlikely. Also, another comment here suggests that even if a shell script is modified, it isn't re-verified, so there would seem to be a trivial workaround anyways.

Or is the issue just that this is undocumented behavior? Or what am I missing here?


Great! They can block malware as soon as they find out. Love it!

I'm running my dev environment in an OpenBSD VM. This makes me wanna use it for more stuff besides dev.

I was watching a Linus YouTube video on the upgradeability of Alienware Laptops. So envious! Now you can't upgrade anything on the newest MacBook Pros.

https://www.youtube.com/watch?v=J-RXqNafscs

And if something breaks on your MacBook Pro, most likely you will have to replace the entire motherboard or display.

PS: I own lots of Macs but sad to see the direction Apple is heading in.


This is a specious problem _at best_. We have a very secure operating system doing things that others don't even try to do (notary) and we are complaining because our shell scripts take n seconds to run? Really people? If you are running signed and notarized (stapled) binaries, the system never even reports them to Apple in the first place.

This is the height of insanity to think that Apple or anyone else would want this data or use it for some nefarious purposes...anonymous hashes of junk data are essentially useless outside of this purpose. It's fine to claim not to trust anyone for anything, but most of us aren't willing or able to build our own hardware, write our own operating system, and write our own applications. We have asked our vendors for devices and code that are more trustworthy, and when they give them to us we COMPLAIN about it incessantly. This makes no sense to me.


> We have a very secure operating system doing things that others don't even try to do

Perhaps because such a system is extremely centralized?

> we are complaining because our shell scripts take n seconds to run?

Yes? If I buy a computer and it spends time doing stupid stuff, then I think I am fairly justified in being angry.

> If you are running signed and notarized (stapled) binaries, the system never even reports them to Apple in the first place.

Great, let's notarize and staple tickets to every little piece of software you write…

> This is the height of insanity to think that Apple or anyone else would want this data or use it for some nefarious purposes...anonymous hashes of junk data are essentially useless outside of this purpose.

Executable hashes tell you if someone is running a specific piece of code.

> It's fine to claim not to trust anyone for anything, but most of us aren't willing or able to build our own hardware, write our own operating system, and write our own applications.

That's why I buy them from other people and would desire them to be good.

> We have asked our vendors for devices and code that are more trustworthy, and when they give them to us we COMPLAIN about it incessantly.

How exactly does this make the device more trustworthy?


Scripts becoming noticeably slow to start every time they're edited is a notable regression for programmers. Simple as that.


