Yeah... Hint to Apple: when somebody responsibly discloses something to you, you respond with a proper process and understand that you get 90 days in most cases, as is industry standard (that's typically how Google Project Zero operates).
Otherwise don't be surprised when exploits end up sold for much more on a black market because nobody wants to cooperate with you.
I think we need a few more 0-days that cost Apple (and others like Valve) some bad PR to make them wake up and actually put a proper process in place.
It was so bad at one point the legal team would contact the security team to get background anytime someone notified them of anything security related... just to avoid the obvious PR disaster that would be suing someone who was genuinely being helpful. (The key there was the legal counsel was a good guy and had an appreciation for the damage a poorly timed legal threat could do.)
(and ideally cut/reroute/raise the pay according to how much hand-holding the idiot bossman requires)
If your intentions were purely altruistic then the CVE matters not. Just make a blog post with the details and link there.
But if perhaps on some level you were after recognition then yes I can see the desire for a CVE...
I've spent a lot of time involved in responsible / coordinated disclosure and it's always like this. The person disclosing "just wants to help", but the reality is the vast majority want (or feel entitled to) something for their efforts, be it their name behind a CVE, a bounty, or recognition of some type. Perhaps even the admiration of the company (wow, thanks we are impressed you found that!). That's totally normal and fine, I just wish more people would admit it.
FWIW, Apple should not be impressed I found that, they should be impressed it took someone so long to show them that their sandboxing strategy on macOS is totally broken. It's nice that they wanted to credit me and all, but they fixed the issue in a way that didn't fully resolve the problems that I brought up, which I consider worse than not crediting me or not paying me a bounty.
Also do note that Apple does claim to offer a bounty for such access, regardless of what the market is willing to pay. So clearly it is valuable to them.
Isn't this what Apple's approval process is supposed to be for? Or is that only good for preventing Apple-Tax Evasion nowadays?
Despite being a nonprofit organization, Mozilla's annual revenue is around half a billion dollars and they do offer a client bug bounty of up to $10,000.
Is my wild guess right? :)
Just file a security issue on bugs.chromium.org with a demo and instructions. If it's a good one, they'll pay well.
I don't wanna give it out since it's publicly linked to my twitter and I've bad-mouthed Apple here... (I don't publicly speak against them for obvious reasons)
I doubt it, "The exploit was used only against oppressed ethnic people" was good enough to give Apple a clean chit last time.
Please don't do this when reproducing exploits. Yes, it's just source code, and yes, the URL is dead. But it's still source code that, when compiled, will grab your real Safari data and attempt to upload it to a URL that could be switched back on.
There's a difference between repro'ing an exploit and weaponizing an exploit. alert(1) is generally the best thing to aim for, or even alert(some user data) to illustrate the point. Whereas upload(some user data, my server) is a bit too close to the moral equivalent of `rm -rf $HOME/*`: all of these illustrate the point, but rm'ing a homedir is generally not a great thing to have in your repro; ditto for uploading real user data.
example.com was explicitly reserved for scenarios like this; using it would accomplish the same thing safely.
It may seem like a pedantic point, but it was something I was taught as a pentester: don't weaponize exploits; simply reproduce them. So my instincts kicked in, and it's impossible not to mention it. (That said, I don't mean to make a fuss – it's not a big deal in this case.)
By the way, I found this post via the author's twitter: https://twitter.com/lapcatsoftware I've been following them for about a year now, and their tweets on Mac programming have been quite informative.
Yes, on Linux at least, connecting to 0.0.0.0 really does end up at 127.0.0.1. Try it.
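If you want to see it for yourself, here's a throwaway sketch (the port number is arbitrary):

    import socket

    # Listen only on the loopback address.
    server = socket.socket()
    server.bind(("127.0.0.1", 8000))
    server.listen(1)

    # On Linux, connecting to 0.0.0.0 is treated as "connect to this host",
    # so this reaches the loopback listener above.
    client = socket.create_connection(("0.0.0.0", 8000))
    conn, addr = server.accept()
    print(addr)  # ('127.0.0.1', <some port>)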
So are <anything>.invalid, <anything>.test, or <anything>.example. They're all reserved and don't route.
If the client is not aware of a special domain, then you are dependent on the infrastructure to behave in a predictable manner. Attackers can alter that behavior.
It's very unlikely that those reserved domains will be controlled by someone nefarious in the foreseeable future.
I've made it a habit to use them instead of something like "insertyourdomain.com" in example configurations or dummy data for tests (where I perhaps need a valid-looking domain). In the unlikely event that something is ever sent to it, example.com is a much better choice than just some random string, since someone could register the random one and accept that traffic.
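For instance, a made-up test fixture along those lines (the names are hypothetical):

    # Reserved names: nobody can ever register these and start receiving the traffic.
    TEST_USERS = [
        {"email": "alice@example.com", "homepage": "https://www.example.org"},
        {"email": "bob@host.invalid", "homepage": "https://demo.test"},
    ]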
> example.com was explicitly reserved for scenarios like this
I'm guessing this is also why we perhaps should avoid even "example.com" URLs if it's really bad for someone to receive the traffic :-)
If every attempt to improve something were disproven by the presence of flaws, it would disprove all attempts to do anything with software ever. I get that people don't like the macOS privacy protection efforts, but that's no reason to construct a logical fallacy.
> There are two fundamental flaws in TCC that make this exploit possible
We know that TCC is a major burden for legitimate Mac apps. But is it a major burden for malware? That's the question, and it seems to me the answer is no. There are so many holes in this system, it only stops the good developers who wouldn't stoop to using the countless hacks readily available to malware developers.
It's a burden for me as a user!
My home theater setup is basically just a Mac connected to a projector. Every button on my Harmony remote runs an AppleScript. Many of them start with lines like:
    tell application (path to frontmost application) to
I make very heavy use of AppleScript for all sorts of things on my computer. It's one of the things that has kept me on Mac over the years, because there is no broadly-supported equivalent on Windows.
I get the sense that no one at Apple uses AppleScript much, though, because if they did, they wouldn't have added an impossible-to-disable feature which renders it effectively useless.
Does the Harmony process request Apple Events automation permissions, and is the Harmony process enabled for them if so? (Whatever the parent process of the scripts you're launching is, i.e. Harmony.app in the chain Remote button -> Harmony.app -> your AppleScript .scpt)
Does exiting the Harmony process and all scripts, purging all of your events decisions with `tccutil reset AppleEvents`, and then restarting the Harmony process and running a script result in any improvements?
Is Developer Tools new in Catalina, or do I need to install Xcode or some such in order for it to appear? I never saw it in Mojave.
Fwiw, at one point I had a 250 rep bounty on this StackExchange question, and got nothing. :(
'The endless bugs in TCC demonstrate that its burden is not worth the costs to developers.'
What was written in the post did not lead me to understand this, even including the quantity/repetition modifier "over and over again". I think the missing piece for me is the cost to developers bit — without that, it reads as "the bugs prove that this isn't worth the privacy improvement", with that it reads as "the bugs prove that the cost to developers isn't worth the privacy improvement".
There's a couple reasons locks work IRL despite this, one of which is that they don't really stop honest visitors. You don't usually want anyone coming into your house that you haven't let in yourself, unless they're family members with keys.
> I was really hoping they'd take some time to address Catalina's glaring issues — its slowness, bugginess, just general sloppiness — and instead they did the opposite.
I didn't include "security exploits" in that list because of course they're going to fix security exploits, right? Especially one that they're already aware of, and can reproduce — they wouldn't just sit on it in favour of making the UI glossier, right?
Apple have just been getting so much wrong lately. This pretty much dashes any hopes I had of ever upgrading, because even if this particular flaw gets fixed by the time Big Sur comes out, there are almost certainly others that they've ignored. I expected to eventually have to grit my teeth and upgrade so I wasn't behind on security updates, but I guess that's not the case. Ugh, now I have to figure out how to mute that annoying (1) in System Preferences. I've heard they keep making it come back now.
I like the visuals too, at least compared to Yosemite-era; Leopard-era still wins out overall. Using the OS has made me feel much better about the Mac as a platform, although I'm certainly still nervous.
Same! However, I've never actually used Catalina for a significant period of time, so I can't compare them directly. I downgraded back to High Sierra after just using Mojave for a few weeks, partly because I was frustrated with TCC breaking my scripts, but also because I'd noticed Mojave was slower and buggier than High Sierra. When Catalina came out and the problem reports started rolling in, I resolved not to touch it with a ten-foot pole.
I only installed Big Sur because I wanted to try out the new design. I was fully expecting a dumpster fire, and I wouldn't have even blamed Apple for it, given that Big Sur is currently an early developer preview (I was only able to download it by hacking Apple's catalog URLs). I did not expect to actually like Big Sur.
There are a couple of odd bugs that I expect to get ironed out by the fall—for example, the menu bar sometimes shows a wifi-disconnected icon even when the internet is working fine. On the whole though, I think the current build of Big Sur would make for a fine daily driver. (Although it would probably still be a bad idea if you're working on something important!)
More than anything else—and I know this will be very hardware-specific—I just can't overstate how fast Big Sur feels. It's really as if I got a new computer. I will note that I did a clean install, but I do those regularly anyway, and they don't help all that much.
Substitute "security" for "privacy" and you immediately see why the argument is flawed. "Other people and I have found bugs that bypass security features, ergo security features are security theater and only harm legitimate developers. Better have a free-for-all OS and be mindful of what you install." (Yes, you should be mindful of what you install, regardless.)
So this is just another abuse of the term "security theatre." Door locks, for example, are legitimate security. There are plenty of lock picking sets on the market, but that does not mean door locks do not work. TCC works because a casual user isn't trying to override it; if it didn't exist, users would have no protection at all. It still comes down to the fact that the person sitting behind the computer is in complete command of it. And security knowledge is the best security you can buy.
No I'm not. If I had complete command, I could actually turn off the damn prompts so my own scripts worked.
Customers buy the products and can logically say "I value my security so I buy Apple products".
This 10.9 incarnation of TCC is much weaker, however. It doesn't apply to most Apple Events—only UI scripting—and once the user whitelists an application, that application can control any other app on the machine. Also, because SIP wasn't introduced until 10.11, there was originally nothing to stop an application with admin privileges from editing tcc.db directly and whitelisting itself—as Dropbox later did!
This is a pretty glaring security issue actually - after reading this, it seems like Apple's choice to track app permissions / security exceptions by the app's bundle ID and not its file path was a pretty big mistake.
I wonder if this is a case of iOS security engineers working on macOS, forgetting that app bundle IDs aren't enforced by a central install flow on the platform?
At the least, couldn't they maintain a cache of verified signatures, based on the hash of the file? Then on subsequent loads, they could just hash the file and see if the hash was cached. Not as safe as checking on each load, mind, but surely a big improvement over checking it once and blindly assuming no changes!
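Something like this rough sketch is what I mean (purely illustrative; full_signature_check stands in for whatever expensive verification the OS does today):

    import hashlib

    def full_signature_check(path):
        # Placeholder for the real (expensive) signature verification.
        raise NotImplementedError

    verified_hashes = set()

    def is_trusted(path):
        # Hash the current on-disk contents of the binary.
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest in verified_hashes:
            # Byte-for-byte identical to something already verified.
            return True
        if full_signature_check(path):
            verified_hashes.add(digest)
            return True
        return False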
I mean, if this was Windows it would be absolutely huge - they'd be ridiculed in infosec and HN circles alike, and IT teams across the globe would be nervously scrambling to get the patch applied before they got pwned.
It seems like Apple is getting off too lightly here, IMO.
I really disagree. This feature has revealed that Google Drive by default spies on my ~/Downloads folder - something it has no business doing, nor did I ever intend for it to do.
I actually love the many things Apple are implementing to improve user privacy from third-party apps and services right now. To an extent, their privacy brand actually is real, and helpful.
Just don't expect any privacy from Apple themselves (and by extension the government). At least they help reduce Google and Facebook's rampant surveillance. That's some good news, in today's era of tech.
Or, phrased differently: all installations of the same app have the same privileges.
In principle, it works with any app that can be convinced to run arbitrary code by changing its resources. What Safari is doing isn't wrong and wouldn't cause an issue if TCC checked the entire app, including resources.
It's true that you can't use this to copy the privileges of _any_ app, only of those that have this property.
It does limit the scope very much. TCC shouldn't have to check the entire app for its signature.
Safari should verify its resources, is what I am saying
I guess any Electron apps, and apps using a webview for their local resources, do it too.
Basically, if you can do this with any app, it's OS X's fault; otherwise it's the app's fault.
Even Apple's guides state:
You must also verify that the file you intend to read from or write to is the same file that you created.
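A rough illustration of that pattern (not Apple's API, just the general idea of remembering a file's identity and re-checking it before use):

    import os

    def create_and_remember(path):
        # Create the file exclusively so we know we made it, and record its identity.
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
        st = os.fstat(fd)
        os.close(fd)
        return (st.st_dev, st.st_ino)

    def is_same_file(path, identity):
        # Before reading or writing again, confirm the path still points at that file.
        st = os.stat(path)
        return (st.st_dev, st.st_ino) == identity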