Project Zero has stuck to their 90-day policy with everyone. They aren't picking on Apple; they've released unpatched bugs for most major vendors.
You may disagree with publication, but keep in mind this is a bug that may be known to some already. There are literally thousands of "private exploits" available for sale if you know where to look.
The 90-day policy forces companies to make their customers safer. If not 90 days, you'd still have to pick some arbitrary length of time, and companies might take a longer deadline even less seriously and miss that one too (and people would criticize that as well).
The "additional bug" could just be a compromised update of any popular application available as a free download (see )? Then it is serious, because people download and run all kinds of applications from any website that seems useful to them.
iOS is of course a very different story, and some local macOS bugs might have parallels in iOS (not this one, though).
This is untrue for iOS, because iOS actually does have a pretty robust security model; it has to, given that only-semi-trusted apps run on the device that is your skeleton key to everything.
You probably have all the locally available data you could want. What you may not have is passwords, or other data that only comes from actively using an online site for which there are no currently active credentials (because they timed out or the user logged out previously). For example, depending on the person and the documents they keep locally, you might not have access to any banking information.
Privilege escalation opens up a whole slew of capabilities that make it much easier to achieve that, not to mention allowing a rootkit to get access back immediately after a reboot.
Local account exploits are like a burglar breaking in (even repeatedly) while you aren't home. Privilege escalation is like them installing a remote camera system and recording your every action while you are home. Both are pretty horrible, but certain things are much more likely to be exposed with active monitoring of what you are doing over extended periods...
As an aside, any app that triggers gratuitous keychain prompts (I’m looking at you, Skype and Outlook for Mac) is a weapon of mass-miseducation. People should think long and hard before authorizing keychain access; if you keep firing prompts at them, they’ll end up clicking/tapping without reading what it’s for, which is a recipe for phishing.
I don't think it's that hard: it's basically equivalent to finding the value of a variable in a coredump, which is a thing I've done a lot and which debuggers explicitly help you with, even if it's a release build without debug symbols. One way to do it is to inject a controlled password into the same build of the program on your own machine, dump its memory, search for the password, and then reverse ASLR if needed (which is easy if you have full memory read/write access; the point of ASLR is to make limited-access vulnerabilities like small stack overflows hard to exploit, but sort of by definition the information about where libraries are is located somewhere in the process so that the process itself can run). You can then script doing the same thing once per current-ish release build per current-ish OS, and get a robust automated attack.
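To get a feel for how mechanical the search step is, here is a minimal sketch of just the "plant a known password, dump, and search" part; it assumes you've already captured a raw dump to a file, and the dump path and marker string are whatever you used on your own machine:

    /* scan.c: find a planted marker (the controlled password) in a raw
     * memory dump. Offsets found this way can be replayed against the
     * same build elsewhere, modulo the ASLR slide.
     * Build: cc -o scan scan.c  (memmem is a GNU/BSD extension) */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv) {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <dumpfile> <marker>\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        rewind(f);
        char *buf = malloc(size);
        if (!buf || fread(buf, 1, size, f) != (size_t)size) {
            fprintf(stderr, "read failed\n");
            return 1;
        }
        size_t mlen = strlen(argv[2]);
        /* plain substring scan over the whole dump */
        for (char *p = buf; (p = memmem(p, size - (p - buf), argv[2], mlen)); p++)
            printf("marker at offset 0x%lx\n", (long)(p - buf));
        return 0;
    }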
In general I think people overestimate the difficulty of doing clever things with process memory (one of my favorites here is the "Vudo" exploit from 2001 http://phrack.org/issues/57/8.html in which sudo could be tricked into temporarily writing a 0 byte one past the end of a buffer and then restoring the original byte, and it's still exploitable to get root). If you haven't done it yourself, it sounds harder than it is, and betting that it's hard because it sounds complicated when there's no actual isolation boundary is more-or-less security by obscurity.
The same thing applies to triggering keychain prompts: use a side channel (e.g., polling the stack of a legitimate process) to figure out when the user is likely to need a keychain unlock, then trigger a keychain unlock for your app at the same time. (Note that you can either pop up a dialog on your own or actually request keychain access for your own process.) You also might not even need to time it: just rely on the fact that the vast majority of users are going to respond to security prompts in a way that appears to unblock their jobs, without evaluating whether the prompt is reasonable. See studies on click-through rates of cert warnings before they got hard to click through, the continued effectiveness of phishing / spear-phishing, etc. If you pop up a dialog saying "Java Updater needs your password," I bet you'll get a ton of people who don't even have Java installed type in their password.
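For the "actually request keychain access for your own process" route, this is roughly all it takes with macOS's Security framework. A minimal sketch; "ExampleService" is a hypothetical item name, and for legacy login-keychain items, querying an item whose ACL doesn't include your binary is exactly what makes the OS raise the standard keychain prompt on your behalf:

    /* keyq.c: ask the keychain for some service's password item. If this
     * binary isn't on the item's ACL, macOS shows the standard "wants to
     * use your confidential information" prompt for us.
     * Build: cc keyq.c -framework Security -framework CoreFoundation */
    #include <CoreFoundation/CoreFoundation.h>
    #include <Security/Security.h>
    #include <stdio.h>

    int main(void) {
        const void *keys[] = { kSecClass, kSecAttrService,
                               kSecReturnData, kSecMatchLimit };
        const void *vals[] = { kSecClassGenericPassword,
                               CFSTR("ExampleService"),   /* hypothetical */
                               kCFBooleanTrue, kSecMatchLimitOne };
        CFDictionaryRef query = CFDictionaryCreate(NULL, keys, vals, 4,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

        CFTypeRef result = NULL;
        OSStatus status = SecItemCopyMatching(query, &result);
        if (status == errSecSuccess) {
            printf("got %ld secret bytes\n",
                   (long)CFDataGetLength((CFDataRef)result));
            CFRelease(result);
        } else {
            printf("SecItemCopyMatching: %d\n", (int)status);
        }
        CFRelease(query);
        return 0;
    }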
More fundamentally, an unsandboxed process can do things like set environment variables, update config files, modify applications installed into the user's home directory, read your browser cookies, etc. as a consequence of traditional desktop apps having unrestricted access to the user's home directory, so preventing reading memory doesn't really prevent one app from attacking another.
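To make the config-file point concrete, a sketch of how little "persistence" takes at user level (~/.zshrc is just one hypothetical target; any startup file in $HOME works the same way):

    /* plant.c: unsandboxed persistence via a user config file. Any process
     * running as the user can append to a shell startup file, and every
     * future interactive shell executes the planted line. No prompt, no
     * privilege needed. The payload here is deliberately inert. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const char *home = getenv("HOME");
        if (!home) return 1;
        char path[1024];
        snprintf(path, sizeof path, "%s/.zshrc", home);  /* hypothetical target */
        FILE *f = fopen(path, "a");
        if (!f) return 1;
        fputs("\n# planted by an unprivileged process:\necho pwned >/dev/null\n", f);
        fclose(f);
        return 0;
    }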
Perhaps I'm also completely misinformed or misunderstanding what GP is saying, but each process is meant to get its own virtual address space.
macOS (Mach): https://developer.apple.com/documentation/kernel/1402405-mac...
Linux: /proc/*/mem or http://man7.org/linux/man-pages/man2/process_vm_readv.2.html
and so forth.
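As a concrete sketch of the Linux variant, assuming a same-user target and a permissive yama/ptrace_scope (the address to read is up to you):

    /* peek.c: read another same-user process's memory on Linux with
     * process_vm_readv(2). Requires ptrace permission for the target
     * (same uid with ptrace_scope permitting, or CAP_SYS_PTRACE).
     * usage: ./peek <pid> <hex-address> */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    int main(int argc, char **argv) {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <pid> <hex-address>\n", argv[0]);
            return 1;
        }
        pid_t pid = (pid_t)atoi(argv[1]);
        unsigned long addr = strtoul(argv[2], NULL, 16);

        char buf[64];
        struct iovec local  = { .iov_base = buf,          .iov_len = sizeof buf };
        struct iovec remote = { .iov_base = (void *)addr, .iov_len = sizeof buf };

        ssize_t n = process_vm_readv(pid, &local, 1, &remote, 1, 0);
        if (n < 0) { perror("process_vm_readv"); return 1; }

        for (ssize_t i = 0; i < n; i++)
            printf("%02x%c", (unsigned char)buf[i], (i % 16 == 15) ? '\n' : ' ');
        putchar('\n');
        return 0;
    }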
As opposed to what other operating system? Even Linux with SELinux allows that...
Of course, it's a different matter for shared computers in school computer labs. Or if Mac servers were a thing.
So tell me, kind sir: what OS do you use?
Some kind of embedded/industrial control system? z/OS?
Given the requirement that a secondary process should even be able to modify a file that is already open, I guess the expected behavior is that the 1st process's version should remain cached in memory while allowing the on-disk (CoW) version to be updated, while also informing the 1st process of the update and allowing it to reload/reopen the file if it chooses to do so. If this is the intended/expected behavior, then it follows that pwrite() and other syscalls should inform the kernel and prevent the original cached copy from being flushed.
Wouldn't the issue be the change on disk in the first place?
If I've understood correctly, I think it works something like this:
A privileged process needs to use some shared-object/library, and the OS loads that file into memory for it.
An unprivileged process creates an exact duplicate of the library file and asks the OS to load that into memory. This triggers the copy-on-write system to realize that these are identical, and creates a link between them so that both processes are using the same chunk of memory.
The privileged process then doesn't actually use the library for a while, so the OS pages it out of memory, while keeping that copy-on-write link around.
This is where the sneaky/buggy part lies, in that it's possible for the unprivileged process to modify its version of the library file without the copy-on-write system being informed of the write (and thus separating the two diverging copies). This is apparently related to some details of how mounting filesystem images works.
The next time the privileged process tries to use the library, the OS sees that it has already been paged out, but hey here's this copy-on-write link pointing at this "identical" version over here, and loads the poisoned version into the privileged process.
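You can't reproduce the buggy kernel path from user space, but the copy-on-write contract it violates is easy to see with an ordinary MAP_PRIVATE mapping. A sketch (the /tmp/cow_demo scratch path is hypothetical): writing through the mapping is supposed to split off a private copy and leave the file alone; the bug was, in effect, the mirror image, where the file's pages changed underneath without the memory manager noticing the divergence.

    /* cow.c: the normal copy-on-write contract for MAP_PRIVATE mappings.
     * Writing through the mapping faults in an anonymous private copy;
     * the file on disk stays untouched. Build: cc -o cow cow.c */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        const char *path = argc > 1 ? argv[1] : "/tmp/cow_demo";
        int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) { perror("open"); return 1; }
        write(fd, "original", 8);

        char *map = mmap(NULL, 8, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }

        memcpy(map, "modified", 8);   /* kernel splits off a private page */

        char disk[9] = {0};
        pread(fd, disk, 8, 0);
        printf("mapping: %.8s  disk: %s\n", map, disk);  /* modified / original */
        return 0;
    }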
But with file-mapped pages this is going to be a bit weird, because processes that have a page mapped MAP_SHARED should see others' modifications of it, and presumably when a process modifies a file-mapped page that is marked read-only, the kernel now has to update every other process's view to point at the new writable page. It is easy to modify the current process's page table; I'm not sure how easy it is to modify another process's view of the page table. Though maybe the kernel wouldn't let you RPC a MAP_SHARED mmap, and would only let you do it with a MAP_PRIVATE one.
Also, why isn't this a problem in Linux or BSD?
Are they giving any reasons for the delay?
90d seems like sufficient time for many processes, particularly if you're used to iterating in somewhat flexible or always-releasable environments. But as soon as you start involving multiple systems with different timetables, the time required to meet everyone's process needs can balloon, particularly if the fix requires substantial changes (and thus substantial testing).
(I'm in favor of the default 90d disclosure policy, just wanted to point out that there are sometimes legitimate justifications for longer timelines.)
(Work for elgooG, not on anything remotely related to GPZ, opinions my own, etc.)
Exceptional case, and the deadline WAS extended: reported 1 June 2017, disclosed January 2018.
And here is the list of bugs where the deadline expired:
There was also the issue that gregKH complained about, where the major Linux distros on the embargo were basically siloed by Intel and kept from talking to each other, so they each ended up building their own solutions.
The first link below suggests FreeBSD got into the NDA sandbox relatively shortly before it went public; the second says OpenBSD indeed did not.
 - https://lists.freebsd.org/pipermail/freebsd-security/2018-Ja...
 - https://marc.info/?l=openbsd-tech&m=151521435721902
 - https://www.eweek.com/security/linux-kernel-developer-critic...
Say what you want about Google, but groups like Project Zero and Talos from Cisco are awesome. The things they find...
(Meaning, is this about CoW in general or about memory management related to mounted images?)
Is 90 days enough to fully fix, for example, Spectre and Meltdown? I'm not sure all bugs can be fixed in 90 days; some will take decades to resolve.
Also, it affected multiple vendors, not one, so coordination across a minimum of 10 major organizations (Google, Apple, Microsoft, Amazon, Intel, AMD, Linux, etc.) breaks the normal assumptions behind Project Zero.
Given those issues, different rules make sense. It's not a normal case where a single vendor has a flaw that can be fixed with a single patch.
It was privately disclosed to Apple; 90 days later, they published. Simple as that.
Is it perhaps possible that equitable treatments of vulnerabilities and companies might not be particularly high on the list of priorities for GPZ? Some might even argue that past attempts at equitable treatment have backfired badly, with many cases of companies abusing the time this gets them to not fix vulnerabilities.
Again, you're completely correct. Though I would genuinely love to hear your ideas of what equitable policy would look like - it could easily be better!
I understand the positive incentives for publishing when companies do not respond to flaws.
However, Google has no particular right to police other companies.
If they disclose at 90 days, and harm ensues, there is no defense. Google is responsible.
The way you laid things out, Google should just collect zero-days and sit on them? Do you see the absurdity of that? From a business perspective, having these vulnerabilities around makes it easier for their competitors to collect the same kinds of data about internet searches and private emails that Google collects through legitimate means. Getting vulnerabilities fixed widens Google's data moat.
Disclosing issues is not "policing". They are not arresting people, or taking any action other than stating the truth, that some software is vulnerable.
If they disclose at 90 days and harm ensues, the user bears responsibility for continuing to use the software. If they trust the software vendor to issue timely updates, then they can turn around and lay blame at the vendor for not fixing the issue. Or they can blame the hacker.
Some bugs may take longer than that to fix. I still don't think it's an unreasonable question to ask.
Google is being more than generous, doubling to 90 days.