(I'm of course not saying P0 shouldn't target Google, just that Google shouldn't have to be publicly accountable to Google P0).
At least to me, it seems like there's no downside to publicly tracking responses from Google itself. Ideally P0 should operate mostly independently.
Agreed that there should be more P0-like efforts from other companies, though. The more the merrier.
Google is, of course, ethically obligated to rigorously test its own products, and if P0 has expertise that the other security orgs at Google lack, it's ethically obligated to train that expertise on Google products. I'm just saying that Google isn't ethically obligated to include itself in its vendor tracking statistics.
There’s another reason I can think of: Google has a great security culture, but like every company, it’s made up of people who may come and go. It takes constant investment to maintain and improve this culture. Project Zero helps to set standards and raise the bar across the industry, including for Google. So it’s not just setting an example; it also actively forces Google Product teams to exercise their security muscles (which includes reacting, communicating, patching, etc.). I imagine that Google would be happy having other friendly actors doing the same and sees it as a positive. (Obviously, this isn’t the only thing you’re doing to invest in the security culture, but it’s one thing.)
Why? This is just dogfooding. They should be testing that bugs can be submitted through normal channels. Then they can know how their own system compares to others from a user standpoint.
> if Microsoft and Apple are unhappy that P0 is targeting them, they should respond by standing up their own P0 teams and hammering Google, rather than having everyone operate under the fiction that it's OK for Google to be the only major vendor doing this work.
I mean, shouldn't they? If you have a trillion-dollar tech business that depends on software built out-of-house, then I would imagine your security highly depends on such a team. I don't understand why every big tech company doesn't have its own Project Zero.
This trust is the foundation of industry cooperation; otherwise, Google P0 would be perceived as a weapon of Google, focused on attacking and disclosing bugs in other companies, most of which are its direct and top competitors.
I’m not saying there’s absolutely no brand damage to Google, but in the grand scheme it seems negligible outside of HN and similar venues.
What's stopping them? Google started this team, stated its goals publicly, and engaged with third parties. They're free to ignore this, or to start their own teams and do the same.
Do any of the other major vendors have the same incentives that lead Google to create P0? I expect most of them don't.
So if a case came up against Google, I imagine they would very much prefer to have this available as a defense, and draw analogies to the real world if necessary (like the home trespassing example in the link above).
I did in fact link to 2 entire pages of what might apply, based on my layman understanding:
- https://news.ycombinator.com/item?id=30310902 (which is one potential "theory of law" that might apply)
- https://news.ycombinator.com/item?id=30316448 (a lot of actual cases against actual individuals, each based on different legal theories)
Obviously I don't know if any of these theories would make it unlawful (again, I'm not a judge or a lawyer). I just know security researchers have been sued in the past, and so far it seems to me that those suits have either (a) been settled out of court, (b) been dropped, or (c) been scoped too narrowly to set much of a general precedent.
You don't have to feel compelled to knock anything down if you don't know; I don't really expect anyone to know at this point to be honest. (The second website I linked to also mentions this dearth of court rulings.)
Project Zero doesn't do any of this kind of research.
Nobody is going to be able to sue Project Zero for finding iOS bugs. You have an almost unlimited right to conduct security research on a phone you buy, or a piece of software you install based on a click-through license.
What you need to be very careful about is, again, testing other people's computing devices. There, you have almost no rights at all (save for services that publicly waive their own rights by standing up bounty programs --- and, don't be confused, Project Zero doesn't depend on Apple's bounty programs to conduct iOS research).
These distinctions are super-clear to people who actually work in this field, but evidently unclear to people outside it, because we end up having the same picky debates about them every time vulnerability research comes up. I get it, it looks fuzzy from the outside. But it is not fuzzy to practitioners; the rules you have to be aware of to conduct research are actually fairly straightforward. Don't mess with other people's machines.
> Violations of the Digital Millennium Copyright Act; violations of the Computer Fraud and Abuse Act; contributory copyright infringement; violations of the California Comprehensive Computer Data Access and Fraud Act; breach of contract; tortious interference with contractual relations; common law misappropriation; and trespass.
So yes, companies have tried to sue over disclosure of security vulnerabilities in the past. In this one they even ended up settling with one of the other defendants (whom they may have had a bit more of a case against, thanks to the DMCA if nothing else), but I think they realized they had no case against me and most of the others and dropped the lawsuit. They still filed it, though, and I had to get a lawyer, which was not a fun few months.
* "Researching" serverside apps --- software running on computers the researcher doesn't own --- which is widely understood to fall afoul of CFAA and categorically isn't the kind of work P0 does.
* Breaching contracts, which happens commonly when vuln research firms take on pentest vendor assessment contracts for companies considering purchases, where the pentester's access to the target was explicitly arranged under NDA.
* Stuff that isn't vulnerability research under any sane definition, as when people find open S3 buckets, grab all the files off them, and then try to "conduct research" based on the contents of the stolen files.
None of this is at play in the kind of work P0 does, and there are basically no modern stories about straight vulnerability research done under P0 terms where meaningful legal threats have been made. There was a time around the turn of the century when it was briefly believed that the DMCA might be wielded against vuln researchers, but that didn't pan out.
And if one has accepted a license agreement to use the product, there's often breach of contract available as a possible basis too.
Are you saying no one has ever been sued over publishing vulnerabilities in competitors' products?
Also, their blog posts describing how security exploits work are always super interesting.
What is the argument for this?
> For nearly ten years, Google’s Project Zero has been working to make it more difficult for bad actors to find and exploit security vulnerabilities, significantly improving the security of the Internet for everyone. In that time, we have partnered with folks across industry to transform the way organizations prioritize and approach fixing security vulnerabilities and updating people’s software.
This feels pretty reasonable. We don't have a spare universe to act as our control, but their intervention does seem likely to have made us more secure overall than we would have been otherwise.
iOS: 76, Android (Samsung): 10, Android (Pixel): 6
>The first thing to note is that it appears that iOS received remarkably more bug reports from Project Zero than any flavor of Android did during this time period, but rather than an imbalance in research target selection, this is more a reflection of how Apple ships software. Security updates for "apps" such as iMessage, Facetime, and Safari/WebKit are all shipped as part of the OS updates, so we include those in the analysis of the operating system. On the other hand, security updates for standalone apps on Android happen through the Google Play Store, so they are not included here in this analysis.
so kinda what's the point of putting that column there? people will use it as an argument that Android is safer :P
Anyone with Android 6.0 or later (a 2015 OS) gets Chrome updates. Those updates are fast, automatic, and happen transparently in the background, with no user input or downtime (e.g., no need to restart). They're also not arbitrarily delayed to fit into some rigid OS update release schedule, the way iOS or macOS updates are.
That's especially important in this era where people are skeptical of updates, or just too busy, and keep delaying and skipping them. Every time I use a family member's device and check the version, I see they haven't updated in months. Many aren't even aware there are updates available.
I've told this story before, but my grandma has only 4G (no Wi-Fi) and only an iPhone, with no laptop to download IPSWs from (and obviously average users can't do that anyway); therefore she only gets iOS updates when I visit once or twice a year. Is Apple happy with that? Provocatively: is that the best they can do? Google proves it's not.
Sure, the Android OS update situation isn't great, but your biggest 0day exposures will be apps exposed to the internet (Chrome, chat apps), and those all get regular, transparent background updates with no user input; Android users are much safer for it than they'd be if Google imitated Apple's OS update strategy.
This is an Apple-wide problem, not just an iOS one. Anyone with OS X El Capitan 10.11 or later (again, a 2015 OS) can run the latest Google Chrome, yet their Safari version is riddled with catastrophic 0day bugs (which by now could be called 1,314-day bugs, since that's how many days have passed since the last El Capitan security update).
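For what it's worth, here's a minimal sketch of where a figure like that comes from, assuming the last El Capitan security update (Security Update 2018-004) shipped on 2018-07-09 and counting up to roughly when this was written (2022-02-12); both dates are my assumptions, not something stated above:

    # Rough day count between the assumed last El Capitan security
    # update and the assumed date of writing; both dates are assumptions.
    from datetime import date

    last_el_capitan_update = date(2018, 7, 9)     # assumed: Security Update 2018-004
    as_of = date(2022, 2, 12)                     # assumed date of writing

    print((as_of - last_el_capitan_update).days)  # prints 1314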
Add to that the fact that Apple doesn't really support "n-2" OSes (as tech people call them) with security updates: their unstated policy seems to be that only bugs which Apple thinks are "exploited in the wild" will be backported to any OS that isn't the very latest; most (not all) other known security patches are never backported.
Apple's security stance is much worse than Microsoft's or Google's when comparing Apples to Apples.
It doesn't really matter that it's not a legal entity; the "vendor" is just whoever is responsible for fixing the bug and releasing the fix officially. In security, we call open source projects vendors because they act as such, just like Apple would.
Of course, with open source you can fix it yourself, but in this context the stats are about how upstream behaves.
Why am I not surprised?
For all software & hardware vendors, this will help raise standards & revenue, but it will also raise the barrier to entry for new entrants: sole developers or small teams will have to consider more than the basic function of their app/project/hw. Legislation like GDPR already means some projects may never get to fly today, as the regulatory burden is too great, and the bug front is another maturing domain adding to that burden.