A walk through Project Zero metrics (googleprojectzero.blogspot.com)
130 points by arkadiyt on Feb 11, 2022 | 58 comments



This will sound very weird, but I kind of hate that they include Google among the vendors they report to, provide a deadline and grace period for, and track responses from. It's actually not their responsibility to do anything like that; if Microsoft and Apple are unhappy that P0 is targeting them, they should respond by standing up their own P0 teams and hammering Google, rather than having everyone operate under the fiction that it's OK for Google to be the only major vendor doing this work.

(I'm of course not saying P0 shouldn't target Google, just that Google shouldn't have to be publicly accountable to Google P0).


Could you expand more on why?

At least to me, it seems like there's no downside to publicly tracking responses from Google itself. Ideally P0 should operate mostly independently.

Agreed that there should be more P0 like efforts from other companies though. The more the merrier.


I guess I'd start by saying I don't see the advantage to P0 operating independently. Threads about P0 often devolve into debates about conflicts of interest, but there's no conflict here; every vendor has in principle the right to conduct lawful vulnerability research against other vendors, including competitors, and there's no ethical standard that dictates what those vendors should choose to target.

Google is, of course, ethically obligated to rigorously test its own products, and if P0 has expertise that the other security orgs at Google lack, it's ethically obligated to train that expertise on Google products. I'm just saying that Google isn't ethically obligated to include itself in its vendor tracking statistics.


I don't think they are ethically obligated, but it adds credibility to the idea that P0 is trying to improve security across the industry as a whole. Perhaps that makes their work easier because people are more receptive knowing that they are being treated "fairly" (at least in the same manner as the organization sponsoring the work).


See, this is my problem, because it should be self-evident just from their output, no matter who the vendor targets are, that they're improving security across the industry as a whole. It shouldn't even be a question.


Disclaimer: I work at Google, but this is just my opinion.

There’s another reason I can think of: Google has a great security culture, but like every company, it’s made up of people who may come and go. It takes constant investment to maintain and improve this culture. Project Zero helps to set standards and raise the bar across the industry, including for Google. So it’s not just setting an example; it also actively forces Google product teams to exercise their security muscles (which includes reacting, communicating, patching, etc.). I imagine that Google would be happy having other friendly actors doing the same and sees it as a positive. (Obviously, this isn’t the only thing Google does to invest in its security culture, but it is one such thing.)


If you ever get a chance to sit down with someone from Google security, ask them if P0 is buddy-buddy with product. Just make sure they aren’t taking a sip of anything at the time.


I agree there's no obligation, but including it gives other people some assurance that P0 isn't going easy on its patron.


Maybe think of it as dogfooding?


> I kind of hate that they include Google among the vendors they report to, provide a deadline and grace period for, and track responses from.

Why? This is just dog food. They should be testing that bugs can be submitted through normal channels. Then they can know how their own system compares to others from a user standpoint.

> if Microsoft and Apple are unhappy that P0 is targeting them, they should respond by standing up their own P0 teams and hammering Google, rather than having everyone operate under the fiction that it's OK for Google to be the only major vendor doing this work.

I mean shouldn't they? If you have a trillion dollar tech business that depends on software built out-of-house then I would imagine your security highly depends on such a team. I don't understand why all big tech doesn't have their own project zero.


Gosh, why? From any outsider's perspective, this gives Google P0 much more credibility to show that they are operating on equal terms and not playing a game favoring Google itself.

This trust is the foundation of industry cooperation, otherwise Google P0 would be perceived as a weapon of Google focusing on attacking and disclosing bugs in other companies, most of which are their direct and top competitors.


Why not? It really strengthens the message.


I don't think it does strengthen the message, unless you think Google does such a good job responding to P0 that they're setting a standard Microsoft, Apple, and Adobe have to adhere to, and I think that's pretty debatable (the really important thing P0 does to set a standard is the 90 day deadline).


It prevents Google execs from burying P0 reports to other Google teams though, which seems like the bigger risk?


What’s the value of the public’s opinion of their objectivity in this context? Let’s say that my mom thinks P0 are a bunch of bullies and blowhards. So what?

I’m not saying there’s absolutely no brand damage to Google, but in the grand scheme it seems negligible outside of HN and similar venues.


> if Microsoft and Apple are unhappy that P0 is targeting them, they should respond by standing up their own P0 teams and hammering Google, rather than having everyone operate under the fiction that it's OK for Google to be the only major vendor doing this work.

What's stopping them? Google started this team, stated its goals publicly, and engaged with third parties. They're free to ignore this, or to start their own teams and do the same.


> fiction that it's OK for Google to be the only major vendor doing this work.

Do any of the other major vendors have the same incentives that lead Google to create P0? I expect most of them don't.


Maybe IBM, but that's where engineering orgs go to die these days, so I don't really expect it of them.


I wonder if part of it is providing a way around internal politics/prioritization to force the teams to actually take action, to make delaying a fix a complete non-option.


I imagine it might be a legal thing? Could their competitors file suits claiming they're being targeted/treated unfairly through the disclosure timelines and whatnot? This would seem to mitigate that.


Nope, there's no law that says you can't do independent vulnerability research.


Is publicly revealing vulnerabilities/exploits that can damage a competitor considered part of what you're referring to as "research"?


It is considered that, because that is what it is. There is no law dictating how (or why) vulnerabilities are disclosed, and the disclosure of vulnerabilities is a public service.


Has there been any court case where this interpretation has been sufficient defense?


Thousands of vulnerabilities are disclosed every year. Nobody has ever been successfully sued. The burden is on your argument, not mine.


Yeah I understand that part. But to my understanding this is just due to a lack of clear court rulings on this (cases often settle/drop before court?), not due to the law being interpreted explicitly in favor of this by a court. e.g., https://securityboulevard.com/2021/07/what-the-van-buren-cas...

So if a case came up against Google, I imagine they would very much prefer to have this available as a defense, and draw analogies to the real world if necessary (like the home trespassing example in the link above).


You haven't even presented a theory of law that would make this work unlawful. It can't be on me to come up with such a thing just to knock it down.


> You haven't even presented a theory of law that would make this work unlawful.

I did in fact link to 2 entire pages of what might apply, based on my layman understanding:

- https://news.ycombinator.com/item?id=30310902 (which is one potential "theory of law" that might apply)

- https://news.ycombinator.com/item?id=30316448 (a lot of actual cases against actual individuals, each based on different legal theories)

Obviously I don't know if any of those theories would make it unlawful (again, I'm not a judge or a lawyer). I just know security researchers have been sued in the past, and so far it seems to me that those cases have either (a) been settled out of court, (b) been dropped, or (c) been scoped too narrowly to set much of a general precedent.

You don't have to feel compelled to knock anything down if you don't know; I don't really expect anyone to know at this point to be honest. (The second website I linked to also mentions this dearth of court rulings.)


Every case in the article you cited involved someone conducting "research" on computers they did not themselves own, as happens when you portscan a remote host, or look for XSS vulnerabilities on someone's SAAS app, or try to pentest the media system on an airliner.

Project Zero doesn't do any of this kind of research.

Nobody is going to be able to sue Project Zero for finding iOS bugs. You have an almost unlimited right to conduct security research on a phone you buy, or a piece of software you install based on a click-through license.

What you need to be very careful about is, again, testing other people's computing devices. There, you have almost no rights at all (save for services that publicly waive their own rights by standing up bounty programs --- and, don't be confused, Project Zero doesn't depend on Apple's bounty programs to conduct iOS research).

These distinctions are super-clear to people who actually work in this field, but clearly unclear to people outside it, because we end up having the same picky debates about them every time vulnerability research comes up. I get it, it looks fuzzy on the outside. But it is not fuzzy to practitioners; the rules you have to be aware of to conduct research are actually fairly straightforward. Don't mess with other people's machines.


Thanks for clarifying that.


You've got things reversed here: what would you sue someone for that anyone would need to mount a defense?


Well, Sony tried to sue me for disclosing a vulnerability in the PS3. This is what they claimed:

> Violations of the Digital Millennium Copyright Act; violations of the Computer Fraud and Abuse Act; contributory copyright infringement; violations of the California Comprehensive Computer Data Access and Fraud Act; breach of contract; tortious interference with contractual relations; common law misappropriation; and trespass.

Yes. Trespass.

So yes, companies have tried to sue over disclosure of security vulnerabilities in the past. In this one they even ended up settling with one of the other defendants (whom they may have had a bit more of a case against, thanks to the DMCA if nothing else), but I think they realized they had no case against me and most of the others and dropped the lawsuit. They still filed it, though, and I had to get a lawyer, which was not a fun few months.


Journalists have tried to put stories together on vulnerability researchers being threatened (and, hey, I'll raise my hand here: I've been threatened several times). But if you look at the actual instances where things have gotten far enough along to report, the fact patterns break down. It tends to turn out that people are getting threatened for:

* "Researching" serverside apps --- software running on computers the researcher doesn't own --- which is widely understood to fall afoul of CFAA and categorically isn't the kind of work P0 does.

* Breaching contracts, which happens commonly when vuln research firms take on pentest vendor-assessment contracts for companies considering purchases, where the pentester's access to the target was explicitly arranged under NDA.

* Stuff that isn't vulnerability research under any sane definition, as when people find open S3 buckets, grab all the files off them, and then try to "conduct research" based on the contents of the stolen files.

None of this is at play in the kind of work P0 does, and there are basically no modern stories about straight vulnerability research done under P0 terms where meaningful legal threats have been made. There was a time around the turn of the last century where it was briefly believed that the DMCA might be wielded against vuln researchers, but that didn't pan out.


Except none of the categories you describe apply to what I did, and I got sued. And indeed, the only case they could plausibly have in the scenario I was part of is under the DMCA, because although we did it to run Linux, security vulnerabilities in game consoles can also be used to pirate games.


I don't know, I'm not a lawyer. But I imagine it wouldn't be hard to find some kind of vague basis to sue someone who intentionally causes you harm. Quick Googling suggests business torts are a thing; maybe there's more beyond that: https://www.findlaw.com/smallbusiness/business-laws-and-regu...

And if one has accepted a license agreement to use the product, there's often breach of contract available as a possible basis too.

Are you saying no one has ever been sued over publishing vulnerabilities in competitors' products?


I dislike many aspects of Google, but Project Zero is not one of them, and it has greatly improved the overall security of the industry.

Also, their blog posts describing how security exploits work are always super interesting.


> [P0] has greatly improved the overall security of the industry.

What is the argument for this?


Project Zero's argument is:

> For nearly ten years, Google’s Project Zero has been working to make it more difficult for bad actors to find and exploit security vulnerabilities, significantly improving the security of the Internet for everyone. In that time, we have partnered with folks across industry to transform the way organizations prioritize and approach fixing security vulnerabilities and updating people’s software.

This feels pretty reasonable; we don't have a spare universe to act as our control, but their intervention does seem likely to have made us more secure overall than we would otherwise be.


Is “the security of the Internet” just a proxy for, “the number of vulns in the wild”?


Sure, you could also phrase that sentence a hundred other ways.


They've been very effective at getting many large companies to actually fix security bugs and release updates with those fixes in a reasonable time frame. They've been very strict about their "we will publish the details of this security bug in 90 days unless you can provide a very good reason not to" policy.


Like the number of critical bugs they found and pushed the vendors to fix? Like finding bugs like Spectre that forever changed the chip industry's understanding of side channel attacks and subsequent designs?


That iOS vs Android table kinda makes no sense, as they themselves point out:

iOS: 76, Android (Samsung): 10, Android (Pixel): 6

>The first thing to note is that it appears that iOS received remarkably more bug reports from Project Zero than any flavor of Android did during this time period, but rather than an imbalance in research target selection, this is more a reflection of how Apple ships software. Security updates for "apps" such as iMessage, Facetime, and Safari/WebKit are all shipped as part of the OS updates, so we include those in the analysis of the operating system. On the other hand, security updates for standalone apps on Android happen through the Google Play Store, so they are not included here in this analysis.

so kinda what's the point of putting that column there? people will use it as an argument that Android is safer :P


I would argue that Android is safer, specifically from 0days.

Anyone with Android 6.0 or later (a 2015 OS) gets Chrome updates. Those updates are fast, automatic, and happen transparently in the background, with no user input or downtime (e.g. no need to restart). They're also not arbitrarily delayed to fit into some rigid OS update release schedule, like on iOS or macOS.

That's especially important in this era where people are skeptical of updates, or just too busy, and keep delaying and skipping updates. Every time I use a family member's device, and check the version, I see they haven't updated in months. Many aren't even aware there are updates available.

I told the story before, but my grandma only has 4G, no Wi-Fi, and only has an iPhone, no laptop to download IPSWs from (and obviously average users can't do that); therefore she only gets iOS updates when I visit once or twice a year. Is Apple happy with that? Provocatively: is that the best they can do? Google proves it's not.

Sure, the Android OS update situation isn't great, but your biggest 0day exposures will be apps exposed to the internet (Chrome, chat apps), and those all get regular, transparent background updates with no user input; Android users are much safer for it than they'd be if Google imitated Apple's OS update strategy.

This is an Apple-wide problem, not just iOS. Anyone with OS X El Capitan 10.11 or later (again, a 2015 OS) is running the latest Google Chrome. Yet their Safari version is riddled with catastrophic 0day bugs (which can now be called 1,314-day bugs, since there have been 1,314 days since the last El Capitan security update).

Add to that the fact that Apple doesn't really support "n-2" OSes with security updates, as tech people say, since their unstated policy seems to be that only bugs which Apple thinks are "exploited in the wild" will be backported to any OS that isn't the very latest; most (not all) other known security patches are never backported.[0]

Apple's security stance is much worse than Microsoft's or Google's when comparing Apples to Apples.

[0]: https://www.intego.com/mac-security-blog/apples-poor-patchin...


There are advantages and disadvantages to bundling applications with the core OS; having these security bugs become part of the OS release vehicle (along with the heavyweight process that implies) seems like a disadvantage. With respect to the table, I think there’s a decent argument either way.


Do you think the faster response from Linux is due to the fact that it's OSS, or is corporate structure the reason for the longer response times from the big players?


What was the most serious vulnerability or set of vulnerabilities identified by Project Zero?


Spectre is probably the most interesting one. As of right now it isn't as critical as other things, since it's difficult to actually exploit, but its implications are gargantuan, similar to the development of return-oriented programming a while back.
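
(If you haven't read the paper: the core of Spectre variant 1 is surprisingly small. Below is a minimal C sketch of the well-known bounds-check-bypass pattern from the Kocher et al. write-up; the array names and the 4096-byte stride are just the conventional illustration, and a real attack still needs branch-predictor training plus a cache-timing probe to read anything back, which is the hard part.)

    #include <stdint.h>
    #include <stddef.h>

    uint8_t array1[16];          /* victim data; array1[x] goes out of bounds when x is mistrained */
    size_t  array1_size = 16;
    uint8_t array2[256 * 4096];  /* probe array: one cache-line region per possible byte value */

    void victim_function(size_t x) {
        /* If the branch predictor has been trained on in-bounds values of x,
           the CPU may speculatively execute the load below with an out-of-bounds x.
           The secret byte array1[x] then selects which region of array2 gets cached,
           and that footprint survives the squashed speculation, so an attacker can
           recover the byte later with a flush+reload style timing measurement. */
        if (x < array1_size) {
            volatile uint8_t y = array2[array1[x] * 4096];
            (void)y;
        }
    }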


I'm partial to the watering hole attack they found with TAG that had a bunch of browser 0days in it, personally:

https://googleprojectzero.blogspot.com/2021/01/introducing-i...


Is it meaningful to include "Linux" as a discrete vendor? How would you compare an OSS project to a company like Microsoft or Google?


The Linux kernel is a product and the Linux Foundation is its vendor. I would assume they mean them, especially since they put Red Hat and Canonical under "other".


The Linux Foundation is not the vendor for Linux. "Vendor" here refers to the Linux kernel community and how it acts when security issues are discovered. That is, if you want to be pedantic, it's Linus Torvalds and everyone under him.

It doesn't really matter that it's not a legal entity, just "whoever is responsible for fixing the bug and releasing the fix officially". In security we call open source projects vendors because they act as such just like Apple would.


Linus Torvalds works for the Linux Foundation? So if we're being pedantic, which I think you are, then if it's Linus Torvalds in his professional capacity, the vendor is the Linux Foundation.


Linus Torvalds does not answer to the Linux Foundation, and the Linux Foundation has little to do with how the Linux kernel handles security reports. It doesn't matter that they sponsor him; they aren't the "vendor" for Linux in any meaningful sense. They are just a nonprofit entity established to support Linux development in various ways.


The fact that you needed to qualify it with "meaningful sense", to me, means you know you're wrong. You're just being pedantic, and being pedantic and wrong at the same time is not a good look.


In the context of security policy, open source projects are "vendors". It doesn't matter that it's not a company. You only care about the result (when things are patched and released), not how it happens.

Of course with open source you can fix it yourself, but in this context the stats are about how upstream behaves.


Average days to fix: Linux at top, Oracle at the bottom.

Why am I not surprised?


Kudos for the TLDR. Interesting to note that vendors are taking bugs more seriously. Intelligence-wise, I wonder what P0 learns from the communication that takes place between P0 and the vendor, i.e. are there lots of questions asked over time, giving insight into thought processes, or does P0 see a massive data grab and then communication silence until the bug is fixed?

For all software & hardware vendors, this will help raise standards & revenue, but it will also raise the barrier to entry for new entrants, as sole developers or small teams will have to consider more than the basic function of their app/project/hw. Legislation like GDPR already means some projects may never get to fly today because the regulatory burden is too great, and the bug front is another maturing domain adding to that burden.



