Also, thank you, Tom, for your patience and for being responsible. Incidentally, I could not find your name in the Hall of Fame list.
 - http://www.computerworld.com/s/article/9239477/Google_engine...
 - http://www.zdnet.com/google-researcher-publishes-windows-zer...
 - http://nakedsecurity.sophos.com/2010/06/15/tavis-ormandy-ple...
 - http://www.computerworld.com/s/article/9177948/Google_resear...?
Tom was dealt with in nearly the same manner as Tavis describes his ordeal. Tavis immediately went ahead and published his exploit, but Tom gave them 2 weeks, even after persistently demonstrating the criticality of the issue.
The culture inside corporations is very different from the outward-facing one. Facebook's recent behavior could be partly excused by poor wording (though the onus was on FB), but what is the explanation here?
I am one of the co-founders of the Vulnerability Reward Program at Google. It's one of the longest-running and most generous programs of this kind: since 2010, we have paid out around $1M in rewards for more than 1,500 qualifying bug reports in web applications alone. We take great pride in keeping the process responsive, friendly, and hassle-free.
Of course, it takes just one bad experience to undo much of that. Tom's report is a valid issue. The reward panel - of which I am a member - decided that it did not meet the bar for a financial reward. I stand by this decision, but I think we should have been more forthcoming, precise, and responsive when communicating that. In other words, I think we messed up.
PS. If you ever run into any problems of this type - or just want a friendly soul to chat with - please do not hesitate to poke me at firstname.lastname@example.org :-)
We welcome all reports of security vulnerabilities, we try to fix them quickly, and we credit the researchers - but we offer rewards only for higher-impact flaws. You can check out this page for more info:
In this context, phishing issues are tricky. Because many of our products simply have to do things such as displaying snippets of potentially attacker-controlled text and multimedia, we try to evaluate phishing concerns on a case-by-case basis. In essence, we ask ourselves how easy it would be to exploit a particular behavior to mount a convincing attack.
My take on this bug is that the attack vector is severely constrained in well-behaved e-mail clients; and that in badly-behaved clients, the existing exposure is already considerably worse than any incremental hazard caused by this flaw. It's valid and worth fixing - but does not quite meet the bar for the reward tiers set up for higher-impact bugs.
Here, we handled the communications poorly, and I think it's OK to call us out on that. In fact, I think it would be wrong to offer a reward in hopes of buying silence from the reporter :-)
The phishing attack I described in my blog post affects all e-mail clients that are able to render HTML and CSS. As for rendering remotely included CSS, this was not necessary, as one might as well include a <style> element.
If you are referring to just GMail as a well-behaved e-mail client, you are most likely correct that it wouldn't be possible to create a legit-looking phishing e-mail (as GMail only allows inline styles). I think that most other e-mail clients allow the use of <style> or <link> in e-mails. The screenshot of the "phishing e-mail" in the blog post came from Mail.app (version 6.5).
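To make the `<style>` point concrete, here is a minimal sketch using Python's standard email library (the payload text is purely hypothetical): a `<style>` block travels inside the message body itself, so any client that renders HTML with embedded CSS applies it without ever fetching a remote stylesheet.

```python
from email.mime.text import MIMEText

# Hypothetical payload: the <style> element is carried in the message
# body, so no remote CSS fetch is needed for it to take effect in a
# client that renders HTML+CSS.
html_body = """\
<style>
  .real-content { display: none; }     /* hide the legitimate text */
  .fake-login  { font-family: sans-serif; }
</style>
<div class="real-content">Your Scholar citation alert ...</div>
<div class="fake-login">Please re-verify your account ...</div>
"""

msg = MIMEText(html_body, "html")
msg["Subject"] = "Scholar Alert"  # illustrative header only

print(msg.get_content_type())  # text/html
```

The point is only that blocking remote stylesheet loads (as GMail does) is not enough on its own; a client must also sanitize or ignore embedded `<style>` elements.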
In this case a bug was filed, but it took some prodding to get it fixed.
1. I was not complaining that I did not get paid for this bug, but that different Bug Bounty programs have different thresholds on whether a vulnerability qualifies for a monetary reward. This seems like useful information for someone who would like to make a living out of bounty-hunting.
2. Even though I participate in these bounty programs as a hobby, about 1-2 days per month, I was already awarded generously. If I did this full-time, I would expect to earn a great deal more than I would working directly for Google (unless they have crazy wages).
I once reported a Chrome bug because the browser crashed when I tried to load a 65536x65536 bitmap image. Since it was a crash, I of course claimed it was a security issue, in the hope that that was enough to get a reward. They didn't accept that, of course, but it does make me think the other side of this issue may be that Google is now receiving so many of these reports that they are unable to properly evaluate them all and are applying the "HR" solution (employ someone underqualified explicitly to fob off as many people as possible so that only the super-qualified get through).
As for the Chrome report you mention, if you provide the bug ID I can check. However, given your description, it seems very likely that the bug was closed because you were observing the intended behavior of terminating the renderer process on an out-of-memory condition. The bug reporting form links to guidelines on reporting security bugs, and explains why this specific case is not a security issue:
And to be very clear, Chrome's process isolation is far from security theater. The sandbox prevents renderer processes from accessing or manipulating any system state directly. It's a hard security boundary that's been extremely effective in preventing exploits. It also allows Chrome to entirely isolate different classes of renderer processes from each other, such as those used for Web content versus filesystem, extensions, apps, or system settings.
That said, we're still working on improving Web content isolation, because our final goal is to entirely isolate different origins within the same class. But doing so is far beyond anything attempted in a production browser before, and entails a massive engineering effort. The team working on that has various bugs you can follow in our tracker, and a public design document:
You are thinking inside of "the box" where the only resource manager is the OS and the only resource in question is virtualized memory. Yes: I entirely agree with the security advantages of letting OOM kill a process. My argument is that there is something wrong if my web browser allows untrusted code to attempt to load a 16GB bitmap (65,536 x 65,536 at 4 bytes per pixel) and this is simply allowed to crash a large number of unrelated tabs, including my e-mail client (which has now died on me way too many times due to other tabs).
Heap memory is what triggers the kill, but the allocation of heap memory from the OS for this bitmap is not the semantic problem here: there should be other resource limits that apply much earlier and keep a tab from ever loading a 16GB bitmap. Some of these limits the browser is already in a position to track (specifically, the amount of space available in the disk cache). It isn't about recovering from malloc(16GB): it is about avoiding that malloc in the first place.
My argument, then, is that the browser is a virtual machine that allows largely untrusted, Turing-complete code to run on my computer. These programs have tons of limits associated with them: one of them should be a VM object heap space limit, exactly as Java applets have. Java doesn't die when the OS runs out of memory; it kills itself when it hits its own heap space limit. There may still be ways that the process can end up exhausting available RAM, and in those cases, yes: the OS should still kill the process.
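A minimal sketch of the kind of self-imposed limit I mean (the `HeapBudget` class and its numbers are hypothetical, not anything Chrome actually implements): the VM refuses the allocation up front, in the spirit of a Java heap cap, rather than letting the OS OOM-killer take the whole process down later.

```python
class BudgetExceeded(Exception):
    """Raised when a page asks for more memory than its budget allows."""

class HeapBudget:
    # Hypothetical per-page resource budget: deny the allocation itself
    # instead of letting the OS kill the process after the fact.
    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.used = 0

    def allocate(self, nbytes):
        if self.used + nbytes > self.limit:
            raise BudgetExceeded(
                f"request for {nbytes} bytes exceeds budget of {self.limit}")
        self.used += nbytes
        return bytearray(nbytes)  # stand-in for the real allocation

budget = HeapBudget(limit_bytes=512 * 1024 * 1024)   # 512 MiB per page
bitmap_bytes = 65_536 * 65_536 * 4                   # 16 GiB RGBA bitmap
try:
    budget.allocate(bitmap_bytes)
except BudgetExceeded:
    print("bitmap rejected; page survives, OS never involved")
```

The interesting design choice is that the failure is synchronous and attributable to one page, so only that page needs to handle (or be killed for) the denial.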
But for all those web pages that have a lot of large data assets, like images, this problem should be solved by the disk cache: resource limits should cause images to be unloaded from RAM and potentially pinned in the disk cache (so that they can't be deleted until that tab is closed). If the disk cache finally can't take the pressure and needs to delete resources being used by tabs (resources that can't just be yanked due to HTML semantics), then the page (not the tab) can be sacrificed.
The entire tab should not crash because one page wants to load a very, very large bitmap, much less a bunch of unrelated tabs. This is not a "crash" scenario: this is a virtual machine that had some of its data evicted. The browser already models individual web pages in ways that let it throw them away as a group: if the disk cache refuses to store something, that machine should be killed, much like the OS kills processes that want memory that can't be backed by swap.
Even without such amazing resource limits, the way a massive resource should be handled is that it gets streamed to the disk cache (and if the disk cache refuses to hold it, it should simply be denied) and then memory-mapped (so its pages page back to the file and don't cause memory pressure). There is simply no reason why the browser should ever be trying to allocate 16GB of private memory for the purpose of loading a bitmap: that's clearly far on the other side of absurd ;P.
To be clear: I am not saying "Chrome should already be doing all of this"; I am simply arguing that claiming this is impossible, or even impractical, is wrong, and that the current solution doesn't seem to be helping (despite the widespread belief, and even occasional claim, that it does).
> As for how Chrome's process sharing works, it's opener based for Web content. ... This means the process is shared in cases where the HTML standard explicitly requires that the child browsing context be able to navigate the parent opener or frame. So, at that point, required Web functionality achieves an equivalent to the DoS you mention.
Which means that if I have an attack against Chrome and I want access to your e-mail data, I simply get you to click a link that opens your e-mail client with target="new". It will almost certainly be running in the same process, and then I use my exploit, steal your data, and upload it back to my server. The result of this way of dividing processes is that the attacker has nearly complete control over which websites will be in the same process when it comes time to exploit Chrome.
> Process sharing can also be triggered when resource limits are reached or during page transitions, but that's rarely a significant factor in practice and not controllable by a Web site.
The fact that I can't control this isn't terribly important, because I already have nearly complete control over process placement per the previous paragraph. However, for completeness, I will point out that this slightly decreases the probability of the direct attack "steal data from saurik's e-mail", but does nothing to mitigate the more general attack "steal sensitive information from saurik", because resource limitations mean that sensitive websites end up distributed through every single tab process over time. You can't be guaranteed to hit my e-mail client (as in the previous paragraph), but you can get something juicy from any of my processes.
> And to be very clear, Chrome's process isolation is far from security theater. ... It also allows Chrome to entirely isolate different classes of renderer processes from each other, such as those used for Web content versus filesystem, extensions, apps, or system settings.
Sure. I am only talking about the separation of tabs into multiple processes. Having actual privilege separation, where certain types of things can't be done by just any process, is certainly advantageous. However, having my tabs in separate processes, when the tabs have control over what web page content is in that process, and can even open new tabs that are associated with the same process, is "security theater": it is billed as a security feature, but it is barely a speed bump.
I'm not an expert on sandboxing browser tabs, but so far I haven't had any (memorable) experience where a rogue Chrome tab crashed the whole browser. Just yesterday someone posted a jsfiddle on HN that crashes a tab in Chrome, but the entire browser when opened in Firefox.
Chrome, however, also claims to isolate tabs from each other, so that one tab cannot affect the behavior of another tab; but, in practice, the processes behind my tons of tabs each end up rendering totally unrelated websites (everything from my e-mail client to 4chan), so that isn't actually offering me any advantage: it is still possible for a rogue website that can exploit only its rendering process to steal data from any other website that ended up in the same process.
This demonstrates exactly what's wrong with Google's interpretation of "openness". Sure, if one finds a Google insider, one may get information. Normally (read: in almost all other large Free Software projects), one could simply look at the bug tracker on one's own, without being at the mercy of Google.
You can go ahead and see for yourself:
You may want to reevaluate your policy, because it is incredibly short-sighted.
Phishing is one of the most effective attack vectors (e.g., a link to a page hosting a zero-day browser exploit), and even though it may not endanger data held by Google, it puts users at risk of ending up with malware that may steal much more than Google-held data.
It's baffling that tech firms seem to put people without strong technical aptitude in charge of incoming security reports.
I take it that people who find these vulns do it for fun, even if it's their job - if you don't have a contract to start looking for issues, there is no reason to do so other than fun. So the only reason people bother with 'responsible disclosure' is, as far as I can tell, that not doing so would damage their public persona. But it only got to that point because big vendors pushed the moral superiority of 'responsible disclosure' on us over the last decade. Back in the 1990s (when I was last sort of active in the scene), nobody would think of giving vendors weeks or months of time to fix their own damned bugs - if your PoC exploit worked at 3am (with real, working shellcode, none of that 'call ::MessageBox(NULL, "U got 0wned")' nonsense), you'd post it to bugtraq at 3:15 so that you could see the responses when you got out of bed in the morning.
It's not about the hall of fame. It's about making the web a safer place.
Plus - the fastest way of getting a vuln fixed is by having it out in the open, hopefully so publicly that the affected company's PR department needs to get involved. That at least incentivizes them to proactively look for issues, rather than set up a security@ alias, hire some well-known names from the scene to fix the issues sent there and calm the pocket protector crowd when shit hits the fan (cough, ctrl-f this page for examples), and pay reporters a fraction of the market value of their work (if anything at all).
But in what way is responsible disclosure not looking out for oneself? Even if you don't care about the potential reputation hit of disclosing irresponsibly, how are you better off by not waiting a couple of weeks or so?
Instead of handing out soup to a hundred folks, they help the entire internet.
Good deeds don't really need justification.
The money is a secondary issue IMO. For some it is encouragement to try again and potentially find something else useful to submit, for others it is an alternative to flogging the exploit for more on relevant forums (though without the right contacts I expect getting good money this way is not as easy as some suggest). For others it is just a happy little bonus, they'd keep going anyway and continue to submit their findings but they're not daft enough to turn down a little cash if offered.
And of course the tertiary issue is that you are helping to improve the security (and reliability more generally) of a product that you yourself rely upon, and therefore want to see improved as far as possible.
The average payout is not a terribly good measure, though: people aren't aiming for the average, and the payout for critical issues is much higher. There are quite a few awards for minor issues which are easier to find (and are sometimes found by relatively effort-free, semi-automated methods), which skews the average and makes the effort of finding one of those critical issues look less rewarding than it actually could be.
FWIW, the average amount is sort of meaningless: we pay between $3k and $20k for high-impact bugs.
There are also some researchers who prefer quantity over quality, and go after low-hanging fruit in acquisitions and isolated, non-sensitive services - often using custom automated tools. These findings usually pay around $100, skewing the average.
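The skew is easy to see with made-up numbers (purely illustrative, not real payout data): a pile of $100 awards drags the mean far below the $3k-$20k band for high-impact bugs, and pushes the median to the low end entirely.

```python
from statistics import mean, median

# Hypothetical payout distribution: many $100 low-hanging-fruit awards
# plus a handful of high-impact rewards in the $3k-$20k band.
payouts = [100] * 40 + [3133.70] * 4 + [20000] * 1

print(round(mean(payouts)))  # 812  - nowhere near the high-impact tier
print(median(payouts))       # 100  - the typical award is the small one
```

So neither the mean nor the median tells a would-be full-time hunter what a critical find is actually worth.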
But what the heck is the justification for deciding it's a security vulnerability that needs to be fixed only when the guy says he's going to advertise it publicly? What the hell is that?
If he had sold it privately, without telling Google, instead of letting them know he'd be advertising it publicly -- then it still wouldn't be worth fixing?
1. Additionally, as you said you can register google-scholar.com and phish from there as well.
Yes, you can do it that way too, but here the link to that domain would have been delivered through a Google-sanctioned email. Driving traffic to that domain on its own vs. via a phishing email exploiting the author's vulnerability would have a very different impact.
2. we do not believe that there is a security sensitive change that needs to be done here.
Wow. Security researchers at corporations, professionally working to secure products. This vulnerability was passed on to Kevin after being vetted by Aleksandr. Maybe the team works on more potent vulnerabilities.
3. Since these emails are sent by the Google Scholar back-end, through Google’s legitimate servers, such a phishing mail would be more likely to bypass email client spam filters (compared to, say, an attacker registering google-scholar-hax.net and sending phishing emails from there).
4. The affected emails would still have Google’s DKIM email signature, proving that it was sent by Google (which would be impossible from a custom attacker-controlled domain). In other words, an attacker could send an email containing any HTML he wants, to any academic email address, and Google still puts a stamp on it saying “this is legit”.
All Google is doing is damaging its reputation.
I think this means Kevin said they are not going to change the code on their end.
Also, author stated: "Even if this vulnerability doesn’t qualify for a reward, I strongly believe that it should be fixed promptly to protect end users."
Do I read the intent behind this correctly? Please do correct me. Because, to be frank, my labmate and I are peeved right now about this. Google Scholar is a functional way to spread awareness about one's own research, but Google's response got on my nerves pretty badly.
It is a script that allows one to control the behavior (in a limited, yet authentic, way) of a Google web property.
Why did they say that this was not a security sensitive issue? And why did the tone change upon seeing public disclosure looming?
Anyway, I am not happy about the handling of this situation. Good on Tom for being responsible and keeping the exploit out of irresponsible hands; otherwise, within a day, everybody on Google Scholar could have gotten this exploit email.
He could have entered ANY email address into the "change my email to" form - and Google would send an email there.
With suitable preparation work on his profile name, he could essentially have Google send a custom crafted HTML email to his intended victim - complete with verified domain sender information (and hiding the actual 'click here to verify your email' link so that the email address never actually gets changed).
Getting Google to send phishing emails on your behalf is a pretty big deal.
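The mechanics can be sketched like this (the template and field names are hypothetical; this is not Scholar's actual code): if the profile name is interpolated into the verification email without escaping, attacker-supplied HTML rides along on Google's authenticated mail, while a single escaping call renders it inert.

```python
import html

# Hypothetical verification-email template, in the spirit of the bug:
TEMPLATE = "<p>{name} wants to verify this address.</p>"

attacker_name = '<a href="https://evil.example/login">Re-verify your account</a>'

# Vulnerable: raw interpolation turns the "name" into live markup.
vulnerable = TEMPLATE.format(name=attacker_name)

# Fixed: escaping renders the payload as inert, visible text.
fixed = TEMPLATE.format(name=html.escape(attacker_name))

print("<a href=" in vulnerable)  # True: injected link is live HTML
print("<a href=" in fixed)       # False: escaped to &lt;a href=...
```

The DKIM point above is what makes this worse than ordinary phishing: the injected markup arrives in a message that cryptographically verifies as Google's.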
or yes, as pudquick put it more nicely :)
His bug allowed him to inject links into verification emails sent by Google Scholar. He claimed that he could inject CSS links too, but that didn't make this problem any worse. Why? Because it's up to mail clients to load the linked CSS stylesheets or not. Gmail, for example, would never load those remote CSS files. If your webmail client does that, it's time to switch to a better one.
So he could inject links, which is annoying, but still a very minor issue. It may make phishing a bit easier, but, you know, phishing has always worked against the average Joe if you try hard enough. That means this problem doesn't really give an attacker anything he couldn't achieve by himself.
Disclaimer: I'm a member of the team that handles VRP.
Academic users constitute an audience that's quite often bound to use a single, potentially-outdated webmail system. In our experience, academic/University users are utilizing what we consider "nonstandard" email systems with an order of magnitude more regularity than nonacademic users. You may wish to better understand the audience of the product before asserting that they should "switch".
And claiming something is not a security issue because you're trusting a client you don't control to behave well is naive at best, and apologist at worst. It's genuinely upsetting to hear coming from someone who classifies vulnerabilities.
This particular comment thread was mostly about webmail clients. But to your specific question... take a look at the link for an incomplete list of email clients that run JS.
As long as any field allows enough chars to:
....anything is possible from there.