Hacker News
Why you should not trust emails sent from Google (vagosec.org)
353 points by tomvangoethem on Sept 11, 2013 | 94 comments



Now compare this with the attitude of their security researcher, Tavis Ormandy, who bashed Microsoft's lackadaisical approach to fixing bugs and has publicly published 0-days twice [1, 2, 3, 4]. Google only moved out of fear of public disclosure, and that despite the researcher being meticulous and patient.

Also, thank you, Tom, for your patience and for being responsible. I could not find your name in the Hall of Fame list.

[1] - http://www.computerworld.com/s/article/9239477/Google_engine...

[2] - http://www.zdnet.com/google-researcher-publishes-windows-zer...

[3] - http://nakedsecurity.sophos.com/2010/06/15/tavis-ormandy-ple...

[4] - http://www.computerworld.com/s/article/9177948/Google_resear...?


That was not done by a "Google Researcher". That was done by an independent human who happens to also work at Google. And there are valid arguments in favor of releasing 0day.


And what about Google's dismissive attitude here? The same attitude that Tavis got fed up with. The motive was to highlight just that. Tavis bashed MS because he was riled up by their attitude, perhaps believing the culture at Google around patching critical bugs was different and moved fast. Well then, look no further than Tom's post.

Tom was dealt with in nearly the same manner as Tavis describes his ordeal. Tavis immediately went ahead and published the exploit, but Tom gave them 2 weeks, even while persistently demonstrating the criticality of the exploit.

The culture inside corporations is very different from the outward-facing one. Facebook's recent behavior could be partly excused by poor wording (though the onus was on FB), but what is the explanation here?


Funny that you're so quick to trust such clueless reports by people who have no insight into what happened.


Hey folks,

I am one of the co-founders of the Vulnerability Reward Program at Google. It's one of the longest-running and most generous programs of this kind: since 2010, we have paid out around $1M in rewards for more than 1,500 qualifying bug reports in web applications alone. We take great pride in keeping the process responsive, friendly, and hassle-free.

Of course, it takes just one bad experience to undo much of that. Tom's report is a valid issue. The reward panel - of which I am a member - decided that it did not meet the bar for a financial reward. I stand by this decision, but I think we should have been more forthcoming, precise, and responsive when communicating that. In other words, I think we messed up.

PS. If you ever run into any problems of this type - or just want a friendly soul to chat - please do not hesitate to poke me at lcamtuf@google.com :-)


Tom says of monetary compensation: "Even if this vulnerability doesn’t qualify for a reward, I strongly believe that it should be fixed promptly to protect end users." But then, you guys are experienced, so maybe you know more about the impact; 1,500 bug reports surely earns you that (and it shows :p). I would like to hear your side of the story.


Tom Van Goethem makes a strong case that this is a security vulnerability and it does deserve an award. Can you explain why not?


I think it's a valid security bug report.

We welcome all reports of security vulnerabilities, we try to fix them quickly, and we credit the researchers - but we offer rewards only for higher-impact flaws. You can check out this page for more info:

http://www.google.com/about/appsecurity/reward-program/

In this context, phishing issues are tricky. Because many of our products simply have to do things such as displaying snippets of potentially attacker-controlled text and multimedia, we try to evaluate phishing concerns on a case-by-case basis. In essence, we ask ourselves how easy it would be to exploit a particular behavior to mount a convincing attack.

My take on this bug is that the attack vector is severely constrained in well-behaved e-mail clients; and that in badly-behaved clients, the existing exposure is already considerably worse than any incremental hazard caused by this flaw. It's valid and worth fixing - but does not quite meet the bar for the reward tiers set up for higher-impact bugs.


So chuck him a C note and move on. I don't think it's worth the bad PR to quibble over what is clearly a security bug, no matter how minor. HTML injection is sort of like the bike shed of security vulnerabilities: every web developer understands it, so you'll get a perverse amount of attention and discussion on it.


In essence, we have a reward structure that we think is internally consistent, attracts the right sorts of research, and makes an optimal use of our resources - and we try to apply it fairly.

Here, we handled the communications poorly, and I think it's OK to call us out on that. In fact, I think it would be wrong to offer a reward in hopes of buying silence from the reporter :-)


I don't think that giving me some money would have kept me from writing this blog post. The main issue here is that it was not recognized as security sensitive, and would most likely not have been fixed if I hadn't insisted on it.


How would you describe a well-behaved e-mail client with regards to this vulnerability?

The phishing attack I described in my blog post affects all e-mail clients that are able to render HTML and CSS. As for rendering remotely included CSS, this was not necessary, as one might as well include a <style> element.

If you are referring to just Gmail as a well-behaved e-mail client, you are most likely correct that it wouldn't be possible to create a legit-looking phishing e-mail (as Gmail only allows in-line styles). I think that most other e-mail clients allow the use of <style> or <link> in e-mails. The screenshot of the "phishing e-mail" in the blog post came from Mail.app (version 6.5).

I intentionally did not classify this vulnerability as "Cross-Site Scripting", although XSS vulnerabilities also rely on injecting HTML content, as the main impact here was not the execution of Javascript code in the user's e-mail client, but rather changing the visual output of an e-mail so it can be used for phishing.
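The injection class Tom describes can be illustrated with a minimal sketch (the `build_alert_email` function and the payload below are hypothetical, not Google Scholar's actual code): the back-end interpolates attacker-controlled text into the HTML e-mail body without escaping, so an attacker can smuggle in a `<style>` element that restyles the whole message.

```python
import html

# Hypothetical sketch of the bug class: a back-end builds an HTML e-mail
# by interpolating attacker-controlled text (here, a paper title)
# without escaping it first.
def build_alert_email(title):
    # Vulnerable: the title is dropped straight into the markup.
    return "<p>New citation for: " + title + "</p>"

def build_alert_email_safe(title):
    # Fixed: escaping turns markup characters into inert entities.
    return "<p>New citation for: " + html.escape(title) + "</p>"

# An attacker-chosen "title" that closes the paragraph and injects a
# <style> block restyling the entire mail.
payload = '</p><style>body { background: white; }</style><p>Dear user, ...'

vulnerable = build_alert_email(payload)
safe = build_alert_email_safe(payload)

print("<style>" in vulnerable)  # the injected element survives intact
print("<style>" in safe)        # escaped version contains no live tags
```

Since the clients Tom mentions render `<style>` in message bodies, the injected block applies to the whole rendered e-mail, which is why this is a phishing issue rather than an XSS one.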


What happens in the gmail web interface?


One would assume when he says "well-behaved e-mail clients", he was including his own company's product.


This type of vulnerability can be used to aid phishing attacks, but it cannot be directly exploited by an attacker to obtain or modify user data. Phishing attacks are not listed as qualifying in the Program Rules (http://www.google.com/about/appsecurity/reward-program/), although they are evaluated as security issues on a case-by-case basis.

In this case a bug was filed, but it took some prodding to get it fixed.


Anyone participating in this program is a cheap employee if you look at it from Google's perspective.


Come on now. One guy is complaining and you made it look like a hacker sweatshop. If you don't like the bounty, don't participate. As simple as that.


Sure. A sweatshop with voluntary arrival and departure times.


It doesn't matter if you get paid or not: the amount is too small for the risk you are taking, unless it's a hobby. But I think that most of these hackers work for free and never get paid (and also don't get any of the benefits of working directly for Google).


Seems there is some misconception here:

1. I was not complaining that I did not get paid for this bug, but rather pointing out that different bug bounty programs have different thresholds for whether a vulnerability qualifies for a monetary reward. This seems like useful information for someone who would like to make a living out of bounty-hunting.

2. Even though I participate in these bounty programs as a hobby, which is about 1-2 days per month, I was already awarded generously. If I did this full-time, I would expect to earn a great deal more than I would working directly for Google (unless they have crazy wages).


No matter what he tried to explain they just kept replying that he didn't qualify for the reward. It sounds like they have become super defensive about acknowledging bugs because the reporter will immediately try to claim a reward. If so, it's the exact opposite of the intent of the program.

I once reported a Chrome bug because it crashed when I tried to load a 65536x65536 bitmap image. Since it was a crash I, of course, claimed it was a security issue, in the hope that was enough to get a reward. Of course, they didn't accept that, but it does make me think the other side of this issue may be that Google is now receiving so many of these that they are unable to properly evaluate them all and are applying the "HR" solution (employ someone underqualified explicitly to fob off as many people as possible so that only super-qualified candidates get through).


I can assure you that Google takes VRP reports very seriously. Two members of Google security (f- and adobkin) have provided context elsewhere in this thread on the bug described in the post.

As for the Chrome report you mention, if you provide the bug ID I can check. However, given your description, it definitely seems like the bug was closed because you were observing the intended behavior of terminating the renderer process on an out-of-memory condition. The bug reporting form links to guidelines on reporting security bugs, and explains why this specific case is not a security issue: http://www.chromium.org/Home/chromium-security/reporting-sec...


I shouldn't be able to terminate the rendering process, which contains the contents and information of numerous tabs, by just loading a large image into it: it should realize the image is too large and stop loading it... otherwise that's a denial-of-service attack you can use against someone and potentially cause them data loss from one of the other tabs in that process. (That Chrome shares processes between lots of tabs, by the way, was a massive disappointment after the original video that made it sound like tabs would all be isolated... it frankly is largely a worthless token show of security theatre to have things separated the way Chrome ended up doing.)


Attempting to recover from OOM is almost always a bad idea, which is why Chrome terminates processes by default on OOM. To underscore this point, I should note that I've found a number of serious vulnerabilities resulting from applications attempting to recover from OOM. Here's a detailed writeup of one I found in Firefox several years ago: http://blogs.iss.net/archive/cve-2008-0017.html

As for how Chrome's process sharing works, it's opener based for Web content. That is, if one browsing context opens other browsing contexts (either via iframes or popups) where the opened context is retained by the opener, then they will run in the same process in order to preserve resources and the relationships required by Javascript. This means the process is shared in cases where the HTML standard explicitly requires that the child browsing context be able to navigate the parent opener or frame. So, at that point, required Web functionality achieves an equivalent to the DoS you mention. Process sharing can also be triggered when resource limits are reached or during page transitions, but that's rarely a significant factor in practice and not controllable by a Web site.

And to be very clear, Chrome's process isolation is far from security theater. The sandbox prevents renderer processes from accessing or manipulating any system state directly. It's a hard security boundary that's been extremely effective in preventing exploits. It also allows Chrome to entirely isolate different classes of renderer processes from each other, such as those used for Web content versus filesystem, extensions, apps, or system settings.

That stated, we're still working on improving Web content isolation because our final goal is to entirely isolate different origins in the same class. But doing so is far beyond anything attempted in a production browser before, and entails a massive engineering effort. The team working on that has various bugs you can follow in our tracker, and a public design document: http://www.chromium.org/developers/design-documents/site-iso...


> Attempting to recover from OOM is almost always a bad idea, which is why Chrome terminates processes by default on OOM.

You are thinking inside of "the box" that the only resource manager is the OS and the only resource in question here is virtualized memory. Yes: I entirely agree with the security advantages of letting OOM kill a process. My argument is that there is something wrong if my web browser allows untrusted code to attempt to load a 16GB bitmap (65,536 x 65,536 RGBA) and that this is simply allowed to crash a large number of unrelated tabs, including my e-mail client (which has now died on me way too many times due to other tabs).

Heap memory is what is causing the kill, but it isn't the allocation of the heap memory from the OS for this bitmap that is the semantic problem in this situation: there should be other resource limits that apply much earlier that keep a tab from loading a 16GB large bitmap; some of these resource limits I will even say the browser already is in a position to track (such as specifically, the amount of space available in the disk cache). It isn't about recovering from malloc(16GB): it is about avoiding that malloc.
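The pre-allocation check being argued for here can be sketched in a few lines (the 256 MB cap and the function names are illustrative assumptions, not anything Chrome actually implements): the decoder inspects the declared header dimensions and refuses the image before the giant buffer is ever requested.

```python
# Illustrative sketch of a "check before malloc" resource limit: the image
# header declares its dimensions, so the decoded-buffer size is known
# before any allocation happens. The cap below is arbitrary, purely for
# demonstration.
MAX_DECODED_BYTES = 256 * 1024 * 1024  # 256 MB per-resource budget

def decoded_size(width, height, bytes_per_pixel=4):
    # Size of the uncompressed RGBA pixel buffer the decoder would need.
    return width * height * bytes_per_pixel

def should_decode(width, height):
    # Reject the resource up front instead of attempting the allocation
    # and relying on an OOM kill afterward.
    return decoded_size(width, height) <= MAX_DECODED_BYTES

# A 65536x65536 RGBA bitmap would need 16 GiB once decoded, so it is
# refused without ever touching the allocator.
print(should_decode(1920, 1080))    # True: ~8 MB, well under the cap
print(should_decode(65536, 65536))  # False: 17,179,869,184 bytes
```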

My argument then is that the browser is a virtual machine that is allowing largely-untrusted Turing-complete code to run on my computer. These programs have tons of limits associated with them: one of them should be a VM object heap space limit, exactly as Java applets have. Java doesn't die when it hits OOM; it kills itself when it hits its heap space limit. There may still be ways that the process can end up exhausting available RAM, and in those cases, yes: the OS should still kill the process.

You might think this is limiting ("but what about my amazingly great and large webapp?"), but it isn't: most web pages people browse to that use a lot of memory are really just using very large images. If they actually have a large number of JavaScript objects in play they should probably have to get a memory limit increase warning, exactly like they do if they want more HTML5 localStorage. Such large applications might also be Chrome Apps anyway, and can have manifest configuration.

But for all those web pages that have a lot of large data assets, like images, this problem should be being solved by the disk cache: resource limits should cause images to be unloaded from RAM and potentially pinned in the disk cache (so that they can't be deleted until that tab is closed). If the disk cache finally can't take the situation and needs to delete resources being used by tabs (that can't just be yanked due to HTML semantics), then the page (not the tab) can be sacrificed.

Not even the tab itself should crash because one page wants to load a very, very large bitmap, much less a bunch of unrelated tabs. This is not a "crash" scenario: this is a virtual machine that had some of its data evicted. The browser already models individual web pages in ways where it can throw them away as a group: if the disk cache refuses to store something, that machine should be killed, much like the OS kills processes that want memory that can't be backed by swap.

Even without such amazing resource limits, the way a massive resource should be handled is that it gets streamed to the disk cache (and if the disk cache refuses to hold it, it should just be denied) and then memory mapped (so the pages for it will page back to the file, and don't cause memory pressure). There is simply no reason why the browser should ever be trying to allocate 16GB of private memory for purposes of loading a bitmap: that's clearly far on the other side of absurd ;P.

To be clear: I am not saying "Chrome should already be doing all of this"; I am simply arguing that claiming this is impossible, or even impractical, is wrong, and that the current solution doesn't seem to be helping (despite the widespread belief, and even occasional claim, that it does).

> As for how Chrome's process sharing works, it's opener based for Web content. ... This means the process is shared in cases where the HTML standard explicitly requires that the child browsing context be able to navigate the parent opener or frame. So, at that point, required Web functionality achieves an equivalent to the DoS you mention.

Which means that if I have an attack against Chrome and I want access to your e-mail data, I simply get you to click a link that opens your e-mail client target="new". It will almost certainly be running in the same process, and then I use my exploit, steal your data, and upload it back to my server. The result of this way of dividing processes means that the attacker has nearly complete control over which websites will be in the same process when it comes time to exploit Chrome.

> Process sharing can also be triggered when resource limits are reached or during page transitions, but that's rarely a significant factor in practice and not controllable by a Web site.

The fact that I can't control this isn't terribly important, because I have complete control over process isolation from the previous paragraph. However, for completeness, I will point out that this decreases (slightly) the probability of the direct attack "steal data from saurik's e-mail", but does nothing to mitigate the more general attack "steal sensitive information from saurik", because the result of resource limitations means that sensitive websites end up distributed through every single tab process over time. You can't be guaranteed of hitting my e-mail client (as in the previous paragraph), but you can get something juicy from any of my processes.

> And to be very clear, Chrome's process isolation is far from security theater. ... It also allows Chrome to entirely isolate different classes of renderer processes from each other, such as those used for Web content versus filesystem, extensions, apps, or system settings.

Sure. I am only talking about the separation of tabs into multiple processes. Having actual privilege separation, where certain types of things can't be done by just any process, is certainly advantageous. However, having my tabs in separate processes, when the tabs have control over what web page content is in that process, and can even open new tabs that are associated with the same process, is "security theater": it is billed as a security feature, but it is barely a speed bump.


> That Chome shares processes between lots of tabs, by the way, was a massive disappointment after the original video that made it sound like tabs would all be isolated

I'm not an expert on sandboxing browser tabs, but so far I haven't had any (memorable) experience where a rogue Chrome tab crashed the whole browser. Just yesterday someone posted a jsfiddle on HN [0] that crashes tabs in Chrome, but crashes the entire browser when opened in Firefox.

[0] https://news.ycombinator.com/item?id=6358727


There is a difference between separating "web page rendering" from "browser UI" or "networking" and the kind of tab separation I am discussing: those are privilege separations, which at their bare minimum mean that when some JavaScript crashes, it doesn't take down the UI. This is both a functionality and a security benefit that I did not and will not argue against.

Chrome, however, also claims to isolate tabs from each other, so that one tab cannot affect the behavior of another tab; but, in practice, I have tons of tabs that all have ten totally unrelated websites rendering in them (everything from my e-mail client to 4chan), so that isn't actually offering me any advantage: it is still possible for rogue websites that are able to exploit only their rendering process to steal data from any other website that ended up in the same process.


> As for the Chrome report you mention, if you provide the bug ID I can check.

This demonstrates exactly what's wrong with Google's interpretation of "openness". Sure, if one finds some Google insider, one may get information. Normally (read: in almost all other bigger Free Software projects), one could simply have a look at the bug tracker on one's own, without being at the mercy of Google.


I asked for the bug ID so that I could verify it was triaged properly, not because the report isn't public. Some reports (like security bugs) are initially private, as is typical in any open source project. However, security bug reports are made public as well at some point after they're resolved.

You can go ahead and see for yourself: https://code.google.com/p/chromium/issues/list


For example, the KDE security mailing list is definitely private, and some KDE bugtracker bugs are private as well.


Having reported a bug in Chromium myself that I thought was security related (but in the end was not - better safe than sorry), I think it's a reasonable decision to not show security related bug reports/tickets from the general public - you'd otherwise present all security holes in the browser to every evil guy in the world on a silver platter.


Not usually for reported security vulnerabilities


But I thought the point was that they didn't consider this to be a security vulnerability.


It appears that phishing-related vulnerabilities do not qualify for the reward program.

You may want to reevaluate your policy, because it is incredibly short-sighted.

Phishing is one of the most effective attack vectors (e.g., a link to a page with a zero-day browser exploit), and even though it may not endanger data held by Google, it puts users at risk of ending up with malware that may steal much more than Google-held data.


I recently reported a security issue in an API to a major tech vendor (not Google) and was shocked to get a reply from a "security analyst" at the firm who basically said it was a non-issue because it didn't occur when they went to the API url in their web browser.

It's baffling that tech firms seem to have people without strong technical aptitudes responsible for incoming security reports.


Why would you expect a reward for a bug just because it was a crash? I work in SVG and have reported a handful of crashes and never expected to qualify for their security reward.


Oh I absolutely didn't. I felt a bit bad afterwards for assigning it as a security bug. But that is really my point - when there's cash on offer people are apt to take a very liberal definition of what a security bug actually is.


Arbitrary content injection into signed emails from Google, and it's not a security risk??? Incredibly poor response from them. Props to the author for being patient and trying multiple times to convince them to actually fix it.


It's too bad that Google decided that he didn't deserve compensation for reporting this security vulnerability.


start a crowd-funded reward like reported at http://www.theinquirer.net/inquirer/news/2290380/crowdfunded...


I'd rather pressure Google into delivering on their promise. But then again, I don't use Gmail.


What does Gmail have to do with it? It was the way Google Scholar was building emails that was at fault, right?


Right. And Google Scholar is part of Google.


Here's an honest question: why do people still bother with the 'responsible disclosure' nonsense? What's in it for them? Days of work, weeks of waiting and frustration, for a 'mention' in some imaginary 'hall of fame'? $1mm over 1500 bugs, that's $666 / bug. That's about a day worth of work if your rates are low and you are in a low CoL area, or half a day or less if you work for Google.

I take it that people who find these vulns do it for fun, even if it's their job - if you don't have a contract to start looking for issues, there is no reason to do so other than fun. So the only reason people bother with 'responsible disclosure' is, as far as I can tell, because not doing so would damage their public persona. But it only got to that point because big vendors pushed the moral superiority of 'responsible disclosure' on us over the last decade. Back in the 1990's (when I was last sort of active in the scene), nobody would think of giving vendors weeks or months of time to fix their own damned bugs - if your PoC exploit worked at 3am (with real, working shell code, none of that 'call ::MessageBox(NULL, "U got 0wned") nonsense), you'd post it to bugtraq at 3:15 so that you could see the responses when you got out of bed in the morning.


I think it's fairly simple: if you get Google to accept that there's indeed a problem, they'll fix it and 100% of the possible victims of this exploit are now safe. If you disclose it non-responsibly, you are letting a pretty big crowd know there's a problem, while leaving a time window open for them to exploit it and pretty much assuring that someone will get "0wned". By disclosing responsibly, you are making sure that the least possible number of users are affected.

It's not about the hall of fame. It's about making the web a safer place.


Okay, maybe it says something about me as a person, but I can't quite understand that as a reason. I mean, I understand why people volunteer at a soup kitchen, but not this. Why would somebody go out to protect some unknown people on the internet from the mistakes of a for-profit company that is probably screwing over those same people, and themselves, in 100 other ways? I'm not anti-corporations or anything like that, but I do recognize that they are looking out for themselves (as is their right), I don't understand why everybody who deals with them in some way or another doesn't do that, too.

Plus, the fastest way of getting a vuln fixed is by having it out in the open, hopefully in a way so public that the affected company's PR department needs to get involved. That at least incentivizes them to proactively look for issues, rather than set up a security@ alias, hire some well-known names from the scene to fix issues sent there and calm the pocket-protector crowd when shit hits the fan (cough, ctrl-f this page for examples), and pay reporters a fraction of the market value of their work (if anything at all).


I suspect that for most security bugfinders, the people they are protecting are not the giant corporations; it's the grandmas whose computers will be pwned within a day or two of a 0-day hitting the open market. The number of people who read security blogs is minuscule compared to the number of people who use products with security flaws. You are not going to convince most of the latter to read vulnerability reports, but they're the ones who get hurt when a bug makes it out into the open unreported.


> I do recognize that they are looking out for themselves (as is their right), I don't understand why everybody who deals with them in some way or another doesn't do that

But in what way is responsible disclosure not looking out for oneself? Even if you don't care about the potential reputation hit of releasing non-responsibly, how are you better off by not waiting a couple of weeks or so?


It's a computerized version of the soup kitchen.

Instead of handing out soup to a hundred folks, they help the entire internet.

Good deeds don't really need justification.


This is more like volunteering to clean up broken glass from your local park than volunteering at a soup kitchen. The internet is a shared public space, and some of its inhabitants are better equipped to help improve it for everyone.


I think a lot of it is the kudos. If they are interested in security research (or just breaking stuff) either as a learning experience or a challenge, then they might be doing these things anyway. Once something is found, what do you do with it? File it and enjoy the inner glow of pride perhaps, but if someone like Google publicly acknowledges your achievement it becomes a validated success that goes on your CV and/or gets talked about in an interview as a measure of your knowledge/skill. Even ignoring the CV/interview: in some circles it is worth it just for the bragging rights.

The money is a secondary issue IMO. For some it is encouragement to try again and potentially find something else useful to submit, for others it is an alternative to flogging the exploit for more on relevant forums (though without the right contacts I expect getting good money this way is not as easy as some suggest). For others it is just a happy little bonus, they'd keep going anyway and continue to submit their findings but they're not daft enough to turn down a little cash if offered.

And of course the tertiary issue is that you are helping to improve the security (and/or reliability more generally) of a product that you yourself rely upon, and therefore want to see improved as far as is possible in terms of security and reliability.

The average pay-out is not a terribly good measure, though: people aren't aiming for the average, and the payout for critical issues is much higher. There are quite a few awards for more minor issues which are easier to find (and sometimes are found by relatively effort-free semi-automated methods), which skews the average, making the effort of finding one of those critical issues look less rewarding than it actually could be.


Motivations vary, and it's not for everyone - but we're quite happy with the participation we're getting out of it.

FWIW, the average amount is sort of meaningless: we pay between $3k and $20k for high-impact bugs.

There are also some researchers who prefer quantity over quality, and go after low-hanging fruit in acquisitions and isolated, non-sensitive services - often using custom automated tools. These findings usually pay around $100, skewing the total.


Maybe they can justify thinking it wasn't really a security vulnerability, or maybe they can say, hey, everyone makes mistakes, we didn't realize it was a security vulnerability.

But what the heck is the justification for deciding it's a security vulnerability that needs to be fixed only when the guy says he's going to advertise it publicly? What the hell is that?

If he had sold it privately, without telling Google, instead of letting them know he'd be advertising it publicly -- then it still wouldn't be worth fixing?


This peeved me a lot too, as everybody in my lab, in fact the whole research floor, has a Google Scholar account, including my professor. Phishing researchers through an email officially sent by Google, to researchers' official email addresses, would have had repercussions. Excerpts from the email exchange described in the article:

1. Additionally, as you said you can register google-scholar.com and phish from there as well.

Yes, you can do it that way too, but here the link to that domain would have been provided through a Google-sanctioned email. Driving traffic to that domain on its own vs. via a phishing email exploiting the author's vulnerability would have a very different impact.

2. we do not believe that there is a security sensitive change that needs to be done here.

Wow. These are security researchers at a corporation, professionally working to secure products. This vulnerability was passed on to Kevin after being vetted by Aleksandr. Maybe the team works on more potent vulnerabilities.


Some more points:

3. Since these emails are sent by the Google Scholar back-end, through Google’s legitimate servers, such a phishing mail would be more likely to bypass email client spam filters (compared to, say, an attacker registering google-scholar-hax.net and sending phishing emails from there).

4. The affected emails would still have Google’s DKIM email signature, proving that it was sent by Google (which would be impossible from a custom attacker-controlled domain). In other words, an attacker could send an email containing any HTML he wants, to any academic email address, and Google still puts a stamp on it saying “this is legit”.
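The DKIM point can be illustrated with a toy signing sketch; HMAC stands in for DKIM's actual RSA-over-canonicalized-message scheme, and the key and function names are hypothetical. The signer vouches for whatever body it assembled, so injected markup is authenticated right along with the legitimate content:

```python
import hashlib
import hmac

SERVER_KEY = b"toy-signing-key"  # stand-in for Google's private DKIM key

def sign_body(body):
    # The sender signs the body as assembled; it cannot distinguish
    # legitimate content from injected content.
    return hmac.new(SERVER_KEY, body.encode(), hashlib.sha256).hexdigest()

def verify(body, signature):
    # A receiving mail server checks the signature against the body it got.
    return hmac.compare_digest(sign_body(body), signature)

# A body containing injected attacker markup still gets a valid signature,
# so the receiver sees a message that verifiably came from the real sender.
injected_body = "<p>New citation</p><style>/* attacker CSS */</style>"
sig = sign_body(injected_body)
print(verify(injected_body, sig))  # True: the phishing mail verifies as genuine
```

This is exactly why content injection into a signed sender's outgoing mail is worse than phishing from an attacker-controlled domain: the signature actively vouches for the forged content.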


I don't understand this pervasive mentality among companies that run such cash-for-bugs schemes. Isn't the idea to encourage people to properly report bugs by rewarding them financially, thereby discouraging them from selling the details to the highest bidder?

All Google is doing is damaging its reputation.


I definitely commend the author for his work, but I think that there might have been a slight misunderstanding here. In his last email, the author talks about how public disclosure would "force" Google to fix the vulnerability. But I read Google's response as simply saying that they did not think the bug qualified for the program, not that they didn't intend to fix it. Then again, my reading is definitely influenced by my time at Google and how seriously my team took this sort of thing.


"we do not believe that there is a security sensitive change that needs to be done here." - Google's Kevin.

I think this means Kevin said they are not going to change the code on their end.

Also, author stated: "Even if this vulnerability doesn’t qualify for a reward, I strongly believe that it should be fixed promptly to protect end users."

Am I reading the intent behind this correctly? Please do correct me if not. Because, to be frank, my labmate and I are pretty peeved about this right now. Google Scholar is a practical way to spread awareness of your own research, but Google's response got on my nerves pretty badly.


I think that quote could also be interpreted as saying that Google simply didn't see it as a change that qualified for the program, not that they weren't planning on fixing it. But again, I could be entirely wrong; I have no experience with either the team or the program. I just have a hard time imagining any of the engineers that I know there getting assigned a bug like this and clicking 'working as intended - will not fix'.


Then I would ask why is this vulnerability not up for monetary compensation?

http://www.google.ca/about/appsecurity/reward-program/

It is a script that allows one to control the behavior (in a limited, but nonetheless authentic, way) of a Google web property.

Why did they say that this was not a security sensitive issue? And why did the tone change upon seeing public disclosure looming?

Anyway, I am not happy about the handling of this situation. Good on Tom for being responsible, so that the exploit did not wander into irresponsible hands. Otherwise, within a day, everybody on Google Scholar could have received this exploit email.


I think I might be missing something - as a Google service user, I'd have to update my own name to be Mr Test<!--BAD STUFF HERE in order to perform a phishing attack on myself?


What you're missing is that the email was generated and sent by Google to the new email address that the author of the article was changing his account to - and this was the email he was able to inject arbitrary HTML content into, including a link to a third-party domain.

He could have entered ANY email address into the "change my email to" form - and Google would send an email there.

With suitable preparation work on his profile name, he could essentially have Google send a custom crafted HTML email to his intended victim - complete with verified domain sender information (and hiding the actual 'click here to verify your email' link so that the email address never actually gets changed).

Getting Google to send phishing emails on your behalf is a pretty big deal.
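A minimal sketch of that bug class (all names, URLs, and the template here are made up for illustration, not Google's actual code): a verification-email template that interpolates the display name without HTML-escaping it, letting the attacker close the greeting, inject a link of their own, and open an HTML comment that swallows the real verification link.

```python
# Hypothetical sketch of the bug class: user-controlled display name
# interpolated into an HTML email without escaping.
VERIFY_TEMPLATE = (
    "<p>Hello {name},</p>"
    "<p>Please <a href='https://scholar.google.com/verify?t={token}'>"
    "verify your email address</a>.</p>"
)

def render_verification_email(name: str, token: str) -> str:
    # BUG: `name` is inserted verbatim, so any HTML in it is
    # interpreted by the recipient's mail client.
    return VERIFY_TEMPLATE.format(name=name, token=token)

# Attacker's profile name: inject a phishing link, then open an HTML
# comment that hides the genuine "verify" link from the victim.
evil_name = (
    "Mr Test</p>"
    "<p><a href='http://evil.example/login'>Re-enter your password</a></p>"
    "<!--"
)

email_html = render_verification_email(evil_name, "abc123")
assert "evil.example" in email_html  # attacker link is live
# The real verification link is commented out, so it never gets clicked
# and the address change is never confirmed:
assert email_html.index("<!--") < email_html.index("scholar.google.com")
```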


Chinese spammers are abusing Picasa galleries to spam people now. Couldn't find a way to report that and never heard back from them on Twitter. Marking the emails as spam doesn't stop more making it through. (Actually, marking as spam seems to do SFA.)


I just got spam via a Google calendar event. Basic 419, location in Syria and lots of weapons and money was involved. Followed the link through Sandboxie, ended up actually at Google Calendar, marked as spam and went on with my day.


It's not obvious from the article that ANY HTML entities will work -- i.e., go unescaped -- in the name display; the article is worded as if the vulnerability was only theorized, not physically tested. Perhaps there's just a very naive "regex" looking for "<[A-z]" as the start of a token before escaping, which won't pick up "<!". Granted, it's a flawed product, but the article did not describe testing or producing such a PoC, which may be why Google was unwilling to reward the reporter -- no security threat, only parts of an email could be commented out.
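For illustration, here's what such a speculative naive filter might look like, and why a comment opener sails through it (this is a guess at the bug class, not Google's actual sanitizer; note also that the character class `[A-z]` is itself suspect, since in ASCII it additionally matches the punctuation between 'Z' and 'a'):

```python
import re

# Naive sanitizer of the kind speculated above: escape '<' only when
# it looks like it starts a tag name.
def naive_escape(text: str) -> str:
    return re.sub(r"<(?=[A-Za-z/])", "&lt;", text)

# Ordinary tags get neutralized...
assert naive_escape("<b>bold</b>") == "&lt;b>bold&lt;/b>"
# ...but an HTML comment opener passes straight through:
assert naive_escape("Mr Test<!-- hidden") == "Mr Test<!-- hidden"
```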


A PoC was produced (see the part of the guy named `</a><a id=a href=http://goo.gl/CQtK5F><link rel=stylesheet href="http://tomvglabs.be/css.php">`), and was sent to the Google Security Team in the initial report.


Understood; from the article, I read "Here's an example of what an email [...] might look like" and understood it to be hypothetical. That's fairly damning, as it's not at all a tricky sequence -- standard, low-hanging "XSS."


I will agree with you here; I thought it was only conceptual as well, and that he hadn't verified the Google team wasn't stripping HTML entities in profile names, etc.

Considering that they missed one though and with the amazing things I've seen done with limited characters and Javascript, I would be surprised if it was not exploitable in some fashion.


Note that your HTML email client is responsible for preventing XSS from mail, so while this is a bit ugly, it's not a security issue comparable to XSS in a web page.


Not quite: if you were the attacker, you'd update your own name to `Mr Test<!--BAD STUFF HERE`, then request to change your e-mail to victim@some.edu and perform the phishing attack on the victim.

or yes, as pudquick put it more nicely :)


Wow. Give me a break, please. What the OP reported was a super minor issue, and he's already got what he deserves.

His bug allowed him to inject links into verification emails sent by Google Scholar. He claimed that he could inject CSS links too, but that didn't make the problem any worse. Why? Because it's up to mail clients whether or not to load linked CSS stylesheets. Gmail, for example, would never load those remote CSS files. If your webmail client does that, it's time to switch to a better one.

So he could inject links, which is annoying, but still a very minor issue. It may make phishing a bit easier, but you know what, phishing has always worked against the average Joe if you try hard enough. That means this problem doesn't really give an attacker anything he couldn't accomplish on his own.

Disclaimer: I'm a member of the team that handles VRP.


> If your webmail client does that, it's time to switch to a better one.

Academic users constitute an audience that's quite often stuck with a single, potentially outdated webmail system. In our experience, academic/university users run what we'd consider "nonstandard" email systems an order of magnitude more often than nonacademic users. You may wish to better understand the product's audience before asserting that they should "switch".

And claiming something is not a security issue because you're trusting a client you don't control to behave well is naive at best, and apologist at worst. It's genuinely upsetting to hear coming from someone who classifies vulnerabilities.


I didn't say that I trust mail clients. I said mail clients shouldn't accept arbitrary HTML markup and tags in emails. That's a serious problem that needs to be addressed, regardless of whether Google Scholar can be exploited to send emails with arbitrary links.


The onus is still on Google: the rule of the web is that any client you accept connections from is fair game, and you have to address it.


If you were using an email client that executes arbitrary HTML, you'd have been owned long ago anyway. That'd be like using a browser with no cross-domain security boundary. It's just not a realistic attack vector; these things don't exist. Or do you know an email client that actually interprets JS?


You don't need to execute JS in order to phish, as the original link alludes to with the HTML comment trick.

This particular comment thread was mostly about webmail clients. But to your specific question: see the link for an incomplete list of email clients that run JS.

http://en.wikipedia.org/wiki/Comparison_of_email_clients#Tem...


What? He could have just as easily used inline styles.


I don't see the author's name in the linked Honorable Mentions page. Did someone from Google pull it because they didn't like this blog post? Searched for "tom", "Mathias", and "vago". No recent results for any of these search terms.


I was expecting them to put my name up there by the time I posted the blog, should be there in the near future though...


Yes, but you also expected them to pay more attention to your exploit?


He's there now. Apparently they did a bulk update. It went from having 0 people named Tom on the whole page to having two people named Tom! https://www.google.com/about/appsecurity/hall-of-fame/distin...


You should not trust emails.


I thought it was going to be about emails sent through GMail, but good catch and well done to Tom.


It looks like your options for formatting the content are pretty limited, and you can't change the subject line or the preamble about Google Scholar, so you wouldn't be able to, say, masquerade as a password-recovery email or anything like that. Still, I personally feel like any content injection should be treated seriously.


I agree on the subject line (which is "Verify your email address") and that the mail is sent from "Google Scholar Citations", but as it was possible to include arbitrary CSS files, the content could be changed to pretty much anything... Although there was a length-restriction on the user's name, there were two other fields that weren't escaped properly and thus could be used to insert more HTML-content.


>Although there was a length-restriction on the user's name

As long as any field allows enough chars to:

<script src="http://evilbadpersondomain.com/forTheLulz.js"></script>

....anything is possible from there.
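To put rough numbers on it (the domains below are placeholders, not real attack infrastructure): even a tightly length-limited field can carry a complete external-resource reference.

```python
# Rough length check on classic injection payloads, to illustrate why
# even a short, length-limited field can be enough. Placeholder domains.
script_payload = '<script src="//x.xx/a.js"></script>'
style_payload = '<link rel=stylesheet href="//x.xx/a.css">'

# Both fit comfortably within typical display-name length limits:
assert len(script_payload) <= 40
assert len(style_payload) <= 45
```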


There are email clients that execute JavaScript?


Ah yeah, smartphone and desktop apps don't do JS (AFAIK), but web-browser access is still big. If Gmail's web interface went down for a day, I think a lot of people would notice.


Gmail doesn't run JavaScript in e-mails, though.


I am not sure: when you paste enough JS code into a Gmail compose window, it crashes.


My guess: the WYSIWYG editor uses its own HTML parser and filter, written in JavaScript, that has some O(n^terrible) corner case that your paste test is hitting.


I don't know of any email clients that execute JS inside mails. I don't think we will see such clients anytime soon.


Make that `<link rel=stylesheet href=http://ø.xx>` as JavaScript won't execute in recent mail clients.


Re: current discussions of security and code review in light of the NSA and other government entities corrupting standards. If Google and Facebook cannot find such simple errors, and then even balk at implementing a fix (which turns out to reveal an even larger flaw), what hope is there?



