Shopify has paid over $300k in security exploit bounties (hackerone.com)
251 points by mrusschen on Dec 17, 2016 | 78 comments

This wasn't unexpected, apart from the extent of the bounties.

What you have to realize is how important security is to Shopify. We are a trust-based business to an extreme extent. We host the livelihoods of hundreds of thousands of other businesses. If we are down or compromised, none of them can make money (as some of you saw during Black Friday, to the tune of $300k+ a minute at times).

One of the best ways for us to augment our internal security team is to work with the white hat community. This was a pain before HackerOne, but it is now significantly easier.

One challenge is that Shopify (still) doesn't really have the profile in the tech industry that a lot of Silicon Valley companies have. This is totally fine by me, but it means that when a top white hat sits down and decides what to work on, we are not automatically top of mind.

So we decided to overspend as a kind of "marketing" investment. HackerOne is a classic two-sided marketplace. There is plenty of supply of skilled researchers, but also a lot of demand for their services. We want to be known as one of the most responsive companies, and one that pays top dollar for top findings.

So the basic idea is that when we launch something new, we 10x the payouts to bootstrap the process of familiarization. We also provide a very convenient local environment for doing the work in. It should be more fun and more lucrative to make Shopify-related discoveries than discoveries at other companies. After this initial period we then reduce the payouts to somewhere slightly above community standards. It's all just business 101.

Internally we are actually thrilled with how the shopify-scripts/mruby program went. Most (all?) of what was found would have been caught by our sandboxing, but we don't want to rely on this. As everyone who does security knows, lots of exploits, even if superficially contained, can sometimes combine into "the big one".

Not really related but as a Shopify customer of 5 years, we are unbelievably happy with the platform.

The thing I love the most is the customer service. We are UK based, and it doesn't matter what time of day I call: I get through to someone, and they are always incredibly well informed and helpful. I've never had to be transferred. It doesn't matter if I'm calling to talk through a weird DNS issue, to ask what the current best way to work on our theme locally is, or simply to add a specific feature from the top tier (real-time carrier rates) to our plan; the first person I speak to is the one that helps the whole way through. This is so rare with a tech company (and frankly, any company) and is the single biggest reason why we never look anywhere else or consider moving.

As far as stability goes we've never had an outage and never had a slow down due to high traffic even after national radio appearances.

And you're right about livelihoods being on the line - our Shopify store isn't our biggest revenue stream but it's the one that generates most of the profit as we make higher margins selling direct, if our store were to go down it would be a nightmare.

So I'd be interested to know your thoughts on bug bounties as against "traditional" security reviews.

Have these areas of your application been through external reviews before being opened up to bug bounty or did you decide to start there?

I was thinking that for the amount you've paid out in bounties you could've engaged a reasonable team for several man-months, so was interested in what led you more down the bug bounty line for this.

> you could've engaged a reasonable team for several man-months

Doubt that.

1) The daily rate for this kind of work is high.

2) It's hard to find people who can execute. (read: security as in pen testing, not security as in filling a PCI compliance checklist).

3) Multiply 1 and 2 by the number of people you want in the team.

Sorry, but I have to disagree with you there. I'm a security tester and have been in the industry for 15+ years, either as a buyer or a provider of services, for small and large companies. I've been involved in the procurement of multi-hundred-thousand-dollar tests and in the delivery of similar-sized engagements.

You absolutely can get good security testing consultants for $2k/day for example, and probably less depending on the region and exact speciality.

For $368k at a $2k/day rate, that would be 184 person-days, or several person-months (as stated).
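The arithmetic is easy to check; a quick sketch, using the commenter's assumed figures (the $368k total and $2k/day rate are their estimates, not Shopify's numbers):

```ruby
# Back-of-the-envelope: bounty spend expressed as consultant person-days.
total_paid  = 368_000                 # USD paid out in bounties (commenter's figure)
day_rate    = 2_000                   # assumed consultant day rate, USD
person_days = total_paid / day_rate
puts person_days                      # 184
puts (person_days / 21.0).round(1)    # roughly 8.8 person-months, at ~21 working days/month
```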

It's well known there are teams who specialize in specific technologies, and with a sophisticated customer, not too tricky to interview and make sure you get the right people.

Sorry. Should have clarified. They could get pen testers and it would cost as much as they paid so far.

Managing a pen test and a bounty program are very different things. I don't think that they planned to spend $300k on exploits when they first rolled out the bounties.

A company serious about security should have both anyway: Security audits + bounty programs. They fundamentally cover different things, with some overlap.

Ahh indeed, that was kind of my initial question for the top commenter. I'm interested in whether Shopify explored getting security consultants in to review this area before going for a bug bounty on it, or went straight to the bug bounty.

My personal feeling is that the order of play should be

Internal Security Review --> External Security Review --> Bug Bounty

as you can use the first two stages to catch all the basic stuff and some of the advanced stuff then leave the bug bounty to pay out for things the first two elements missed, but that you still want to know about.

So I'm interested in data that suggests that companies are either going for that route, or have decided to cut out the external consultant review and go straight to bug bounty. In this case I'd guess part of that would be whether Shopify did indeed expect a $300k+ bug bounty programme or whether that was a surprise to them.

I think companies should get the standard compliance stuff done first (they're in payments, so they certainly have lots of these to do) plus a set of standard vulnerability scans (Nessus / FireEye). These things are "cheap" and easy to get; it's a standard one-week-audit-for-$XYZ package.

Then get custom pen testing and bug bounty programs later. They're a lot of work to get done and to get right. Pen testing requires a lot of investment and preparation upfront[1]; a bug bounty is a longer-term commitment.

[1] Don't bring people at $2k a day if you didn't think through what they're gonna do.

I think what 'xal is trying to say is that this bounty had more to do with security marketing than with accomplishing a particular tactical security goal. Their comment even concludes with a note that most or all the findings were accounted for with a sandboxing design they'd already planned.

Interesting approach to marketing, I'll give them that. Will be interesting to see how it works out for them in the medium-long term.

It's security marketing, in particular: they're trying to increase engagement with their bug bounty program. A big problem for bug bounty programs that run without promotion is that the median submission is of terrible quality, but the best submissions are so good that it's hard to get them through any other vector.

If you're looking to run a bug bounty for a specific feature and want to maximize quality while minimizing the effort of triaging terrible submissions, I think there are much more cost-effective ways to accomplish that by structuring the bounty program (for instance: I might not run it on a platform like HackerOne at all).

But if you're looking to run bounties for all your stuff in the future and want to maximize the likelihood that the good bounty hunters will pay attention to you to begin with, this might be a pretty cost effective way to do that.

The penetration test that would have reliably generated the same outcome as this bug bounty program would have cost ~1/6th as much as the bounty did.

How many times do you need to repeat the pen test to catch new bugs in new releases?

The outcome of the project altered the design of Shopify Scripts to moot most of the bug classes people reported. So: not that often.

$2k/day is a reasonable rate, just a shade towards the high side for vanilla web security work.

You seem to assume we set out to pay this amount to begin with. Indeed, for this amount we could have gone other ways, but hindsight is 20/20.

No one expected to get so many valid submissions in such a short time. We set the payout amounts this high as a way to attract talent at the beginning of the program, which worked quite well to bootstrap it.

I literally just got off the phone with Hacker One on Friday and they recommend the exact OPPOSITE of what you did. Start with low or no bounty to get the easy stuff off the plate and figure out what class of reports you want -- then ramp up the bounty over time.

Which is what we'll be doing!

Ah, no, I didn't really assume that; it was part of my question: whether it was a deliberate strategy to jump straight to a bug bounty, with an expectation of a potentially large number of reported issues, or an unexpected event, where the number of reported bugs and their severity was more than you'd expected.

So (and I'm guessing from your comment that you work for Shopify) I guess I can take from that that it was the latter.

Of course we use both. One provides depth, the other breadth.

Paying one engineer's worth of salary to bug bounties isn't overspending, it's freelancing

> we expect most vulnerabilities will no longer be exploitable without additional bugs in the kernel or seccomp itself, and so we are lowering the payout amounts for our program to 10% of previous levels.

I don't quite follow this logic. If bugs are now going to be more difficult to find, one would think they would be more valuable, not less, and that by lowering the bounties they are lowering the incentive for people to search for them.

No, the point is that vulnerabilities in MRuby (the scope of this bug bounty) are now less impactful for them.

They are still paying for them even if you don't have a sandbox escape, but less, because it's now less critical for their security.

Yes, I got the impression they'd been trying to build & secure a shared-process sandbox for customer-supplied logic, and have now given up on that and (wisely IMO) moved to a separate process model.

Last time I needed to do something like this, I just asked people to give me an AWS Lambda endpoint to talk to. "You want your custom logic, fine, run it in a container you're responsible for."

More expensive to discover, but less valuable to discover. It's harder for white hats to find them, so you gotta pay more if it's important, but it's also harder for blackhats to find them, so it's less pressing to find them quickly.

I suppose because the vulnerabilities would actually be vulnerabilities in someone else's code they don't feel it should come under their umbrella?

That's flawed reasoning IMO. Do they expect the underlying project maintainers to have the same resources they do to compensate third-party vulnerability research?

It really should be the other way around: public-facing, revenue-generating projects should do all they can to subsidise vulnerability research and upstream their findings. The alternative would be to start paying more for the code they use from third parties, and what are the odds of that?

With those lower-level 'underlying project' bugs, there are multiple actors who can pay for vulnerability research, so the market rate goes down.

It makes sense to either lower the payout to reflect the market rate, or start a separate scheme for those projects that others can buy into. Unfortunately, with a separate scheme you end up paying for bugs that don't affect you.

Personally I'd have split my own payouts into things from my own project (100%) and things from other projects (10%).

The fact they haven't done this suggests to me that they consider the bounty system too expensive - either in payouts or maintenance. By reducing payouts you will likely reduce interest and increase signal to noise at the cost of less signal.

Or send the signal to other, less well-intentioned parties who see the value of owning vulns in popular underlying libs.

I don't know how strong that argument is, yet I suspect not very.

The problem with selling to 'less well intentioned parties' is that they are hard to get a hold of, hard to trust, and time consuming to work with. I very much doubt that many people who sell to them are not already close to them and their ilk. I also see this much like the arms trade, where illegal trading is an intrinsic property of the trader, not a function of the market.

Doesn't this assume that as each bug in MRuby becomes more difficult to find, they're also more severe? Couldn't it instead be the opposite, that each bug is less severe because all of the serious ones have been closed?

Very funny

Imagine someone compromises your system and downloads the PII of all of your customers. Does it really matter where the exploited vulnerability was in your stack? The business effect is the same: your customers are still pissed at you.

As someone who has built a company around working closely with the Shopify platform, I'm very happy Shopify is taking these initiatives.

I like that Shopify isn't your typical Silicon Valley tech company. But coming from a background as a tech and security consultant for Fortune 500 companies, Shopify does feel like I'm back in the tech little leagues sometimes.

And this is an unfair image association problem Shopify has. Their tech is quite amazing and a lot of very brilliant people work there.

It’s great to see Xal, the CEO of a publicly traded company with a close to $4bn market cap, this active on HN. I’ve always considered him one of the most brilliant engineers of our generation, ever since the Active Merchant days. To me, these programs and the way they are being shared on HN really help bring his company the credit it deserves.

Most of the found vulnerabilities are in mruby itself, which is pretty interesting.

A lot of the PoCs are very simple:

    # re-calling #initialize on an already-constructed object, passing it to itself
    a = Decimal.new
    a.initialize a

    # a degenerate break/while expression inside a constant assignment
    A ||= break while break

    # instantiating Symbol directly (Symbol normally cannot be constructed with .new)
    a = Symbol.new


I don't know if this is the right place, but does anyone have feedback regarding the HackerOne platform? Especially for a small SaaS (between $1-2M ARR)?

As someone who has used HackerOne on both sides (managing and reporting bugs), I'd suggest starting a private program first. Select a small group of researchers known to provide good reports and wait for the reports to start rolling in. Use this as a pilot; if you see value in what's being reported, keep it open.

Keep in mind you're going to see a lot of reports in the beginning, it will level off as you apply fixes. You'll need to prioritize these bug fixes in your organization, if you do not fix them within a time period the researcher has the ability to disclose the bug publicly.

I recommend you review your program guidelines with a lawyer before starting it.

+1 to starting a private program first which is recommended by all bounty programs.

If it's helpful, I wrote down my notes about starting a bounty program, although my experiences were formed at larger companies: https://medium.com/@collingreene/bug-bounty-5-years-in-c95cd...

Agreed with others that it's worth considering a small private program. You can do time boxed bounties with a capped cost, that way you're getting results without committing to a huge budget. Check out Bugcrowd's "on demand" bounty: https://bugcrowd.com/solutions

I strongly urge you to find security management people at existing startups to talk to before starting a bug bounty program at your own startup. There are things about them that are good, but those things can be counterintuitive.

I haven't had to manage one (yet), but because we'll no doubt be doing that for several startups this year I've been talking to friends about what their bounty programs have been like, and I've learned a lot of stuff. Frankly, bounties are something I might push back on for a lot of startups.

> Frankly, bounties are something I might push back on for a lot of startups.

Care to elaborate why?

If you introduce a bug bounty too early, you will be paying out for vulnerabilities that could be caught or prevented in a much more cost effective manner (vulnerability assessments, penetration tests, developer training, appropriate monitoring).

Daniel Miessler has a good breakdown of when to consider various types of security testing: https://danielmiessler.com/blog/when-vulnerability-assessmen...

Sqreen also have a handy basic security checklist: http://cto-security-checklist.sqreen.io Specific to bug bounties they say "You need security aware people inside your development teams to evaluate any reports you receive."

Alex Stamos gave a great talk a while back at https://www.youtube.com/watch?v=2OTRU--HtLM while he was at yahoo. Among the things he covered were the risks of bug bounties.

[Edited to add following]

Another article http://searchsecurity.techtarget.com/opinion/Is-the-bug-bounty-program-concept-flawed "There can be a lot of noise in these systems, and the quality isn’t always there, nor are the findings always significant."

And from the same article Google says "Approximately 90% of the submissions we receive through our vulnerability reporting form are ultimately deemed to have little or no practical significance to product security,"

The issue I've read about (I'm not a security practitioner, more like a hobbyist) is that the sheer mass of bogus bounty submissions takes valuable time to evaluate. If you start up a bug bounty program, you're essentially signing up to read hordes of submissions that you'll be obligated to check out, the overwhelming majority of which pan out to be nothing. And many (most?) of those will contain petulant and arrogant demands that the bounty be paid even though the "finding" presented is no actual vulnerability at all.

(Disclosure: I work for Bugcrowd) That's why we suggest going with a 'managed' bounty. That's where Bugcrowd triages all of the incoming bugs and then passes along the valid bugs for you to prioritize and reward. It cuts out all of the noise and only gives you the results.

Every startup with significant bounty programs I've talked to either staff an internal triage team or outsource triage --- but, either way, they are spending extra money on triage. I haven't talked to any that don't do this.

The concerns I've had raised to me about the value of these programs in practice all assume you're already paying extra to triage.

Right but the cost differential between staffing it yourself and paying someone else to do it is substantial. Doing it yourself will cost you 3-5x more than paying someone else who is able to do it at scale.

I'd rather see if someone else wants to take a swing at that softball lob first.

Still cheaper than one good security engineer. :)

This bug bounty program was limited to MRuby and paid by Shopify. Does anyone know where they use MRuby in their stack?

From [1]:

>> The Script Editor app lets you create scripts that are run each time a customer adds items to their cart. Shopify Scripts can have many uses, from discounting products with specific tags to running promotions such as "buy 2, get 1 free". Shopify Scripts are written with a Ruby API that allows a great deal of control and flexibility.

The description on the bug bounty page says those 'Shopify Scripts' are executed in an MRuby environment, which they are trying to keep sandboxed.

[1] https://help.shopify.com/api/tutorials/shopify-scripts

Shopify seems to basically have given up on application-level sandboxing, and now relies on process-level sandboxing (e.g. seccomp).

This is probably wise; the track record of in-language sandboxing is pretty bad (see also: Java applets.)
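For illustration, here is a minimal sketch of the process-level idea in Ruby. The use of a child `ruby` process and the specific rlimit values are my assumptions for the example, not Shopify's actual design; a real deployment would layer seccomp filters and namespace isolation on top:

```ruby
# Run untrusted code in a separate OS process with hard resource limits,
# instead of trusting an in-language sandbox to contain it.
untrusted = "puts 1 + 1"   # stand-in for a customer-supplied script

rd, wr = IO.pipe
pid = Process.spawn(
  "ruby", "-e", untrusted,
  out: wr, err: File::NULL,
  rlimit_cpu: 2,                    # kill the child after ~2s of CPU time
  rlimit_as:  512 * 1024 * 1024     # cap its address space at 512 MB
)
wr.close
Process.wait(pid)
output = rd.read
puts output   # prints "2"
```

Even if the script exploits an interpreter bug, the damage is confined to a throwaway process that the kernel can account for and kill.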

I was under the impression that JVM, CLR and probably e.g. V8 are reasonably secure. I'd like to learn more about recent sandbox-escaping vulnerabilities in these runtimes. Got any resources?

For JVM, consider this update which happened in 2016:

"Multiple flaws were discovered in the Hotspot and Libraries components in OpenJDK. An untrusted Java application or applet could use these flaws to completely bypass Java sandbox restrictions. (CVE-2016-3606, CVE-2016-3587, CVE-2016-3598, CVE-2016-3610)"


"Reasonably secure" isn't good enough. Chrome puts additional sandboxing around V8.

About a year's salary for a security-focused engineer. Did they get more or less bang for their buck? I guess we need to ask haquaman how many hours he spent collecting that $49k (by my count).

I spent 3 days on it and collected $70k. Per hour that's near a top lawyer :P

Were you recruited to work on the bounty?


Pft, the salary of one engineer, and you get pure results from him. Who wouldn't do that?

Honestly, the highest bounty is only $2,000, which seems low for remote code execution.

The page won't load for me, it said, because my browser, Opera Mini 4.x, is not supported. "But I'm browsing on a Nokia feature phone," I vehemently object. "No exploits will run. Even ads don't."

I'm surprised it's so little. $300k is very little compared to the financial burden that a security breach would bring. Talk to Target and Yahoo about the cost. If anything, they might start looking into ways of increasing it.

It's very delicate to talk about financial burden, given the following references:

"Two months after damaging data breach, Target stock has its best day in 5 years" http://blogs.marketwatch.com/behindthestorefront/2014/02/26/...

"Sad reality: It's cheaper to get hacked than build strong IT defenses" http://www.theregister.co.uk/2016/09/23/if_your_company_has_...

"The Cost of Cyberattacks Is Less than You Might Think" https://www.schneier.com/blog/archives/2016/09/the_cost_of_c...

And my take on this topic

"Is it really cheaper to get hacked?" https://dadario.com.br/is-it-really-cheaper-to-get-hacked/

In the end, it's really the consumer's fault for not caring. But then again, there's really nowhere to run to anyways. The companies are beginning to cover their asses with cyberinsurance so that they can cover their legal fees and obligations. The only loser is the consumer.

So, the best way to monetize a breach is to play with the company's stock while you disclose the breach. Interesting.

Yeah that's already a thing http://www.zdnet.com/article/cybercriminals-turn-talents-to-...

Also, monetizing security vulnerabilities by placing bets on the stock when disclosing has already happened http://www.careersinfosecurity.com/st-jude-medical-files-law... It does carry the risk of a lawsuit, however...

And interestingly enough, about $150k of that went to just 3 people.

"It looks like your JavaScript is disabled. To use Hacker One, enable JavaScript in your browser and refresh this page."

Kinda ironic that a site that is supposedly for hackers wants you to expose yourself to zillion browser vulnerabilities before you can see its content.

'Cos a true hacker would be able to enable JavaScript without exposing himself.


Up-to-date browser in Virtualbox with uMatrix for manual whitelisting?

To go more tinfoil, a "trash" laptop on its own subnet.

Better keep your real laptop at a safe distance unless you want VM escape --> Bluetooth propagation --> pwned.

Well, web Bluetooth is a thing now, so there's no need to escape the VM in that scenario.

Paranoid much?

You have a point when it comes to irony.

What's funny is that Facebook has a publicly facing image server that requires NO authentication to see even private messages. When FB Security was contacted... they said it was not a "guessable" URL; ergo, security through obscurity was their "security method" of choice. This was two days ago.

If anyone wants to test this theory: set up 2 FB accounts, and message an image from one FB account to the other. Click on the image with the second account (to bring up the lightbox custom thingy they have). Drag that image into notepad (to get the URL), then log out of both accounts, clear your cache, and you'll see the image is COMPLETELY public, meaning no authentication is required.

They refused to acknowledge this as a "security risk". I laughed, then was really pissed that a PRIVATE image shared between two parties can be viewed without authentication.


Hi "FBSecuritySux",

I'm not a member of the Facebook security team, but I work in the industry and your comment frustrates me. I can understand criticizing companies for poor security decisions if they are legitimately bad decisions, but I don't think that's the case here...

I just tested this between two Facebook accounts, and got a URL like this: https://scontent.fsnc1-1.fna.fbcdn.net/v/t35.0-12/12628848_1...

Let's imagine, for the sake of argument, that all the numbers in the URL are predictable and the security relies 100% on the "oh" and "oe" parameters. Taking a rather naive approach, both of these appear to be exclusively hex strings. Therefore "oh" is 16 bytes and "oe" is 4 bytes, making the total 8*(16+4) = 160 bits.

In other words, assuming both parameters are truly random, an attacker would have to try (worst case) 2^160 combinations to view a victim's image: 1,461,501,637,330,902,918,203,684,832,716,283,019,655,932,542,976
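Taking the byte estimates above at face value, the keyspace size is quick to sanity-check:

```ruby
# Keyspace size if the "oh" and "oe" parameters really are 16 + 4 random bytes.
bits = 8 * (16 + 4)
combinations = 2**bits
puts bits            # 160
puts combinations    # ~1.46e48, i.e. the same scale as the SHA-1 output space
```

Even at billions of guesses per second, enumerating a 160-bit space is far beyond brute force.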

I guess that all the money Shopify makes allows them to afford weak reasoning/engineering.

Giving your users a ruby interpreter inside your infrastructure is a terrible idea. They're just one unreported bug away from disaster!

One could think of a few alternatives, all of them involving decoupling Shopify's servers from users' scripts.

It could be anything from Docker/k8s to AWS lambda to a custom DSL. I'm not saying any option is easy - proper solutions tend to require effort.
