What you have to realize is how important security is to Shopify. We are a trust-based business to an extreme extent. We host the livelihoods of hundreds of thousands of other businesses. If we are down or compromised, none of them can make money (as some of you saw during Black Friday, to the tune of $300k+ a minute at times).
One of the best ways for us to augment our internal security team is to work with the white hat community. This was a pain before HackerOne, but it is now significantly easier.
One challenge is that Shopify (still) doesn't really have the profile in the tech industry that a lot of Silicon Valley companies have. This is totally fine by me, but it means that when a top white hat sits down and decides what to work on, we are not automatically top of mind.
So we decided to overspend as a kind of "marketing" investment. HackerOne is a classic two-sided marketplace: there is plenty of supply of skilled researchers, but also a lot of demand for their services. We want to be known for being one of the most responsive companies and for paying top dollar for top findings.
So the basic idea is that when we launch something new, we 10x the payouts to bootstrap the process of familiarization. We also provide a very convenient local environment for doing the work in. It should be more fun and more lucrative to make Shopify-related discoveries than ones at other companies. After this initial period we reduce the payouts to somewhere slightly above community standards. It's all just business 101.
Internally we are actually thrilled with how the shopify-scripts/mruby program went. Most (all?) of what was found would have been caught by our sandboxing, but we don't want to rely on that. As everyone who does security knows - lots of exploits, even if superficially contained, can sometimes combine into "the big one".
The thing I love the most is the customer service. We are UK based, and it doesn't matter what time of day I call: I get through to someone, and they are always incredibly well informed and helpful. I've never had to be transferred. It doesn't matter if I'm calling to talk through a weird DNS issue, to ask what the current best way to work on our theme locally is, or simply to add a specific feature from the top tier (real-time carrier rates) to our plan; the first person I speak to is the one that helps the whole way through. This is so rare with a tech company, or frankly any company, and it is the single biggest reason why we never look anywhere else or consider moving.
As far as stability goes, we've never had an outage and never had a slowdown due to high traffic, even after national radio appearances.
And you're right about livelihoods being on the line. Our Shopify store isn't our biggest revenue stream, but it's the one that generates the most profit, as we make higher margins selling direct. If our store were to go down, it would be a nightmare.
Have these areas of your application been through external reviews before being opened up to bug bounty or did you decide to start there?
I was thinking that for the amount you've paid out in bounties you could've engaged a reasonable team for several person-months, so I was interested in what led you down the bug bounty route for this.
1) The daily rate for this kind of work is high.
2) It's hard to find people who can execute (read: security as in pen testing, not security as in filling out a PCI compliance checklist).
3) Multiply 1 and 2 by the number of people you want in the team.
You absolutely can get good security testing consultants for $2k/day for example, and probably less depending on the region and exact speciality.
For $368k at a $2k/day rate, that would be 184 person-days, or several person-months (as stated).
It's well known there are teams who specialize in specific technologies, and with a sophisticated customer, not too tricky to interview and make sure you get the right people.
Managing a pen test and running a bounty program are very different things. I don't think that they planned to spend $300k on exploits when they first rolled out the bounties.
A company serious about security should have both anyway: Security audits + bounty programs. They fundamentally cover different things, with some overlap.
My personal feeling is that the order of play should be
Internal Security Review --> External Security Review --> Bug Bounty
as you can use the first two stages to catch all the basic stuff and some of the advanced stuff then leave the bug bounty to pay out for things the first two elements missed, but that you still want to know about.
So I'm interested in data that suggests that companies are either going for that route, or have decided to cut out the external consultant review and go straight to bug bounty. In this case I'd guess part of that would be whether Shopify did indeed expect a $300k+ bug bounty programme or whether that was a surprise to them.
Then get custom pen testing and bug bounty programs later; they're a lot of work to set up and get right. Pen testing requires a lot of investment and preparation upfront, while a bug bounty is a longer-term commitment.
Don't bring in people at $2k a day if you haven't thought through what they're going to do.
If you're looking to run a bug bounty for a specific feature and want to maximize quality while minimizing the effort of triaging terrible submissions, I think there are much more cost-effective ways to accomplish that by structuring the bounty program (for instance: I might not run it on a platform like HackerOne at all).
But if you're looking to run bounties for all your stuff in the future and want to maximize the likelihood that the good bounty hunters will pay attention to you to begin with, this might be a pretty cost effective way to do that.
No one expected to get so many valid submissions in such a short time. We set the payout amounts this high as a way to attract talent at the beginning of the program, which worked quite well to bootstrap it.
Which is what we'll be doing!
So (and I'm guessing from your comment that you work for Shopify) I take it that it was the latter.
I don't quite follow this logic. If bugs are now going to be more difficult to find, one would think they would be more valuable, not less, and that by lowering the bounties they are lowering the incentive for people to search for them.
They are still paying for them even if you don't have a sandbox escape, but less, because it's now less critical for their security.
Last time I needed to do something like this, I just asked people to give me an AWS Lambda endpoint to talk to. "You want your custom logic, fine, run it in a container you're responsible for."
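The host side of that arrangement can be sketched in a few lines of Ruby. The endpoint URL, payload shape, and function names below are all hypothetical; the point is only that the merchant's logic runs behind an HTTP boundary instead of inside your process, with tight timeouts so a broken endpoint can't stall checkout.

```ruby
require "net/http"
require "json"
require "uri"

# Build the request separately so it can be inspected without a network call.
def build_script_request(endpoint, cart)
  uri = URI(endpoint)
  req = Net::HTTP::Post.new(uri)
  req["Content-Type"] = "application/json"
  req.body = JSON.generate(cart)
  req
end

# POST the cart to the merchant-controlled endpoint and parse the adjusted
# cart from the response. Fail fast if their endpoint is slow or down.
def run_remote_script(endpoint, cart, timeout: 2)
  uri = URI(endpoint)
  req = build_script_request(endpoint, cart)
  Net::HTTP.start(uri.host, uri.port,
                  use_ssl: uri.scheme == "https",
                  open_timeout: timeout, read_timeout: timeout) do |http|
    JSON.parse(http.request(req).body)
  end
end
```

The upside is that any crash or exploit is confined to infrastructure the user is responsible for; the downside is latency and the need to handle flaky endpoints gracefully.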
It really should be the other way around: public-facing, revenue-generating projects should do all they can to subsidise vulnerability research and upstream their findings. The alternative would be to start paying more for the code they use from third parties, and what are the odds of that?
It makes sense to either lower the payout to reflect market rate or start a separate scheme for those projects that others can buy into. Unfortunately, with a separate scheme you end up paying for bugs that don't affect you.
Personally I'd have split my own payouts into things from my own project (100%) and things from other projects (10%).
The fact they haven't done this suggests to me that they consider the bounty system too expensive - either in payouts or maintenance. By reducing payouts you will likely reduce interest and increase signal to noise at the cost of less signal.
The problem with selling to 'less well intentioned parties' is that they are hard to get a hold of, hard to trust, and time consuming to work with. I very much doubt that many people who sell to them are not already close to them and their ilk. I also see this much like the arms trade, where illegal trading is an intrinsic property of the trader, not a function of the market.
I like that Shopify isn't your typical Silicon Valley tech company. But coming from a background as a tech and security consultant for Fortune 500 companies, Shopify does feel like I'm back in the tech little leagues sometimes.
And this is an unfair image association problem Shopify has. Their tech is quite amazing and a lot of very brilliant people work there.
It's great to see Xal, the CEO of a publicly traded company with a close to $4bn market cap, this active on HN. I've always considered him one of the most brilliant engineers of our generation, ever since the Active Merchant days. To me, these programs, and the way they are being shared on HN, really help bring his company the credit it deserves.
A lot of the PoCs are very simple:

    a = Decimal.new
    A ||= break while break
    a = Symbol.new
Keep in mind you're going to see a lot of reports in the beginning; it will level off as you apply fixes. You'll need to prioritize these bug fixes in your organization: if you do not fix them within a set time period, the researcher has the ability to disclose the bug publicly.
I recommend you review your program guidelines with a lawyer before starting it.
If helpful, I wrote down my notes about starting a bounty program, although my experiences were formed at larger companies: https://medium.com/@collingreene/bug-bounty-5-years-in-c95cd...
I haven't had to manage one (yet), but because we'll no doubt be doing that for several startups this year I've been talking to friends about what their bounty programs have been like, and I've learned a lot of stuff. Frankly, bounties are something I might push back on for a lot of startups.
Care to elaborate why?
Daniel Miessler has a good breakdown of when to consider various types of security testing: https://danielmiessler.com/blog/when-vulnerability-assessmen...
Sqreen also have a handy basic security checklist: http://cto-security-checklist.sqreen.io Specific to bug bounties they say "You need security aware people inside your development teams to evaluate any reports you receive."
[Edited to add following]
Another article, http://searchsecurity.techtarget.com/opinion/Is-the-bug-bounty-program-concept-flawed : "There can be a lot of noise in these systems, and the quality isn't always there, nor are the findings always significant."
And from the same article Google says "Approximately 90% of the submissions we receive through our vulnerability reporting form are ultimately deemed to have little or no practical significance to product security,"
The concerns I've had raised to me about the value of these programs in practice all assume you're already paying extra to triage.
>> The Script Editor app lets you create scripts that are run each time a customer adds items to their cart. Shopify Scripts can have many uses, from discounting products with specific tags to running promotions such as "buy 2, get 1 free". Shopify Scripts are written with a Ruby API that allows a great deal of control and flexibility.
The description on the bug bounty page says those 'Shopify Scripts' are executed in an MRuby environment, which they are trying to keep sandboxed.
This is probably wise; the track record of in-language sandboxing is pretty bad (see also: Java applets.)
"Multiple flaws were discovered in the Hotspot and Libraries components in OpenJDK. An untrusted Java application or applet could use these flaws to completely bypass Java sandbox restrictions. (CVE-2016-3606, CVE-2016-3587, CVE-2016-3598, CVE-2016-3610)"
If anyone wants to test this theory: set up 2 FB accounts and message an image from one FB account to the other. Click on the image with the second account (to bring up the lightbox custom thingy they have). Drag that image into Notepad (to get the URL). Then log out of both accounts, clear your cache, and you'll see the image is COMPLETELY public -> meaning no authentication is required.
They refused to acknowledge this as a "security risk". I laughed, then was really pissed off that a PRIVATE image shared between two parties can be viewed w/o authentication.
I'm not a member of the Facebook security team, but I work in the industry and your comment frustrates me. I can understand criticizing companies for poor security decisions if they are legitimately bad decisions, but I don't think that's the case here...
I just tested this between two Facebook accounts, and got a URL like this: https://scontent.fsnc1-1.fna.fbcdn.net/v/t35.0-12/12628848_1...
Let's imagine, for the sake of argument, that all those numbers in the URL are predictable and the security relies 100% on the "oh" and "oe" parameters. Taking a rather naive approach, both of these appear to be exclusively hex strings. Therefore "oh" is 16 bytes and "oe" is 4 bytes, making the total 8*(16+4) = 160 bits.
In other words, assuming both parameters are truly random, an attacker would have to try (worst case) on the order of 2^160, or roughly 1.5 × 10^48, combinations to view a victim's image.
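That keyspace figure is easy to sanity-check in Ruby, whose integers are arbitrary precision; the byte counts come straight from the estimate above and assume both parameters are uniformly random.

```ruby
# 16 bytes of "oh" plus 4 bytes of "oe" = 20 bytes = 160 bits of entropy,
# assuming both parameters are truly random.
bits = 8 * (16 + 4)
puts bits       # 160
puts 2**bits    # 1461501637330902918203684832716283019655932542976 (~1.5e48)
```

At any plausible request rate, brute-forcing a space that size is not a practical attack; the real question is whether the parameters are actually random rather than derived from predictable inputs.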
"Two months after damaging data breach, Target stock has its best day in 5 years" http://blogs.marketwatch.com/behindthestorefront/2014/02/26/...
"Sad reality: It's cheaper to get hacked than build strong IT defenses"
"The Cost of Cyberattacks Is Less than You Might Think" https://www.schneier.com/blog/archives/2016/09/the_cost_of_c...
And my take on this topic
"Is it really cheaper to get hacked?"
Also, monetizing security vulnerabilities by placing bets on the stock when disclosing has already happened: http://www.careersinfosecurity.com/st-jude-medical-files-law... It does carry the risk of a lawsuit, however.
Kinda ironic that a site that is supposedly for hackers wants you to expose yourself to a zillion browser vulnerabilities before you can see its content.
To go more tinfoil, a "trash" laptop on its own subnet.
Giving your users a ruby interpreter inside your infrastructure is a terrible idea. They're just one unreported bug away from disaster!
One could think of a few alternatives, all of them involving decoupling Shopify's servers from users' scripts.
It could be anything from Docker/k8s to AWS lambda to a custom DSL. I'm not saying any option is easy - proper solutions tend to require effort.
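Of those options, the custom-DSL route can be sketched in a few lines: merchants submit declarative rules rather than code, and only a whitelisted set of operations can ever run. The rule format, field names, and operations below are invented for illustration, not Shopify's actual design.

```ruby
# A minimal declarative rule interpreter: users submit data, not code.
# Only operations in this table can execute; unknown ops raise.
ALLOWED_OPS = {
  "percent_off" => ->(price, amount) { price - (price * amount / 100.0) },
  "flat_off"    => ->(price, amount) { [price - amount, 0].max }
}.freeze

# Apply each rule to every cart item whose tags match (or to all items
# when the rule has no tag), returning new items with adjusted prices.
def apply_rules(cart, rules)
  cart.map do |item|
    price = item["price"]
    rules.each do |rule|
      next unless rule["tag"].nil? || item["tags"].include?(rule["tag"])
      op = ALLOWED_OPS.fetch(rule["op"])
      price = op.call(price, rule["amount"])
    end
    item.merge("price" => price)
  end
end

cart  = [{ "price" => 100.0, "tags" => ["sale"] }]
rules = [{ "op" => "percent_off", "amount" => 10, "tag" => "sale" }]
apply_rules(cart, rules)  # => [{"price"=>90.0, "tags"=>["sale"]}]
```

A DSL like this trades away the flexibility of "buy 2, get 1 free"-style arbitrary logic, which is presumably why Shopify chose an embedded interpreter in the first place; the sandbox-vs-expressiveness trade-off is the whole debate in this thread.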