I'm honestly not that impressed by this. They're basically picking some basic types of exploits and are claiming to guard against those, which is nice and all but it seems like they're targeting the buzzword issues and aren't talking about anything really special.
There were a few things that really stood out to me:
* Guarding against SQL injection is something every web application should do. This is a completely separate issue from CSRF (cross-site request forgery), but they conflate the two as if they're one. Talking about the specific issues they're focusing on, especially when those issues are not only extremely basic to deal with but also what I would consider programmer buzzwords, makes me wonder what they aren't doing.
* Payment Industry Best Practices means a hell of a lot more than "we threw an SSL certificate on the site" and "we encrypt your junk". If you're going to claim that you're following payment industry standards, I want to hear a little bit about PCI compliance and I sure as hell want an external audit.
* The Bounty Program looks nice, but the fact that it has so many people who have used it and that they clearly aren't disclosing the issues that come up leaves me a bit concerned. How serious are these issues, why weren't they found in advance, and what internal changes took place to prevent them from happening again?
The main point I'm getting at here is that this seems like a marketing site, not a real security disclosure page, and that when you really get down into it they're saying a whole lot of nothing. If they really want to impress me they can get into more technical detail, and if they don't want to do that I'd love to see a third party audit them properly.
This is exactly the page most startups should have.
First, startup customers to a first approximation don't care about the distinction between CSRF attacks and SQL injection. The neuroreceptor this page is trying to trip is "this company understands the concept of application security". For most companies, counterintuitively, the more you delve into the specifics, the less confidence you instill: you're increasing your customers' perception of risk.
Second, there's nothing a typical company can say to clear the bar you're implicitly setting, which is "convince a technologist familiar with the issues that their application is free of vulnerabilities". Nobody is free of vulnerabilities. There are no tea leaves to be read here about code quality. To understand code quality, you have to look at or test code.
Third, since the objections they're addressing on this page are nebulous, appeals to authority through naming best practices or citing industry analogs are just fine. Also, what do you expect to learn from "PCI compliance"? PCI is a joke.
Fourth, most companies don't disclose vulnerabilities. Contributors to this bug bounty have disclosed, which presumably means that payout on the bounty doesn't include an NDA. So what are you complaining about? There's a list of named bugfinders on the bounty page. Go ask them what they found.
Github has over the last few years built one of the best appsec teams in the business. Look at their security page. Coinbase's is, if anything, better. Dial back your expectations for pages like this. Coinbase makes it easy for people who have found vulnerabilities to report them to Coinbase, and makes it clear that they understand the basic concept of security for application providers. I grade security pages "pass/fail", and this one clearly passes. Startups should take cues from it and pages like it.
(I don't know anything about Coinbase's actual security practices or the wisdom of keeping "90% of bitcoins offline" or whatnot; I'm talking exclusively about the page itself. I don't like Bitcoin and find it very difficult to take seriously.)
To elaborate just a wee bit on what Thomas said, there are many, many startups which transact real money (via, e.g., taking credit cards on their website, even if via one of the methods where it doesn't get POSTed at their server) which don't go as far as saying "Here's the address you can talk to if you find something critical. We WILL get back to you."
Startups without this page have often found out about security vulnerabilities via posts at third party sites. Regardless of the moral righteousness of that, that is for better or worse the cultural expectation of many security researchers.
Also, since it's on your website, you're going to have a bit of tension in serving the "Needs to report a security vulnerability" audience at the same time as you're supporting non-technical customers who care about "security" for business reasons. Those are very different conversations. I had one with a stakeholder at a large organization who was worried about the physical security of my servers recently. I told him that they were in a professionally managed datacenter, behind a gate, which required a keycard to access, and that if I showed up at the door they would turn me away because that isn't the model at my host. His response was, I kid you not, "Oh, wow, you're Fortune 500? Sorry, I just have to ask that because a lot of our vendors keep the server in their home or office."
First- yeah, sure, which is why I explicitly said "that this seems like a marketing site".
Second- That's not the bar I'm setting at all, and putting that extreme viewpoint in my mouth is silly at best. I do agree with your underlying point though, which is that you can't gauge code quality without looking at the code. That's why I think they'd be better off saying "and we do regular audits" and detail that bit. Otherwise this is basically them saying "we do security well, trust us!".
Third- They aren't really naming the best practices, simply stating that they follow them. If they're going to say they follow the standards of the payment industry, they should have some standard to point at and say they follow (even if it is a joke).
Fourth- I'm not expecting them to disclose everything, just pointing out that without that disclosure the number found there seems a bit high (especially for a company that's only 18 months old).
The major difference between these guys and Github is that I haven't seen the Github CEO and Founder post his security page on this site. This was clearly an advertisement to the people on this site, and I don't think it does a good job at all of saying that they understand security. What it tells me is that they understand security blogs, as most of the content in that post seems like it was ripped from topics on this site over the last year.
(I also like asides- if Github's appsec team is so good how come their enterprise product gets nailed with exploits all the time? Their last set of releases broke their authentication system so badly that it allowed anyone to log in as anyone else, including admins).
>90% of customer funds are stored offline.
>Offline storage provides an important security measure against theft or loss.
>We distribute bitcoin geographically in safety deposit boxes around the world.
Using offline, split, encrypted files buys a huge amount of security: breaking into any one bank vault (or even several, simultaneously) is not enough. That's pretty nice. It's not 100% secure, but probably better than average compared to online wallets or exchanges.
>Guarding against SQL injection is something every web application should do. This is a completely separate issue from CSRF (cross-site request forgery), but they conflate the two as if they're one.
It sounds like they're describing a web application firewall that detects and blocks both of those things together. (I assume the application itself is also secured to their knowledge against these separately.)
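To make the distinction concrete, here is a minimal sketch of the two separate application-level defenses a WAF would sit in front of: parameterized queries for SQL injection and constant-time token comparison for CSRF. The table name and inputs are illustrative, not anything from Coinbase's actual stack.

```python
import sqlite3
import hmac
import secrets

# --- SQL injection: placeholders treat input as data, never as SQL text ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, balance REAL)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice@example.com", 1.5))

attacker_input = "alice@example.com' OR '1'='1"
rows = conn.execute(
    "SELECT balance FROM users WHERE email = ?", (attacker_input,)
).fetchall()
assert rows == []  # the injection attempt matches nothing

# --- CSRF: a per-session secret token, checked in constant time ---
session_token = secrets.token_hex(32)

def check_csrf(submitted: str) -> bool:
    # compare_digest avoids leaking the token via timing differences
    return hmac.compare_digest(session_token, submitted)

assert check_csrf(session_token)
assert not check_csrf(secrets.token_hex(32))
```

The point being: these are two unrelated mechanisms, which is exactly why lumping them together on a marketing page reads oddly to practitioners.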
Sadly, they're doing a lot better than most of the Bitcoin community. I recently found two exploitable XSS issues on Blockchain.info, a website which runs the largest number of Bitcoin based wallets in the entire network. To get a response from them, I had to use a public front-page post on reddit just to get an email address to contact.
Some of them are fairly interesting and not just simple exploits. For example, Coinbase prices BTC based on a third-party exchange, and due to the way they fetch the pricing it was possible to manipulate the price down to below market levels and make a bunch of money. They paid out 5 BTC for that and it's on the bug bounty page, although it's not a "security" issue per se. It's probable that some of the others are similar in nature.
Isn't security one of those "things you can fix later" startup problems these days?
As far as I can tell, it goes: hype to serve growth to create a fad to exponentiate your user base to get you more funding to ... then start considering fundamental problems of architecture and security.
All startup writing focuses on growth at all costs by manipulating pleasant surface experience. The current model of "just keep iterating until users stick" is also: know as little as possible and keep changing things until you generate a random key to the lock of your market. That model of company building is in opposition to security and stability.
Just keep paying 21 year olds 150k salary+100k bonus to make rails apps. It'll all work out in the end.
>Isn't security one of those "things you can fix later"...
The problem is that the saying is: make it work, make it good, make it fast. Most programmers stop after the first step. "Make it secure" is not even an afterthought, and generally you only think about it after being bitten.
To be honest, "make it work" can be hard in itself. How do I justify spending four hours to add a feature and fifty to go through the other steps? I can imagine telling my boss "oh yeah, I added the feature two days ago, then I cleaned up the design, now I'm optimizing it and then I'll think of ways in which it can be exploited".
It would be great if they could fix their customer service, and their broken level 2 account verification, which still flags the information you provide as false. That status is what makes purchases 'instant', but you cannot achieve it even with valid info.
Then they have the audacity to send you the same email signed by different support staff members. Coinbase is garbage, and it's only running because it's the only competitor that hasn't burned its house down. I'm waiting for improvement, or a valid competitor, so I can be on to the next one.
This seems like a good set of technical controls to mitigate the inherent risk in storing third-party bitcoins.
The main thing I'd be concerned about would be insider controls; what happens if someone kidnaps someone significant to one of the founders and threatens to do bad things unless he subverts the control. While it's quite reasonable to lose $5mm or whatever bitcoin Coinbase currently controls to save someone's life, the potential for this kind of attack is what makes it at all likely -- if you could articulate exactly why that attack wouldn't work, it wouldn't happen.
("Someone kidnaps someone important to a staff member" is the hard problem; it also implies a solution to "staff member goes evil", "has always been evil", "gambling or drug debt", etc. The weakest attack of this type is "someone pwns an employee's laptop or online accounts", which potentially could subvert the display, so a user approves a $10 transaction and a $500k transaction is actually approved.)
You'd have to articulate a multi-person control over large pools of the "cold" bitcoins to really deter this kind of attack. This security should be implemented in such a way that people can't easily defeat it, even over time. That's a hard problem in a rapidly growing organization.
Strong audit systems to catch this after the fact, combined with preventive controls to minimize the actual scale of an exploit, are fine. I have zero concerns with a loss of less than $5mm or so at Coinbase; the equity value of the company would cover it.
I would assume that the cold-stored coins are encrypted such that t of n keys are required to decrypt, using some standard secret sharing scheme. This should stop the 90% offline store from being an easy target for internal issues and also for physical hold-ups / robbery. I'm not sure if publishing the exact method of this would be a good or a bad thing.
Having their bank / storage require a fixed notice period before allowing access to the offline funds - like a time lock - would also make it harder to steal the offline funds.
Not knowing their stack outside of the guessed Ruby/Rails, I'd guess the weakest point lies around code deployment.
What happens in the event of a world catastrophe, like a terrorist attack, plague, or meteor strike, killing key people, limiting travel, and otherwise inhibiting recovery of all these distributed tokens and keys?
Security is not only protection from being hacked, but protection from loss.
If Bitcoin is to survive political, economic, and environmental turmoil, shouldn't we worry about our coins being stored with such potentially fragile recovery plans?
I like the sound of most of that, especially the two-factor authentication on all accounts. One thing wasn't clear to me:
"Wallets (and private keys) are stored using AES-256 encryption."
Are individual users' wallets stored with a key derived from the user's password? Or, rather, could you act, under coercion say, to transfer my funds without my password? (i.e. in a "bank robbery" situation)
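If it's the former, a minimal sketch of what "key derived from the user's password" usually means is below. The KDF choice, iteration count, and parameters here are my assumptions, not anything Coinbase has documented; the derived 256-bit key would then be what actually feeds AES-256.

```python
import hashlib
import os

def derive_wallet_key(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256 with a high iteration count; the 32-byte output
    # is a 256-bit key suitable for AES-256. Parameters are illustrative.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000, dklen=32)

salt = os.urandom(16)  # stored alongside the ciphertext, per user
key = derive_wallet_key("correct horse battery staple", salt)
assert len(key) == 32
# Same password + same salt always yields the same key:
assert key == derive_wallet_key("correct horse battery staple", salt)
```

The security-relevant property is that without the password (or the derived key) the operator can't decrypt the wallet, which is exactly what the coercion question hinges on.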
Could use something like PaperBack, which can handle >1MB/A4 sheet. Combine that with a decent quality printer/paper and autofeed scanner, and you could quite easily dump a few hundred MB without too much manual effort.
I believe "offline wallets" require only the (relatively) short keypair to be stored, which would make this a practical solution.
If they're using much fewer wallets, or are confident that the paper would be last-ditch restore only, they could print (semi-)human-readable data in an OCR optimised font, to give them some chance of recovering from otherwise corrupted media. I'd put more trust in the automated bitmapping with sufficient redundancy & forward-error correction, but wouldn't discount extra semi-manual methods for any high-value wallets.
I'd guess it's done the way I personally do it, or at least something similar. The private key is split using Shamir's Secret Sharing Scheme into several pieces, each of which is then printed as a high-redundancy QR code and hidden. The key only needs to be 256-bit, so there's not a lot of data to hide.
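For anyone unfamiliar, here's a toy sketch of the splitting step: a t-of-n Shamir scheme over a prime field, where any t shares reconstruct the secret and t-1 reveal nothing. This is a demo (in practice you'd use a field sized for a full 256-bit key and a vetted library), but the math is the real thing.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; demo field, too small for real 256-bit keys

def split(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # evaluate the polynomial via Horner's rule
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def combine(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)
shares = split(key, t=3, n=5)
assert combine(shares[:3]) == key   # any 3 of the 5 shares suffice
assert combine(shares[1:4]) == key
```

Each share is small enough to fit comfortably in a single QR code, which is what makes the print-and-hide approach practical.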
Does beauty have to involve difficulty in reading? I mean, maybe great beauty does involve not communicating a lot of factual information clearly. Great art and even beautiful print magazines generally don't do this.
Having the text all across the page did not make the information very accessible to me.
And going from the huge light-blue banner to the gray and white was actually rather jarring. It took me a minute to decide I had to scroll down to the text rather than clicking a "next" button or something.
And icons looked nice but like most icons were more eye-pleasing than actually communicative.
I'm surprised some people find it beautiful but I'll file it under "once interface design made computers (barely) usable, designers decided they had to make them unusable again (but now beautiful)". It's the world of "satisficing".
This is designers and programmers talking at cross purposes.
I don't know if I've been elated by a website lately but I'm a bit doubtful I'd even want to be. Mostly I want the useful information to go down easy without excess eye-strain and only then do I notice beauty (and naturally I prefer the understated version of beauty).
Wikipedia and hn are two of many examples of sites that are easy to read (though the text on hn is rather small, it's right for its purpose since it makes threading easy).
I do have wallpaper of great art if I want to be inspired or elated, but mostly I choose "real life" activities for my elation.