
Startups optimize for fast growth and monetization with limited resources. Time spent securing a site is seen as time not spent improving SEO or user experience. It's already well known that people will sign up for an insecure site because they really don't care until something happens, and apologizing after you've gotten traction (and their money and/or data) is less risky than delaying launch.

In other words, not only do many startups not care, they would consider application security to be actively harmful.

This is of course assuming they even know. Vulnerabilities may lurk, unnoticed, in libraries, packages and frameworks; and in the case of PHP, old and unsafe practices are easy to copy and paste and tend to proliferate on tutorial and Q&A sites.
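To make that concrete, here's a minimal sketch of the kind of copy-paste pattern in question, shown in Python rather than PHP since the same habit crosses languages; the table and field names are made up purely for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")  # throwaway database for illustration
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")

    # The copy-paste classic: user input concatenated straight into the SQL.
    # A payload like  ' OR '1'='1  in `username` walks right through it.
    def find_user_unsafe(username):
        query = "SELECT id, email FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchone()

    # The boring fix: let the driver bind the value as a parameter.
    def find_user_safe(username):
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchone()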




Considering AppSec harmful is ridiculous.

I know a particular startup that received millions of dollars in VC to start a payment processor. I RCE'd them in under 5 minutes, and they remained vulnerable for more than a month after I reported it (this was within the last two months). That is frankly ridiculous, and it's an extreme example of why treating appsec as harmful just doesn't hold up.

There is a difference between emphasizing only security, and building software with appropriate security measures in place.


You're misunderstanding the point. Pretend you're a cash-strapped startup, still financially in the red. You discover a security hole in your software. You have these choices on what to do today:

1) Spend the day implementing a new feature that will very likely improve revenue/traction

2) Spend the day doing preventative security measures to protect you from something that might only be a problem when you're actually making lots of money

You choose option #1. Choosing #2 is "harmful" to you and your investors.


A payments startup had better get security right from the start, because that is their business.

Your average web startup that's providing jazzy ringtones and a new poke app, sure, should leave security for last.


This might be tough to answer due to survivorship bias, but can anyone name a startup, payment focused or otherwise, that failed because of a security vulnerability/vulnerabilities?


Would you consider DigiNotar a startup? It's certainly a famous example. Also, I happen to know of "RockYou", the Facebook app people, who were pretty much demolished not long after their massive account breach. Coincidence or cause? They may still be alive, but under new management.


A number of Bitcoin services have gone under after being hacked. http://www.theguardian.com/technology/2014/mar/18/history-of...


I don't see the point in arguing this. Both are harmful. You can end up like CurrentC at the top of Hacker News, or you can ship a day behind.

Both are bad. You can maybe avoid having to make this decision at all by being security-aware from day 0.


You are ignoring the argument: a fundamental tenet of economics is that resources are scarce. This is even more true for a startup, and you need to decide where you will spend those incredibly limited resources.

The point being made is that in the beginning, the primary focus has to be getting traction with your potential customers.

Yes, you need to make commercially reasonable efforts to secure your website, but dedicating so much effort to keeping criminals out that you are not focusing on building your business is counterproductive. You may end up with a perfectly secure site that absolutely nobody is using. Far better to have an incredibly popular site, perhaps with some security issues, that you can lock down once you have the resources to do so.

But, this is about startups. CurrentC has millions and millions of dollars to spend - they have zero excuse for getting hacked.


Well, it depends on the cost of a breach, and the probability thereof. Those in turn depend on how you choose to calculate them.

If my company leaks our customers' e-mail addresses and plaintext passwords, how much cost is incurred? Is it a huge cost, because some of the customers have reused their e-mail password and they lose a bunch of personal data and accounts on dozens of sites? Or is it a small cost, because they should have followed good security practices by using different passwords on different sites, so it's their fault they had more than an e-mail address leaked? Or is no cost incurred at all, because hey, it's our customers, not us, who suffer from this? It's not like we're going to pay them any compensation!
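For what it's worth, the "plaintext passwords" half of that scenario is cheap to avoid even on day one. A rough sketch using only Python's standard library; the iteration count and return shape are illustrative, not a tuned recommendation:

    import hashlib, hmac, os

    def hash_password(password):
        # Per-user random salt, so identical passwords don't hash identically.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest  # store both; never store the password itself

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, digest)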

Likewise, if there are hundreds of people scanning the web for the security problem and it's easy to detect automatically, you're almost certain to get hit; on the other hand, if it's difficult to find or exploit, the risk may be lower. Of course, if you coded the bug in the first place, you might not be informed enough to assess this accurately.


The unfortunate truth is that the long-term costs to startups of compromised security are low, because most people don't really care about it anyway. Target, Home Depot, Adobe, et al. aren't seeing a massive loss of revenue due to being compromised.

There's a lot of press about incidents for a day or maybe three, it gets posted to HN and everyone has a <stuffy>very serious and very academic</stuffy> discussion about it, and within a week it's forgotten entirely.

People just don't care. It's nothing more than a temporary nuisance to most people. There aren't any consequences that seriously impact anyone's life.

In the real world, startups are probably correct to focus first on new features and then patch security later. In an ideal world, that would be a mistake that would kill them.


Regarding Target, http://www.foxbusiness.com/industries/2014/05/21/target-post... and http://www.cbsnews.com/news/data-breach-costs-take-toll-on-t...

The costs to Home Depot, I would argue, are not known.

A breach at a startup in the payments business, before it has even opened its doors, could in fact have a long-lasting effect.

> It's nothing more than a temporary nuisance to most people. There aren't any consequences that seriously impact anyone's life.

My friend, an FDA consultant who gets called in when medical device manufacturing lines get shut down, would very much disagree with that statement.

The thing about the Adobe breach is that it hurts the ecosystem. Folks who put an email and a password out there, and who happened to use that same password for a bank or other critical resource, are now more vulnerable. And the trick with the Adobe thing is that most of the folks I surveyed whose emails are out there don't remember signing up.

People just don't care. And that is the crux of the problem.


The point is that the incentives are such that it's not necessarily rational for a young startup to ensure their product stays perfectly secure.

Startups are so precarious and need so much attention on other areas that for most types of startups, security holes are not the biggest existential threat.

And those incentives partially come from users who are more than happy to sign-up for a website someone threw together in a month without any thought about whether it's secure.


CurrentC is ramping up and isn't even taking real customers yet. Their target user doesn't read Hacker News. This doesn't affect their business at all, except for the slowdown of figuring out the attack.


Security is one of those things you can't just toss in at the very end. In order to do it right, you really have to include security in your design from the ground up.

It seems like bullshit at first, but the model of "secure software development" tends to culminate in notably different software designs. For instance, a proper payments service would have security at its forefront. A naive one might process every request that comes its way; OTOH, a secure payments service might choose to process only those requests that seem trustworthy (does the request look suspicious? does the requester have a history of abuse? etc.).
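As a toy sketch of that design difference, in Python, with every check, threshold and field invented purely for illustration:

    from dataclasses import dataclass

    ABUSE_THRESHOLD = 0.8  # made-up cutoff

    @dataclass
    class PaymentRequest:
        account: str
        amount: float
        signed: bool

    # Naive service: every request that parses gets charged.
    def handle_naive(req: PaymentRequest) -> str:
        return "charged %.2f" % req.amount

    # Security-aware service: the trust checks shape the control flow itself,
    # which is why they're hard to bolt on after the fact.
    def handle_checked(req: PaymentRequest, abuse_score: float, typical_amount: float) -> str:
        if not req.signed:
            return "rejected: unsigned request"
        if abuse_score > ABUSE_THRESHOLD:
            return "rejected: history of abuse"
        if req.amount > typical_amount * 10:
            return "held for review: suspicious amount"
        return "charged %.2f" % req.amount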

Security is something that will fundamentally shape the architecture and design of your system. You can't just go back and make these types of gigantic changes.

I guess it all comes down to whether you are willing to accrue technical debt by not designing securely from day 1. If you get funded, you can go back and change things, but (in practice) you can only change so much.


Isn't this a false dichotomy?

Option 3) Spend the day fixing the security problem, then write a transparent post about it thus improving revenue/traction with new/existing users.


No, this isn't an option.

Writing a blog post about "fixing security problems" will not attract new customers for 99.99% of startups, unless security is a core value prop for your product (Stripe, for example).

Your average Airbnb user does not read Hacker News, waiting to sign up for security-conscious services.

Moreover, writing a blog post about how you spent all day preventing XSS, SQLi and similar vectors wouldn't even get to the top of Hacker News, because these are mundane and basic problems with simple fixes. The reaction from a technical audience like HN would be a simple "congrats...?" pat on the back.
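To show how mundane those fixes are: the XSS one is usually just escaping untrusted text before it lands in HTML. A minimal Python sketch, with the comment-rendering function being a made-up example:

    import html

    def render_comment(author, body):
        # Escaping turns a <script> payload into inert text; forgetting this
        # one call is most of what "preventing XSS" amounts to here.
        return "<p><b>%s</b>: %s</p>" % (html.escape(author), html.escape(body))

    print(render_comment("mallory", "<script>alert(1)</script>"))
    # <p><b>mallory</b>: &lt;script&gt;alert(1)&lt;/script&gt;</p>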


There is also the consideration that you will never know your security was insufficient until the day you do.

You can take reasonable efforts to secure your site. You can follow the best practices that you find. You may even have the best programmers money can buy.

But, in the end, until you are caught with your pants down, you may have no idea that you are actually vulnerable.

We make mistakes. Some of them are stupid.


> you may have no idea that you are actually vulnerable

As with Home Depot, who found out from the Secret Service that they were hemorrhaging data.


I don't see why writing secure code and new features are mutually exclusive tasks.


Thank you.


AppSec doesn't solve their problems.

The startup problem is that they need a good product to attract users. If they have no users, poor security doesn't affect many people and can be tolerated. So all their effort goes into improving the product from an end-user standpoint, and security is left for a time when there are enough users to justify it.

Security and moving fast are often at odds.


This is true, and you can easily view it as a form of abuse of user bases: your data is not properly secured and gets exploited, while the company profits and you suffer the consequences.



