
As a developer, do you really want to live in a world where "security is a top priority" at every company? Does such a world even make economic sense after accounting for the opportunity cost of the time that most developers would otherwise spend actually building new products and features?

While companies could probably do better than they are right now, hacks like this are probably never going to be eliminated. There are too many companies and too many developers for nobody to make mistakes, even when they're being mindful not to. Investing in solutions that assume hacks will happen seems reasonable to me.




Yes, yes I do as a developer! The thing is that a lot of these "hacks" aren't even that sophisticated. A lot of them are engineers not paying enough attention. The security dimension of many, many products can be improved tremendously by picking off some low-hanging fruit.

Ever since companies like Google pushed for HTTPS, it's proliferated all over the place. Just by Google emphasizing it and talking about the need for secure communication even inside one's own network, my own company started doing the same. Enabling HTTPS and SSL wasn't that hard, especially once companies like Let's Encrypt came along. It just wasn't prioritized. Once it was, our engineering team made it super easy to get certificates from LE, and we all learned standardized ways of securing our traffic. Security is often low priority because people are really bad at planning for unlikely events with potentially catastrophic consequences.

I'm not saying we can be invulnerable but we need to raise the lowest common denominator so that it's not a walk in the park to steal millions of records. You just need the weakest link to make everyone vulnerable but I do think positive collective behavior can counter that -- especially when you make it easy with things like Let's Encrypt.
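To make "secure by default" concrete: modern standard libraries increasingly bake the safe choice in, so developers get certificate verification without doing anything special. A minimal sketch using Python's `ssl` module (standard library behavior since Python 3.4, not anything specific to the companies mentioned above):

```python
import ssl

# A default context verifies the server's certificate chain against the
# system trust store and checks the hostname -- no extra configuration needed.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

This is exactly the kind of low-hanging fruit described above: the secure path is now also the easy path.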


> Yes, yes I do as a developer! The thing is that a lot of these "hacks" aren't even that sophisticated. A lot of them are engineers not paying enough attention.

I don't think you have quite thought it through. Do you honestly want to have to do code audits on all the libraries you use? Freeze all versions? Have a chain of signoffs for every change?

I have briefly done consulting in a place like that -- developers were absolutely miserable. Think about every single corporate IT policy that exists and apply it not just to your desktop/laptop/phone but to what you do on that desktop/laptop/phone.

Security is about management of risk.


> I don't think you have quite thought it through. Do you honestly want to have to do code audits on all the libraries you use? Freeze all versions? Have a chain of signoffs for every change?

If developers demand that the tools they use are better built, then the market will deliver tools/frameworks/etc... that are secure from the start.

"Good" coding has become "good enough" coding, and the problem exists from the bottom of the stack to the top.


> If developers demand that the tools they use are better built, then the market will deliver tools/frameworks/etc... that are secure from the start.

This is never going to happen because what is considered secure in one place is not considered secure in another place.

> "Good" coding has become "good enough" coding, and the problem exists from the bottom of the stack to the top.

Because it is about risk management, not about absolutes. It is absolutely irrelevant that the smart Samsung TV in my office has garbage security, because it is used as one thing and one thing only: a dumb 48" HDMI monitor not connected to any wireless network. Its WiFi antenna connector has been cut. It matches my risk profile.


It’s not “either or”, but as someone who has worked at various places along the spectrum of practices ranging from “default password is password” to DO-178B [1], I greatly prefer environments with strict and rigorous design, testing, change control, and security auditing. The chaos of moving fast and breaking things (and fixing them, and breaking them again, and fixing them again, then getting hacked and having to pull 24 hour days to mitigate...) is a recipe for burnout.

1: https://en.m.wikipedia.org/wiki/DO-178B


If DO-178B were applied to the internet, I would not be surprised if in 2018 we were still thinking of uucp as an amazing invention.

I'm going to repeat it again - we do not have a security problem with software. We have a risk management problem.

There's absolutely no reason for Marriott to store information on previous guests past a certain statute of limitations. In fact, they could probably have offloaded it to Iron Mountain after 180 days. Storing it online has a certain risk profile. That risk was not correctly evaluated (probably not evaluated at all) and hence it was not minimized.

Storing credit card information (even encrypted) after the card has been charged and the transaction completed creates another risk profile. It also was not evaluated, and it was not mitigated.

Businesses are obsessed with data without understanding the risk.


'As a car designer, do you really want to live in a world where "safety is a top priority" at every company? Does such a world even make economic sense after accounting for the opportunity cost of the time that most designers would otherwise spend actually building new products and features?'

Most professions and companies are (at least in theory) held accountable for their impacts.


No car on the market is as safe as the absence of a car. Car companies make tradeoffs towards safety where it's reasonable and economical, but still fulfill their baseline mission, which is inherently dangerous. People are injured and killed in car crashes every day; car companies are not "held accountable" unless there's a specific defect and they should have known better.


Such as a company knowing it should keep its servers patched, have a process to make sure its servers are patched, have a process that shows a list of servers that are _not_ patched, etc.

There are a lot of really stupid mistakes made in a lot of these data disclosures that a competent IT team (and dev team) can prevent from happening. The current state of things is that there are hardly any consequences for losing people's data, just make a bulk purchase of credit monitoring and call it a day. This is cheaper than actually hiring the right people and implementing the correct processes.


Q: What's safer than sky diving? A: Not sky diving.


As a car driver, do you want to live in a world where "braking for pedestrians in crosswalks is a top priority" on every trip? Does such a world even make economic sense after accounting for the opportunity cost of the time that most drivers would otherwise spend moving toward their destinations?


Haha THIS is spot on. Sure, 1 person's address isn't the end of the world ... but 500,000,000 people's information in 1 incident is class action material


And it's not like I'm advocating that every single company needs bulletproof security that can stand up to nation-state adversaries with budgets bigger than the company's. I agree with GP that it just wouldn't be economical.

To stretch the car/driver analogy, you could limit all cars to 10 mph so that they can stop fast enough when a deer runs into the road unexpectedly, but that's probably not worth the tradeoff.

Pedestrians, on the other hand, are a predictable fact of life that you need to deal with when you get in a car. So are bad people on the internet. If you put something on an internet connection and aren't constantly aware of that, you should not be putting it on the internet.


Car companies absolutely quantify risks and make decisions based on it. It is still more about bottom line than safety. When a version of a car fails some tests, they will estimate the cost of a recall versus the cost of a lawsuit. Whichever is smaller wins.


> Car companies absolutely quantify risks and make decisions based on it. It is still more about bottom line than safety.

Right, which is why we should increase consequences when there are data breaches so companies may actually care about them when they happen.


That "care" would come out of customers' pockets.

Why do you think customers are ready to pay extra for the extra data security?


As a developer, yes.

I really wish more developers had at least a basic ethical grounding and didn't just go "fuckit, revenue!". (Or, in larger companies, "fuckit, my boss told me")

And when you consider opportunity cost: even if double-checking that you aren't affected takes just a minute of a consumer's time, this hack wasted close to a thousand years of human life.

Where's the accounting for the opportunity cost of that?
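(A quick sanity check of that figure, assuming the widely reported ~500 million affected people and one minute each:)

```python
affected = 500_000_000            # people affected, per the reported figure
minutes_per_year = 60 * 24 * 365  # minutes in a (non-leap) year
years = affected / minutes_per_year
print(round(years))  # 951 -- indeed close to a thousand years
```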


There is no such thing as being "done" with security. You can go as deep as you want, with as large a team as you want, and never be able to say "okay, we're secure now."

If basic ethical grounding requires security to be the top priority, and security work is inexhaustible, then it must be unethical to ever work on the product being secured.


No, but there is such a thing as "following best practices".

An ethical approach requires you to reason about which actions are moral, not to be "done" with something. As I said, even a basic knowledge would be really helpful.


It's easy to reference amorphous "best practices." As Tanenbaum said of standards, the nice thing is that there are so many to choose from. The real challenge is deciding which practices apply, and which authority figure to recognize when determining "best."


That involves both declaring bright lines, and making tradeoffs between cost and risk. It's almost as if it was a question of ethics.


I agree. But following best practices is a completely different thing from treating security as the top priority. Best practices include tradeoffs that balance security risk with cost and business needs.


There is no such thing as being "done" with safety. You can go as deep as you want with as large a team as you want and never be able to say "okay we're safe now."

If basic ethical grounding requires safety to be the top priority, and safety work is inexhaustible, then it must be unethical to ever work on the product being made safe.


Absolutely correct. Safety is about managing risks, not eliminating them completely at all costs. An airline which truly saw safety as the top priority would never put a plane in the air. Making money is the top priority; safety (or security) is one consideration that influences how you go about it.


Only a handful of people will actually bother to check whether they were affected.


That will be more than mitigated by the large number of people who'll have to fight off identity theft. It's also an intentionally low-balled number.

In other words, this is an irrelevant nit that serves no purpose except derailment.


As an architect do you really want to live in a world where "structural stability is a top priority" at every company?

Does such a world even make economic sense after accounting for the opportunity cost of the time that most building designers would otherwise spend actually building funky new shapes?

Investing in solutions that assume buildings will collapse seems reasonable to me.


> As an architect do you really want to live in a world where "structural stability is a top priority" at every company?

SQL injection is bad, but not "a bridge with 50 cars collapsed over a city" bad.


How about if 150M cars were robbed? Is SQL injection as bad as a building collapse then?


I want to unpack a few assumptions before responding.

1) There are 150 million vehicles which can be remotely controlled via the vehicle manufacturer's software, which has generally mediocre application security.

2) The software in question is vulnerable to SQL injection, allowing up to 150 million vehicles to be remotely commandeered by a small group of attackers.

3) No hostages are taken and no owners of cars are deliberately harmed, because this is an application security scenario and not a kidnapping scenario (which is orthogonal).

The scenario you've posed is oddly florid...thinking through it, no, I don't think the robbery of 150 million vehicles is as serious as a bridge collapse with 50 (presumably occupied) vehicles on it.

Speaking more directly to the point - I think this is a really poor comparison. Logistically speaking it's hard to take seriously the idea that 150 million cars would actually be stolen because of any single SQL injection vulnerability. SQL injection is really bad, but it doesn't directly result in injury or loss of life. It's also hard to conceive of a situation in which SQL injection has the potential to cause systemic collapse like you're describing...maybe SQL injection to a database containing credentials that have write access to a server which can launch ICBMs?

In the modal case, I think it's okay to admit that application security is not as serious a concern as architectural stability. But this entire discussion is pretty much a sideshow; we can just all agree that security needs to be taken seriously and that some bureaucratic scar tissue is okay to make that happen.
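(Tangentially, since SQL injection keeps coming up in this thread: it is also one of the cheapest classes of bug to eliminate, via parameterized queries. A minimal illustration using Python's sqlite3; the table and payload are invented for the example:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice",))

# With a parameterized query, attacker input is bound as data and never
# spliced into the SQL text, so the classic injection payload matches nothing.
payload = "alice' OR '1'='1"
rows = conn.execute("SELECT * FROM users WHERE name = ?", (payload,)).fetchall()
print(rows)  # []
```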


It is funny that after all the posturing in this thread, @throwawaymath's post, which is one of the few discussing risk assessment, is getting downvoted while posts throwing around absolutes and lofty goals are getting upvoted.


It's hard to imagine an SQL injection scenario ever being worse than a building structurally collapsing (Rana Plaza in Bangladesh comes to mind), but anything is possible, I guess.


Tell that to Iran. Not literally; you see where I'm going.


Iran is, probably, better off with their nuclear program delayed.

In any case, the damage to Iran from the hack was not as significant as a building collapse.


Yes. If companies are not considering privacy and security of their customers to be their top priority, then they have no place on the internet. None.


Do you want to take away my freedom to choose how much security I have to deal with on internet web sites?


Yes. If I have to give up my security and privacy so that you can have less friction, then I most certainly want to take that away. If the online business or organization can give you less friction or security without impacting my security or privacy, then I do not see an issue.

The situation is identical to you wanting to have an untamed lion in your back yard. Provided you have the right security in place to ensure it can't hurt me, your neighbor, then the litter box is your problem. If however you do not have the right protections in place, then I have every right to ensure the lion is removed from the neighborhood.


Yes, and it will feel weird, but developers who value privacy will have the insurance companies backing up their advocacy for storing less data on customers.

If you don't store valuable data, you won't have large premiums.

If your business model requires storing such data, you better have the revenue to pay the premiums.


As a developer, no. As a consumer, yes.


I doubt it - it would increase the cost and slow down the innovation for questionable gain.


As a developer, I want to live in a world where I can make a business case for security of something along the lines of "if we don't do this, we'll be hit by crippling fines"


As a developer, yes I do - it will only drive up the demand for skilled practitioners of the craft.


The problem with the internet is that security is an after-thought. The solution is to build security into the communication protocols, and that involves data structures like merkle trees and blockchains.


Why do we need a blockchain or a merkle tree? We have TLS, SSH, PGP, a number of VPN solutions...blockchains and merkle trees are consensus and versioning protocols, not security protocols. Their use of cryptography is orthogonal to the traditional security goals of confidentiality and authentication.
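(To make the distinction concrete: a Merkle tree is an integrity/versioning structure, a tamper-evident digest over many items, not a confidentiality or authentication mechanism. A toy sketch, not any particular blockchain's exact construction:)

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Hash each leaf, then hash adjacent pairs level by level
    # until a single root digest remains.
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Changing any leaf changes the root, so tampering is detectable --
# but nothing here is encrypted and nobody is authenticated.
assert merkle_root([b"a", b"b", b"c"]) != merkle_root([b"a", b"b", b"x"])
```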


The problem is that incentives aren't aligned. Companies don't care about your personal data; they only care about your metadata, so they won't invest resources into protecting your personal security. TLS, SSH, and PGP are all communications protocols; they provide no rules concerning value exchange, which is what account creation is. When you create an account on a 'free' platform, you're de facto making an exchange of data for value. The issue here is that the transaction is one-sided, because there are no guarantees on personal security. If your account information follows you around the web in the form of a public key, then you're in control of your personal security.


Yes.

I can see a time when software developers (leads, at least) will need to be chartered, just like someone who designs a bridge.


As a developer yes. If that cost can't be baked into building new products, either the developer needs to learn how to emphasize the importance, or that company needs to go out of business.



