We can no longer leave online security to the market (nytimes.com)
215 points by jaredwiener on Oct 11, 2018 | 163 comments



Government regulation? They can't keep themselves secure. There was just a post here a couple of days ago saying how vulnerable the DOD's systems are. How are they going to police others when they can't police themselves?

I work in the banking industry where security IS regulated (by the FDIC). We have government auditors come and review our technology once a year. These guys don't know what the hell they are doing. We have had blatant security problems (now addressed) that they couldn't see right in front of their nose. Community banks have terrible security. Larger ones are better, but still rife with problems.

I fail to see how government regulation and intervention have helped in my industry, or how they would help in any other. If by regulations you mean that we would get fined if some data got compromised, that already happens through negligence lawsuits. It is not an effective motivator, though.

In my experience, the threat/worry of bad publicity is actually the best motivator in a company getting their security up to par.


> Government regulation? They can't keep themselves secure

Wouldn't this be akin to saying "Criminal laws? The cops can't even police themselves!"? It can be true, and you'd still need a framework to define the desired behaviour anyway. Enforcing the standard is an important and separate issue.


> Wouldn't this be akin to saying "Criminal laws? The cops can't even police themselves!"

Yes, it is. Not too long ago there was a major issue with undercover cops in Baltimore committing many of the serious crimes they were supposed to be policing! [0] The commissioner (rightly, IMO) suspended undercover enforcement indefinitely.

[0] https://www.washingtonpost.com/local/public-safety/plainclot...


Criminal laws exist, though, so your objection misses the mark.


> In my experience, the threat/worry of bad publicity is actually the best motivator in a company getting their security up to par.

If that were true, Facebook would not exist.

> I work in the banking industry where security IS regulated. We have auditors come and review our technology once a year. These guys don't know what the hell they are doing.

Regulations do not make problems disappear, but they do make the situation better, provided you vote for politicians who want to improve things rather than politicians paid by lobbyists to free companies of their responsibilities.


> Regulations do not make problems disappear, but they do make the situation better.

I also work in banking (major financial hub in Europe). Regulation is the bane of security and data management because it adds several layers of complexity on top of already complex processes. It leads to people performing repetitive tasks to comply with regulation, leaving no time for in-depth analyses, process reviews and enhancements, and the clean-up of sensitive data.

You offer a baseless assertion, shoehorned in with a comparison to lobbyists nobody ever brought up. I can't prove a negative, but you sure didn't prove your positive.


A big problem is that regulation tends to be pretty porous. Rather than curbing bad behaviour, it just adds, as you say, several layers of complexity on top of the bad behaviour. And the task of handling that extra complexity ends up on the desks of the working grunts keeping the system churning.

Take GDPR: the regulation was meant to give people control of their data and make privacy by default an available option. But in practice it just gives users more hoops to jump through before companies scoop up their data anyway.

Regulations tend to be a bit of a nudge in the right direction, but play out as something systems have to work against to keep things running the way they were before.


A second huge problem is that governments ... don't know how to do security. So they just mandate some random measures.

And then the problem is that people follow their measures ... and see this as absolving them of further responsibility. In many cases in the financial world that isn't just laziness: that's actually how the law works.

Much of the regulatory burden doesn't just consolidate the whole market into large companies; it actually opens up and legally mandates not security, but security holes.


Can you please provide a single case of a high-profile security breach that was caused solely by regulation? That must be easy if what you say about regulation opening holes is true.


The point is not that regulations make the situation better for banks, but for the general public.

I also work for a big European financial institution and am directly involved with reporting to the various financial authorities. Granted, this is complex, and the worst part is that there's very little tolerance for mistakes (each trade that is supposed to be reported but isn't costs the bank thousands in fines).

But you know what? After all that shit that our employers pulled against society at large in 2007 / 2008 I totally support those requirements.

Yeah, self-regulation of the financial industry! What could ever go wrong with that?

edit: Added timeline


My suggestion would just be to significantly ramp up the fines. No need to bother with pointless tickbox compliance audits and all that other stuff. Obviously you would also have to have some pretty strong rules around covering up security breaches - I would suggest explicitly making it a serious criminal offence.

Hopefully the GDPR will have a positive effect here. If you suffer a security breach, you can expect to face severe financial penalties. I'm sure companies will figure out how to secure themselves surprisingly quickly after they see a few of their competitors get fined several hundred million euros.


IF we apply the same to leaks from political organisations, then OK.

Keep in mind that Congress, EU parliament, EU commission and I'm sure many others were all hacked in the past 2 years. Needless to say, they all see themselves as above this whole regulation thing.

And of course, those penalties cannot come from the tax coffers. They need to be levied against the pay of the politicians, because otherwise how could they ever work?

The EU parliament's websites are currently clearly in breach of the GDPR as well. Let's start there, shall we?

As long as this is their attitude, I feel like this is not an acceptable solution.


You are correct, sir. However, the EU commission believes it doesn't actually have to follow the GDPR at all! They were called out on their non-compliant website shortly after the law took effect and announced that for "legal reasons" they didn't have to follow it.


> They can't keep themselves secure... they can't police themselves?

What does that have to do with it? Better laws on security will force the government to police itself better too.

Simple example: a law requiring all passwords to be stored with a unique salt and a hash function of certain minimum strength. Or a law preventing IoT devices from functioning on a network while their password is still set to the default.
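
A minimal sketch of what complying with such a rule could look like, using only Python's standard library (PBKDF2 stands in for whatever "minimum strength" the law would name; the iteration count is an assumption):

    import hashlib, hmac, os

    ITERATIONS = 600_000  # assumed "minimum strength" parameter

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)  # unique per-user salt, never reused
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest    # store these; the password itself is never stored

    def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, stored)  # constant-time comparison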

How do you fail to see how simple actions such as these would help?


Both examples that you give are sound, and I would support regulations that enforced these basic security guidelines. The question is whether these are the types of regulations we would get. I expect there would be rather a lot of useless and silly regulations that do nothing but drive up costs.


This is not a rhetorical question, I'm just trying to better understand how this process would work. Who would be designing and brainstorming these laws in the government?


I do not know the answer to this question. It seems reasonable that a "committee of experts" would be designated by the politicians for this purpose, but I don't feel confident that one could be sure of the expertise involved, or whose interests would be served.


>We have government auditors come and review our technology once a year. These guys don't know what the hell they are doing. We have had blatant security problems (now addressed) that they couldn't see right in front of their nose.

I've seen the same issues in SarbOx audits. The auditors don't know beans about the underlying technologies. A lot of evidence requests take the form of screen captures showing x. Well... I can give you a screen capture showing whatever you want, whether it represents reality or not. Ultimately, with or without regulation, it comes down to people being honest professionals. Regulation is all for show.


>In my experience, the threat/worry of bad publicity is actually the best motivator in a company getting their security up to par.

I think the Equifax debacle has shown otherwise.

Big corps have too much lobbying power and PR presence for public shame to make a lasting impression. Facebook is on the hot seat right now, but that too will pass, since legislators are fickle and myopic. Heck, Google straight up shut down G+ because of "security concerns" and no one batted an eye.


> In my experience, the threat/worry of bad publicity is actually the best motivator in a company getting their security up to par.

The banking industry got the regulation it has now because this did not work.

If the situation is as bad as you describe, then apparently not even the threat of government regulation was sufficient motivation for banks to get their act together.


And as we noticed in 2000, 2008 and in the EU crises:

Regulation does not work either. For 2 main reasons:

* Regulations are stupid and do not catch all problems, which then causes those uncaught problems to become systemic and threaten not just the bank but the entire country, because regulation often also forbids or discourages banks from checking other problems (or at the very least pushes an attitude of "if you've checked compliance with the regulations, the security check is done").

* Governments cannot be trusted to carry out the regulations ("too big to fail")


This is all true. Security is best implemented when it’s baked into an organization’s processes. The government barely has enough budget to pay for server space, let alone invest heavily in dedicated security teams. Most work is handed off to outside private contractors, but they are hamstrung by the same budget issues.

Security concerns are almost certainly best handled by private industry, except in rare cases like national security or the public markets. For example, if Boeing becomes known for being easily hacked and flying unsafe planes, how long do you suppose they’ll be around?

A company’s livelihood relies on the perception of being secure, and companies are well aware of this, so the ones that want to succeed absolutely invest very heavily in security. A successful hack doesn’t mean companies don’t invest in security or that people don’t pay for it.


Companies' lives are on the line; I mean, it is terrible that Equifax doesn't exist after losing all that customer data after being hacked...

Wait


I'm going to exploit this comment, which is at the top of the thread and stands above another comment arguing against government regulation of the software industry, to tell you guys this: hopefully this industry will be regulated from top to bottom before too long. GDPR came; hopefully more will come, w.r.t. security, privacy, and even UX standards (e.g. all companies should be required to accommodate all sorts of disabled people, probably by allowing assistive tech in the browser to work properly on their websites).

You guys will not, and do not want to, fix the status quo where shitty software is pushed onto us. You guys will not stop implementing unethical, "aggressive" software. So someone should be watching over you, entrepreneurs and devs, and that someone is the government.

Government regulation need not be perfect. But it needs to be there. That means companies will be more incentivised to keep their shit together. Surely your bank would be doing worse if nobody was watching over it. If more budget and work time are devoted to such regulation, it will get better.

I understand that no regulation is a strong political position in the US, but I call bullshit on it. I wouldn't bother writing as I'm mostly at the user side of things these days but I wanted to write this given most of you are devs here. It is not about some silly social network or an irrelevant SaaS anymore. The world runs on this, software is as important as medicine and food to our livelihood, and the software industry needs to be regulated like medicine or food industries are. Something simple like Twitter and Facebook affects lives of the masses. You'll have to get your... act together.


Your argument is that banks don't have the will to fix security issues. The parent was arguing that security is hard and that the government is not particularly competent at it so is not in a position to define raised standards. You're not even having the same conversation.


Complex CPU or encryption bugs are what make full security hard. But most security breaches happen because of people doing stupid things: unprotected public databases or S3 buckets, SQL injection, plaintext or easy-to-guess passwords, out-of-date software, etc. I am ready to bet that those alone constitute more than 90% of breaches. And this is the result of mere amateurism. If tech people do not care about security or aren't competent enough to take even the most basic steps, regulation is absolutely the right response.
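
To underline how basic the fixes are, here is the SQL injection case as a minimal sketch (sqlite3 standing in for any database driver):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    user_input = "x' OR '1'='1"  # hostile input

    # Vulnerable: splicing input into the SQL lets it rewrite the query.
    #   conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

    # Safe: the driver binds the value; it can never change the query structure.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()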


So make banks liable for damages as a result of losses from security breaches. Presumably they already are. That solves the problem.


> The parent was arguing that security is hard [...]

It isn't though, at least not compared to the current state of things. Pretty much any government would be competent enough to mandate some sort of two-factor authentication that would greatly improve security and make a lot of phishing and hijacking a thing of the past. Of course different governments would have different success rates, if not in terms of security then at least in terms of elegance. But that is like everything else. People die every day from lack of road safety and healthcare.


I used to work in the security industry, I've seen what government compliance looks like. Regulations usually sound logical and great from a shallow analysis but once you've seen some implemented you'll realise they're often atrocious at achieving their intent.


And what impedes their improvement, such that they should be dismissed in their entirety?


I'm not trying to say that regulations can't and never work, just that most of the time they don't because they're extremely hard to get right. They suffer from the same problems as law - it's extremely difficult to codify intent. Couple that with the fact that lawmakers usually have very little awareness of technical details. Say for example they do in fact mandate 2FA for all banks. But then all banks rush out various implementations to meet the requirements. Some provide SMS-based solutions which have known security risks, some provide codes that don't lock out, some do everything right but now people who can't get a 2FA app (those who don't have smartphones for example) can't access online banking any more. There are accessibility concerns. In the meantime, the security industry has finally cracked federated identity, but banks can't offer it because all access has to be through their 2FA solution.

Obviously that's a fairly bad-case (though not worst-case) example of how things could play out, but I think it serves to prove my point that "just force them to do X" is not always a sound approach. Well-designed regulation using sufficient consultation with experts (actual experts rather than snake-oil consultancies) and with a view to the future and how the state of the art might change can be effective (though still not flexible enough to accommodate exceptional circumstances) but that's the exception rather than the rule.


> Say for example they do in fact mandate 2FA for all banks. But then all banks rush out various implementations to meet the requirements. Some provide SMS-based solutions which have known security risks, some provide codes that don't lock out, some do everything right but now people who can't get a 2FA app (those who don't have smartphones for example) can't access online banking any more. There are accessibility concerns.

I'm not qualified to speak on this subject, but these are excellent points. Could you expand on any other implementations that sound like a great idea on the surface, but would have limitations? It sounds like accessibility is just one of many concerns. I'm itching to hear more.


Those are just off the top of my head, and I'm not an expert on 2FA either. If we're talking about alternative solutions to someone logging in as someone else, you still have to provide a "2FA" solution, because "2-factor" is a description of the problem to be solved (the multi-factor authentication problem): to prove who you are in high-security situations, it's insufficient to provide just something you know, i.e. a password, because someone else can learn that thing. Thus you must provide "something you have", e.g. proving you possess your phone via 2FA apps, or "something you are" via biometrics à la Face ID. There are alternative solutions to this, like USB 2FA tokens or those little PIN pads that banks provide, which are already required by most banks for you to access your account. Other options are proving email ownership via access links like Slack does, automated phone calls, and probably some other avenues. But the fundamental requirement for boosting password security is to provide a non-knowledge-based proof of identity.
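
For concreteness, the most common "something you have" proof today is a time-based one-time password (RFC 6238). A minimal sketch, assuming a base32 secret shared with the user's authenticator app at enrollment:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        # RFC 6238: HMAC over the current 30-second counter, dynamically truncated.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)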


How to regulate this is up to each and every country, just like it is up to each country how to regulate things like pollution, traffic and infrastructure.

That a bank that can't handle security compromises its customers' user experience rather than their security is a good thing.

The reason to regulate these things isn't that it is fun. It is that there are fundamental security problems that need to be, and eventually will be, fixed. Companies like Apple have largely already fixed these problems, or at least could, but only for themselves. If you want to fix it for everyone, you very likely need some sort of mandate.


> That a bank that can't handle security compromises its customers' user experience rather than their security is a good thing.

Why do you say that? It's only true if the cost of breach of security is actually taken by the customer. The alternative is that banks are required to rebate customers for fraud caused by poor bank security, which makes sense to me because it provides financial pressure for banks to beef up security while at the same time leaving in the flexibility to define how that security is improved. It's "here's the problem you need to solve" via financial pressure, not "here's the solution you need to implement" via mandate.


Oh yeah ... I see it now. Instead of the "do you accept cookies" in-your-face idiocies, we now need to identify ourselves using two-factor authentication on every website.

That sounds SO great.

Obviously there are no realistic security measures that are 100% effective. All this will amount to is further cementing the power of large internet companies. You know this, so why ask for it?


"Do you accept cookies" is only relevant because there isn't a separate login mechanism in HTTP. Actually knowing whether you are sharing data with the website, and what website that is, would be a major improvement. Security measures don't have to be 100% effective. Just like road safety you should focus removing the impact of flaws, not to prevent flaws as such. A separate authentication mechanism would remove a large amount of security issues, including potentially phishing and password leaks entirely. These common security issues of compromising the system of the user or the company would simply not have the same impact anymore.

A not-insignificant part of the large internet companies' power comes from the fact that they are the only ones who can handle, or whom people trust to handle, security. It isn't that hard today to create your own e-mail system or smartphone. But managing those systems, especially at a reasonable cost at scale, is beyond what most new entrants in the market can handle.


A government-mandated authentication mechanism. This question is almost a joke: what could go wrong?

Everything can go wrong.

> A not-insignificant part of the large internet companies' power

So it's about breaking the power of large internet companies? Figures. Can we please do that WITHOUT destroying the web? The last regulation that tried to break the power of large internet companies was the GDPR, and that has significantly entrenched the position of the large internet companies instead, while creating a ridiculous amount of inconvenience for everybody. This ... will do the same.

People WANT to share that data. Or perhaps I should say, they want the things that happen when they do. Quick searches that get them the products they want, on Google, on Amazon, on clothing shops and on tons of small webshops. Even the obnoxious image ads. People want them.

That means that a login mechanism will just be an extra hurdle with zero of the effects you want.


Taking an argument to its extreme is bound to make it seem ridiculous. Certain websites require certain levels of security. Not every government building has troops with war-grade guns guarding it.


Perhaps, but would you have said the same if I had put a comment about the "accept cookies" nonsense in a pre-GDPR discussion?

So ... perhaps not.


GDPR is a terrible law. The EU had one terrible law that forced cookie popups on every website, and now GDPR forces even more meaningless popups nobody reads, and I don't even live in the EU.


> now GDPR forces even more meaningless popups

But it doesn't.

GDPR forces companies who are collecting user data to obtain explicit consent for that collection. The fact that companies decided to make your user experience shittier instead of fixing their approach to data collection is the problem there.


The problem is that people don't care, but governments are trying to force them to care. People don't care because they don't bother to look for other websites without cookies, so companies don't care either.

If you enforce a rule on two parties minding their own business, you encourage that rule turning into some kind of mindless ritual: OK, we will follow the letter of the law, but we won't try to follow the spirit, because no one cares.

And this is exactly what is happening with cookies. Companies don't want legal risks (GDPR or not), consumers don't care, voilà! Mindless cookie banners, stupidly long and expansive Terms of Service, etc.


That's a very bad argument. Sure, people may not care enough now, but that's just because the threat is new and poorly understood. There was a time where people didn't care about getting lung cancer from smoking, but then it changed.


I would have become an EU citizen, if I could, the day GDPR went official. It is the second-best thing to happen in this industry after the invention of the WWW itself. Hopefully more is to come, more countries adopt similar measures, and browsers start providing standardised UI for GDPR-related options so that those popups, if they exist, are rendered futile.


Browsers already offered standardised UI for accepting or rejecting cookies. The EU didn't care, and now every website has its own totally non-standard popup that can't be scripted or automated away. It's a disaster that shows how clueless governments can be about this stuff.


Governments move slowly. Security best practices are evolving quickly.

My team is working through FedRAMP/NIST compliance right now. We have sadly adopted the saying, "Security or compliance, choose one." We have literally rolled back a more secure implementation in order to be compliant. Regulations can't keep up.

I'm not ideologically against regulating the software industry, but I have doubts it can be done successfully.


> Surely your bank would be doing worse if nobody was watching over it. If more budget and work time are devoted to such regulation, it will get better.

Banks that got into trouble recently:

* Cypriot banks

* Greek banks

* Monte Dei Paschi

What has the government done? None of the savers still has 100% of their money. So no, I don't think my bank would be doing worse ...

Or perhaps you mean the expert government handling of the bank problems of 2008? Yeah ...

So what was the point again?


Wut? We write bugs all day, every day; we should just quit coding entirely if we follow that sort of logic. Mistakes happen, they get fixed, and those of us acting in good faith help them get fixed.


I tend to agree that there needs to be some regulation. Probably something as simple as fines for breaches, based on the volume and sensitivity of the data leaked. That would make businesses think twice about what they store. It also makes it easier for providers/devs to argue for increased security if it's legally required.

Users (the market) are a big part of the problem. I run a SaaS product, and I've chosen to enforce a magic-link login (one-time, token-based sign-in, like Slack) to mitigate the issue of horribly insecure user passwords (e.g. sn0w3d1n). I get a significant amount of pushback on this, though, and it was probably a bad idea from a business perspective, but luckily I'm in a position to force it anyway.
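
For anyone unfamiliar with the pattern, here is a minimal sketch of how a magic-link flow can work; the in-memory store and names are illustrative, not the parent's actual implementation:

    import secrets, time

    _tokens: dict[str, tuple[str, float]] = {}  # token -> (email, expiry)
    TOKEN_TTL = 15 * 60  # assumed 15-minute validity window

    def issue_magic_link(email: str, base_url: str) -> str:
        token = secrets.token_urlsafe(32)         # unguessable one-time token
        _tokens[token] = (email, time.time() + TOKEN_TTL)
        return f"{base_url}/login?token={token}"  # emailed to the user

    def redeem(token: str) -> str | None:
        entry = _tokens.pop(token, None)          # single use: pop, never get
        if entry is None or time.time() > entry[1]:
            return None                           # unknown or expired
        return entry[0]                           # email is now authenticated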

It's a simple formula. When you increase security, you also decrease convenience. My hope has been that the market eventually demands security (it becomes a service differentiator people value), and maybe some day it will. But you're probably right in that we need big brother to enforce it on people's behalf because most care more about convenience than security and probably don't realize how much damage could be done to them by using the same terrible password for 10 different web sites (including email and banking). I generally take a libertarian point of view on most things, but not exclusively. I do think we benefit from government involvement in some issues, and this is probably one of them. I don't mean to say this from an elitist point of view either. Security is complex and people are busy. It's naive to expect everyone to grok it.

I'd love to enforce 2FA, but it's going to be a complete mutiny from my customers if I do.


Banking seems to be doing pretty fine, actually. Can't remember any cases where customers lost money.

So maybe they are doing something right.

> the threat/worry of bad publicity

Yeah, that hasn't worked for Exxon Valdez, nor for any of the recent data dump incidents.


To the contrary: banking is scared shitless. With internet access to banking accounts, it is not a matter of if you lose money but when.

The underworld is now a multibillion-dollar business and is getting better at its trade every day. It is relatively safe and lucrative.

Identity theft, SIM swaps, the half-a-billion-dollar SWIFT theft in Bangladesh, the South Africa / Japan credit card heist. Just to name a few.

Remember that money, at the end of the day, is based on trust. If you cannot trust that the money in your account is safe, or if your money is not liquid because banks have to manually verify dubious transactions, then your money is losing value.

1 https://en.m.wikipedia.org/wiki/Bangladesh_Bank_robbery

2 https://www.bbc.com/news/world-asia-36357182

3 http://fortune.com/2017/05/05/wire-transfer-fraud-emails/


Presumably we're ignoring Identity Fraud when we say banking is doing pretty fine?

Obligatory Mitchell and Webb "Identity Theft" link: https://www.youtube.com/watch?v=CS9ptA3Ya9E


Banks are doing security right because it is in their best interest.

The average IT company, however, doesn't care much about leaks of customer data, except perhaps for the publicity effects.


I disagree with Bruce's assessment that government regulation is the best solution to this problem. As someone who has read a lot of government regulations regarding technology, I have often found existing regulations to be unsatisfactory when it comes to actually protecting consumers. (recommending old and broken encryption schemes, arbitrary/nonsensical password requirements, etc.) The speed at which the industry moves far outpaces the ability to regulate effectively.

I would much rather see software engineers follow the lead of electrical engineers and embrace non-profit (or even for-profit) certification companies à la Underwriters Laboratories. It would be easier for consumers, as they could just see a seal of approval and know they are getting a quality product. (Think LEED certification for buildings.)

If anyone is interested in starting such an organization, let me know, as I do think it could do a lot of good in the world.


Having the government outline minimum penalties for data breaches would go a long way toward fixing the problem. It’s much easier to justify fixing a known issue or dedicating time to updating dependencies if you know there’s a defined cost (per customer!) of failing to do so.


> Having the government outline minimum penalties for data breaches would go a long way toward fixing the problem.

Then companies start covering up data breaches because disclosing them would cost millions in fines, resulting in people not even knowing when they've been compromised.

You also have the problem where politicians/media have no idea what they're talking about, e.g. calling the Google+ issue a "data breach" when it was actually a vulnerability discovered internally with no evidence of anyone having ever used it. If that's the standard then every time there is a vulnerability in a major operating system or TLS library, no one will be safe from the litigious trolls.


> Then companies start covering up data breaches because disclosing them would cost millions in fines, resulting in people not even knowing when they've been compromised.

Couldn't you make this same argument about any law that punishes bad behavior? As an extreme example, if we make murder illegal, that incentivizes covering up the act, at the expense of closure for victims' families. It seems flawed to me.


It's a lot harder to cover up a murder than a data breach. People notice when someone turns up dead or mysteriously disappears. If some criminals break into your servers, who has any way to know other than you and the criminals?

There is also the issue of intent. Murder is illegal when you intend to do it. Nobody intends to have a data breach. In that case sunlight is more important than punishment because it's in everyone's interest to prevent it happening again, which requires understanding how it happened, which requires cooperation. Putting otherwise-aligned people on opposite sides creates unnecessary conflict at odds with the common goal.


I'm not sure you could cover up a data breach that easily. Those data dumps are going to be sold on the black market eventually, and I speculate that in many cases government agencies will be able to identify unannounced breaches.

Slap a 10x (or even 100x) fine on companies whose data breaches are discovered independently and covering stuff up won't look like such a good idea anymore.


> Those data dumps are going to be sold on the black market eventually, and I speculate that in many cases government agencies will be able to identify unannounced breaches.

Sure, but how do you prove it was covered up rather than merely discovered externally before it was discovered internally?


Covering up a data breach sounds like criminal behavior in the example above. Which, you know, lands people in jail? I seriously doubt many employees will risk jail time so their company is spared a fine.

If it is mandatory to disclose data breaches and equally mandatory to cooperate with full transparency to fix the issue, then are we assuming that employees would act criminally, making themselves accountable, because it would be best for the company?


> Covering up a data breach sounds like criminal behavior in the example above. Which, you know, lands people in jail?

Only if the cover-up isn't successful.


I would like to meet those employees who are so loyal that they will risk jail time for their company. Sure, some people will risk prosecution to work in outlawed political groups, but that is some real dedication in this example. Especially as they have nothing to gain whatsoever.


> Especially as they have nothing to gain whatsoever.

Other than their stock options, their relationships with other employees and potentially their job and career. True, whistleblowers have little to gain, but they have much to lose.


They have their freedom to lose and nothing to gain by not sending in an anonymous tip.

You have groups of multiple people there, where only one has to talk for everyone to go to jail. This is how organized crime has been prosecuted for years. The only one of the guilty who gets out is the snitch.


> They have their freedom to lose and nothing to gain by not sending in an anonymous tip.

Sending an anonymous tip that could result in their company losing a lot of money if not going out of business has a highly undesired effect on their continued employment, future raises, stock options, etc.

> You have groups of multiple people there, where only one has to talk for everyone to go to jail. This is how organized crime has been prosecuted for years. The only one of the guilty who gets out is the snitch.

Prosecuting organized crime works by busting the little fish and cutting a deal to go after the big fish. There is no starting point for that process when you're dealing with an otherwise non-criminal organization. If you're not already aware of their offense you have no reason to be investigating them to begin with and nobody there has the incentive to tell you when they don't expect you to have any other way to find out.

"The only one of the guilty who gets out is the snitch" is also obviously incompatible with remaining anonymous. Anyone would be able to deduce what happened.

You get whistleblowers when someone is sufficiently outraged at what the company is doing to take the risk and try to stop them. Not when the government is threatening severe penalties for a past mistake that has already been remediated.

The NTSB method produces better outcomes than the War On Drugs method.


>It's a lot harder to cover up a murder than a data breach.

Is it? You can murder someone by yourself and be the only person who knows what happened. You as the murderer are strongly incentivized to never tell anyone if you want to remain free.

In a corporate IT department, a bunch of people will have to know just to make a decision on whether or not to publicly disclose it. An anonymous tip could have zero consequences for the individual, even while they remain in their current job. And what is the turnover among IT staff? Once they have another job, they have virtually zero motivation to keep their former employer's dirty secrets.


You could totally make the same argument. And so you go with whichever rules lead to the best societal outcomes. It may be different per crime.


> You also have the problem where politicians/media have no idea what they're talking about, e.g. calling the Google+ issue a "data breach" when it was actually a vulnerability discovered internally with no evidence of anyone having ever used it.

When (1) there's no evidence of anyone ever having exploited the issue; and (2) the logs where that evidence would appear, if it existed, only go back two weeks...

...it seems fine to assume that people have exploited the issue, the evidence was there once, but it isn't now.


By this logic everything is already compromised, because there are no major operating systems that have never had a security vulnerability and most logs don't go back more than a couple of months.


Is that a problem?

"Our logs don't show evidence of any data compromise" is not stronger evidence of anything than "our logs for the last two weeks don't show evidence of any data compromise" if you don't have logs that go back more than two weeks. How much evidence do you think that is? How do you think it might play in the press if the denial was accompanied by the two-week qualifier?


Allowing class action lawsuits of unlimited scope would encourage whistle-blowers and incentivize investigators. Not to mention scare the living daylights out of would be violators.


A treble damages multiplier, with criminal charges and mandatory minimum sentences measured in decades, would be a rather effective deterrent. A breach can be just as devastating to someone as losing a partner or parent to senseless violence and should be punished in proportion to the number of victims. Fines are ineffective. There need to be life sentences and company dissolution for incidents like Equifax.


The War on Drugs method then.

High penalties are irrelevant when people don't expect to get caught. They often make things worse by creating a "no snitching" culture, because the disproportionate penalties are seen as unfair by would-be informants, who are then less inclined to cooperate.


I think it is important to make the distinction that it is the impact of the breach that matters, not the breach itself. If the information gained from a breach, say credit card numbers, is immediately rendered useless by the action of the breached company, should they be penalized? I also find it unlikely that penalties would be accurately priced, which is a whole separate conversation about perverse incentives.

I think most data breaches boil down to poor culture and management. Software maintenance gets cheaper as you do it more frequently since the complexity of deferred maintenance scales exponentially. It's a lot easier to justify doing things when the costs to do them are nearly nothing. This is ideally where we should aim as an industry.


I think it's not simple to predict the impact of a breach. I would rather the breach itself be penalized.

I prefer incentives that prevent spilling the milk to post-spill arguments about how damaging the spill may have been.


> I prefer incentives that prevent spilling the milk to post-spill arguments about how damaging the spill may have been.

Then what you want is subsidies to audit popular software/hardware for vulnerabilities.

This is a classic high transaction cost tragedy of the commons. The manufacturer has no incentive to make secure devices because customers still buy the insecure ones. Imposing liability is difficult because the issues are highly technical (difficult for judge/jury to understand) and the damages are highly speculative and hard to calculate. Imposing specific security standards is equally problematic because of the same bad interaction between technical complexity and politicians.

But the solutions are known -- it basically just requires money for security hardening. So have the government provide the money. Without specific byzantine standards it allows the job to be done properly, and providing the money removes the incentive to cut corners.


> But the solutions are known -- it basically just requires money for security hardening. So have the government provide the money. Without specific byzantine standards it allows the job to be done properly, and providing the money removes the incentive to cut corners.

If you don't force the audit, why would a company want to take the risk of testing its product? I feel that at best, it'll end up like "quality seals" on food items. Yes, I can draw a quality seal in Photoshop too.

But even if companies were somehow willing (how, without forcing them?), you'd need to ensure the audits are reliable and prevent companies from creating a fake rubber-stamping auditing entity and going to market either way. What's the preferred way to accomplish that?

Companies are in a race to the bottom, and they'll do their best to weasel out of "unnecessary" costs.


The point of the audit isn't to get a seal of approval, it's to identify the security problems, which is 97% of the work of fixing them.

There will always be the company which is literally on fire because it's 0.0013% cheaper in the short term, but that's true no matter what you do because that company will be out of business in six months regardless. You can't change their behavior because they're already in the midst of self-destruction by the time you even become aware of their existence.

Any kind of normal company is going to be happy to have a free confidential security audit, and offering that would in practice significantly improve the security of this garbage.


I see your point better now. I'm still not sure a normal company would really be so happy about free audits (due to IP concerns and extra administrative workload). Do we have an existing precedent of something like this working in other industries, or is this something that hasn't been tested before?


It's the same general principle as insurance companies offering no-copay annual medical checkups.


s/milk/oil/, and suddenly your metaphor is an order of magnitude stronger.


I don't disagree, but it's really really hard to determine what proper liability should be. It's easy with things like "Company X left their S3 bucket publicly accessible then put PII in it." It's much less easy when "Company Y got hacked because of a 0-day in their OS kernel, or in their web server that is an open-source project, or even a 0-day in a codebase that they own but have generally very good practices on (stuff happens, even very good devs write bugs sometimes)."

Does the Linux Foundation or a group of random devs on github that gave their code away for free get handed a massive and possibly bankrupting fine for that? And if so, why would anybody release code for free? If not, and you say that "GPL/MIT/etc. warrants no serviceability so it's on the adopter," then why would anybody use open source and open themselves up to the liability when they have zero control over the project and its associated quality control and process?

For this reason I don't think governments can do a good job at making fair regulation that doesn't have severe unintended consequences.

Something that might help tho, is providing easier civil recourse for those affected. For example, if Equifax gets hacked and leaks my identity and somebody buys a car in my name, it should be very easy for me to sue them to make it right. That's something that is clearly broken with our current system.


I say companies should pay a painful fine for all data breaches no matter what. They'll have to buy insurance against it, and the insurers will eventually learn to price the risk properly. And it will also be in the insurers' interest to learn how to audit their client companies.


Yup. Take GDPR. It doesn't try to price user data. If you screw up real bad, it just says, "up to €20 million, or 4% of the worldwide annual revenue of the prior financial year, whichever is higher".
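
That cap is simple enough to state as a one-line formula (the figures below are illustrative):

    def gdpr_max_fine(worldwide_annual_revenue_eur: float) -> float:
        # Upper bound of a top-tier GDPR fine: EUR 20M or 4% of turnover,
        # whichever is higher.
        return max(20_000_000, 0.04 * worldwide_annual_revenue_eur)

    gdpr_max_fine(100_000_000)    # 20_000_000.0 -- the floor dominates
    gdpr_max_fine(2_000_000_000)  # 80_000_000.0 -- 4% dominates for giants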


Failing to disclose a breach should be treated like covering up a crime. Companies are dissolved, fines levied, people go to jail, damages are paid.

Failing to address security vulnerabilities quickly, or to publicize their existence and suspend operations until they're fixed, is negligent and most similar to reckless endangerment. Similarly, hiring inexperienced, unqualified, or otherwise knowingly incapable people to perform a job is negligence. Failing to know and perform what precautions one needs to take is likewise negligent. People go to jail, fines are levied, damages are paid.

A previously unknown exploit used to gain access to a company is like a car accident and should be treated as such. Company pays damages and moves on.


IMO, the most effective part of GDPR is the self-disclosure stuff. Transparency has already improved, and pressure to avoid disclosable leaks has gone up.

I don't know that fines would help that much. I would just double down on the disclosure part: improved policing of non-disclosure (with penalties).


The problem with fines as an enforcement mechanism is that incidents are rare and big.

Fines work when they are assured, fast, and high. But because the incident that triggers fines is not the actual behaviour, but only its later consequences, it's too easy to put off security "for some other time".

It's like a single superhero randomly killing shoplifters every few months: the penalty is far disproportionate, yet the policy is still not going to stop anyone.


This is how I feel also. Businesses will always weigh the potential costs of a breach against the cost of proper security methods. Until you can quantify a breach with hard numbers, they will not take it seriously.


"I disagree with Bruce's assessment that government regulation is the best solution to this problem. As someone who has read a lot of government regulations regarding technology"

I pointed out it worked in the past for security and is working right now for safety:

https://news.ycombinator.com/item?id=16618846

For some reason, word about the successes never gets out. It should since their defect rates are super low.


I think this works as long as the companies face proper financial punishment for data leaks. The reason that UL works is because it is required by insurance. This insurance would probably only exist if data leaks were expensive to deal with.


I think you are both right, as odd as that sounds. Their regulations are terrible; every time they have been asked to come up with an algorithm, it has involved a backdoor that comes back to bite everyone. Meanwhile, security and data usage have been a blindingly obvious externality, impossible to ignore as of Equifax.

Arguably it goes back to identity theft being the consumer's problem in the first place, rather than the merchant's who didn't properly validate. If Alice were to tell Bob the car dealer that Carol was the one paying for her brand-new car when Carol wasn't even there, it isn't Carol's responsibility. Alice would be the one guilty of fraud and Bob the one robbed; Carol isn't even a party to the transaction.


Bruce Schneier's point is that this stuff needs to stop being optional and that there needs to be accountability and liability. Currently there is none. This is not true for other domains: if your car crashes because of some systematic defect, that might result in expensive recalls and liability suits where the manufacturer ends up having to pay damages. Electrical engineers have a legal responsibility, and certification is not optional for them.

If it is optional, just a nice-to-have feature for which you charge extra, there will be companies not bothering. And there will be companies offering cheaper certificates and confusing customers with important-looking but utterly meaningless ones. Our industry is already full of snake oil sellers offering meaningless certificates and guarantees. This type of ass coverage only works when there's a legal stick that makes companies make sure they qualify for the right certificates.

Currently companies get away with really bad stuff. There's no liability. This needs to change. Liability leads automatically to people covering their ass to avoid ending up paying damages or ending up having to do expensive product recalls. E.g. router manufacturers stop shipping updates as soon as they can get away with it; typically when they have a new model they want to sell. If the old one gets hacked their solution is selling you a new one. Not their problem. That's what needs to change. If it has a network connection, it has to be able to receive updates and those updates need to be for the lifetime of the product, not the warranty or the calendar year. Failing to ship critical updates for known vulnerabilities needs to have consequences.

Google currently believes it is totally acceptable to stop shipping security updates for their phones after 3 years. Not their problem if you get hacked, apparently. They don't care. The only responsible alternative (from a security point of view) would be to effectively brick phones when they stop supporting them so that users cannot get into trouble. That would obviously be unacceptable, but somehow leaving your users exposed to known security vulnerabilities is perfectly OK, perfectly legal, and completely without consequences when the obvious things happen.

When bad stuff happens, there needs to be positive proof from all parties involved that they did their legally required best to prevent it. If it's a zero-day bug, fine, you had no way of knowing. Shit happens. But if it then goes unpatched for six months because you can't be bothered to do anything about it, that's a different matter. If it is a bug that was reported and fixed years ago and your product gets hacked because you couldn't be bothered to update your products, that needs to have consequences.


Mandatory ethics/competency training on the engineering side (like ASE certification, but for software people) and quality certification on the product side would go a long way.


The government shouldn't mandate the exact steps companies must take to be secure, because, as you say, it's usually woefully behind the times. But it should heavily punish security failures. Companies can choose whatever method they want to keep customer data safe, but if they fail, there will be hell to pay. That would be the ideal system as far as I can see.


For-profit certification companies can be bought out, e.g. the bond rating agencies in 2008.

I think government regulations can work if they have a mandatory review every X years, or a self-destruct clause that forces lawmakers to update the laws.


Really it is a problem of short-term greed. An impartial and accurate one has an inherent long-term value that can't be beaten except by its own failure. The odd thing is that the big three responsible for lousy ratings didn't wind up supplanted by rivals after that debacle; perhaps I am missing something.


I’ve been thinking the same recently. Regulation doesn’t keep up.

An industry-standard code of conduct or framework, with certification and a seal to show your software adheres to it, may be the best approach. Perhaps source code or data handling processes could be audited under the scheme.


Like a certified-organic seal? (Which is meaningless at this point.)

You regulate the effect; you don't dictate a solution.


As an alternative to government regulation, how about an approach like Underwriters' Laboratories?

Those guys got their start around 130 years ago with regulations about fire doors in factories. Now they have standards for all sorts of EGoT (Electric Grid of Things) devices, from lamps to toasters.

They get their teeth from the fire-insurance companies who back them. If you have non-UL junk in your office, an insurance risk-manager inspector will instruct you to change it. If you have that kind of junk in your apartment, heaven help you if you have a fire and make a claim.

Industrial shops have Factory Mutual filling the same role. One place I worked required Factory Mutual certification. Those guys are not fooling around; they improved our products.

If Walmart and Best Buy discontinued selling uncertified IoT products it would help the cause. Even MicroCenter and Fry's could help get the ball rolling. But they can't do that until a certification process is workable.

A UL or FM approach is more workable with USA attitudes toward government regulation. And workable is what we need.


Fires are incredibly expensive, which makes everyone buy fire insurance and incentivizes fire insurers to push fire safety on their customers.

In contrast, most data breaches are very cheap. In most industries the market doesn't seem to punish breaches at all, so there's only the unquantifiable cost of that data benefiting your competitors.

If there was a multi-million-dollar fine on data breaches regardless of fault and countermeasures, we would get exactly what you describe: the market working to reduce risk and average insurance payouts, making everyone's life better in the process.


This is a "cost for screwing up" approach. The best way to do that is to charge or fine them for such screwups. But, an under-regulated capitalism ideology stands in the way.


Authentication is the real issue. We treat SSNs as a lifelong shared “secret” - shared with just about everyone. When so many need to have this “secret”, trying to hide SSNs is futile.

Someone should be able to steal a database of my info and whatever the shared secret is should only be tied to that org, and of course not stored in plain text.


We should publish all SSNs and then move to a smartcard model where you have a government certificate and can sign things safely.


How will everyone keep their dozens or hundreds of shared secrets safe?


Password managers appear to be growing into this role. As secret questions become increasingly ineffective, the better answers are random and unique ones. It quickly reaches the point that human memory and paper cannot accommodate them.

EDIT: forgot 'not'


Perhaps the consumer wouldn't have to. What the consumer gives out is a public key that lets the org request an access key unique to them. The consumer then gets a notification that Org X has requested an SSN access key and confirms or denies the request. If a company is breached, its keys get revoked. A customer can log into a gov site to manage them.
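
A minimal sketch of that idea; every name here is hypothetical, and a real registry would need signed requests and persistent, audited storage:

    import secrets

    class SsnRegistry:
        def __init__(self) -> None:
            self._grants: dict[str, str] = {}  # access_key -> org_id

        def request_key(self, org_id: str, user_approved: bool) -> str | None:
            # The user approves or denies via the notification described above.
            if not user_approved:
                return None
            key = secrets.token_urlsafe(32)    # org-scoped, revocable credential
            self._grants[key] = org_id
            return key

        def revoke_org(self, org_id: str) -> None:
            # After a breach at org_id, all of its keys die at once.
            self._grants = {k: o for k, o in self._grants.items() if o != org_id}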


From the tone of articles like these sometimes I feel like I must be the only person left in the world without an internet connected refrigerator or robot vacuum cleaner.


You’re not alone! I had to go with the Monoprice sous vide cooker, since it was the only one that could be operated without a smartphone.

It was also cheaper, which is nice.


I wonder when lobotomizing "smart" hardware will become an established practice: opening it up, cutting out the IoT chips, and replacing them with an offline interface. Just as we used to mod PlayStations (to run pirated discs) and remove SIM locks in the past, and jailbreak our phones now, I wonder when we'll start lobotomizing our fridges and desk lamps.

(Also, I consider myself to have just coined the term in this context, unless prior use is demonstrated.)


Your jailbroken iPhone is easier to break into, BTW. The baseband processor, the firmware, all of that is still there and exploitable by the carrier or the authorities.


That was a good choice. There is a nasty worm going around that will turn your water up from 55 degrees Celsius to 56 for 15 minutes ruining any chance of perfect 48 hour short ribs. It hit a lot of Michelin star restaurants in Iran really hard back in 2010.


I’m more interested in having my sous vide cooker continue to work after the company goes under or Google buys them and cancels support.


You jest, but sillier things have been exploited to mine bitcoin or conduct DDOS attacks before.


Pretty sure it was a lettuce centrifuge.


I just ended up building a sous vide cooker with a dumb thermostat, heating element, water pump, and waterproof container. Cheap, and won't be subjected to the 'security' whims of any Internet of Trash manufacturers.


That's why I'm sticking with my old Honeywell 316. No bluetooth or USB on that baby.


I don’t get the premise of this. On the one hand he’s making the assertion that government regulation is needed because consumers won’t pay for added security.

He then immediately goes on to say that in the past security breaches weren’t life-threatening but now they are, because refrigerators and cars are connected to the network.

Okay, so people wouldn’t pay more for security when it wasn’t life-threatening, but this time the threats are different, yet people still won’t pay more? How does he know this if the threats are different this time?

TBH, though, he lost me when he advocated for government regulation as the fail-safe solution.


One thing I've learned in ~25 years of working in what the NYT would now call "cybersecurity": Internet hacking is always "about to get much worse".


I mean, are they wrong? It sure appears to have been getting progressively worse.

Granted, some chunk of that is from an expanding surface area vulnerable to attack, and an expanding amount of valuable data available for the taking.


They are wrong (so far), and it is not getting progressively worse. In the 1990s, it was realistic for an amateur hacker to aim at owning up a whole backbone network. You broke into computers by running "showmount -e" and looking to see which ones were exporting their root filesystems r/w to the entire Internet. In the early 2000s, worms targeting Win32 vulnerabilities were so effective there was almost legislation. Nothing was sandboxed (except for, ironically, Java applets), and virtually every web application on the Internet was riddled with SQL injection. The first time I ever did a professional consulting application penetration test, I logged in as "admin" with 'OR''='.
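
For anyone who hasn't seen that classic payload, here is why it works; the query below is hypothetical, reconstructed from the anecdote:

    password_input = "'OR''='"
    query = "SELECT * FROM users WHERE user='admin' AND pw='" + password_input + "'"
    print(query)
    # SELECT * FROM users WHERE user='admin' AND pw=''OR''=''
    # The WHERE clause is now (pw='') OR (''=''), and ''='' is always true,
    # so the login matches the admin row without any password being known.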

It's a lot more fun to be an attacker today (I mean, if you dig computer science), but I don't know a lot of people in this field who think it's gotten less challenging.


Sure, perhaps hackers can do less damage than they used to be able to do.

On the other hand, there appears to be a lot more damage for them to do, and a lot more data that's been lost/leaked recently. I'd call that "worse".


I think you can agree that _worse_ doesn't necessarily imply _harder_. More critical systems are online, software is more complex, more actors are in the mix, etc. Feels like semantics, anyway.


I don't agree at all that more critical systems are online. What I see instead is a greater recognition of the variety of critical systems that are and always have been exposed, leading in turn to better security for those systems. And we're kidding ourselves if we think that the attackers we're facing today weren't active 10 years ago.

20 years ago, owning up someone's voice mail was a funny joke (teenagers were literally owning up switching systems). Today, we're all carrying HSMs in our pockets. Things are better, not worse.


Owning someone's voice mail, or even a PC, is ultimately very low impact on a societal scale. But what about the increasing number of physical systems that are online: factories, power plants, hospitals, cars, pacemakers (via phone), etc.? Is this not as big a problem as it seems to be?


The point isn't that voicemail is super important; the point is that infrastructure wasn't even secure a decade and a half ago. The systems you're talking about were all exposed then too.


As far as I can tell, the first openly networked pacemaker was implanted in ~2009 [0] (probably earlier without an announcement, but not by much). A decade and a half ago, people pretty much only had a home PC connected to the internet in their house. Now they have everything from their lights to thermostats to home security systems connected. 15 years ago, there were probably some internet-facing systems with large collections of personally identifying information on them, but western society as a whole hadn't yet decided to put all the data one could want to know about them in one place (multiple times over).

Everything might be more secure, as you say, but there are so many more ways a small hole could be exploited to do damage now.

[0] https://www.popsci.com/scitech/article/2009-08/first-patient...


It's clear to me that this is nowhere near accurate, and I'm not sure why you insist on making these sorts of claims.

One only has to look at self-driving cars to disprove you.

Dan Geer also entirely disagrees with what you wrote [1] [2], and you're no Dan Geer, sorry to say.

[1] http://geer.tinho.net/geer.indiana.19x17.txt

[2] http://geer.tinho.net/geer.uncc.5x16.txt


Look, Dan Geer is fine, so I won't snark and say "that's one of the nicer things anyone has said about me on HN"†, but let's be clear: Dan Geer and I have a very different kind of day-to-day workload. We'd probably come to different conclusions about all sorts of things. I would also never in a million zillion years quote Nassim Taleb on anything. He's wrong here, as he has been in the past. We've all been wrong about things! I just happen to be right about this one thing.

† Sure, I just did, but I'm being upfront that it's a cheap and unfair thing to say. I'm human.


You don't agree at all that increasingly critical parts of society have been subsumed by the Internet during the last _28_ years? What planet are you living on?

Please elaborate because I don't see how you can even remotely defend what you wrote.


Well, angry anonymous commenter, I've been working in software security since (checks notes) 1993, and professionally since 1995, and the claim you're making just doesn't hold up. "The Internet" may have subsumed all sorts of things, but it's ~1.5 decades behind computers and telecommunications. Before there was an Internet, the world ran on dial-up modems and X.25, and people were breaking into things then too.

Things have gotten better, not worse, and personally, if I was being more aggressive about the argument (which I guess I am now), I'd go further and say you can't have been paying any attention in the 1990s (or to the history of what happened in the 1980s) and think otherwise.


I'm stating the following since I've seen you appeal to your work history and say "trust me, I've been in this for a long time" far too many times to give you a pass here. There are plenty of tptacek posts on HN where it is crystal clear to anyone with similar years in the domain as yours that you're either entirely wrong or deliberately misleading. You need to make a proper argument if you want to convince me.

There were computers, telecommunications, dial-up modems, X.25 and private networks in the 90s, but the degree of cohesion, subsumption and interconnectivity wasn't anywhere close to what we have today. Consequently, the actor domain looked very different, and concepts such as cyberwarfare weren't even in the public eye. Morris worm vs NotPetya. Sure, the barrier to entry was very low compared to now. But, as Dan Geer has repeatedly shown, risk has grown tremendously even as the field has gotten a lot harder. You don't think that completely disproves you?


Dan Geer is wrong. I already made the argument, upthread. I'm responding to your appeal to his authority with an appeal to my own experience. It has gotten harder to break into things, not easier. The Morris Worm was arguably a bigger deal than NotPetya --- certainly, it was more sophisticated. Every couple of years, there's some malware or other that manages to infect huge numbers of machines. IIRC, Nimda took down the entire Navy Marine Corps Intranet. They said then that we had crossed some threshold, and from now on attackers were just going to get worse. Who cares? Guess what: the opposite thing happened.


I can't disagree from a technical perspective, and neither should anyone else. But that just isn't that relevant. Sure, you could hack the entire world 20 years ago, but there just wasn't that much impact.

If you read almost any constitution, it protects "life and liberty". Today those things are being impacted by a lack of security. People's messages, private pictures, assets, infrastructure, opinions and even geopolitics are all affected.

Yesteryear, the most you could do was largely expose someone's password, read their university e-mail and steal some source code. Relative to the impact, security is a lot worse today.


well, hasn't it?


There are already a lot of safety regulatory agencies in the world. These government-sponsored organisations managed to make food and other consumer products a lot safer.

These types of discussions often end up as "mass control by the government" versus "the free market will solve it by itself".

The sweet spot is somewhere in the middle, like many of these agencies have proven for decades.


Here’s a hard reality: the people who can fix the security problem are the ones who are already working on it. There’s no magic dust or elite squad of cybersecurity professionals that is going to walk through the doors of a FAANG and turn things around. Security is hard, and security at global scale is still an unsolved problem that no one has the answer for. What I do know from working in the valley is that neither the government nor the media has the damndest idea what they’re talking about when it comes to technology, and they certainly wouldn’t know how to secure Google, let alone keep search running globally for a night.


Most people working on security don't have much power, and are reduced to tinkering with small incremental improvements that don't break anything.

Case in point: if you look at the history of malware infections in organisations, one vector has historically stood out since the early 2000s: office/PDF attachments in emails. Feeding untrusted, unauthenticated, complex office formats to insecure productivity applications has obviously been a catastrophic combination, but nothing was done about it despite weekly new public vulnerabilities and pwnage continuing for over 20 years.


The market won't solve it. Regulations won't solve it. Why can't we just have meaningful legal frameworks where impacted parties can sue these negligent corporations?


There’s a YouTube talk from Uncle Bob where he foretells a future in which software kills some people and then regulations are put into place governing software development.

I think he was close in his vision. It will be Facebook and Google and millions of IoT devices that push us to that future.

Big companies won’t be hurt, but your startup better be able to afford the certs or too bad.

https://youtu.be/ecIWPzGEbFc


The #1 risk is nation states. They became a threat by convincing politicians that "cybercrime" is ordinary violence. Violence is something the state has a monopoly on, so the idea is easy to sell. However, online security should be left to the market; otherwise we'll only see more "violence": vulnerabilities kept secret and online targets being compromised.


If anything should be regulated, it's ethical hacking and security research. Having a bug bounty program that is accountable to some government agency should be mandatory for every single enterprise that does some form of development, and the prizes/awards should be some function of the revenue and the severity of the vulnerability.
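
As a strawman for what "some function of the revenue and the severity" could look like (every constant here is invented for illustration):

    def mandated_bounty(annual_revenue, cvss_score):
        # Hypothetical formula: a revenue-indexed base payout scaled
        # by CVSS severity (0-10), with a floor so low-severity
        # reports still pay something.
        base = annual_revenue * 0.0001  # 0.01% of revenue
        return max(500.0, base * (cvss_score / 10.0))

    # A company with $100M revenue, critical bug (CVSS 9.8): $9,800.
    print(mandated_bounty(100_000_000, 9.8))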


Governments are not neutral parties. They have an interest in mass surveillance. Moreover, I don't believe a neutral party is possible. Organized groups with enough motivation and resources will find a way to influence neutral parties. Just take a look at journalism, or even education, today. Both of those groups were intended to be neutral, but they are not, because they are heavily influenced from outside. I know education is regulated, but it shouldn't be. I don't believe the original intention was for heavy-handed political parties to use education to mold young, impressionable minds to their liking.

On a side note: companies and governments alike believe that a single security audit before each release is sufficient (many don't even do that much). They are wrong. Instead, they should be hiring a team of full-time penetration testers that work in parallel with standard quality control testers.

Now back to my original train of thought. I believe the solution is exactly what we have -- natural selection. When the financial loss exceeds the executives' tolerance threshold, they will either fold or adapt. The organizations that are better at adapting will survive. It will take time, but as long as the losses are great enough, natural selection will affect the course of things to come.


"The primary reason computers are insecure is that most buyers aren’t willing to pay — in money, features, or time to market"

I'm not sure this is true. That the market is not producing adequately secured stuff is a fact, but... It strikes me as similar to "journalism is broken because people aren't willing to pay for good journalism anymore". Maybe it's true in a sense, but I don't think it's a useful sense.

It's not like computers come in regular or secure, with a 20% discount on regular. Money is not always a direct lever on things. Some software has crappy UI; this does not generally correlate to UI spending. A much bigger influence is the type of market that software is in. "Enterprise" will likely be much worse than consumer stuff because of market structure, incentives, and feedback loops.

Bureaucracy/rules come with costs that can't be easily priced too.

For example, GDPR...

The writer complains that current laws are written from a naive perspective, as if the internet existed within their jurisdiction. That naivety is inherent in regulatory/rule-based systems.

GDPR was written as if it would be implemented by people who write software. It isn't. It is implemented by lawyers, hired by companies to "do GDPR." Mostly, the lawyers have reduced this to paperwork: policies that must be meticulously written, checkbox software that must be installed, agreements with vendors that must be updated.

All things that cost money, put lawyers and compliance officers in more powerful positions, and do very little to improve user privacy and agency over their data.

If you want to start a company in a regulated market, your first hire is a compliance expert, preferably one with a personal relationship with that specific regulator.

Regulators are process oriented, not results oriented.

For example, let's say some drug is overprescribed. Regulators respond with new small print that must be included in ads. They will meticulously measure "compliance" but may not even take an interest in results; i.e., they may not check whether sales/consumption of the overprescribed drug have gone down.

Anyway... whether through regulation or whatever, security is hard. It is almost always reactive, responding to past crises.

Personally, I'd start with laws (not regulators) targeting after-the-fact disclosure. I think self reporting is the most useful/successful part of gdpr, for example.

Light helps. It can also create the pressures, incentives and information required for change.


Such a government certification organization (or something like that) will harm local firms and benefit Chinese ones, because Chinese companies will ignore all those regulations and flood the market with cheap devices. Honestly, I cannot see any solution other than a mass education system that teaches people what security is and how to use it. Every consumer MUST clearly understand what exactly he/she is going to lose when using an insecure appliance.


> flood market with cheap devices

To some extent that already happens with safety regulations, and imports/sales of these products are illegal. That doesn’t mean that you should just give up.


Exactly. It's trivial today to purchase electronics without any required certifications (UL, FCC, etc.) from AliExpress, etc. However, those certification programs are still alive and well, and knowledgeable consumers can still seek them out.


"The National Institute of Standards and Technology’s Cybersecurity Framework is an excellent example of this... The Cybersecurity Framework — which contains guidance on how to identify, prevent, recover, and respond to security risks — is voluntary at this point, which means nobody follows it."

How "excellent" of an example could it be if no one follows it?

If he's worried about low-cost devices today that don't have security teams, it seems that fining companies for having security issues could lead to some percentage of them going bankrupt, which in turn would lead to more devices that are abandoned by their manufacturer post-launch.

I also think it would, to some degree, stifle innovation. Even if what's involved is paying some fee for some new security technology or license, that's still less money that a startup can spend on the part of the product that customers are paying for.

I wouldn't say we shouldn't have any sort of regulation whatsoever, I'm just skeptical that the government could do a good job of it.


>> We also need our standards to be flexible and easy to adapt to the needs of various companies, organizations, and industries. The National Institute of Standards and Technology’s Cybersecurity Framework is an excellent example of this, because its recommendations can be tailored to suit the individual needs and risks of organizations. The Cybersecurity Framework — which contains guidance on how to identify, prevent, recover, and respond to security risks — is voluntary at this point, which means nobody follows it. Making it mandatory for critical industries would be a great first step. An appropriate next step would be to implement more specific standards for industries like automobiles, medical devices, consumer goods, and critical infrastructure.

> How "excellent" of an example could it be if no one follows it?

It can be very excellent indeed, from the computer security perspective. The problem of why it's not followed is probably twofold: 1) organizations don't know about it (and aren't motivated to find out), and 2) business leaders don't want to spend the money to implement it if they do know. Making it mandatory nicely solves both of those issues.

> If he's worried about low-cost devices today that don't have security teams, it seems that fining companies for having security issues could lead to some percentage of them going bankrupt, which in turn would lead to more devices that are abandoned by their manufacturer post-launch.

That's no big loss, because those devices are inevitably abandoned today.

> I wouldn't say we shouldn't have any sort of regulation whatsoever, I'm just skeptical that the government could do a good job of it.

The government will do a better job at regulating in this area than anyone has ever done before, because no one has ever tried.


Continuous security upgrades mean buyers no longer buy a device. They buy a service, and should be charged for one.


I don't buy the regulation argument. Regulators will not have a sufficient understanding of the systems they are regulating, and of course regulation is too slow to react to a dynamic phenomenon. Perhaps more basic regulation stipulating liability if something goes wrong would be of use.

I suggest we apply some lateral thinking to the underlying problem and approach it from another perspective entirely. Over the last 30 years or so, really since the end of the Cold War, high on idealism and technological utopianism, we've built a whole new high-tech infrastructure to replace the low-tech infrastructure that preceded it. In so doing, we have invariably embraced technologies that we did not and do not understand, technologies that have never really been tested (as in subjected to the test of time). Was this wise? Should we be using new, unproven technologies for security-critical systems?

These new systems have untold vulnerabilities, and their often centralised structure makes them very susceptible to disruption. Should we not be building robust, decentralised, low-tech solutions instead? Could something as fundamentally vulnerable as modern undersea cables have survived a cataclysm like the Second World War? I anticipate that any valuable data sitting on a networked device anywhere is at risk of eventually being lost, leaked, or stolen. Any networked safety-critical system will be hacked or otherwise exploited (or fail catastrophically). It is only a matter of time.

So much of modern (hybrid) warfare hinges on sowing discord and confusion, using disinformation and misinformation to cripple adversaries--and we have collectively built an infrastructure that is tailor-made for this kind of disruption. By that I mean virtually everything that has come into being in the last 30 or so years, from complex global supply chains to modern banking. How much of what exists now would survive a SHTF scenario (not hard to imagine)? Again, we should be designing systems to be robust, decentralised, secure, and wherever possible, totally independent of high-tech gadgetry. What use would 'identity theft' have been in the 1970s? Exactly.


Why? Incompetence will always exist. Now you're just wanting to stifle the free internet.


We can no longer leave article writing to Muppets. All journalists should be HIGHLY regulated, certified and only licensed journalists are legally allowed to write articles for the public.


I think he's right - the line between regular hacks and cyber warfare has been blurred to invisibility, and defending the nation is one of the most basic of governmental responsibilities.


That’s a really interesting point. A good example is cops patrolling streets. But if these cops force their way into your house to “protect” you without your consent, it’s not protection anymore, so voluntary consent is necessary to make it protection and not a regulation.


Cops have always been able to force their way into your house to protect society. They aren't just protecting you, they are protecting you as part of a larger group that employs them.


We should fine companies for data breaches.


At best, we will end up like China.


Online security will always be cat and mouse. That has nothing to do with the market. If you get the government to step in, then it will just be the government that fails instead of the market.


It's not entirely cat and mouse, especially in some older banks. There is a local hometown bank whose online banking only works in IE6. It's a joke.


The bank example is interesting because I do think banks might belong to a special class of institution that deserves to be regulated, but I think most other businesses really don't need it.


I would rather take any insecure device over any regulation from any government, particularly the US. Your government has no business interfering in what hardware I decide I should run and from who. You are a muppet.



