I work in the banking industry where security IS regulated (by the FDIC). We have government auditors come and review our technology once a year. These guys don't know what the hell they are doing. We have had blatant security problems (now addressed) that they couldn't see right in front of their nose. Community banks have terrible security. Larger ones are better, but still rife with problems.
I fail to see how government regulation and intervention has helped in my industry, or how it would help in any. If by regulations, you mean that we would get fined if some data got compromised, that already happens through negligence lawsuits. It is not an effective motivator though.
In my experience, the threat/worry of bad publicity is actually the best motivator in a company getting their security up to par.
You guys will not, and do not want to, fix the status quo where shitty software is pushed onto us. You will not stop implementing unethical, "aggressive" software. So someone should be watching over you, entrepreneurs and devs, and that someone is the government.
Government regulation need not be perfect. But it needs to be there. That means companies will be more incentivised to keep their shit together. Surely your bank would be doing worse if nobody was watching over it. If more budget and worktime are devoted to such regulation, it will get better.
I understand that no regulation is a strong political position in the US, but I call bullshit on it. I wouldn't bother writing as I'm mostly at the user side of things these days but I wanted to write this given most of you are devs here. It is not about some silly social network or an irrelevant SaaS anymore. The world runs on this, software is as important as medicine and food to our livelihood, and the software industry needs to be regulated like medicine or food industries are. Something simple like Twitter and Facebook affects lives of the masses. You'll have to get your... act together.
It isn't though, at least not compared to the state of things. Pretty much any government would be competent enough to mandate some sort of two-factor authentication that would greatly improve security and make a lot of phishing and hijacking a thing of the past. Of course different governments would have different success rates, if not in terms of security then at least in terms of elegance. But that is like everything else. People die every day from the lack of road safety and healthcare.
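A "2FA mandate" is also less exotic than it sounds: the entire mechanism fits in a screenful of stdlib Python. Here is a minimal sketch of RFC 6238 TOTP generation and verification; the ±1-step drift window is an illustrative choice, not anything mandated:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_totp(secret_b32, candidate, t=None):
    """Accept the previous, current, and next time step to tolerate clock drift."""
    now = time.time() if t is None else t
    return any(hmac.compare_digest(totp(secret_b32, now + d * 30), candidate)
               for d in (-1, 0, 1))
```

The base32 string below encodes the RFC 6238 test key, so the expected code comes straight from the spec's test vectors rather than from guesswork.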
Obviously that's a fairly bad-case (though not worst-case) example of how things could play out, but I think it serves to prove my point that "just force them to do X" is not always a sound approach. Well-designed regulation using sufficient consultation with experts (actual experts rather than snake-oil consultancies) and with a view to the future and how the state of the art might change can be effective (though still not flexible enough to accommodate exceptional circumstances) but that's the exception rather than the rule.
I'm not qualified to speak on this subject, but these are excellent points. Could you expand on any other implementations that sound like a great idea on the surface, but would have limitations? It sounds like accessibility is just one of many concerns. I'm itching to hear more.
That a bank that can't handle security compromises its customers' user experience rather than their security is a good thing.
The reason to regulate these things isn't that it is fun. It is that there are fundamental security problems that need to, and eventually will, be fixed. Companies like Apple have largely already, or at least potentially, fixed these problems, but only for themselves. If you want to fix them for everyone, you very likely need some sort of mandate.
Why do you say that? It's only true if the cost of breach of security is actually taken by the customer. The alternative is that banks are required to rebate customers for fraud caused by poor bank security, which makes sense to me because it provides financial pressure for banks to beef up security while at the same time leaving in the flexibility to define how that security is improved. It's "here's the problem you need to solve" via financial pressure, not "here's the solution you need to implement" via mandate.
That sounds SO great.
Obviously there are no realistic security measures that are 100% effective. All this will amount to is further cementing the power of large internet companies. You know this, so why ask for it ?
A not insignificant part of the large Internet companies' power comes from the fact that they are the only ones who can handle, or who people trust to handle, security. It isn't that hard today to create your own e-mail system or smartphone. But managing those systems, especially at a reasonable cost at scale, is just beyond what most new entrants in the market can handle.
Everything can go wrong.
> A not insignificant part of the large Internet companies power
So it's about breaking the power of large internet companies? Figures. Can we please do that WITHOUT destroying the web? The last regulation that tried to break the power of large internet companies was the GDPR, and that has significantly entrenched the position of the large internet companies instead, while creating a ridiculous amount of inconvenience for everybody. This ... will do the same.
People WANT to share that data. Or perhaps I should say, they want the things that happen when they do. Quick searches that get them the products they want, on Google, on Amazon, on clothing shops and on tons of small webshops. Even the obnoxious image ads. People want them.
That means that a login mechanism will just be an extra hurdle with zero of the effects you want.
So ... perhaps not.
But it doesn't.
GDPR forces companies who are collecting user data to obtain explicit consent for that collection. The fact that companies decided to make your user experience shittier instead of fixing their approach to data collection is the problem there.
If you enforce a rule on two parties minding their own business, you encourage turning that rule into some kind of mindless ritual: OK, we will follow the letter of the law, but we won't try to follow the spirit, because no one cares.
And this is exactly what is happening with cookies. Companies don't want legal risks (GDPR or not), consumers don't care, voilà! Mindless cookie banners, stupidly long and expansive Terms of Service, etc.
My team is working through FedRAMP/NIST compliance right now. We have sadly adopted the saying, "Security or compliance, choose one." We have literally rolled back a more secure implementation in order to be compliant. Regulations can't keep up.
I'm not ideologically against regulating the software industry, but I have doubts it can be done successfully.
Banks that got into trouble recently:
* Cypriot banks
* Greek banks
* Monte Dei Paschi
What has the government done? None of the savers still have 100% of their money. So no, I don't think my bank would be doing worse ...
Or perhaps you mean the expert government handling of the bank problems of 2008 ? Yeah ...
So what was the point again ?
Users (the market) are a big part of the problem. I run a SaaS product, and I've chosen to enforce a Magic Link login (one-time token based sign-in, like Slack) to mitigate the issue of horribly insecure user passwords (e.g. sn0w3d1n). I get a significant amount of pushback on this though, and it was probably a bad idea from a business perspective, but luckily I'm in a position to force it anyway.
It's a simple formula. When you increase security, you also decrease convenience. My hope has been that the market eventually demands security (it becomes a service differentiator people value), and maybe some day it will. But you're probably right in that we need big brother to enforce it on people's behalf because most care more about convenience than security and probably don't realize how much damage could be done to them by using the same terrible password for 10 different web sites (including email and banking). I generally take a libertarian point of view on most things, but not exclusively. I do think we benefit from government involvement in some issues, and this is probably one of them. I don't mean to say this from an elitist point of view either. Security is complex and people are busy. It's naive to expect everyone to grok it.
I'd love to enforce 2FA, but it's going to be a complete mutiny from my customers if I do.
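A magic-link flow like the one described can be sketched in a few lines. The in-memory dict, the 15-minute TTL, and the URL shape below are illustrative assumptions, not the commenter's actual implementation; a real deployment would use a database and send the link by email:

```python
import hashlib
import secrets
import time

# Pending logins: sha256(token) -> (email, expiry). Illustrative in-memory store.
_pending = {}
TOKEN_TTL = 15 * 60  # links expire after 15 minutes (assumed policy)

def issue_magic_link(email, base_url="https://example.com/login"):
    """Create a single-use login token and return the link to email to the user."""
    token = secrets.token_urlsafe(32)
    # Store only a hash: a leaked store can't be replayed as live links.
    _pending[hashlib.sha256(token.encode()).hexdigest()] = (email, time.time() + TOKEN_TTL)
    return f"{base_url}?token={token}"

def redeem(token):
    """Return the authenticated email, or None if the token is unknown, used, or expired."""
    entry = _pending.pop(hashlib.sha256(token.encode()).hexdigest(), None)  # pop: single-use
    if entry is None or time.time() > entry[1]:
        return None
    return entry[0]
```

The `pop` is what makes the link one-time: redeeming it a second time fails even within the TTL.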
Wouldn't this be akin to saying "Criminal laws? The cops can't even police themselves!"?
It can be true, and you'd still need a framework to define the desired behaviour anyway. Enforcing the standard is an important and separate issue.
Yes it is. Not too long ago there was a major issue with undercover cops in Baltimore committing many of the serious crimes that they were supposed to be policing! The commissioner (rightly imo) suspended undercover enforcement indefinitely.
If that were true, Facebook would not exist.
> I work in the banking industry where security IS regulated. We have auditors come and review our technology once a year. These guys don't know what the hell they are doing.
Regulations do not make problems disappear, but they make the situation better, provided you vote for politicians that want to improve them instead of politicians that are paid by lobbyists to free companies of their responsibilities.
I also work in banking (major financial hub in Europe). Regulation is the bane of security and data management because it adds several layers of complexity on top of already complex processes. It leads to people performing repetitive tasks to comply with regulation, leaving no time for in-depth analyses, process reviews and enhancements, and the clean-up of sensitive data.
You provide a baseless assertion shoehorned with a comparison to lobbyists nobody ever brought up. I can't prove a negative but you sure didn't prove your positive.
Like with GDPR, the regulation was to give people control of their data and make privacy by default an available option. But it's just given users more hoops to jump through before scooping up a user's data anyway.
Regulations tend to be a bit of a nudge in the right direction, but play out as something systems have to work against to keep things running the way they were before.
And then the problem is that people follow their measures ... and see this as absolving them of further responsibility. In many cases in the financial world that isn't just laziness: that's actually how the law works.
So much of the regulation burden doesn't just force the whole market into large companies, it actually opens up and legally mandates not security, but security holes.
I'm also working for a big European financial institution and am directly involved with reporting to the various financial authorities. Granted, this is complex, and the worst part is that there's very little tolerance for mistakes (each trade that is supposed to be reported but isn't costs the bank thousands in fines).
But you know what? After all that shit that our employers pulled against society at large in 2007 / 2008 I totally support those requirements.
Yeah, self regulation of the financial industry! What could ever go wrong with that?
Hopefully the GDPR will have a positive effect here. If you suffer a security breach, you can expect to face severe financial penalties. I'm sure companies will figure out how to secure themselves surprisingly quickly after they see a few of their competitors get fined several hundred million euros.
Keep in mind that Congress, EU parliament, EU commission and I'm sure many others were all hacked in the past 2 years. Needless to say, they all see themselves as above this whole regulation thing.
And of course, those penalties cannot come from the tax coffers. They need to be leveled against the pay of the politicians, because otherwise how could they ever work ?
The EU parliament's websites are currently clearly in breach of the GDPR as well. Let's start there, shall we?
As long as this is their attitude, I feel like this is not an acceptable solution.
What does that have to do with it? Better laws on security will force the government to police itself better too.
Simple example: a law requiring all passwords to be stored with unique salt and encryption of certain minimum strength. Or a law preventing IoT devices from functioning on a network when their password is still set to the default.
How do you fail to see how simple actions such as these would help?
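The "unique salt and encryption of certain minimum strength" rule is easy to satisfy in practice. A sketch using the standard library's scrypt (a memory-hard KDF); the cost parameters here are reasonable illustrative choices, not a legal minimum:

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Hash a password with a fresh random salt using scrypt, a memory-hard KDF."""
    salt = os.urandom(16)  # unique per password; never reused across users
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest  # store both; the password itself is never stored

def check_password(password, salt, digest):
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

Because each salt is random, two users with the same password get different digests, which defeats precomputed rainbow tables.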
I've seen the same issues in SarbOx audits. The auditors don't know beans about the underlying technologies. A lot of evidence requests take the form of screen captures showing x. Well... I can give you a screen capture showing you whatever you want, whether it represents reality or not. Ultimately, with or without regulation, it comes down to people being honest professionals. Regulation is all for show.
I think the Equifax debacle has shown otherwise.
Big corps have too much Lobbying power and PR presence for public shame to make a lasting impression. Facebook is on the hot seat right now but that too will pass since legislators are fickle and myopic. Heck Google straight up shut down G+ because of "Security Concerns" and no one batted an eye.
The banking industry got the regulation it has now because this did not work.
If the situation is as bad as you describe, then apparently not even the threat of government regulation was sufficient motivation for banks to get their act together.
Regulation does not work either. For 2 main reasons:
* Regulations are stupid and do not catch all problems, which then causes those uncaught problems to become systemic and threaten not just the bank but the entire country, because regulation often also forbids or discourages banks from checking for other problems (or at the very least pushes an attitude of "if you check compliance with the regulations, the security check is done")
* Governments cannot be trusted to carry out the regulations ("too big to fail")
Security concerns are almost certainly best handled by private industry except in rare cases like national security or the public markets. For example if Boeing becomes known for being easily hacked and flying unsafe planes, how long do you suppose they’ll be around?
A company’s livelihood relies on the perception of being secure and they are well aware of this so the ones that want to succeed absolutely invest very heavily in security. A successful hack doesn’t mean companies don’t invest in security or that people don’t pay for it.
So maybe they are doing something right.
> the threat/worry of bad publicity
Yeah, that hasn't worked for Exxon Valdez, nor for any of the recent data dump incidents.
The underworld is now a multibillion-dollar business and is getting better at its trade every day. It is relatively safe and lucrative.
Identity theft, SIM swaps, the half-billion-dollar SWIFT theft in Bangladesh, the South Africa - Japan credit cards. Just to name a few.
Remember that money, at the end of the day, is based on trust. If you cannot trust that the money in your account is safe, or if your money is not liquid because banks have to manually verify dubious transactions, then your money is losing value.
Obligatory Mitchell and Webb "Identity Theft" link: https://www.youtube.com/watch?v=CS9ptA3Ya9E
The average IT company however doesn't care much about leaks of customer data, except perhaps for the publicity effects.
I would much rather see software engineers follow the lead of electrical engineers and embrace non-profit (or even for-profit) certification companies à la Underwriters Laboratories. It would be easier for consumers, as they could just see a seal of approval and know they are getting a quality product. (Think LEED certification for buildings.)
If anyone is interested in starting such an organization let me know, as I do think it could do a lot of good in the world.
Then companies start covering up data breaches because disclosing them would cost millions in fines, resulting in people not even knowing when they've been compromised.
You also have the problem where politicians/media have no idea what they're talking about, e.g. calling the Google+ issue a "data breach" when it was actually a vulnerability discovered internally with no evidence of anyone having ever used it. If that's the standard then every time there is a vulnerability in a major operating system or TLS library, no one will be safe from the litigious trolls.
Couldn't you make this same argument about any law that punishes bad behavior? As an extreme example, if we make murder illegal, that incentivizes covering up the act, at the expense of closure for victims' families. It seems flawed to me.
There is also the issue of intent. Murder is illegal when you intend to do it. Nobody intends to have a data breach. In that case sunlight is more important than punishment because it's in everyone's interest to prevent it happening again, which requires understanding how it happened, which requires cooperation. Putting otherwise-aligned people on opposite sides creates unnecessary conflict at odds with the common goal.
Slap a 10x (or even 100x) fine on companies whose data breaches are discovered independently and covering stuff up won't look like such a good idea anymore.
Sure, but how do you prove it was covered up rather than merely discovered externally before it was discovered internally?
If it is mandatory to disclose data breaches and equally mandatory to cooperate with full transparency to fix the issue, then are we assuming that employees would act criminally, and make themselves accountable, because it would be best for the company?
Covering up a data breach sounds like criminal behavior in the example above. Which, you know, lands people in jail?
Other than their stock options, their relationships with other employees and potentially their job and career. True, whistleblowers have little to gain, but they have much to lose.
You have groups of multiple people there, where only one has to talk for everyone to go to jail. This is how organized crime has been prosecuted for years. The only one of the guilty who gets out is the snitch.
Sending an anonymous tip that could result in their company losing a lot of money if not going out of business has a highly undesired effect on their continued employment, future raises, stock options, etc.
> You have groups of multiple people there, where only one has to talk for everyone to go to jail. This is how organized crime has been prosecuted for years. The only one of the guilty who gets out is the snitch.
Prosecuting organized crime works by busting the little fish and cutting a deal to go after the big fish. There is no starting point for that process when you're dealing with an otherwise non-criminal organization. If you're not already aware of their offense you have no reason to be investigating them to begin with and nobody there has the incentive to tell you when they don't expect you to have any other way to find out.
"The only one of the guilty who gets out is the snitch" is also obviously incompatible with remaining anonymous. Anyone would be able to deduce what happened.
You get whistleblowers when someone is outraged at what the company is doing sufficiently to take the risk to try and stop them. Not when the government is threatening severe penalties for a past mistake that has already been remediated.
The NTSB method produces better outcomes than the War On Drugs method.
Is it? You can murder someone by yourself and be the only person who knows what happened. You as the murderer are strongly incentivized to never tell anyone if you want to remain free.
In a corporate IT department a bunch of people will have to know just to make a decision as to whether or not to publicly disclose it. An anonymous tip could have zero consequences for the individual even while they remain in their current job. What is the turnover among IT staff? Once they have another job they have virtually zero motivation to keep their former employer's dirty secrets.
When (1) there's no evidence of anyone ever having exploited the issue; and (2) the logs where that evidence would appear, if it existed, only go back two weeks...
...it seems fine to assume that people have exploited the issue, the evidence was there once, but it isn't now.
"Our logs don't show evidence of any data compromise" is not stronger evidence of anything than "our logs for the last two weeks don't show evidence of any data compromise" if you don't have logs that go back more than two weeks. How much evidence do you think that is? How do you think it might play in the press if the denial was accompanied by the two-week qualifier?
High penalties are irrelevant when people don't expect to get caught. They often make things worse by creating a "no snitching" culture because the disproportional penalties are seen as unfair by would-be informants who are then less inclined to cooperate.
I think most data breaches boil down to poor culture and management. Software maintenance gets cheaper as you do it more frequently since the complexity of deferred maintenance scales exponentially. It's a lot easier to justify doing things when the costs to do them are nearly nothing. This is ideally where we should aim as an industry.
I prefer incentives that prevent spilling the milk to post-spill arguments about how damaging the spill may have been.
Then what you want is subsidies to audit popular software/hardware for vulnerabilities.
This is a classic high transaction cost tragedy of the commons. The manufacturer has no incentive to make secure devices because customers still buy the insecure ones. Imposing liability is difficult because the issues are highly technical (difficult for judge/jury to understand) and the damages are highly speculative and hard to calculate. Imposing specific security standards is equally problematic because of the same bad interaction between technical complexity and politicians.
But the solutions are known -- it basically just requires money for security hardening. So have the government provide the money. Without specific byzantine standards it allows the job to be done properly, and providing the money removes the incentive to cut corners.
If you don't force the audit, why would a company want to take the risk of testing their product? I feel that at best, it'll end up like "quality seals" on food items. Yes, I can draw a quality seal in Photoshop too.
But even if companies would be somehow willing (how, without forcing them?), then you need to ensure the audits are reliable, and prevent companies from creating a fake rubber-stamping auditing entity, and going to market either way. What's the preferred way to accomplish that?
Companies are in a race to the bottom, and they'll do their best to weasel out of "unnecessary" costs.
There will always be the company which is literally on fire because it's 0.0013% cheaper in the short term, but that's true no matter what you do because that company will be out of business in six months regardless. You can't change their behavior because they're already in the midst of self-destruction by the time you even become aware of their existence.
Any kind of normal company is going to be happy to have a free confidential security audit, and offering that would in practice significantly improve the security of this garbage.
Does the Linux Foundation or a group of random devs on github that gave their code away for free get handed a massive and possibly bankrupting fine for that? And if so, why would anybody release code for free? If not, and you say that "GPL/MIT/etc. warrants no serviceability so it's on the adopter," then why would anybody use open source and open themselves up to the liability when they have zero control over the project and its associated quality control and process?
For this reason I don't think governments can do a good job at making fair regulation that doesn't have severe unintended consequences.
Something that might help tho, is providing easier civil recourse for those affected. For example, if Equifax gets hacked and leaks my identity and somebody buys a car in my name, it should be very easy for me to sue them to make it right. That's something that is clearly broken with our current system.
Failing to address security vulnerabilities quickly, or publicize their existence and suspend operations until they're fixed is negligent and is most similar to reckless endangerment. Similarly, hiring inexperienced, unqualified, or otherwise knowingly incapable people to perform a job is negligence. Failing to know and perform what precautions one needs to take is likewise negligent. People go to jail, fines are levied, damages are paid.
A previously unknown exploit used to gain access to a company is like a car accident and should be treated as such. Company pays damages and moves on.
Idk that fines would help that much. I would just double down on the disclosure part. Improved policing of nondisclosure (with penalties).
Fines work when they are assured, fast, and high. But because the incident that triggers fines is not the actual behaviour, but only its later consequences, it's too easy to put off security "for some other time".
It's like a single superhero randomly killing shoplifters every few months: the penalty is far disproportionate, yet the policy is still not going to stop anyone.
I pointed out it worked in the past for security and is working right now for safety:
For some reason, word about the successes never gets out. It should since their defect rates are super low.
Arguably it goes back to identity theft being the consumer's problem in the first place, rather than the problem of the merchant who didn't properly validate. If Alice were to tell Bob the car dealer that Carol was the one paying for her brand new car when Carol wasn't even there, it isn't Carol's responsibility. Alice would be the one guilty of fraud and Bob the one robbed; Carol isn't even a party to the transaction.
If it is optional and just a nice to have feature for which you charge extra, there will be companies not bothering. And there will be companies offering cheaper certificates and confusing customers with important looking but utterly meaningless certificates. Our industry is already full of snake oil sellers offering meaningless certificates and guarantees. This type of ass coverage only works when there's a legal stick that makes companies make sure they qualify for the right certificates.
Currently companies get away with really bad stuff. There's no liability. This needs to change. Liability leads automatically to people covering their ass to avoid ending up paying damages or ending up having to do expensive product recalls. E.g. router manufacturers stop shipping updates as soon as they can get away with it; typically when they have a new model they want to sell. If the old one gets hacked their solution is selling you a new one. Not their problem. That's what needs to change. If it has a network connection, it has to be able to receive updates and those updates need to be for the lifetime of the product, not the warranty or the calendar year. Failing to ship critical updates for known vulnerabilities needs to have consequences.
Google currently believes it is totally acceptable to stop shipping security updates for their phones after 3 years. Not their problem if you get hacked apparently. They don't care. The only responsible alternative (from a security point of view) for them to do would be to effectively brick phones when they stop supporting them so that users are not able to get into trouble. That would obviously be unacceptable but somehow leaving your users exposed to known security vulnerabilities is perfectly OK, perfectly legal, and completely without consequences when the obvious things happen.
When bad stuff happens, there needs to be positive proof from all parties involved that they did their legally required best to prevent it. If it's a zero-day bug, fine, you had no way of knowing. Shit happens. But if it then goes unpatched for 6 months because you can't be bothered to do anything about it, that's a different matter. If it is a bug that was reported and fixed years ago and your product gets hacked because you couldn't be bothered to update your products, that needs to have consequences.
I think government regulations can work if they have a mandatory review every X years, or a sunset clause that forces lawmakers to update the laws.
An industry standard code of conduct or framework, with certification and a seal to show your software adheres to it, may be the best approach. Perhaps source code or data handling processes could be audited under the scheme.
You regulate the effect; you don't dictate a solution.
Those guys got their start around 130 years ago with regulations about fire doors in factories. Now they have standards for all sorts of EGoT (Electric Grid of Things) devices, from lamps to toasters.
They get their teeth from the fire-insurance companies who back them. If you have non-UL junk in your office, an insurance risk-manager inspector will instruct you to change it. If you have that kind of junk in your apartment, heaven help you if you have a fire and make a claim.
Industrial shops have Factory Mutual filling the same role. One place I worked required Factory Mutual certification. Those guys are not fooling around; they improved our products.
If Walmart and Best Buy discontinued selling uncertified IoT products it would help the cause. Even MicroCenter and Fry's could help get the ball rolling. But they can't do that until a certification process is workable.
A UL or FM approach is more workable with USA attitudes toward government regulation. And workable is what we need.
In contrast, most data breaches are very cheap. In most industries the market doesn't seem to punish breaches at all, so there's only the unquantifiable cost of that data benefiting your competitors.
If there was a multi-million-dollar fine on data breaches regardless of fault and countermeasures, we would get exactly what you describe: the market working to reduce risk and average insurance payouts, making everyone's life better in the process.
Someone should be able to steal a database of my info and whatever the shared secret is should only be tied to that org, and of course not stored in plain text.
EDIT: forgot 'not'
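One way to get a shared secret that is "only tied to that org" is to derive a distinct per-organization credential from a single master secret, so a value stolen from one org's database is useless anywhere else. The names and the HMAC-based derivation below are illustrative assumptions, not a specific proposal from the comment (and a real system would guard the master key in a hardware token, not plaintext):

```python
import hashlib
import hmac

def org_secret(master_key, org_id):
    """Derive a per-organization secret from a master key and the org's identifier.

    Different org_ids yield unrelated values, so a breach at one org
    reveals nothing usable at another.
    """
    return hmac.new(master_key, org_id.encode(), hashlib.sha256).hexdigest()
```

The derivation is deterministic, so the user can re-derive the same credential for an org at any time without the orgs ever seeing the master key.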
It was also cheaper, which is nice.
(Also, I consider myself to have just coined the term in this context, unless prior use is demonstrated.)
He then immediately goes on to say that in the past security breaches weren’t life threatening but now they are because refrigerators and cars are connected to the network.
Okay so people won’t pay more for security when it’s not life threatening but this time the threats are different, but people still won’t pay more. How does he know this if the threats are different this time?
Tbh though he lost me when he advocated for government regulation as the fail safe solution.
Granted, some chunk of that is from an expanding surface area vulnerable to attack, and an expanding amount of valuable data available for the taking.
It's a lot more fun to be an attacker today (I mean, if you dig computer science), but I don't know a lot of people in this field who think it's gotten less challenging.
On the other hand, there appears to be a lot more damage for them to do, and a lot more data that's been lost/leaked recently. I'd call that "worse".
20 years ago, owning up someone's voice mail was a funny joke (teenagers were literally owning up switching systems.) Today, we're all carrying HSMs in our pockets. Things are better, not worse.
Everything might be more secure, as you say, but there are so many more ways a small hole could be exploited to do damage now.
One only has to look at self-driving cars to disprove you.
Dan Geer also entirely disagrees with what you wrote, and you're no Dan Geer†, sorry to say.
† Sure, I just did, but I'm being upfront that it's a cheap and unfair thing to say. I'm human.
Please elaborate because I don't see how you can even remotely defend what you wrote.
Things have gotten better, not worse, and personally, if I was being more aggressive about the argument (which I guess I am now), I'd go further and say you can't have been paying any attention in the 1990s (or to the history of what happened in the 1980s) and think otherwise.
There were computers, telecommunications, dial-up modems, X.25 and private networks in the 90s, but the degree of cohesion, integration and interconnectivity wasn't anywhere close to what we have today. Consequently, the actor domain looked very different, and concepts such as cyberwarfare weren't even in the public eye. Morris worm vs NotPetya. Sure, the barrier to entry was very low compared to now. But, as Dan Geer has repeatedly shown, risk has grown tremendously even as the field has gotten a lot harder. You don't think that completely disproves you?
If you read almost any constitution, it protects "life and liberty". Today those things are being impacted by a lack of security. People's messages, private pictures, assets, infrastructure, opinions and even geopolitics are all affected.
Yesteryear the most you could do was largely expose someone's password, read their university e-mail and steal some source code. Relative to the impact, security is a lot worse today.
These types of discussions often end up as mass control by the government versus the free market will solve it by itself.
The sweet spot is somewhere in the middle, like many of these agencies have proven for decades.
Case in point: if you look at the history of malware infections in organisations, one vector has historically stood out since the early 2000s: Office/PDF attachments in emails. It has obviously been a catastrophic combination to feed untrusted, unauthenticated, complex office formats to insecure productivity applications, but nothing was done about it despite weekly new public vulnerabilities and pwnage continuing for over 20 years.
I think he was close in his vision. It will be Facebook and Google and millions of IOT devices that push us to that future.
Big companies won’t be hurt, but your startup better be able to afford the certs or too bad.
On a side note: companies and governments alike believe that a single security audit before each release is sufficient (many don't even do that much). They are wrong. Instead, they should be hiring a team of full-time penetration testers that work in parallel with standard quality control testers.
Now back to my original train of thought. I believe the solution is exactly what we have: natural selection. When the financial loss exceeds the executives' tolerance threshold, they will either fold or adapt. The organizations that are better at adapting will survive. It will take time, but as long as the losses are great enough, natural selection will affect the course of things to come.
I'm not sure this is true. That the market is not producing adequately secured stuff is a fact, but... It strikes me as similar to "journalism is broken because people aren't willing to pay for good journalism anymore". Maybe it's true in a sense, but I don't think it's a useful sense.
It's not like computers come in regular or secure, with a 20% discount on regular. Money is not always a direct lever on things. Some software has crappy UI. This does not generally correlate to UI spending. A much bigger influence is the type of market that software is in. "Enterprise" will likely be much worse than consumer stuff, because of market structure, incentives and hard feedback loops.
Bureaucracy/rules come with costs that can't be easily priced too.
For example, GDPR...
The writer complains that current laws are written from a naive perspective, as if the internet existed within their jurisdiction. That naivety is inherent in regulatory/rule-based systems.
GDPR was written as if it would be implemented by the people writing the software. It isn't. It is implemented by lawyers, hired by companies to "do GDPR." Mostly, the lawyers reduced this to paperwork. Policies that must be meticulously written. Checkbox software that must be installed. Agreements with vendors that must be updated.
..All things that cost money, put lawyers and compliance officers in more powerful positions, and do very little to improve user privacy and agency over their data.
If you want to start a company in a regulated market, your first hire is a compliance expert, preferably one with a personal relationship with that specific regulator.
Regulators are process oriented, not results oriented.
For example, let's say some drug is overprescribed. Regulators respond with new small print that must be included in ads. They will meticulously measure "compliance," but may not even take an interest in results. I.e., they may not even check whether sales/consumption of the overprescribed drug have gone down.
Anyway... Whether through regulation or whatever, security is hard. It is almost always reactive, responding to past crisis.
Personally, I'd start with laws (not regulators) targeting after-the-fact disclosure. I think self reporting is the most useful/successful part of gdpr, for example.
Light helps. It can also create the pressures, incentives and information required for change.
To some extent that already happens with safety regulations, and imports/sales of these products are illegal. That doesn't mean that you should just give up.
How "excellent" of an example could it be if no one follows it?
If he's worried about low-cost devices today that don't have security teams, it seems that fining companies for having security issues could lead to some percentage of them going bankrupt, which in turn would lead to more devices that are abandoned by their manufacturer post-launch.
I also think it would, to some degree, stifle innovation. Even if what's involved is paying some fee for some new security technology or license, that's still less money that a startup can spend on the part of the product that customers are paying for.
I wouldn't say we shouldn't have any sort of regulation whatsoever, I'm just skeptical that the government could do a good job of it.
> How "excellent" of an example could it be if no one follows it?
It can be very excellent indeed, from the computer security perspective. The problem of why it's not followed is probably twofold: 1) organizations don't know about it (and aren't motivated to find out), and 2) business leaders don't want to spend the money to implement it if they do know. Making it mandatory nicely solves both of those issues.
> If he's worried about low-cost devices today that don't have security teams, it seems that fining companies for having security issues could lead to some percentage of them going bankrupt, which in turn would lead to more devices that are abandoned by their manufacturer post-launch.
That's no big loss, because those devices are inevitably abandoned today.
> I wouldn't say we shouldn't have any sort of regulation whatsoever, I'm just skeptical that the government could do a good job of it.
The government will do a better job at regulating in this area than anyone has ever done before, because no one has ever tried.
I suggest we apply some lateral thinking to the underlying problem and approach it from another perspective entirely. Over the last 30 years or so, really since the end of the Cold War, high on idealism and technological utopianism, we've built a whole new high-tech infrastructure to replace the low-tech infrastructure that preceded it. In doing so, we have invariably embraced technologies that we did not and do not understand, technologies that have never really been tested (as in subjected to the test of time). Was this wise? Should we be using new, unproven technologies for security-critical systems? These new systems have untold vulnerabilities, and their often centralised structure makes them very susceptible to disruption. Should we not be building robust, decentralised, low-tech solutions instead? Could something as fundamentally vulnerable as modern undersea cables have survived a cataclysm like the Second World War?

I anticipate that any valuable data sitting on a networked device anywhere is at risk of eventually being lost, leaked, or stolen. Any networked safety-critical system will be hacked or otherwise exploited (or fail catastrophically). It is only a matter of time. So much of modern (hybrid) warfare hinges on sowing discord and confusion, using disinformation and misinformation to cripple adversaries, and we have collectively built an infrastructure that is tailor-made for this kind of disruption. By that I mean virtually everything that has come into being in the last 30 or so years, from complex global supply chains to modern banking. How much of what exists now would survive a SHTF scenario (not hard to imagine)?

Again, we should be designing systems to be robust, decentralised, secure, and wherever possible, totally independent of high-tech gadgetry. What use would 'identity theft' have been in the 1970s? Exactly.