On the other hand, they could have bankrupted them if they wanted to. Do we really want the government to have that much say in whether businesses live or die?
> Do we really want the government to have that much say in whether businesses live or die?
The company broke the law in a way that could have harmed* individuals (passwords are critical secrets; many users do not have more than one, and so could lose their entire identity and maybe several years' worth of funds. Sure, you can say that is the fault of the user, but there was an assumption of security here that was given by the company and not fulfilled).
If you rent a deposit box at a bank, with the assumption of safety, and it turns out they leave all the doors unlocked, don't hire guards, and someone came in and stole everyone's things and took them to the market, then closure of the bank is absolutely deserved.
Similarly, if a prison gives the assumption of security, but doesn't hire guards or bother locking cells, and all the inmates walk out and some people are killed, does the prison deserve to go bankrupt?
These examples are exaggerated, yes, but roughly similar in circumstance. Hopefully enough to show that, yes, there are circumstances in which we want the government to have that hold over companies.
Now, do we want governments to show restraint? Sure. But it seems clear to me that they are very openly acting with restraint in this case. 20k EUR is a pittance compared to what was potentially lost by the people involved in the breach.
* - (Indeed, in America and a few other countries with weak to non-existent social welfare nets, loss of identity and money is likely to lead to homelessness and eventual death for the person, if they do not have family to rely on).
> If you rent a deposit box at a bank, with the assumption of safety, and it turns out they leave all the doors unlocked, don't hire guards, and someone came in and stole everyone's things and took them to the market, then closure of the bank is absolutely deserved.
OK, but I don't think any government action to close the bank would be necessary in that case.
In that example the goods stolen are tangible and have real, solid value that anybody can recognize.
In the case of things covered by the GDPR, the value is opaque and intangible, and only apparent to certain people. Letting 'the free hand of the market' decide, driven by laymen who don't understand this, is not effective.
If you want proof, well, it's in the pudding.
Edit: also, if banks were getting knocked over as often as sites and apps, the government would intervene.
I have yet to see a person against regulation who didn't run screaming for the authorities as soon as they were tricked, swindled, defrauded, or abused by some company because no regulation was in place.
Most recent cases that come to mind are Bitcoin thefts. "The government shouldn't do anything to regulate or even look in the general direction of Bitcoin". Oh, the exchange left your pockets empty? "The Police and the government should do something!"
Companies aren't scared of consumers because as a mass they have no idea what's good for them or how to protect themselves. Without a watchdog we'd all just be open for abuse with no leverage to stop it. Companies are far more motivated and capable to abuse you than you are to defend yourself.
It's not the same situation, because in the case of a bank the risk is obvious to anyone, but it might be hard for non-tech users of a chat app to understand the dangers of unencrypted passwords. It's the same as regulating other industries: say, a layman can't assess how good a lawyer or civil engineer someone is, so the government requires them to have certain education and licenses or they can't work.

We will need something like that in IT sooner or later, because there's simply too much amateurism in the industry now, where everyone and their mother has a startup and people with no proper experience and knowledge are trying to handle often very sensitive things. Just look at the crypto market and all those hacked sites - not hacked because hackers are evil geniuses, but because the programmers who built them suck at their job... and the market can't regulate itself because the demand is so damn high.
> ...passwords are critical secrets; many users do not have more than one, and so could lose their entire identity and maybe several years' worth of funds. Sure, you can say that is the fault of the user, but there was an assumption of security here that was given by the company and not fulfilled...
I have so little sympathy for this position that, despite generally having been pro-GDPR, I can no longer support it if it is going to be used to encourage users to do something fundamentally stupid :/. The reality is that if we are going to make users think using one password is somehow ok, then we need to be using authentication protocols that don't involve sending that password to a server in the first place: this is 2018, and challenge-response takes like an hour to implement; the idea that sending a critical password to someone in the hope that they immediately hash and store it counts as protection is insane.
I have little sympathy for the position that users just need to be not stupid and do things differently like they have been told to do for the last 20 years and yet have never done.
...but what they are doing will never actually be secure no matter how hard we all cover our ears and scream into the void so we don't hear it when they keep getting screwed over and over again. We can't actually make that secure no matter how many passwords we hash, we can only attempt to mitigate some fraction of the eventual carnage.
What we need to be doing is spending our time either on a massive education campaign or on technological solutions (like challenge response passwords, which HTTP never added even while every other old protocol was carefully adapting to get this right), not entrenching the position that users should be allowed to do the thing that will never ever ever be secure by putting up laws that somehow make it look sane.
The point here isn't "shame on the users": the point here is "shame on developers and lawmakers for working against users by encouraging them to do this horribly insecure thing rather than doing something sane". Seriously: how about we instead make a law (with a GDPR-like lead-up of multiple years to "get ready for it") which says that "handling passwords at all isn't legal, even in transit (whether on a third-party SSL termination service or on your own servers)", and let's see how many mere hours it suddenly takes the Chrome/Firefox teams to come up with a real solution to this problem?
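For the record, a minimal sketch of what challenge-response looks like (a toy HMAC version with an illustrative fixed salt; a real deployment would use a vetted PAKE such as SRP or OPAQUE, since in this toy the stored verifier is itself password-equivalent). The point is simply that the raw password never crosses the wire:

```python
import hashlib
import hmac
import os

def make_verifier(password: str) -> bytes:
    # The server stores this derived key, never the raw password.
    # A real system would use a unique per-user salt, not a constant.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), b"demo-salt", 100_000)

def server_challenge() -> bytes:
    # Fresh random nonce per login attempt, so responses can't be replayed.
    return os.urandom(32)

def client_response(password: str, challenge: bytes) -> bytes:
    # The client derives the same key locally and proves knowledge of it
    # by MACing the challenge; the password itself is never transmitted.
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), b"demo-salt", 100_000)
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(verifier: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(verifier, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time comparison

verifier = make_verifier("hunter2")
nonce = server_challenge()
assert server_verify(verifier, nonce, client_response("hunter2", nonce))
assert not server_verify(verifier, nonce, client_response("wrong", nonce))
```

Even this toy shows why a server-side dump of plain-text passwords is a strictly avoidable failure mode: there is nothing in the database worth reusing on another site.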
Passwords will never die. Also, by outlawing them you either force everyone to re-register with every site they want to visit or give all the power to single-sign-on services like Facebook and Google.
The fact that users do insecure things—and that real-world systems need to be designed to deal with that fact—is orthogonal to the idea that we should be making things more secure as well. Yes – security is broken and we need to fix it. The way to do that is emphatically not to pretend that it's already fixed.
The problem is that this does pretend it is "already fixed": having legal mechanisms in place that treat "users doing this thing that will never ever ever actually be secure" as somehow legally protected means people think it must be ok, and keep doing it.
I mean, seriously: why would the law make it illegal to allow passwords to be dumped - with technical people coming out saying "these passwords might protect something really important, as the user is using the same password on more than one thing" - if that wasn't at all how you were supposed to use passwords?
Imagine if the opposite were the case: if banks were not liable for any money lost by a user if it could be shown that the user was using their banking password for any other service. That would be a very different conceptualization of the risks and how passwords work, and frankly "fits reality" better than what people seem to want out of this situation (which is the exact opposite of that) :(.
If you don't want to claim that the issue is "already fixed" by "a bunch of websites that know what they are doing hash passwords and hopefully never have code bugs or people tapping their web servers or employees who can modify the code or debug logs that dump POST requests accessible to anyone or any other number of ways this password could be leaked because you sent it to this company's web server (which quite often might have used some SSL termination from some third party and so who knows how many people have access to your password!)", we might come up with actual solutions (or maybe spend the effort to educate users, as right now I don't know anyone who really tries).
> if banks were not liable for any money lost by a user if it could be shown that the user was using their banking password for any other service.
I believe in some places this is the case.
If you get hacked or use 123456 as your password, it is not the bank's fault. If a service gets hacked and the attacker discovers your bank password, it is not the service's fault if they did their due diligence in securing your data.
Even if I use a strong, unique password, if the chat service stores it in plain text, a hacker can get the password from the database, log in with my identity, and cause who knows how many problems. Having a strong and unique password does not help me if the service stores it in plain text or with some bad hash.
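For contrast with plain-text storage, here is a minimal sketch of salted hashing using Python's standard-library scrypt (the cost parameters are illustrative, not a tuning recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest    # store both; the raw password is never kept

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("123456", salt, digest)
```

With this scheme, a database dump gives an attacker only salts and memory-hard digests, not anything that can be replayed against the user's other accounts.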
Implementing challenge response in javascript that you serve yourself is, for the general public, not much different from trusting them to hash it on the server.
We've only had like, what, two decades to add a feature to a web browser that handles this better than "type your password into this box, rendered by the server and controlled by the server to be sent to the server, in a form where it is given exactly the password"?
If your business operates outside of the law, of course it should be shut down. I think fines are a good way to encourage compliance without causing more harm than necessary.
Are you advocating government not be able to influence businesses to comply with the law at all?
GDPR is actually a very well-thought-through piece of legislation, and I've implemented compliant systems. The best rule of thumb I've heard: treat people's personal information as if it were credit card data and you will comply in a fairly straightforward way.
One problem is that the law significantly changed right under their noses. I don't think that they have either a significant development team/effort for their platform, nor a very good revenue. It's basically a very old chat platform I used about 15 years ago...
Regardless of the law, it's completely irresponsible to store passwords in plaintext, and it's been widely considered so for decades - the company's behavior here is inexcusable, and I really can't understand why anyone would try to defend it.
Even if it had changed right under their noses, that would still be OK in my book. Lawmakers are elected by the people (somewhat - the EU could be more democratic, but that's a different topic), and they should therefore be able to change the laws on behalf of the people.
That would have been a reasonable argument if they had been found to be using something like DES-based crypt(3) to hash their passwords. But they weren't; the passwords were just plain text.
It's a worthwhile question, but remember that the alternative is companies that aren't accountable to anyone, and which are cavalier with their users' security.
They are accountable to their users. If a company is negligent with user data and their customers actually do care about this issue, this company will either eventually improve or go out of business. And if customers don't deem this important, it's not a government's role to decide it is.
To be clear, I'm not arguing for no control at all. However, fines should not be so egregious that it ends up being up to bureaucrats to decide whether a company lives or dies. This will easily lead to selective enforcement and corruption.
> If a company is negligent with user data and their customers actually do care about this issue, this company will either eventually improve or go out of business.
I think we've seen enough of how this theory works in practice (or how it doesn't) to be able to say that there is absolutely no good reason to rely on it.
1. The problem is that most of the population is terribly poor at defining and managing risk, by biological design and social selection - those who are good at it are usually not the best neighbors to have.
2. In many businesses, the actual customers are not the end-users whose data is leaked (all the nice free services you're getting over this invisible thing called the internet); the end-users are the merchandise the business is selling to somebody else (ads, etc.).
3. There are two ways of coping with this:
3.1 Darwinian, where stupid users who choose to hand their data to dumb businesses all jump off the cliff holding hands
3.2 paternalistic, where we elect somebody competent to make choices for the rest of the community, which would prevent people's poor judgment from handing data to insecure businesses, and prevent businesses from being insecure in the first place.
4. We have tried the Darwinian one since day 1 in many fields. Reverting it comes at a cost (antibiotics would be one good example to think of).
So you are saying, if a company negligently loses some important data about me - let's say enough to create a fake identity or access my medical records - my only recourse should be to stop using them?
> If a company is negligent with user data and their customers actually do care about this issue, this company will either eventually improve or go out of business.
1. Knuddels is largely targeting minors
2. its customers are other companies not its users
3. in the real world there are externalities (like the network effect)
That's assuming the company explains in enough detail how it secures its product for consumers to tell the difference. More likely is that company A has a breach, and consumers who care about security move to company B, which is just as insecure but hasn't recently been hit by the risk realisation bat.
Take this argument over to Facebook. How many breaches of trust have happened over the past year? Have they improved? Not a bit. If anything, they are becoming worse.
That is not even theoretically valid. Even if customers care, they have no way to review a system's security. Moreover, companies lie a lot about their systems' security: all systems were totally secure and there were no successful attacks, until laws about mandatory breach announcements came around.
It is only after a publicly known exploit that a small customer can learn about the issue.
Not the "government" the political unit, but the legal system: courts and regulatory bodies.
Turn this statement around. Would you say that no matter what it does, no matter how many crimes are committed or citizens harmed, a business should be able to carry on without interference? That companies should be superior to people - entirely above the law?
(It's not a very big harm in this case, but I don't see anyone arguing that e.g. companies should be able to dump unlimited amounts of toxic waste into rivers any more)
> Do we really want the government to have that much say in whether businesses live or die?
Sure, because before the GDPR businesses needed to follow no laws, and never would have been closed by the government for not following laws and regulations.
> On the other hand, they could have bankrupted them if they wanted to. Do we really want the government to have that much say in whether businesses live or die?
"The government" has the ability to remove the freedom of people who have erred. In some backwards countries the government even has the ability to execute people.
> Do we really want the government to have that much say in whether businesses live or die?
I would say "no", but you have to understand that's coming from the perspective of an anarchist.
Rationally, this power pales in comparison to the others that governments already possess. If a state wants a business to fail, it will fail. GDPR has zero impact whatsoever on that.
There is a fundamental difference between America and Europe, in that Americans distrust their government but trust corporations and the free market. Europe is the exact opposite.
You can agree or disagree with our world view, but that doesn’t change the fact that, on average, more than 70% of Europeans trust the EU to have their best interests in mind - much higher for the GDPR.
I don’t think it’s really start-up hostile either. I think there is a huge potential for disruption for privacy centric companies.
> Americans distrust their government but trust corporations and the free market
I don't think that's true; that's a common European misinterpretation. I think, in general, Americans trust their government less than they trust corporations, but they are generally distrustful of any entity that wields or appears to wield significant power over them. Exceptions are made for entities that "agree" with a particular person (and these exceptions are not entirely rational).
There's the idea that a corporation has at least some economic incentive to do a decent job, while the government doesn't, since it has zero competition. The classic example, to the average person, of the government's level of hustle is the local DMV.
My point is that it should not be discretionary. Either apply the law universally (without taking the viability of the company into account), or lower the fines to a meaningful level.
So you want them to bankrupt small companies while letting large companies get away with not even a slap on the wrist? That seems strange. The law as written has leeway in the amount, making it fairer than it would be if you applied it universally.
> My point is that it should not be discretionary.
Right. But that won't work.
Say we set the fine at 20% of the business's revenue.
So a 'mom and pop' firm breaks it and gets reported. Ok, now they're out, say, 20,000 EUR (scale to whatever is appropriate in your country for a small company).
Then a big business breaks it and gets reported. Ok, now they're out $35.57bn.
These might indeed be acceptable amounts. Now imagine that, for example, instead of a fair number of people being affected, it is just two people. Or three. And perhaps in this instance the people involved were only marginally affected, perhaps at worst losing a couple of hundred $currency.
Technically speaking, the companies still broke the law, but suddenly the punishment does not quite fit the crime!
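The arithmetic above, as a toy sketch (the 20% rate and the revenue figures are hypothetical illustrations; the actual GDPR caps fines at 4% of global annual turnover or EUR 20M, whichever is higher, and leaves the amount to regulator discretion):

```python
def flat_fine(revenue_eur: int, rate_percent: int = 20) -> int:
    """Non-discretionary fine: a fixed share of revenue, regardless of harm done."""
    return revenue_eur * rate_percent // 100

# Hypothetical mom-and-pop shop vs. a hypothetical large firm.
assert flat_fine(100_000) == 20_000                  # painful but survivable
assert flat_fine(177_850_000_000) == 35_570_000_000  # crushing, even if only two people were affected
```

A flat rule like this scales with company size but not with harm, which is exactly the mismatch the comment is pointing at.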
That's not how the law operates in basically any jurisdiction. If we allow discretion in punishment for things like murder, why not this? If you think discretion should be removed from all law then that could be a sensible argument, but it's a tough one to make and the GDPR is probably not the place to start with it.
One would assume the same care would be taken not to drive Amazon out of business, but considering the difference in size of the two businesses, the fine would most likely be much larger.
Except that fundamentally biases the law towards a certain size of business.
Universally large fines would threaten small businesses while reasonably charging large businesses. Universally small fines would potentially be reasonable for small businesses, but would be negligible for large businesses.
Why shouldn't it be tailored to the company's size/viability?