A citizen's data can be collected, badly secured, stolen, and used by criminals without the user ever being aware of step 1. Just like a citizen can get ill from swimming in a river without ever being aware of the factories upstream.
The solution is not to force citizens to constantly be on the lookout. It's to severely punish polluters and leakers.
When a CTO says "let's collect geolocations", the CEO should have legal and business reasons to say "no way, it's not worth the financial risk of losing them; it could destroy our company."
Update: I do not think the government should mandate specific practices; it's too complicated, too fast-changing, and too hard to police.
It should be entirely results-based. You lose people's data, you pay big bucks. Figuring out how not to lose it is your problem. The government sets the rules, and the market plays the game.
The Equifax hack was the equivalent of data malpractice. It's like a hospital mixing up the labels on saline and hydrogen cyanide and then saying, "Whoops. Sorry about that." The cavalier attitude that companies have about data security infuriates me. Americans will be dealing with the repercussions of this for the rest of their lives.
Meanwhile, Equifax keeps making money.
Equifax's entire business was trading off of our data, but protecting that data was evidently not a priority for them. They should be fined into oblivion.
I don't even think that's a strong enough comparison. It's more like a hospital not bothering to label their bottles at all, and then going "oh, sorry about that" when the inevitable happens.
Hire the cheapest 'talent' you can find, refuse them training or the tools or environment or time needed to do a competent job, and give an MBA the reins to ignore their concerns and protests but push something out the door. It's a recipe for disastrously flawed systems and big piles of money for executives.
That is, many other brands of cars had been reported by drivers to have the same issues. Basically, a driver was put into a stressful situation, thought they were hitting the brakes, but were actually hitting the gas. Then, panicking that they couldn't stop the car, they hit the "brakes" harder, exacerbating the problem.
I know Wikipedia isn't exactly a great primary source here, but:
> From 2002 to 2009 there were many defect petitions made to the NHTSA regarding unintended acceleration in Toyota and Lexus vehicles, but many of them were determined to be caused by pedal misapplication, and the NHTSA noted that there was no statistical significance showing that Toyota vehicles had more SUA incidents than other manufacturers.
In any case, I believe companies can definitely be guilty of criminal negligence (and Toyota did a lot of bad things during their SUA crisis). But I think the use of SUA in the comment I originally responded to sort of misrepresents the situation and mostly spreads a lot of FUD around self-driving cars.
I don't think that was conclusively shown to be the cause of any of the incidents, though.
That way people will realise that their personal info is worth something.
That sort of hard stance seems extremely unworkable. If I sign up for Hulu, they need my email and billing information at the least. So they are supposed to pay me now because I gave them that to get a service? So then it evens out and my streaming is free now? Goodbye any useful paid service. Goodbye any free service.
What you're really advocating here though is abolishing free society as we know it.
You think I'm kidding? All court proceeding would have to be in secret and unrecorded as saying "so and so convicted of manslaughter" is now illegal. All public records abolished entirely, no accountability anymore. No journalists could publish a story about anyone ever. Harvey Weinstein sexually assaults hundreds of women? Can't tell anyone or publish that information, Harvey Weinstein owns it. Even telling someone else about your date last night would become illegal.
> "Other entities should not be allowed to keep or trade information about individuals without paying them directly."
I interpreted the comment to mean if I subscribe to say Hulu they are not allowed to sell that information to third parties and also if I cancel my Hulu account my data is not kept around since its no longer needed.
I certainly don't want to be the person to have to draw a line in the sand between "non commercial data" and "commercial data".
Then again, maybe I want to sell access to brain scans of people using hardware I made to a 3rd Party in the future… :P
And how many countries are companies able to skirt through financial engineering?
Yeah, people can create all the rules they want, but I'm more interested in how people plan on enforcing it… because from where I sit, that's where countries are lacking (esp with politicians accepting some kind of bribes/kickbacks/revolving door/election financing/lobbying doors in them all), and it isn't getting any better…
Even when the hardware for brain scans gets cheaper and more open, the raw data itself in the hands of the average WeChat/Facebook/Line user is useless unless one understands how to process it into something usable…
Companies that can attract talent will increasingly start making all the data that they collect public by default… sidestepping the pain of data breaches when you keep costs down by making it public by default… how they process it, that's a different story… maybe those companies will get their users pissed off enough to actually think about their choice of platform and not use them… though, that last part seems increasingly unlikely.
To be the devil's advocate for a moment, what about the other side of that? Should businesses not be able to keep and share records saying "This person owes us money but refuses to pay"?
No, because you would have practically no recourse if you were wrongly on such a list, and might not be able to prove you even were on it or that it exists. It's called "blacklisting".
You have a common name and share a birthday with a felon? Say goodbye to your chances of getting a credit card!
If your name is John Smith, good luck!
It's true that they could/should be stronger/tightened/held to a higher standard, but there is a process to dispute your credit file. Creditors are legally required to give you certain information when they deny you for credit.
That process is extremely flawed and unreliable. Have you ever tried disputing your credit? Good fucking luck. I've been arguing with Equifax for over a year because they "have it on record" that I lived somewhere I've never lived. I have no legal recourse unless I want to spend thousands of dollars on lawyer fees.
Before that, I'd send them a letter clearly marked "Notice of Intent to Sue" (on both the letter and the envelope) stating what information is incorrect, and that you plan to file suit in [insert name of small claims court] on [insert reasonable date] seeking $1,000 in damages for each violation per 15 USC 1681(n), unless the disputed information is removed before that date. Enclose any supporting documentation and the history of previous fruitless correspondence.
IANAL, but this has worked for me before with the credit bureaus, and I've never actually had to file the threatened suit. My best guess is that threatening litigation gets the case assigned to a legal team that is actually empowered to correct errors. It's a shame that that's how they've chosen to run their business, but then again, I'm not their customer, I'm their product.
Also a problem: they use simple knowledge of a single magic number as authentication ('identification').
The three current companies in the US could have gotten together, worked around the lack of a national ID, and created a private cryptographic SIN... but that would cut into their profits. Though I hear other countries that have done this also have issues with 'business individuals' being contacted and scam attempts rampantly abusing their data...
I think you have done a lot of good for infosec. Thank you. Here are my thoughts: You will be in a room full of lawyers. Examples that they understand are important. For example, (Full credit to @strandjs 2017 DerbyCon keynote) in the Crowdstrike v. NSS labs case, they sued to prevent "third parties to access or use the products" and prohibited "any competitive analysis on the product".
Sorry to get into the weeds here, but the TL;DR is the following.
- Delaware District Court Judge Gregory Sleet's ruling supporting product performance assessments.
- Government funding to support projects like ModSecurity that contribute to US economic security.
- Whitelisting (that works) is the future.
Today's security vendor marketing seems to have a free pass to lie. Thankfully, on 2017-02-13, Judge Gregory Sleet of the Delaware District Court ruled against Crowdstrike, writing: "The Consumer Review Fairness Act of 2016 underscores the public's interest in performance assessments. The new law voids provisions of form contracts that restrict a party to that contract from conducting a performance assessment of or other similar analysis of..." "The court finds the public has a very real interest in the dissemination of information regarding products in the marketplace." It goes on to say that if NSS's data is inaccurate, Crowdstrike could publicly rebut that data with evidence, and the public would benefit from the exchange, helping inform the public whether they should trust future NSS reports. He concludes that "the public interest weighs strongly in favor of denying Crowdstrike's motion."
Security is hard. I have been researching the Apache Struts2 exploits that Equifax was hit by. Assume another vulnerability like this exists right now. It could be Struts or some other web framework. Webshells used in the Struts hack are really hard to stop for many reasons. As far as I understand, if ModSecurity's open source web application firewall was installed and properly configured (not a simple task), the CRS (core rule set) would have prevented the Apache Struts2 exploit from working. Open source projects that make major contributions to protect United States national and economic security should receive more support and funding.
In recognition of the contribution of all open source, Richard Stallman and Linus Torvalds should be recommended to receive the Presidential Medal of Freedom.
ModSecurity works by looking for known malicious patterns and blocking them. I hope one day we can get web application firewalls to work well using a whitelist setup. Instead of trusting everything and blocking things that look bad, on highly sensitive systems like Equifax, I hope to see a way to trust nothing and allow only traffic that is known good. For example:
import re, sys

def findWords(text):
    # extract alphabetic "words" only
    return re.findall(r"\b[^\d\W]+\b", text)

data = sys.stdin.read().upper()
for a in findWords(data):
    print(a)

$ python3 words.py | sort | uniq -c | sort -n
The next step is to create a ModSecurity rule that uses the same regex "\b[^\d\W]+\b" to only allow requests that contain words on an approved list, using the @pmf parameter file as in the following example. Note I just started looking into this, so I will leave the rest as an exercise for the reader :)
See github owasp-modsecurity-crs/rules/REQUEST-932-Application-ATTACK-RCE
Of the parts that were readable, I find myself strongly disagreeing with a majority of your paragraphs, notably:
- I don’t see how Crowdstrike vs NSS has any relevance in this at all, especially given the preamble on Troy’s site. Reasonable people can disagree on the outcome of that case (as a former Crowdstrike employee, I can acknowledge my own biases there), but I just don’t see how it’s relevant.
- Similarly, I don’t see how formally recognizing Torvalds and RMS does anything meaningful, and I can think of a dozen other researchers who have had objectively more concrete contributions to SECURITY than those two.
- Whitelisting every URL pattern isn't ever going to be a viable solution, in large part because the whitelists will eventually be regexes, and people are bad at regex.
whitelisting with modsecurity tutorial...
Step 8: Writing simple whitelist rules
"Using the rules described in Step 7, we were able to prevent access to a specific URL. We will now be using the opposite approach: We want to make sure that only one specific URL can be accessed. In addition, we will only be accepting previously known POST parameters in a specified format. This is a very tight security technique which is also called positive security: It is no longer us trying to find known attacks in user-submitted content; it is now the user who has to prove that his request meets all our criteria."
"Our example is a whitelist for a login with display of the form, submission of the credentials and the logout. We do not have the said login in place, but this does not stop us from defining the ruleset to protect this hypothetical service in our lab. And if you have a login or any other simple application you want to protect, you can take the code as a template and adopt as suitable."
# Deny any request whose path is not one of the three known login URIs
# (the rule id and action list below are reconstructed; the original
# snippet was truncated).
SecRule REQUEST_FILENAME \
    "!@rx ^/login/(displayLogin|login|logout)\.do$" \
    "id:10001,phase:1,deny,log,msg:'Unknown URI'"
# If we land here, we are facing an unknown URI...
People have said that if there are consequences for losing customer data, companies will be motivated to cover up their mistakes. Part of the solution there would be "whistleblower" laws similar to what we have for OSHA violations. But another part would be legitimizing white hat hacking.
Suppose my apartment has some hazardous problem, like exposed wires. It's perfectly legal for me to notice that and tell my landlord. I can take pictures for proof, and if necessary I can report it to the government. The landlord will not be allowed to ignore me.
If, however, I notice a glaring security problem on a web site I use, there's no government agency to tell. If I tell the site owners, there's a good chance that they can ignore me or even punish me for noticing.
Now, going along with my "results-based" argument, unlike building codes, we don't want our laws to specify security practices. But if an outsider can demonstrate that they can obtain personally identifiable information from a computer system, the owners of that system should be fined and required to fix it, and the person who found the problem should be legally protected.
Imagine the mess a landlord would be in if somebody died because of a hazardous condition that they'd been notified of six months earlier. Now imagine that web sites were held to the same standard. "You had a massive data breach, and this security researcher has proof of notifying you six months earlier of the vulnerability. You're in big trouble."
Eben Moglen makes exactly this point in his lecture, Snowden and the Future Part III. He puts it as privacy being 'ecological not transactional', and uses 'pollution' as you have.
I can't resist an extended quote; Moglen puts it very eloquently:
> Those who wish to earn off you want to define privacy as a thing you transact about with them, just the two of you. They offer you free email service, in response to which you let them read all the mail, and that's that. It's just a transaction between two parties. They offer you free web hosting for your social communications, in return for watching everybody look at everything. They assert that's a transaction in which only the parties themselves are engaged.
> This is a convenient fraudulence. Another misdirection, misleading, and plain lying proposition. Because — as I suggested in the analytic definition of the components of privacy — privacy is always a relation among people. It is not transactional, an agreement between a listener or a spy or a peephole keeper and the person being spied on.
> If you accept this supposedly bilateral offer, to provide email service for you for free as long as it can all be read, then everybody who corresponds with you has been subjected to the bargain, which was supposedly bilateral in nature.
Full transcript+video+audio at http://snowdenandthefuture.info/PartIII.html
You often hear today that "data isn't oil", but from a breach perspective it is a good analogy when considering its toxicity.
Ideally the liability would actually go to compensate victims (for example, via class action). But even if the government keeps it it might be better than nothing.
A potential improvement would be to require companies to carry insurance (or be able to solvently self-insure) for the maximum possible liability if all the personal data they store was disclosed. That way a company like Equifax has to price the full risk of the data they store even if it is larger than their whole market cap. And the insurance industry might learn to do some due diligence, and be a source of "regulation" with much better incentives to be optimal than a government regulator has.
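A rough sketch of how that pricing might work (all numbers here are illustrative assumptions, not actuarial data): the premium has to cover the insurer's expected payout plus overhead, so it scales with both the number of records held and the liability per record.

```python
def annual_premium(records, breach_prob, liability_per_record, load=1.25):
    # Expected annual loss (probability of a breach times total liability),
    # times an expense load for the insurer's overhead and profit.
    # Every parameter here is an illustrative assumption.
    expected_loss = breach_prob * records * liability_per_record
    return expected_loss * load

# A company holding 10M records, with a 2% annual breach probability
# and $1,000 of liability per record, would pay on the order of:
premium = annual_premium(10_000_000, 0.02, 1_000)
print(premium)  # 250000000.0, i.e. $250M/year
```

A company whose premium exceeds the value it extracts from the data would rationally stop storing it, which is exactly the incentive being described.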
1. a cost for losing personal data.
2. a cost per person's data that was stolen.
That ensures there is a minimum penalty. Enough to make it CHEAPER to take a lot of security measures, even if you don't have many people's data yet. But also a gigantic penalty for the next Equifax.
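The two-part structure can be sketched as a simple formula; the dollar amounts here are made-up placeholders, the point is the shape:

```python
def breach_penalty(records_stolen, base_fine=1_000_000, per_record=100):
    # Part 1: a fixed minimum, so even a small startup with few users
    # finds security cheaper than the fine.
    # Part 2: a per-record cost, so a mega-breach is existential.
    # Both amounts are illustrative assumptions, not from any statute.
    return base_fine + per_record * records_stolen

print(breach_penalty(1_000))        # small startup: 1100000
print(breach_penalty(147_000_000))  # Equifax-scale: 14701000000
```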
No one would have known about Equifax if they had not self-disclosed. It doesn't even have to be the CEO - if I am the employee who is about to cost my company $$$$, I might just quietly fix it and say nothing.
Person(s) act to cover up a data breach -> Individual criminal liability -> A long time in federal PRISON.
A data breach is disclosed -> Massive FINES on the company (up to and including liquidating and shutting the company down), but no personal liability or criminal charges.
So, the only really hard part of this problem is the political will (vs lobbying / powerful people) to implement law at all to address this. Until we start taking privacy and data leaks VERY SERIOUSLY from the perspective of liability, nothing will happen. Sadly, to me this means nothing will happen :(
Versus: pretend I never saw any breach (how will you prove otherwise?) and have a chance to avoid all penalties?
You get the idea. If the legal system was set up like that, you would be better off disclosing the breach and losing your job rather than risking prison time. You can always find another job.
That creates an incentive for employees to cause data breaches and then disclose them for massive profit.
I think this is a false premise. What is to stop an IT employee from installing Oracle database in production without a license (assuming technical leadership is incompetent like at Equifax), waiting for a bit, and going to the BSA or Oracle and saying, "hey come get your pay day".
One, I think anyone with a shred of ethics won't do it, and two, I think if we put the CEO and the board in prison for the actions of the corporation and its employees (while following directions), then we create a strong incentive for corporations to be of manageable size. Now, I hear our politicians crow a lot about how much they love small business and how much they hate "too big to fail". This would be a big boon for that. However, I am not holding my breath.
Is it also illegal to cause a breach? Do you understand the backwards motivations you are giving people?
He can disclose the breach, cause himself (or his co-workers) to risk prison, and cost his company $$$$, which is likely to get people fired.
Cover the whole thing up (i.e. quietly fix the bug), and no one will ever know.
And if they do find out, he can say "I didn't know anything about a breach, I proactively fixed a bug, I didn't realize someone had taken advantage of the bug" - i.e. he's safe either way.
As far as I can tell, the perverse incentive you're describing is that if the penalty for a breach occurring is too harsh, it could lead companies to cover up a breach rather than disclose it. A penalty for failing to disclose a breach would be meant to discourage that, and I don't see how that penalty is itself a perverse incentive.
The penalty for a data breach must be harsh enough that it would cost a company more than it would to guard against it. I don't believe that there exists a penalty that satisfies that condition yet is also light enough that a company wouldn't cover up any breaches that occurred.
Or anything that touches the EU, really.
There is a gross lack of information and consent, and a violation of the proper ownership of PII, which should rest with the user. Right now data is collected under the cover of overly broad user agreements that are never read, and dispersed among third parties without further user knowledge. The user is completely in the dark. They did not really agree to the collection of data, and cannot really see what was collected, where it ended up, or how it was used, not even at an aggregate level. Furthermore, the user finds it very difficult to withdraw, even if they had actually given informed consent, since it's so hard to delete anything off of premises you do not own. It's a distraction to talk about how to punish people for not safekeeping something they stole from you in the first place.
After all, a so-called "breach" is just one thief stealing from another -- would you expect a thief to care that much on your behalf?
But the reality is that no matter how much contractual solemnity you require to let other people trade in your personal information, most people will do it in order to get credit. If to get a home loan you had to copy down a contract giving the bank the right to do credit monitoring in longhand with a fountain pen and sign it sixteen times in your own blood, my guess is that people would still be doing it, because credit monitoring really does make loans cheaper. So as a practical matter, I don't think the Equifaxes of the world would go away if the problem you point to was solved. Whereas there's no reason they shouldn't at least internalize the cost of losing your information through incompetence.
Admittedly, smaller companies will have minuscule teams and probably an increased risk of slipping up, but also smaller databases to lose or target. But I don't like the idea of only the largest companies having the ability/confidence to power on.
If Uber is getting a $20k fine, I doubt that's a deterrence for them, but it would be for a tiny startup.
We need to look at it from the other direction. Leaking personal data harms consumers. If you do it and are shown to have been negligent, there should be a meaningful penalty. The legal/financial calculus needs to force corporations to take this stuff seriously, or not at all.
Corporation X does not have a right to exist in the future just because its business works in the present.
The good news for European citizens is that global organizations are accountable for safeguarding their data. The good news for global citizens is that European organizations are accountable for protecting their data too.
The gap? Non EU custodians of non EU citizens' personal data.
That's a big gap!
We need a Global Data Protection Accord.
Where'd you get that impression? Almost all public data breaches concern Western companies and Western hackers. And people have been warning against the lax security standards and worse coding practices for decades.
Your trying to shift the blame to the russkies is just another example of denialism.
Why install an intrusion detection system if it could potentially cost the company millions when it triggers? Better just to not know that data was exfiltrated. You don't even have to cover anything up if you never learn there was a problem to begin with.
How about a ban on the data collection for a period, or lose the ability to use it.
> otherwise it will just be risk-assessed.
The key is to make the cost so large that the risk isn't worth it.
What's my geolocation history worth to Google? $20? $200? Make them pay $2,000 if they lose or misuse it.
Sure, maybe they'll buy "data loss insurance". But even then, there will be motivation to do things better. Eg, the insurer will refuse to cover claims where proper encryption wasn't used, will prorate policies based on data collected, etc.
Which I think would hit Apple rather hard.
Not especially hard; you send in auditors to delete the collected data from all servers. Oh, that kills the company? Too bad.
If they sell ad space based on your geolocation, is that a misuse?
If they sell your geolocation, ip address, and search history together, is that a misuse?
Sounds like the EU Data Protection law, which says you need a legitimate reason to store & process personal data.
Do you punish at step 3?
Party A's step 3 is Party B's step 1.
I don't much see the difference between them.
What if Party A willfully gave the data to Party B? No punishment? Equifax is a good example of Party B in this case. Can't we punish all the Parties A?
Fines of up to 4% of global revenue of the offender.
Looking forward to massive lawsuits against Facebook, Google, Uber, etc.
Breaches are inevitable: "when", not "if". Today, for interop, all demographic data must be stored as plaintext, because we don't have national identifiers.
The only fix is to issue globally unique identifiers. Then we can encrypt demographic data at rest, greatly mitigating the damage of breaches.
That's why we need a federal level solution.
How does that follow? I work in a country with such identifiers, and I don't see the connection.
By the way, just because such identifiers exist, doesn't mean people are keen on giving them out to every company, and in fact, asking and storing it is frowned upon by our national data protection commission, unless you have a good reason to do so (just like any other personal data).
Translucent Databases: Confusion, Misdirection, Randomness, Sharing, Authentication And Steganography To Defend Privacy
Then I created regional health exchanges and worked in election integrity policy (voter registration databases).
Without GUIDs, there is no way to both link demographic records across systems AND encrypt those records at rest.
I’d be okay with a handful of ID (GUID) issuing authorities per use case: voting, health care, pension, etc.
I very much wanted infrastructure for faceted identities (personas) which could not be correlated. But now I don’t think such a thing is possible. I believe, but cannot prove, that Big Data will always win the privacy vs. deanonymization arms race.
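For what it's worth, the faceted-identity scheme can at least be sketched: derive a per-use-case pseudonym from the master GUID with an HMAC, so identifiers from two sectors can't be correlated without the issuing authorities' secret keys. (The sector names and key handling below are toy assumptions, and this does nothing against correlation on the underlying demographic data itself, which is the arms race referred to above.)

```python
import hmac, hashlib, uuid

# Toy sketch of faceted identifiers: each issuing authority holds its own
# secret key and derives a sector-specific pseudonym from the master GUID.
# Without the keys, pseudonyms from different sectors cannot be linked.
SECTOR_KEYS = {
    "voting": b"key-held-only-by-the-voting-authority",
    "health": b"key-held-only-by-the-health-authority",
}

def sector_id(master_guid, sector):
    return hmac.new(SECTOR_KEYS[sector], master_guid.bytes,
                    hashlib.sha256).hexdigest()

person = uuid.uuid4()
# Deterministic within a sector (so records still link up)...
print(sector_id(person, "voting") == sector_id(person, "voting"))  # True
# ...but unlinkable across sectors without the keys.
print(sector_id(person, "voting") == sector_id(person, "health"))  # False
```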
PS- Corps like Facebook, LexisNexis, the NSA, etc. have already uniquely identified everyone living and dead. My proposal daylights that fact, restoring power to the people and insisting that our data is encrypted at rest.
It's not complicated for the government to rule on; the law just states what information is considered private (and this doesn't change too often) and makes it obligatory to protect it. Which it already is, I'm sure.
Like if an app is going to transmit your email address book back to headquarters, it must specifically disclose it is doing so, otherwise the app maker is subject to enforcement action.
See also my reply to @pishpash below.
I'm not saying that Equifax and equivalent companies were not grossly inept and incompetent, but eventually we have to start putting pressure on the criminals and reduce the incentives. Cyber crime currently does pay. Extremely well.
> but eventually we have to start putting pressure on the criminals
Hmm. On the one hand you are certainly right, but on the other hand I'm sick of the consequences of decades of US interventions in foreign countries. To make things worse, China, Russia, and Eastern Europe are, basically, one bloc. If the US were to intervene in any of those countries, even if only with economic sanctions, I can easily see either a hot war or more stealthy "counter-interventions" like the Russian support for Trump on the horizon.
Pressure and (military?) intervention only really works against tiny/poor-ish countries in Africa and South America. The rest can do whatever they want and are accountable to no one.
The US Congress (and for that matter European Parliament) are limited to acting upon companies and people within their borders.
Who should you go after: the people who keep stealing drugs from the pharmacy, or the pharmacist who keeps the keys to the drug locker on a chain beside the door?
It has been proven that U.S. agencies withhold vulnerability information so that they may use it against foreign or domestic actors, rather than informing the entities responsible for the vulnerable software or the greater public.
This is truly a global issue, and it is a mistake to focus blame on any single nation-state. You must not let the U.S. propaganda machine cloud your judgement.
If security researchers could report vulnerabilities with impunity it would drastically reduce the incentive to sell vulnerabilities to black market buyers.
This is a full-blown national security issue. If security researchers could work with our three letter agencies in defending our infrastructure it would go a long way towards securing the US against increasingly advanced opponents.
As it stands right now, if I found a critical vulnerability in a government system I think I would just drop it and tell nobody. I think it's more likely that I would be punished rather than rewarded, which destroys any incentive I would have to help.
This is how they think. Their priorities are pretty far removed from "cybersecurity".
They also blew their chance with cybersecurity, when they passed a supposed Cybersecurity Act of 2016, that was meant to stop all of these data breaches from happening. But as all the critics said back then, the law was nothing more than additional surveillance powers given to the NSA and DHS/FBI. They never actually intended to use it to stop any of the data breaches - they just wanted to collect more data on people and add to the big stack of hay, in which they want to find their needle.
This is key. Currently, there appears to be no business downside to collecting PII, so companies collect as much as they possibly can. If, through regulation or some other means, data became a liability (or at least came with some downside risk) then perhaps companies would become more thoughtful about what they tried to collect and store.
GDPR forces us to justify and be transparent about each data point we collect.
This has increased the urgency of cleaning up any non-required data points, adding expiration to data, and moving data from subcontractors in-house.
I guess a lot of companies are in our position.
I guess it will be harder to enforce when a company does not have an EU subsidiary, though.
We need to move toward a recognition that collecting PII is inherently dangerous. Holding on to PII forever is inherently dangerous. And that's before the not insubstantial privacy risks these databases incur. I worry that any policy that solely concentrates on breaches is just going to lead companies to gamble that they won't be in the relatively small minority of companies that actually get hacked.
What can we do to incentivize a CTO not to have a customer database, or failing that to keep her company's database as minimalist as possible, even if she is 100% certain that database will never be stolen?
The issue is bank fraud. When someone uses stolen personally identifiable information to make fraudulent purchases/accounts/whatever, it's the bank's liability for allowing the wrong person to perform those actions.
In the old days you needed to walk into a bank with photo-ID and talk to a teller to open an account, let alone get credit.
The big banks have saved money by closing many branches.
These days, you can get a credit card online easily, no face-to-face interaction involved. Hackers pretending to be you can do it as easily.
Data breaches wouldn't be the issue they are if banks were liable when they were fooled. This is definitely Don Quixote windmill territory though, the banks have a well-funded lobby.
If it's in public, you can't read it to them, but you might be able to hand it to them personally.
This will be far more powerful than almost anything else you could do. It's not a problem "Americans" have... it's a problem they have.
If it's already freely available on the dark web, the bad guys already have this data. Might as well level the playing field and make the public aware of the extent of the issue.
At least present the number of .gov and .mil accounts that have been compromised; that's a huge blackmail risk for people who have access to top secret info.
Some relevant quotes from the text:
"If the fact that everyone can tell his own story makes it easier for people to challenge the assurances of the powerful that certain policies are to everyone’s advantage, the fact that narrative is seen as less authoritative than other discursive forms may weaken that challenge."
"We are ambivalent about storytelling, not dismissive of it. There is a strong vein of skepticism toward professional expertise in American culture. Against that skepticism, the authenticity of personal storytelling makes the form trustworthy—sometimes more trustworthy than the complex facts and figures offered by certified experts."
"...those wanting to effect social change must debunk beliefs that have the status of common sense, familiar stories are more obstacle than resource."
"when feminists brought sex discrimination suits against employers, they struggled to prove that women were underrepresented in higher-status, traditionally male jobs because they were discriminated against and not because they had no interest in those jobs. Arguing that women wanted the jobs put them at odds with a familiar story in which girls grow up to want women’s work, and men grow up to want men’s work. To judges, feminists seemed to be saying that women were just like men, something that flew in the face of good sense."
1) The role and responsibilities of security researchers in discovering and investigating data breaches (maybe also discuss the spectrum from white hat to black hat, and how the tools they all use are effectively the same).
2) The role and responsibilities of journalists in reporting the said breaches.
3) The impact of laws and litigation leveraged by governments and corporations to protect themselves in the event of data breaches.
4) Why a fair and happy balance between the interests of 1), 2) and 3) is a necessary part of mitigating and reducing the possibility of data breaches along with unhappy examples and their consequences.
I'll admit that the four questions above are a kind of scope creep relative to the intended discussion, but my concern is pretty real. The laws and norms that we have today, while imperfect, are the reason why Troy is being asked to appear as a subject matter expert. Whatever the laws and norms of the future are, they will need to be sufficiently flexible to allow future subject matter experts to learn and operate, so that they too can make meaningful contributions to the issues of their time.
I think they are absolutely relevant, specifically in the context of disclosure. Frame it as "people shouldn't be afraid to report crimes or to be vigilant in watching for them" and you can get support. Yes, Troy should make it clear how he and his ilk are chilled by these same offending companies.
Here's an excerpt:
> The providers of electronic communications services shall ensure that there is sufficient protection in place against unauthorised access or alterations to the electronic communications data, and that the confidentiality and safety of the transmission are also guaranteed by the nature of the means of transmission used or by state-of-the-art end-to-end encryption of the electronic communications data. Furthermore, when encryption of electronic communications data is used, decryption, reverse engineering or monitoring of such communications shall be prohibited. Member States shall not impose any obligations on electronic communications service providers that would result in the weakening of the security and encryption of their networks and services
And here's an article, too:
An alternative approach: assume everything is compromised all the time. Identify the material harms of such compromise, and work to minimize those harms. The SSN-is-proof-of-identity system is obviously a big source of harm, not so much applied to whoever was deceived, but inflicted on the actual person trying to put everything right again. There are many many changes to this one system that would help minimize the damage, ranging from complete overhauls to just minor things like being able to change one's SSN with more ease, or putting more pressure on companies to verify people instead of trusting the SSN. This is probably one of the few cases where doing just about anything to improve the system even a little is much better than doing nothing.
I doubt anything will come of it though. Congress routinely talks to domain experts who warn about the problems in the future should they not do what the expert suggests. Nothing gets done, and the problems manifest, sometimes worse than predicted.
You're implying HN doesn't want this. If companies pin liability on the engineers, engineers will respond by doing due diligence and ensuring nothing happens that is their fault. This will slow down development near security-critical features, which companies would hate, but which engineers are begging for an excuse to do.
Companies would try to hire poor engineers who don't know any better but to take a cavalier approach to security while accepting monetary liability. This is where software engineers would need to organize, not as a union but as a profession. They wouldn't seek to regulate wages, just require that every practicing SE be a member of the profession. This would ensure that everyone who accepted responsibility for breaches was capable and willing to write secure code.
Of course, this is all extremely unlikely. If a breach costs a company several million dollars, then the responsible engineer couldn't pay the fine if they tried. The company would still be on the hook for the rest of the fine, which they would pay out of pocket. They would have accomplished screwing over an employee, and scaring future employees away, to no benefit. If companies were to pin blame on engineers, it would be because they were serious about security, and wanted their employees to be as serious as them, not because they expect employees to be able to cover the fines.
Not every data breach has a material harm. What if private pictures were released and made public (à la the "fappening")? Nobody gains money from having these pictures, and the original owner doesn't lose money either. Despite the lack of material loss, the owners of the data are still interested in preventing it from becoming public.
Contracts aren't laws and, particularly, don't have unlimited ability to create new forms of liability. Even if this would be effective, it can simply be negated by a safe harbor clause protecting individual employees without specified degree of authority from liability for breaches outside of actively concealing vulnerabilities from the employer or revealing them to exploiters.
It's totally implausible that companies are going to start regularly suing their engineers personally for writing bugs. They'd recover very little money and make it impossible to hire. Programmers already write bugs that create liability in businesses all over the world every day. Worry about something else.
> Worry about something else.
Oh this is a very long way down my list of worries. But programmers should look very carefully at all the drawbacks of professional engineering before trying to shape software engineering into it, which is what regulation on something so domain cross cutting as data is doing. The fallout of GDPR will be interesting to watch. With the potential fines being in the millions, have any insurance companies appeared yet to offer insurance for US companies wanting to do business in the EU but not wanting to spend the time and money making sure they're fully compliant with each rule?
Individual liability seems much less likely to me, in the absence of statutes explicitly forcing this liability onto individuals. If you are a consultant, and the market equilibrium actually shifts so that customers are demanding security warranties, your insurance costs would go up, but unless you are less competent than average your equilibrium income should go up to match. The legal incidence, and plausibly the economic incidence, is all on companies that actually handle personal data.
I don't know where it fits in but I wish congress understood how silly it is that knowing our birthdays and SSNs is still treated as proof of our identity in 2017
That's if it's not just a ruse to lure him to the US where they can then arrest him for being in possession of too much illegally-obtained personal info.
I think it is fundamentally problematic to try to regulate the sending / receiving of information. That's why the movie and music industries have had so much trouble, that's why classified info leaks are more common, and that's why cryptocurrencies haven't been squashed (yet). Trying to regulate this will have far-reaching bad side effects for all parties.
I also think the vast, silent majority is not actually as concerned about this stuff as perceived. They'll say so in polls, but if they're allowed to choose between the status quo or paying (even a little) to use Facebook, Google, etc. without data collection, they'll choose the free option almost every time. People don't like that their data is collected, but they like not paying for things more.
That said: I think the fundamental issue that needs to be addressed is that the data needs to be valueless.
The Equifax hack is a problem because the data leaked has value. Some of it is public record stuff (effectively valueless), but SSNs follow you forever and cannot be changed. If they could be changed (or destroyed and replaced), then the SSN would hold little value.
Mailing addresses are similar. In 2017, we should have a way of giving anonymous (perhaps even re-assignable) addresses to the organizations we interact with so we're not explicitly telling each of them where we live. If I could generate a new "pseudo mailing address" for each of these companies, I could then destroy it if it is ever compromised--and as a bonus, I'd be able to see how it's being shared (since I'd see other companies using the same one). At that point, having someone's "mailing address" becomes nearly valueless.
Some of this responsibility falls on the consumer, too. Obviously companies are still going to stockpile data to cross-reference and provide value, but that's reality. We've got ad / tracking blockers, anonymizers, and VPNs for the people who truly care, which help to make their own collected data valueless. But the way I see it is that the government needs to ensure they're not creating a system where citizens' data must have value, which is what has been done with immutable SSNs and mailing addresses.
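The per-company pseudo-address idea above could be sketched as a keyed derivation. This is a toy illustration; the alias domain, tag length, and rotation scheme are all assumptions, not an existing service:

```python
import hashlib
import hmac

def pseudo_address(master_secret, company, generation=0):
    """Derive a stable, revocable alias for one company.
    Bumping `generation` retires a compromised alias without
    exposing the real address or the master secret."""
    msg = "{}:{}".format(company, generation).encode()
    tag = hmac.new(master_secret, msg, hashlib.sha256).hexdigest()[:12]
    return tag + "@alias.example"   # hypothetical forwarding domain

secret = b"keep-this-offline"
a1 = pseudo_address(secret, "acme-power")
a2 = pseudo_address(secret, "acme-power")
a3 = pseudo_address(secret, "acme-power", generation=1)  # after a leak
assert a1 == a2   # deterministic per company
assert a1 != a3   # rotation invalidates the leaked alias
```

Because the alias is derived rather than stored, seeing two companies use the same alias also tells you exactly who shared your address with whom.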
1. Encryption is the only way to secure data, and without it, data will always get stolen. And encryption with a backdoor is not encryption.
2. Data needs to be owned by the people. I should be able to go to Target or American Express and ask them to delete everything they know about me. Them not doing so should result in a, say, 10,000 dollar fine that goes all to me with admittance of insurmountable guilt.
A 200 million dollar fine may still have banks and telecom gaming the system, but if they had to admit guilt, the math becomes easy. Their egos won't permit it!
And with that, finally our data will be kept safe.
- Regulate that pseudonymous use of transactional services must be permitted (with exceptions for, e.g. banks, universities). You do not need my legal name or telephone number to ship me a package, or accept payment, for example.
- Get explicit, arduous permission from users whenever PII is obtained, if you intend to store it in any way, with exactly what it will be used for.
For existing data, over some long period of time (say, a couple of years) get users to explicitly agree to each use, or delete it.
This could work pretty much like default-off OAuth2.0 scopes. For example, you could give Facebook the ability to know your name for display on the site, but not for exchange with third parties for additional data on you, or for the purposes of advertising.
While the ‘default off’ requirement would reduce the amount of PII in circulation, it would also eliminate the _false choice_ between security and privacy that Facebook, Google, Amazon, and others present. Want my telephone number for 2FA? Possibly, but that doesn’t mean that you can use it to buy/exchange a file on me from Equifax/Acxiom/MasterCard/whatever.
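A minimal sketch of what default-off, per-purpose consent could look like in code; the fields and purposes here are invented for illustration, mirroring unrequested OAuth scopes:

```python
# Hypothetical consent ledger: every (field, purpose) pair is denied
# unless the user explicitly granted it -- default off.
consents = {
    ("name", "display_on_site"): True,   # OK to show my name on the site
    ("phone", "two_factor_auth"): True,  # OK to use my number for 2FA
    # ("phone", "ad_matching") is absent, so it is denied by default
}

def may_use(field, purpose):
    return consents.get((field, purpose), False)

assert may_use("phone", "two_factor_auth")
assert not may_use("phone", "ad_matching")   # default off
```

The point is that consent attaches to a specific use, not to the data point itself, so a number handed over for 2FA can't silently become advertising fodder.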
That's not always true in my country. Sometimes if a package is not labelled with a name connected to the delivery address, the post office will simply return it once it reaches their sorting facility.
Which has annoyed me on two occasions. I can't see a legitimate reason for it, and I don't know if it's connected to actual law or just practice.
Bring a few things:
1. A box with a lock and key
2. A cardboard box+tape
3. An adult education device + a clear plastic bag.
Put the adult education device (Texas terms) inside the clear plastic bag. Put that bag in the cardboard box and seal it with tape. Put the box in the lock box. (Kind of like a Russian doll situation.)
(If you're allowed to do this at all.. do it after security)
Once you're up and presenting mention:
This is the best analogy that I have: This lock box illustrates an honest attempt at protecting data. What you have in this box is your data and is potentially embarrassing. It's yours but should the public know about it?
Tell them how you and only very specialized individuals know how to open the box. Open it.
Then explain that the next layer illustrates a company that doesn't know or doesn't want to invest much on protecting the data: anyone can tear open that box.
Once you have the box torn open and the bag+item is out on display, ask Congress: This is how well Equifax protected your data. Are you going to make them pay for the FCC fine that C-SPAN is going to be hit with?
"Does anyone know a good lawyer?"
Obviously the technical details may not be the appropriate level to discuss with Congress, but an analogy that they can understand might be helpful. What Equifax did was the equivalent of keeping the country's nuclear codes at an outpost in Afghanistan and then blaming a sentry for falling asleep when an enemy slipped in and stole them.
1. Impose a fine for every individual account leaked. Even if it were only $10, the recent Equifax loss of 143 million records is a $1.4 billion fine. It should probably be higher if gross negligence is involved. This would create a new industry (data breach insurance).
2. Make it so simply having all the leaked information on an individual isn't enough to cause any harm. I'm specifically thinking of some PKI-based scheme where I verify my identity by signing something with my private key.
There may be other variations, but the choice seems to be between forcing data brokers to be responsible or make it so that their irresponsibility is harmless.
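As a rough illustration of point 2, here's what challenge-response identity verification could look like. This uses the third-party `cryptography` package for Ed25519 (Python's stdlib has no asymmetric signing); treat it as a sketch of the protocol shape, not a complete identity scheme:

```python
import os

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # stays on the user's device
public_key = private_key.public_key()       # registered with the verifier

challenge = os.urandom(32)                  # fresh nonce from the verifier
signature = private_key.sign(challenge)     # only the key holder can produce this

# verify() returns None on success and raises InvalidSignature otherwise.
# A stolen database of names, DOBs, and SSNs is useless to an impostor here,
# because knowing facts about me is not the same as holding my private key.
public_key.verify(signature, challenge)
```

The fresh challenge also prevents replay: a signature captured in one breach can't be reused to impersonate me later.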
But the bar for gathering my data is so low! Read a few tutorials on how to write software with Your Favorite New Web Framework and off you go on making a new site that will happily leak data once discovered. The core competencies of my power company and my local cinema are definitely not IT -- nor security. Can we expect good results here? (I hope the answer is 'yes' but I think it's unlikely).
So, to take a different tack: could funding help here? What if there were funding and/or accreditation of some libraries or frameworks? These would use best practices for minimizing data loss (e.g., salting passwords), regular auditing of actual deployments of this technology, fuzzing of the underlying software, etc. A marketing/branding effort around the accreditation could also be helped with funding. It needn't be a US-local solution, nor even a US-local agency, though that would certainly minimize the bureaucracy to "only" the level of the US Congress.
Instead of "Stop, Drop, and Roll" or "Only You Can Prevent Forest Fires", it could be "Never Roll Your Own User Account Database" (and OMG please help us if you rolled your own crypto).
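For the "never roll your own" point, the baseline an accredited library would enforce is something like salted, memory-hard password hashing. A minimal stdlib sketch; the scrypt parameters are illustrative, not a production recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Salted, memory-hard hash (scrypt); parameters are illustrative."""
    salt = salt or os.urandom(16)            # unique salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def check_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("hunter2", salt, digest)
```

Per-user salts are exactly what defeats the precomputed rainbow tables that make plaintext or unsalted dumps so catastrophic.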
This legislation has huge, sharp teeth. It comes into full effect in May 2018, and every single multinational is running around in a "pants-on-fire" panic trying to figure out how to comply.
If Equifax had to pay 4% of their global annual turnover per day of non-compliance, would they act? Yes, of course.
> How is the fine calculated?
> Article 58 of the GDPR provides the supervisory authority with the power to impose administrative fines under Article 83 based on several factors, including:
> The nature, gravity and duration of the infringement (e.g., how many people were affected and how much damage was suffered by them)
> Whether the infringement was intentional or negligent
> Whether the controller or processor took any steps to mitigate the damage
> Technical and organizational measures that had been implemented by the controller or processor
> Prior infringements by the controller or processor
> The degree of cooperation with the regulator
> The types of personal data involved
> The way the regulator found out about the infringement
> The greater of €10 million or 2% of global annual turnover
> If it is determined that non-compliance was related to technical measures such as impact assessments, breach notifications and certifications, then the fine may be up to an amount that is the GREATER of €10 million or 2% of global annual turnover (revenue) from the prior year.
> The greater of €20 million or 4% of global annual turnover
> In the case of non-compliance with key provisions of the GDPR, regulators have the authority to levy a fine in an amount that is up to the GREATER of €20 million or 4% of global annual turnover in the prior year. Examples that fall under this category are non-adherence to the core principles of processing personal data, infringement of the rights of data subjects and the transfer of personal data to third countries or international organizations that do not ensure an adequate level of data protection.
- At a minimum, there should be a penalty that grows from the time the breach was discovered to when it is disclosed publicly.
- There should also be penalties for not being transparent about exactly what data was leaked for which users.
- SSN is similar to a password- you want to keep it hidden, and if it leaked, you should change it. However, we can't change it. Perhaps it should be considered more as a password?
- People should know what personal data companies have on them. A good example of this is Equifax storing people's home addresses; this could be disclosed. On the other hand, it is probably fine to exclude other types of data, such as an advertiser storing your zip code; people probably don't care as much.
- Should people have a right to have certain kinds of data (e.g. SSN) removed from websites?
- Is it a good idea to mandate companies disclose the security they use? For example, at one time reddit had their passwords stored as plaintext and they got hacked. Disclosing basic security hygiene (e.g. password storage) somewhere standardized in the website would make it much less outrageous.
- Certain technologies enable hackers more than others. SQL, for example, seems to enable a lot of hacking (via injection attacks). Should we discourage it?
- Get rid of Intel ME technologies - https://schd.ws/hosted_files/osseu17/84/Replace%20UEFI%20with%20Linux.pdf
- Get rid of Intel hidden instructions - https://www.youtube.com/watch?v=KrksBdWcZgQ
- Get rid of Simon and Speck - https://www.reuters.com/article/us-cyber-standards-insight/distrustful-u-s-allies-force-spy-agency-to-back-down-in-encryption-fight-idUSKCN1BW0GV
- What is "best for National Security" is actually worst for our own. It feels like people don't have a democratic say in the right balance either.
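On the SQL point above: the usual fix is not abandoning SQL but using parameterized queries, which virtually every driver supports. A small sketch of the difference, using an in-memory SQLite table with made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '078-05-1120')")

attacker_input = "x' OR '1'='1"

# Vulnerable: concatenation lets the input rewrite the query itself.
leaky = conn.execute(
    "SELECT * FROM users WHERE name = '" + attacker_input + "'").fetchall()

# Safe: the placeholder treats the input purely as data, never as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)).fetchall()

assert len(leaky) == 1   # the injection matched every row in the table
assert len(safe) == 0    # nobody is literally named "x' OR '1'='1"
```

The vulnerability is in string-building, not in SQL; the same one-line discipline closes the hole.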
I assume you meant
> SSN is similar to a password- you want to keep it hidden, and if it leaked, you should change it. However, we can't change it. Perhaps it should be considered more as a username?
I think studying how other people have effectively used their five minutes is instructive. I'd start with the testimony of Fred Rogers where he gave a Senate statement on PBS funding: http://www.americanrhetoric.com/speeches/fredrogerssenatetes...
I would focus on the CIA triad + Accountability + Assurance. It's helpful to use standard terminology that is understood by existing privacy practitioners.
Personal information should be Confidential from unwanted disclosure.
Personal information should have Integrity with the creation, modification, and deletion of personal information only as authorized and intended.
Personal information should be Accessible readily by authorized parties.
Personal information should have Accountability, with traceable ownership to a party responsible for Confidentiality, Integrity and Access.
Personal information should have Assurance, with appropriate audit of Confidentiality, Integrity, Access and Accountability; including the right to inspect.
Just as the Amendments to the Constitution form a latticework of protection for each other -- e.g. that freedom of press helps ensure other rights are not eroded -- the elements of CIA+A+A do the same.
Recommendations can then be framed for direct implementation:
* Confidentiality: Requirements for timely breach notice
* Access: The right of the consumer to be aware of and to have access to data about them
* Integrity: The right of the consumer to repudiate data about them and demand removal
* Accountability: Direct ownership and legal teeth (fines, jail, and barring of eligibility from data or business management roles, etc.) to compel the presence and adherence of an appropriate privacy management program
* Assurance: Standardized audit reporting, guaranteed consumer right to inspect, etc.
Folks noting "accountability" often mean the entire CIA Triad + A + A, not the technical term "Accountability". This is likely the gap to bridge -- turning a sentiment that businesses are not operating appropriate privacy management programs in to an actionable path to compel existence, adherence, reporting and audit of such programs.
That everyone does security theater and no one does real security. Mostly for the convenience of marketing, data is not encrypted at rest, data is not segmented into different data stores (e.g., a separate store for passwords) but kept in the same MySQL database, employees can dump millions of records instead of one record at a time, unencrypted Excel sheets fly around everywhere, etc.
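A toy sketch of the "one record at a time, with an audit trail" idea; the class, policy, and names here are assumptions for illustration, not any particular product:

```python
class GuardedStore:
    """Sketch of an access layer that audits every lookup and refuses
    bulk dumps (an assumed policy, not a real product)."""

    def __init__(self, records):
        self._records = records
        self.audit_log = []          # who looked at which record

    def get(self, employee, key):
        self.audit_log.append((employee, key))
        return self._records.get(key)

    def dump(self, employee):
        # Bulk export is the breach vector, so it needs separate approval.
        raise PermissionError("bulk export requires an approved batch job")

store = GuardedStore({"42": {"name": "alice"}})
assert store.get("bob@corp", "42")["name"] == "alice"
assert store.audit_log == [("bob@corp", "42")]
```

Even this much turns "someone exfiltrated 143 million rows" into an event that is either impossible through the normal path or loudly logged.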
A similar approach might be the best tack to take here
* Have a public register of breaches, with auditor sign off of the details of the event so we can all learn
* publicly registering the breach gives some degree of protection from liability / punishment, but there is an expectation of competence and good practice (very much like accounting)
* Work with the EU on Data Protection definitions and approaches - if both the US and EU are singing from the same hymn sheet, it will become the global de facto standard
* probably the biggest area to push in that is that personally identifiable information should belong to the person identified - and treated like an asset held in trust by those who hold it...
* beef up whistleblower laws and roles of researchers
* have the NSA buy back some of the world's trust by identifying and hunting down cyber criminals the same way actual violent terrorists are
For example, an orthodontist in my area asks for SSN, employer, marital status, spouse's SSN, spouse's employer, and states "you must complete the entire form". I only enter name/phone/insurance info, but I bet most people will just do what the form asks.
So part of the responsibility is on the user to not willingly give away irrelevant data. Part of the responsibility is on services to be good stewards of data.
What should Congress do? How about unifying PII and IP? Give Equifax the Napster treatment.
How about prohibiting breachers (companies which have had data breaches in the past) from collecting data which is not essential to their business, for a limited time span?
Something like: Hey, you have not secured your customers' data? So why do you want to store that data anyway? You want to promote only the relevant products to your customers? Then we will give you some time to get your storage security right, and in the meantime you are not allowed to collect any non-essential data for the next two years.
Yes, the essential data is probably the more important kind, but at least it would push companies with poor security to store less data for a while.
Just an idea, what do you think?
I suspect that the unintended consequence would be that EULAs and various business relationships would be adjusted to attempt to limit liability. Maybe let liability be a court matter; just knowing about the breaches would be a huge step for consumers, though.
Okay look, I get it, it is absolutely despicable what these companies are doing. But think about ordinary website operators for a minute. A lot of the proposals in this thread would basically criminalize running a basic web forum unless you're some kind of security ninja. Please, think before you write.
If a company fails to do so (or doesn't provide some relatively simple way to do it like an online form), there should be harsh financial penalties.
Well, if I start getting spam at somethingUnique:myRealEmail@domain, then I can call out the party I gave it to: "I'm looking at you, Facebook." Some spammers now have that address, but I can shut it down or change it; I could also keep a list of who had that address, and if they still use it I will still get messages. There's accountability, and even though something static was given away, it was only a pointer to a static item, so I can change the pointer. I should know who knows what, how they think they know it, and have the option to take it away. But that's not really possible when companies like Equifax collect data on us without ever really seeming to ask; I'm sure we signed our lives away in a daze while buying a car.
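The accountability half of this scheme is trivial to automate: given a record of which tag went to which company, a leaked address names the leaker. A toy sketch; the address format follows the comment above and is not a real standard:

```python
def leaked_by(address, issued):
    """Name the likely leaker, given an alias of the form
    'tag:realname@domain' and a record of which tag was handed
    to which company."""
    tag = address.split(":", 1)[0]
    return issued.get(tag, "unknown")

# Hypothetical record of which unique tag each company received.
issued = {"shoes2017": "Acme Shoes", "bluesite": "Facebook"}

assert leaked_by("bluesite:me@example.com", issued) == "Facebook"
assert leaked_by("mystery:me@example.com", issued) == "unknown"
```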
Then again, perhaps spam isn't that lucrative anymore. You might earn much more by selling profiles these days. This only dawned on me after Disqus was hacked and 'lost' personal information such as e-mail addresses (my personal one among them).
What criminals could do now is simply collect breaches, put them into a database and make "e-mail" the join criteria for a query. Based on the output they can generate comprehensive profiles of people that weren't available on the market before.
Data from 'shop A' may reveal my DOB and my real name, data from Disqus allows them to extract what values I hold, and so on...
In a way they bank on not sending you spam, because if they would you may change your e-mail and they are thrown off the track ;)
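To underline how little work this cross-referencing takes, here's a toy join of two hypothetical dumps on the shared e-mail key (all data is made up):

```python
# Two made-up breach dumps keyed by e-mail address.
shop_a = {"me@example.com": {"dob": "1980-01-01", "name": "A. Person"}}
disqus_dump = {"me@example.com": {"values_held": ["privacy"]}}

# Merging them into one profile per person is a dictionary union
# over the combined key set -- "e-mail" as the join criterion.
profiles = {
    email: {**shop_a.get(email, {}), **disqus_dump.get(email, {})}
    for email in shop_a.keys() | disqus_dump.keys()
}

assert profiles["me@example.com"]["dob"] == "1980-01-01"
assert profiles["me@example.com"]["values_held"] == ["privacy"]
```

A few lines like this, applied across many dumps, is all it takes to build the comprehensive profiles the comment describes.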
1) An entity storing personal data must be audited by a government-approved 3rd party, and its rating must be made public.
2) Any breach (or suspicion of a breach) must be reported to this 3rd party within 24 hours.
The issues I saw with the Equifax breach were that there was NOTHING telling us how bad their security was, and that they were allowed to sit on the breach for a month before reporting it.
The way I always approach this problem is one of ownership. Congress, implicitly, assumes that individuals own their data. That is why it used to be possible to ask individuals to prove their identity by asking them to verify data they own, and presumably, haven't shared too broadly.
Identity theft is really a robbery, then, because data which belongs to you is stolen. The government's opening position then needs to be that data belongs to consumers.
What isn't clear is the physical metaphor for what happens when you give your data to a third party, explicitly or implicitly. Am I 'leasing' my data to Facebook? Am I granting them shared ownership? Or is it more like a Bank... I'm allowing Facebook to "re-invest" my asset, they can make money off of it while they have it, but in exchange, they have to keep it safe.
I personally really like the bank metaphor. Like banks, you could get certified as safe by the government, and have that certification taken away... We already have data protection rules like this, by the way: HIPAA.
- decommodify personal details by making them illegal to resell or distribute without permission
- restore individual control: require giving distribution and update/modify/delete rights back to the individual
- mandatory incident reporting requirements, with personal criminal liability for executives who fail to disclose breaches
- mandatory compensation, determined by independent government risk management, to customers based on the expectation of risk incurred plus insurance against losses for 5 years
- ensure minimum security requirements, similar to PCI-DSS, by formation of a federal information security standards agency that produces practical, effective configuration and architectural standards and collects external/internal audits and conducts spot-check compliance audits similar to the IRS for taxes
- institute an opt-in national identity virtual & physical card with provably secure public/private key management, open-source infrastructure that isn’t based on social security numbers. Perhaps managed by a non-profit which includes security researchers and consumer advocates, with industry advisers with less power.
- phase out use of social security numbers as a primary key, eventually making it illegal to use and require a unique identifier generated for each service by previous system that is not shared by any other system. To connect two identifiers requires the person’s approval, the person can change their per-service identifier at any time and it changes once per year anyhow
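The per-service identifier in the last bullet could be derived rather than stored; a minimal sketch, assuming a per-person secret held on the proposed identity card (all names here are illustrative):

```python
import hashlib
import hmac

def service_id(person_secret, service, year):
    """Hypothetical per-service identifier: each service sees a different
    ID, and the ID rotates every year, so no shared primary key exists."""
    msg = "{}:{}".format(service, year).encode()
    return hmac.new(person_secret, msg, hashlib.sha256).hexdigest()[:16]

secret = b"held-on-the-identity-card"
# Different services cannot join their records on a common key...
assert service_id(secret, "bank", 2017) != service_id(secret, "telecom", 2017)
# ...and a leaked identifier goes stale at the annual rotation.
assert service_id(secret, "bank", 2017) != service_id(secret, "bank", 2018)
```

Linking two identifiers would then require the person to participate (by deriving both from their secret), which is exactly the approval step the proposal calls for.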
1. Something you know
2. Something you are
3. Something you have
Unfortunately, an SSN is none of these: it's just something the government gave you, and it doesn't fall into the security triad.
So when a loan is approved, it's approved with data that may have been made public, either through public records or through data breaches. The SSN and birth date were never meant to secure financial loans, and should never have been used that way in the first place.
So please recommend that conglomerates be forced to encrypt data in ways that protects the vast majority of accounts in case of a data breach.
Thanks a lot for taking the initiative to ask the community, regards to you my friend in freedom.
In such a system, both business and citizen gain value. The arrangement gives businesses the advantage of outside expertise: they can leverage the information quickly to resolve issues they may not have the money (or in-house expertise) to find,
so they appear proactive and responsive to customers. Citizens gain visibility into, and accountability from, the companies that don't resolve issues, by having that same information made public.
I'm not opposed to reasonable grace periods between telling a company they have a problem, and before it's made fully public.
Companies generally only focus on security to the extent they understand the risks of the impact. So make the impact very clear, and give those impacted a strong selection of reparations (a credit monitor being only a pale solution, useless to many today). I think the CFPB could come up with additional recommendations on reparations from companies that citizens would accept.
Weaknesses of this argument:
Gov't is slow, evidently even with information.
Coordination and competing purposes between government orgs are nothing to sneeze at; they are hard to resolve.
Businesses who feel they cannot quickly or inexpensively resolve a problem reported to them will attempt to hide it, working against the idea that business will gain value.
Did Equifax obtain information about me via a breach?
Show me where in my mortgage contract I gave the bank permission to disclose anything to Equifax.
Show me any agreement with my employer that authorizes them to disclose salary information to Equifax.
Out of the thousands of points of data Equifax has collected about me, how many of those were obtained through 'breaches' by some definition?
In addition, incentivized and high-pressure opt-ins have the same issue: companies are stealing data about customers to improve their business without offering compensation in kind. The business should be liable for any mishandling of this information and any fallout from it. If a loan was taken out in your name falsely due to the Equifax breach, they should have full liability, and, due to their voluntary role in the incident, they should be presumed guilty unless they are able to prove their innocence.
If anything happens outside the boundaries of what data processing is allowed, by negligence or malice, this then becomes a matter of civil law. In the case of data leaks it would be up to a court to assess any economic value to the data. A multiple of the value it could have been sold for is what settlements are based on in similar matters, and this also scales well to multiple claimants.
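The "multiple of what it could have been sold for" rule scales naturally to many claimants; a toy illustration (every figure here is invented for the example, not taken from any real case):

```python
def leak_settlement(records_leaked: int,
                    price_per_record: float,
                    multiplier: int = 3) -> float:
    """Settlement assessed as a multiple of the black-market value of the
    leaked data; scales linearly with the number of affected people."""
    return records_leaked * price_per_record * multiplier

# e.g. 145 million leaked records at a notional $5 each, trebled:
total = leak_settlement(145_000_000, 5.0, 3)  # → 2_175_000_000.0
```

The point is that the court only has to assess a per-record value once; the number of claimants does the rest.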
European data protection laws have taken small steps in this direction and I think it is a sound principle.
Of course no system is 100% secure, but the narrative that it's inevitable anyways is often used as a defense for bad practices (like in the Equifax case).
Breaches might happen, but they are not inevitable. And good practices can still have an impact on how often breaches happen, how much is stolen, and how useful the data is to the intruder.
I always get angry when I see some company head say, "We'll be broken into anyway, why even try?" They wouldn't take that attitude with their physical premises, so why with their network? Because they don't really care about other people's data.
If companies like Equifax held only collections of public records (facts), then a 'breach' like this would have no consequences; all of the data would already be public.
What presently gives them power and what makes this breach so bad, is that these facts are used as proxies for an actual form of identification/authentication.
A national ID based on strong cryptographic solutions and issued to all citizens, preferably with their own private keys being also signed by the government if they desire, is how we properly enable digital signatures and progress to an age where forgery of identity is far more difficult.
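A toy sketch of the chain-of-trust idea behind such a national ID. Note the loud caveat: HMAC tags are used here purely as a stand-in for real asymmetric signatures (a deployed system would use something like Ed25519, where verification needs only a public key); every key and name below is illustrative.

```python
import hmac
import hashlib

def sign(key: bytes, message: bytes) -> str:
    # Stand-in for a real digital signature scheme (e.g. Ed25519).
    # HMAC is symmetric, so this only demonstrates the trust-chain shape.
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(key, message), tag)

gov_key = b"national-id-authority-key"        # illustrative
citizen_pubkey = b"citizen-public-key-bytes"  # illustrative

# 1. The authority certifies the citizen's key when the ID is issued.
cert = sign(gov_key, citizen_pubkey)

# 2. The citizen signs a document with their own key.
doc = b"I authorize release of my mortgage records."
doc_sig = sign(citizen_pubkey, doc)

# 3. A verifier checks both links in the chain.
assert verify(gov_key, citizen_pubkey, cert)   # key really issued by the state
assert verify(citizen_pubkey, doc, doc_sig)    # document really signed by that key
```

Forging an identity then requires forging a signature, not just knowing a few public facts like an SSN and a birth date.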
For example, when Seattle accidentally gave me millions of emails: http://crosscut.com/2017/10/seattle-information-technology-d...
- Current remedies are designed for the convenience of the entity (write a check to a credit monitoring service and issue a public apology). The burden of action remains on the person whose data was collected, possibly without their knowledge or consent.
- Lack of accountability. It is difficult to overstate the impact of a breach like the Office of Personnel Management's SF-86 database, or any of the NSA leaks. That degree of negligence could arguably have been treated as a treasonable offense.
I'm not kidding - some teenager using Django will have a site with better security than what we've seen from some large companies in their data breaches. This is inexcusable. The narrative often is "smart hackers", when it's really "we did less than my teenage kid did in securing the data"
Product safety in the US is the world standard precisely because plaintiff attorneys extracted enough cash from manufacturers that shareholders, banks, investors, board members and insurers forced reform.
Make failure expensive. Limit liability for small business. Make officers personally liable in cases of gross negligence.
This and about 5 years will deliver results.
Someone donated £500 to a charity on his behalf to prove him wrong.
An example of this can be seen in data-science related to FDA activities; there is an incredibly heavy bias towards specific proprietary software and data-storage formats (and I don't mean Microsoft).
I think the best you could do is condense some of the worst offenses into tweet-worthy "sick burns" that will hopefully be remembered for more than a few minutes.
Also, if you can throw something in about continuing net neutrality, that would be great.
It's illegal for a bus driver to get drunk and drive a bus full of people. Why is it not illegal for a sys admin to set 'password2017!' or a developer to set 'developer2017!' as the admin password on a website and increment the year when it's time for a change? I've seen first-hand (on multiple occasions) how bad passwords like these harm people on-line. It ought to be against the law.
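The year-increment anti-pattern described above is trivially detectable by machine, which makes the negligence argument even stronger. A minimal sketch (the word list and punctuation set are illustrative, not a complete policy):

```python
import re

# Reject passwords of the form <dictionary word><year><optional punctuation>,
# i.e. exactly the 'password2017!' / 'developer2017!' pattern above.
WEAK_WORDS = {"password", "developer", "admin", "welcome"}  # illustrative list
YEAR_SUFFIX = re.compile(r"^([a-z]+)(19|20)\d{2}[!@#$%]?$", re.IGNORECASE)

def is_weak(password: str) -> bool:
    m = YEAR_SUFFIX.match(password)
    return bool(m) and m.group(1).lower() in WEAK_WORDS

assert is_weak("password2017!")
assert is_weak("Developer2018")
assert not is_weak("correct horse battery staple")
```

A real policy would also check against breached-password lists, but even this ten-line filter would have caught the passwords described above.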
If you do the security basics right (patching, passwords, and logging) you'll be fine 99% of the time. But people won't even do that (it's tedious and boring and not sophisticated). Instead, they obsess over APTs, zero-day exploits, and nation-state actors, when they really just need to start by patching and setting decent passwords.
A reasonable amount of money should be dedicated to an intelligence service attempting to penetrate companies which are of significant national interest. Fines with increasing severity should be assessed to the responsible parties - and the vulnerabilities should be communicated in private.
For cases of the most persistent gross incompetence and negligence, companies should not be permitted to continue operation. Such powers exist in other agencies.
All demographic data everywhere is vulnerable, because it must be stored as plaintext, because we don't have nationwide unique identifiers.
1. Criminalize moving PII out of the country.
2. Personal liability for every person involved in gathering and protecting the data, and those involved in managing the teams and companies.
The fact that people can get paid while externalizing the downsides of their failures is why this is a problem. Make them personally responsible and it goes away.
Please, please, don't. This forces global-scale operations to build computing in a number of places around the world to comply. National boundaries are meaningless on the Internet, and we already have enough of those jurisdictions in existence that doing global-scale operations with PII or even GIS data is an international legal minefield. Do you really want to suppress startup development by forcing global services to talk to 200+ different lawyers about what they're doing, in the long term?
I've worked on products where entire datacenters are forced to exist in order to comply with a law somewhere. Like this law, which I bet you didn't even know existed:
So basically you want a law requiring all employees to hide any evidence of a breach?
It reminds me of a law requiring all TVs to be quietly dropped off at night at a random neighbor's, because it's no longer legal to throw them away, and the legal methods are expensive and inconvenient.
I think it should only be the business that incurs the penalty. That way it is the business that is motivated to sufficiently train and oversee the employees.
Otherwise, you have employees on the hook, with employers incentivized to make them take shortcuts.
Most of the risk comes from people who are already inside your country, or from people who break into your servers and therefore won't care about moving it outside your country.
You essentially want to go to prison for something your co-worker overlooked.
It's fine for companies to collect basic usage and telemetry data, but when they start personalizing it without a user's permission/consent (i.e., when they start tying it to personally identifiable information, as Facebook and Google do), it becomes weaponized and can do great harm to individuals. Privacy by default, and anonymous by default, would essentially prevent this.
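One way to make "anonymous by default" concrete is to allow-list the non-identifying fields and drop everything else before a telemetry event ever leaves the client. A minimal sketch (all field names are illustrative):

```python
# Keep only an allow-listed set of non-identifying fields; anything else
# (user IDs, emails, precise geolocation) is dropped, never collected.
ALLOWED_FIELDS = {"event", "app_version", "os"}  # illustrative allow-list

def anonymize(event: dict) -> dict:
    return {k: v for k, v in event.items() if k in ALLOWED_FIELDS}

raw = {
    "event": "page_view",
    "app_version": "1.4.2",
    "os": "linux",
    "user_email": "alice@example.com",  # identifying: dropped
    "geolocation": (47.6, -122.3),      # identifying: dropped
}
assert anonymize(raw) == {"event": "page_view", "app_version": "1.4.2", "os": "linux"}
```

The design choice is deliberate: an allow-list fails closed, so a newly added identifying field is dropped by default instead of leaking until someone notices.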