EDIT: To clarify one thing user downandout pointed out: it could apply to app developers with a million or more users as well. Overall, though, it should not affect bootstrappers or smaller tech businesses that don't hold that much consumer data, so my overall point still stands.
Below is the link to the details on this bill:
"(ii) is not substantially owned, oper
ated, or controlled by a person, partner
ship, or corporation that does not meet the
6 requirements under clause"
I'm really glad this is coming to light and I hope it passes. It will be very interesting to see how companies try to avoid it, but at first glance, it seems well thought out.
But, of course, it doesn't. Not even close.
By way of analogy, if someone looked at a link to a github repo and said "yeah well I can't see any GOTO 10 lines, I bet this will crash when the IP trace becomes Apache'd" I hope someone would be patient and polite in explaining the several levels of wrongness involved.
Civil jurisdictions don't have caselaw as an independent source of legal rules, but they still have cases and they still have interpretation. It is not uncommon for an ancient Roman jurist's opinion to be cited in argument in Scots law, just as it is not uncommon to cite very old English cases in Australian law. And it is also common to have civil codes in common law jurisdictions -- the criminal law I was taught was based on a statutory instrument which asserted itself to be the whole of the criminal law and which was frequently amended whenever courts began to accrete rulings around it.
There's no privacy law that is going to get passed by the US Government, focused on large corporations, that involves sending violators to prison for up to 20 years.
To be exempt, an entity must meet all three criteria: 1) revenue of less than $50 million; AND 2) no info on 1 million or more people; AND 3) not a data broker.
That means an independent app developer who gets more than 1 million installs, or a website with more than 1 million users, IS a covered entity, regardless of revenue. Also, ANY "data broker," regardless of size, is covered.
This info is on pages 4 and 5.
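For concreteness, here's a minimal sketch of that coverage logic in Python (the thresholds are the ones described above; the function name and parameters are my own invention, not anything from the bill):

    def is_covered_entity(revenue_usd: int, consumers_with_data: int, is_data_broker: bool) -> bool:
        """Sketch of the draft's small-entity exemption as described above.
        An entity escapes coverage only if ALL THREE conditions hold;
        failing any one of them makes it a covered entity."""
        exempt = (
            revenue_usd < 50_000_000
            and consumers_with_data < 1_000_000
            and not is_data_broker
        )
        return not exempt

    # An indie app with negligible revenue but 1M+ users is still covered:
    assert is_covered_entity(0, 1_200_000, False)
    # A small business with modest data holdings is exempt:
    assert not is_covered_entity(2_000_000, 50_000, False)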
Edit: How is a factual comment getting downvotes? OP read it wrong, and I told him so. There's nothing in this comment to disagree with. There are only facts.
You need to pay a lawyer to evaluate that for you; that's the cost, regardless of whom the bill's sponsor says this is intended to target.
Almost all of the language in the section we're referring to applies to just one requirement: making the data tech companies retain about consumers available upon request to those consumers. That's something responsible companies already do, many because regulation already requires it of them.
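As a rough illustration of what satisfying that access requirement can look like (a sketch only; the storage layer, field names, and auth handling are hypothetical stand-ins):

    import json

    def records_for(user_id: str) -> dict:
        # Hypothetical storage layer; a real service queries its actual datastores.
        return {
            "profile": {"user_id": user_id},
            "activity": [],      # everything retained about this consumer
            "shared_with": [],   # third parties the data was disclosed to
        }

    def export_my_data(user_id: str) -> str:
        """Assemble a consumer-readable dump of everything held on a user.
        A real endpoint must first verify the requester is that consumer."""
        return json.dumps(records_for(user_id), indent=2)

    print(export_my_data("u12345"))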
As you said in one of your comments, fortunately this bill as written will never become law, both due to its implications and the fact that its author is a single member of a minority party. This is one instance in which I am happy with our system of government.
My read is that it's less onerous than the California privacy statute that already covers a huge fraction of tech startups.
We do both security and privacy engineering work for our clients, most of whom are encumbered in one way or another by regs, and it is not the norm for legal to do line-item review of policies and procedures. SOC2 Type 1 audits are much closer to a mainstream practice, would almost certainly satisfy the "data protection" requirements in any rule the FTC would come up with, and certainly do not involve "a few hundred hours" of legal.
That's a coherent position, but not one we can reasonably hope to debate about between each other.
It starts with the claim that this law could put Flappy Bird on the hook for decades of prison time. I rebut, and you say (paraphrased) "no, read the law, anyone with 1MM users could be sent to prison for failure to comply". This is obviously not true.
Then the claim becomes that pp26-33 of the statute has so many burdensome requirements that it would be impracticable for many startups to comply. I ask for specifics; none emerge. Instead, a new claim appears: every startup would be on the hook for "a couple hundred hours" of legal to verify their compliance.
But the proposal as stated doesn't require formal compliance reviews, which makes it hard to argue that this proposal would somehow cost more than the many other regulations that do have that requirement, and for which my firm has done significant engineering and compliance work without spending a hundred hours talking to legal.
But, no, it turns out that's not the argument. The real argument is that the proposal requires auditors, for which legal will have to be deployed prophylactically. Now, the proposal does not in fact have an auditor requirement, but also, the clause that discusses auditors goes out of its way to make it clear that the types of third parties they're referring to are technical experts, which startups already use.
So the argument changes again. Now the argument is that regardless of the specific construction in the proposal (again, these specifics were all brought to the discussion by you!), it would be prohibitively expensive for startups because a lawyer would have to take time to verify the meaning of the law for the startup.
I point out that this is an argument that applies equally to pretty much any privacy or security law, and you respond that this one is a special case because of the prison time and fines (the "breathtaking" fines are part of the same clauses as the prison liability) --- thus resurrecting the original false claim.
This doesn't read to me like a good-faith argument.
It's of course fine to make the argument that any new regulation would impede startups and would therefore not be worth the trouble (there are other arguments against this proposal you could just as easily make; for instance, that the field isn't mature enough for us to have the FTC use rulemaking authority to establish cybersecurity requirements for startups).
But if those are the kinds of arguments you're making, make them. Don't move the goalposts.
Actually, with specific regard to Flappy Bird, it is true, because it had more than 100 million installs, far surpassing the 50 million threshold that exposes its developer to criminal as well as civil penalties. So, contrary to your statement, it actually is true.
> Now, the proposal does not in fact have an auditor requirement, but also, the clause that discusses auditors goes out of its way to make it clear that the types of third parties they're referring to are technical experts, which startups already use.
I'm not sure what you mean here. There is an auditor requirement "where reasonable," and presumably "reasonable" would be entirely up to a court's discretion. Also, "technical experts" in the context of this law wouldn't necessarily be the developer of the site, but rather technical experts who are trained in complying with this law. Likely, that means someone brought in by a law firm or professional auditing outfit, at enormous expense.
By the way, did Flappy Bird even collect NPI? Or is this an even sillier example?
That's what it says, but one would have to believe that failing to file such reports would also be a criminal violation in any final draft of the bill. Otherwise what would be the point of the bill? Does it make sense to you that they would have a bill like this and provide a simple way to avoid it: just don't file? That appears to be an oversight by the author, but one that would undoubtedly be fixed.
> By the way, did Flappy Bird even collect NPI?
Since this bill uses a vague and legally untested definition of "personal information," simply maintaining weblogs containing IP addresses could trigger this.
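For what it's worth, a common mitigation is truncating IPs before they're written to logs, though whether that actually takes them outside "reasonably linkable" is exactly the kind of untested question at issue. A minimal sketch:

    import ipaddress

    def anonymize_ip(raw_ip: str) -> str:
        """Zero out the host portion so the logged value is less linkable
        to a specific consumer device (common practice, legally untested)."""
        ip = ipaddress.ip_address(raw_ip)
        prefix = 24 if ip.version == 4 else 48
        network = ipaddress.ip_network(f"{raw_ip}/{prefix}", strict=False)
        return str(network.network_address)

    print(anonymize_ip("203.0.113.42"))  # -> 203.0.113.0
    print(anonymize_ip("2001:db8::1"))   # -> 2001:db8::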
If your problem is actually with the number of users, please propose a specific number of users that would make a better limit.
EDIT: This is also the exact same kind of "sky is falling" rhetoric that came with HIPAA. Lo and behold, we still have large and small doctors' offices, and have taken the first steps towards actually protecting customer health records.
Parent mentioned startups approaching 1 million users being unable to comply. If they can't find VC money at the million-user stage, then perhaps they didn't have a viable model in the first place.
(Plus, were all the companies I mentioned "VC funded" from the first user? If not, the argument holds for them too.)
Incorrect; that would only be true if they collected and stored personal info on their users. Hopefully we see more apps and websites stop collecting this info, or at a minimum start purging the data (i.e., if you visited a site once 5 years ago, they should not still have your data, but many do).
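A minimal sketch of that kind of retention purge, assuming a SQLite users table with a last_seen ISO-timestamp column (the schema and the five-year cutoff are hypothetical):

    import sqlite3
    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=5 * 365)  # hypothetical five-year window

    def purge_stale_users(db_path: str) -> int:
        """Delete personal data for users not seen within the retention window."""
        cutoff = (datetime.now(timezone.utc) - RETENTION).isoformat()
        with sqlite3.connect(db_path) as conn:
            cur = conn.execute("DELETE FROM users WHERE last_seen < ?", (cutoff,))
            return cur.rowcount  # number of stale rows removed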
> (12) PERSONAL INFORMATION.—The term "personal information" means any information, regardless of how the information is collected, inferred, or obtained, that is reasonably linkable to a specific consumer or consumer device.
So if you use an email address for your users to log in, or even a username, you are collecting personal information under the vague nomenclature of this law. Device IDs might count too (though I think Apple at least now gives a per-app ID, which means it can't be linked to external sources much now, iirc).
You are posting like this is a bad thing. I think it is great: companies need to be held accountable for gobbling up personal data, and should be discouraged from collecting anything, including email addresses. I get enough spam, thank you.
You think email addresses are bad, but the masses hate having to remember usernames. Emails aren't used for logins because of marketing (all those systems already had email verification in addition to usernames, so email was required anyway). Furthermore, without an email it becomes impossible to recover your account if you forget your password (which most general users do): if the app holds no personal information and no email, there is simply nothing to recover the account against.
So this law isn't going to discourage companies from collecting your email address, it just has the potential to add burdens to companies that end up with 1 million signups (even for a free website).
Using email as either verification or username has always been lazy and insecure, and it should stop.
If normies cannot recover their Candy Crush account and need to sign up for a new one in order to protect privacy, I am fine with that, so long as companies like King stop collecting data.
I am security and privacy first; convenience and "free" are about number 1,000,000,000,000,000 on my list of importance. If a lot of free sites die, that is the price we pay for better data security and privacy. I am fine with that.
I do, however, want ownership over my data, the right to demand these companies tell me what they collect on me (often without my permission; see Facebook's shadow profiles on people who do not have accounts), and the right to demand they delete said data.
Let's be clear that this law won't pass, certainly not as it is written. In the US, it's perfectly legal for websites to track your behavior. Should you object to this, you have a simple remedy: use incognito mode.
I’m having trouble thinking of any other type of work that manages to escape all liability.
I think you're downplaying the requirements of this law. You should read it, it's pretty onerous and carries decades in prison with it - even GDPR didn't go that far.
One interesting caveat, however, is that at least as written, I can't find anything imposing penalties for simply not filing the reports this law claims to require after all of the expensive audits etc it wants. It only imposes penalties for lying on the reports. I'm not sure if that was an oversight on the author's part or if that's intentional though. Any final version would likely "fix" that issue.
I don't think it will pass as is, but if it does, this is truly a "sky is falling" moment for US startups. Because it effectively limits non-VC backed startups to less than 1M users, it also makes sure that there will never be a competitor to Facebook or Google. Like GDPR, it locks in entrenched competitors.
Happy to be wrong about this; if I am, please offer a cite.
† Sec 5. (a) (1)
This just isn't true.
edit: I misread this
Now, I'm not yet familiar with this draft, nor with how it's likely to evolve from here, but you have to do one or the other. Either you make the regulation overarching enough that partners' lack of compliance affects your own compliance efforts, in which case smaller players are going to have a hard time competing and will be cut out of the ecosystem by legit big companies who have more to lose than you do. Or, if you don't do that and it becomes relatively easy to outsource data-related liability to your partners, there will be a loophole where shady data collection and processing activities tend to aggregate and move towards entities that are either exempt from regulation or at least willing to take on liability for their customers for short-term gains. The latter is distinctly worse than the status quo because you're making the problem worse; the former is arguably bad for the startup ecosystem.
My understanding is that smaller players are, in the aggregate, much worse about pretty much everything, and the effectiveness of GDPR-style regulations depends on the impact they have on those smaller players, who are much further from compliance than the big tech companies, which were for the most part never too far from compliance and whose bottom line never depended on any of the alleged shady practices. Most people aren't aware of smaller ad-tech players or data vendors, so often their (and their customers' or partners') wrongdoings are blamed on the big tech companies that are much more visible.
To a large extent, what happened in the ad ecosystem is that smaller, shadier ad-tech companies forced everyone else to become shadier: if they are out there promising marketers more data, better tracking, and more accurate measurement, and they are accomplishing that through questionable practices, that still raises the bar for what marketers expect, and they are able to force their way into integration with other platforms, or at least force everyone to do similar things. Ultimately, there's a trade-off between transparency and measurement for marketers on one hand and privacy on the other, and a world dominated by lots of small players who don't trust one another is one where marketers are forced to require proof that their money is being well spent, which means more privacy-defeating tracking and measurement.
The bill "covers" any entity with over 1MM users (Flappy Bird) or $50MM in revenue over 3 years (most mid-sized startups). It creates a compliance regime, violations of which can be pursued by the FTC under its "Unfair Trade Practices" authority.
A subset of Covered Entities (those with over 50MM users [very few startups] or $1B in revenue [virtually no startups]) are further obligated by this proposal to file annual Data Protection Reports. If the CEO, CISO, or Chief Privacy Officer of one of those entities deliberately certifies such a report knowing it to be false, those specific people are liable for imprisonment.
Actual failure to comply with data protection and privacy requirements is not enough to get you charged criminally in this proposal. The violation that can actually get you imprisoned in this proposal would constitute a deliberate attempt to defraud the government. You have to be a relatively big company, fail to comply with the requirements of this draft, and then lie about it to the FTC to end up in prison.
(For what it's worth: I don't think this bill is going anywhere; it's a discussion draft by a single member of the minority party.)
Ron Wyden is indeed a single member of the minority party, but he is establishing himself as one of the leading voices on these issues. Christopher Soghoian probably played a big role in drafting this bill as his Senior Advisor for Privacy and Cybersecurity.
You can actually also tell that by the fact that the article states that privacy regulations are backed by FAANG et al. They think that an eventual bill would be so watered down, or would matter so little, that they don't even mind stating up front that they support such a bill.
This is similar to a technique that I have used with my wife. She wants to know say if we can do this or that at some future date. I know that we won't end up doing it so I don't put up a fight (even though I don't agree) and I actually support it fully. Later if it looks like it might happen I use my initial agreement to change the exact terms and she is less likely to think I am gaming her. Or figure out some other way out.
Curious if others think this is wrong and sneaky or smart.
Good example: every store with a camera uses ML to identify customers and passersby and shares this info with 3rd parties. Should we talk about how best to do facial and gait ML analysis, or is the answer simply criminally outlawing this practice?
Why can't I file a restraining order against big tech that states "if you identify activity and you correlate it with my identity, immediately erase it and take steps to avert similar activity data collection"? Because I feel big tech's abuse of this data poses a danger to my liberties and the free exercise of my civil rights.
Modern privacy laws are overdue as it is, and they need to be criminal, not civil.
Another one: ISPs would think twice about selling your cell-tower-correlated location data and web/DNS activity to 3rd parties if this meant jail time for the CEO. If this practice is outlawed with only fines and regulations as the penalty, the only people who can sue them are a well-resourced attorney general or a very, very wealthy person who can afford a multi-year legal battle. Unfortunately, even when bigcorps break the CFAA, the FBI won't even listen to civilian complaints.
Personally, I believe this kind of breach of trust should be prosecuted much more vigorously. But besides the fear of embarrassment, I suspect federal agencies are unwilling to compromise intelligence methods to prosecute misbehavior.
I presume the FTC doesn't have authority over the NSA/CIA, so no.
The big tech companies are edging towards complete monopolies in their spaces, and a significant part of this is that they know everything and they're allowed to leverage that data. Why would you go to anybody else to advertise? Unbundling this and making advertising harder again will spread out the budget, probably pushing more towards publishers rather than networks. Again, not a bad thing.
It also means that new players get to compete. It's been very tough to compete with similar services on price when the incumbents can operate at zero "cost".
Not having your data traded under the table is just gravy.
I'm comfortable saying that paid networks won't in-and-of-themselves improve conversation.
In other words, while I'm not sure if this will improve discourse, at least it mitigates a little bit of the questionable social media meddling that we've seen of late.
I kind of see jimmaswell's point that there is a risk of creating a "tiered Internet". But I'm not sure it's completely an either/or situation. It sounds like the bill also emphasizes more transparency about what is being done with the data and who it is sold to. As long as personal data is not shared haphazardly and in a leaky fashion, consumers might be just fine with the bargain they get.
Things like shopper loyalty programs already offer this sort of bargain: in exchange for allowing companies access to more granular shopper information, the shopper gets access to greater discounts. Loyalty programs currently are subject to various consumer protection rules. In contrast, the bargain social media has struck with users (free service in exchange for marketing data) is not well protected.
Personally, I'm fine with a little more transparency and disclosure in this fashion. I don't think this necessarily means the end of free social media per se. (I will say, though, that this is a sign, one of many, that the days of social media "moving fast and breaking things" are over.)
Also, today, privacy conscious users are cut off from the main forums of discourse. Is that better?
EDIT: since people are downvoting this, I should clarify that I’m not advocating against the policy, it just doesn’t seem intuitive to me that this policy would reduce monopolization (data privacy may well be worth the tradeoff). If I’m wrong I’d love to be corrected!
1) It encourages companies to be silent or remain ignorant.
2) I didn't see anything mentioned about the cyber-sec aspect, but I would imagine this will put a bounty on the heads of the largest companies.
Perhaps I watch too many (bad?) movies but shorting a stock on the cusp of being hacked could be lucrative.
It is an interesting thought experiment, though, to consider what facebook would look like if its revenue was derived from convincing users that it was worth a monthly subscription.
I topped up my FB balance with like $2 or $5 dollars, in ?2006?. When Facebook (quickly) discontinued that silliness, my account silently disappeared. MARK ZUCKERBERG, YOU OWE ME MONEY!
I compare that with Twitter, which at the time had two thousand employees. It makes no sense to me.
Which is bonkers to me personally considering how little I use it.
Under GDPR, if your data processing is based on user consent, then that consent must be given freely. That means you should provide equivalent service, regardless of consent being given. Providing one service for free and another for a fee is not equivalent, so I'd argue that such consent would be invalid, as it was not freely given.
However, consent is only one of several conditions for processing data. Another would be "legitimate interests" of the business. If you choose to process data based on it, then you do not need consent at all. In such case you can provide separate services: free with data sharing, and paid without data sharing.
The trick here is that you actually need to be able to prove that "legitimate interests" of your business require collection and processing of that data. And additionally such interests are overridden by "fundamental rights and freedoms" of users. Which is probably why most businesses went the consent route, as it appears to be more clear-cut.
Twitter wouldn’t need to pretend that it didn’t know that a substantial part of its user base is fraud-bots if it just charged some nominal fees for something other than advertising.
It's not like people will be thrown in prison because their DB wasn't patched quickly enough. They have to knowingly and intentionally lie to the federal government in an annual report.
So it's a nonstarter, since the people being prosecuted for these things will have lawyers adept at whittling down intent to only the most brazen and malicious behavior. Not only that, but Sarbanes-Oxley showed us how effective "annual report" red lines are.
Why not just reform EULAs so that people have more power over what they're often blindly agreeing to? If people could easily see the details of what they're agreeing to and had more power over certain clauses, I think market forces would take the industry in a different direction, and there'd be more transparency. I don't always believe a market solution is optimal, but in this case, it seems right.
I'm actually OK with that. We're always complaining that companies don't take security/privacy seriously because there's no incentive to do so. See e.g. the Equifax HN threads. Having a person in the C suite who'll end up in jail if the company seriously fucks up is, IMO, a net positive for the world.
1. Large companies buy insurance for data privacy violations
2. They then lobby to repeal H.R. 3086 (Permanent Internet Tax Freedom Act)
3. They then lobby for another bill for taxation on internet use.
4. Then they lobby for what would be equivalent to a hand out in order to recoup losses.
But, on the other hand, the scope of this bill has some risk of bringing about a technology winter. Most people outside of tech don't realize how much of the software they use has been indirectly subsidized by the ad and data brokering industries.
Bubbles burst. Furthermore, bubbles should burst, for the health of the economy.
"The best minds of my generation are thinking about how to make people click ads. That sucks."
By intentionally sowing economic irrationality (i.e., beyond how irrational humans are already), advertising destroys societal value. Here I use "irrational" in the sense of "making self-harming economic decisions."
Seems to me that government is eager to punish private companies for violations, while increasing its own storage of personal information on innocent people.
In this comments section, someone else said:
> [..] when it comes to mass surveillance, intentionally malicious backdoors and general societal loss of privacy, the solution should be primarily legislative, not technical.
I'd object to this, given that a legislative solution is unable to restrict government collection of private data, while a technical one isn't (case in point: cryptography).
"Each covered entity that has not less than $1,000,000,000 per year in revenue and stores, shares, or uses personal information on more than 1,000,000 consumers or consumer devices, or any covered entity that stores, shares, or uses personal information on more than 50,000,000 consumers or consumer devices..."
Putting those two vastly different classes of entities under the same umbrella and exposing them to decades in prison seems like it would have a chilling effect on the startup community. You would just have to hope that your app/website doesn't get to 1 million users, otherwise you're exposed to requirements where the implementation will bankrupt a small team or independent developer.
I guess you could simply stop allowing new registrations at 999,999 people, but it seems like a bad idea to discourage businesses from growing beyond that.
The bill allows penalties ranging from fines to up to 20 years in prison, while a murder charge carries a mandatory minimum of 15 years (up to life).
Faulty thinking on my part, probably.
It would be common sense to pass laws before it happened, and doing so would have incentivized companies to beef up security.
When a problem is known and risk judged high enough to outweigh the costs mentioned above, a proactive bill can be introduced.
(Or, at least this is how it's supposed to work; in practice, you also need to add politics and lobbying to the mix.)
But threaten the executives with prison, and they'll suddenly make sure that the company complies with the law. I bet e.g. SOX would be taken a lot less seriously without those teeth.
Besides that, I don't see a reason why wilful privacy violations should not be met with prison terms. Don't want to go to prison? Don't collect/share data that you're not allowed to collect/share.
Even for negligence, there is precedent for sending people to prison, although that usually requires the negligence to result in death. But the scale at which software mistakes can cause damage is often higher than the scale at which mechanical/structural engineering does: a collapsing building kills hundreds of people. A collapsing Equifax database doesn't kill anyone directly, but it affects over a hundred million people and has the potential to ruin their lives (imagine, e.g., private Facebook profiles outing people in intolerant areas - that would probably lead to a larger number of deaths than most modern-day building collapses).
The 2008 crash is full of executives who did not comply with the law, and did not take the threat of prison seriously. I think to this date you can count the number in prison on two hands, and most of those were more foot soldiers than masterminds.
That's why it's not taken seriously.
Laws only work when cultural norms already do 99% of the heavy lifting.
And this is the aspect where a lot remains to be done in the USA. Many people who buy, sell or plain steal personal data don't perceive it as something negative. Some of these people are high-level executives and get praise as "disrupting the industry", some do their job quietly, and very often their victims don't even realize the extent of the harm done to them.
They published opinions about the future performance of certain securities. Those opinions were incorrect. Lying would mean misrepresenting the present state of those securities, which they didn't do. Everyone knew they were buying subprime mortgages. They just expected them to trade like prime mortgages.
If I say “this stock will rise in value” and then it doesn’t, that isn’t lying. If I say that while knowing the CEO is committing felonies, it still wouldn’t be lying, but it would be problematic. If I say “this is a share of Apple” when it’s actually Twitter, that’s lying. Bad forecasts aren’t lies, they’re mistakes.
The rating agencies’ role in the crisis is interesting and nuanced, and it is difficult to fault anyone other than the investors who over-relied on (and arguably misinterpreted) their guidance.
If you look at every security the agencies rated AAA, they performed as expected in cash flows. The underlying mortgages were mostly garbage, but some kept paying. That meant the top tranche of those structures, the tranches that got a high rating, kept paying.
If you held to maturity, the securities performed as promised. (Cf: if the government hadn’t bailed out AIG, they may not have.) The problem was they crashed in value in the interim because people began doubting if they would perform. That was a problem for liquidity-constrained investors, which was almost everyone in 2008. Rating agencies don’t say “this bond won’t trade at 50¢ on the dollar.” They say “this bond will probably pay you back.” And in the latter assessment, they were surprisingly accurate. The problem was people took the second to mean the first.
Why do you think Moody's agreed to pay an almost $900 million settlement over allegations that they inflated the ratings of toxic securities?
What if the weatherman knows there's going to be a massive shitstorm tomorrow, but decides to tell everyone that it's going to be clear and sunny?
Skipping back to private consumer data, a law that clearly states that executives go to jail when there is a breach of private consumer data would likely increase the chances that the executives will pay better attention to what data is collected, how data is stored, and how data is protected.
Tangential topic: an interesting regulatory idea would be to have mandatory jail time for all executives (and board members) of a company of a minimum size that is successfully sued in a class action for more than X dollars. Make it strictly liable so that no direct knowledge is required to convict so as to punish executives who allow their companies to harm the public. If an executive allows their company to harm the public, then that person is not fit to serve as an executive.
Infamously, prosecutors with conflicts of interest - often seen in cases of outrage over police misconduct - will deliberately present the weakest possible case in what is an /unopposed/ hearing for a crime. There is a reason the saying is 'they could indict a ham sandwich': the standard is low enough that they just need to present a bare initial case to begin the process of proving guilt officially.
If the blockchain could send people to prison, then we’d probably have other problems.
Data leak? Death.
In this way, all software will be secure.
With this privacy and security must be taken seriously from the start. I am of the opinion that wanton neglect that results in massive consumer harm (think Equifax hack) should warrant prison time.
All of these sorts of things need to carry punishments, not for the people who are told to set the stuff up, but for the management who don't allow time to set things up properly.
There's nothing wrong with iterating quickly, but developers are responsible for making sure their systems are secure and resisting shipping applications that aren't. It is a developer's responsibility to push back on shipping broken software. If they're forced to, they need to make sure there's a paper trail that shows that. Otherwise it's indistinguishable from a developer just having done a bad job of their own accord.
Hard to show it was management's fault, and not the result of a developer who really didn't have the AWS skills (though that's probably still the fault of management, for not verifying skillsets before hiring or giving AWS keys)
Everything we know about iteration cycles, quick access to devops resources, etc, will change when prison time becomes an option.
No. A leader is ultimately responsible for everything that happens or fails to happen under his or her leadership. Full stop.
The people in charge of your hypothetical developer are the only ones with the ability to put processes in place to prevent it from happening. They are the least-cost avoider. Therefore, the power and the responsibility belong there.
Strict liability for the least-cost avoider is a sound strategy from both a moral perspective and a law & economics perspective. When proving knowledge and proximate cause are difficult, and someone is clearly in charge, and the harm is great - you place the liability on the people in charge whenever something goes wrong, and be done with it.
Over the past few years, developers have seen more and more autonomy and power. I can't imagine there's a way to avoid walking some of that back (if it can be)
By specifying what is and what isn't allowed wrt. user data in products. By ensuring it gets included in new-employee training, and communicating that exposing the company to data-related risks is a serious, fireable offense. By having internal checks and audits that monitor data risk and keep it low. I.e., basically the same way you'd address the risk of financial malfeasance.
> Move way from "devops" and back toward a world where there's a clear isolation between ops and development?
It's not really about "devops" vs. dev/ops separation - it's about not moving fast and breaking other people's things. You can solve that with good professional practices, but there needs to be an incentive to adopt them (as opposed to the existing incentives to ignore them in pursuit of short-term profit).
Doctors already have a responsibility to safeguard the privacy of their patients, e.g., interviewing minors-turned-young-adults on their own and asking if they have permission to share the patient's health information with their parents or guardians.
If the doctor installed the lock manually and there was a reasonable expectation to install a lock of a certain strength, then yes the doctor would be responsible.
If the doctor hired someone else, e.g. a locksmith or the property management, then the people who were hired would be responsible, so long as they were informed that they needed a particular kind of lock, they were certified to perform the installation, and there were laws on the books for each of those kinds of responsibilities.
It definitely seems onerous, burdensome, and expensive, but the costs of these kinds of security breaches have been severely discounted by those who have the power to otherwise act.
You cannot stop all criminals, but you can take reasonable actions (due diligence) to ensure a reasonable effort according to industry standards, and timely corrective action once a breach is known.
If a construction company sends a bunch of untrained yahoos out with explosives, well, maybe the yahoos should have known better, but the company absolutely should have known better and I have no problem holding them liable.
Say, for instance, there is a requirement that there must be DES encryption of passwords. That would be a downright terrible law on several levels - the first of which is that the best way to secure passwords is not keeping them in the first place, but storing only a hash. The encryption standard is as laughable now as requiring banks to lock their vaults with a simple warded lock - the kind where skeleton keys work, because shaving the teeth from a key means it no longer has anything to catch on while it turns the lock.
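To make the hash-don't-encrypt point concrete, a minimal sketch using Python's standard library (scrypt stands in here for any modern memory-hard KDF; the parameters are illustrative, not a recommendation):

    import hashlib, hmac, os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Store only a salted, memory-hard hash - never the password itself,
        and never a reversible (e.g. DES) encryption of it."""
        salt = os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)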
Failure to disclose (or unreasonable delay in disclosing) a massive data breach by the executives of a public company is already a criminal offense! But I don't know of any cases where criminal charges have actually been brought. It's possible that assembling a securities fraud case is extremely challenging, or that US Attorneys are not keen to go after tech co. execs. If the latter is true, I don't suspect a new data privacy law would be any better enforced.
The journalist who posted the original article was too busy to put a link to the proposed bill. [EDIT: link: https://www.wyden.senate.gov/imo/media/doc/Wyden%20Privacy%2...]
To those who say white collar crime is out of control, I agree, but we don't get it under control by criminalizing new things. We control it through fairer and better enforcement of things that are already illegal.
I strongly suspect the only people criminally prosecuted under such a bill will be patsies, leaving the politically powerful free to continue rear-ending Americans while prosecutors continue to whistle and look the other way, occasionally rounding up (and convicting) the usual suspects to appear busy, and laughing all the way to the bank when they run for higher office and the same politically powerful underwrite their campaigns.
The only people that can be prosecuted under it are the chief executive officer, the chief privacy officer, and the chief information security officer.
The thing that is criminal under this bill and thus can subject them to prosecution is, despite what most news stories imply, not violating privacy.
The bill requires the company to file an annual privacy report with the FTC. The aforementioned three offices are required to provide a written certification to accompany that filing certifying that the report follows the rules of the bill.
The crime is certifying a report that they know does not follow the rules.
Draft text of bill: https://www.wyden.senate.gov/imo/media/doc/Wyden%20Privacy%2...
So what you are saying is companies should make sure not to have the latter two positions?
A CEO gets paid enough they can handle the risk, but the other two positions don't make nearly enough to exist in the face of risk like this.
The annual report has to describe in detail whether the company complied with the regulations in accordance with subparagraphs (A) and (B) of section 7(b)(1) and, to the extent that the company did not comply, list which regulations were violated and how many consumers' personal information was impacted.
7(b)(1)(A) requires the company "to establish and implement reasonable cyber security and privacy policies, practices, and procedures to protect personal information used, stored, or shared by the covered entity from improper access, disclosure, exposure, or use".
7(b)(1)(B) requires the company "to implement reasonable physical, technical, and organizational measures to ensure that technologies or products used, produced, sold, offered, or leased by the covered entity that the covered entity knows or has reason to believe store, process, or otherwise interact with personal information are built and function consistently with reasonable data protection practices".
To be criminally liable the officer has to certify the report "knowing that the annual report accompanying the statement does not comport with all the requirements set forth in this section".
In other words, to be criminally liable the officer has to lie to the FTC.
The main risk, it seems to me, is that the officers might be given false information by underlings, leading the officers to believe the report is accurate when it is not. If the FTC discovers this, their initial suspicion will be that the officers were the ones who lied. If the officers keep good records of where they got the information they relied on when certifying the report, they should be OK, although it will certainly be something of a hassle.
This only applies to companies with $1 billion or more in annual revenue that deal with personal information on more than 1 million consumers or consumer devices, or to any covered entity that deals with personal information on more than 50 million consumers or devices.
I'd expect a CPO or CISO at such a place is paid well enough to handle this.
You can claim that about any criminal law.
And there is an antidote. You even describe it yourself:
> through fairer and better enforcement of things
Voila, no patsies prosecuted!
So I won't really cry for the targets here.
Yeah, I'm not going to even bat an eye about millionaires and billionaires going to jail for actual crimes and damage done to significant parts of the population. We need to start putting more millionaires and billionaires in jail for crimes instead of black teenagers for having 2 grams of weed in their pocket.
Throwing around prison lightly is dangerous. For some reason data privacy gets people emotionally charged instead of thinking clearly, and it's easy to say "lock them up", but that's exactly the same attitude that led to such heavy criminalization of other things that now takes up resources, fills up courts, wastes lives, and accomplishes almost nothing.
Create a motivating factor for them to delete my records and you impact this.
these are all valuable humans, despite the othering identity politics of law and order rhetoric, and our focus should be bringing them back into the fold, not harshly and debilitatingly punishing them. very few people are so far gone that (long) prison terms make sense.
with that said, it'd be an injustice if we didn't also send privacy violators to prison, as it would unfairly advantage them in the eyes of the law. above all, the legal system should strive evermore toward fairness. obviously the best solution is not to send any of these folks to prison.
ideally, privacy violators should have to face the consequences of their actions as part of their punishment (like personally making whole each victim, as an extreme example).
The bill defines covered entities in Sec. 2.(5)(A) and 2.(5)(B). In particular, companies with less than $50,000,000 in gross receipts and information on fewer than 1,000,000 customers are not covered by this legislation.
And even if those apply to your local coffee shop or whatever, Sec. 2(5)(B)(iii) further limits the definition of covered entity so that businesses that do not provide 3rd party access to information are not covered.
So Starbucks and other huge coffee chains/retail shops are the only organizations that would have to re-evaluate data collection from their public Wifi hotspots, and even then might be exempt depending on what they are collecting and how they are using that information. And, I should point out, these companies will need privacy experts on staff anyways, so this provision is highly unlikely to cause them to shutter their in-store Wifi networks...
Additionally, some of the more onerous requirements only apply to a subset of covered entities with yet larger gross receipts and yet larger numbers of tracked consumers.
But, unequivocally, your locally owned mom & pop coffee shop is excluded from consideration under this provision multiple times over.