As a developer, I know it is hard to implement something once, harder to implement consistently across multiple interfaces, and damn near impossible to keep correct years later after employee turnover and other twists.
The sad thing is that it costs a ton more money to do things really well, and companies can basically take advantage of the low price of doing things poorly until finally forced. And by then, they have tons of money so they can comply but any startup is screwed because now it costs more for everyone, even those entering the game.
Facebook surely must be heavily fined and regulated for their misbehavior, because to fail to keep Facebook data safe is to put lives at risk.
So would you like a fine for your bugs? And note that, unlike in other professions, software development doesn't have generally agreed-upon recipes for building bug-free software. So was that really negligence? Was it malpractice?
Being fined for a contribution to an OSS project would be terrible, wouldn’t it? And no, the size of the company doesn’t and shouldn’t matter in the eyes of the law, only the impact.
Also, people uploading stuff to the Internet should really expect best-effort privacy at most. If you expect secrecy, then uploading shit to a platform meant for sharing is pretty dumb.
Note that I will blame Facebook for willful privacy violations. And I hope to see them suffer under GDPR. But a bug doesn’t fall in the same category.
I would agree regarding small companies, but I wouldn't put OSS developers in the same boat; fining the entity that provides the service makes more sense. It doesn't matter whether that service relies on OSS or not.
It's the company providing the service to the consumer that is responsible for vetting the final product.
An OSS developer has no idea whether her/his code is going to be used by a gaming app or by NASA for mission-critical stuff, and shouldn't be made responsible if a bug in the OSS project caused a rocket failure.
Similarly for a company providing wood (assuming that company isn't making any false claims about the level of quality): it should not be the company's fault if someone decides to use that wood for a bridge where concrete is needed. The bridge builder is responsible for picking a suitable material.
I agree with your post, but I tend to think of Facebook's users as providing the product (their attention). If the consumer is a company buying advertising, then where's Facebook's motivation to be careful with a user's "private" data?
Under GDPR, it actually doesn’t matter if you charge money for your product or not. If you process personal data, you’re responsible for it. This also applies to private people with no commercial interest who start to gather data from strangers (in exchange for some service or whatever).
Edit: The motivation should’ve been there from the beginning, if only for ethical reasons. Now the motivation is probably enforced by hefty fines.
The post I responded to, I think, made a very good point about responsibility being on service providers rather than OSS contributors.
However, the wording about "providing the service to the consumer" seems a bit problematic; it leaves the door open to discussions about who the consumer is, and thereby who is accountable. I'm glad you brought up GDPR - it seems to take the right approach with regard to protecting personal data, no matter who's holding it.
Facebook's product is a platform. It can't exist without users, and it can't exist without advertisers (presumably).
Since both end users and advertisers are part of the product, the data of all of them needs to be protected. It doesn’t matter who the paying party is.
If you say I can avoid penalties by saying my services are "as is", what stops Facebook from doing the same thing?
Obviously, it's still not clear to many people: all services that process personal data became more regulated through GDPR.
And yes, if any service loses its customers' data, there will be a fine. The fine depends on many factors. And yes, this applies even to Mastodon.social or Gitlab.com (the service, not the OSS). The advantage of these platforms is that they actually don't process that much personal data.
Behind any service is a legal entity that asks people for their data, to provide a service. These legal entities are subject to the same laws.
However, since the GDPR apparently determines fines on a case-by-case basis, they might impose a low fine or none at all if the service is non-commercial and had no intention of collecting user data for commercial purposes. But the law still applies.
If you put a web service online that handles personal data, you must make sure to keep that data safe. It doesn’t matter if your service is free or not.
Turn this around: just because you as a user signed up for a non-commercial free service like Mastodon.social (the service, not the OSS you can host yourself), you wouldn’t want the admins of Mastodon.social to mess around with your data, no?
I'm not a fan of the overregulation of industries like aviation, but consumer software has gone too far in the other direction and is long overdue for an adjustment.
As in: every rule was included because someone (nearly) died before it existed.
The real problem is the inevitable regulatory capture that occurs in every market with even an ounce of complexity.
Given the number of high profile breaches we see every month, I definitely think we're due for some consequences.
We deserve it. Though of course others deserve it more.
Does it ... kill people? Does it enforce bad policies like the healthcare industry did for the past couple of decades, causing an epidemic of obesity, diabetes and heart disease, which are the top causes of death?
Yeah, regulation there definitely helped /s
Facebook asked users to upload nude photos. What if those get leaked and users commit suicide because of it? Would you (partially) blame Facebook for their deaths?
> Does it enforce bad policies like the healthcare industry did for the past couple of decades, causing an epidemic of obesity, diabetes and heart disease, which are the top causes of death?
Genuine question, but what policies are the reasons for the epidemic of the three causes of death you just mentioned?
I missed that one. By now even lay people should know that's a recipe for disaster.
No, because taking nude pictures of yourself and then distributing them, no matter where, is just stupid. Parents should educate their kids to know better, or seek counseling if that mistake was made.
You're also talking about a hypothetical situation. When planes crash, people die, guaranteed. And there are more than 100 plane crashes every year.
> "Genuine question but what policies are the reasons for the epidemic of the three death causes you just mentioned?"
The recommendation for a diet high in sugar, high in wheat and other grains, high in vegetable oils / polyunsaturated fats (e.g. Omega-6), low in saturated fat, low in dietary cholesterol, low in salt.
Children were fed in schools, diets were set in hospitals, and foods were preferred in supermarkets according to these guidelines. That's not a debate I want to get into, though.
Considering this article is about Facebook leaking 6+ million photos to third parties, including photos that were uploaded but never shared, it's well within the realm of possibility that at least one of those millions of photos was a nude. In fact, I'd bet there were quite a few nudes in the leaked set. It only takes one more step to turn that hypothetical of yours into a reality.
I'm grateful I didn't have to live through this as a teenager, it's a shark pool.
So how long do we keep pretending that allowing this to go on is a viable way forward?
BTW, how do you think anti-vaccination, healthy at any size, and minor attracted people ideas became popular? I specify those only because they are particularly heinous, but if you want official policy, just look at literally any election, though the 2016 US presidential election and the brexit referendum are the standouts in terms of memes.
I haven't seen any response yet. Does Facebook kill people, yes or no? It's a simple question.
> "wasting peoples' time and/or money at scale is just as bad"
> "how do you think anti-vaccination, healthy at any size, and minor attracted people ideas became popular?"
In that regard all Facebook does is give people the tools to exercise their freedom of speech, possibly with a feed algorithm whose effects they couldn't predict, because it was built to maximize profits, not sanity ... and that will never be illegal ;-)
I understand some of the arguments that Facebook encouraged fake news; however, speaking as somebody who was born under communism, I can tell you that fake news isn't new. It happened before WW I, it happened before WW II, it happened east of the Iron Curtain (at least) during the Cold War, and it happened just as well afterwards.
In my country distributing news via Facebook isn't even that popular, yet fake news is flourishing ... on TV. People are always looking for a scapegoat, for an easy answer, for an easy fix. It's only natural, but it doesn't make it right.
No, I don't think Facebook is to blame for fake news, even if it might have contributed. Facebook can't be responsible for the poor education that people are given.
Absolutely nothing wrong with that. If a small trucking company has a driver that speeds, that driver gets fined the same way a driver for a large trucking company does.
> Of course the fines have to be proportional to the number of affected users.
The recipe for how a driver should not go over the speed limit is well known. Nowadays you even have GPS apps alerting you, and many trucks are monitored in real time from the dispatch center, with drivers risking being fired if they're not exactly on schedule.
Most software projects are greenfield ... people reuse previous work when available and for a good price, but all custom changes are greenfield.
Do you really think that the guy responsible for Heartbleed was aware he was introducing that bug, the way a truck driver is aware of going over the speed limit?
It's really not the same thing; let's not pretend that it is. Regulation in this field would have a chilling effect on open source and startups, because only big companies like Facebook would still be willing to develop critical software, which is definitely not what we want.
I’ve worked in engineering roles where law made me potentially criminally liable for negligent handling of certain data. We took things more seriously than Facebook.
You're right. Look, software is complicated, and there's no way, yet, to make it bug free for any meaningful system. I get that. But at the same time, let's stop calling ourselves engineers if we keep hiding behind 'bugs happen'. We need to be a LOT more responsible than that. Does that mean regulation? If we keep going down the road we're on, yes. Because I gotta be honest, I'm tired of hearing, 'bugs happen' and I, the consumer, am the one who suffers.
How about a blog commenting system that leaks emails due to a bug, something like Isso?
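To make that concrete, here's a minimal sketch of the kind of bug meant here. It's not Isso's actual code (Isso is written in Python); it's a hypothetical Express-style endpoint in TypeScript where serializing whole database rows quietly leaks every commenter's email:

    import express from "express";

    // Hypothetical comment store (not Isso's real schema): the email is
    // collected for reply notifications and must never reach the public API.
    interface CommentRow {
      id: number;
      author: string;
      email: string; // private field
      text: string;
    }

    const db: CommentRow[] = [
      { id: 1, author: "alice", email: "alice@example.com", text: "Nice post!" },
    ];

    const app = express();

    // Buggy: dumping whole rows leaks every commenter's email address.
    app.get("/comments/buggy", (_req, res) => {
      res.json(db);
    });

    // Fixed: whitelist the public fields instead of serializing the row.
    app.get("/comments", (_req, res) => {
      res.json(db.map(({ id, author, text }) => ({ id, author, text })));
    });

    app.listen(3000);

One forgotten field selection is all it takes -- no malice anywhere in sight.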
Basically I don't like these arguments because they're about the company's size. Facebook should be punished because they are big, have a lot of data, and we don't like them, right? No matter how you look at it, it's a Pandora's box.
Is this not how it works for every other industry? Up until the 2008 bank bailouts, that is.
So what should the penalty be for a 14 year old that contributes a bug into a project like Mastodon or OpenSSH or whatever, which then leaks the data of tens of millions of people?
All this would do is have a chilling effect on the industry, such that only big companies like Facebook would be able to develop critical software, due to being able to afford it. And yes, this happens in all the industries you're talking about. And it did not stop the market from crashing, it did not stop malpractice.
Also this regulation will probably not stop Facebook from lawfully violating privacy.
1. consumers want it
2. governments want it
The only thing regulation will accomplish is that only companies like Facebook will be able to do it. Yeah, big win.
Not if your contribution causes harm. A fine would be a more than welcome addition to consumer protections.
Funny, because community-driven open source is the only hope for replacing Facebook with something that is privacy oriented.
However, there needs to be SOME distinction beyond intent, which is often impossible to discern.
That said, you and I can agree that FB sucks, and delete our accounts. It is up to other people whether they follow suit.
That about sums it up for all these privacy breaches these days. It's getting to the same level as "thoughts and prayers" for tragedies. No actual change or consequences for the problems happening, just empty "sorries" and "promises" that it won't happen again / they'll get it fixed. I don't know if this is a GDPR violation or not (as someone else asked), but if it is, I hope we start actually seeing action on these sorts of things.
Sounds like you're suggesting that we criminalize software bugs.
To me, if we can criminalize something like a major oil spill such as BP/Deepwater Horizon, how is this much different? It's not like they did the oil spill on purpose, but they still had to face consequences for the risks they were taking. Software companies, especially large ones like Facebook, should have the same kind of consequences for the risks of software bugs that cause these kinds of privacy breaches.
Also, as someone else below pointed out to a comment with a similar tone to your "criminalize software bugs" phrasing: that's "intentionally obscuring the debate. Gross negligence is an entirely different standard than just software bugs."
The government does a good job in this area of forgiving innocuous violations, as long as all parties disclose them immediately and follow procedure.
The problem is that we're all giving our data away to these "free" platforms. That makes it difficult for a user to argue that they've "lost" something of value when there's a breach. But of course the user has lost something of value. Facebook has built their entire company around the value of our information but we let them have it both ways. It's valuable when they're selling it but worthless when they fail to protect it. Statutory damages for data breaches would deter negligence and (partially) compensate users who have been victims of data breaches.
Come to think of it, does anyone know of good auth resources for a MEAN stack that aren't a copy-paste blog? I'm trying the Udacity auth course as a starting point (uses OAuth2).
The relevant RFCs and drafts, perhaps?
Also check out OWASP.
If you're implementing OpenID Connect, use a certified lib.
Dex by CoreOS, Open Policy Agent, and the Kubernetes docs & code are all good examples; lots of frameworks have docs / code.
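For what it's worth, one certified option in the Node ecosystem is openid-client; below is a rough sketch of the Authorization Code flow with PKCE using it. The issuer URL and client credentials are placeholders, so treat this as a starting point rather than a drop-in implementation:

    import { Issuer, generators } from "openid-client";

    async function main() {
      // Discover the provider's endpoints from its well-known configuration.
      const issuer = await Issuer.discover("https://accounts.google.com");

      const client = new issuer.Client({
        client_id: "YOUR_CLIENT_ID",         // placeholder
        client_secret: "YOUR_CLIENT_SECRET", // placeholder
        redirect_uris: ["http://localhost:3000/cb"],
        response_types: ["code"],
      });

      // PKCE: a one-time verifier paired with its hashed challenge.
      const code_verifier = generators.codeVerifier();
      const code_challenge = generators.codeChallenge(code_verifier);

      // Send the user here to authenticate.
      const authUrl = client.authorizationUrl({
        scope: "openid email profile",
        code_challenge,
        code_challenge_method: "S256",
      });
      console.log("Visit:", authUrl);

      // Then, in your /cb route handler, exchange the code for tokens:
      // const params = client.callbackParams(req);
      // const tokens = await client.callback("http://localhost:3000/cb",
      //   params, { code_verifier });
      // console.log(tokens.claims());
    }

    main().catch(console.error);

The point of a certified lib is exactly that details like state/nonce checks and token validation are handled for you instead of being hand-rolled from a blog post.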
Case in point: look at the quality of medical software today. Hospitals still use Windows XP and other completely insecure and outdated software, because absolutely nobody wants to deal with the nightmare that is HIPAA.
"But HIPAA" has never, in my experience, been employed except by people who find the idea of doing the right thing inconvenient or inconveniently expensive. (It is virtually never that hard and its benefits are clear.)
There are reasons for not modernizing tech stacks in the medical space. HIPAA is, in every case I've ever observed, not a meaningful one.
Thank you for directly attacking my character without even addressing my actual argument.
I'm not arguing against HIPAA; I'm arguing against such regulations in spaces that don't require that kind of sensitivity. I think that medical data absolutely requires the protections it has. But HIPAA has absolutely had the unintended consequence of making current medical data more insecure and stifling innovation in the space. Most doctors don't even follow HIPAA, sending patient medical records over email.
I would estimate that 40% of doctors today are not compliant with HIPAA, sending X-rays and other similar patient information over email with providers that they haven't signed BAAs with.
>There are reasons for not modernizing tech stacks in the medical space. HIPAA is, in every case I've ever observed, not a meaningful one.
Then please enlighten us. Up until a few years ago (maybe even just a year) you couldn't use AWS to host medical data. Today you can't use Google Cloud to host medical data unless you are a large enough business to be able to get into contact with one of their sales reps. Can you even sign a business associate agreement with DigitalOcean? So up until a year ago you could not even have a small healthcare startup hosted on the cloud. Please explain to me how this hasn't stifled medical software innovation.
If it isn't HIPAA it's some other outdated regulation.
"But it's hard, for no actual reason I will define" is not a meaningful argument. So--when one hears hoofbeats, think horses, not zebras.
> I would estimate that 40% of doctors today are not compliant with HIPAA, sending X-rays and other similar patient information over email with providers that they haven't signed BAAs with.
Probably true! But that's their own damned fault. Medical has Artesys and similar, dental has Apteryx and similar. This problem is largely solved but for hands-on unwillingness to use them.
Those providers should be nailed to the wall, the wall should not be torn down for them.
> Up until a few years ago (maybe even just a year) you couldn't use AWS to host medical data.
AWS has been signing BAAs since at least...2013? I believe the first time I looked into it was 2014. But, regardless--if your innovation was so tremendously stifled by this, I'm not particularly sympathetic. I've been running my own services and writing them too for at least a decade and you can do thou likewise, I promise. I am, however, saying that today it's very easy to do so 'cause Amazon is all-too-happy to sign one.
Also, I haven't had to use GCP for HIPAA-covered entities--found their BAA pretty easily though!--but even assuming you're correct, the idea that you have to, hiss, talk to somebody before they take on some legal responsibility for the PHI you hold... I don't find that to be a particularly nasty requirement. I still find it odd that AWS will just let you sign right through with AWS Artifact.
Azure's all-too-happy to sign one, too. Not that I'd recommend it.
So you're fine with financial losses, loss of privacy, and the material harm that goes along with both? Disregarding the impact that data breaches imply is just naive.
> Case in point: look at the quality of medical software today. Hospitals still use Windows XP and other completely insecure and outdated software, because absolutely nobody wants to deal with the nightmare that is HIPAA.
I wrote medical device software for more than a decade. HIPAA has nothing to do with it. Many systems run on outdated platforms because the cost of replacing them is deemed to outweigh the benefits. That determination is debatable on a case by case basis, but in practice we see a hell of a lot more damage being caused by breaches of companies running on modern technology than we do e.g. hospital systems or LIMS.
Please, if there is provable material harm, they can take it to civil court.
We regulate the finance industry not because of a risk of physical harm, but because financial harm can be equally serious and civil suits do not act as a sufficient deterrent to bad behavior by the powerful. Why do you feel this sort of thing is different? I believe the only real difference is that this sort of thing is new, not well understood by most, and we just haven't caught up.
And your "nightmare" scenario of (civil) liability flowing from programming bugs already exists in the investment world and it hasn't come apart at the seams. Google Axa Rosenberg. A coding error in their trading algorithm went undiscovered for two years. Negligent for sure, but not why the SEC went after them. The problem was they didn't promptly disclose the error to investors and they didn't promptly correct it. Algorithmic trading firms should have mechanisms to catch errors, correct errors, and disclose those errors to investors. And after seeing Axa Rosenberg's $250 million fine and Rosenberg's lifetime ban from the industry guess what they all implemented?
This is false.
Source: Works for a company that has mandatory HIPAA training for every employee every six months.
Citation, please. Here's mine:
> Criminal penalties
> Covered entities and specified individuals, as explained below, who "knowingly" obtain or disclose individually identifiable health information, in violation of the Administrative Simplification Regulations, face a fine of up to $50,000, as well as imprisonment up to 1 year.
> Offenses committed under false pretenses allow penalties to be increased to a $100,000 fine, with up to 5 years in prison.
> Finally, offenses committed with the intent to sell, transfer or use individually identifiable health information for commercial advantage, personal gain or malicious harm permit fines of $250,000 and imprisonment up to 10 years.
Source: American Medical Association
To me that trumps a non-lawyer’s interpretation of a non-legal web site.
That they have a different company risk profile doesn't necessarily change the facts at hand. And, TBH, they don't have to tell you the truth if it helps achieve their immediate goals. (They can tell you you'd be personally and criminally liable. It might make you do what they want better. It might also not be true.) Or it may all be in good faith. But what you describe doesn't square with anything I've ever worked with, at multiple clients and employers.
My guess as to why they take the draconian position: it's more about the internal process. You have to identify and disclose breaches in a timely way; if you don't, the company is at risk.
From the HHS summary of the rules:
(See: https://www.hhs.gov/hipaa/for-professionals/privacy/laws-reg... ) (it’s also laid out in the regulation which I don’t have time to find.)
“Criminal Penalties. A person who knowingly obtains or discloses individually identifiable health information in violation of the Privacy Rule may face a criminal penalty of up to $50,000 and up to one-year imprisonment. The criminal penalties increase to $100,000 and up to five years imprisonment if the wrongful conduct involves false pretenses, and to $250,000 and up to 10 years imprisonment if the wrongful conduct involves the intent to sell, transfer, or use identifiable health information for commercial advantage, personal gain or malicious harm.”
229. If a builder builds a house for a man and does not make its construction sound, and the house which he has built collapses and causes the death of the owner of the house, the builder shall be put to death.
233. If a builder builds a house for a man and does not make its construction sound, and a wall cracks, that builder shall strengthen that wall at his own expense.
Bugs in houses have been criminalized for a very long time. Online data may be less fundamental than safe housing, but housing our data safely becomes proportionally more important as more of modern life depends on it.
If a social media company leaks private photos of its users, the company's executives and senior staff shall have their photos leaked.
I would love something like that. Nobody protects anyone else's interests in this modern world unless there's Skin in the Game. Would highly recommend reading Nassim Taleb's book of the same name; he is popularizing this term, and its implications to society.
You misunderstand what "eye for an eye" means.
"Eye for an eye" means let the punishment fit the crime.
Before "eye for an eye" was established by religious texts, the common retaliation for poking someone's eye out was death.
"Eye for an eye" was a step towards a more civilized justice system.
And houses are not free.
Are you seriously suggesting that information in this day and age does not have the power to directly cause harm, including lethal harm, to people?
Also - that it can be used for organizing bad behaviours is an entirely different subject that has little to do with quality or security.
But Facebook is a company that actively snoops on and uses the data of its resource, the end user. It's like a security guard who's paid to prevent shoplifting actively ignoring violent crimes because they're not related to stealing, or a baby food company ignoring that some metal got into their food because the food is still OK if you pick out the metal bits.
If the government wants to hire people to decide what counts as what - they should do that.
Right. Companies like FB are entrusted with the private information of hundreds of millions of people. There should be investigations as there would be in a plane crash. If negligence is found, there should be appropriate punishment doled out.
When there is irreparable damage I believe it should be criminalized. You cannot regain privacy after an incident such as this, it is irrevocably taken from you against your will.
If you're using OSS for mission-critical software, you must either ensure that it's fit for purpose or pay someone to do it for you. Nothing in the Linux kernel documentation suggests that it can/should be used for flying airplanes or securing PII without doing additional due diligence.
The other big difference is that (for the most part) keeping a passenger plane in the air isn't an adversarial task. Actual breaches are the result of active bad actors, which is completely different from the problems you encounter in designing a plane.
So criminal action seems crazy to me, though I can definitely see a great case for changing the incentives around storing user data. There's a good case for fines (and even an ongoing per-user tax, to make it an up-front cost) for storing PII.
“Information leakage through covert channels and side channels is becoming a serious problem, especially when these are enhanced by modern processor architecture features. We show how processor architecture features such as simultaneous multithreading, control speculation and shared caches can inadvertently accelerate such covert channels or enable new covert channels and side channels. We first illustrate the reality and severity of this problem by describing concrete attacks. We identify two new covert channels. We show orders of magnitude increases in covert channel capacities. We then present two solutions, Selective Partitioning and the novel Random Permutation Cache (RPCache). The RPCache can thwart most cache-based software side channel attacks, with minimal hardware costs and negligible performance impact.”
When my dad went to college, a very old and bitter professor (this was Civil Engineering, communist Eastern Europe) told the students on the first day in class something along the lines of: "If you know you're stupid or don't give a shit about your work, just go home and save everyone the trouble of dealing with your future fuckups. Mistakes here can cause deaths or losses of huge amounts of money".
I believe we've reached the point in which negligence in the software world can cause loss of lives, even when the software is not operating a crane or an airplane (think Grindr leaking account data over http in Saudi Arabia).
So you're minimizing the issue by asking if we should criminalize software bugs. We should, and currently do, criminalize negligence. If bugs are a result of negligence (you know, 'move fast and break things', 'better to ask for forgiveness than for permission') then fines, jail time and criminal records should be a'coming. This is no longer child's play; this is the new world, which runs on software.
Anywhere else it's a plain case of malpractice, whether it's law or medicine, etc.
It happens with such regularity that I'm amazed anyone here would be kind enough to accept their fake apologies for clearly malicious actions.
There is risk in any human endeavour that touches upon someone else's life, in every domain. But, for example, only some of the deaths that occur in a hospital are the result of malpractice. That is the type of mistake for which we hold others accountable: not the mere act of providing insufficient care, but the act of providing insufficient care as a result of a dereliction of professional duty or a failure to exercise an ordinary degree of professional skill or learning.
1. If this was a novel, or very complicated breach, that Facebook did everything possible to avoid, but avoiding it was beyond the knowledge and skills of their security, engineering and QA teams, who otherwise did their absolute best, then it's at the very least defensible. One could argue that you shouldn't handle private data if you can't do it securely, but risk is inherent to anything, and perhaps worth it under the right circumstances.
2. If this was just "move fast and break things" policy, then a big fine is in order, and if no insurance is in place, whoever approved it should get to pay it out of their own pocket. This is the equivalent of a civil engineering company designing a collapsing bridge because everyone showed up at work hungover, or skipped safety calculations because they just take too damn long and time to market is critical.
If you think "gee, this was just a bunch of photos, man, it's not like a bridge collapsed": how certain are you they didn't end up traded on the black market, or used for blackmail? Bet-your-company's-profits certain they weren't?
3. If this was deliberate policy -- not just accident, but a conscious business decision that was then reverted and declared a breach -- then whoever came up with it and/or approved it should be facing jail time.
Edit: also, it pisses me off that people are trying to decide how responsible we should be about what we do based on other fields. They don't fine companies that write crashing firmware for planes or cars, or they fine them X amount; clearly we're only doing computer stuff, so we should be fined less, no?
What the hell? First, they are fined (see, for instance, Toyota, who were fined $1.2B for their infamous acceleration firmware bug). And second, even if they weren't, we shouldn't be aspiring to do the worst thing that's still acceptable! We should be striving for better than anything else, not for "well, at least we're not worse than civil engineers"...
That said, the same changes likely won't occur with sites like FB unless it can be proven that the leaked data led to loss of life or physical harm. They create incentives for people to happily be the product. How do we prove that damage has occurred to the product? Have any forums popped up where people share stories of harm to their family as a result of data leaked from FB?
I can imagine GDPR being useful in the EU for corporate FB accounts. Wasn't FB working on a work-specific version of their site? If so, corporate legal teams would get involved in leaks, I would imagine.
I can imagine GDPR working very well for consumers as well, and it seems we are up for some real legal entertainment in the next few months/years :-)
Edit: It also wouldn't surprise me if it gets worse before it gets better. If I was a publisher right now I'd seriously consider blocking access from EU countries. (But that would of course be an invitation for a small, agile publisher who'd succeed either with a micropayments based approach or a context based ads approach.)
"Well, leave!" isn't an option. They can't leave. Quitting Facebook when you're an active user means you lose a huge amount of social contact. I can think of a dozen people I know who are there because it's how they send baby pics and the like to family. They're non-technical and don't care about federated mastodons, they just want to see their niece and go to their high school reunion.
So yeah, they get really mad at this stuff, but the network effect is so strong that you can't simultaneously convince the entire graduating class of whatever to plan reunions via some new thing when 1) everyone's already on Facebook and 2) they've been using it for so long it's part of their workflow.
The answer is the same with any other foul business practice you oppose. The problem is not unique to digital businesses and I really hope those demanding justice don't request something more brash than they otherwise would in a non-digital situation.
And yes, they can leave. There are real things you can't leave, like your only ISP (the internet is essential in modern society and there are no alternatives); then there are websites like FB that you can choose to leave (not essential in modern society). Your misuse of the word "can't" instead of "won't" just discourages any level of consumer responsibility.
Yes, of course everyone can always leave. The only price - for more than a few of those 2 billion - is the complete disruption of their social lives. Telling someone they can never see cute pics of their lil cousin again is punishment, not a serious decision about consumer responsibility.
You can tell, because despite a history of malfeasance there are still 2 billion active users.
I guess I'm old, but I find that email is great for sending baby pics to friends and family, and for planning things.
So solve for that, not for your dozen-or-so-person social group.
"We are required to notify you that we have leaked information from your account, please be advised that we have no idea who has your profile information, pictures, post history or any other information contained in your posts. Please consider resetiing your entire online persona to avoid financial and/or social consequences."
With two buttons: "Erase me from Facebook" or "I get it, I don't care."
Just planting seeds, Bill Hicks style...
The fact that they don't leave merely shows that they value the social contact gained more highly than the cost of data breaches.
Bugs and security vulns are literally inevitable. Security is important, but if this were the standard, I'm not sure that any company would still exist.
If you had an error that leaked private information, it's worth an investigation. If it made it through despite controls, that's understandable. If they find you failed to do analysis on the risk to users privacy, if you failed to have controls in place, if you didn't code review or test the code, then you have made specific choices that harmed users. That should be criminal.
We need to take software engineering seriously as a discipline. We have the potential to do more wide scale aggregate harm than any structural engineering collapse. We need to start acting like it.
I'm a huge security person. It's my job. But it's unbelievably difficult to secure programs, even when in hindsight there are clear steps that could have prevented a bug.
All of the above, possibly. Other engineering disciplines seem to have defined what constitutes due diligence just fine. This isn’t a novel problem.
It’s obviously not possible to make anything perfectly safe or perfectly secure. But it’s certainly possible to define a minimum amount of effort that must be put towards these goals in the form of best practices, required oversight, and paper trails.
Edit: Even “fuzzy” disciplines like law have standards for what constitutes malpractice or negligence when representing a client.
What evidence is there that this was gross negligence?
This is true, and it's also the reason why there are more software vulnerabilities than necessary. Software could be a lot more secure. There will always be bugs, but it is possible to build software and platforms with many fewer vulnerabilities. But it's expensive, so we don't, and users suffer the consequences while the companies shrug their shoulders and count their money.
CRIPPLING fines to start. Or shut down the freaking company if you can't secure it. Not everything is forgiven with a "we're sorry" for the 27845th time. Private pictures can and do ruin lives. (We can question the wisdom of posting private pics on FB, but after all, it's a huge company and they said they're private.)
Without regulation massive companies are entirely unchecked, there is virtually never market pressure to fix problems like this.
Though in practice, enforcement of regulations in general is already handled this way.
Same in the App Store, a minor app breaking App Store rules doesn’t get knocked out because the downloads are too small to bother.
The idea of some arbitrary cut-off point has been tried in lots of scenarios, and it is gamed by all parties.
Example: copyright will protect new works for x years, at which point Disney lobbies for the arbitrary goalposts to be moved.
Keeping regulations focused on big players serves the dual purpose of focusing regulation where its effect will be most significant, while also ensuring it doesn't negatively affect the market. But yeah, as you mention, the big problem is that once companies reach a certain size they begin to develop the political connections necessary to simply kill, or at least castrate, any potential regulation that might genuinely require them to behave in a way that is inconvenient - even if it's better for society.
Taxpayer-funded systems are among the most controversial things we have. You'll find sharp disagreement on topics like public vs. private funding for everything from education to medicine and a wide array of other issues. Yet you'll find most of nobody that wants to privatize e.g. the fire department. This is because most of everybody would agree that the fire department does a good job, does it efficiently, and does it cheaply.
The point of this is that if there were a regulatory framework that was unambiguously and intrinsically superior to any alternative, you'd find next to no opposition to it. Everybody wants the same thing in the end -- we just disagree on what's more likely to get you there. In many ways, I think lemonade stands are a timeless and perfect example. In many states in the US today it is literally illegal, or at least unlawful, for a kid to sell lemonade in their front yard. They can [and have] faced ticketing, confiscation, and so on. This is clearly idiotic by any standard, yet the very rules and regulations that produced this were all at some point created with good intentions: perhaps ensuring food safety, or avoiding money laundering, or whatever other rule they happen to be breaking by selling a cup of lemonade for a quarter.
A rule that would generally impose substantial penalties for writing bad code would have unimaginably vast consequences at the lower level. I think you're looking more at destroying small business in the tech industry than at suddenly having a world where all code is "good". By contrast, the companies at the top can afford to greatly expand their staff and create factory lines of code review, extensive internal penetration testing, general audits, and so on. And perhaps most importantly, when they do end up violating the rule they have the resources to manage it just fine. So it's very possible that the regulation could have an overall net positive effect there. But if it were applied to society as a whole (instead of just large companies), I think you'd be effectively killing off competition in the tech industry.
I could probably get away with murder, but for some reason I'm not out on the town strangling prostitutes.
Why do companies always need an "incentive" to not be anti-social? Why can't CEOs simply derive pleasure from delivering a quality service in exchange for some advertising eyeballs?
Sounds like a good time to reiterate the advice: Don’t upload things to the internet that you don’t want to be on the internet. That way there won’t be any of your things on the internet that you didn’t want to be there.
It's always hilarious how people try to pretend that it's easy to just drop out of society and the systems that people use to keep in touch. Sure, you can live like the unabomber in a shed in Montana with no phone service, but having that be the only option to keep your personal data safe from leaks is a bit much of an ask. People should be able to live their lives and take reasonable but not extraordinary precautions to safeguard their privacy and be able to have some expectation of privacy as a result. Unfortunately, there is so much data being collected on everyone, so many intrusions to our private lives, and so little care being taken by the stewards of that private data that it is not, it turns out, a reasonable expectation. And the onus for solving that problem shouldn't be on individuals. We shouldn't be forced to live our lives in fear of digital representations of our appearance being leaked onto the internet as someone might have once feared an ordinary photograph could steal a soul. Rather, those who are going to great efforts to destroy the boundaries of personal privacy should be heavily regulated to prevent them from doing so and heavily incentivized to safeguard private data whenever they are in possession of it.
This means anyone in the world can upload an image, tag you in it, and it will show up in searches for you. It still won’t show up on your profile if you have confirmations for that enabled, but still.
This means that if you started to upload a photo to the uploader wizard and then thought better, the photo is still out there.
A big, bright popup right over main facebook.com (and peripheral sites/apps), dismissible only after you scrolled it all the way down and confirmed you read it, saying "private photos of millions of users were leaked" in big bold letters, would go a long way.
Like the saying goes, “One death is a tragedy; one million is a statistic” — Facebook has made all its privacy blunders and issues over many years a statistic...something people may nod their head at, feel bad for a moment and go back happily to the same company’s platforms.
Unless lawmakers around the world do something, nothing will materially affect Facebook (the company). Even if they do, I personally have no faith that the company is capable of changing unless people at the top, like Mark Zuckerberg and Sheryl Sandberg, are out.
From an industry perspective: we call ourselves "engineers", but real engineers are held accountable when they sign off on using an untested metal alloy in bridge joists and people die when the bridge collapses. Facebook's constant bad engineering may not kill people directly, but it does lead to a lot of really important information being stolen, people's financial futures being ruined, and who knows what other consequences for their users. If you still work for Facebook in this day and age you should be ashamed of yourself; I know people can justify just about anything while claiming they'll "make it better from the inside" or because they just need to collect a fat paycheck and are comfortable and don't want to look for something new, but we need to fight these impulses anywhere we work.
Especially if we know the baskets have goals not aligned with our own, however oh-so-convenient they are -- better not to centralize everything in one place to begin with.
What about, for example, pictures sent in a private message?
I'm so very glad I deleted my account months ago.
I did as well. One thing that stood out to me in the article was that users who were impacted by the breach would be notified via a Facebook message. What about people who were impacted by the breach who no longer have an account?
The paywall advertises a "Premium EU Ad-Free Subscription" which is more expensive than the standard subscription and explicitly states "No on-site advertising or third-party ad tracking" as one of the perks.
Trying to buy it has the following:
My only response to this is a big "fuck you" and this link: https://outline.com/zd5du7 so you can read the content without any of that garbage and without paying them since they don't even deserve a single penny.
Customer support by email says I have to provide a copy of my driver's license or passport to "secure the account". I said that's not reasonable; companies leak too much personal data, so you can't have any more of mine, and I'll just open a new account. They replied they'd just change the phone number (now no longer requiring the supposedly required photo ID). They did, and that was the end of it.
- No explanation of why the verification code process would not work.
- None of my IDs have my email address, account number, or phone number on them, and the account doesn't even have my name on it. Giving them photo ID does jack shit for the purpose claimed.
- If the account security is questionable, they should not only require text verification of the new phone number, but should also have removed my stored payment accounts, requiring me to reenter them. AFAIK credit card verification requires the CVV and a phone number matching the credit card account. That seems like the right way to secure the account, rather than bullshit photo IDs.
I've never seen an analysis of it.
They accelerated the planned shutdown for exactly that reason.
Whether you believe data was exfiltrated is essentially a reflection of how much you trust or distrust Google.
Maybe spend some time over the Xmas period having 'The Conversation' with our loved ones about their data safety?
There should be GDPR consequences for this - it's time that law got properly put to the test.
We will see how this plays out, but there should be a fine nevertheless (because others have been fined and they reported it).
But reporting it doesn’t make the fine go away. After all, you started to process personal data and are responsible for it. Alternatively, you could’ve opted for not processing personal data if you think you can’t protect the data adequately.
You can read all this here: https://gdpr-info.eu/issues/fines-penalties/
Not this particular thing per se, but, you know, it's Facebook. As recent history has proven, these things kind of come with the package.
The long-term solution to this mess should come from users abandoning it, which, based on recent reports, is gradually happening.
Where will the people go? If it's other software, it might end up being as bad or worse.
People want to communicate with others. If they use software for that, then... my original question applies.
And you've avoided answering that question.