How is this different than taking a picture of someone? The image is owned by the photographer, not the subject(s) according to current laws.
if I meet someone on the street, and record their name, the conversation we had, and the location where I met them, and their phone number, have I “taken their data”?
Do they have the right to demand that I not record that information?
Does my perspective or interpretation of that information give me some ownership to that data?
What if I use that information for commercial gain? Is that what makes this illegal?
Or is it only if I do this at a scale beyond which humans are not capable, and store it digitally, is that what makes this illegal?
We are flexible and smarter than Vulcans. Something doesn't necessarily have to be expressible as a single, unambiguous universal formula to be made illegal.
We can e.g. allow people to perceive things about other people in their brains (or even notebooks) as we've done for millennia, but not allow them to compile them into large aggregated digital databases of thousands or millions of people without their consent, or give them to advertisers.
>How is this different than taking a picture of someone? The image is owned by the photographer, not the subject(s) according to current laws.
Depending on the jurisdiction, taking an image of someone can be illegal. Sometimes, even if it's a public space. And using an image of someone to advertise stuff without their consent is illegal, including in public space.
The fact that we have lawyers who argue over definitions and whatnot suggests that this isn't possible except through case law. Almost all written rules/laws have exceptions. It's incredibly hard to codify even something we all agree we want to outlaw: murder.
While it would be nice, there's actually not really such a requirement. New legal theories are brought in criminal cases from time to time.
The existence of novel legal theories in criminal cases means it's not always knowable precisely what is and is not illegal.
The world is shades of grey, not black-and-white, and that's especially true in criminal law.
One example of a law that isn't clear-cut and concrete, but hopefully covers enough ground that people can reasonably be expected to understand it, is IRS Publication 535, under "2. Employees' Pay."
Reasonableness is defined as:
You must be able to prove that the pay is reasonable. Whether the pay is reasonable depends on the circumstances that existed when you contracted for the services, not those that exist when reasonableness is questioned. If the pay is excessive, the excess pay is disallowed as a deduction.
Factors to consider. Determine the reasonableness of pay by the facts and circumstances. Generally, reasonable pay is the amount that a similar business would pay for the same or similar services.
To determine if pay is reasonable, also consider the following items and any other pertinent facts.
- The duties performed by the employee.
- The volume of business handled.
- The character and amount of responsibility.
- The complexities of your business.
- The amount of time required.
- The cost of living in the locality.
- The ability and achievements of the individual employee performing the service.
- The pay compared with the gross and net income of the business, as well as with distributions to shareholders if the business is a corporation.
- Your policy regarding pay for all your employees.
- The history of pay for each employee.
It is LITERALLY IMPOSSIBLE to cover all of the cases with laws. Arguing "common sense" also isn't valid, because "common sense" isn't really "commonly shared".
So, we do our best to create laws that cover bases but also give room for interpretation SO THAT WE CAN catch those people who are deliberately trying to break them.
If we said “you may not go over the speed limit” what happens if a bunch of people decide to go under, making it unsafe for other people?
They would have a basis to argue that they were not breaking the law.
So we create laws that are also guidelines.
It doesn't have to be expressed with a precise mathematical formula, but it does need to be expressed with a clear legal formula.
Otherwise, people cannot know whether or not their actions are legal, and the law becomes a fearsome weapon in the hands of its interpreters.
EDIT: I'm not claiming it's impossible for such a rule to be formulated in this case; I merely point out that clarity is absolutely necessary.
To the degree that this fact is true, the law is broken.
Stop acting like this is new.
We need to reform our broken system.
In Germany, people have some claim to the copyright of their own image, much in line with the German view that you own your private information.
Where I draw the line is when they start sharing it with third parties, especially without my consent.
Funnily enough, the thing you object to is usually the one thing explicitly spelled out in the ToS that they are allowed to do with your information.
The photographer may own the image, but the subject may also need to sign a release for the photographer to monetize.
> if I meet someone on the street, and record their name, the conversation we had, and the location where I met them, and their phone number, have I “taken their data”?
> Do they have the right to demand that I not record that information?
In some US states, yes. Many states have what is called "two-party consent": you cannot record a conversation with a device unless both parties agree.
> Or is it only if I do this at a scale beyond which humans are not capable, and store it digitally, is that what makes this illegal?
There are many different factors. However, one of the big ones I would put forth is if I don't ever interact with your service and you still have a profile on me.
My friends tag pictures on Facebook. This means that Facebook now has a profile on me that I did NOT consent to. What is MY recourse?
Your first statement would imply that the photographer has some responsibility to gain consent for the picture and its use.
Those people didn't ask me because they didn't need to. They don't know me; I don't know them; we're not going to interact.
Facebook, however, is coordinating all those pictures, their GPS coordinates, their tags, and working out everybody in them.
I did not give anyone permission for that. Yet I don't have a way to stop it.
If someone is in a public space (such as a park) and you take a photograph that happens to include them, for instance, then you're not violating their privacy. If they're in their home and you photograph them through their window, that is a violation of privacy, because there's a legal expectation of privacy in a private residence.
People are going into the digital equivalent of a park and getting upset when data collection happens there which happens to include them. What happens with that data (selling to advertisers, etc) is not really relevant to the legal privacy discussion. They gave away their data by participating in a public space.
It's sort of like if some organization running public CCTV systems decided to sell recordings of public spaces they covered to advertisers (or heck, even data extracted from those recordings using facial recognition). Creepy? Sure. But not a violation of privacy as such since there is no legal expectation of privacy in a public space. (Incidentally, I'm not sure about the legal details of this specific example, but the point is privacy is dependent on location)
The conclusion, then, is that people who care about legal protection of their privacy online should own the platform where they communicate, such as by hosting a Diaspora pod.
If you follow someone around the park constantly taking their picture and then decide to sell the ones that look the best, without either the permission of or compensation for the subject of your photos, then they have a legitimate claim against you.
People are not getting upset that they are being photographed in the park by an enthusiast of outdoor architecture and park planning; they are getting upset that the digital paparazzi are recording every footfall and selling it without permission. The idea that this data collection is not something that can or should be regulated is perverse, and thankfully the general public is coming around to this viewpoint.
A while ago I would have agreed, but as I've watched the progress of data legislation, I've come under the impression that it's flawed in at least two significant ways:
1. Companies lobby for laws that favor them. Sooner or later, they win. And then they spend those winnings to ensure they keep winning (some numbers on the top political contributions from electronics/communications companies: https://www.opensecrets.org/industries/indus.php?ind=B).
2. Enforcement is never going to result in jail time. It's going to appear as fines, serving as a mere cost of doing business which results in further entrenching existing companies against newcomers who can't afford the risks.
People right now are excited about greater legislation because they think it will divert us from the cyberpunk dystopia of megacorps owning the world. But the trend I'm picking up from the current legal battles is that they're actually hastening it by pushing for legislation which those same companies will get to shape the details of.
Thus, the solution I see is not greater legislation (which also implies greater centralization, thus more winner-takes-all for companies and governments), but greater decentralization and personal ownership. Legislation sounds good now, but in the long run it's a trap.
The real kicker here, however: if they hold private companies to this, how do they excuse the government from similar actions, and how does a law assigning this level of protection, and declaring data personal property, not affect law enforcement? In particular, your phone/email account/etc. has no rights even though your data is there.
If this does get somewhere, what if companies choose to encrypt it all, so that when it is stolen it is just encrypted data? Does the government demand the keys at all times?
There is no reasonable expectation of privacy in public.
Meanwhile, advertisements and depictions of people who have tattoos of unlicensed but copyrighted images on their person open up those who distribute media with their likeness to lawsuits by the owners of the copyrighted material.
I think there is a good argument that this is often not just. The iconic image of many events may have the victims preferring not to be that iconic representation years later. They may want their life to be known for them, not a moment long past. For the most obvious example, take the famous image of a child running during the Vietnam war. She's famous for being terrified of a napalm attack against civilians and allied forces. The photographer gets to decide if it may be reused, and he will, as it's proved to be his meal ticket. There are many less significant examples where rights resting with the photographer are inappropriate.
Rights resting with every subject would raise its own, different problems. I'm not sure how to improve the balance without unreasonably restricting the freedom to take perfectly reasonable snaps.
> if I meet someone on the street .. right to demand that I not record that information
They should, unequivocally. I can walk past a person with a clipboard taking a survey. I prefer unannounced recordings to be left to authorities and journalists.
> Does my perspective or interpretation of that information give me some ownership to that data?
Nope. If it's about me, I care not what you are trying to interpret from my eye colour, location and my presence in a shop. Just whether I have agreed to your monitoring of me, or whether you are one of a limited number of exceptions. Which seems to be the sensible starting point of GDPR.
> What if I use that information for commercial gain
See above. Makes no difference, other than hugely reducing my sympathy for its collection. I don't care if it concerns 10 or 10m if it's without informed consent (no dark patterns etc).
Also, the very concept of "stealing your data" is more ludicrous than "stealing" in the copyright sense; that data is metadata, and it's not yours: it was generated by machines you don't own and have no claim over.
I'm pretty sure that, for example, the list of grocery brands someone buys using a store loyalty card isn't "metadata", and while "stealing" is hyperbole, I'm pretty sure most people would be upset upon realizing how far and wide that information is being sold.
I think even if you have no ownership of the data and stealing is not involved, that does not give the collecting or managing party the right to sell or publicize or share that data, necessarily.
There have been studies about the value (and impact) of inferences from metadata e.g. https://www.pnas.org/content/113/20/5536.short .
(Edit: Agreeing with you, "steal" is the incorrect term in a lot of cases; however, I am not sure we can say it is not applicable in general.)
I am not sure we have an adequate legal model now to deal with it. We should probably get to developing one real soon. But roping in emotional terms from a different field, like calling it "stealing" or "robbery" or "piracy" or "stampeding cattle through the Vatican", is not very helpful. It makes it look as if it's simple (if it's stealing, stealing is already banned, just use the same laws here) but it's not, and those laws won't work. Real work is needed here, not wordplay.
A shopping list is metadata, and the issue here is data ownership. You can't own a shopping list; you can't even copyright one. If you don't want it to be associated with you, you should be able to opt out, but no one should be burned at the stake for it.
Why should it be "opted in" automatically in the first place?
In that case, they're directly monetizing data about me as a person.
There are laws against stalking in the US, which define stalking as:
engaging in a course of conduct directed at a specific person that would cause a reasonable person to:
(A) fear for his or her safety or the safety of others; or
(B) suffer substantial emotional distress.
If you succeed in proving that Facebook tracking causes you substantial emotional distress, and that would be the case for a reasonable person too, you might have a case here.
"In a line of questioning from Rep. Ben Lujan, a Democrat from New Mexico, Zuckerberg allowed that his company creates profiles on people who don’t actually use Facebook — what are sometimes referred to as “shadow profiles.”"
Interpreting and converting these raw inputs into what a user wants is literally what an app gets paid for.
Let's also make sure all future gains are made by the lawyers and other middle men so that world peace will finally be at hand.
There are countless examples of this in other industries. Some of the laws following the 2008 financial crisis are a good example: they wiped out hundreds of small banks that couldn't operate under the new rules and further solidified the positions of the 5 mega banks. So much for ending "too big to fail".
Idealism of "what should be done" meets the reality of what always historically ends up happening. If you're interested in this topic, Thomas Sowell has included a hundred examples of well-intentioned laws having the opposite effect, making problems either worse or stopping the one problem and generating far worse ones (usually after a period of time in which everyone claims the regulation a success and moves on, before the reality of the situation reveals itself).
But do you have an actual example of this happening in tech? And beyond that, a series of examples showing this to be a systemic problem? Because high-tech has long been a Wild West with little regulation, and many many firms have been built upon finding ways to dodge existing regulation and social conventions. They will likely be fine.
But one wouldn't know that from the whining and moaning of oh so many advertising fans around here.
Yup, and this is why campaign finance reform is so important.
If companies are too large to be affected by law, then the only recourse is for the government to step in and break their monopoly. A company that is unaffected by laws will also have extreme leverage in the free market and the strength to smother those small-to-medium-sized companies you claim are competitors to FB. It would seem illogical that a company would have the lawyers and influence to ignore regulations, but not be willing to use that same influence to kill competitors.
Regulation designed to rein in the heavy hitters will not harm small startups, especially ones whose innovation is built upon careful flouting of law and loophole-seeking, anyway.
They make a token concession or two, heap on the compliance costs and complications, and then enjoy a cosy relationship with the government group in charge of their would-be competitors too.
Facebook et al. are not the source of all evil; they are actually a source of financial relief to most people. They are only a problem for media companies and those who write books about how bad they are.
It's healthcare and housing costs that are the bane of everyday people; this media-fabricated tech backlash is a strategic distraction.
That's literally how muckrakers during the Progressive Era alerted the public to the depredations of big business and forced government regulation of business practices to ensure competition and free enterprise.
And how does any social media company provide "financial relief"?
So you're one of those.
Were the "muckrakers" at the time in the same business as the companies they were raking muck at? Cause that sure is the case nowadays.
> And how does any social media company provide "financial relief"?
Zero dollar cost for communication, broadcasting, entertainment, information retrieval, etc while little else is free.
Incumbents, especially in the tech industry, face a greater threat of being unseated by a startup than being unable to handle regulations. The "heavy hitters" have plenty of cash available to pay for the lawyers needed to deal with regulations, and the lobbyists needed to shape regulations to their advantage. Startups need to be careful about their budgets, and the added cost of compliance represents an entry cost that will almost certainly work against small players.
Moreover, when you regulate to the point where the big incumbents suffer economically, you are typically in a state of over-regulation. The evidence is very clear that numerous freight railroads failed in the 1960s because over-regulation prevented them from adapting to new realities; it was too difficult for the railroads to shut down unprofitable routes due to service requirements and they were required to continue paying taxes and maintenance costs on redundant infrastructure. Following deregulation (the Staggers Act) America's freight rail industry was able to reorganize and become profitable once again (and today the North American freight network is one of the most efficient systems on earth and is envied by the world). Passenger railroads are still uneconomical even in regions with high population densities that are absolutely dependent on passenger service (e.g. the northeast corridor, which is the ideal scenario and home to some of the only services that manage an operating surplus) largely because of persistent over-regulation (especially safety -- Acela trainsets are significantly heavier than comparable equipment in Europe and Asia and are more expensive to operate).
Good regulation is certainly possible, but it is the exception rather than the rule. The more typical pattern is either the economic failure of an industry (over-regulation) or regulatory capture.
In the USA. It's highly unclear whether that holds for democratic regimes.
Google thought they could afford not to, so they didn't. Now Google is starting to get hit with fines (e.g. in France), so they'll probably change their minds.
The startups that never get off the ground because the cost of compliance is prohibitive will mean less competition for Google etc in the long term.
Moreover, startups seeking capital must convince potential investors that the chance of being wiped out by a GDPR complaint is low -- on top of convincing those investors that their business model is viable, that they are entering the market at the right time, etc. Plenty of startups with great ideas never get off the ground because they cannot get the initial capital they need, or they fail to get enough capital to survive a rare negative event.
There is not much doubt that regulations raise the cost of entry to a market. The real question is whether or not it is worth it for society -- if we are willing to sacrifice a few small companies for the sake of the regulatory goal. User privacy is a fine goal, but the EU is losing the leadership it once had in the tech industry to the US and China. Where is the European answer to Google, Facebook, Tencent, or Alibaba? Where is Europe in the AI race? It is not just GDPR; the right to be forgotten, the draconian copyright rules, and so forth have all contributed to a stifling regulatory environment in Europe and a stagnant tech industry.
You implied it doesn’t matter if Google has less competition, and conveyed an unexamined assumption that the GDPR is the most reasonable and optimal way of assuring user privacy.
People who are committed to logical argumentation – and I've seen this point made often on HN – will say that the reduction in the quantity and formidability of new startups is an acceptable price to pay for improved user privacy.
It still leaves open the question of whether the GDPR is a reasonable and optimal way of achieving improved user privacy, but at least it's a logical argument.
The question of whether GDPR really is reducing startup formation and success is unclear at this stage, and it's possible it will never really be known.
This Bloomberg article from November cites research suggesting that it is, but argues that it's probably not a bad thing.
As I said, that's a fair enough position, but we all need to be clear about what our position is.
> Wagman and Zhe Jin didn’t break down their data by business model, but if companies in the data extraction business receive less funding, Europe as whole and European consumers in particular probably won’t be any worse off.
> There’s also the question of data quality; Jia, Wagman and Zhe Jin cautioned in their paper that their dataset was not complete. And indeed, according to Pitchbook, a multinational firm that tracks public and private equity investment, while venture activity in Europe dropped somewhat in the third quarter and is likely to be relatively flat for the year as a whole, the share of capital received by software companies is higher than ever before, which would suggest tech innovation isn’t exactly being stifled.
It would seem that we are at an impasse until further empirical data is collected. Perhaps an American experiment is in order?
While I don't doubt there's been many good things coming out of the release and growth of Facebook, if only for their contribution to the ecosystem, I think you might be hyping it quite a bit there.
Care to elaborate on your thought?
...one can say indeed that their contributions have been overwhelmingly positive.
Certainly only a case of scapegoating and envy comrade.
Not disagreeing here, I just don’t know what contributions you are referring to.
Are there any actual case studies and examples of tech startups being killed by regulation? Or is this a campfire story that is retold whenever the possibility of regulation is mentioned?
Similarly, regulations in favor of privacy for citizens are naturally going to result in some companies, somewhere, having to adapt, take a hit, or possibly not survive the transition. That doesn't mean we shouldn't implement those regulations, because ultimately the larger monopolistic companies pose a far greater problem than the smaller startups can solve.
So long as dumb money continues to flow, there is little to fear. When this bout of irrational exuberance does abate, tech will have bigger things to worry about than consumer protection laws.
Really though, the tech industry has not yet been subject to such significant regulations. The history of the railroad industry is filled with examples of the destructive effects of bad regulations, ultimately leading to a near collapse of the entire industry in the 1960s (a cascade of bankruptcies, especially in the northeast). The Staggers Act saved the freight industry by relaxing rules, but the passenger industry remains uneconomical and is basically quasi-state-run.
Not to mention, while some p2p tech companies were sued out of existence, others that went legit (like Napster) or toed the line (like BitTorrent) were not.
Yes, regulation will lead to some losers. But it’s questionable that consumers will be among them.
Peer-to-peer is a totally different concept of global distribution, one that challenges the entire business model that is built around copyright. If Netflix is a diesel locomotive, peer-to-peer is an automobile -- it is more than just a new way to do the same thing that we had done previously, it is an entirely new concept of how things can be done. That is why the RIAA and MPAA panicked. They understand how to negotiate with or sue a centralized distributor like Netflix or Megaupload, but their entire business model is threatened by peer-to-peer distribution.
BitTorrent is only half the promise of peer-to-peer. Yes, you are participating in distribution, but you still need a central service to help you find the torrents you want to download. Hardly anybody is working on distributed search, or good ways to deal with spam/malware/etc. that do not involve a central service of some kind. There was a time when people were talking about peer-to-peer messaging systems, but the death of peer-to-peer left us all relying on more centralized approaches.
Ironically, the death of peer-to-peer contributed to the rise of tech giants, all of which follow the same centralized model that peer-to-peer challenged. I think it is entirely possible that a peer-to-peer social networking system could have hindered the rise of Facebook. Youtube might never have been created if peer-to-peer had flourished. We may not have even been having this conversation if the talent that went into Google and Facebook had instead been devoted to peer-to-peer.
It is impossible to know. The problem with deliberately killing a technology in its infancy is that it is hard to know how the technology might have developed or what it might do for society. It is certainly possible (I would say likely) that consumers would have benefited from the growth of peer-to-peer technology.
It’s also doubtful that P2P withered away as in the narrative presented. It flourishes today under another under-regulated category: blockchain. And it has yet to see widespread mainstream adoption, or even very useful products, despite the lack of broad legislative oversight.
Blockchain is indeed another P2P application, but as you say, it is questionable as far as mainstream adoption goes (though it is likely to see use in non-consumer, business-to-business applications where the hard technical problem of identity is easier to manage). The thing about P2P filesharing is that it was very popular and was starting to enter the mainstream, and we are sitting here arguing about whether or not the UX problems were a cause or an effect. Blockchain also came after years of stagnation and missed opportunities in P2P, because the first killer app was snuffed out.
Did these P2P services even offer any major consumer benefits aside from convenient ways to pirate media? Because that’s a value proposition that could return, as the proliferation of streaming subscriptions (having to juggle multiple accounts at once to gain access to desired content) may cause some to simply kiss goodbye to streaming and return to the Pirate Bay. But even with legal challenges taking out the Groksters and the like, I fail to see how P2P for other purposes was damaged. It could’ve simply lost out because of lack of interest from both consumers (poor UX, no value add) and tech companies (who saw no interest in pursuing such tech).
An example of regulation I'm glad didn't pass: https://en.wikipedia.org/wiki/Clipper_chip
Just like the clipper chip was a tradeoff between public safety and privacy (ultimately not passed because the cost was too large), any upcoming regulations will trade something away.
As long as legislators are aware, then that's fine. However I will remark that our senators seem to be especially clueless about technology.
No it wasn't.
> ...including jail time, if their companies steal and sell user data,
> or allow a massive data breach to occur at their company.
Software is a moving target. It has become such a complex endeavor, always changing, always evolving. It's difficult to determine who's responsible for which part of the system; and by this I'm not suggesting we abolish responsibility. Good outcomes will be the result of multiple forces, balanced in the right mix:
A. Users should ask more of their favorite companies (and mean it, e.g. boycott your favorite tech company when they behave unethically)
B. Legislators should be more mindful of the legislation they propose (for one, separate data (re)selling from data breaches, and clarify what exactly is my data vs. data generated by machines, etc.)
C. Reckless tech leaders should have their reputation affected by lax security, privacy, and business practices
D. And engineers should be more aware that their craft affects the lives of millions, and maintain a high standard of quality across their work
In the grand scheme of things, we've just barely gotten off the ground with this thing called software. And software changed everything, but so did agriculture, which is 10,000 years old.
If we develop careless regulation and start throwing people in jail for software faults, we're hardly encouraging innovation.
The reason I don't think this will work is that the government and related contractors are major players in collecting, distributing, and hacking private data. And if this works, it won't be because the government will stop doing it, but because they want to control who else does.
I've been working in this industry for nearly 30 years, and it wasn't even "new" when I got my start. I can't help but laugh at people who act as though it just popped up last year. Is this just a result of young kids trying to convince themselves and others that they got in on the ground floor of something that existed well before they were born?
It's easier to delegate responsibility and enforcement rights to some higher authority, but since software is so complex, a much more powerful and robust solution in the long run is to embrace personal responsibility and agency. Complex systems evolve faster with little or no centralized control.
For example, the cryptocurrency space is already providing us with a playground where we can experiment with next-generation social systems. But like I've mentioned earlier, we're just beginning to scratch the surface, and no one truly knows what will come next. But the beauty of it is that we have the freedom to tinker and figure out what works best.
Does anyone know what context he refers to? I wasn’t aware that Facebook ever directly sold private messages. Same for addresses, interests...
> Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.
“Take Spotify for example. After signing in to your Facebook account in Spotify’s desktop app, you could then send and receive messages without ever leaving the app. Our API provided partners with access to the person’s messages in order to power this type of feature.”
Facebook did not play fast and loose with people's data or abuse their privacy. They don't sell or give away user data.
What are you trying to refute, exactly?
"In order for you to write a message to a Facebook friend from within Spotify, for instance, we needed to give Spotify “write access.” For you to be able to read messages back, we needed Spotify to have “read access.” “Delete access” meant that if you deleted a message from within Spotify, it would also delete from Facebook. No third party was reading your private messages, or writing messages to your friends without your permission. Many news stories imply we were shipping over private messages to partners, which is not correct."
It's become clear from engaging in this discussion that people aren't interested in facts or context, but have a chip on their shoulder about Facebook. Others have also been misinformed by inaccurate news stories.
I don't even use Facebook, yet it's pretty easy to understand the facts if you're actually interested in them.
"These partnerships were agreed via extensive negotiations and documentation, detailing how the third party would use the API, and what data they could and couldn’t access."
That's not how you treat people's private data. Allow the app to send messages, maybe allow the app to read replies to what it sent (did Netflix even need this at all?), don't give it full read access that relies on a pinky swear to keep data safe.
And at your earlier comment, sending a message does not inherently require that the sender be able to read anything.
"In order for you to write a message to a Facebook friend from within Spotify, for instance, we needed to give Spotify “write access.” For you to be able to read messages back, we needed Spotify to have “read access.” “Delete access” meant that if you deleted a message from within Spotify, it would also delete from Facebook."
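The least-privilege design argued for above (send access, plus at most the ability to read replies to the app's own messages) can be sketched as a scope check. This is purely illustrative; the scope names and data model are hypothetical, not Facebook's actual API:

```python
# Hypothetical sketch of least-privilege messaging scopes. All names
# (AppGrant, "messages:send", etc.) are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AppGrant:
    app: str
    scopes: set = field(default_factory=set)
    # IDs of messages this app itself sent, used to bound read access
    sent_ids: set = field(default_factory=set)

def can_send(grant: AppGrant) -> bool:
    return "messages:send" in grant.scopes

def can_read(grant: AppGrant, message_id, in_reply_to) -> bool:
    # Full-mailbox reads need the broad scope; reading a reply to a
    # message the app itself sent needs only the narrower scope.
    if "messages:read_all" in grant.scopes:
        return True
    return ("messages:read_replies" in grant.scopes
            and in_reply_to in grant.sent_ids)

spotify = AppGrant("spotify", {"messages:send", "messages:read_replies"})
spotify.sent_ids.add("m1")

assert can_send(spotify)
assert can_read(spotify, "m2", in_reply_to="m1")      # reply to own message
assert not can_read(spotify, "m3", in_reply_to=None)  # unrelated message
```

Under a model like this, an app could power a send-and-reply feature without ever holding blanket read access to the whole inbox, which is the distinction the comment above is drawing.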
You've got an axe to grind and it's tiring me out. Whatever.
Both Spotify and Netflix claim they only used access to send messages, and were unaware of broader powers. Netflix: “At no time did we access people’s private messages on Facebook, or ask for the ability to do so” Spotify: “Spotify’s integration with Facebook has always been about sharing and discovering music and podcasts. Spotify cannot read users’ private Facebook inbox messages across any of our current integrations. Previously, when users shared music from Spotify, they could add on text that was visible to Spotify. This has since been discontinued. We have no evidence that Spotify ever accessed users’ private Facebook messages.”
Note that even in the facebook statement, they don't say that the companies couldn't have accessed unrelated data. They claim that the permissions were appropriate (which they did not justify) and that none of the companies did access unrelated data.
I don't have an axe to grind. I'm pointing out that the Spotify and Netflix statements are pretty damning, and where they contradict Facebook, I trust the company saying "we did nothing wrong" less.
And nobody's voting on these posts...
I'd prepare myself for the incoming fines and regulation if I were you, instead of trying to do damage control.
Usually at best a corporation gets a fine and it just becomes a cost of doing business. Everyone involved including the government profits from the crime.
- Creating a paid "no tracking" option. Obvious failure mode: the option price is $100M, and anybody who uses it is unable to use 99% of the functionality, since it requires some form of tracking. Obvious next step - the law requiring this option to be no more than 10% more expensive than regular membership. Obvious next failure mode: inapplicable to sites that do not charge for membership. Obvious next step - creating a government commission empowered to decide what sites are supposed to charge for "no tracking" membership and which services it is supposed to cover. If you like your site subscription - you can keep your site subscription.
- Penalize large companies that submit false information in their annual privacy report. Isn't submitting false information to a government agency a crime already? And for a public company, I assume publishing almost any false report would immediately put them under the shadow of fraud charges from the SEC. So declaring something that is already a crime a crime again is supposed to... what?
- Require companies to assess their algorithms for accuracy, fairness, bias and discrimination. Obvious failure mode: who does the assessment? Obvious next step: creating a government commission empowered to approve algorithm fairness assessment standards. Obvious failure mode: since nobody knows what "fairness" is, it turns into another partisan tug of war, used as a club against companies affiliated with the opposite tribe, or just as a jumpstart for the next political campaign. Reasonable academic discussion of algorithmic bias becomes impossible, buried under layers of partisan tribal rhetoric and professional offense miners. Billions are spent annually on "bias prevention" without any shade of a solution on the horizon; on the contrary, the problem becomes worse every day, at least if you listen to the bias-prevention industry, but they're the only ones allowed to speak on the topic.
So, you either have to come up with something constructive, or someone else will.
The obvious response is that the "something" must actually improve the situation once done; merely doing something because the current situation doesn't work is not likely to make it work.
And if you're implying I have no right to criticize stupid proposals from politicians until I myself am elected into political office and make a fully formed policy proposal that solves all the problems - sorry, that's not how this works.
You are arguing that doing nothing is superior to this proposal. While you may be correct, that train has left the station: doing nothing is no longer on the table. Both the public and the politicians agree on this, so good luck changing that narrative.
> And if you're implying I have no right to criticize stupid proposals from politicians until I myself am elected into political office and make a fully formed policy proposal that solves all the problems - sorry, that's not how this works.
Sorry, but, at this point, either you come up with an alternative, or a proposed alternative is likely to get implemented. This IS already moving, so all you can do at this point is nudge the direction.
"The avalanche has already started, it is too late for the pebbles to vote."
If they don't pay for the consequences of the risks they take (such as prioritising profits over security, etc) market forces demand that they take those risks.
People talk about the market fixing things, but that only works if it's not possible to externalise costs.
Unfortunately, the only practical way to enforce that is through government regulation.
The government is also a system which seeks to externalise costs...
In other words, all they have to do is fulfill their obligation to disclose that they were hit by this 0-day in order to at least be protected from jail time.
You could argue that a single 0-day should not result in a breach (security works best as a layered defense), but a breach that traces back to just one 0-day is probably far less common.
Nah, I'm tired of these excuses too. Maybe jail is too extreme in this case, but being sloppy isn't OK. Neither is failing to support MFA when you're dealing with financial data.
Bonus points if we can write a law for something like genetic profiling, or abuse of facial recognition, or microphone eavesdropping that becomes the corporate equivalent of internet child pornography, and carries the death penalty for C level officers.
Perhaps by arguing that voice, face, and gene surveillance endangers the privacy of children because it is indiscriminate, like chemical or biological weapons; that kind of reasoning earned the upper-echelon high command life in prison and capital punishment at the Nuremberg trials.
If I were an elected official, one reason I would be very cautious about voting for this is that making a data breach a felony punishable by imprisonment would likely have a chilling effect on engineers' willingness to found new companies, since as founders they would be personally prosecutable for such failures.
I share the author's stated frustrations, and agree that jail time for gross data-related negligence would be right at least in some cases, but it's not the simple problem-simple solution issue he's making it out to be.
You seem to think that's a bad thing, for some reason.
The people in congress/senate have no idea how technology works, because the younger generations are severely underrepresented, and so are technologists.
However, if it's your data, then maybe jail time should be on the table?
It's no less valid than other ways for people to collectively pool influence to enact change (e.g., boycotts or forming other types of organizations).
Can you provide a way I can get Equifax and similar companies to stop stealing and selling my data? I've been trying to do that for decades. No. Your argument is bullshit. I didn't sign up for this shit and neither did anyone else, yet we still got fucked. How do you propose we fix it now? The cellular companies are still selling my data. Should I not own a cell phone because I can't get a cell plan that won't steal and sell my data? The ISPs are selling my data. Should I not have an Internet connection? Yes, I can get rid of those, lose my job, be homeless and starve while I wait for the idiot masses to do the same so maybe a competitor could rise up. That's your brilliant solution. And it still doesn't deal with the fact that companies I didn't sign up with are stealing and selling my data.
Do you need credit cards to survive? No, you don't. If you sign up, you get what you deserve.
While consumer credit is a useful service, Facebook is not. No one's life or business is going to be affected because they don't have FB. If you oppose their business model, simply quit and they will wither away and die.
Facebook will not be "a thing" in 10 years. If more people quit today, it could have an even shorter lifespan than that.
No one needs a social network like FB. Just stop using it and they will go away.
You’re being disingenuous by equating that with theft.
Power companies, cable companies, and telco operators are all monopolies propped up by government. An actually free market will always eliminate them when some competitor rises up with a lower cost, a better product, or better service.
There has never in history been a monopoly company that could operate in a free market without government support.
No, this is why we have regulations for cars, food, and everything else.
Free markets don't work well in most of these areas because information can't flow properly. If it did, markets might work quite well.
But they don't, and players cheat, so we need regulations.
With Facebook and other "free" services, you are the product -- not the customer. Stopping this is very simple; just don't agree to be the product anymore.
No one needs a social network. There isn't one compelling reason for anyone to join one unless you are a shareholder. You don't need the government to protect you from social networks or fix their ills. You need to delete your account and take personal responsibility for your complicity.
No, you are a supplier of a key input to the product.
Don't ask government to save you from this when you can delete your FB account by yourself and get out of the game.
There isn't one personal or commercial benefit of FB or other social networks that hasn't already been provided by other technologies.
> Your personal information [...] is the product.
I am not my personal information.
> you can delete your FB account by yourself and get out of the game.
Not really. https://www.techopedia.com/definition/29453/facebook-shadow-...
FWIW, I have a FB account, never initiate friend requests, log in rarely, post ~never. I make no claim that this is the "best" strategy by any particular metric.
Of course I had the engineers stand under the bridge while my army was marching across.
This part isn't fixable. But by having an FB account and actively using the service, you become part of the problem. FB would not have nearly the market value that they do if no one used their product. Actually, they would have zero value.
Is there something in your life that requires FB so badly that you're willing to continue being their product? No, there isn't. You can still phone your friends. You can still text your family. There are plenty of other methods of communication without FB.
FB is not a communications tool, though they pitch themselves that way. They are a data gathering and advertising tool. If you agree to be part of that, that's on you.
Here is just a partial list:
When these companies have data breaches and lose your credit data to the public, they should pay a real price. At the moment their penalty is offering you "a year's worth of credit monitoring".
There is virtually no accountability for protecting this data, yet your credit profile is used extensively for everything from getting a loan to renting an apartment, to security and background checks.
Here again, personal responsibility trumps government salvation every time.
Targeted advertising can be and is used to target people at their most vulnerable. There almost certainly is a death toll.
As I've said before and will say again, a social network is not a human necessity nor a right. Delete your account and watch these problems go away in a flash.
That's incredibly naive. Unless you're a literal hermit, other people in your social and professional circles spread your data around, as do the businesses you use. I don't have a Facebook account and never have, but I'm not gullible enough to seriously believe that means they don't have a shadow profile on me.
And what about Google, or Amazon? Sure, you can go full Ted Kaczynski and live in a shack, but short of that you're screwed.