Proposed Jail Time for Tech Companies Who Steal Data (trofire.com)
261 points by badrabbit on Feb 9, 2019 | 200 comments

Is personal information really a kind of property that can be stolen? If it is, aren't we doing this all the time, any time we perceive anything about anyone?

How is this different than taking a picture of someone? The image is owned by the photographer, not the subject(s) according to current laws.

If I meet someone on the street, and record their name, the conversation we had, the location where I met them, and their phone number, have I "taken their data"?

Do they have the right to demand that I not record that information?

Does my perspective or interpretation of that information give me some ownership of that data?

What if I use that information for commercial gain? Is that what makes this illegal?

Or is it only illegal if I do this at a scale beyond human capability and store it digitally?

>Is personal information really a kind of property that can be stolen? If it is, aren't we doing this all the time, any time we perceive anything about anyone?

We are flexible and smarter than Vulcans. Something doesn't necessarily have to be expressible as a single, unambiguous, universal formula to be made illegal.

We can e.g. allow people to perceive things about other people in their brains (or even notebooks) as we've done for millennia, but not allow them to compile them into large aggregated digital databases of thousands or millions of people without their consent, or give them to advertisers.

>How is this different than taking a picture of someone? The image is owned by the photographer, not the subject(s) according to current laws.

Depending on the jurisdiction, taking an image of someone can be illegal. Sometimes, even if it's a public space. And using an image of someone to advertise stuff without their consent is illegal, including in public space.

Rule of law demands it be clear what is and is not illegal. Make it illegal to film people in public and the guy who made the Rodney King tape is going to jail.

Rule of law demands it be clear what is and is not illegal.

The fact that we have lawyers who argue over definitions and whatnot suggests that this isn't possible except through case law. Almost all written rules/laws have exceptions. It's incredibly hard to codify even something we all agree should be outlawed: murder.

In my country, it's not illegal to film people in public. But using this footage for anything but personal use without the explicit consent of every identifiable person in it is illegal and can carry jail time (it's usually just a fine + a public apology though). I can easily see similar legislation/formulation for personal data.

> Rule of law demands it be clear what is and is not illegal

While it would be nice, there's actually not really such a requirement. New legal theories are brought in criminal cases from time to time.

The existence of novel legal theories in criminal cases means it's not always knowable precisely what is and is not illegal.

The world is shades of grey, not black-and-white, and that's especially true in criminal law.

Your first sentence isn't true. I was just looking at the laws regarding applying for a seller's permit and filing the tax status for my startup (deductions, compensation, etc.).

A single example of laws that aren't clear-cut and concrete, but that hopefully cover enough ground that people can reasonably be expected to understand them, is illustrated in IRS Publication 535 under 2. Employees' Pay.

Reasonableness is defined as:

Test 1—Reasonableness

You must be able to prove that the pay is reasonable. Whether the pay is reasonable depends on the circumstances that existed when you contracted for the services, not those that exist when reasonableness is questioned. If the pay is excessive, the excess pay is disallowed as a deduction.

Factors to consider. Determine the reasonableness of pay by the facts and circumstances. Generally, reasonable pay is the amount that a similar business would pay for the same or similar services. To determine if pay is reasonable, also consider the following items and any other pertinent facts.

- The duties performed by the employee.
- The volume of business handled.
- The character and amount of responsibility.
- The complexities of your business.
- The amount of time required.
- The cost of living in the locality.
- The ability and achievements of the individual employee performing the service.
- The pay compared with the gross and net income of the business, as well as with distributions to shareholders if the business is a corporation.
- Your policy regarding pay for all your employees.
- The history of pay for each employee.


It is LITERALLY IMPOSSIBLE to cover all of the cases with laws. Arguing "common sense" also isn't valid; "common sense" isn't really "commonly shared".

So, we do our best to create laws that cover bases but also give room for interpretation SO THAT WE CAN catch those people who are deliberately trying to break them.

If we said "you may not go over the speed limit," what happens if a bunch of people decide to go under, making it unsafe for other people?

They would have a basis to argue that they were not breaking the law.

So we create laws that are also guidelines.

The actual law being proposed in this case is pretty clear-cut.

You can’t retroactively apply laws, thankfully.

I am astonished. I had no idea.

> Something doesn't necessarily have to be expressible as a single, unambiguous, universal formula to be made illegal.

It doesn't have to be expressed with a precise mathematical formula, but it does need to be expressed with a clear legal formula.

Otherwise, people cannot know whether or not their actions are legal, and the law becomes a fearsome weapon in the hands of its interpreters.


EDIT: I'm not claiming it's impossible for such a rule to be formulated in this case; I merely point out that clarity is absolutely necessary.

It's just not possible in general to know a priori the legality of 100% of your actions. Companies have entire compliance departments to attempt this and still can't always get it right. There are tons of grey areas and edge cases, so what actually happens is the relevant authorities and advocates get together after the fact to figure out the legality. I don't know how anyone would propose to write a legal code that covers 100% of circumstances and doesn't need human interpretation, but I feel like it would be akin to the problem of general AI.

> It's just not possible in general to know a priori the legality of 100% of your actions.

To the degree that this fact is true, the law is broken.

The law has been a fearsome weapon in the hands of its interpreters since 1776.

Stop acting like this is new.

We need to reform our broken system.

This analogy breaks down immediately in other countries.

In Germany, people have some claim to the copyright of their own image, much in line with the German view that you own your private information.


I'm reasonably comfortable with a company keeping records of its interactions with me, as long as they're taking proper precautions to keep it secure.

Where I draw the line is when they start sharing it with third parties, especially without my consent.

Here's the rub, though: what is the definition of "reasonable"? How about "record" or, critically, "interaction"? Does browsing a web page allow them to build a shadow profile under the guise of recording an interaction?

Funnily enough, the thing you object to is usually the one thing explicitly spelled out in the ToS that they're allowed to do with your information.

> How is this different than taking a picture of someone? The image is owned by the photographer, not the subject(s) according to current laws.

The photographer may own the image, but the subject may also need to sign a release for the photographer to monetize.

> if I meet someone on the street, and record their name, the conversation we had, and the location where I met them, and their phone number, have I “taken their data”?

> Do they have the right to demand that I not record that information?

In some US states, yes. Many states have what is called "two party consent" and you cannot record a conversation with a device unless both parties agree.

> Or is it only illegal if I do this at a scale beyond human capability and store it digitally?

There are many different factors. However, one of the big ones I would put forth is when I don't ever interact with your service and you still have a profile on me.

My friends tag pictures on Facebook. This means that Facebook now has a profile on me that I did NOT consent to. What is MY recourse?

> What is MY recourse?

Your first statement would imply that the photographer has some responsibility to gain consent for the picture and its use.

And what about the other 30 people taking selfies whose background I am in?

Those people didn't ask me because they didn't need to. They don't know me; I don't know them; we're not going to interact.

Facebook, however, is coordinating all those pictures, their GPS coordinates, their tags, and working out everybody in them.

I did not give anyone permission for that. Yet I don't have a way to stop it.

You can wear a mask

It's already illegal to take certain kinds of photos. For example, upskirt photos or bathroom/locker room photos. The victim may never know the data was captured, but a crime still occurred.

If we consider laws around privacy in meatspace, then here's the common dividing line (according to my memory of an information law class I took something like eight years ago):

If someone is in a public space (such as a park) and you take a photograph for instance that happens to include them in it, then you're not violating their privacy. If they're in their home and you photograph them through their window, that is a violation of privacy because there's a legal expectation of privacy in a private residence.

People are going into the digital equivalent of a park and getting upset when data collection happens there which happens to include them. What happens with that data (selling to advertisers, etc) is not really relevant to the legal privacy discussion. They gave away their data by participating in a public space.

It's sort of like if some organization running public CCTV systems decided to sell recordings of the public spaces they covered to advertisers (or heck, even data extracted from those recordings using facial recognition). Creepy? Sure. But not a violation of privacy as such, since there is no legal expectation of privacy in a public space. (Incidentally, I'm not sure about the legal details of this specific example, but the point is that privacy is dependent on location.)

The conclusion, then, is that people who care about legal protection of their privacy online should own the platform where they communicate, such as by hosting a Diaspora pod.

If you go into a park, take a picture of the park with a person standing in the distance, and then sell that picture, you have committed no crime.

If you follow someone around the park, constantly taking their picture, and then decide to sell the ones that look best without either the permission of or compensation for the subject of your photos, then they have a legitimate claim against you.

People are not getting upset that they are being photographed in the park by an enthusiast of outdoor architecture and park planning; they are getting upset that the digital paparazzi are recording every footfall and selling it without permission. The idea that this data collection is not something that can or should be regulated is perverse, and thankfully the general public is coming around to this viewpoint.

>The idea that this data collection is not something that can or should be regulated is perverse and thankfully the general public is coming around to this viewpoint.

A while ago I would have agreed, but as I've watched the progress of data legislation, I've come under the impression that it's flawed in at least two significant ways:

1. Companies lobby for laws that favor them. Sooner or later, they win. And then they spend those winnings to ensure they keep winning (some numbers on the top political contributions from electronics/communications companies: https://www.opensecrets.org/industries/indus.php?ind=B).

2. Enforcement is never going to result in jail time. It's going to appear as fines, serving as a mere cost of doing business which results in further entrenching existing companies against newcomers who can't afford the risks.

People right now are excited about greater legislation because they think it will divert us from the cyberpunk dystopia of megacorps owning the world. But the trend I'm picking up from the current legal battles is that they're actually hastening it by pushing for legislation which those same companies will get to shape the details of.

Thus, the solution I see is not greater legislation (which also implies greater centralization, thus more winner-takes-all for companies and governments), but greater decentralization and personal ownership. Legislation sounds good now, but in the long run it's a trap.

First off, I think the proposal is one of those "looks good on the surface but is dumb once the details come to light" ideas. Simply put, it is too vague, and borderline an election-campaign trial balloon, meaning we will see something similar on someone's platform.

However, the real kicker is this: if they hold private companies to this standard, then how do they excuse the government from similar actions, and how does a law assigning this level of protection and declaration of personal property not affect law enforcement? In particular, your phone/email account/etc. currently has no such rights even though your data is there.

If this does go somewhere, what if companies choose to encrypt it all, so that when it is stolen it is just ciphertext? Does the government demand the keys at all times?
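To make the "it's just encrypted" scenario concrete, here's a toy sketch (the XOR cipher below is illustrative only and not real encryption; an actual system would use an audited library): the database holds only ciphertext, so a thief without the key recovers nothing, and the question shifts to who holds the key.

```python
# Toy illustration of encryption at rest. XOR with a repeating key is
# NOT secure; a real system would use an audited cipher (e.g. AES-GCM).
# The point: what a thief exfiltrates is ciphertext, not customer data.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: applying it twice with the same key round-trips."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = bytes(range(1, 33))                  # held by the company (or its HSM)
record = b"alice@example.com,555-0100"     # hypothetical customer record
stored = xor_bytes(record, key)            # what actually sits in the database

assert stored != record                    # the stolen bytes are not the plaintext
assert xor_bytes(stored, key) == record    # only the key holder can recover them
```

The policy question in the comment above is then about the key, not the data: a standing government demand for `key` defeats the whole arrangement.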

> How is this different than taking a picture of someone? The image is owned by the photographer, not the subject(s) according to current laws.

There is no reasonable expectation of privacy in public.

Meanwhile, advertisements and depictions of people who have unlicensed but copyrighted images tattooed on their person open up those who distribute media with their likeness to lawsuits by the owners of the copyrighted material.

If only people all believed that getting photographed steals your soul. (Tongue only partially in cheek.)

> How is this different than taking a picture of someone? The image is owned by the photographer

I think there is a good argument that this is often not just. The people in the iconic image of an event may prefer, years later, not to be that iconic representation. They may want their life to be known for them, not for a moment long past. For the most obvious example, take the famous image of a child running during the Vietnam war [1]. She's famous for being terrified during a napalm attack against civilians and allied forces. The photographer gets to decide if it may be reused, and he will, since it's proved to be his meal ticket. There are many less significant examples where rights resting with the photographer are inappropriate.

Rights resting with every subject would raise its own, different problems. I'm not sure how to improve the balance without unreasonably restricting the freedom to take perfectly reasonable snaps.

> if I meet someone on the street .. right to demand that I not record that information

They should, unequivocally. I can walk past a person with a clipboard taking a survey. I prefer unannounced recordings to be left to authorities and journalists.

> Does my perspective or interpretation of that information give me some ownership to that data?

Nope. If it's about me, I care not what you are trying to interpret from my eye colour, location and my presence in a shop. Just whether I have agreed to your monitoring of me, or whether you are one of a limited number of exceptions. Which seems to be the sensible starting point of GDPR.

> What if I use that information for commercial gain

See above. It makes no difference, other than hugely reducing my sympathy for its collection. I don't care if it concerns 10 or 10m people if it's without informed consent (no dark patterns etc.).

[1] https://en.wikipedia.org/wiki/Phan_Thi_Kim_Phuc

Pitchfork mobs dictating policy: that is why you don't elect an ex state attorney general to the legislative branch; prosecution and grandstanding are all they know.

Also, the principal concept of "stealing your data" is more ludicrous than "stealing" in the copyright sense; that data is metadata, and it's not yours: it was generated by machines you don't own and have no claim over.

> that data is metadata, and it's not yours: it was generated by machines you don't own and have no claim over

I'm pretty sure that, for example, the list of grocery brands someone buys using a store loyalty card isn't "metadata", and while "stealing" is hyperbole, I'm pretty sure most people would be upset upon realizing how far and wide that information is being sold.

People might be upset about a lot of things; the question is whether you have legal ownership of the knowledge of facts about you. Can you sue somebody for disclosing true information about you as "theft"? Does writing a Wikipedia article about you mean "stealing" your life's facts? Does taking your photo steal your soul? I'm pretty sure the only reason to use "steal" in this context is to hopelessly confuse the matter.

Still, it is worth considering the effects of disclosing metadata that a legal entity has collected about another entity (here, an individual). Is it not abetting burglary to disclose to a burglar when you are away from your house and what alarm system you have? What if you do that via Wikipedia?

I think even if you have no ownership of the data and stealing is not involved, that does not necessarily give the collecting or managing party the right to sell, publicize, or share that data.

There have been studies about the value (and impact) of inferences from metadata e.g. https://www.pnas.org/content/113/20/5536.short .

(Edit: Agreeing with you, "steal" is the incorrect term in a lot of cases; however, I am not sure we can say it is never applicable.)

Disclosing certain information (like your banking account password) would certainly harm you. But that is not the kind of information we're discussing here, is it? We are discussing the kind of information that is already either public or semi-public (i.e. known to some, potentially wide, circle of unknown people), but whose aggregation and concentration may lead to knowledge about you that you'd prefer not to be public.

I am not sure we have an adequate legal model to deal with this now. We should probably get to developing one real soon. But roping in emotional terms from a different field, like calling it "stealing" or "robbery" or "piracy" or "stampeding cattle through the Vatican", is not very helpful. It makes it look as if it's simple: if it's stealing, stealing is already banned, just use the same laws here. But it's not, and those laws won't work. Real work is needed here, not wordplay.

Zealots would be upset, sure, but I'd argue most people wouldn't be.

A shopping list is metadata, and the issue here is data ownership. You can't own a shopping list; you can't even copyright one. If you don't want it to be associated with you, you should be able to opt out, but no one should be burned at the stake over it.

> you should be able to opt out

Why should it be "opted in" automatically in the first place?

Because if you really care about it, you have the option to opt out; otherwise it's a valuable resource that should be put to use.

Let's say someone really hates Facebook. What exactly is their recourse to say 'I don't want you to keep a shadow profile, and I don't want you to sell or use it in any way, shape, or form'?

In that case, they're directly monetizing data about me as a person.

Let's say person A really hates person B. Does A have a legal right to ban B from recognizing A on the street?

There are laws against stalking in the US, which define stalking as:

engaging in a course of conduct directed at a specific person that would cause a reasonable person to:

(A) fear for his or her safety or the safety of others;

(B) suffer substantial emotional distress.

If you succeed in proving that Facebook tracking causes you substantial emotional distress, and that would be the case for a reasonable person too, you might have a case here.

I wonder if it is possible to license myself the same way software, for example, is licensed. Meaning that any company like Facebook monetizing my data without any agreement between me and them would fall under my license. I have absolutely no legal knowledge and I have no idea what could be enforced this way. I'm also sure that nothing could be done by one person, because of the need to lawyer up to enforce anything... But if it were possible and gained enough momentum, I think it could be quite an interesting thing to do.

That person can simply choose not to use Facebook.

Really? What about non-users' shadow profiles?

"In a line of questioning from Rep. Ben Lujan, a Democrat from New Mexico, Zuckerberg allowed that his company creates profiles on people who don’t actually use Facebook — what are sometimes referred to as “shadow profiles.”"


I really love the simplicity of this argument. If you generate the metadata, you own it, no questions asked.

What does "you generate the metadata" mean? If an app on my phone "generates metadata" I own it under this principle. Likewise for code running on my computer.

Your phone/OS generates raw inputs (e.g. touch events in the format posX, posY, pressure). Each app then makes use of these raw inputs in its own way: SwiftKey will give you some words, a piano app will play a sound, and so on.

Interpreting and converting these raw inputs into what a user wants, is literally what an app gets paid for.
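A minimal sketch of that point (the event shape and both "apps" here are hypothetical): the OS hands every app the same raw stream, and each app's value lies entirely in its interpretation of it.

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    pos_x: float      # raw OS-level inputs, as described above
    pos_y: float
    pressure: float

def keyboard_interpret(events):
    """A keyboard-style app might map x-positions onto a hypothetical 3-key row."""
    keys = "abc"
    return "".join(keys[int(e.pos_x) % 3] for e in events)

def piano_interpret(events):
    """A piano-style app maps the very same events to note frequencies instead."""
    base_hz = 220.0
    return [base_hz * (1 + int(e.pos_x)) for e in events]

# Identical raw input, two entirely different products:
taps = [TouchEvent(0.5, 1.0, 0.8), TouchEvent(2.2, 1.1, 0.6)]
word = keyboard_interpret(taps)   # a fragment of text
notes = piano_interpret(taps)     # a pair of pitches in Hz
```

Same `taps`, two different outputs; the interpretation, not the raw data, is what each app is paid for.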

It's a simple principle, but since it makes the world shittier, let's not adopt it.

This would enable a black market for the data, at which point good luck regulating it.

Are you saying that this doesn't already exist? There are tons of people selling passwords, profiles, credit cards info, social security numbers et cetera. Or are you trying to say that legit companies will begin doing that? Aren't they already doing that too? Facebook and Cambridge Analytica being just one example that we know about.

I’m really enjoying watching tech transform before my eyes from regulatory optimism to hardened regulatory pessimism.

Yeah, let's restrict, ban, and kill every new idea in its crib; that'll surely make the world a better place. I'm happy you're enjoying your time in this entrepreneur community.

Let's also make sure all future gains are made by the lawyers and other middle men so that world peace will finally be at hand.

Considering right now companies like Facebook are very actively making the world a worse place I would say this would be a strong move towards something better.

The rules always get written to control the big companies like FB, despite the fact that the laws affect the whole marketplace of thousands of companies of different sizes. But big companies like FB have enough lawyers and influence that they never really change. Meanwhile, the laws cripple all sorts of harmless small-to-medium-sized companies, including potential future better competitors to FB. And the laws stay law for decades before they get fixed, if ever.

There are countless examples of this in other industries. Some of the laws following the 2008 financial crisis are a good example: they thinned out hundreds of small banks that couldn't operate under the new rules and further solidified the positions of the 5 mega banks. So much for ending "too big to fail".

Idealism about "what should be done" meets the reality of what historically always ends up happening. If you're interested in this topic, Thomas Sowell has compiled a hundred examples of well-intentioned laws having the opposite effect, making problems either worse, or solving one problem while generating far worse ones (usually after a period of time when everyone claims the regulation a success and moves on, before the reality of the situation reveals itself).


> There are countless examples of this in other industries.

But do you have an actual example of this happening in tech? And beyond that, a series of examples showing this to be a systemic problem? Because high-tech has long been a Wild West with little regulation, and many many firms have been built upon finding ways to dodge existing regulation and social conventions. They will likely be fine.

Yes, GDPR. Furthermore, regulatory capture is indisputably proven.

GDPR is so far a success. In fact several countries have fined or are looking at fining the big players.

But one wouldn't know that from the whining and moaning of oh so many advertising fans around here.

> The rules always get written to control the big companies like FB, despite the fact that the laws affect the whole marketplace of thousands of companies of different sizes.

Yup, and this is why campaign finance reform is so important.

Let's say that your argument is true: That large companies are effectively immune to the law and any type of regulation we pass would simply wash right over them. I don't buy into this one bit considering my personal experience with GDPR at a larger company, but for the sake of argumentation let's follow that logic.

If companies are too large to be affected by law, then the only recourse is for the government to step in and break their monopoly. A company that is unaffected by laws will also have extreme leverage in the free market and have the strength to smother those smaller small-medium sized companies you claim are competitors to FB. It would seem illogical that a company would have the lawyers and influence to ignore regulations, but not be willing to use that same influence to kill competitors.

Facebook is people, and people suck. Facebook is also a scapegoat, and contrary to what a resentful media industry would like you to believe, the sum of their contributions to the world is overwhelmingly positive.

True. I think people equate news media with informative journalism, which it does embody at its finest, but at its worst it's a machine for generating fear and clicks through constant scandals. Not that there's malice; the entire system is simply incentivized to do this.

Facebook is the tip of the iceberg. For every bad story about Facebook there's at least a dozen about Google, Amazon, Microsoft, Apple, Uber, AirBnB, and smaller players.

Regulation designed to rein in the heavy hitters will not harm small startups, especially ones whose innovation is built upon careful flouting of law and loophole-seeking, anyway.

But in practice, the regulations end up being written by the heavy hitters, who also capture the regulators enforcing it.

They make a token concession or two, heap on the compliance costs and complications, and then enjoy a cosy relationship with the government group in charge of their would-be competitors too.

Regulation as the result of "bad stories"? Yeah, let hysteria and sensationalism lead the way.

Facebook et al. are not the source of all evil; they are actually a source of financial relief to most people. They are only a problem to media companies and those who write books about how bad they are.

It's healthcare and housing costs that are the bane of everyday people; this media-fabricated tech backlash is a strategic distraction.

> Regulation as the result of "bad stories"? Yeah, let hysteria and sensationalism lead the way.

That's literally how muckrakers during the Progressive Era alerted the public to the depredations of big business and forced government regulation of business practices to ensure competition and free enterprise.

And how does any social media company provide "financial relief"?

> muckrakers during the Progressive Era

So you're one of those.

Were the "muckrakers" at the time in the same business as the companies they were raking muck at? Cause that sure is the case nowadays.

> And how does any social media company provide "financial relief"?

Zero dollar cost for communication, broadcasting, entertainment, information retrieval, etc while little else is free.

Except that in practice, regulation typically protects big incumbents from startup competitors. That is exactly the effect GDPR had in Europe.


Incumbents, especially in the tech industry, face a greater threat of being unseated by a startup than being unable to handle regulations. The "heavy hitters" have plenty of cash available to pay for the lawyers needed to deal with regulations, and the lobbyists needed to shape regulations to their advantage. Startups need to be careful about their budgets, and the added cost of compliance represents an entry cost that will almost certainly work against small players.

Moreover, when you regulate to the point where the big incumbents suffer economically, you are typically in a state of over-regulation. The evidence is very clear that numerous freight railroads failed in the 1960s because over-regulation prevented them from adapting to new realities; it was too difficult for the railroads to shut down unprofitable routes due to service requirements and they were required to continue paying taxes and maintenance costs on redundant infrastructure. Following deregulation (the Staggers Act) America's freight rail industry was able to reorganize and become profitable once again (and today the North American freight network is one of the most efficient systems on earth and is envied by the world). Passenger railroads are still uneconomical even in regions with high population densities that are absolutely dependent on passenger service (e.g. the northeast corridor, which is the ideal scenario and home to some of the only services that manage an operating surplus) largely because of persistent over-regulation (especially safety -- Acela trainsets are significantly heavier than comparable equipment in Europe and Asia and are more expensive to operate).

Good regulation is certainly possible, but it is the exception rather than the rule. The more typical pattern is either the economic failure of an industry (over-regulation) or regulatory capture.

> Good regulation is certainly possible, but it is the exception rather than the rule. The more typical pattern is either the economic failure of an industry (over-regulation) or regulatory capture.

In the USA. It's highly unclear whether that holds for democratic regimes.

No, that's not what happened with the GDPR. Most small companies rightfully came to the conclusion that they can't afford to not comply, so at least they tried to.

Google thought they could afford not to, so they didn't. Now Google is starting to get hit with fines (e.g. France), so they'll probably change their minds.

Google can afford to take as long as they want to comply, and cop the fines along the way.

The startups that never get off the ground because the cost of compliance is prohibitive will mean less competition for Google etc in the long term.

I think this is nonsense - it’s only true if you believe the businesses should have existed without protecting user privacy. GDPR and such don’t require you to go out and buy any hardware, or pass through any other expensive compliance audits. PCI/DSS didn’t kill e-commerce, it just set a minimum bar for what companies SHOULD have already been doing.

Any company affected by GDPR is at risk of being fined a (relatively) large sum. Even if the compliance cost of GDPR appears low, the regulatory risk is large enough that companies have to bear the cost of legal staff to deal with that possibility. Big companies are in a much better position to pay for such things than small companies.

Moreover, startups seeking capital must convince potential investors that the chance of being wiped out by a GDPR complaint is low -- on top of convincing those investors that their business model is viable, that they are entering the market at the right time, etc. Plenty of startups with great ideas never get off the ground because they cannot get the initial capital they need, or they fail to get enough capital to survive a rare negative event.

There is not much doubt that regulations raise the cost of entry to a market. The real question is whether or not it is worth it for society -- if we are willing to sacrifice a few small companies for the sake of the regulatory goal. User privacy is a fine goal, but the EU is losing the leadership it once had in the tech industry to the US and China. Where is the European answer to Google, Facebook, Tencent, or Alibaba? Where is Europe in the AI race? It is not just GDPR; the right to be forgotten, the draconian copyright rules, and so forth have all contributed to a stifling regulatory environment in Europe and a stagnant tech industry.

You dismissed my comment as “nonsense” but then didn’t refute anything I said.

You implied it doesn’t matter if Google has less competition, and conveyed an unexamined assumption that the GDPR is the most reasonable and optimal way of assuring user privacy.

Is the cost of compliance truly prohibitive to new entrants? Because if it isn’t, then the claim truly is nonsense.

You've just re-asked the very question my parent commenter should have addressed if they were going to dismiss my first comment, avoided addressing it yourself, then repeated the "nonsense" dismissal with the addition of an emphatic word.

People who are committed to logical argumentation – and I've seen this point made often on HN – will say that the reduction in the quantity and formidability of new startups is an acceptable price to pay for improved user privacy.

It still leaves open the question of whether the GDPR is a reasonable and optimal way of achieving improved user privacy, but at least it's a logical argument.

The question of whether GDPR really is reducing startup formation and success is unclear at this stage, and it's possible it will never really be known.

This Bloomberg article [1] from November cites research suggesting that it is, but argues that it's probably not a bad thing.

As I said, that's a fair enough position, but we all need to be clear about what our position is.

[1] https://www.bloomberg.com/opinion/articles/2018-11-14/facebo...

The study is examining the amount of venture funding received in countries affected by GDPR, which seems unrelated to your statement "startups that never get off the ground because the cost of compliance is prohibitive". Investors backing off because of perceived costs of compliance does not necessarily mean compliance is all that expensive. Furthermore, it would appear that the study is incomplete.

> Wagman and Zhe Jin didn’t break down their data by business model, but if companies in the data extraction business receive less funding, Europe as whole and European consumers in particular probably won’t be any worse off.

> There’s also the question of data quality; Jia, Wagman and Zhe Jin cautioned in their paper that their dataset was not complete. And indeed, according to Pitchbook, a multinational firm that tracks public and private equity investment, while venture activity in Europe dropped somewhat in the third quarter and is likely to be relatively flat for the year as a whole, the share of capital received by software companies is higher than ever before, which would suggest tech innovation isn’t exactly being stifled.

It would seem that we are at an impasse until further empirical data is collected. Perhaps an American experiment is in order?

Here's a thought experiment for you: if it were shown that the costs and risks associated with GDPR - in its current form - were high enough to meaningfully reduce the number of startups starting and achieving success, would you still support it, as it currently exists?

As it currently exists, of course not. But as with any regulation or policy, it can be modified as befitting local conditions and times. Certainly it need not be a carbon copy of the European legislation. The devil’s in the details, after all.

"overwhelmingly positive"

While I don't doubt there's been many good things coming out of the release and growth of Facebook, if only for their contribution to the ecosystem, I think you might be hyping it quite a bit there.

Care to elaborate on your thought?

When ignoring the facilitation of election manipulation, age and race discrimination in ads, genocide and their psychological manipulation experiments and the mass surveillance and them targeting teens with their fake VPN and them purposefully allowing kids to be preyed on by IAPs and many others...

...one can say indeed that their contributions have been overwhelmingly positive.

Certainly only a case of scapegoating and envy comrade.

Can you elaborate on the sum of Facebook's contributions being positive?

Not disagreeing here, I just don’t know what contributions you are referring to.

It's not about Facebook, it is about some startup you have never heard of that actually is doing something positive for the world and suddenly finds its business smothered by poorly thought out regulations. Considering that we have powerful congressmen who do not even understand Facebook's business model (and whose staff failed to explain it to them) does not give me confidence in Congress' ability to craft constructive regulation.

That's a terrifying possibility.

Are there any actual case studies and examples of tech startups being killed by regulation? Or is this a campfire story that is retold whenever the possibility of regulation is mentioned.

The inherent nature of regulations mean there is always going to be a 'winner' and a 'loser'. For example I have no doubts that regulations removing lead from gasoline resulted in lost profits and hurt businesses, but we can also believe that the societal gains were far greater than the losses.

Similarly regulations in favor of privacy for citizens is going to naturally result in some companies, somewhere, having to adapt or take a hit or possibly not survive the transition. That doesn't mean we shouldn't implement those regulations because ultimately the larger monopolistic companies pose a far greater problem than the smaller startups can solve.

I completely agree. I was referring to the GP's framing of the situation as Big Bad Regulation squashing Mom & Pop tech startups- a bogeyman of dubious existence.

How willing are you to invest in an early-stage startup whose founders could be arrested over a data breach? How much of your own money would you be willing to risk? Would you be willing to work as a founder of such a company and take on the risk of jail time? If this "bogeyman" is of "dubious existence" then your answer should not be impacted at all by the nature of the regulation or the punishment for non-compliance.

Sure, given how many questionable firms from Theranos to Juicero have been successfully funded.

So long as dumb money continues to flow, there is little to fear. When this bout of irrational exuberance does abate, tech will have bigger things to worry about than consumer protection laws.

If like me you view copyright and related laws like the DMCA as regulation, then absolutely -- all the peer-to-peer networking companies from 15 years ago were killed by regulation, not to mention companies that tried to sell circumvention tools (all killed by the DMCA).

Really though, the tech industry has not yet been subject to such significant regulations. The history of the railroad industry is filled with examples of the destructive effects of bad regulations, ultimately leading to a near collapse of the entire industry in the 1960s (a cascade of bankruptcies, especially in the northeast). The Staggers Act saved the freight industry by relaxing rules, but the passenger industry remains uneconomical and is basically quasi-state-run.

That is a fair point about the DMCA, but doesn't the reduction in piracy, driven both by the rise of new upstart streaming services like Netflix or Steam and by the entrenched major players relenting, offering their own services and allowing their properties to be streamed, refute the claim that legislation had a dampening effect on innovation?

Not to mention, while some p2p tech companies were sued out of existence, others that went legit (like Napster) or toed the line (like BitTorrent) were not.

Yes, regulation will lead to some losers. But it’s questionable that consumers will be among them.

Netflix is basically just an incremental update to the cable TV model: one centralized distribution service that negotiates broadcasting rights. The only real difference is that people are free to choose when to watch things, and even that is just an "Internet version" of the same thing people had with their VCRs (time shifting) or rental services. It is innovation, yes, but in a box that does not really change the larger business model; by way of analogy it is like railroads switching from steam engines to diesel locomotives.

Peer-to-peer is a totally different concept of global distribution, one that challenges the entire business model that is built around copyright. If Netflix is a diesel locomotive, peer-to-peer is an automobile -- it is more than just a new way to do the same thing that we had done previously, it is an entirely new concept of how things can be done. That is why the RIAA and MPAA panicked. They understand how to negotiate with or sue a centralized distributor like Netflix or Megaupload, but their entire business model is threatened by peer-to-peer distribution.

BitTorrent is only half the promise of peer-to-peer. Yes, you are participating in distribution, but you still need a central service to help you find the torrents you want to download. Hardly anybody is working on distributed search, or good ways to deal with spam/malware/etc. that do not involve a central service of some kind. There was a time when people were talking about peer-to-peer messaging systems, but the death of peer-to-peer left us all relying on more centralized approaches.

Ironically, the death of peer-to-peer contributed to the rise of tech giants, all of which follow the same centralized model that peer-to-peer challenged. I think it is entirely possible that a peer-to-peer social networking system could have hindered the rise of Facebook. Youtube might never have been created if peer-to-peer had flourished. We may not have even been having this conversation if the talent that went into Google and Facebook had instead been devoted to peer-to-peer.

It is impossible to know. The problem with deliberately killing a technology in its infancy is that it is hard to know how the technology might have developed or what it might do for society. It is certainly possible (I would say likely) that consumers would have benefited from the growth of peer-to-peer technology.

I think you are placing too much faith in the P2P technology and overlooking the consumer side. Facebook and YouTube users don't care about what tech is underlying their apps, so long as it is convenient and easy to use. Would Mastodon, had it existed in 2003, have beaten FB? Depends if they could have presented a better user experience. Ultimately I don't think it's the tech- nor regulation that supposedly suppresses the tech- that truly matters in the cases you've discussed. It hinges upon the UX.

It’s also doubtful that P2P withered away as in the narrative presented. It flourishes today under another under-regulated category: blockchain. And has also yet to see widespread mainstream adoption, or even very useful products, despite the lack of broad legislative oversight.

You are right that UX was a problem for P2P systems, but there is no technical reason that the UX problems could not have been solved had serious effort been devoted to it. The problem is that the technology had become de-legitimized and it was too risky to work on. Imagine if iTunes had natively supported P2P downloads with all of Apple's UX expertise going into it -- do you doubt that Apple could have designed a good P2P UI?

Blockchain is indeed another P2P application, but as you say, it is questionable as far as mainstream adoption goes (though it is likely to see use in non-consumer, business-to-business applications where the hard technical problem of identity is easier to manage). The thing about P2P filesharing is that it was very popular and was starting to enter the mainstream, and we are sitting here arguing about whether or not the UX problems were a cause or an effect. Blockchain also came after years of stagnation and missed opportunities in P2P because the first killer app was snuffed out.

iTunes is not a particularly great example of Apple providing excellent UX- but that aside, I doubt that they deigned to pursue P2P because of “delegitimization” or fear of having to deal with regulation.

Did these P2P services even offer any major consumer benefits aside from convenient ways to pirate media? Because that's a value proposition that could return as the proliferation of streaming subscriptions (having to juggle multiple accounts at once to gain access to desired content) may cause some to simply kiss streaming goodbye and return to the Pirate Bay. But even with legal challenges taking out the Groksters and the like, I fail to see how P2P for other purposes was damaged. They could've simply lost out because of lack of interest from both consumers (poor UX, no value add) and tech companies (saw no interest in pursuing such tech).

By the time those case studies exist it'll be too late. Regulations have a way of coming into existence more than going out of existence. I don't see why we can't learn the same lessons from banking or manufacturing.

An example of regulation I'm glad didn't pass: https://en.wikipedia.org/wiki/Clipper_chip

Are you saying the banking industry should be deregulated? That is not the big takeaway from history...

As someone in the tech industry I'm far too deep in conflicts of interest to make any good judgements on "should be" type questions :)

Just like the clipper chip was a tradeoff between public safety and privacy (ultimately not passed because the cost was too large), any upcoming regulations will trade something away.

As long as legislators are aware, then that's fine. However I will remark that our senators seem to be especially clueless about technology.

"it was generated by machines"

No it wasn't.

  > ...including jail time, if their companies steal and sell user data, 
  > or allow a massive data breach to occur at their company.
What if the government loses our data? Then what? Will they go to jail too?

Software is a moving target. It has become such a complex endeavor, always changing, always evolving. It's difficult to determine who's responsible for which part of the system; and by this I'm not suggesting we abolish responsibility. Good outcomes will be the result of multiple forces, balanced in the right mix:

A. Users should ask more of their favorite companies (and mean it, e.g. boycott your favorite tech company when they behave unethically)

B. Legislators should be more mindful of the legislation they propose (for one, separate data (re)selling from data breaches, and define what exactly is my data vs data generated by machines, etc)

C. Reckless tech leaders should have their reputation affected by lax security, privacy & business practices

D. And engineers should be more aware their craft affects the lives of millions and maintain a high standard of quality across their work

In the grand scheme of things, we've just barely gotten off the ground with this thing called software. And software changed everything, but so did agriculture, which is 10,000 years old.

If we develop careless regulation and start throwing people in jail for software faults, we're hardly encouraging innovation.

"loses"? What about collecting, and even forcing private companies to give them your data without telling you? What was disclosed in 2013 is still running.

The reason I don't think this will work is because the government and related contractors are major players in collecting, distributing and hacking private data. And if this works, it won't be because the government will stop doing it, but because they want to control who else does.

How many more decades does the "tech" industry get to continue using the "you can't regulate us because we're new and stuff, also innovation or something" line for?

I've been working in this industry for nearly 30 years, and it wasn't even "new" when I got my start. I can't help but laugh at people who act as though it just popped up last year. Is this just a result of young kids trying to convince themselves and others that they got in on the ground floor of something that existed well before they were born?

Software is unlike any industry we've had before. Software is now the craft of developing extensions of our minds, and it is evolving rapidly.

It's easier to delegate responsibility and enforcement rights to some higher authority, but since software is so complex, a much more powerful and robust solution in the long run is to embrace personal responsibility and agency. Complex systems evolve faster with limited or no centralized control.

For example, the cryptocurrency space is already providing us with a playground where we can experiment with next-generation social systems. But like I've mentioned earlier, we're just beginning to scratch the surface and no one truly knows what will come next. But the beauty of it is that we have the freedom to tinker, and figure out what works best.

Sounds like an excuse to behave without accountability or responsibility.

Of course not. The government is never held accountable.

Yes, but they obviously can't all fit at once so they will have to take turns serving it, just like how all employees of companies are also collectively sentenced.

Seriously though, what sort of question is that, don't you have rule of law in your country?

“It’s one thing to just be bad at your job, ‘oops, we didn’t notice,’ but it’s another to be a company like Facebook that takes your private messages and sells that data. It takes your address, it takes your interests, it takes your browsing history and sells that to people without your permission.”

Does anyone know what context he refers to? I wasn’t aware that Facebook ever directly sold private messages. Same for address, interest...

They don't sell your private messages. What they actually do is just give them away to business partners.


> Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.

No, they didn’t “give it away”. Please let’s stick to facts. The news is being extremely dishonest. Apps that you could send FB messages through could see the message (how else would it work?) And it was explicitly opt-in, users had to authorize it.

“Take Spotify for example. After signing in to your Facebook account in Spotify’s desktop app, you could then send and receive messages without ever leaving the app. Our API provided partners with access to the person’s messages in order to power this type of feature.”


Read that quote again. It only says opt-in was needed to use this "feature", not for data access. And that does not even include the other parties in those messages.

“Did partners get access to messages? Yes. But people had to explicitly sign in to Facebook first to use a partner’s messaging feature,” Konstantinos Papamiltiadis, director of developer platforms and programs at Facebook, wrote in the blog post.

“Take Spotify for example. After signing in to your Facebook account in Spotify’s desktop app, you could then send and receive messages without ever leaving the app. Our API provided partners with access to the person’s messages in order to power this type of feature.”

Facebook did not play fast and loose with peoples data or abuse their privacy. They don’t sell or give away user data.

So all they had to do was sign in, and those programs got access to private messages.

What are you trying to refute, exactly?

I'm refuting that they gave away user data, which is factually false. Here's Facebook's explanation: https://newsroom.fb.com/news/2018/12/facebooks-messaging-par...

"In order for you to write a message to a Facebook friend from within Spotify, for instance, we needed to give Spotify “write access.” For you to be able to read messages back, we needed Spotify to have “read access.” “Delete access” meant that if you deleted a message from within Spotify, it would also delete from Facebook. No third party was reading your private messages, or writing messages to your friends without your permission. Many news stories imply we were shipping over private messages to partners, which is not correct."

It's become clear from engaging in this discussion that people aren't interested in facts or context, but have a chip on their shoulder about Facebook. Others have also been misinformed by inaccurate news stories.

I don't even use Facebook, yet it's pretty easy to understand the facts if you're actually interested in them.

The claim wasn't that they opened a TCP connection to the partners and forced private data over the line. The claim was that they gave away access to partners that didn't need it, or even know about it.

"These partnerships were agreed via extensive negotiations and documentation, detailing how the third party would use the API, and what data they could and couldn’t access."

That's not how you treat people's private data. Allow the app to send messages, maybe allow the app to read replies to what it sent (did Netflix even need this at all?), don't give it full read access that relies on a pinky swear to keep data safe.

And at your earlier comment, sending a message does not inherently require that the sender be able to read anything.
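
The least-privilege design the comment above argues for can be sketched in a few lines. This is purely illustrative: the class, scope names, and methods are invented for the example and are not Facebook's actual API. The point is that "send" can be granted without any read access, and "read replies to your own messages" is a much narrower grant than "read the whole inbox".

```python
class ScopeError(Exception):
    pass

class MessagingAPI:
    """Hypothetical partner-facing messaging API with per-action scopes."""

    def __init__(self, granted_scopes):
        self.granted = set(granted_scopes)
        self.sent_ids = set()   # ids of messages this partner itself sent
        self.inbox = {}         # message_id -> (in_reply_to_id, text)

    def _require(self, scope):
        if scope not in self.granted:
            raise ScopeError(f"scope '{scope}' not granted")

    def send_message(self, message_id, text):
        # Sending requires no read access at all.
        self._require("send_message")
        self.sent_ids.add(message_id)

    def read_message(self, message_id):
        in_reply_to, text = self.inbox[message_id]
        if in_reply_to in self.sent_ids:
            # A reply to the partner's own message: narrow scope suffices.
            self._require("read_replies")
        else:
            # Anything else needs the full-inbox scope the commenter
            # argues partners should never have been given.
            self._require("read_all")
        return text

# A partner granted only send + read-replies:
api = MessagingAPI(["send_message", "read_replies"])
api.send_message("m1", "check out this song")
api.inbox["m2"] = ("m1", "nice track!")   # a reply to m1
api.inbox["m3"] = (None, "private chat")  # an unrelated private message

print(api.read_message("m2"))  # allowed: reply to the partner's own message
try:
    api.read_message("m3")     # denied: no full-inbox scope
except ScopeError as e:
    print(e)
```

Under this split, a Spotify-style "send a song and see the reply" feature works without the partner ever being able to enumerate unrelated private messages, which is exactly the distinction the comment draws.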

The linked Facebook article includes a screenshot of the feature in Spotify allowing people to send and receive Facebook messages.

"In order for you to write a message to a Facebook friend from within Spotify, for instance, we needed to give Spotify “write access.” For you to be able to read messages back, we needed Spotify to have “read access.” “Delete access” meant that if you deleted a message from within Spotify, it would also delete from Facebook."

You've got an axe to grind and it's tiring me out. Whatever.

Spotify still has a feature to share music through facebook, and that current feature doesn't require the ability to read messages. So that screenshot that only shows a "send recommendation" feature doesn't prove anything at all. No non-recommendation text is displayed on that screenshot.

Both Spotify and Netflix claim they only used access to send messages, and were unaware of broader powers. Netflix: “At no time did we access people’s private messages on Facebook, or ask for the ability to do so” Spotify: “Spotify’s integration with Facebook has always been about sharing and discovering music and podcasts. Spotify cannot read users’ private Facebook inbox messages across any of our current integrations. Previously, when users shared music from Spotify, they could add on text that was visible to Spotify. This has since been discontinued. We have no evidence that Spotify ever accessed users’ private Facebook messages.”

Note that even in the facebook statement, they don't say that the companies couldn't have accessed unrelated data. They claim that the permissions were appropriate (which they did not justify) and that none of the companies did access unrelated data.

I don't have an axe to grind, I'm pointing out that the spotify and netflix statements are pretty condemning and in a contradiction between them and facebook I trust the company saying "we did nothing wrong" less.

And nobody's voting on these posts...

That still says they had access anyway, but you had to opt in to use the feature. Work on your reading comprehension!

They don’t. It’s just part of the privacy and Facebook hysteria being pushed by the rage-machine that is the news cycle now.

When people start realising they've been getting proper fucked for more than a decade by the SV surveillance capitalism machine (led by Google and Facebook) they will get a bit hysterical.

I'd prepare myself for the incoming fines and regulation if I were you, instead of trying to do damage control.

If a human would have gone to jail for the crime the company should have its charter revoked for at least an equal amount of time. Even the giants.

Usually at best a corporation gets a fine and it just becomes a cost of doing business. Everyone involved including the government profits from the crime.

Looking at the actual bill, it proposes:

- Creating a paid "no tracking" option. Obvious failure mode: the option price is $100M, and anybody who uses it is unable to use 99% of the functionality, since it requires some form of tracking. Obvious next step - the law requiring this option to be no more than 10% more expensive than regular membership. Obvious next failure mode: inapplicable to sites that do not charge for membership. Obvious next step - creating a government commission empowered to decide what sites are supposed to charge for "no tracking" membership and which services it is supposed to cover. If you like your site subscription - you can keep your site subscription.

- Penalize large companies that submit false information in their annual privacy report - though submitting false information to any government agency is a crime anyway? And for a public company, I assume publishing almost any false report would immediately put them under the shadow of fraud charges from the SEC. So declaring something that is already a crime a crime again is supposed to... what?

- Require companies to assess their algorithms for accuracy, fairness, bias and discrimination. Obvious failure mode: who does the assessment? Obvious next step: creating a government commission empowered to approve algorithm fairness assessment standards. Obvious failure mode: since nobody knows what "fairness" is, it turns into another partisan tug of war, to be used as a club against companies affiliated with the opposite tribe or just representing a good jumpstart to the next political campaign. Reasonable academic discussion of algorithmic bias becomes impossible, buried under layers of partisan tribal rhetoric and professional offense miners. Billions are spent annually on "bias prevention", without any shade of solution on the horizon; on the contrary, the problem becomes worse every day - at least if you listen to the bias prevention industry, but they're the only ones who are allowed to speak on the topic.

It's easy to complain, but the problem is that status quo ISN'T WORKING. That's no longer on the table.

So, you either have to come up with something constructive, or someone else will.

That's called "politician's fallacy" - "something needs to be done! This is something, therefore this needs to be done!".

Obvious comment on it is that "something" must improve the situation after being done, merely doing something that doesn't work because current situation doesn't work is not likely to make it work.

And if you're implying I have no right to criticize stupid proposals from politicians before I myself am elected into political office and make a fully-formed policy proposal that solves all the problems - sorry, it's not how this thing works.

> That's called "politician's fallacy" - "something needs to be done! This is something, therefore this needs to be done!".

You are arguing that doing nothing is superior to this proposal. While you may be correct, that train has left the station; doing nothing is no longer on the table. Both the public and the politicians are in agreement on this, so, good luck changing that narrative.

> And if you're implying I have to right to criticize stupid proposals from politicians before I myself am elected into political office and make a full-formed policy proposal that solves all the problems - sorry, it's not how this thing works.

Sorry, but, at this point, either you come up with an alternative, or the current proposal is likely to get implemented. This IS already moving, so all you can do at this point is nudge the direction.

"The avalanche has already started, it is too late for the pebbles to vote."

Don't mistake motion for progress

The problem, as always, is externalisation of costs.

If they don't pay for the consequences of the risks they take (such as prioritising profits over security, etc) market forces demand that they take those risks.

People talk about the market fixing things, but that only works if it's not possible to externalise costs.

Unfortunately, the only practical way to enforce that is through government regulation.

The government is also a system which seeks to externalise costs...

What if a data breach happens due to a 0-day exploit in a third-party library? Do people from the company where the data breach happened still go to jail then?

as far as I can tell, this bill would only allow jail time if there was a serious breach, at a large company, and higher-ups left it out of an official report they would be required to make to the government.

in other words, all they have to do is fulfill their obligation to disclose that they were hit by this 0-day, in order to at least be protected from jail time.

Which seems like a reasonable policy IMO. It's encouraging companies to be forthcoming with their data breaches .... or else.

In law, the term “negligence” usually plays an important role. “What ifs” have already been played out historically, which is case law.

My guess is the prudent man principle applies - did they have a reasonable plan to remediate once the 0-day was known? How would peers in the industry have been affected? If it's truly a 0-day, and they reported the breach as required, I don't see it being likely.

You could argue a single 0-day should not result in a breach (security is best as a layered defense), but that’s probably far less likely to find.


Double the term: once for the breach, and again for poor security review & architecture.

> it’s one thing to just be bad at your job and you leak a bunch of data and oops, you didn’t notice that a hacker was in your system like at Equifax for months.

Nah, I'm tired of these excuses too. Maybe jail is too extreme in this case, but being sloppy isn't ok. Nor is not supporting MFA when you're dealing with financial data.

Wyden seems like he's been on top of the ball recently.

Wyden is one of the few people in Congress who seems to make a real effort to address difficult issues in balanced and reasonable way.

S.420 is another good example of Wyden being on top of the ball recently.

I was going to laugh at how stupid this sounds, but, then again, it would be cool to see corporate overlords suffering a comparable level of punishment for the equivalent of peer-to-peer MP3 file sharing.

Bonus points if we can write a law for something like genetic profiling, or abuse of facial recognition, or microphone eavesdropping that becomes the corporate equivalent of internet child pornography, and carries the death penalty for C level officers.

Perhaps by way of arguing that voice, face and gene surveillance endangers the privacy of children because it is indiscriminate, like chemical or biological weapons: so life in prison and capital punishment for the upper-echelon high command, as at the Nuremberg trials.

> There is no reason why any person on Capitol Hill should vote against this.

If I were an elected official, one reason I would be very cautious about voting for this is that making a data breach a felony punishable by imprisonment is likely to have a chilling effect on engineers' willingness to start new companies, since as founders they would be personally prosecutable for such failures.

I share the author's stated frustrations, and agree that jail time for gross data-related negligence would be right at least in some cases, but it's not the simple problem-simple solution issue he's making it out to be.

> it is likely to have a somewhat chilling effect on the likelihood of engineers to start new companies where they as founders would be potentially prosecutable for such failures

You seem to think that's a bad thing, for some reason.

Perhaps some of us browsing HN, a site by a venture capital firm in the bay area, might want to consider the effects of legislation on tech startups.

Please. This forum has more people saying they just want a 9-5 job than actual founders. It’s no surprise it’s anti-startup. It’s just /r/programming+technology now.

Well, less competition is generally considered to be a bad thing.

Maybe the result will not be less competition but rather better competition, as the entrants will be founders who have actually done their due diligence and research and are prepared to act responsibly in compliance with the public interest.

A chilling effect on the likelihood of engineers to start new companies whose business model is the collection and monetization of user data, to be a bit more specific.

Putting people in jail is the wrong solution. You're going to put CEOs in jail because some angry employee at the company decided to add some backdoors, or someone in middle management made the wrong call. Either way, the CEO and high level execs can't constantly monitor every part of a world scale business. Anyone in technology also knows it's impossible to become invulnerable to breaches.

The people in Congress and the Senate have no idea how technology works, because the younger generations are severely underrepresented, and so are technologists.

Most of the responses so far seem to concern themselves with law and policy rather than technology. It might be a bit more on-point to complain that the people in technology have no idea how laws or legislation works, perhaps because the older generations are severely underrepresented, as are lawyers/legislators.

Laws are usually clear about criminal intent. If it was done without the CEO's knowledge, the CEO would not be liable.

The great irony here is that people have no issue stealing data -- copyright theft is considered "normal" by many people.

However, if it's your data, then maybe jail time should be on the table?

Yes, because the potential damage is much more severe. If someone pirates a movie, the damage is about the price of a movie ticket. If someone's private data gets stolen, it can ruin their entire life.

It's not irony. It's you being willfully blind to the difference between "this has copyright" and "this is personal data". Those two aren't even similar!

Quit expecting governments to save you. The easiest fix for this problem is to stop using these products! No matter how much bad news comes out about Facebook, Apple, Google, and Amazon using your data in bad ways -- people still keep using them. If enough people quit, this will stop or a competitor will rise up who doesn't do this stuff.

Governments are one lever through which society enacts changes upon itself. They are made up of the people themselves, after all.

It's no less valid than other ways for people to collectively pool influence to enact change (i.e. boycotts or forming other types of organizations).

Governments are not made up of people; they're made of politicians. The most powerful dynamic in the marketplace is purchasing power. If you don't like something, quit using it and the company will eventually fold (most likely) or morph into something people want (far less likely).


Can you provide a way I can get Equifax and similar companies to stop stealing and selling my data? I've been trying to do that for decades. No. Your argument is bullshit. I didn't sign up for this shit and neither did anyone else, yet we still got fucked. How do you propose we fix it now? The cellular companies are still selling my data. Should I not own a cell phone because I can't get a cell plan that won't steal and sell my data? The ISPs are selling my data. Should I not have an Internet connection? Yes, I can get rid of those, lose my job, be homeless and starve while I wait for the idiot masses to do the same so maybe a competitor could rise up. That's your brilliant solution. And it still doesn't deal with the fact that companies I didn't sign up with are stealing and selling my data.

And actually, you did sign up for it. You might not have read it, but every credit app you sign gives them permission to do exactly that.

Do you need credit cards to survive? No, you don't. If you sign up, you get what you deserve.

While consumer credit is a useful service, Facebook is not. No one's life or business is going to be affected because they don't have FB. If you oppose their business model, simply quit and they will wither away and die.

Facebook will not be "a thing" in 10 years. If more people quit today, it could have a shorter lifespan than that.

You can get out of the credit game. You can use a cell phone without a Google account, or buy one of the new open source ones coming to market.

No one needs a social network like FB. Just stop using it and they will go away.

Equifax is a credit reporting agency. So, for example, someone gets a loan from a bank, then fails to pay it and the bank reports that to them. Other lenders check with them before lending to people.

You’re being disingenuous by equating that with theft.

Not patching servers for months and leaving them for attackers to exploit is definitely stealing. It doesn't matter how they got the data, they let others steal it and were therefore complicit in the theft itself due to negligence, willful or not.

Yeah, they didn’t steal your data. They were negligent with storing data on you given to them by lenders, and if you can demonstrate you were actually damaged by their negligence you can join the class action that’s happening. You should have got an email about it. I know I did.

For the Equifax breach, most of the people affected didn’t have any direct relationship with Equifax. So this advice wouldn’t apply in that case.

"Let the invisible hand of the market fix it" is not something that has a history of actually working on near-monopoly actors.

Monopolies have to be propped-up by governments. No monopoly can survive in a truly free market. Never has; never will.

Power companies, cable companies, and telco operators are all monopolies propped up by government. An actually free market will always eliminate them when some competitor rises up with a lower cost, a better product, or better service.

There has never in history been a monopoly company that could operate in a free market without government support.

How do you even know who does bad stuff?

Well, see, you just have to thoroughly investigate holding companies, paywalled court filings, online security breach notices, and legal documentation conveniently available on the other side of the country in a locked filing cabinet in the unlit second subbasement, and do it every time you buy groceries, pay for gas, receive a shipped gift, or spend time in the vicinity of anyone who takes photos with the Facebook app. I don't see what makes it so difficult.

"Quit expecting governments to save you. The easiest fix for this problem is to stop using these products! "

No, this is why we have regulations for cars, food, and everything else.

Free markets don't work well in most of these areas because information can't flow properly. If it did, markets might work quite well.

But they don't, and players cheat, so we need regulations.

It might surprise you to learn that governments actually support monopolies and encourage them. A monopoly cannot exist in a true free market and has never done so in history.

Or if we all get together and decide a company shouldn't behave in a certain way, that's another option. We call that government regulation when everyone gets together to do that. You can think of it as voting with your money, just without the money part.

Governments have never consisted of "everyone getting together." That's a myth that smacks of socialism. The best way to get the products and services you want (and wipe out the ones you don't) is to vote with your pocketbook.

With Facebook and other "free" services, you are the product -- not the customer. Stopping this is very simple; just don't agree to be the product anymore.

No one needs a social network. There isn't one compelling reason for anyone to join one unless you are a shareholder. You don't need the government to protect you from social networks or fix their ills. You need to delete your account and take personal responsibility for your complicity.

> With Facebook and other "free" services, you are the product

No, you are a supplier of a key input to the product.

No, sir. You are the product to be bought and sold on the free market and that's how they treat you. FB's software is free and therefore not the product. Your personal information is what's commercially valuable and therefore is the product. Google operates on the same model and you can quit that, too, if you'd like. If you don't, that's on you.

Don't ask government to save you from this when you can delete your FB account by yourself and get out of the game.

There isn't one personal or commercial benefit to FB or other social networks which has not already been solved by other technologies.

> You are the product [...]

> Your personal information [...] is the product.

I am not my personal information.

> you can delete your FB account by yourself and get out of the game.

Not really. https://www.techopedia.com/definition/29453/facebook-shadow-...

So you still have an FB account because "it doesn't make any difference?" Is that your argument?

I wasn't making any argument here about what anyone should or shouldn't do. I was simply pointing out that it's not the case that deleting your FB account gets you "out of the game" when it comes to being tracked. That's not the same thing as "it doesn't make any difference".

FWIW, I have a FB account, never initiate friend requests, log in rarely, post ~never. I make no claim that this is the "best" strategy by any particular metric.

Exactly! For example I have hired a food taster to try my food, and I'm currently looking for someone willing to be a guinea pig next time I interview a bunch of surgeons for a job.

Of course I had the engineers stand under the bridge while my army was marching across.

Facebook collects data on people that never even use or touch or go near facebook. How exactly do you stop using their product?

Lots of companies that you've never used collect data on you. Look at the amount of junk mail you receive from people you've never done business with, and all of them have data on you.

This part isn't fixable. But by having an FB account and actively using the service, you become part of the problem. FB would not have nearly the market value that they do if no one used their product. Actually, they would have zero value.

Is there something in your life that requires FB so badly that you're willing to continue being their product? No, there isn't. You can still phone your friends. You can still text your family. There are plenty of other methods of communication without FB.

FB is not a communications tool, though they pitch themselves that way. They are a data gathering and advertising tool. If you agree to be part of that, that's on you.

You can't stop using Equifax, Experian, or Trans Union. While it is likely you can stop using Target, there are plenty of other retailers who have been the subject of data breaches.

Here is just a partial list:


When these companies have data breaches and lose your credit data to the public, they should pay a real price. At the moment their penalty is offering you "a year's worth of credit monitoring".

There is virtually no accountability for protecting this data, yet your credit profile is used extensively for everything from getting a loan, to renting an apartment, to security and background checks.

Stop borrowing money on credit cards. Live within your means and it won't matter what any credit card company thinks of you.

Here again, personal responsibility trumps government salvation every time.

This isn't going to happen while the problems are still theoretical. Few people care if companies have their data as long as it isn't used against them.

These companies collect that data specifically to use it against you for their own financial gain.

Most people don’t know about these issues or don’t care. Also how do you know what company uses your data in bad ways? There are probably plenty of others who do shady stuff you never hear about. We need privacy laws so products that are on the market can be safely used. Otherwise let’s abolish food safety rules and tell people just not to eat at places where food with salmonella gets sold.

I think comparing food safety to data safety is comparing unlike types and levels of safety. No one has ever died from targeted advertising.

>No one has ever died from targeted advertising.


Targeted advertising can be and is used to target people at their most vulnerable. There almost certainly is a death toll.

Yes I'm sure targeted advertising is the worst thing that can happen from mass profiling.

It's the same thing in the sense that the customer can't really tell upfront which service is safe.

If you have a Facebook account, you know they're using your data in bad ways. If you don't know that already, that's on you.

As I've said before and will say again, a social network is not a human necessity nor a right. Delete your account and watch these problems go away in a flash.
That's incredibly naive. Unless you're a literal hermit, other people in your social and professional circles spread your data around, as do the businesses you use. I don't have a Facebook account and never have, but I'm not gullible enough to seriously believe that means they don't have a shadow profile on me.

And what about Google, or Amazon? Sure, you can go full Ted Kaczynski and live in a shack, but short of that you're screwed.

There are plenty of other bad players you don't know about. We need enforceable rules so people know that a company is following certain standards.

So you're not willing to quit FB on your own? You want the government to make it ok for you? Is that what you're saying?

I don't use Facebook. I use other services where I have no idea what they are doing behind the scenes. For example there is a good chance your name was involved in the Equifax leak. How do you quit Equifax? Go cash only?

I too wish I had the resources to go live in the woods and not deal with the modern world.

We should also demand for more control over what data our browsers send out.

About fucking time. Until legislation like this passes, nothing will change. I'm glad at least one of our senators has the balls to actually hold CEOs and companies accountable for their atrocious actions. This behavior should be criminal and this should be just the start.

It doesn't make sense to send petty thieves to jail and hard labor, yet reward CEOs who cause millions or billions of dollars of damage to our society with gigantic resignation packages. These CEOs should languish in jail and find out for themselves what it's like to work for pennies a day. I'd bet any amount of money that once CEOs are actually held responsible for their reprehensible actions, things will change in regards to security and other overlooked practices. Once a company's profit and the CEO's resignation package can be clawed back with fines, these CEOs will think twice.

Perhaps as a society we may rethink the idea of limited (in reality that just means nonexistent) liability corporations, where even the officers who have the power of life or death over millions face no repercussions when they inevitably abuse and misuse that power to hurt and even kill people. That will probably take a lot longer, however. Until then, the best way to commit a crime and get away with it (including murder, poisoning, and other atrocious crimes) is still being a company executive.

I love how we understand the value of intellectual property, but when we talk about private personal information, we somehow can't.

It's strange how some people think that gossip and intellectual property are somehow the same thing. You have no right to be paid when people talk about you.

But I think everyone understands that it's shitty to be on the receiving end of gossip, especially if the rumors are untrue or can be used against you.

Certainly! But when you abstract it away as "personal information" then you also abstract away why sharing it is sometimes harmful.

If you are interested in a text of the bill, one of its drafts is available here: https://www.wyden.senate.gov/imo/media/doc/Wyden%20Privacy%2...

A tough interpretation of the existing US Computer Fraud and Abuse Act, which has criminal penalties, could do much of that right now. See the "exceeds authorized access" provisions. Any slip-up in asking for permission could place a company stealing your data in serious legal jeopardy.

I am not a lawyer, but vague laws are unconstitutional, so your interpretation is highly unlikely to pass scrutiny.

It's not "stealing" because the data isn't being taken away. It's "copying".
