srkmno's comments | Hacker News

Are they invested in law firms? Because that's the logical explanation for taking up this silly cause.


Um, forced arbitration is deliberately chosen because it is

a) biased in favor of the company

b) for lower-income workers, it costs much more than the potential "win" of getting back what you are owed - the typical case being stolen wages, which are often less than the employee-borne cost of arbitration.

c) it prohibits exercising the constitutional right to class-action lawsuits - combined with (b), this makes mass exploitation that is already illegal essentially free: it's just not cost-effective for victims to individually litigate their small losses.

Companies say it's purely about efficiency, but it is absolutely about making it uneconomical for employees who are victims of illegal actions to receive compensation. See the recent case where Uber is trying to avoid the arbitration it required (https://www.reuters.com/article/legal-us-otc-uber/forced-int...), and Chipotle trying to avoid forced arbitration when its employees actually invoked it: https://www.latimes.com/business/hiltzik/la-fi-hiltzik-chipo...


Hold on. I'm all for ending forced arbitration, but b here is wrong.

Costs of arbitration are borne almost entirely by the company. This makes it cheaper to bring a complaint. But it creates a conflict of interest where the company never loses: the arbitrator is paid by the company, so if the arbitrator rules against it, the company will just pick a different arbitrator next time.


No, they aren't - arbitration is hugely biased in favor of the company, to the extent that the only way you can reasonably hope to win is by hiring a lawyer. But then, to recover your money (a few grand, maybe), you're retaining a lawyer potentially for months, and because you're not actually in court, your chances of recovering your legal fees are slim.

This ignores that if you are a low-wage employee subject to this kind of contract, you are unlikely to be able to afford the initial cost of retaining a lawyer at all.

My inclination would be to allow the arbiter to be selected by the employee. If companies really do believe arbitration is fair they should have no problem with that.


Owning law firm shares is the only explanation for resisting the expansion of a privatized justice system?


Arbitration is the result of a power imbalance between labor and employers, an attempt to subvert workers' ability to seek recourse in the justice system. It is wildly disingenuous to insinuate that investment in law firms is the motivator.

Uber and Chipotle are two examples of arbitration working as intended by employers.

https://www.reuters.com/article/legal-us-otc-uber/forced-int... (Forced into arbitration, 12,500 drivers claim Uber won’t pay fees to launch cases)

https://www.latimes.com/business/hiltzik/la-fi-hiltzik-chipo... (Chipotle may have outsmarted itself by blocking thousands of employee lawsuits over wage theft)


It's not just employers. It's everything now. Just recently I went car shopping. The sale agreement requires arbitration. Cell phone contracts require it.

Come to think of it, is there anything left that doesn't require arbitration?

About the only way out seems to be to have no business dealings with the company. If an airline drops a huge chunk of blue ice through my roof, and I haven't used that airline, then I guess I can still sue them. This isn't much comfort, because nearly every situation in which I might want to sue a company would involve doing business with the company and thus being subject to an arbitration agreement.


These aren't "working as intended"; these are the companies being called out on their claim that it was about "efficiency".

In both cases it's clear that they never intended to go into arbitration, and just assumed the amount of time required vs. the reward would make it not worthwhile. The moment the bluff was called, they immediately tried to avoid the "efficient" option: Uber isn't paying the arbitration fees, and Chipotle tried to lawyer its way out of what it chose to force on its employees.


The Chipotle example is news because it is precisely not working as intended by the employer - it is preventing Chipotle from mounting a single deep-pocketed defense for all of the cases together, and forcing Chipotle to address the claims one by one in small-claims arbitration, which may well cost it more than just paying out.


Curious, isn't it? Google cancelled its forced arbitration policy so soon after we saw it going so badly for Chipotle and Uber...


Are you invested in private arbitration companies? Because that’s the logical explanation for making your silly comment.

See, I can make unfounded, uncharitable comments too.


Whatever your thoughts about arbitration, there is no denying that it's the quicker and cheaper method. What this unfortunate development accomplishes is enlarging the pool of well-to-do clients and plaintiffs for the lawyers to exploit.

On a related note: organized labor almost killed the US car industry. When economical foreign cars got popular, domestic car companies couldn't quickly make the necessary adjustments to compete with the foreign product, largely because of the inertia of union agreements.

Labor unions are adversarial to change and innovation, and especially in this era there is no place for them; hopefully automation will eradicate them completely. It's disheartening that many HN commenters would support such antiquated and inefficient agreements.


This is a strange comment and something I see a lot in online forums; I guess you could characterize it as the "slippery slope" or "worst case scenario" argument. Unions have no power in big tech right now - none of the top tech companies has anything resembling a union - and yet, despite some really terrible working conditions for "auxiliary" tech workers (ride-share drivers, fulfillment center workers, delivery app drivers) and "core" workers (ageism, anyone?), the slightest mention of unionization and/or collective bargaining brings up these types of scenarios. Unionization in tech is not going to look like it does in other industries, and I can't help but wonder why some folks shoot down these discussions so quickly with this type of argument.


The reason "auxiliary" workers get a raw deal in this country is that health insurance is provided through employers via a terrible WW2-era tax incentive.

If companies weren't on the hook for providing health insurance, they would directly employ more people instead of using contractors.

So this is why I'm worried about labor unions in tech and the private sector in general: they are never the solution, and their lack isn't the cause of any of the described problems.


The point of unions is to get workers a better deal through collective bargaining. In a company that's growing rapidly, it doesn't make sense for either the employees or the owners / management to worry too much about how the pie is getting divided up. As companies mature and growth slows, employees who aren't unionized are inevitably going to get screwed. In an industry where corporate profits can exceed $1 million per employee, I'd say we need more unionization.


So they can kill these companies faster?


Hahahahaha, I don't think half these startups need any help grinding themselves into the dirt any faster, nor do I think that unions would contribute meaningfully to extinguishing something that wasn't already in the process of doing so.

And honestly, if some hotshot startup dies because it's treating its workers like trash then good riddance. In this day and age we ought to do better than exploiting workers to the bone just because we can.


>The reason "auxiliary" workers get a raw deal...

is because they are something of a commodity. It's low-skill work that is managed in bulk and individuals can be replaced with little friction. The core IT work is in demand by companies, rather than the other way around.

I mean, your point is also true, but it's more of an additional factor rather than the main one.


Yes, they are a commodity, so either they can invest in their education and skills to make themselves more valuable or make it easier for companies to take on more of that "commodity".

A stronger safety net that would allow unemployed workers to acquire new skills is a much better social endeavor than antiquated labor unions.


> there is no denying that it's the quicker and cheaper method

I don't think anyone is disputing that. The problem is that "cheaper" and "faster" come at the cost of injustice.

> Labor unions are adversarial to change and innovation and especially in this era there is no place for them

Why not? What replaces them? The entire point of unions is to try to correct the serious power imbalance between employer and employee, an imbalance that historically (and currently) has been eagerly leveraged against employees.

What sort of protection against this would you recommend? Much stricter regulation?


1. I can't take seriously someone who writes "the cost of injustice" in this context.

2. When unions muddle the machinations of corporate management and production, ultimately shutting the whole thing down, surely they will be happy about the power balance they achieved.

In a free market, an employee's own abilities and the dynamism of the economy determine their value. So the trick is to invest in one's education and to come up with policies that keep the economy chugging along at a healthy pace; labor unions are not such a policy.


This is removing forced arbitration. Parties can still agree to settle a case via arbitration. But now it will actually be an agreement between the two parties for a particular case, as opposed to a blanket contractual obligation with unbalanced power dynamics.


Quicker and cheaper is not a good thing. Your negotiating power is based on your BATNA, or Best Alternative To a Negotiated Agreement.

With a binding arbitration agreement, the company's BATNA is paying for arbitration proceedings. Without it, their BATNA is getting sued and going through discovery. Discovery in high-profile cases will wind up resulting in a gigantic PR disaster, which means that the company settles for basically whatever damages the aggrieved employee asks for.


> Discovery in high-profile cases will wind up resulting in a gigantic PR disaster, which means that the company settles for basically whatever damages the aggrieved employee asks for.

And that's a good thing?!


For employees, yes. The company can offer a settlement for basically all the monetary claims if the employee skips discovery and agrees to not speak about the case.


> there is no denying that it's the quicker and cheaper method

Yes there is. One of the primary motivations some companies have for forcing arbitration is to make claims that would usually be covered by class actions prohibitively expensive to pursue. Recently that has backfired for Uber, with 12,500 individual actions being filed against them [0].

Also, when arbitrators actually go ahead and do things properly, with discovery and all, the costs can be greater than court, because instead of the taxpayers paying for the judge, the parties in the suit do, e.g. [1].

[0] https://www.reuters.com/article/legal-us-otc-uber/forced-int...

[1] https://www.employmentclassactionreport.com/flsa/eleventh-ci...


More wealthy clients and plaintiffs for the lawyers to exploit; somehow it always ends with the parasites on top.


If I understand this correctly the videos in question are family videos uploaded and monetized by parents, and the issue is that perverts are commenting on them, right?

Invariably, a small percentage of people are going to suck; it's the way of the world. What is YouTube or anyone to do about it? Are they now responsible for everyone who watches a publicly available video?

This brand of hysteria is the product of alarmism and sensationalism regarding everything social media. It's gotten ridiculous when we blame these platforms for all of humanity's failings and newspapers are advocating for censorship.


Pitchfork mobs dictating policy - that is why you don't elect an ex-state attorney general to the legislative branch: prosecution and grandstanding are all they know.

Also, the very concept of "stealing your data" is more ludicrous than "stealing" in the copyright sense; that data is metadata and it's not yours, it was generated by machines you don't own and have no claim over.


> that data is metadata and it's not yours, it was generated by machines you don't own and have no claim over

I'm pretty sure that, for example, the list of grocery brands someone buys using a store loyalty card isn't "metadata", and while "stealing" is hyperbole, I'm pretty sure most people would be upset upon realizing how far and wide that information is being sold.


People might be upset about a lot of things; the question is whether you have legal ownership of knowledge of the facts about you. Can you sue somebody for disclosing true information about you as "theft"? Does writing a Wikipedia article about you mean "stealing" your life facts? Does taking your photo steal your soul? I'm pretty sure the only reason to use "steal" in this context is to hopelessly confuse the matter.


Still, it is worth thinking about the effects of disclosing metadata that a legal entity has collected about another entity (here, an individual). Is it not abetting a burglar to disclose when you are away from your house and what alarm system you have? What if you do that via Wikipedia?

I think that even if you have no ownership of the data and stealing is not involved, that does not necessarily give the collecting or managing party the right to sell, publicize, or share that data.

There have been studies about the value (and impact) of inferences from metadata e.g. https://www.pnas.org/content/113/20/5536.short .

(Edit: Agreeing with you - "steal" is the incorrect term in a lot of cases; however, I am not sure we can say it is not applicable in general.)


Disclosing certain information (like your bank account password) would certainly harm you. But this is not the kind of information we're discussing here, is it? We are discussing the kind of information that is already either public or semi-public (i.e. known to some - potentially wide - circle of unknown people), but the aggregation and concentration of which may lead to knowledge about you that you'd prefer not to be public.

I am not sure we have an adequate legal model now to deal with it. We should probably get to developing one real soon. But roping in emotional terms from a different field - calling it "stealing" or "robbery" or "piracy" or "stampeding cattle through the Vatican" - is not very helpful. It makes it look as if it's simple - if it's stealing, stealing is already banned, just use the same laws here - but it's not, and those laws won't work. Real work is needed here, not wordplay.


Zealots would be upset, sure, but I'd argue most people wouldn't be.

A shopping list is metadata, and the issue here is data ownership: you can't own a shopping list; you can't even copyright one. If you don't want it to be associated with you, you should be able to opt out, but no one should be burned at the stake for it.


> you should be able to opt out

Why should it be "opted in" automatically in the first place?


Because if you really care about it, you have the option to opt out; otherwise, it's a valuable resource that should be put to use.


Let's say someone really hates Facebook. What exactly is their recourse to say 'I don't want you to keep a shadow profile, and I don't want you to sell or use it in any way, shape, or form'?

In that case, they're directly monetizing data about me as a person.


Let's say person A really hates person B. Does A have a legal right to ban B from recognizing A on the street?

There are laws against stalking in the US, which define stalking as:

engaging in a course of conduct directed at a specific person that would cause a reasonable person to:

(A) fear for his or her safety or the safety of others;

(B) suffer substantial emotional distress.

If you succeed in proving that Facebook tracking causes you substantial emotional distress, and that would be the case for a reasonable person too, you might have a case here.


I wonder if it is possible to license myself, the same way software, for example, is licensed - meaning that any company like Facebook monetizing my data without any agreement between me and them would fall under my license. I have absolutely no legal knowledge, and I have no idea what could be enforced this way. I'm also sure that nothing could be done by one person, because of the need to lawyer up to enforce anything... But if it were possible and gained enough momentum, I'm thinking it could be quite an interesting thing to do.


That person can simply choose not to use Facebook.



Really? What about non-users' shadow profiles?

"In a line of questioning from Rep. Ben Lujan, a Democrat from New Mexico, Zuckerberg allowed that his company creates profiles on people who don’t actually use Facebook — what are sometimes referred to as “shadow profiles.”"

https://slate.com/technology/2018/04/facebook-collects-data-...


I really love the simplicity of this argument. If you generate the metadata, you own it, no questions asked.


What does "you generate the metadata" mean? If an app on my phone "generates metadata" I own it under this principle. Likewise for code running on my computer.


Your phone/OS generates raw inputs (e.g. touch events in the format: posx, posy, pressure). Each app then makes use of these raw inputs in its own way. SwiftKey will give you some words. A piano app will play a sound. And so on.

Interpreting and converting these raw inputs into what a user wants is literally what an app gets paid for.
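To make that concrete, here's a minimal sketch (hypothetical event type and handlers, not any real OS API) of how the same raw input can mean completely different things depending on which app interprets it:

    from dataclasses import dataclass

    @dataclass
    class TouchEvent:
        """Raw input as an OS might deliver it (hypothetical shape)."""
        pos_x: float      # normalized 0..1 across the screen
        pos_y: float
        pressure: float

    def keyboard_app(ev: TouchEvent) -> str:
        # A keyboard app maps the x position onto a (made-up) row of keys.
        row = "qwertyuiop"
        return row[min(int(ev.pos_x * len(row)), len(row) - 1)]

    def piano_app(ev: TouchEvent) -> int:
        # A piano app maps the same x position onto a MIDI note number instead.
        return 60 + int(ev.pos_x * 12)

    ev = TouchEvent(pos_x=0.42, pos_y=0.9, pressure=0.7)
    print(keyboard_app(ev))  # 't' - a letter
    print(piano_app(ev))     # 65  - a note

The raw numbers are identical in both cases; the value lies entirely in the interpretation.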


It's a simple principle, but since it makes the world shittier, let's not adopt it.


This would enable a black market for the data, at which point good luck regulating it.


Are you saying that this doesn't already exist? There are tons of people selling passwords, profiles, credit card info, social security numbers, et cetera. Or are you trying to say that legit companies will begin doing that? Aren't they already doing that too? Facebook and Cambridge Analytica are just one example that we know about.


I’m really enjoying watching tech transform before my eyes from regulatory optimism to hardened regulatory pessimism.


Yeah, let's restrict, ban, and kill every new idea in its crib; that'll surely make the world a better place. I'm happy you're enjoying your time in this entrepreneur community.

Let's also make sure all future gains are made by the lawyers and other middlemen, so that world peace will finally be at hand.


Considering that right now companies like Facebook are very actively making the world a worse place, I would say this would be a strong move towards something better.


The rules always get written to control the big companies like FB, despite the fact that the laws affect the whole marketplace of thousands of companies of different sizes. But big companies like FB have enough lawyers and influence and never really change. Meanwhile, the laws cripple all sorts of harmless small- and medium-sized companies, including potentially better future competitors to FB. And the laws stay law for decades before they get fixed, if ever.

There are countless examples of this in other industries. Some of the laws passed after the 2008 financial crisis are a good example: they thinned the marketplace of hundreds of small banks that couldn't operate under the new rules and further solidified the positions of the 5 mega-banks. So much for ending "too big to fail".

Idealism of "what should be done" meets the reality of what historically ends up happening. If you're interested in this topic, Thomas Sowell included a hundred examples of well-intentioned laws having the opposite effect, either making problems worse or stopping the one problem and generating far worse ones (usually after a period of time when everyone claims the regulation a success and moves on, before the reality of the situation reveals itself).

https://www.amazon.com/Wealth-Poverty-Politics-Gordon-Tulloc...


> There are countless examples of this in other industries.

But do you have an actual example of this happening in tech? And beyond that, a series of examples showing this to be a systemic problem? Because high-tech has long been a Wild West with little regulation, and many many firms have been built upon finding ways to dodge existing regulation and social conventions. They will likely be fine.


Yes, GDPR. Furthermore, regulatory capture is indisputably proven.


GDPR is so far a success. In fact several countries have fined or are looking at fining the big players.

But one wouldn't know that from the whining and moaning of oh so many advertising fans around here.


> The rules always get written to control the big companies like FB, despite the fact that the laws affect the whole marketplace of thousands of companies of different sizes.

Yup, and this is why campaign finance reform is so important.


Let's say that your argument is true: That large companies are effectively immune to the law and any type of regulation we pass would simply wash right over them. I don't buy into this one bit considering my personal experience with GDPR at a larger company, but for the sake of argumentation let's follow that logic.

If companies are too large to be affected by law, then the only recourse is for the government to step in and break their monopoly. A company that is unaffected by laws will also have extreme leverage in the free market and the strength to smother those small- and medium-sized companies you claim are competitors to FB. It would seem illogical that a company would have the lawyers and influence to ignore regulations, but not be willing to use that same influence to kill competitors.


Facebook is people, and people suck. Facebook is also a scapegoat, and contrary to what a resentful media industry would like you to believe, the sum of its contributions to the world is overwhelmingly positive.


True, I think people equate news media with informative journalism, which it does embody at its finest, but at its worst it's a machine for generating fear and clicks over constant scandals. Not that there's malice, but the entire system is incentivized to do this.


Facebook is the tip of the iceberg. For every bad story about Facebook there's at least a dozen about Google, Amazon, Microsoft, Apple, Uber, AirBnB, and smaller players.

Regulation designed to rein in the heavy hitters will not harm small startups, especially ones whose innovation is built upon careful flouting of law and loophole-seeking, anyway.


But in practice, the regulations end up being written by the heavy hitters, who also capture the regulators enforcing it.

They make a token concession or two, heap on the compliance costs and complications, and then enjoy a cosy relationship with the government group in charge of their would-be competitors too.


Regulation as the result of "bad stories" - yeah, let hysteria and sensationalism lead the way.

Facebook et al. are not the source of all evil; they are actually a source of financial relief to most people. They are only a problem for media companies and those who write books about how bad they are.

It's healthcare and housing costs that are the bane of everyday people; this media-fabricated tech backlash is a strategic distraction.


> Regulation as the result of "bad stories" - yeah, let hysteria and sensationalism lead the way.

That's literally how muckrakers during the Progressive Era alerted the public to the depredations of big business and forced government regulation of business practices to ensure competition and free enterprise.

And how does any social media company provide "financial relief"?


> muckrakers during the Progressive Era

So you're one of those.

Were the "muckrakers" at the time in the same business as the companies they were raking muck at? Cause that sure is the case nowadays.

> And how does any social media company provide "financial relief"?

Zero dollar cost for communication, broadcasting, entertainment, information retrieval, etc while little else is free.


Except that in practice, regulation typically protects big incumbents from startup competitors. That is exactly the effect GDPR had in Europe:

https://techcrunch.com/2018/10/09/gdpr-has-cut-ad-trackers-i...

Incumbents, especially in the tech industry, face a greater threat of being unseated by a startup than being unable to handle regulations. The "heavy hitters" have plenty of cash available to pay for the lawyers needed to deal with regulations, and the lobbyists needed to shape regulations to their advantage. Startups need to be careful about their budgets, and the added cost of compliance represents an entry cost that will almost certainly work against small players.

Moreover, when you regulate to the point where the big incumbents suffer economically, you are typically in a state of over-regulation. The evidence is very clear that numerous freight railroads failed in the 1960s because over-regulation prevented them from adapting to new realities; it was too difficult for the railroads to shut down unprofitable routes due to service requirements and they were required to continue paying taxes and maintenance costs on redundant infrastructure. Following deregulation (the Staggers Act) America's freight rail industry was able to reorganize and become profitable once again (and today the North American freight network is one of the most efficient systems on earth and is envied by the world). Passenger railroads are still uneconomical even in regions with high population densities that are absolutely dependent on passenger service (e.g. the northeast corridor, which is the ideal scenario and home to some of the only services that manage an operating surplus) largely because of persistent over-regulation (especially safety -- Acela trainsets are significantly heavier than comparable equipment in Europe and Asia and are more expensive to operate).

Good regulation is certainly possible, but it is the exception rather than the rule. The more typical pattern is either the economic failure of an industry (over-regulation) or regulatory capture.


> Good regulation is certainly possible, but it is the exception rather than the rule. The more typical pattern is either the economic failure of an industry (over-regulation) or regulatory capture.

In the USA. It's highly unclear whether that holds in other democracies.


No, that's not what happened with the GDPR. Most small companies rightfully came to the conclusion that they can't afford to not comply, so at least they tried to.

Google thought they could afford not to, so they didn't. Now Google is starting to get hit with fines (e.g. France), so they'll probably change their minds.


Google can afford to take as long as they want to comply, and cop the fines along the way.

The startups that never get off the ground because the cost of compliance is prohibitive will mean less competition for Google etc in the long term.


I think this is nonsense - it’s only true if you believe the businesses should have existed without protecting user privacy. GDPR and such don’t require you to go out and buy any hardware, or pass through any other expensive compliance audits. PCI/DSS didn’t kill e-commerce, it just set a minimum bar for what companies SHOULD have already been doing.


Any company affected by GDPR is at risk of being fined a (relatively) large sum. Even if the compliance cost of GDPR appears low, the regulatory risk is large enough that companies have to bear the cost of legal staff to deal with that possibility. Big companies are in a much better position to pay for such things than small companies.

Moreover, startups seeking capital must convince potential investors that the chance of being wiped out by a GDPR complaint is low -- on top of convincing those investors that their business model is viable, that they are entering the market at the right time, etc. Plenty of startups with great ideas never get off the ground because they cannot get the initial capital they need, or they fail to get enough capital to survive a rare negative event.

There is not much doubt that regulations raise the cost of entry to a market. The real question is whether or not it is worth it for society -- whether we are willing to sacrifice a few small companies for the sake of the regulatory goal. User privacy is a fine goal, but the EU is losing the leadership it once had in the tech industry to the US and China. Where is the European answer to Google, Facebook, Tencent, or Alibaba? Where is Europe in the AI race? It is not just GDPR; the right to be forgotten, the draconian copyright rules, and so forth have all contributed to a stifling regulatory environment in Europe and a stagnant tech industry.


You dismissed my comment as “nonsense” but then didn’t refute anything I said.

You implied it doesn’t matter if Google has less competition, and conveyed an unexamined assumption that the GDPR is the most reasonable and optimal way of assuring user privacy.


Is the cost of compliance truly prohibitive to new entrants? Because if it isn’t, then the claim truly is nonsense.


You've just re-asked the very question my parent commenter should have addressed if they were going to dismiss my first comment, avoided addressing it yourself, then repeated the "nonsense" dismissal with the addition of an emphatic word.

People who are committed to logical argumentation – and I've seen this point made often on HN – will say that the reduction in the quantity and formidability of new startups is an acceptable price to pay for improved user privacy.

It still leaves open the question of whether the GDPR is a reasonable and optimal way of achieving improved user privacy, but at least it's a logical argument.

The question of whether GDPR really is reducing startup formation and success is unclear at this stage, and it's possible it will never really be known.

This Bloomberg article [1] from November cites research suggesting that it is, but argues that it's probably not a bad thing.

As I said, that's a fair enough position, but we all need to be clear about what our position is.

[1] https://www.bloomberg.com/opinion/articles/2018-11-14/facebo...


The study examines the amount of venture funding received in countries affected by GDPR, which seems unrelated to your statement "startups that never get off the ground because the cost of compliance is prohibitive". Investors backing off because of perceived costs of compliance does not necessarily mean compliance is all that expensive. Furthermore, it would appear that the study is incomplete.

> Wagman and Zhe Jin didn’t break down their data by business model, but if companies in the data extraction business receive less funding, Europe as whole and European consumers in particular probably won’t be any worse off.

> There’s also the question of data quality; Jia, Wagman and Zhe Jin cautioned in their paper that their dataset was not complete. And indeed, according to Pitchbook, a multinational firm that tracks public and private equity investment, while venture activity in Europe dropped somewhat in the third quarter and is likely to be relatively flat for the year as a whole, the share of capital received by software companies is higher than ever before, which would suggest tech innovation isn’t exactly being stifled.

It would seem that we are at an impasse until further empirical data is collected. Perhaps an American experiment is in order?


Here's a thought experiment for you: if it were shown that the costs and risks associated with GDPR - in its current form - were high enough to meaningfully reduce the number of startups starting and achieving success, would you still support it, as it currently exists?


As it currently exists, of course not. But as with any regulation or policy, it can be modified as befitting local conditions and times. Certainly it need not be a carbon copy of the European legislation. The devil’s in the details, after all.


"overwhelmingly positive"

While I don't doubt there's been many good things coming out of the release and growth of Facebook, if only for their contribution to the ecosystem, I think you might be hyping it quite a bit there.

Care to elaborate on your thought?


When ignoring the facilitation of election manipulation, age and race discrimination in ads, genocide, their psychological manipulation experiments, mass surveillance, targeting teens with their fake VPN, purposefully allowing kids to be preyed on by IAPs, and many others...

...one can say indeed that their contributions have been overwhelmingly positive.

Certainly only a case of scapegoating and envy comrade.


Can you elaborate on the sum of Facebook's contributions being positive?

Not disagreeing here, I just don’t know what contributions you are referring to.


It's not about Facebook; it is about some startup you have never heard of that actually is doing something positive for the world and suddenly finds its business smothered by poorly thought-out regulations. The fact that we have powerful congressmen who do not even understand Facebook's business model (and whose staff failed to explain it to them) does not give me confidence in Congress' ability to craft constructive regulation.


That's a terrifying possibility.

Are there any actual case studies and examples of tech startups being killed by regulation? Or is this a campfire story that is retold whenever the possibility of regulation is mentioned?


The inherent nature of regulations means there is always going to be a 'winner' and a 'loser'. For example, I have no doubt that regulations removing lead from gasoline resulted in lost profits and hurt businesses, but we can also believe that the societal gains were far greater than the losses.

Similarly regulations in favor of privacy for citizens is going to naturally result in some companies, somewhere, having to adapt or take a hit or possibly not survive the transition. That doesn't mean we shouldn't implement those regulations because ultimately the larger monopolistic companies pose a far greater problem than the smaller startups can solve.


I completely agree. I was referring to the GP's framing of the situation as Big Bad Regulation squashing Mom & Pop tech startups- a bogeyman of dubious existence.


How willing are you to invest in an early-stage startup whose founders could be arrested over a data breach? How much of your own money would you be willing to risk? Would you be willing to work as a founder of such a company and take on the risk of jail time? If this "bogeyman" is of "dubious existence" then your answer should not be impacted at all by the nature of the regulation or the punishment for non-compliance.


Sure, given how many questionable firms from Theranos to Juicero have been successfully funded.

So long as dumb money continues to flow, there is little to fear. When this bout of irrational exuberance does abate, tech will have bigger things to worry about than consumer protection laws.


If like me you view copyright and related laws like the DMCA as regulation, then absolutely -- all the peer-to-peer networking companies from 15 years ago were killed by regulation, not to mention companies that tried to sell circumvention tools (all killed by the DMCA).

Really though, the tech industry has not yet been subject to such significant regulations. The history of the railroad industry is filled with examples of the destructive effects of bad regulations, ultimately leading to a near collapse of the entire industry in the 1960s (a cascade of bankruptcies, especially in the northeast). The Staggers Act saved the freight industry by relaxing rules, but the passenger industry remains uneconomical and is basically quasi-state-run.


That is a fair point about the DMCA, but doesn’t the reduction of piracy caused by both the rise of new upstart streaming services like Netflix or Steam, and the entrenched major players relenting and offering their own services and allowing their properties to be streamed, refute that legislation had a dampening effect on innovation?

Not to mention, while some p2p tech companies were sued out of existence, others that went legit (like Napster) or toed the line (like BitTorrent) were not.

Yes, regulation will lead to some losers. But it’s questionable that consumers will be among them.


Netflix is basically just an incremental update to the cable TV model: one centralized distribution service that negotiates broadcasting rights. The only real difference is that people are free to choose when to watch things, and even that is just an "Internet version" of the same thing people had with their VCRs (time shifting) or rental services. It is innovation, yes, but in a box that does not really change the larger business model; by way of analogy it is like railroads switching from steam engines to diesel locomotives.

Peer-to-peer is a totally different concept of global distribution, one that challenges the entire business model that is built around copyright. If Netflix is a diesel locomotive, peer-to-peer is an automobile -- it is more than just a new way to do the same thing that we had done previously, it is an entirely new concept of how things can be done. That is why the RIAA and MPAA panicked. They understand how to negotiate with or sue a centralized distributor like Netflix or Megaupload, but their entire business model is threatened by peer-to-peer distribution.

BitTorrent is only half the promise of peer-to-peer. Yes, you are participating in distribution, but you still need a central service to help you find the torrents you want to download. Hardly anybody is working on distributed search, or good ways to deal with spam/malware/etc. that do not involve a central service of some kind. There was a time when people were talking about peer-to-peer messaging systems, but the death of peer-to-peer left us all relying on more centralized approaches.

Ironically, the death of peer-to-peer contributed to the rise of tech giants, all of which follow the same centralized model that peer-to-peer challenged. I think it is entirely possible that a peer-to-peer social networking system could have hindered the rise of Facebook. Youtube might never have been created if peer-to-peer had flourished. We may not have even been having this conversation if the talent that went into Google and Facebook had instead been devoted to peer-to-peer.

It is impossible to know. The problem with deliberately killing a technology in its infancy is that it is hard to know how the technology might have developed or what it might do for society. It is certainly possible (I would say likely) that consumers would have benefited from the growth of peer-to-peer technology.


I think you are placing too much faith in the P2P technology and overlooking the consumer side. Facebook and YouTube users don't care about what tech underlies their apps, so long as it is convenient and easy to use. Would Mastodon, had it existed in 2003, have beaten FB? It depends on whether it could have presented a better user experience. Ultimately I don't think it's the tech - nor the regulation that supposedly suppresses the tech - that truly matters in the cases you've discussed. It hinges upon the UX.

It’s also doubtful that P2P withered away as in the narrative presented. It flourishes today under another under-regulated category: blockchain. And has also yet to see widespread mainstream adoption, or even very useful products, despite the lack of broad legislative oversight.


You are right that UX was a problem for P2P systems, but there is no technical reason that the UX problems could not have been solved had serious effort been devoted to it. The problem is that the technology had become de-legitimized and it was too risky to work on. Imagine if iTunes had natively supported P2P downloads with all of Apple's UX expertise going into it -- do you doubt that Apple could have designed a good P2P UI?

Blockchain is indeed another P2P application, but as you say, it is questionable as far as mainstream adoption goes (though it is likely to see use in non-consumer, business-to-business applications where the hard technical problem of identity is easier to manage). The thing about P2P filesharing is that it was very popular and was starting to enter the mainstream, and we are sitting here arguing about whether the UX problems were a cause or an effect. Blockchain also came after years of stagnation and missed opportunities in P2P because the first killer app was snuffed out.


iTunes is not a particularly great example of Apple providing excellent UX- but that aside, I doubt that they deigned to pursue P2P because of “delegitimization” or fear of having to deal with regulation.

Did these P2P services even offer any major consumer benefits aside from convenient ways to pirate media? Because that's a value proposition that could return as the proliferation of streaming subscriptions (having to juggle multiple accounts at once to gain access to desired content) may cause some to simply kiss streaming goodbye and return to the Pirate Bay. But even with legal challenges taking out the Groksters and the like, I fail to see how P2P for other purposes was damaged. It could have simply lost out because of lack of interest from both consumers (poor UX, no value add) and tech companies (which saw no interest in pursuing such tech).


By the time those case studies exist it'll be too late. Regulations have a way of coming into existence more than going out of existence. I don't see why we can't learn the same lessons from banking or manufacturing.

An example of regulation I'm glad didn't pass: https://en.wikipedia.org/wiki/Clipper_chip


Are you saying the banking industry should be deregulated? That is not the big takeaway from history...


As someone in the tech industry I'm far too deep in conflicts of interest to make any good judgements on "should be" type questions :)

Just like the clipper chip was a tradeoff between public safety and privacy (ultimately not passed because the cost was too large), any upcoming regulations will trade something away.

As long as legislators are aware, then that's fine. However I will remark that our senators seem to be especially clueless about technology.


"it was generated by machines"

No it wasn't.


The EU has no internet "champions" so they have few reservations about burning the whole thing to the ground, and that is the lens through which all their internet regulations should be seen.


That is at least more straightforward than the US which has a similar lack of Internet champions, but plenty of Web champions. Silicon Valley essentially only cares about the Internet insofar as it's the communications conduit over which it delivers proprietary services.

We're essentially arguing sides in a commercial dispute, because we've been led to think that using Google is akin to running software under our control.


Check out the net neutrality battle in the US. Not only did many corporations have opinions about this Internet argument, but a lot of passionate individual people had opinions. Including a ton of my friends in the Silicon Valley.


> Check out the net neutrality battle in the US. Not only did many corporations have opinions about this Internet argument, but a lot of passionate individual people had opinions. Including a ton of my friends in the Silicon Valley.

How did that end up working out? What do passion and experience matter if the only real voice is a pile of money and favors in the hands of a lobbyist? The FCC is so corrupt/captured that it didn't make a bit of difference, unless you count catharsis from speaking out against it as a positive result.


I agree there are many remnants of the belief that the tech giants represent some culmination of the Internet ideal. But this is because financial success from the corporate compromise has made people forget about the tradeoffs, and sea-change realizations take time.

I suspect general age-based conservatism also makes many write off the currently appearing problems. For example, interpreting this recent censorship push as well and good, rather than as a direct dismantling of the revolution of two decades ago.

And sure, it's great that Reddit et al. did that blackout. But realistically, that is primarily marketing for their customers and employees. Most of the energy in the Net Neutrality battle is about who will be the commercial winners and losers. For instance, Netflix would have no problem with regulation that would defend its position yet, say, allow discrimination against BitTorrent traffic - this is the hazard I'm highlighting with the thinking that companies are going to stand up for individual rights.


> because we've been led to think that using Google is akin to running software under our control

I would imagine most people arguing about this have no illusions about how much Google is under their control. They're arguing for the same reason people argue about other policy questions: because it will affect them, either positively or negatively, and because it's fun.

> lack of Internet champions, but plenty of Web champions

Pedantry. A web champion is a subtype of an internet champion. The US has plenty of both.


Yes, the problem is that it will affect us - these companies are poking hornet nests and creating terrible precedents. If we were running Internet-style software to carry out these tasks, politicians would be faced with the prospect of policing individual behavior. But rather, they have a nice centralized target to attack.

It's not pedantry. It's disingenuous to portray Web giants as standing up for Internet rights when they're really just optimizing for their bottom line. While the two concepts start off aligned, what happens is that the companies compromise in a way that preserves their profits yet still destroys the freedom. This actually sets up a barrier to competing with them, called regulatory capture. Witness that it wasn't enough for Google to simply suffer the DMCA; they went beyond and created a whole slew of systems to proactively police their users' activities.


What's an example of an internet champion that's not a web one?


Internet champion implies Web champion, but not vice versa. What you're really asking for is some Internet champions - EFF, FSF(E), TOR.

The obvious difficulty is there's little money in it, as the benefits are distributed throughout society rather than captured by a centralized company. My overall point is that looking towards the tech giants to defend our own rights is folly.


I see the confusion here: "champion" in this context is an economic term meaning a leading company that's the pride of its home nation.


No, that's the way I took it. The fact of the matter is that Google isn't primarily an Internet company, but a media company that happens to deliver over the Internet. Would you call Time Magazine a "Postal Company" ?

(They do interface with several early Internet protocols such as SMTP. But that's better looked at as mining content from non-web protocols, which they'd have no intrinsic problem deprecating like XMPP)


When you think having giant corporations dictating your policy through lobbying is normal and good.


FYI: "giant corporations" are dictating this copyright law.


Which we both agree, is a bad thing.


Google and Facebook have large offices in Europe.


Spotify maybe?


Users don't directly upload content to Spotify, it all goes via the publishers and rights holders, so I wouldn't think the new rules would affect Spotify at all.


It would somewhat apply. Profile pics of users and uploaded pictures for playlist coverart.


This can't accurately be called "collusion" since it's not a secret. It's still a cartel, though, but lucky for them, EU regulators have eyes only for US companies.


There's been increasing consolidation in the automotive industry, nothing new here.

Big challenges need big investments and it makes sense to work on open standards. Car makers don't want to become the next smartphone industry, with a race to the bottom among OEMs and Google reaping all the profits.

German groups know very well that creating distrust in the industry is not a winning strategy when everyone works with the same suppliers.

Disclosure: former HERE employee; executives always insisted that staying neutral was key to long-term success and that we should welcome investments from other OEMs.

I admit that European countries have conflicting interests in the matter; however, I'd rather have them look at more pressing issues first, like measuring and enforcing emission standards. "Unfair" competition against Google and Uber... I can live with that.


You're suggesting that the law shouldn't apply to the competitors of US companies because you worked for one of them and feel sympathetic?! That says a lot about the community makeup and voting pattern on HN.


I'm not suggesting anything, and I don't feel sympathetic towards my current or previous employers, this would be quite silly.

I'm sure German car makers take part in cartel-like behavior, like all big companies. However I don't see how building an open platform and working together on setting open standards could be seen as one of them. By the very definition of open standards, competitors can adopt them.


So what? It's opt-in. Is consent not enough anymore? Does a privacy-maximalist mentality need to be imposed on everyone?


Yes, because these systems inherently violate the privacy of parties that didn't consent to the spying. This includes private individuals, and competing companies with whom the "consenting" individual interacts.

A thought experiment: imagine a corporate TOS including a clause that specifically prohibits use of devices/software that violates the provider's privacy. E.g. an end user's account can be terminated because they're using Google/FB/other "voluntary" spyware...


So how many individuals and organizations do I need to get permission from to install something on my phone?

All these privacy histrionics are supplanting all other individual rights; personal accountability has to break out of this permission loop, even legally speaking, otherwise no one would be able to do anything.


> So how many individuals and organizations do I need to get permission from to install something on my phone?

The problem isn't about installing something on your phone, it's about handing over every single private communication you have with others without getting their approval. It contradicts your first assumption that people have _opted in_.

> All these privacy histrionics are supplanting all other individual rights, personal accountability has to break out of this permission loop, even legally speaking, otherwise no one would be able to do anything.

Are you suggesting that no consent is necessary for you to put other people's private information up for sale?


Private correspondence is always like that.

If you send me snail mail, I have every right to publish what you write to me (with a few exceptions, of course).

That's why we got all that email disclaimer nonsense.


You may have the right to publish some emails when necessary. It is a whole different thing to sell every bit of private correspondence to a third party in secret. Not only is it a betrayal of trust, a quick search on the internet suggests that it may be illegal in some cases[1].

[1] https://injury.findlaw.com/torts-and-personal-injuries/invas...


This is excessively reductionist logic. There's a huge gulf between, for example, the snarky reply below a la "forwarding emails is illegal" and "an app is siphoning, en masse, all communications between its users and others".

The privacy backlash is precisely about people becoming more aware of the latter class of behavior and rebelling against it. Complaining that "no one would be able to do anything" is a straw-man without relevance to the actual social conversation going on right now.


You seem to be operating on the assumption that you're defending something other than the status quo.

(A quick example of this brand of individualism that offers the individual right, non-declinable, to be analyzed and sanctioned by the government: https://news.ycombinator.com/item?id=18704330)


And that, kids, is why forwarding emails is illegal.


> Does a privacy-maximalist mentality need to be imposed on everyone?

I see you follow Google closely; close enough to know that privacy concerns have hardly impeded its growth and dominance. Same with all other major tech companies.

Does a privacy minimalist mentality need to be imposed on everyone? (I'm asking rhetorically. In either form, it's not a substantial argument: it's a strawman. Privacy isn't a measurable quantity, and each person or community cares about protecting or revealing different things.)

Edit: Q.E.D. https://news.ycombinator.com/item?id=19039593


What does any of that mean? My point is against imposing one's POV on others, and that people are free to consent to stuff you might not like.


Consent has never been enough. It has always been about informed consent as a minimum, and free choice as the target ideal.


> Does a privacy-maximalist mentality need to be imposed on everyone?

I kinda think so, yeah. Everyone flies past the privacy stuff, or makes compromises, or is traded free Farmville points, or extra coins, or a shiny new feature in exchange for it; their friends are on it, their celebrity is on it, etc.

People will opt in for all those reasons, not realizing or seeing what they gave up or its consequences.


People don't know what they're consenting to. In reddit conversations I've seen people say that they're fine with the app because "everything private is encrypted with SSL". They don't realize the whole point of this is to get around that SSL encryption.


What if they know?

This Google app, unlike Facebook's, does not decrypt traffic.

Now tell me who doesn't know what they're consenting to.


Well if they do know, then who cares. You do you. But, at least in FB's case, they don't.


We don't allow poor people to sell one of their kidneys for cash.

I know this is not the same as privacy, but consider it when saying "is consent not enough anymore"


Scams are opt-in.


How is that a scam? Are you saying participants didn't receive their gift cards? Are the terms they agreed to false?


You were asking if "consent was not enough", as if such a thing was inconceivable. I'm just giving you one of many possible counterexamples.


There is consent and then there is informed consent.


Informed consent is enough. That's the problem though.


Man, this is interesting and complicated.

I'll submit this for consideration: you're right, it's 100% fine from an individual standpoint. But there's an aggregate effect of some sort that is a concern. Every move like this changes people's standards and expectations. Call it a "cultural shift", and call us "conservatives" along this particular axis. If we don't want to live in a world where companies pull this sort of shit on _every_ user to the point that the privacy-conscious among us have no choice, we want fewer people around us to be okay with it.

So what does that imply we should do? I'm not sure. Maybe simply push back as we are. Maybe try to impress onto opters-in just what they're selling, and maybe they'll reconsider. Maybe ask Google to simply brand this differently. Call it the "Truman Show Package". So at least everyone is aware that, while the data being collected is valuable, and while everything is 100% a-ok so long as everyone consents, this is NOT NORMAL and nobody should accept it as such.


Who exactly is the "us" and "we" in this scenario?


> If we don't want to live in a world where companies pull this sort of shit on _every_ user to the point that the privacy-conscious among us have no choice, we want fewer people around us to be okay with it.

So, people who fit that "if" clause. I assume it's a significant number of people here.


I don't understand what's so objectionable here. There is ample disclosure, it's for adults, and those who opt in receive rewards. Is this really about everyone being concerned for Apple's terms, or is it just that it's "hip" to portray tech companies in the worst possible light?


Distributing apps signed with an enterprise certificate to users is against the rules.


Yeah what's the exact text that says that?

Besides, this isn't portrayed as an Apple terms-violation story; it clearly reads as another piece of pearl-clutching privacy-panic clickbait.


> Join the Apple Developer Enterprise Program for 299 USD per year and get everything you need to start distributing proprietary in-house apps to your employees.

From the page describing the program: https://developer.apple.com/programs/enterprise/


You really want a source? That's a completely obvious violation. Otherwise you'd see lots of apps by 3rd parties being used without going through App Store approval. Come on.


This kind of program doesn't just suck up the data of the people who opted in with (theoretically) clear knowledge of what it means; it also likely pulls in data about the other people they communicate with using their phone, and none of those people consented to anything.

Even if we were to pretend that isn't the case, why is it so necessary for companies to be able to suck up so much data about people? Why shouldn't we insist that companies respect our privacy in the same way we respect each others' privacy (I'm assuming that you don't go and peek into your neighbours' windows to figure out what they've been buying recently).


This is exceedingly petty even considering that the GDPR as a whole is a tool to subject US tech firms to a degree of scrutiny and control that would suffocate other industries and extract the occasional payout.

No one benefits from this, they just get shitty UX.

