We Can't Just Assume that Facebook Will Do Its Best (zeit.de)
220 points by Quanttek 21 days ago | 91 comments



"If an algorithm doesn't work, then responsibility lies with the person who deployed the software. Real people cannot become laboratory rats for the testing of an algorithm."

I'm wondering what the HN opinions are on this view. My guess is that the argument will be that this will stifle innovation. But Facebook has such clout that algorithms have a very direct effect on people, up to even the security of a state. I hear Facebook was a major contributor to the ethnic cleansing of the Rohingya. Is algorithm deployment subject to (government) audit? Or are there alternatives to ensure the algorithm does nothing bad?


We tend to judge people by their intentions rather than their results. E.g. personal bugbear: drivers who kill pedestrians or cyclists are largely not punished for it. More abstractly, governments and organisations pushed for adoption of diesel cars out of concern for CO2 emissions; turns out that has killed many people via air pollution. We don't generally hold those organisations culpable for those deaths.

I don't think "algorithms" changes this at all. People who have bad intentions are guilty. People who ignore or bypass established safety protocols are reckless. But people who made a good-faith effort to do something legitimate and turned out to be mistaken? There's no real precedent for punishing those, and it doesn't seem fair to make a special exception just because technology is involved.


> But people who made a good-faith effort to do something legitimate and turned out to be mistaken? There's no real precedent for punishing those

Yes there is: if they failed to thoroughly consider the consequences of their actions, we would call it negligence.

In law this is called "mens rea", or "guilty mind", and it exists in a hierarchy: Accidental, Negligent, Reckless, Knowingly, Intentionally, and we punish people differently based on which level of guilt they have.

There's another axis that courts consider when deciding charges and sentences called "actus reus", or "guilty act", and it's supposed to represent how responsible the person is for the harm rendered. This also exists on a spectrum, from "proximate cause" (you did the thing that immediately caused the harm) to "coincidence" (you stabbed a voodoo doll and the person got hurt, with no chain of causality linking the two events).

I would argue that Facebook engineers were negligent with respect to, like, the fake news fiasco, though probably not very responsible, being enablers rather than actors. Same with the racial loan targeting scandal.

The depression experiment, I would call reckless and a proximate cause of whatever harm was done. There is no way in hell that a secret experiment to make people depressed by manipulating their news feeds would pass an IRB, and Facebook should have been punished for it.


When your actions have an impact on the world it's meaningless to make a distinction between negligence and criminal intent. The victims of the Whatsapp riots in India or of the Facebook-driven pogroms in Myanmar don't care if Facebook was the actor or the enabler. These people are dead, they don't have the luxury of caring.


This argument would call for an end to the distinction between manslaughter and homicide, as an example.

It’s a great example of why we don’t allow victims, or relatives of victims, to mete out their own justice. Because they don’t care about distinctions that we, as a society, have decided are important.


> This argument would call for an end to the distinction between manslaughter and homicide, as an example.

Corporations should probably not enjoy that distinction in the same way that people do. The sooner we dispense with the harmful fiction of corporate personhood the better. Either way, let’s not conflate people and the law, with corporations and the law.


> drivers who kill pedestrians or cyclists are largely not punished for it

You might want to qualify this with the part of the world you're speaking about (especially given the article is on a German site). For instance, in the Netherlands, given a collision between a motorized vehicle and a non-motorized one (pedestrian or cyclist), the motorized vehicle is automatically at fault (for liability and insurance purposes etc.).


Everybody knows that the Netherlands has a strong distinction between manslaughter (or whatever they call it there) and murder. Intent matters; that is the point that was being communicated.


Audits won't change much if the algorithm is learning from new data, which is all the rage right now. You'd have to inspect the data, and think about all possible cases. Innovation doesn't have to be stifled by regulation, but ad revenue possibly will be, with weaker targeting accuracy. Is that really so bad for the consumer? Maybe I will finally start seeing ads that broaden my interests, instead of ones too similar to what I already own and know to interest me? Or maybe each product should have an ad-free paid version?


It makes you wonder what happened. Back when biotech and genetic engineering were about to become hugely impactful, Paul Berg and colleagues organized the Asilomar Conference, and the impact of that meeting is still felt in the field. Now we have deep learning on Google-sized natural language corpora, and there isn't a word on its impacts on society.


I can't say I agree that there isn't a word. There have been plenty of conversations about it, resulting in efforts like https://www.partnershiponai.org/

How honest those efforts are is, in my opinion, worth questioning, but they are talking about it.


> I hear Facebook was a major contributor to the ethnic cleansing of the Rohingya

Yeah, people claim that. But does the claim hold water? I think Facebook was just used because it was a convenient way to communicate, and if they hadn't used Facebook they would have used any of the thousands of other ways to communicate.


Except that it's not a "claim", it's fact. In Myanmar, as in many other developing economies, Facebook simply "is" the internet via its Free Basics program.

Facebook was zero-rated with the state-owned PTT, Myanma Posts and Telecommunications. The majority of people cannot afford the kinds of data plans people in the West take for granted. For many, their "internet" is largely confined to Facebook's walled garden. So no, they don't have "thousands of other ways to communicate". Practically speaking, the "convenient" way also happens to be the only way.

Maybe do a bit of research on the issue before summarily dismissing it.


It is curious how Free Basics is largely overlooked in discussions about Facebook's role as a publisher or platform with respect to speech and access.

When it's the only way to access the internet in an affordable manner, Facebook becomes the de facto authority in control of what readers have access to.

Dangerous.


Indeed, and FB's internet.org/Free Basics has amassed 100 million users in developing countries at this point.

Curiously FB's Free Basics pulled out of Myanmar recently:

https://theoutline.com/post/4383/facebook-quietly-ended-free...


Your argument is that Facebook was the de facto communications medium in the region, so it’s a fact that they contributed?

Did Boeing contribute to 9/11?


Here, Facebook was a medium designed for maximum dispersion of information among groups of people--remember the pious crusade, "connecting the world"?--which, when carried out to the point that waves of toxic, destructive information can propagate across that medium, is just an insane thing to unleash upon the world without some really rigorous safety measures. The problem is that Facebook intentionally optimized for virality and completely ignored, repeatedly and aggressively, critiques that not all information dissemination is inherently good.

I don't know whether that is corporate negligence, but we should consider it so. It is detrimental to humanity, and they have continued to ignore the immense responsibility the world now finds them holding. If they cannot very quickly act as responsible stewards of this immense power, they should not be entrusted with it. That is how government has always worked, and guess what! Making infrastructure public so that it can be regulated, monitored and managed for the public good turns out to be a good thing. FB is social infrastructure and needs to either accept and act on its deep responsibilities or cease trying to be social infrastructure.


Yes they were in fact complicit. They provided and subsidized a platform and then didn't bother to enforce their own community standards on that platform.

Perhaps read the BSR Myanmar report[1] and then read FB's own blog post where they agree with many of those findings in the BSR report[2].

[1] https://fbnewsroomus.files.wordpress.com/2018/11/bsr-faceboo...

[2] https://newsroom.fb.com/news/2018/11/myanmar-hria/

Your comparison of Boeing and 9/11 is beyond absurd, even as a strawman.


That's an odd comparison to make; Boeing did not exert control over the passengers, crew, or flight path. Facebook exerts control over membership, access and exposure.


I think letting governments have power to control which algorithms get deployed is a bad thing. Instead, individuals need to cultivate a better intuition on how and why granting a large amount of power to a single entity is detrimental for them in the long term. They also need to develop tools and heuristics that lessen the probability of this happening.

Power needs to be distributed as much as possible, so that no single entity can do a large amount of damage and have a large amount of leverage. In other words, Facebook only has as much power as the people using it let them have. The people need to become acutely aware of this and act accordingly.


No. We have elected representatives to study these questions in depth. You can't ask every citizen to be an expert in ethics, genetics, medicine, computer sciences, virology, etc. so they can make the obvious/good/true/rational choice.

That's why societies have laws. So we can know what to do or not when faced with such questions instead of passing the burden to the individual under the guise that it's "freeing people".


>No. We have elected representatives to study these questions in depth.

Except the elected representatives are dumb as rocks. I'm sure they studied how the cookie law would work really in depth before voting on it. That's why the cookie law worked out so perfectly, right? Oh wait, it didn't do ANYTHING other than cost companies millions of euros and wasted the time of the general public. It had ZERO benefits and a CS student could tell you that the way it was designed it could never have any benefits. I guess the only benefit of the law is to show that lawmakers continue to be incompetent at technology.

>That's why societies have laws. So we can know what to do or not when faced with such questions instead of passing the burden to the individual under the guise that it's "freeing people".

Societies have laws so that the regular person thinks that everyone plays according to the same rules, but that's not true in practice. Laws are intentionally written to be vague and not consistently enforced. This has led to a situation where there are so many laws that everyone breaks several every year and as a result if somebody higher up in the chain doesn't like what you're doing they can use the state to harass you with.

If laws were truly about knowing what to do then it would be impossible to graduate from mandatory education without having studied every single law that will apply to you in most circumstances. Furthermore, vague laws wouldn't be written and would get removed from legal codes. Furthermore, laws that are only selectively enforced would be removed, because laws can only be just if they apply to everyone consistently.


> That's why societies have laws. So we can know what to do or not when faced with such questions instead of passing the burden to the individual under the guise that it's "freeing people".

Yes, but laws can be devastating just as easily. In an age where abuse and misuse of the ever-growing body of laws is so rampant, I cannot bring myself to think that having even more badly written laws is a good solution.

Legislation is at best a necessary evil: baggage, a clunky, heavy stone which you have to lug with you all day, every day. It serves a purpose, but you want to have as little of it as possible.


I get where you’re coming from, but it seems any proposal that requires every single person to change to be protected from some harm is destined to failure, or would at least be incomplete and take decades to establish itself. Judging by how populism is tearing our societies apart, we may not have that time.

Indeed, the problem of coordinated action is so prevalent in any society that we have developed a host of tools to tackle it: morality and religion ("don't eat pork"), heuristics ("no free lunch"), and, yes, government (SEC, FAA, FDA, ...)

The adjacency to free speech may increase the downside risk. But if the alternative is between government intervention and what is essentially “before flying, check that the pilot does not appear drunk”, the former seems to be the obvious choice. If the industry or the public wish to avoid that fate, they would have to come up with something in the middle: self-restraint (ha), or possibly a private certification scheme like MPAA.

It’s also notable that we do have examples very close to the issue of free speech that did not go down the slippery slope, namely public-airwaves television (where regulation is somewhat silly, but in no way restricts viewpoints, other than on sex being a healthy activity), and the aforementioned MPAA.


> I get where you’re coming from, but it seems any proposal that requires every single person to change to be protected from some harm is destined to failure, or would at least be incomplete and take decades to establish itself. Judging by how populism is tearing our societies apart, we may not have that time.

Perhaps we may not have the time, but I'm unconvinced it is so. There is also no need for every single person to change, just a critical mass and not necessarily drastically either. I propose that even some awareness of this issue in the majority would be enough.

It is also not unprecedented that the overall stance of society changes on some issue drastically over time. I'm thinking of ideas such as the abolition of slavery, women's suffrage, the acceptance of non-white people as equal members of society (yes, the change is not complete, but it is drastically different compared to a hundred years ago), the awareness of sexually transmitted diseases and so forth.

The awareness of the danger of letting any single corporation have too much power seems like a good addition to this class of ideas.


>If the industry or the public wish to avoid that fate, they would have to come up with something in the middle: self-restraint (ha), or possibly a private certification scheme like MPAA.

Sounds to me like you've accepted government tyranny as being the default. You also make it sound as though government intervention fixes all the problems - it doesn't. You STILL have to check whether the pilot is drunk before the flight and sometimes they still are drunk during the flight even with all of the regulations in the industry.


Sounds to me like you have a problem with nuance: meat inspectors != tyranny, if only because they are widely accepted, and created & overseen by democratically elected governments.

And either you never fly or frequently lie: there’s no chance they let you examine the pilot as a normal passenger on a commercial flight. Plus, being drunk was obviously just an example. Are you going to subject them to a three-day practical examination? And how are you going to provide tested pilots conducting those exams, without being lost in the recursion of your little populist fantasy?


The whole point of having a government is to deduplicate the work that each individual would otherwise have to do. Instead of constantly having to second-guess whether the company you're dealing with has malicious intentions or is acting negligently (something which you might not be able to notice without specialized knowledge), you can rely on the government having set a standard of legal behavior that the company is incentivized to follow. Any attempt to decentralize power that also decentralizes the work required to exercise that power is going to fail in the majority of cases.

That doesn't mean you can only choose between "anything goes" and "here's a list of government-approved algorithms, everything else is illegal". But it does mean that instead of individuals developing tools and heuristics to protect themselves, the government should mandate the creation of such tools.

That's essentially what GDPR is about. Instead of having to find out for yourself how your data is being used, the company needs to tell you. Instead of having to write a crawler yourself to get your data back out, the company has to provide it to you. And so on.

The power of making the final decision still remains with the user, but the government's job is to ensure that the decision is as easy as possible.


> That doesn't mean you can only choose between "anything goes" and "here's a list of government-approved algorithms, everything else is illegal". But it does mean that instead of individuals developing tools and heuristics to protect themselves, the government should mandate the creation of such tools.

I agree with almost everything you wrote in your comment (including this quoted paragraph), but still maintain it is paramount that power be decentralized as much as possible. Similarly, that doesn't mean the only choices are between "power is completely decentralized such that no entity has greater power than any other" and "power lies solely within one central entity". Instead, the power of any government, corporation or organization should be kept bounded, with strict, irremovable limitations. The exact method to achieve this goal remains somewhat elusive, but I think the largest prosperity happens in those instances when it was achieved.

The trouble with solely relying on governments is scope creep. Seemingly inevitably the government eventually gravitates towards regulating more and more, until it is effectively prescribing an allowed list of algorithms, burdening the whole ecosystem in the process. You mention GDPR, but in order for the argument to stay balanced, in the same breath you should also mention Articles 11 and 13 of the EU Copyright Directive.

I also consider it essential for individuals to take matters into their own hands in some degree. They absolutely should develop tools and heuristics that help achieve the goals they want. Should some of the burden be taken away from the individual, by codifying tried and true principles into law? Of course!

As a closing thought, perhaps the model of the huge, centralized social network is ultimately a deeply flawed one and no amount of regulation can help it. Perhaps the end game is, once the average of all individual desires have been codified into law, that the traditional social network becomes economically non-viable. Centralizing all communication under a single large entity certainly sounds deeply wrong to me personally. And if it is so, the users will eventually realize this.


>That doesn't mean you can only choose between "anything goes" and "here's a list of government-approved algorithms, everything else is illegal". But it does mean that instead of individuals developing tools and heuristics to protect themselves, the government should mandate the creation of such tools.

Except the government doesn't understand even the basics of how the internet works. And you want them to regulate the specifics of algorithms?

>That's essentially what GDPR is about. Instead of having to find out for yourself how your data is being used, the company needs to tell you. Instead of having to write a crawler yourself to get your data back out, the company has to provide it to you.

Here's an idea: DON'T GIVE OUT THE DATA IN THE FIRST PLACE. Ask for more controls over your data in browsers, because none of these rules are going to actually protect you. Yes, Google in Europe can't legally steal your data, but do you think a Chinese company cares? Of course not, because the EU does not have jurisdiction over them. All GDPR did was make European and above-the-board corporations and citizens worse off for some security theater, but they didn't actually deal with the problem.

GDPR has also made using the web frustrating due to every single website having a pop up now and some sites being outright blocked in the EU. GDPR is a terrible example of government intervention, because it shows that the people making decisions don't understand what they're making decisions about.


GDPR is a direct reaction to large American corporations (Facebook, Google, Microsoft, etc.) not doing the right thing of their own accord for European users. They should make data collection opt-in, not opt-out.

It’s very hard to “not give data out in the first place”. You don’t have to give it out, they’ll do anything they can to take it from you without asking.

I’ve recently had to email Airbnb to cancel my account because I don’t want to be tracked by Facebook, and Airbnb puts that tracking on every page, logged in or not. Airbnb’s view is that I have to create a Facebook account and then set privacy settings there. They refuse to honour Do Not Track headers.

Airbnb won’t even let me log in to delete my account. Instead I have to email them and hope they’ll actually delete my data and account.

When you have companies making things as difficult as possible for users and using sophisticated tax avoidance to crush local competition on price you need legislation with teeth.


Isn't that exactly what A/B testing is? I'd say lack of consent is the unethical part of it.


If holding people responsible for their actions, whether carried out via an algorithm or not, stifles innovation, I don't care. I mean, yeah, fomenting genocide through algorithms is genuinely innovative. Protecting people is more important.


This headline just needs a two word response: "Of course!"

Facebook the company needs to be changed drastically by strong external forces (maybe governments, though the effects could include some harm too). It will not do anything on its own while Mark Zuckerberg and Sheryl Sandberg are still at the company and/or have influence (which they will have, for a long time... at least the CEO will).


I wouldn't trust the US government any more than I would trust Facebook the company (speaking as a non-US citizen). Many countries' governments are less transparent than publicly traded companies. Governments often act with shady 'national security' and geopolitical interests, or sometimes even elections, in mind, but at least public companies are more or less motivated towards increasing shareholder value.

I do rue the dominant narrative in much of HN and the mainstream media. Who is the more dangerous actor, FB or the NSA? My bet is on the NSA.


>at least public companies more or less motivated towards increasing shareholder value.

Is that supposed to be reassuring? Increasing shareholder value for a lot of companies means destroying or manipulating human lives.


Speaking as a European, I would trust the US government over the German government at least.


Could you please elaborate on what aspects you’re referring to and provide some links?

From one perspective, as far as pervasive mass surveillance and exchange of surveillance information are concerned, the U.S. in the Five Eyes group has more influence and impact than Germany in the Fourteen Eyes group.


> Facebook the company needs to be changed drastically by strong external forces (maybe governments, though the effects could include some harm too)

It seems most can only imagine a government-led punitive solution. I would like to see more efforts (they can be government-led) that educate users and encourage alternatives. You can't win going against the people's wants, and unfortunately many in tech circles don't recognize that FB is what the people want right now (it's not so "forced", nor are they "too stupid", as many delusionally hope/believe).

PSAs, populace education/curriculum, grants, coalitions/communities, public-sector development, transparency requirements (without other sweeping data requirements for now), etc. are all reasonable compared to existing efforts of large-scale legislation/requirements plus fines. We can get where we want without large legislative internet interference that won't ever be reduced. It just takes people, the tech and journalism communities particularly, to stop seeing FB as only a nail and the legal hammer as their only tool in the shed. It's like everyone forgets the failures of historical law-based attempts to curb each generation's bogeyman (alcoholics, communists, druggies, terrorists, capitalists, etc.).


Stealing call and contact data off a phone is hacking, it's theft, it's criminal activity. It's what they and their chief engineers did. Prosecute them. Clawback all their earnings. They belong in prison for systemic phone hacking.


Would courts consider it hacking if it was allowed in the terms we accepted?


In the EU, it really depends. I think per country, even. It's not as simple as whatever you wrote in the terms gets to hold, or even not as straightforward as equating clicking with accepting.

It's for courts to decide when it gets challenged, but the general idea is, the less obvious or expected the agreement is for normal usage, the less likely it'll hold. So burying some text that amounts to "we get to hack your device" is about as likely to hold as "we get your first born child" (which has been done multiple times by different companies, for fun, and no it doesn't hold).


Contracts can be voided if they contain serious defects the person signing was not aware of. Moreover, afaik contracts are not above law and constitution, so if they violate individual rights they can also be voided.


I guess this must depend on the jurisdiction and the precise rights in question, because in the US contracts regularly ask you to give up some of your rights, and I'm pretty sure they're enforceable as such.


I found this answer (https://law.stackexchange.com/a/13374); though I cannot vouch for the source's reliability, it corroborates the little research I did on this matter: as long as the subject of a contract is within the law, the contract is valid, although this is all in the context of the Seventh Amendment.

I have come to believe, therefore, that if the subject of a contract is unlawful (in the Facebook case, information theft and violation of privacy), the contract can be nullified in favour of the citizen.


In Germany a contract can be null and void (in whole, meaning the contract was never valid to begin with, or after the fact, meaning the contract was invalidated) without the contract specifically breaking any single law.

Unconscionability in German law may occur in a Knebelvertrag (oppressive contract), in contracts that abuse power differences between the contractual parties, or even in usury (immoral monetary loans, for example). In such cases the contract can become invalid.

In case of any disagreement, the spirit of a contract weighs more heavily than the wording in text form, if it is possible to determine.

A contract also cannot place any part of the agreement outside the reach of a court (i.e. via arbitration clauses; you can still fight your agreement outside court, but you can't forbid someone from going to court). You can't waive most of your rights either, and even if you do, a court may determine that contract to be invalid, and the other party, having violated a right, will then be prosecuted.

Contracts themselves are largely a specific case of a legal transaction (negotium juridicum) and aren't specifically regulated (all above is basically just about any legal transactions ranging from contracts over last wills to verbal agreements).


Under the GDPR, I guess explicit consent at the moment it is done would be required.


What do German ministers have to say about Schufa Holding AG running undisclosed rating algorithms on the entire resident population of their country?


I'm curious what a solution like "All Machine Learning algorithms, and the data that taught them, must be open-source and easily accessible by the public" would look like.

It'd allow for public scrutiny at the algorithms that run our lives. It'd essentially mandate thorough vetting of how anonymous and/or biased the data being fed into these algorithms is.

Which would be interesting.



Well, the idea would be that companies would be forced to actually figure out how to properly clean data, or be forced by public pressure not to use it.


It's probably not possible to clean data like that. Just a few slices of a person's life are enough to uniquely identify them. So all you're asking then is effectively a ban on machine learning.
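The parent's point about uniqueness can be made concrete with a toy sketch. The records below are entirely fabricated for illustration; the idea is that a handful of coarse attributes already acts as a near-unique fingerprint, which is why "anonymized" data so often isn't:

```python
# Count how many records are unique under just three quasi-identifiers.
from collections import Counter

# Fabricated (zip code, birth year, gender) triples
records = [
    ("90210", 1985, "F"), ("90210", 1985, "M"), ("10001", 1972, "F"),
    ("10001", 1985, "F"), ("60614", 1990, "M"), ("60614", 1990, "F"),
    ("73301", 1964, "M"), ("94105", 1988, "F"), ("94105", 1991, "F"),
    ("02139", 1979, "M"),
]

counts = Counter(records)
unique = sum(1 for c in counts.values() if c == 1)
print(f"{unique} of {len(records)} records are pinned down by "
      "(zip, birth year, gender) alone")  # here: all 10 of 10
```

Scale the same counting up to a real dataset and the fraction of unique combinations stays alarmingly high, so "cleaning" would mean coarsening the data until much of its analytic value is gone.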


Apple and others have poured a lot of resources into a concept called “differential privacy” that can mathematically guarantee privacy for data in a machine learning context.

It’s rather well known, so you should probably read up on it before espousing a strongly worded opinion weakly connected to the current state of the art.
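For the curious, here is a minimal sketch of the Laplace mechanism, the textbook building block behind differential privacy (this is an illustration of the general idea, not Apple's actual implementation; all names are made up):

```python
# Answer a counting query with epsilon-differential privacy by adding
# Laplace noise scaled to the query's sensitivity.
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse CDF."""
    u = random.random() - 0.5
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(rows, predicate, epsilon: float) -> float:
    """A count has sensitivity 1 (adding or removing one person changes
    it by at most 1), so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for row in rows if predicate(row))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 45, 67, 34, 51]
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))  # noisy count
```

Smaller epsilon means more noise and stronger privacy; the hard part, as the sibling comment notes, is keeping model quality acceptable once every query or gradient step pays this noise tax.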


I don't think anyone has managed to train differentially private models with acceptable performance, and releasing raw data in differentially private way seems impossible to me.


What happened to innovation? Innovation on privacy isn't possible?

The only way we can assure that cryptographic algorithms are in fact secure is to subject them to public scrutiny over many years.

Perhaps the only way to ensure that machine learning algorithms are not racist, harmful or biased is to have access to both the algorithm itself, the trained model, and the data that created it.

Algorithms can be biased. Data can be biased. Trained models can be biased.

We can't trust the government or companies to prove that a cryptographic algorithm is secure and we can't trust the government or companies to prove that an algorithm is unbiased and unharmful.
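As one concrete example of the kind of audit that public access to models would enable, here is a toy demographic-parity check (the predictions, group labels, and the metric choice are all illustrative assumptions, not a standard):

```python
# Compare a model's positive-outcome rate across two groups; a large gap
# is one simple signal of potentially biased behavior.
def positive_rate(predictions, groups, group):
    rows = [p for p, g in zip(predictions, groups) if g == group]
    return sum(rows) / len(rows)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical model decisions (1 = approve)
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = abs(positive_rate(preds, groups, "a") - positive_rate(preds, groups, "b"))
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap 0.50
```

Real audits use more careful metrics (equalized odds, calibration per group), but none of them can be computed by the public without access to the model's outputs in the first place, which is the point being made above.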


It would either kill machine learning or it would give everyone's personal data out to everyone else.


I don’t know that i’d trust a company that would intentionally steal millions of dollars from children.

https://www.revealnews.org/article/facebook-knowingly-duped-...


That's harsh - they do not intentionally steal money from children.

They intentionally use children to steal their parents' money.


Trust is gone and it will be the end of the big Facebook (website).

I think Facebook (company) is lucky with Instagram because that's where the people are going right now.


Yeah, if the Zucker doesn't destroy it

Though I have to say that at least in its current form, the ability of Instagram to spread crap is limited


> Though I have to say that at least in its current form, the ability of Instagram to spread crap is limited

That proved to be untrue:

https://www.nytimes.com/2018/12/18/technology/russian-interf...

https://www.theguardian.com/technology/2018/dec/18/instagram...

> But two new analyses of the Russian online propaganda campaign by the Internet Research Agency reveal that this view of Instagram was as rose-colored as, well, an artistically filtered Instagram post.

> “Instagram was perhaps the most effective platform for the Internet Research Agency,” states the report by New Knowledge, an American cybersecurity firm which analyzed data sets from Facebook, YouTube and Twitter.

> During the period studied by the report’s authors, IRA posts on Instagram garnered more than twice as many engagements (such as likes or comments) as IRA posts on Facebook – 187m on Instagram vs 77m on Facebook – despite the fact that Facebook offers many more ways for users to interact with content, and Instagram has no native “sharing” button to promote virality.


A startling portion of Instagram users are unaware that Instagram is owned by Facebook, giving Facebook a clean slate. Therefore even when they start to seriously zuck it up it will take many people longer to notice the trend. I think the future of Instagram is rosy for at least a few years.


On the contrary, we should assume that Google, FB, MSFT etc etc will do the worst (for users /society) if it's better for their bottom line. Their trust level is in negative territory, it was zero a few years ago.


> On the contrary, we should assume that Google, FB, MSFT etc etc will do the worst (for users /society) if it's better for their bottom line. Their trust level is in negative territory, it was zero a few years ago.

All the refrains that try to excuse/explain corporate misbehavior by claiming that the only duty of a corporation is to create value for its shareholders are only tempting a massive regulatory response in the future.

If capitalist ethics are used to justify socially unethical and immoral behavior, don't be surprised when democracies start to turn against capitalism...


It’s always worth remembering that the term “Banana Republic” was originally used to describe countries where the local government had been effectively taken over by the United Fruit Company [1], to the extreme detriment of the local people.

The lesson here is that you can’t trust large corporations to act ethically and if you can’t trust you need to regulate them.

(That a clothing company rather insensitively chose Banana Republic as a trade name sort of proves that you can’t legislate against bad taste.)

[1] https://en.m.wikipedia.org/wiki/Banana_republic


Like almost all articles criticising Facebook, this has a prominent 'like on Facebook' icon/link. It is first among the social media logos. Unlike most articles criticising Facebook, their logo is not fixed and permanently visible, so they must be getting really serious about their criticisms.


> Like almost all articles criticising Facebook, this has a prominent 'like on Facebook' icon/link.

The "voting with your wallet" logic doesn't work very well with operations at the size and impact of Facebook. Zeit may be dependent on Facebook for profit. That's why some people ask for regulations. Companies should not be expected to act as moral agents.

You should also know that the journalists writing for a paper have little to do with the business strategies of its owners.


(I'm not sure why the mods changed the title, as it's quite important to note that this is an op-ed by Katarina Barley, the German minister of justice)


Please don't do things to make titles stand out, like using uppercase or exclamation points, or adding a parenthetical remark saying how great an article is. It's implicit in submitting something that you think it's important.

...

If the original title includes the name of the site, please take it out, because the site name will be displayed after the link.

https://news.ycombinator.com/newsguidelines.html

I'd say that applies to author names, too. Besides, I think she's written the article in her capacity as minister of consumer protection, the fact that she's also simultaneously minister of justice isn't all that relevant.


[flagged]


> From calling early users "dumb fucks"

For the love of god, enough already. This is mentioned on almost every Facebook story we get on Hacker News. It's becoming a meme! Look, barely 15 hours ago it was mentioned on another story: https://news.ycombinator.com/item?id=19004067

Can we please elevate the discussion around Facebook and stop mindlessly citing something Zuckerberg said when he was barely an adult? If you want to substantively critique Facebook, by all means do so. There's plenty of material to work with. You don't need to dig into the private conversations a college student had over a decade ago.

Please read and internalize this comment: https://news.ycombinator.com/item?id=19005009


Has Zuck ever apologised for saying it? That’s a genuine question.

If he has, then we might indeed be able to write this off as a folly of youth.

If he hasn’t and he knows how it keeps being brought up it means something different.


Back in 2010 he said that he regretted writing that message and that he'd matured a lot. In the decade that followed that apology, his actions time and time again spoke much louder.


It will remain relevant until Zuckerberg is able to convince people that he no longer feels that way about the common plebs. The way he's been running his company ever since then shows that he still feels that way. Some things you just don't get to live down, particularly when you show no signs of changing who you are.

What kind of person would refuse to give refunds when children get scammed out of money by games designed by professionals from the gambling industry to confuse and manipulate children? Somebody who thinks their users are "dumbfucks", that's who.


It's not relevant for understanding Facebook's current challenges whatsoever. Do you honestly believe that most human beings can't be judged just as harshly for things they said in private conversation when they were 19 years old? Most people don't make offhand, immature and puerile remarks as a teenager? Should those remarks be used to critically understand large companies a decade and a half later?

It's also inane to keep citing it, because there is nothing new to the comment. It brings absolutely nothing new to the discussion that any of us can dig into. It's just an opportunity to fire off yet another short, low effort comment about Facebook.

What drives me mad is that you don't need to cite this stupid quote to convince people Facebook has serious privacy issues. There's so much better material to choose from - more recent, more extensively documented, involving more people...


>It's not relevant for understanding Facebook's current challenges whatsoever

Yeah, it is.

Looking at the original goals and intentions that FB was founded around is indeed quite relevant to understanding their current actions.


No, it's not. You ignored basically everything I just said.

Facebook was not founded around principles written in a private conversation. And even if it was, those principles are not a suitable lens for understanding the massive enterprise it has become. Would you like any company you found in perpetuity to be criticized based on a private chat you had on AIM two decades ago?

Judge Facebook's current issues according to substantive evidence we have in the present day. It's not like we're wanting for it! There's plenty of it, we don't need to rehash the words of a teenager who is now in their thirties every time their company pops up on HN!

I don't get why this is so hard to understand. You'd think I'm defending Facebook here. I'm not! I'm telling people to stop injecting noise into discussions about Facebook's legitimate privacy concerns.


>Would you like any company you found in perpetuity to be criticized based on a private chat you had on AIM two decades ago?

Yeah, because that's relevant information, and that's how context works. We see this in courts of law. Why can't we look at history and consider the present at the same time?


> that's how context works

When people invoke this quote they never examine the context. There's every possibility that this comment was self-deprecating as much as mocking his users, but that possibility is never explored.

People just deliriously portray him as this deeply odious Dr Evil-like villain in the making, and can't actually imagine a goofy 19-year-old saying goofy 19-year-old shit.


> "It's not relevant for understanding Facebook's current challenges whatsoever."

Facebook's "current challenges" are all rooted in their unethical CEO's psychopathic disregard for other people. Why was Facebook so careless(fraudulent) when they mislead news organizations about video engagement numbers, leading many news organizations to invest heavily in video production and consequently get cratered? Because Zuckerberg has little regard for other people.

That quote happens to be one time he said it about himself, but he's proved it about himself numerous times in the subsequent years. A fish rots from the head; the trouble Facebook constantly gets into comes from Zuckerberg and the culture he has instilled in the rest of the organization. It's the same story every time because the root of the problem is still there, unaddressed.


This is a classic tactic of politicians: instead of reprimanding the judicial system for refusing to do its job properly, they cry and whine publicly in order to build the consensus to create more laws, which often guarantee more power to the government and also the creation of special regulatory bodies where they can put their people with a juicy government salary.

All the problems described in this thread could be solved with laws from 25 years ago. Facebook stealing call and contact data? No, they can't do it. No, GDPR was not needed. No, putting a phrase in a ToS that nobody reads is not enough to save a company from being destroyed in court in case of serious wrongdoing. Friendly fraud? Also a crime, just use the existing laws. Calling the users dumb fucks? Not a crime. Manipulating the users' emotions or testing algorithms on users? Also not a crime, and I don't see why it should be.

Of course, it's much easier for the judicial system not to bother applying the existing laws to those big companies. Why risk getting attention from such powerful people? And after all, if they were to apply the existing laws, then the poor politicians would have fewer fake crises to work with in order to expand the role of government and make money.

This is how the whole government thing works. Proven since the beginning of time. Trust them even less than Facebook.


Agree with all of this in general. Inability to enforce existing statutes is not a justification for piling more on. However, what you wrote is widely disagreed with by many of the loud tech and media voices which believe adding laws is the only solution. Modern generations fear companies more than governments because it is difficult to put into perspective the real harms caused by each without a historical lens.


I’ve always wanted the power you describe.

And now I have it in several countries and various political systems. Some of which are totally incompatible with the one I was raised in.

All you need is consensus. And it is easy to get. No matter if it's an ethnostate, a communist state, one with a supreme ruler or prince able to override the legislature, or a representative republic elevated as a bastion of capitalist society, you name it.

It is so surprising and entertaining how the populations of these countries get caught up in their political systems in the least effective way. The key distinction is that they spend their energy on the parts that have no consensus; not just some or half of their energy, ALL of their political energy goes into alienating themselves and everyone else to support the fringes of their party that don't have consensus, and none into the parts that do have consensus, letting themselves get totally steamrolled by power-hungry people and wondering "what happened". They are the ones being played here.

Downvoting doesn't change that or offer another perspective.


>All the problem described in this thread could be solved with laws from 25 years ago. Facebook stealing call and contact data? No, they can't do it. No, GDPR was not needed. No, putting a phrase in a ToS that nobody reads is not enough to save a company from being destroyed in court in case of serious wrongdoing. Friendly fraud? Also a crime, just use the existing laws.

Even if it's true that these are illegal, you're not going to be able to charge someone with a "crime" ex post facto. At most, you might see a lawsuit that results in a settlement for a small number of affected individuals, which will already have been accounted for by Facebook's legal team in their decision to implement these policies.

We do need new laws here. The fact that Facebook feels so free to do these things means that they are unafraid of current law, probably with good reason.


If Facebook's actions are illegal now, they are illegal now; there is no ex post facto problem. I can't speak for the US, but the "friendly fraud", for example, could be considered fraud in many European countries, since it brings profit to Facebook by inducing people into error. The punishment includes jail time. You do need someone to start a lawsuit, of course, and that's what the judicial system should do when the fraud is repeated. But do they do it? Nope, instead we just have the politicians whining that they need more power. What a surprise.

The reason why FB and big companies in general are unafraid of current laws is that they know they won't be applied. It's a big hassle to punish the rich and the corporations, because people working in the judicial system just don't care to make big enemies. Also, it's a pain in the ass because there are two thousand layers of limited liability to uncover before they can put the responsible people in jail. It's much easier to focus all the energy on some poor guy selling some weed or a small business not submitting the right form at the right time. Punishing them needs zero effort, they can't defend themselves properly, and their punishment justifies the work of the judicial system.

Elected politicians should be the ones to keep the judicial system in check, but they have no incentive to make it punish rich people FIRST. In the end, by showing support for "more laws", the people only get more laws, which cost the taxpayers more money and make life difficult for small businesses, while big business won't care and will even be advantaged by them. All this aside from the fact that it doesn't make sense to make another law when there is already a law.


So are we back to having dozens of Facebook spam stories every day? Can't believe a nearly two-day-old Facebook spam story is still on the front page.

Facebook is a private company. It will do its best for its shareholders.

And as best as I can tell, what this german minister of "justice" and a segment of the political and business elites are complaining about is that they don't get to control facebook. These authoritarians want to control what people read, what people say and how they think. No different than saudis or the chinese or the russians complaining about facebook.

If you don't like facebook, stop using it. If germans don't like facebook, create a german version of facebook. Frankly, the germans should be more worried about the nazi/stasi-esque german intranational spying. It's strange how the people complaining the most about facebook and its privacy violations/spying are also the leaders in spying on its own citizens. I doubt even china spies on its own citizens as much as the british and the germans do.


people hate facebook because trump won and brexit passed and they hate that the people didn’t listen to the elites.


I assume this is sarcasm.


you just think it’s a coincidence that all the fb hate started after trump got elected?


It didn't.


yes it did. at least the latest wave of intense fervor. it started when people claimed russian bots on fb affected the election. of course there’s always been some criticism, but not to the same extent


> atleast the latest wave of intense fervor.

That's moving the goalpost to a subjective location. But if you never saw intense criticism of Zuckerberg and his company on HN before the 2016 election outcome, then you just weren't paying attention. The "dumbfucks" comment was what, a decade ago?



