Hacker News | DaFranker's comments

EULAs are never legally binding, and especially not for someone who is 1) not currently a consenting user, and 2) invoking a right.

So in the EU, if current legislative projects bear fruit, a case could probably be made for implementing this under the protection of the right to be forgotten, and there would be very little those services could do to prevent it.


> EULAs are never legally binding

And blanket statements are always false.


Perhaps you'd like a rephrasing:

A EULA can never bind you into a legal contract that has consequences beyond those already provided for by a law or the ability of the copyright holder of the software to deny use of the license.

The EULA is itself purely informal, but informal speech can serve in the provisions of certain laws that look at it to determine the intentions of the two parties. If a law exists that considers whether a user agreed to something in the determination of whether that user must be held to that thing they agreed to, then accepting a EULA is obviously valid under that provision -- but so would an email stating as much.

EULAs are just a list of statements the copyright holder and the user are throwing at each other in bulk format in order to cover those provisions where laws take such statements into account, along with a conditional authorization to use the software. If a EULA does anything else, it will be ignored, as there is generally no law that says "You must do what it says in the EULA". Unless you have one where you live, in which case ouch.

I can say whatever I want in an EULA, but it's going to be worth nothing in court if the user stopped using the license unless I was saying something that directly ties into an existing law. And if they do continue to use the license after doing things I forbid in the EULA, they are committing the specific crime of unauthorized use, since I'm no longer authorizing them.

As much as some EULA writers might get a kick out of writing that you will be tried according to whichever court's law they want, and that you'll be held responsible for XYZ humongous damages if you breach even the tiniest provision even up to two years after you cease using the software... yeah, nope, you'll still get convicted for unauthorized use, not the rest of that crap they listed.

Mind you, I'm repeating what a Canadian lawyer explained to me. YMMV and some places may indeed hold the EULA against you.


Any counter-example about this particular case?


https://web.archive.org/web/20061124021646/http://www.justic...

He was threatened with 55 years in jail for abusing an account with the American College of Physicians after clicking accept on the EULA.


I very much doubt any directive regarding the right to be forgotten will give you the right to use fake information. It's essentially inviting fraud.


What is the difference between a fake profile and a pseudonymous profile? Is it really fraud to use something other than your real name in non-financial settings? What about other-than-real data? People have been using fake names and fake data since the dawn of the net; why would this be any different?


I don't see any problem with the practice per se, but I'd be wary that codifying such a right into law would open a can of worms.


Isn't the right to be forgotten a can of worms already? In theory it's about individuals and their privacy, but in practice it's about fraudsters hiding their fraudulent behaviour so that they can continue being fraudsters.


You're basically making the case that, if you have nothing to fear, you should have nothing to hide. But, replacing "fraudsters" with "terrorists" or "online weed dealers" in your comment states the law enforcement and intelligence community case against personal privacy pretty clearly.

I think that, in practice, it's about whether or not social media sites and search engines should be forced to maintain a market on personal data to serve as a crowdsourced proxy for state-sponsored surveillance.


I see a difference between privacy (preventing information about you from getting out) and rewriting history after the fact. Historical data is a public good, it's much bigger than an individual and his affairs, whether legit or not.


>I see a difference between privacy (preventing information about you from getting out) and rewriting history after the fact.

I don't, in this case. Search engines and social media sites were never intended to serve as arbiters of historical truth. Is it really a good idea to suddenly pretend they are, and insist they act like it?


Search engines weren't intended to be anything, they weren't designed up front by committee. They just ended up what they are. Twitter also wasn't intended to be the channel for officials to communicate with the public, nor Facebook the ultimate Yellow Pages, nor was Google Maps commissioned to become the world's most popular GIS.

You could argue that preserving public goods is the task for the government and yes, we shouldn't insist that private companies do their job for them. But here, we have a company that wants to do that job out of its own will, and the governments insist they stop.


> EULAs are never legally binding

Violation will certainly give the social media site grounds and desire to delete the false profiles, though.


Aren't their financial valuations based on account numbers and made up metrics like that?

So deleting this fraudulent account will cause a measurable, definable, actionable $0.0001 decrease in shareholder value, while ignoring it will have no effect on shareholder value. What is my fiduciary responsibility here?


You are not a fiduciary in this context so would not have fiduciary responsibility to the owners (public or private). Essentially your relationship is with the company as a user/buyer of their product or services and is governed by user agreement - which may be a blanket EULA and may or may not be enforceable.


Not to mention blocking such fake-profile creation services.


> read access


If you have their tax returns, you can mail them a letter, or often even phone or email them.


You're thinking on the level of decisional strategies across agents. Arguments further up the comment chain were thinking on the level of implementing decisions in a given context for one agent, where the context includes other agents with unmodifiable strategies.


Yes, and I believe it is very strange to assume that those other agents have unmodifiable strategies when you are discussing strategies with them :-)


There was a lot of that going around yesterday. For me the most amusing was Twitch.


Heh. Gotta love how the default filter already blocks ||sourceforge.net^


What we need is more pervasive control of discrimination, then, and of other things that could use biometric information.

Simply "hiding" your biometrics or banning biometric identification altogether would be moving backwards in the bigger picture, which I'd compare to forbidding evolution research because it's against religious teaching.


Unfortunately that “control of discrimination” can change uncontrollably in the future, so any system we develop now should give some thought to the risks of leaving behind data that could be abused by a bad actor later.

The BackStory podcast had a recent episode on the history of surveillance in America:

http://backstoryradio.org/shows/keeping-tabs-2/

Among other topics, one segment discussed how a racist official in Virginia used data collected in the 19th century to protect free African Americans as part of his effort to enforce racial purity laws in the 20th century:

“HELEN ROUNTREE: In the county courthouses, there was another kind of record made, and that was the register of free Negroes. He had to get a certificate stating that they were of free birth, otherwise they could be kidnapped and sold into slavery.

The law about that went in in 1806. Plecker was able to get copies of those registers – every county had one. And then if he got a tip later and he could have his people trace genealogically back to a free Negro register, he had that present-day person as a person of African ancestry.”

The full segment is worth listening to:

https://soundcloud.com/backstory/one-data-point-one-drop


> pervasive control of discrimination

The US equal protection clause dates from 1868, and yet not just racism but actually unequal law enforcement is still widespread.


This brings up the point I'm most looking forward to if we can get this stuff widespread: massive carpooling.

Forget Uber. Get a Google-AV subscription, find the nearest "unoccupied" AV (or call for one to come pick you up, preferably scheduled in advance so it's just waiting for you when you get out), hop in, state your destination, get off. The car continues on its merry way to its next duty. Repeat when you want to go back home. Far less space wasted on all those parking lots. Far less opportunity cost wasted. Fewer total cars in existence for the same number of travelers.

It's like a public transit pass, but the bus stops are exactly where you want them, and the bus passes exactly when you need it, and you don't have to deal with that insane, smelly old guy who dances in the middle of the bus and then shakes people's shoulders so they give him pocket change.


Whoa what. Are you suggesting that suspicion of possibly maybe having put a trojan in someone else's files somewhere is grounds to make all one's efforts useless and poisons everything else you do?

Geeze, I guess we should stop using Google. They've been accused and suspected of much worse by a lot of people. I hope that's not what you meant.


> Are you suggesting that suspicion of possibly maybe having put a trojan in someone else's files somewhere is grounds to make all one's efforts useless and poisons everything else you do?

Short answer: Yes. Downloading and running arbitrary binaries from the web is inherently quite a dangerous thing to do, and I only feel comfortable taking such a risk with sites I trust. I no longer trust Sourceforge, and there is very little they can promise me to make me want to download from them again.


Er, okay.

Well, I don't agree[1] with your method of evaluating trustworthiness (which seems to me rather too quantized and "chastity"-minded), but at least you know exactly what you're doing and who you're trusting.

[1] Read as "I believe it's sub-optimal for a given cost-benefit formula, after some assumptions about certain variables and certain opportunity costs, and other methods would likely be more useful in context."


> suspicion of possibly maybe having put a trojan in someone else's files

Isn't it a hard fact at this point?


For Sourceforge specifically? Sure.

In general, the way the comment was worded? No, suspicion does not equal hard fact.

We were talking about the latter.


> and who breaks into our offices to steal our tech?

AFAIK: Shadowrun characters, and that's about it.


> I would say that 99% of all spam I receive is written in English, whereas about 80% of my personal emails are in German.

You're missing some important numbers here for this to be meaningful: Base rates.

If you're getting 10 spam emails per month, and 99% of them are in English, but you're receiving 1000 legitimate personal emails per month, and 20% of them are in English, that's a total of about 210 English emails per month, of which only about 10 are spam. In this case, while it's technically true that an email written in English is more likely to be spam than one written in German, it's such a poor indicator that you'd be better off using another one entirely.

On top of that, you've only pinpointed two categories out of many more possible categories of emails. What if someone is getting 50 spam emails per month, 5 job offer emails per month, 10 updates relevant to their profession per month, 100 professional emails per month, and 50 personal emails per month, and the English:German ratios for those are all 99:1, 9:1, 4:1, 1:1 and 4:1 respectively? Suddenly just "this email is written in English" isn't all that important anymore.

Base rates are very, very, very, super important when discussing stats like these.
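To make the arithmetic concrete, here's a quick sketch in Python using the hypothetical volumes from the first example above (the numbers are illustrative, not real spam statistics):

```python
# Hypothetical monthly volumes from the example above.
spam_per_month = 10
legit_per_month = 1000
p_english_given_spam = 0.99   # 99% of spam is in English
p_english_given_legit = 0.20  # 20% of legitimate mail is in English

english_spam = spam_per_month * p_english_given_spam     # 9.9 English spam emails
english_legit = legit_per_month * p_english_given_legit  # 200.0 English legit emails

# Probability that an English email is spam, by simple counting:
p_spam_given_english = english_spam / (english_spam + english_legit)
print(f"P(spam | English) = {p_spam_given_english:.3f}")  # ~0.047
```

So despite the "99% of spam is English" figure, an English email in this scenario is still about 95% likely to be legitimate.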


That's why I said "one factor of many". If you have ten factors like that and combine them, you'll get an excellent spam filter. That's how Bayesian spam filters work. They combine a number of factors that are not very significant on their own, but using all of them adds up.
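As a sketch of how that combination works, here's a tiny naive-Bayes-style calculation with made-up likelihood ratios (none of these numbers come from a real filter; they're purely illustrative):

```python
import math

# Hypothetical likelihood ratios P(feature | spam) / P(feature | legit).
# Each factor is weak on its own; the naive Bayes independence assumption
# lets us simply multiply them (i.e. add their logs).
factors = {
    "written in English": 0.99 / 0.20,
    "mentions 'free'":    0.30 / 0.10,
    "sender unknown":     0.80 / 0.40,
    "has attachment":     0.25 / 0.15,
}

prior_odds = 10 / 1000  # spam : legit base rate from the earlier example
log_odds = math.log(prior_odds) + sum(math.log(r) for r in factors.values())
p_spam = 1 / (1 + math.exp(-log_odds))
print(f"P(spam) = {p_spam:.2f}")  # four weak factors lift a 1% prior to ~33%
```

Add a few more such factors and the posterior quickly crosses a typical classification threshold, which is exactly the "adds up" effect described above.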

