Hacker News
Path texts my entire phonebook at 6 AM (branded3.com)
1223 points by kemayo 1634 days ago | 409 comments

How about another detail -- the fact that the message said the user had photos to share, when he didn't?

There's annoying spam, and then there's straight-out-lying spam -- the "x has sent you a message, you need to create an account to view it" type.

Just curious, is there a way to sue/fine a company like this for false advertising, essentially?

We need to nip this in the bud.

Maybe it's time to create a Hippocratic Oath for developers to publicly commit to?

A future Path developer could then refuse to implement an unethical "feature" by pointing out that the company had hired them with the full knowledge that the oath had been undertaken.

I don't think the company would pink slip the developer, as they would probably want to avoid any attention being drawn to the unethical "feature" in a tribunal or other legal setting.

A "Hippocratic Oath for developers" sounds like a great idea, but we vastly underestimate the number of jerkbags in the world.

People want validation. Line-level employees want praise from coworkers and bosses. Executives want praise from their peers, investors, industry, and press. Concepts like ethics, "right," or even this-is-good-for-the-world aren't a concern when faced with "X will increase my social status and happiness with my peer brogrammers." What's X? It's anything possible, regardless of legal, right, wrong, or ethical.

A nontrivial number of companies use unethical methods (spam, false invites, false installs, phone and email address book capture, fake attractive profiles) to increase their vanity metrics. Employees see those methods as either: "this is bad, but it's sooooo good for us — look at all the lame n00bs who fall for our tricks" or "this is bad, and I'm ashamed to work here."

The ones who feel shame would take the Hippogrammer Oath. Those who revel in manipulating others and standing on their broken bodies will rake in all the profits while the good guys just sit around and "play nice."

Even the tech darlings of today used spammy methods to grow their initial user base. How do you grow your userbase to ten million when you're growing at a constant 5,000 per day? Obviously you want to "go viral." How does one just on a whim "go viral?" You can either become a meme, a social phenomenon, or spam and manipulate unsuspecting people. Spam is less work than creativity.

Haven't seen this mentioned yet, but ...


I often find myself saying, "I bet somebody got a really nice bonus for that feature."

"That feature" is something aggressively user-hostile,...

A lot of the things mentioned there are good reasons for a curated App Store approach. It's almost impossible to stop arbitrary programs from abusing features of the OS on which they run, unless you have a gatekeeper that can ban poorly behaving programs from ever reaching end users.

On the other hand, isn't Path's app distributed by a curated App Store?

I don't think anyone is saying it's a solution to all problems, just that it might be a solution to some problems.

Alternatively, you can think of it as a good argument for open source. Take abuse of the notification area, for example: in Ubuntu this was eliminated by modifying every package in the archive. That's something a system like Windows, with closed components belonging to dozens of different manufacturers, can't really do.

Wow, I (like many others, obviously) have experienced many of the things described in the MSDN blog article you linked to, yet never really thought critically about the fact that those things don't have to be there!

Things are going to look different to me now when I'm on a windows machine.

Nice post, but putting shortcuts in the Quick Launch bar seems pretty standard these days, and I like it as long as the installer asks. Also, he mentions the fact that a programmer would have to hard-code the file path ... I don't see why they couldn't write an algorithm to discover it instead.

I suspect that the post's point is that many don't use an algorithm and cause problems for non-English installations of Windows.

This would be great...

I remember having to put my job on the line a few times for refusing to program / setup something awful.

One of the worst was when I was asked to combine all divisions' email lists and send out a marketing email selling some overpriced book. This was against the Privacy Act (AU), against our privacy policy, and highly unethical to boot. I refused and was given a written warning.

There was once when I did this as well.

It was a contract web development company I was working for, about ten years ago. One of our clients wanted some SEO work done, and my supervisor had recently been reading a lot about SEO. He started out reading white-hat stuff, but by this point he was delving into some black-hat research, and he essentially asked me to program a message-board spam-bot. I just told him straight up that I believed that would be unethical and I refused to do it, knowing full well that simply refusing to do assigned work could cost me job.

Thankfully, not only did this not cost me my job, it caused my supervisor to re-evaluate his own position and he decided to go back into completely white-hat SEO. And in the end, he actually thanked me for refusing to do that work.

I feel like I lucked out on that one.

> The ones who feel shame would take the Hippogrammer Oath. Those who revel in manipulating others and standing on their broken bodies will rake in all the profits while the good guys just sit around and "play nice."

I think it is important that we realize this kind of mentality won't work. You might see bad people making money, but eventually it will do them no good. Either they can't sustain it, or the money is no good to them, or they can't sleep with all that money under their pillow. You will see a lot of examples of this in history.

I have seen people who are ethical and right also make a lot of profit. Maybe not in the short term, but in the long term. The idea is, you don't chase money; instead you do what you do best, and money will follow you.

You underestimate humans' ability to conceive of themselves as doing good while actually acting immorally, i.e. hypocrisy.

I am not a jerkbag yet multiple times I have put my beliefs to one side to implement something I really didn't want to do. It made me sad and demoralized for weeks. Yet the choice between sticking to values and feeding family is not a difficult one to make in the end.

This is why we got mad at the top of the Nazi regime, not the bottom. The workers just needed to feed their families, the ones at the top orchestrated the evils.

There were plenty of people at the lower levels of the Nazi regime who were put on trial if they were considered to have committed crimes (e.g. concentration camp guards).

Computing has attempted various guises of "can't we all just get along".

At one time, it was cooperative multitasking and memory management. Programs were supposed to behave themselves and get out of one anothers' way. Except that, due to bugs or malice, some didn't. We called this world "DOS" (or pre OSX Macs).

Microsoft still attempts to allow vendors to install programs wherever the hell they want, and to, pretty please, not overwrite other programs' infrastructure or system-level DLLs. Yeah. Right.

In the Linux world, we've solved this problem, if done right, through distro-managed, well, distributions. Any program can be included if it meets qualifications (generally limited to licensing requirements) and a sponsor steps up. Once included, the package gets the benefits of being included in the package lists, distributed over archive mirrors, and included in bugtracking and support systems. However, it's also got to play along with the requirements of Debian Policy as to how it behaves on a system.

The proper way to address the issues of app privileges is to control privileges centrally on the device and grant them to specific apps. If a user doesn't wish to give an app, say, addressbook access, then they can deny it (or feed it a bogus addressbook). The app vendor can decide what they're going to do at this point, but what they can't do is override the user's explicitly stated limits.
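The centrally-brokered model described above can be sketched in a few lines. All names here (`PermissionBroker`, etc.) are hypothetical illustration, not any real mobile OS API; the point is only that the grant lives in the OS, so the app cannot override what the user decided:

```python
# Sketch of OS-level permission brokering: the user grants or denies
# access centrally, and a denied app gets a bogus (empty) addressbook
# rather than the real one. Hypothetical names, not a real OS API.

BOGUS_ADDRESSBOOK = []  # what a denied app is fed instead of real data

class PermissionBroker:
    def __init__(self, real_addressbook):
        self._real = list(real_addressbook)
        self._grants = {}  # app_id -> bool, set only by the user

    def set_grant(self, app_id, allowed):
        """The user, not the app, flips this switch."""
        self._grants[app_id] = allowed

    def get_addressbook(self, app_id):
        # Default-deny: apps that never asked, or were denied,
        # never see the real data.
        if self._grants.get(app_id, False):
            return list(self._real)
        return list(BOGUS_ADDRESSBOOK)

broker = PermissionBroker(["alice@example.com", "bob@example.com"])
broker.set_grant("path", False)           # user denies Path
print(broker.get_addressbook("path"))     # -> []
broker.set_grant("goodapp", True)         # user allows another app
print(broker.get_addressbook("goodapp"))  # -> the real addressbook
```

The vendor can still decide how its app behaves when it receives the bogus data, but the explicit limit is enforced outside the app's reach.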

>Maybe it's time to create a Hippocratic Oath for developers to publicly commit to?

This is silly. Stop trying to add grandeur to writing some code at X,Y startup/company.

People don't die or get harmed when some social-messaging application spams someone. Code is a way to implement an idea. Most applications exist to make money. If this is a shock to you, read the user agreement before installing/upgrading, uninstall the application, or realize that in social networking, your personal data is what the company uses to make a profit.

I disagree. As a computer engineer in Canada, I must swear by the Code of Ethics because what I do (or potentially don't do) can cause harm.

Ethics in computer-science-related fields are important and I think we do need a set of rules we can dogmatically follow like the Hippocratic Oath. Of course, the HO is different in that failing to follow can cause physical harm. However, the world is progressing quickly and more and more information is hosted online -- personal information.

I think it's our job to make sure we don't promote poor practice and unethical behaviour.

>I think it's our job to make sure we don't promote poor practice and unethical behaviour.

No, it's our responsibility as decent people. I don't need to sign some online pledge to keep myself from pushing people in front of trains. If I was the sort to harm others, why would I care about some meaningless online campaign?

"No, it's our responsibility as decent people"

Well, yes. But there is a reason that every profession that has tackled this problem has used a system of oaths and certification. Engineers, Doctors and Lawyers are the canonical examples.

You need something that is given and can be taken away for bad behaviour in order to change behaviour at this level. Damn human brains.

Taken away by whom and on what basis? I would dispute that you need the ability to take away other individuals' ability to lawfully write software. That ability is bound to be abused for political reasons (which is also what it looks like when people have reasonably different ethical systems and one imposes his by force).

Anyway, the issue at hand is bad corporate behavior, not bad programmers. I don't see why we need to start licking our chops about the prospect of forming a blacklist against individual programmers.

This is just a bonding/licensing arrangement, it's in use by every other profession that has this exact problem.

So go ahead and try and stop bad corporate behaviour, everyone else can use a proven system so that programmers can easily say "no" when asked to do something unethical and not be fired for it.

I'm often confused by how often programmers completely reinvent the wheel when faced with social problems. The idea of looking to other similar industries never comes up, even if the problem is exactly the same.

There's nothing to sign and it's not a campaign. You're right, it is our responsibility as decent people to uphold a certain level of moral and ethical behaviour, especially when the software we write is in control of sensitive information.

The Oath is there to remind you to act in the best interest of the user. There are no formalities and although it seems common sense to people like you and me, others might not see it so clearly.

An engineer holds a license such that they can profess, which license is conditional to the respect of their code of ethics (and a bunch of rules). If they do not follow those conditions, their license can be revoked.

So what if they lose their license? They can still write code and do harm.

Yes, indeed. As it stands right now, the reality of the engineer's license is such that it doesn't fit the software world very well. The vast majority of companies couldn't give less of a damn whether you are licensed or not. However, it depends.

Regulations might eventually come into place to force software producers to hire only licensed engineers if the nature of their business is prone to put the public in danger. And as technology grows ever deeper into our lives, the danger that consumer apps pose to the public is ever growing as well. For instance, breaching a user's privacy can yield enough info to grant an ill-intended operator access to the user's e-mail through social engineering, from which it is then often trivial to gain access to that user's bank information. You don't need that much imagination to figure out a scenario where a user's life can be turned to shit by some software abuse.

Given that this risk is ever growing, the possibility of a code of ethics for the software business is plausible. Say in X months, the government of country Y decides that companies hoping to run a social network available on their territory must hire licensed software engineers, and have them sign off on any code that is presented to the public. The software engineer they'd hire would have to put their license and career in jeopardy if they were to implement some evil feature.

Before the Québec Bridge collapse, engineers didn't need a license to build infrastructure. The parallel between the current situation and the past isn't too hard to make.

> I don't need to sign some online pledge to keep myself from pushing people in front of trains.

Neither do doctors really need the Oath of Hippocrates to stop themselves from harming people.

Which is convenient, because the Oath of Hippocrates has not actually stopped doctors from harming people.

So in Canada, software engineers never cause harm to anyone? Nice to know.

>>>> I think we do need a set of rules we can dogmatically follow like the Hippocratic Oath.

So you don't actually have to employ your own brain and your own moral judgement, because somebody already did it for you and wrote this nice set of rules, that you swore by Apollo to faithfully execute, without thinking, not unlike that box of wires and silicon chips you are paid to play with? Nice arrangement, I suppose.

>>>> I think it's our jobs to make sure we don't promote poor practice and un-ethical behaviour.

And you need to sign an explicit oath to do that?

I think you're missing the point. The oath is to remind you to use your head, your best judgement, and a body of ethics, and to act in the interest of the public. Sure, engineers still make mistakes, but the code of ethics isn't some magical document that eliminates human error.

Also, I think you're sort of merging the HO with the Code of Ethics for engineers in Canada -- two very different documents and I suggest you give them a read. And no, no one still thinks they're swearing to Apollo.

>>>> The oath is to remind you to use your head, your best judgement and a body of ethics

Why do you need an oath for that? Shouldn't it always be the default behavior?

>>>> and to act in the interest of the public.

"Interest of the public" is a very dangerous thing. I can remember a lot of very bad things that were done "in the interest of the public". You can make almost anything pass as "in the interest of the public" if you want to. Murder? Millions were murdered "in the interest of the public", because they were of the wrong ethnicity, class, physical features or just in the wrong place in the wrong time. Robbery? Millions were stripped of their property and reduced to utter poverty because it was claimed it is "interest of the public" to do so. And so on, and so forth.

I would rather steer clear of anything that has "interest of the public" written on it, at least until it's very clear what is underneath. Too many things that were underneath such writing proved to be a disaster.

>>> no one still thinks they're swearing to Apollo.

Swearing to a document composed by a faceless bureaucracy is no better. If you have a code of ethics, live by it; if you do not, get one. What do Apollo or his modern equivalent, the almighty bureaucracy, have to do with it?

Because that's how humans work?


One more variation: Nina Mazar, On Amir, and Dan Ariely conducted a similar experiment, but one group was asked to write down 10 books they had read in high school, and the other group was asked to try to recall and write down the Ten Commandments.

When cheating was not possible, the average score was 3.1. When cheating was possible, the book group reported a score of 4.1 (33% cheating). When cheating was possible, the Ten Commandments group scored 3.1 (0% cheating). And most of the subjects couldn't even recall all of the commandments! Even those who could only remember 1 or 2 commandments were nearly as honest. "This indicated that it was not the Commandments themselves that encouraged honesty, but the mere contemplation of a moral benchmark of some kind."

Perhaps we can have people sign secular statements--similar to a professional oath--to remind us of our commitment to honesty. So Ariely had students sign a statement on the answer sheet: "I understand that this study falls under the MIT honor system." Those who signed didn't cheat. Those who didn't see the statement showed 84% cheating. "The effect of signing a statement about an honor code is particularly amazing because MIT doesn't even have an honor code."

Interesting experiments. The question is whether this persists - i.e. if you read the Ten Commandments at the beginning of the semester and take the test at the end, would the difference still remain?

"So you don't actually have to employ your own brain and your own moral judgement, because somebody already did it for you and wrote this nice set of rules, that you swore by Apollo to faithfully execute, without thinking, not unlike that box of wires and silicon chips you are paid to play with? Nice arrangement, I suppose."

So I assume you never use any open source code in any of your programming projects, since you are fundamentally opposed to adopting any ideas that are not exclusively your own?

You assume wrong; this in no way follows from what I said, and I never said that I am "fundamentally opposed to adopting any ideas that are not exclusively your own". You must have accidentally commented on the wrong branch.

So you wear the ring then, eh?

Soon :) I'm just entering my final year of study.

I'm not sure where you are, but... be sure to wear a lot of layers of clothes with at least one all-black layer on the finals day.

Seriously. I hope that you'll see this in time!

You really come off as an amoral jerk here.

What happens if an app for job seekers texts your boss? What if a casual hookup site texts your new girlfriend--even though you signed up a year before meeting her? What happens if your app calls an old person and they crack a hip trying to answer a phone--when everyone that knows them personally knows not to call until they're awake and their caretaker is in?

These may sound far-fetched, and we all mostly don't pretend we're as disciplined as structural engineers, but "we do it for the money lulz" is a shitty and stupid argument.

>You really come off as an amoral jerk here.

I'm okay with this. I'd rather be calculating than have my head in the sand about the business models of social networking what-have-you applications.

>What if a casual hookup site texts your new girlfriend--even though you signed up a year before meeting her?

While I don't and won't have to experience this, your imagined relationship suffers more from lack of trust and honesty than "some dumb app does some dumb, annoying thing."

>"we do it for the money lulz" is a shitty and stupid argument.

Don't Straw Man me. If my code was going to be used for something I perceive as evil, I'd leave the job.

Our industry doesn't need yet another pointless, embarrassing ethics/integrity campaign when the people writing the code don't care.

> Our industry doesn't need yet another pointless, embarrassing ethics/integrity campaign when the people writing the code don't care.

When was the last one?

> People don't die or get harmed when some social-messaging application spams someone.

Of course they get harmed: their time is wasted, and perhaps their concentration disturbed. This is a small harm to each victim, no doubt about it, but if you write code that makes your social-messaging application spam people then you're delivering that small harm to a large number of people. If your code wastes 10 seconds each, just once, for a million people, that's nearly four person-months of aggregate time you've stolen that will never come back.
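Back-of-the-envelope, taking a "person-month" as 30 days of clock time (an assumption; the figure grows a lot if you count only working hours):

```python
# Aggregate harm of a 10-second interruption delivered to a million people.
seconds_each = 10
victims = 1_000_000
total_seconds = seconds_each * victims            # 10 million seconds
person_months = total_seconds / (30 * 24 * 3600)  # 30-day clock months
print(round(person_months, 1))  # -> 3.9
```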

It's quite true that "in social networking, your personal data is what the company uses to make a profit". But if it were near-universal practice for new software engineers to swear a solemn oath not to use their powers for evil, who knows? Perhaps some other business model for social networking might have had a chance to succeed.

It's also true that in search, your personal data is what the company uses to make a profit. And yet, one hears fewer complaints about that.

I'm not justifying either approach, by the way, just observing what I regard as a strange disconnect.

> People don't die or get harmed when some social-messaging application spams someone.

I disagree. This case reminds me of Geni. You would put a relative's email address in to invite them, and they'd then receive a torrent of spammy "updates", until they registered to unsubscribe.

My less tech-savvy father added many relatives from his address book to Geni. Lots of hate from deranged relatives, and some less technically-inclined relatives are probably still being spammed, 6 years later; it made family gatherings awkward for a while. There are real-world, harmful consequences to this kind of scummy, unethical tactic.

> People don't die or get harmed when some social-messaging application spams someone


Of course people get harmed. The harm just has vastly less depth and vastly more breadth.

This response assumes that joining all social networks is user choice. Unfortunately, we live in a world where it hurts consumers NOT to be on some networks. LinkedIn (which also has had a history of horrible spamming in the past) is an example - sure, you can choose not to be on it, but you'll probably get dinged by hiring managers because your credentials somehow seem less legitimate if not corroborated by LinkedIn.

If there were such an oath, programmers would write the code and the business would fill it with content...

It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter.


I'm a member of the Order of the Engineer. Their oath is a pledge of responsibility in engineering, in the interest of the public good: http://en.wikipedia.org/wiki/Order_of_the_Engineer

Engineers have this system already and have had it forever. In Canada (the one I'm familiar with) it is the P.Eng (Professional Engineer) license.

I think the system should be licensing and involve losing that license if you commit an ethics violation.

There would be unlicensed developers of course, but connecting the incentive to not do unethical things with the incentive to be part of the elite class in your profession has worked pretty damn well for Engineers, Doctors and Lawyers.

This is an important sticking point. Maybe a Professional Engineering certification isn't the solution, but let's not get hung up here.

Example: P.E. certified people should have no problem creating weapons systems for a nation-state at war. Does that make it ethical? Depends on who writes the history books afterward.

Example: P.E. certified people might refuse to participate in experimental, unorthodox methods. But especially in software these often become the runaway successes.

In other words _you_ have to own _your_ personal ethics. You won't be able to point and say "I was just following orders!" The pointy-haired boss who gave the orders isn't going to be able to exonerate you of the guilt. Often he doesn't even congratulate you for "doing the right thing." Maybe he'll fire you or give you a bonus – or join you in prison! – but my point is: it's orthogonal to your personal ethics.

Ethics may sometimes appear to conflict with rapid progress. That doesn't necessarily imply an existential crisis, just a lack of forethought. So many ethical problems arise due to overflowing ignorance / lack of forethought combined with a sudden rash of malice (when it comes time to pay the piper). Ethics are a way of expressing realities about the world that conflict with the general Adam Smithian "enlightened self-interest." I view ethics as meta-enlightened self interest – like how Apple is more than just industry-leading, they carved new niches where no one thought to go.

Engineers (software engineers or otherwise) have untangled things much more complicated than this. It's only overwhelming if it blows up in your face.

Path seems like a classic case of all of the above.

Your _personal_ ethics can easily get you _fired_.

Of course. The _whole point_ of _any_ ethical or moral principle is that it directs you to do things that are right even at some possible cost to yourself.

If you believe that standing up for your strongly-held beliefs will get you fired, you should look for a new job _now_. Sure, that incurs the trouble and uncertainty of a job switch, and possibly a pay cut (though perhaps less of that than you think). But if it means that you don't have to be ashamed of what you do all day --- it's generally worth it.

> has worked pretty damn well for Engineers, Doctors and Lawyers.

... and has pretty much screwed over the rest of society, at least in the latter two cases. The legal and medical cartels have done incredible harm to their customers over the years.

See http://mises.org/freemarket_detail.aspx?control=51 (law) and http://mises.org/daily/4276 (medicine) for details.

I have to say that your link about medicine is nothing short of laughable.

Even if there was some bad "allopathy" back in the day, there is more bad eclecticism and homeopathy right now. And to practice medicine you have to understand the scientific method, especially falsifiability.

And I also have to say that the "free market" idea isn't falsifiable. "Leave it to the free market" rarely works.

Parts of the "free market" idea are falsifiable.

They assume rational actors (people making decisions based on their own self interest). That's been falsified (when applied to humans).

Most variations of the efficient market hypothesis have been disproved as well, for the same reasons:

Humans have cognitive biases and other types of irrational behaviour.

But anyone linking to mises.org is probably a follower of the church of the free market. And they generally strongly disagree with the idea that humans have cognitive biases (because their faith requires it not to be true).

I'm glad someone else laughed at the pro-homeopathy / conspiracy theory around the history of snake oil salesmen content on there.

> They assume rational actors (people making decisions based on their own self interest).

That's untrue of some schools of economics that advocate free markets, e.g. Austrian.

> But anyone linking to mises.org is probably a follower of the church of the free market. And they generally strongly disagree with the idea that humans have cognitive biases (because their faith requires it not to be true).

That's an ... interesting ... claim. Care to justify it?

> That's untrue of some schools of economics that advocate free markets, e.g. Austrian.

I took the term "free market 'idea'" to be specifically talking about those for which it's true. That seemed to be the point, and the site linked to was Austrian. Both articles make the assumptions in question about the ability to self-regulate that assumes rational actors. So yes, my statement was not true of all schools, but it seemed like those types of Austrians were not in the scope of the discussion.

> That's an ... interesting ... claim. Care to justify it?

Subjective opinion. I read economics news and neuroscience news because it's interesting. Comment threads, especially here, frequently have two types of subjects that start the vocal libertarians arguing and proclaiming: government regulation and the phrase "humans are irrational".

This may vary by province, but our provincial engineering board will not stand up for you if you get fired due to upholding your code of ethics (they even told us so in ethics class).

Furthermore, whistleblowers are often unemployed for extended periods of time, due to corporations not wanting to hire them as they could be a liability.

Hippocratic Oath for developers? Here's a nice post from 2005 on this exact topic: http://glyf.livejournal.com/46589.html . Choice quote:

> Who would knowingly submit themselves to a doctor, knowing that they might give you a secondary, curable disease, just to ensure they got paid?

Does this mean that we can't write software for drones anymore?

What about software for missile guidance? Is that okay by this oath?

Or do we as a community value not texting people at 6AM more than we value not killing people?

Strawman. And anyway plenty of software developers (and other engineers) won't work on armaments.

So should a software developer's code include armaments, or not?

If it does, it would never get mainstream acceptance; if it doesn't, but does cover the topic of this post, it will be ethically absurd.

I don't see how this is a straw man; when building a professional code, one has to choose what actions to allow or disallow, and this seems like a topic that would obviously come up. What do you think about this scenario misrepresents the idea of a developer's professional code?

This is an internal debate I've had with myself since reading Bernard Williams's Objections to Utilitarianism [0].

I've never had to actually face the ethical dilemma of developing weapons, but what if the development improved precision on a missile? If we can ignore the question as to whether a missile is ethical or not, developing a better guidance system for a missile will help limit collateral damage, but could increase the "comfort-level" of using the weapon for those who decide such things, therefore increasing overall death/destruction. Utilitarianism is hard, because taking all factors into account is impossible. Kind of like machine learning.

I will never scoff at someone who turns down work for ethical objections, but some people are more pragmatic than others.

Both of Williams's examples are really hard to wrap your head around if you accept the situations as presented. They are similar to a Sophie's Choice [1].

[0]: http://plato.stanford.edu/entries/williams-bernard/#Day [1]: http://en.wikipedia.org/wiki/Sophie%27s_Choice_(novel)

This is a divergence, but it seems to ignore the value that comes from propagating the meme that building armament systems is unethical by refusing to participate in it.

While the idea of an oath may be flawed, I do think it's about time some segments of the tech field showed a little less contempt towards users. I don't know how that would happen, but launching your own startup shouldn't give you carte blanche to exploit your users however you see fit.

As a programmer, I don't want a bunch of charlatans in SF to give my career an unsavoury reputation because of these antics.

That's a decision for each individual developer to make.

Some folk tried to create a pacifist version of the GPL[1].

Others are using the RMPL (RobotGroup-Multiplo-Pacifist-License)[2] - basically a MIT license, but with a restriction that bans military projects.

[1] http://arstechnica.com/uncategorized/2006/08/7511/

[2] http://multiplo.com.ar/soft/Mbq/Minibloq.Lic.v1.0.en.pdf

The irony being that the government isn't strictly bound by copyright or licensing terms. They can and have violated them as needed.

> That's a decision for each individual developer to make.

I generally agree with this, which is why I find the idea of a "developer's code" somewhat ridiculous.

We don't need an oath. We need to stop using Path. We need to get everyone we know to stop using Path. No need to over complicate things.

Getting everyone to stop using Path is pretty over-complicated.

We need them to change their ways, not disappear.

Twitter essentially created a Hippocratic Oath for patent usage (that is, for its employees who are developers concerned about unethical offensive use of software patents): https://blog.twitter.com/2012/introducing-innovators-patent-...

The patent language says it's "a commitment from Twitter to our employees that patents can only be used for defensive purposes." Extending this more broadly would say "a commitment to our employees that the code they write can only be used for non-spamming purposes."

The problem, of course, is that it's pretty easy to tell whether a patent is being used defensively or offensively. Defining spam (or more difficult yet, privacy) is a bit more slippery.

Used that in some contract work last year--it's a good thing.

What difference would that make? Path could still fire him, oath or no oath, and I can't see their decision being much influenced there.

They certainly could. The talent pool would certainly hear about it as well.

Publicity. He writes a blog "Path fired me for not breaking the oath" and then communities like this one will rally around that whistle-blower.

This assumes that Path (or any company) would be foolish enough to make their intentions clear.

If a developer refused to implement a feature due to their ethics then the company would do the following:

* Move engineer to different project

* Set unrealistic goals/deadlines/expectations

* After engineer fails, voice concern about performance

* Set up performance review and improvement plan

* After causing engineer to fail a second time due to unrealistic expectations, fire them due to poor performance

Even if that engineer writes a blog post, enough has happened between his initial refusal and termination as to make conclusive proof impossible. The discussion will be a he-said-she-said affair as his former employer makes a counter-blog post explaining the engineer's poor performance.

A lawsuit is similarly out of the question as most companies have sufficient funds to cause delays in court, thereby causing you to spend all your money on attorney fees and bleeding you dry.

Yup. SOP in the food service industry is to give employees who are underperforming 4 hours per week, on the slowest shift, and just leave them there until they quit.

Honestly, if you refuse to do something on ethical grounds and then are moved around, you know what's going on. Most people aren't clueless to office politics and at that point it's your decision to blow the whistle or shut up and watch it happen in spite of your oath.

It's not like hospitals haven't contended with this exact thing for a very long time. The wills of surgeons/doctors and their hospital administrators do not always match up.

How would this be any different from the situation today, where he writes a blog post titled, "Path fired me for not spamming millions of people's phones" and the community rallies around the whistle-blower?

Yeah, that negative publicity around Foxconn's workplace conditions really tanked Apple.

Maybe not, but Apple changed their practices pretty darn quick. They now review suppliers much more closely than they did before, and Foxconn's practices have changed a bit as well. You can argue it didn't have ENOUGH effect, but you can't claim it made no difference.

The goal was not to tank Apple, but to improve working conditions on the factory floor and transparency on Apple's part. Apple's supply chain is much more transparent after the noise. On another note, our outrage/disagreement cannot be outcome-based. We hope the bad publicity will change things; many times it does not.

I get your point, but these guys don't have anywhere near the same clout that Apple has.

Path is young (and building a service, not a device/OS) and it can be abandoned for something similar since all they're keeping is data. Once you buy a gadget, that's an investment on your part, which will make a lot of people hesitant to give it up and the culture that surrounds it.

And I wouldn't bet that Apple will be able to weather scandal after scandal and emerge unscathed. Cook is no Jobs.

I think that's a great idea. Drafting a version of it right now.

I wrote up something quite quickly: http://maxmackie.com/2013/04/30/The-Turing-Oath:-The-Promise...

"The Turing Oath" is on Github (https://github.com/maxmackie/Turing-Oath/blob/master/README....) and I recommend people contribute and we grow this to become something people recognize.

While I viscerally agree with this:

But I'm not sure if Turing, who is not well known for having had anything to do with privacy, is the right person for this oath.

Arguably, the root cause of Turing’s persecution was that his privacy got invaded, and subsequently the government did not think he had a right to his private conduct.

Though admittedly, no technology was involved in the whole matter.

Also, Turing’s wartime exploits involved a breach of privacy in the service of a good cause.

Arguably, the root cause of Turing’s persecution was that his privacy got invaded, and subsequently the government did not think he had a right to his private conduct.

Actually, come to think of it, if viewed in that way, he's the perfect name for an ethics oath regarding privacy. I hadn't considered it that way.

It would be interesting to combine this with an open source license which links to the Oath, and forbids use of the code in any project or system which breaks the Oath.

For developers who have undertaken the Oath, the challenge would then be to write the best code, so that it sees widespread adoption. This might potentially make it harder for companies like Path to engage in activities which break the Oath.

I'd object to the name first and foremost if I knew Turing wasn't involved or directly responsible for its creation. It's still lying.

Fair enough, have another candidate in mind? Feel free to email me -- I want to see this be known.

Two words: Government Regulation.

Once Congress puts a bill forward to deal with this issue, it will be the beginning of the end for this type of behavior. I'm sure Obama will get behind it as well, as it will help the computer market immensely.

"Don't be evil" has never really worked, despite best intentions, depending on your vantage point.

You don't need an official Oath or for the company to know or base their employment on such an Oath. Either way, the bottleneck is the employee drawing attention to the company doing something unethical; no Oath has to get in the way of that. There's already a sub-thread on top-secret/weapons/armaments jobs. What about political ethics? And religious? Marriage / gay rights? Porn? "Sexism?"

Has anyone considered that perhaps Path did NOT violate any ethical boundaries?

Perhaps the guy checked a box that said "Please notify all of my contacts via text message".

Then all the messages went into queue that was delayed a bit.

Then the phone companies converted text messages to voice calls.

"Shippocratic Oath"


There was a court case against a high school alumni site that sent out "a classmate is searching for you" emails to get people to sign up:


> The case originated with two lawsuits claiming that Classmates.com had sent out millions of deceptive e-mails telling users that an old friend was trying to contact them, and had viewed their profile or signed their "guestbook." For the great majority, that wasn't true; no one at all had shown an interest in their profile. About 60 million users were contacted, and about 3 million actually took the bait, paying between $10 and $40 to Classmates.

You don't know how happy it makes me to see former annoyance-kings classmates.com referred to as a "high school alumni site." Ah, the pre-FB days.

Another hook might be a violation of the Telephone Consumer Protection Act (TCPA) which has been interpreted to prohibit automatically sending text messages without the recipient's consent. The statutory damages are $500 per incident.

Classmates.com was sued in 2008 for sending emails claiming former classmates were searching for you. Similar, but Classmates.com was actually requiring a paid subscription before telling you there wasn't actually anyone on the other end.


Simple math is all it takes to understand why this is pervasive. In the classmates.com example, they collected upwards of $75 million in revenue from this scheme, and only had to dish out $2.75 million for the settlement several years later. So for any newcomers, this type of bait is an attractive business proposition; the potential lawsuit years down the line can simply be attributed to cost of revenue. And you'll obviously only get sued if there's money to be won, which means the scheme was a success, so in other words, you want to get sued. The real problem is the judgments/settlements are an order of magnitude lower than they should be.
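The arithmetic in that paragraph is worth making explicit. A quick sketch using the figures cited in the comment (the $25 average is an assumed midpoint of the reported $10-$40 range, not a reported number):

```python
# Back-of-the-envelope math on the Classmates.com scheme, using the
# numbers cited in the comment above. The $25 average price is an
# assumed midpoint of the reported $10-$40 range.

converted = 3_000_000   # users who "took the bait" and paid
avg_price = 25          # assumed midpoint of $10-$40
revenue = converted * avg_price

settlement = 2_750_000  # settlement paid years later

net = revenue - settlement
print(f"revenue: ${revenue:,}, settlement: ${settlement:,}, net: ${net:,}")
```

At those numbers the settlement eats less than 4% of the revenue, which is exactly the commenter's point: the penalty reads as a cost of revenue, not a deterrent.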

There's the hidden cost of permanently alienating potential customers, though, which will show up silently in reduced conversion on all subsequent advertising efforts.

Sure. But it gets some douche product manager his bonus for revenue/user acquisition which was the only goal in the scheme.

Maybe that's why we've been seeing Path more in App Store top charts recently?

Another app doing similar spamming is Circle: http://discovercircle.com - surprised no one talked about that...

Growth hacking through grey-hat tricks? This is an outrage!

(your winnings, sir.)

Apps can't send text messages without the user knowing it on iOS, you have to tap Send for each one of them, so this couldn't have affected App Store rankings.

You presume the app sent it and not their servers or a partner service provider. They already grab your address book, including phone numbers. They don't need you after that.

In that case don't you think we would have seen many more reports of this shady practice? This seems to me a combination of purposefully bad UI and the user not paying attention.

Still the same practice spammers use, but very different from "covertly uploading my contacts data and texting everyone without telling me".

Actually the article notes that quite a few other people do seem to have experienced this in the past...

See, I am not convinced that it was an innocent "automated mistake". This is equivalent to the anecdotal old excuse one always hears from government officials: "the computer did it". When some spamming process like this is automated, all it means is that it is being inflicted on lots of people, and most likely deliberately. Automation is not something that can be accepted as an excuse; on the contrary, it is an aggravating circumstance.

I incline towards the opinion that these sharks do it quite deliberately. They just don't care how many people they embarrass and annoy, as long as some of those tech-innocent grannies and plumbers join up and thus put figures on the business projection sheets that get these sharks closer to cashing in big on an FB-style IPO.

Facebook does that exact thing, by the way - my boss came back from a week of vacation and wondered how I found him on Facebook. Facebook had sent each one of us in the office a link that he didn't know about. Anytime you send email as a user, it needs to be very explicit.

Probably that's why they're hiring a director of legal and privacy - https://hire.jobvite.com/CompanyJobs/Careers.aspx?c=qd29Vfw3...

In Australia it is possible to get companies that send unwanted texts banned from sending texts on Australian networks.

"The Privacy Act gives you the right to make a complaint if you believe an Australian or ACT government agency, or a private sector organisation covered by the Act, has mishandled your personal information contained in a record."


There's no way to sue, but I suppose the FTC could take action and fine.

They will claim it's a "bug"... albeit one of those "viral bugs" that seems to lead to topping the App Store download charts. These tactics make me think there's no such thing as organic growth within the app stores.

>There's no way to sue //

You can sue for anything; just not always successfully. That said ...

This appears to be trespass which I think is a tort (entering a part of property, the phone, that was off limits and without consent). You can sue for that.

It's also in the same respect contrary to the UK Computer Misuse Act (crime) AFAICT [story appears to be in the UK, or is it just UK entries in the addressbook]. That would probably hinge on the consent to access the particular files duplicated.

Then there's database rights (tort?), a sort of copyright for databases. [Copyright wouldn't apply as it's not a creative work].

Harassment (tort I think) and infringement of the right to a private life as enshrined in the ECHR (crime) seem to be causes to object as well.

As the calls were business-motivated, failure to check against a telephone cold-call blacklist could also generate extra fines.

Seems there's much that could be sued for.

Reminds me of the emails Facebook sends me telling me that I've missed important notifications when I don't log in for a while. I log in, and there are no notifications waiting.

And the messages Facebook sends me telling me about an event and letting me know that one of my friends is a guest, when they've been invited but haven't actually confirmed.

"Just curious, is there a way to sue/fine a company like this for false advertising, essentially?"

Yes. It's illegal to use someone else's likeness for advertising without their permission.

Fine you say? Here's an earlier HN submission pertaining to just that: https://news.ycombinator.com/item?id=5630449

UPDATE: That appears to be related to collecting info on minors. Looks like they need to be fined again for this.

Well, I don't think we should look for malice when it's just incompetence.

LinkedIn keeps asking me to share my mail user/password so I can connect with more people, and says that X and many others already did it. I can't tell about others, but X is my wife and I'm certain she didn't do it.

So well, as I said I think that this is just a matter of incompetence (ie. automated message gone wrong).

EDIT: typo

How about another detail — the fact that – according to another comment here, the only one that seems to actually have looked at how the app behaves before jumping the gun – the app tells you it's going to invite everyone on first launch and you need to tell it not to?

Still bad, but quite a different perspective.

This is a proven (and risky) way to grow. I really cannot blame them: they have to pay their rent and the VCs are nervous.

This approach might burn them completely, but it can also get them to some significant number of users (after which they will issue an apology and pay all fines if needed).

> I really cannot blame them

Shitty behavior does not stop being shitty behavior because you have bills to pay. And it's not even in the "understandable under duress" area of shitty; Path isn't a person with a starving child and that dude's contact list isn't a loaf of bread.

If your company can't exist without being shitty, your company shouldn't exist.

I completely agree with you on "If your company can't exist without being shitty, your company shouldn't exist." However, here in VC-stan, behavior like this is considered "hacking the growth" and not a bad thing. So unfortunately, you cannot play the game here (ok, you can play, but no big money will bet on you) if you're not ready to do things like this.

I'm OK with that game not being played. That spam-calling random people at 6AM is not an immediate "get the fuck out of my office" is probably a pretty good indication that something is deeply rotten in, as you put it, "VC-stan."

But you can get on TechCrunch, so it's gotta be okay, right? :(

And we also need to understand that all these social networks are in the business of spamming people ("growing the network"). Hey, I remember back in 2006 (maybe 2005?) Facebook invited all my Gmail contacts to Facebook (including people who interviewed me ...). That is the key to their existence, so I'm not sure how anybody in their right mind can expect anything else. Maybe I'm too old and see these things as they are...

If this sort of behavior is really being pushed by investors, it kind of makes one wonder about the corporate veil shielding investors from liability.

Shitty behavior also does not stop being shitty behavior when you're pressured into doing it.

I'm all for the moral high ground, but it becomes a lot less simple when "pressured" means something like a combination of "sole breadwinner" and "ethical choice." Reality doesn't bend to our idealism, and businesses are grounded solely in reality (often to our detriment).

>when "pressured" means something like a combination of "sole breadwinner" and "ethical choice."

I can't figure out what you mean by this. Are you saying that this behavior is only alright when you need the money?

Definitely not. I'm saying it's never "right" in a moral/ethical sense, but when you're put in certain (fairly common) positions, the moral high ground is not a feasible option for most people. It comes down to whether you consider stability for those who depend on you more or less important than doing or refusing to do something based on whether you consider it right or wrong... that's something that's easy to judge until you're in such a position.

This isn't the hypothetical where you're stealing a loaf of bread from the market to feed your starving family. I certainly respect that taking the moral high ground isn't always easy and that some people are going to be in situations where it is more difficult for them to do so than for others.

But the standard for acting like an asshole has to be greater than simple expediency. The necessity of breaking the social contract has to be roughly proportional to the community inconvenience; that's why firemen get to use the siren and everyone is supposed to yield when they are headed to, well, fight a fire, but I don't get to use one when I'm headed to the grocery store.

If I bang on a stranger's door at 6am because their house is on fire and I'm trying to warn them, then that's great, because the "don't harass strangers at six in the freaking morning" social norm is less important than the "OMG THE FLAMES THEY BURN!" social norm. In contrast, if I bang on someone's door at 6am trying to sell Amway products, then I'm an asshole. Finally, If I bang on someone's door at 6am, insist that their buddy, whose name I found by going through the trash, has photos to share with them, and only later reveal that there never were any photos, then I'm an unbelievable jackass.

Unfortunately, your analogy is excessively dramatic and misses the reality of the situation. It's easy to create surreal situations in which absolute visions of your own morals apply. Sorry, but you just don't get it.

Apologies for trying to make a point. If anything, my analogy seems pretty much on point, as it barely changes the reality around the events here as described:

banging on door <=> text/phone call, which likely causes an audible alert
stranger <=> contact/acquaintance/vendor/client/boss/relative/lover/ex-lover/dentist of a new user
6am <=> 6am
their buddy <=> their contact
the trash <=> a new user's cell phone contacts
has photos to share <=> has photos to share
there never were any photos <=> there never were any photos
unbelievable jackass <=> unbelievable jackass

Seems close enough.

But my real question for you is which is it? Did a developer/Path act unethically, but you believe those actions are justified because people have families they need to care for? Or do you believe that Path/the developers did nothing wrong and I'm just applying my own morals to the situation? One or the other is a legitimate position to take (though I may disagree with your view), but you can't have both.

Because your hand-waving and appeals to authority are so much more relevant?

This sounds patronizing. As adults, you make your choice and face the consequences. That you may have dependents does not reduce your moral responsibility one bit.

Clearly you don't know how it feels to actually have "dependents" or what that responsibility entails. This will sound patronizing, but it's absolutely valid: Come back and comment on this in 10 to 20 years. At that point you might have the perspective you clearly lack now.

[edit]To be clear: I do not have dependents, but I also don't have an employer (who isn't me), and I do have people that depend on me. What I also have is a lot of experience in a lot of situations that are very much "gray" in terms of what less-experienced people seem to consider moral/ethical absolutes, which simply do not exist. That last part is what you don't understand but are likely to figure out as life teaches you the things your parents and teachers would like to but simply can't.[/edit]

It sounds patronizing because it is. It sounds silly because it's that, too.

Doing the right thing is sometimes difficult. That's not an excuse to do the wrong thing. Indiscriminately spamming hundreds of contacts is always the wrong thing.

It doesn't, but it might mean we need to look into how the pressure is being applied.

We can bitch at Path all day, but there exists strong incentive to do this, and people who don't do this will have unilaterally disarmed and be at a disadvantage. We need to encourage the first group to not do this, and encourage the second group to keep it up and not feel like suckers for respecting their users.

Would it also be okay to murder people in order to pay your rent and your backers are nervous? What about just hitting them over the head? What about just threatening to hit them?

Ethical justification isn't easy, but it's also not this hard. The entire point of being ethical is that you might lose out as a result of being ethical. If you opt to discard ethics in order to get ahead, you are being unethical. That is what it means to be unethical.

This. Is. Unethical.

Then the fines are obviously not high enough.

I think a better solution is to make fines exponential for repeat offenses, e.g. 1st time $10, 2nd time $100, 3rd time $1000, and so on.

Fines are meant to discourage behaviour found unacceptable by society[0]. That obviously fails to work if the net outcome of a ‘discouraged’ action is still positive even after taking into account the fine (and compensation for damage, if any). So, really, the (first) fine was not high enough. For a very similar reason, non-trivial fines[1] are usually given as a percentage of turnover or as daily fines.

[0] Note that jail time also serves to rehabilitate the offender in case of serious crimes. That obviously doesn’t really work with companies, and unfortunately, carelessness with other people’s data is not considered a serious crime in many places.

[1] Basically anything above a parking ticket.

Yeah, that's why I think exponential growth in fines for repeated offenses by a company is a good idea; it will eventually be net-negative for the company, so they will stop doing it.

But why do you want to grow fines instead of imposing appropriate (i.e. high-enough) fines on the first offence?

Because you can't know the net positive in advance; it's way too dependent on the context, and that includes the size of the company.

It also discourages probing for flaws in the system, because even if you get away with it once (net-positive), you know that next time the fine will make it unviable.

BTW, this is similar to how the criminal justice system already works in many countries, where after repeated minor offenses you go to jail.
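The proposed schedule is easy to model. A toy sketch of the $10/$100/$1000 progression from the comment above; the $50,000-per-offense gain is an illustrative number, not something from the thread:

```python
# Sketch of the exponential fine schedule proposed above: the n-th
# repeat offense costs base * 10**(n-1), i.e. $10, $100, $1000, ...
# Even a scheme that nets a fixed gain per offense eventually turns
# net-negative under this schedule.

def fine(n, base=10):
    """Fine for the n-th offense (1-indexed)."""
    return base * 10 ** (n - 1)

def first_unprofitable_offense(gain_per_offense, base=10):
    """Smallest n at which cumulative fines exceed cumulative gains."""
    n, gains, fines = 0, 0, 0
    while n == 0 or fines <= gains:
        n += 1
        gains += gain_per_offense
        fines += fine(n, base)
    return n

# A scheme netting an (illustrative) $50,000 per offense stops paying
# off at the sixth offense, when cumulative fines hit $1,111,110.
print(first_unprofitable_offense(50_000))
```

The design point is that the schedule needs no tuning to the offender's size: whatever the per-offense gain, the 10x growth in fines overtakes it within a few repeats.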

I'm sure this will be downvoted to hell and back, but still: http://jesuschristsiliconvalley.tumblr.com/post/46539276780/...

And here I was busy thinking "Wow. If Path is doing crap like this, then that post about the founder being a douchebag must be 100% correct." Congrats on being even more obnoxious than Facebook.

This was so great it almost made me laugh out loud.

The man is a genius. His other posts are also pretty awesome. However, part of me suspects that in an ultimate act of vanity, Dave Morin himself set up his own hateblog.

Turning off the ringer on your phone is totally legit, though. I don't believe in interrupt-driven communication, unless it's mediated through a machine or some kind of filter. (I'll let a machine notify me if lots of stuff is down, or if one of a very small number of people call me, but that's about it. IIRC, pg's call went to voicemail a couple years ago.)

Turning off the ringer on your phone is totally legit, though.

It is, but his stated reason for doing so is nothing if not totally insane.

Especially in context, the phrasing is ridiculous, but semantically it's no different from "I refused to be bossed around by my phone".

Refusing to be bossed around by an inanimate object still seems somewhat ridiculous to me.

A phone, especially a ringing one, isn't particularly inanimate.

Being bossed around by an inanimate object seems even more ridiculous to me :)

Turning off the ringer on your phone is totally legit, though.

For the CEO of a business that sends phone spam to other people at 6AM, it’s also rather telling behavior, though.

There are several things there that could be perfectly legit but the way they're stated just comes across as completely absurd. I know several people who have two phones (for similar reasons), I know people who prefer texting to calls (and reject calls regularly), I know people who create bespoke apps to play around and solve their own problems. None of the people I know would say any of the kinds of things you see in that article (Except maybe about Uber - that one seems normal).

Awesome. I've been searching high and low for something to fill the uncov-shaped hole in my heart.

FYI, I stopped reading your comment six words in and immediately downvoted it.

This isn't Reddit.

And I'd like to keep it that way.

Looking at other posts, those writings lack empathy and imagination. The post about Google Glass [0] is probably one of the most asinine things I've read on the Internet this month.

[0] - http://jesuschristsiliconvalley.tumblr.com/post/48596551224/...

It's obviously written in that voice for effect, but there is actual thought behind it. Comparing Google Glass to the Segway is valid, and probably not far off. Only time will tell.

Path must really like those FTC fines.

Here's to hoping the next fine will exceed their cash reserves and we can put an end to this madness.

The post is proof positive that Path still uploads phonebooks from the app to their servers right after installing it.

> The post is proof positive that path still uploads phonebooks from the app to their servers right after installing it.

Is it? The texts were coming from his phone number, which suggests they were sent from his phone (not necessarily, I know, but you said "proof positive").

I don't know how Android text message sending works, but there is likely some rate limiting to how many texts you can send so they certainly could have been queued up to be sent later.

The sent address on an SMS is as meaningless as it is on an email. SMS gateways allow the sender to use any number or caller ID. I've used it previously as a party trick, it's completely transparent to the user.

I know that, and I said so in the comment, I have used it myself in some projects. I just meant that this is not necessarily "proof positive": there are other ways they could have sent those texts without uploading the user's address book to their server.
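For anyone unfamiliar with how that "party trick" works: with a typical bulk-SMS HTTP gateway, the sender is just another field in the request, so a server that has already harvested a user's number can put it in the "from" slot. A minimal sketch; the gateway host and parameter names here are hypothetical, and real providers vary:

```python
# Sketch of a server-side SMS send through a bulk gateway.
# "sms-gateway.example.com" and the field names are HYPOTHETICAL;
# the point is only that most gateways accept an arbitrary sender ID,
# so the "from" number need not belong to the machine sending it.

def build_sms_request(sender, recipient, body):
    """Assemble the payload a server would POST to a gateway."""
    return {
        "url": "https://sms-gateway.example.com/send",
        "payload": {
            "from": sender,    # any number the service has on file
            "to": recipient,
            "text": body,
        },
    }

req = build_sms_request(
    sender="+15551234567",     # the user's own number, not the server's
    recipient="+15559876543",
    body="Alice has photos to share with you on Path!",
)
print(req["payload"]["from"])  # the recipient sees the harvested number
```

Which is why "the texts came from his number" is weak evidence either way: it's consistent both with the phone sending them and with a server sending them on its behalf.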

I've received no less than two text messages from Path today - both of which were not sent from my friends phones, but rather one of those 5 digit numbers that automated texts seem to use. This is... shady to say the least.

This comment is a great example of how people are so quick to rush to judgement with emotional reactions. Let's look at the facts:

1. Path was fined, not for anything involving address books, but for allowing 12 year olds to sign up for the service.

2. Yes... it is proof that Path uploads your phone book. Of course, they ask you. The OS won't even give you access to the phone book without prompting the user. So somewhere along the way, the user knowingly gave Path access to their contacts.

It would be rather trivial for a real reporter to do some research here. Does Path actually say "We're going to invite all your friends via SMS", even in fine print? It might be sleazy, but it would certainly change a lot. But instead, we're just going to sit here and speculate about things and irrationally talk about a fine that didn't have anything to do with this.

Me1000, I love you, but...

> The OS won't even give you access to the phone book without prompting the user. So somewhere along the way, the user knowingly gave Path access to their contacts.

The introduction of address book privacy in iOS was in large part prompted by the publication of Path's behavior. Up until the Path and eventually iOS update after the controversy first arose, Path didn't explicitly ask the user for access to their address book.


> Path was fined, not for anything involving address books, but for allowing 12 year olds to sign up for the service.

Path was fined for the 12 year old signup thing specifically, but they were still charged with privacy violations regarding the address book kerfuffle.


OP is using an Android phone.

Android's permission system also doesn't require apps to ask for permission before they access your phonebook - it requires you to give permission to install the app, and it tells you the app can view your phonebook, but you have to either trust the app not to abuse that ability or not install it at all. There's no way of telling the difference between an app that can use your phonebook to provide useful optional functionality and one that'll upload the entire thing to the mothership the moment you start it.

I really think Android should add another layer of protection here, similar to the "This app wants to use your location" prompt in iOS. I'd like to be able to install an app that might need to access my phonebook in some use case but be able to deny it when it attempts to access that information when I don't want it to.

For example, I'd want to be able to use the facebook app and many users might even want to have it scan their address books in order to find friends. However, if the app attempts to read my address book when I'm just checking someone's status update that is clearly not okay and I want to be able to block it.

The free pass to pillage my phone upon installation doesn't sit well with me.

> I really think Android should add another layer of protection here, similar to the "This app wants to use your location" prompt in iOS

Most definitely. This has always been my argument against the whole system: installing apps that need excessive permissions is basically blackmail. Just like "Do you agree to the terms of service?", you hardly have a choice. I was very surprised to see people not even glance at the permissions before clicking Accept.

But as I said, it's blackmail anyway whether you look or not. You don't want them to have all your contacts, your exact location, all data on your SD card, and full network access? Fine, then you won't get [WhatsApp] (or pretty much any other app), which is what everyone else has and what you're almost socially obliged to have (at least in my age category).

It even goes so far that the android user has no permission to use the permission manager to deny or allow permissions for apps. There are commands ("pm grant x" and "pm revoke y") that let you change apps' permissions... but you can't use them by default, even as root ("java.lang.SecurityException: Neither user [your uid] nor current process has android.permission.GRANT_REVOKE_PERMISSIONS"). It's totally messed up.

On a rooted android phone, it is possible to install apps but not give them the permissions. Of course, that usually means they will crash when they try to do something, but it gives you a layer of protection if you want to try the app out or something.

I'm well aware of what prompted the change to iOS. We talked about that last year. I'm not defending anything, but that's not really at issue today, though.

Specifically in this case: pretty sure Android and Path alike might be informing you about address book access, but nowhere is the user prompted before Path shares with their contacts, even after uninstalling the app.

Same odious behavior, just a bit different this time.

it certainly is on android. one of the reasons why i only install "social" apps on iOS..

1. Path was fined, not for anything involving address books, but for allowing 12-year-olds to sign up for the service.

That's not true, is it? According to the FTC[1], they were fined for "collecting personal information from their mobile device address books without their knowledge and consent."

2. Yes... it is proof that Path uploads your phone book. Of course, they ask you. The OS won't even give you access to the phone book without prompting the user. So somewhere along the way, the user knowingly gave Path access to their contacts.

Giving them permission to read the address book (which might be useful and perfectly legitimate) and giving them permission to send everyone in that address book spam are two very different things.

[1] http://www.ftc.gov/opa/2013/02/path.shtm

1. The subtitle: "Company also Will Pay $800,000 for Allegedly Collecting Kids' Personal Information without their Parents' Consent" The fine was only with regard to the violation of COPPA.

They pled down, effectively. They were charged with more widely-ranging violations.

You are equating "grant access to the address book" with "spam all 'friends' at 6 o'clock in the morning to tell them lies"

By your logic, it would be completely useless to even read the fine print, because giving them access to the addressbook would imply my consent for them to do anything technically possible with it.

> ...because giving them access to the addressbook would imply my consent for them to do anything technically possible with it.

Well, from a technical perspective that is indeed the case. Once they physically have your contact info they may do as they please.

You, as the iOS or Android user, are not giving them permission to use your contacts "properly" or "nicely"--you're giving permission to access them, the raw data of all of them, and once that's done all bets are off. If the app is untrustworthy it is free to go crazy (one of the reasons I always say "no" to that question).

I don't see how Apple or Google can stop this in a technical way without making the permissions more fine grained which in turn makes it more confusing to users (who probably mostly click "OK" anyway).

Apple could, however, make better app policies so that they can pull apps when they attempt this kind of shady crap. I'm not familiar with the Android app store policy, so I won't speculate there.

I agree, there's no easy solution. Maybe public debates like this are the best we can hope for anyway. My approach is to be extremely cautious with apps that depend on network effects to be useful or profitable.

  >Does Path actually say "We're going to invite all your friends via SMS", even in fine print?
Should it? More importantly, will anyone download the app in the first place if it did?

No one -- in their right minds -- would suddenly want to share (non-existent) photos with all their contacts. Seems like an odd way to say "We're going to invite all your friends via SMS". Your address book doesn't consist primarily of your Twitter followers. It doesn't matter if they intended it to be a feature; someone at the company should have raised a Big Red Flag and made any such SMS feature explicitly opt-in-only. With big fonts, high-contrast colors, dancing bananas or whatever else you can use to grab attention to that fact.

This is an order of magnitude beyond sleazy.

"I think it'd be more appropriate if the box bore a great red label: 'WARNING: LARK'S VOMIT!!!'" — "Our sales would plummet!" — "Well, why don't you move into more conventional areas of confectionery??!!"

Without addressing the opt-in point, which I agree with... You're also assuming there is no "continue without doing this" button, which of course, I was assuming there would be...

Many users just mash on the "next" button on app intro screens. Again, it's all just speculation until someone takes the time to start researching and documenting the facts instead of just yelling "KILL IT"! :/

I agree, we're just all shooting the breeze here until we get a proper word from Path, and/or a journalist does a proper investigation into what exactly did happen.

I'm not unsympathetic when someone says "oh, I didn't read that bit" (I'm guilty of that too), but surely they have usability experts who would have warned them about it. The original author is technically inclined enough to make an informed decision. That's a big deal to me. That tells me they never gave the guy any settings options to begin with, or hid them in an obscure panel.

What's worse, according to the author, texts were sent possibly after he uninstalled the app. Which means they still keep the data!

You "assume" there would be? You worked for Path...

Classy. Interning at a company and having not been there for 9 months (see above) obviously counts for nothing.

For what it's worth, I disagree with Path and Me1000's arguments. Doesn't mean us here at HN should be attacking him or ignoring his points because of that, nor does it mean that doxing him is acceptable.

Really disappointed with HN in this thread :/

If it was opt-in then mashing the "Next" button would be sufficient to prevent the app from misbehaving...

Me1000, you should mention to the folks here that you actually worked on the address book stealing code during your time working at Path so you are particularly aware that the statements in your post there are both untrue and self-serving.

I've never hidden that I interned at Path. The address book debacle happened long before I ever joined. The code you're talking about did little more than normalize phone numbers so it could be hashed _before_ it was sent to the server.

It's been about nine months since I've worked at Path and as you may know (although, given how baseless your comment is, perhaps you wouldn't know)... startups move quickly, Path has released many updates since I've left. It's incredibly disingenuous to suggest what I'm saying is untrue (since both statements I made are provable with empirical evidence) or that I had any motive other than trying to get people to think before they go on a witch hunt.

Droithomme, if I know you personally, I would appreciate you contacting me privately.
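The "normalize then hash before upload" approach described above can be sketched in plain Java. This is only an illustration of the general technique; the actual normalization rules and hash algorithm (SHA-256 here) are assumptions, not Path's real implementation:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: normalize a phone number so different formattings of the
// same number produce identical hashes, then hash it so the raw
// number never has to be sent to the server.
public class ContactHashDemo {
    // Strip everything but digits, so "+1 (415) 555-0123" and
    // "1-415-555-0123" normalize to the same string.
    public static String normalize(String phone) {
        return phone.replaceAll("[^0-9]", "");
    }

    // Hex-encoded SHA-256 of the normalized number.
    public static String hash(String phone) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(
                normalize(phone).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // SHA-256 is always available
        }
    }
}
```

Note that hashing alone is weak protection for phone numbers, since the input space is small enough to brute-force; it only keeps the raw numbers out of the request itself.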

FWIW, while I disagree with your stated opinion, I can also see how you came to it. I think most of us here realise that your opinion is separate from your past employment.

Doxing Me1000 is incredibly uncool.

That's not really doxing, I don't think. He's got links in his profile and the same username as on Twitter.

Lots of people know Randy. He's had posts on the front page of HN.

As much as a developer can be a "public figure", I think he is one.

If John Resig posted about DOM libraries and someone mentioned that he wrote jQuery, I don't think anyone would suggest he'd been doxed.

There's a big difference between "X would like to access your contacts" and "X would like to access your contacts, then store everything on their servers for reasons/duration of their determination"

But not from the OS's point of view. iOS (and Android) can only warn about the contact crossing the threshold into the app. After that the app can do as it pleases...

Path's settlement with the FTC for the last address book incident was that they create a privacy program and get independent privacy audits every other year for the next 20 years.

I don't know what the details are on those audits, but if these texts were sent without consent it seems like the kind of misuse of personal information that they would be concerned about.

Android doesn't ask for the user's permission before accessing the address book, does it?

Android shows the permissions each app is requesting before you install, and even lets you know if they change their permissions between updates. While what Path did is crappy, they didn't subvert the Android permissions system.

The thing that burnt the poster is that while a social app asking for access to their contacts might not raise an eyebrow, the user has no way to know what they are going to do with that data without looking at the reviews or around the internet for complaints/testimonials.

"Android shows the permissions each app is requesting before you install"

Yes and no. Google often hides the most offensive permission requests under that "see more" arrow. And the permission requests (and accompanying explanations) are too vague and ambiguous. For example: Does "request access to network" mean they're able to sniff all my incoming/outgoing data, granting the app access to everything?

That, and the page is designed so most people will click the "Accept & Download" button without even reading the top-level permission requests.

It's got the title and button at the top, taking up a large chunk of space (1/3rd on my Nexus 4), and then a vague list of - to most users - technical-sounding "stuff".

My guess is a large majority of users never look past the button.

Congrats __chismc. Shortly after this post it appears they moved the "Install" link under the permission requests.

But Android also doesn't allow you to deny specific permissions. It's all or nothing at time of install - if it ever gets location, it always gets location. This is one of the reasons I like iOS.

This doesn't sound very difficult to verify.

Go to play.google.com. Search for path. First result in the app store. Click on Permissions.

"This application has access to the following:

... blah blah blah ...

This permission allows the app to use the camera at any time without your confirmation.

... blah blah blah ...

read your contacts Allows the app to read data about your contacts stored on your tablet

... blah blah blah ...

read call log Allows the app to read your tablet's call log, including data about incoming and outgoing calls.

... blah blah blah ...

Now users have been trained to click "yes" to all requests without even reading them, so you can get into philosophical arguments about whether they "really" have permission. Just like most users randomly click through "click-through licenses".

Read data about contacts doesn't sound unreasonable for an app like Path though. Facebook uses that permission to sync contacts if you want, and I don't see any problem with that. Unfortunately, reading the data means they can store it off-device, independently of the install state. That's a difficult problem to solve, but I don't think users should be expected to anticipate it as a consequence of that permission.

Exactly. WhatsApp wants permission to practically everything possible, because it offers various features on top of these permissions. Yet it has never spammed anyone, so far as I can tell.

They might not spam users, but they don't let you delete contacts. So in short: once they grab your contact list, it's theirs.

Can you actually prevent an app from using one or more of those permissions? Like can I give it permission to my camera but not to the call log?

There is no official support for this AFAIK. Some 3rd party ROMs like Cyanogenmod have this functionality built in, though, and if you root your phone there are apps like "Permissions Denied" that you can run to do this.

I'd assume the reason Google is somewhat hesitant to offer this officially is that many apps don't deal well with this -- some do degrade gracefully, while others end up throwing task-ending exceptions because the app code just never planned for not being able to do some task which requires permissions declared in the manifest.

There is a simple solution for managing permissions for poorly built apps: serve them empty or fake data.

Every app already has to consider the case of GPS being unavailable indoors, the contact list only having one person (yourself) in it, or the camera picture being black in darkness.

Yes, that is a very good idea.

I'm sure some apps will fail anyway because they just never expected a contact list of 0 entries, but the list should be much smaller in that situation (mostly limited to those who do virtually no QA).

Sounds like a huge time investment and you risk bricking your phone? Doesn't seem worth it.

I have always wondered why Google doesn't add this feature. Creeping up the ladder of permissions is a problem in Android, and the user's choice is all or nothing. This can become a bad choice: Add a permission, or lose access to the data an app is keeping for you.

It would be easy enough for developers to catch security exceptions that Google would likely see little or no developer fall-off from a requirement like this.

No. Only on some custom Android builds.

yes, android requires user permission to access any data on your phone.

It tells you the permissions requested by an app before you install it. For Path, see the "permissions" tab on this Play Store page https://play.google.com/store/apps/details?id=com.path&h...

Yes, you have to put this line in your app's manifest:

  <uses-permission android:name="android.permission.READ_CONTACTS"/>
and the installer will prompt the user for that permission when they install the app.

When you install an Android app it lists the permissions you grant the app by continuing with the install. Contact access is one of these permissions.

Path seems to do all the shady things with your data that we fear Google and Facebook could do. FB and G definitely push the boundaries of privacy/creepy sometimes, but Path seems to have no qualms about blowing right past them. I am staying away.

It reminds me of one of Facebook's early growth tactics - as part of the contact import process, they'd send out spam IMs to all your contacts saying you just joined Facebook and ask if they wanted to join. It was very shady.

The impact is also smaller, FB has 1 billion or so accounts and Google probably has about the same. Plus, they can track every move you make across the web, things you like, searches you make, what type of content you email, share etc.

I hope they just die (two strikes is enough for me!). If Path grows, more and more people will look under the hood.

And this is why I fundamentally don't trust my smartphone.

It's a fun device. But it's a spy, outside my control, in my pocket.

I've rooted it, but haven't yet modded it (and if anyone cares to point me at a gentle introduction for CyanogenMod or another option that works on an HTC Incredible, I'm all ears).

I've been reasonably conservative in what apps I place on my phone, and several (Pandora specifically comes to mind) were removed when permissions were extended to include contacts (Pandora, you listening?).

I'm waiting eagerly for the following capabilities:

To define at the phone level what information I'm willing to share. Existing "privacy controls" make a mockery of any semblance of either "privacy" or "control" by distributing vague and conflicting access among a great many applications with no ability to centrally audit them.

To specifically grant to specific applications specific rights. My location is something I'll disclose very guardedly (I disable GPS functions on my phone). Other rights generally shouldn't be shared.

To request and audit ALL information a given application has of me in a convenient electronic format (such as a database dump accessible via MySQL or PostgreSQL). Such functionality is of course a three-edged sword, as what information the vendor has and I wish to request, a third party might also request pretending to be me. Or having legal authority to make the request (though that's already the case), via subpoena or warrant.

My contacts list is off limits. Full stop. Specific contacts might be contacted by way of an application if specifically designated by me, but no other use may be made of their information. Hell, it's not even mine to give.

The existing state of smartphones is interesting, but it's also a little shop of horrors. And if application authors, smartphone manufacturers, and telecom providers don't get their act together on this Real Soon Now, we're going to see some horror stories.

A thousand times this. Why does the OS not make the app permissions more granular than a bit flag? Every app and its dog nowadays require all these really invasive data permissions, and all the choice we're given is either "yes to all" or GTFO. Google went to great efforts to design the concept of contact circles (something that could easily have prevented the OP's problem), why is Android still stuck in the dark ages of data sharing?

What I really wish there was in android is a way to disable permissions after installing an app. Obviously this probably won't make it into stock android, but I would love to be able to install an app like Pandora and then revoke specific privileges.

Then whenever the app attempted to use those revoked permissions, android would do something logical for certain cases (like providing an empty contacts list for the contacts permissions), or even just crash the app if it couldn't do anything else. I would totally be willing to accept a certain amount of instability for a feature like this.
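A toy model of what's being proposed, in plain Java (nothing here is a real Android API; the class and permission names are made up for illustration): the OS tracks per-app grants, and a revoked app is handed an empty contact list instead of a crash or the real data.

```java
import java.util.*;

// Toy sketch of per-app permission revocation: the "OS" mediates
// access, and a revoked permission yields empty data so well-behaved
// apps degrade gracefully instead of crashing.
public class PermissionModel {
    private final Map<String, Set<String>> granted = new HashMap<>();
    private final List<String> contacts =
        new ArrayList<>(Arrays.asList("Alice", "Bob"));

    public void grant(String app, String permission) {
        granted.computeIfAbsent(app, k -> new HashSet<>()).add(permission);
    }

    public void revoke(String app, String permission) {
        granted.getOrDefault(app, Collections.emptySet()).remove(permission);
    }

    // Apps read contacts only through this gatekeeper; an app without
    // the grant sees an empty list, indistinguishable from an empty
    // address book.
    public List<String> readContacts(String app) {
        if (granted.getOrDefault(app, Collections.emptySet())
                   .contains("READ_CONTACTS")) {
            return Collections.unmodifiableList(contacts);
        }
        return Collections.emptyList();
    }
}
```

The appeal of the empty-data approach is exactly what the comment says: apps already have to handle an empty contact list, so most would keep working without any changes.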

> "What I really wish there was in android is a way to disable permissions after installing an app."

You can, you can! Only Google went ahead and disabled it for you. The commands are "pm revoke x" and "pm grant y", but if you ever try it (even running as root), you'll get this message:

Operation not allowed: java.lang.SecurityException: Neither user [your uid] nor current process has android.permission.GRANT_REVOKE_PERMISSIONS

> "Then whenever the app attempted to use those revoked permissions, android would do something logical for certain cases (like providing an empty contacts list for the contacts permissions), or even just crash the app if it couldn't do anything else. I would totally be willing to accept a certain amount of instability for a feature like this."

Exactly! Same for me. If this made it into stock android, developers would be forced to put phone book access in a try{} block so that permission revoking doesn't crash the entire app. Your solution with returning an empty phone book sounds even better, but that's also more work so I don't know whether that'll ever make it... Then again, it's a much nicer solution, so who knows.
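The try{} block pattern mentioned above can be sketched without any Android dependencies. Here a hypothetical ContactSource interface stands in for the platform API; the point is only the defensive shape, treating a SecurityException from a revoked permission the same as an empty address book:

```java
import java.util.*;

// Defensive read: if the permission was revoked after install, behave
// as if the address book were empty rather than crashing the app.
public class DefensiveRead {
    // Stand-in for the platform contacts API (not a real Android type).
    public interface ContactSource {
        List<String> query(); // may throw SecurityException if revoked
    }

    public static List<String> readContactsSafely(ContactSource source) {
        try {
            return source.query();
        } catch (SecurityException denied) {
            // Permission revoked: fall back to an empty list.
            return Collections.emptyList();
        }
    }
}
```

With this pattern in place, either of the two OS behaviors discussed (throwing on access, or silently returning nothing) leaves the app running.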

Why on earth are people still using Path after it has become so very obvious a long time ago how unethical this company is?

This kind of behavior doesn't just go away after a bit of bad publicity or a few fines. It's part of the DNA of a company. Such a lack of ethics permeates everything from strategic decisions to technical choices to hiring.

Expect more of the same.

Remember when a while back they downloaded far too much information from each phone (for the convenience of connecting you to people)? Everyone was surprised (some outraged) and then they pushed an update to stop that "feature" and when the CEO/Boss man posted a blog entry apologizing, everyone forgave the company, people were holding hands singing Kumbaya.

Edit: Here's when they flubbed a year ago.


Edit2: Er... apparently, I suffered a seizure of some sort (and an aneurism and a stroke simultaneously). Reworded.

I just noticed, in that apology letter, the line:

"... Your trust matters to us and we want you to feel completely in control of your information on Path. ..."

So they want you to feel in control.

"We are deeply sorry if you were uncomfortable with how our application used your phone contacts."

That non-apology is corporate communications at its most typical.

Path's customer service replied to this article's author on Twitter saying they'd "love to engage."

If you aren't Captain Picard, you're not engaging anything. Shut up and talk human, folks.

I’m a big sucker for a non-apology apology, reading one never fails to make my day.

More here: http://terribleapologies.com/ and http://en.wikipedia.org/wiki/Non-apology_apology#Examples and http://jezebel.com/sorry-not-sorry-how-to-non-apologize-5993...

This kind of thing sits alongside "it is important that justice is seen to be done" in my list of phrases that will preclude you from being taken at face value ever again.

Kind of. In law, actual justice is obviously the top priority, but it's also important for social stability and the effectiveness of the courts that justice appear to be done. (Otherwise, people may not respect court decisions even when they are just, etc.) Luckily, these two properties are usually complementary. Given actual justice, the appearance of justice usually follows through transparency and the like.

Haha, yes! It's very important to feel you're in control, just like you want to feel like you're making a wise decision when you reach for the conveniently placed junk in the supermarket aisles.

I get the feeling that either those in charge are too hopelessly detached from society to see how privacy is perceived by the rest of us (like Zuckerberg), or there's little to no vetting when it comes to implementation decisions.

They just had to pay a $800,000 fine for that stunt http://www.ftc.gov/opa/2013/02/path.shtm

I think the HN traffic may have destroyed another WordPress install.

So, google cache: http://webcache.googleusercontent.com/search?q=cache%3Ahttp%...

For those with WordPress installs who want to survive an HN frontpage:

    1. if you don't have sudo, use the W3 Total Cache plugin http://wordpress.org/extend/plugins/w3-total-cache/
    2. if you have sudo:
        2a. the easy way: apt-get install memcached, add the pecl memcache extension, and use object-cache.php http://plugins.svn.wordpress.org/memcached/trunk/object-cache.php and batcache http://wordpress.org/extend/plugins/batcache/
        2b. the hard way: varnish https://www.varnish-cache.org/ https://github.com/pkhamre/wp-varnish

Using CloudFlare or Google Page Speed Service is also a good idea to further optimize!


  # /var/spool/cron/crontabs/apache
  */2 * * * * ( cd /var/www/htdocs && [ ! -e .mlan.lock ] && touch .mlan.lock && wget -q -O tmp.html http://www.mywebsite.com/blogs/my-long-article-name/ && mv -f tmp.html my-long-article-name.html && rm -f .mlan.lock )
Post HN story http://www.mywebsite.com/my-long-article-name.html and it'll get refreshed every 2 minutes. Pretty simple hack. (Edit: add lock file)

    rm -rf /var/www/wordpress

This is good information

I'll make sure to never install Path

There are some abuses that can't be solved by an apology.

Yeah, this just convinced me to never use Path too.

Great, we have a few more convinced not to use Path. One application will now have perhaps a few hundred fewer users. What about the rest of the permission-hungry apps? Whatsapp? Facebook? Any Google app? Any other big developer's app? What if the founders of Path just start a new company?

Not installing Path is not a solution here. You gotta look critically at the permissions an app uses and its terms of service (at least skip to the privacy-related clauses, though they usually try to hide and obfuscate them). If there is something you don't entirely trust, ask yourself why you really need that app. Perhaps it's an improvement for your life, but can't you live without it? You've gone without that app for how many years now? Is it worth giving up your phone book and all your sdcard contents?

You're correct, and it gives me the shivers to see an application like 'photo sharing' wanting several useless permissions

Here's what I would like: for Android to allow me to deny or ask for a confirmation for each permission of these.

Or ask upon usage. "Do you want to grant XYZ to view your contact list? [Yes, and don't ask again] [Just this time] [Never]"

