Hacker News
Uber CEO Plays with Fire (nytimes.com)
559 points by bmahmood on April 23, 2017 | 492 comments

Buried lede here:

"They spent much of their energy one-upping rivals like Lyft. Uber devoted teams to so-called competitive intelligence, purchasing data from an analytics service called Slice Intelligence. Using an email digest service it owns named Unroll.me, Slice collected its customers’ emailed Lyft receipts from their inboxes and sold the anonymized data to Uber. Uber used the data as a proxy for the health of Lyft’s business. (Lyft, too, operates a competitive intelligence team.)"

Wow I think this unroll.me thing is the real scandal here.

I am an unroll.me user, but had no idea they sell user data to companies this way.

Their whole value proposition is to help people control their own privacy, and now I kind of feel betrayed...

I worked for a company that nearly acquired unroll.me. At the time, which was over three years ago, they had kept a copy of every single email of yours that you sent or received while a part of their service. Those emails were kept in a series of poorly secured S3 buckets. A large part of Slice buying unroll.me was for access to those email archives. Specifically, they wanted to look for keyword trends and for receipts from online purchases.

The founders of unroll.me were pretty dishonest, which is a large part of why the company I worked for declined to purchase the company. As an example, one of the problems was how the founders had valued and then diluted equity shares that employees held. To make a long story short, there weren't any circumstances in which employees who held options or an equity stake would see any money.

I hope you weren't emailed any legal documents or passwords written in the clear.

Situations like this are what make it really hard for others in this space to survive. I run https://clean.email (and we don't store/retain/sell any data, we just charge people to use it) and the biggest issue we have is lack of trust because of news like this.

Although every day someone still emails with the question "why are you not free like unroll.me?"... sigh.

I understand that you don't retain user emails, and that's good, but do I understand correctly that your service has a database somewhere of OAuth bearer tokens that provide direct access to the email archives of everyone who has signed up? How do you protect that? I would be terrified.

Yes, that is correct. We actually started without keeping refresh tokens, only using access tokens – but they expire really fast and the Google API randomly stops accepting them, so we had to start keeping refresh tokens as well.

They are encrypted and can only be decrypted by "scan" and "action" (delete, trash, etc.) jobs. Job servers are not exposed to the outside and can only be accessed via the private network over SSH using access keys, and only from a specific node which has those keys. Keys are password protected. Access to that specific node is restricted to a set of known public IP addresses. Database and job servers are different servers, of course. Database servers are also only accessible within the private network.

The only thing that's publicly exposed is a load balancer. To access anything else we log in to the "gateway" instance, which we access by IP only; it does not have any domain name associated with it.

With all that – I am very open to ideas about protecting it further.
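The token-handling scheme described above boils down to key separation: the database stores only ciphertext, and the decryption key lives solely on the job servers. Below is a toy sketch of that idea – the names are hypothetical, the one-time pad is for illustration only, and a real system would use a vetted scheme such as AES-GCM or Fernet:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy one-time pad: XOR each byte of `data` with the key.
    return bytes(d ^ k for d, k in zip(data, key))

token = b"hypothetical-oauth-refresh-token"
key = secrets.token_bytes(len(token))   # kept only on the job servers

ciphertext = xor_bytes(token, key)      # what the tokens table stores

# Only a "scan"/"action" job holding the key can recover the token;
# a dump of the tokens table alone reveals nothing useful.
assert xor_bytes(ciphertext, key) == token
```

The point of the sketch is the separation, not the cipher: backing up the ciphertext table without the key (as the commenter describes below) keeps a stolen backup useless on its own.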

Encryption at rest? Backups and encryption thereof?

All job servers are stateless by design and easily disposable/replaceable with a fresh build, so we don't back them up. We don't back up user data either – it's deleted within 24 hours (or immediately on request). The only thing backed up is a table with refresh tokens, which are encrypted; the decryption keys are not backed up with it.

Well now you have an excellent value proposition you can point to for why you aren't free.

Yeah, I'm working on the website update right now to put ToS/policies front and center – "we can do a better job" of communicating our policies :)

> the biggest issue we have is lack of trust because of news like this.

This gives you something fundamental to compete on.

Could you explain the limits of the free plan? I'm interested in trying this out, but it's not clear what I'll get and if/when I'll be forced to pay. That said, I understand the value in paying for such a service instead of selling off all my data.

The free plan allows you to clean (remove, trash, label, etc.) 1,000 emails.

Thanks! That sounds pretty reasonable. It would be great to have that explained somewhere on the site.

Hmm, makes sense. I went ahead and added it under pricing. Thank you :)

Awesome :) Minor nit on the grammar: you're currently missing an article. It should be "Cleaning the first 1,000 emails is free!"

fixed that too. thank you again.

that's the thing about quick fixes lol :)

Interesting. I can't click your Terms of Use link. Would you happen to have a direct link handy?

We have them on the "about" page – https://clean.email/about – but we are actually working on a separate page right now. as I said above – we can and should do better putting our policies front and center.

It's kinda funny how the ~50 people who came from this thread to our service illustrated the point about the lack of trust – not a single person registered :)

I clicked, I read your value prop, I just can't see myself paying $95+ /year for less obnoxious email in my inbox. It's really not that big of a problem to me.

Ugh. I've had "Yearly pricing to the homepage" sitting in my to-do list for a few weeks :) So – there is yearly pricing, and it starts at $14.99/year (I know, this looks really weird, but it took us some time to get to this pricing).

Now, whether it's valuable enough to justify the price depends a lot on how you use your email. We've got users managing 3-5 accounts with hundreds of thousands of emails each, and they use our labeling/organization more than removal. Think of it as a way to act upon a group of emails, no matter what the size of the group is.

(And I kinda think our website is not really good at communicating this – our traffic is mostly coming from the Android app right now and we've been putting website work off. Who knew!)

Then why do you complain?

You plainly and simply ask for €8 per month per account.

That's simply a ridiculous amount of money for 99% of people. What you have but we can't see is part of the problem – it's not the trust, the price is just not worth it for what you offer. So don't complain about "not a single new customer from 50 clicks".

Hey, quick fix: just make Yearly the default option when the page loads, since the yearly options are the best price. Visitors to your site may just scroll through without clicking anything and only see the monthly prices (like I did).

This sounds like a great idea from a rational standpoint, but our data says otherwise. We saw a conversion increase – and generally more people started buying – when we enabled monthly prices, and again when we made them the default. I have a few theories to back this up, but generally speaking, pricing perception is emotional, not rational. Looking at our prices you'd assume no one buys monthly, but about 40-50% of people do :)

Yea I'm certainly not eager to sign up for another service like this after finding out that the last one I used sold my data. It's getting really tough to trust third party services with your data these days.

My point exactly. I was just discussing this with a friend – there's really no way for us to prove that we don't keep or sell the data we get access to (aside from clearer ToS/policies).

And it's even scarier with iCloud, for example – they don't have OAuth, and people need to enter their passwords to scan/clean. (They do have "app-specific" passwords, though, but it looks like people have a hard time figuring those out.)

Well, there is, but it's not cheap. You get a trusted third party to audit you and publish the result of their audit – something similar to a SAS 70.

It's not a perfect solution but it's an option to consider

Fair point – this is something we're considering doing before expanding to the b2b market. But:

My day job is in ecommerce (I work as a product manager at FastSpring) and I used to work on CleanMyMac at MacPaw – I had to work with trust in both. It's somewhat unexpected, but people who are buying software for themselves usually don't care about PCI compliance, audits, and other artifacts of "institutional validation". They care about a "Norton Secured" badge, proper language, a recommendation from a person they know, a review on a website they read, "that green thing with the lock in my browser"... We're now at the phase where we are trying to find the right combination.

Just to be clear – it's very different from project to project and depends on the audience. What I'm saying is that we're making decisions emotionally, mostly based on our prior experience, and we rely on an internal "thermometer" to tell us if what we're seeing is trustworthy.

When dealing with sites where high trust is required, I think people would much rather see an independent audit or compliance with a (legit) security accreditation than a Norton badge. However, most of the time this is not offered, so we make do with the crappy badge, a recommendation, or gut instinct.

Having said that, I deal with independent audits in my job, and they're not all that reassuring.

Pardon my ignorance – or perhaps it's just that I've become jaded – but outside of circumstances with dire/severe consequences such as laws, regulations, etc., how does an independent audit (legit accreditation or not) verify what happens after the audit is done and the auditors are long gone?

How does an independent audit detect out-of-band taps (swapping binaries, repurposing archives/backups, mirroring, etc.) on infrastructure the auditor wasn't monitoring before the audit? Logs? More importantly, amortized or not, the customer eventually pays for all this activity, which at the end of the day is more fluff than substance (in terms of what the customer can actually verify). In the end, doesn't all this come down to just another form of marketing?

Please note that I recognize there are many scenarios where an independent audit would add value. I just don't think it adds anything that social validation doesn't already add, when considered from the perspective of a consumer to whom the infrastructure behind the service is unavoidably opaque.

I don't see how that indicates a lack of trust. People may not be in the mood to change, or need to do more research before they do, especially since it is very late in the evening for the Western world.

Also, it's only been 30 minutes since your first post, and 50 is a small sample size.

That's just a joke – I was not really hoping to get users from here :) I was actually surprised that 50 even clicked the link.

You won't survive and you clearly don't understand how this business works.

That's so far outside of what is acceptable that it should be actionable in some way and I sincerely hope Google cuts them off at the knees. Aren't you breaking an NDA by posting this? (If so, extra kudos to you!)

I'll quote what I said elsewhere:

> I haven't been a part of that company for several years now, and did not have any legal agreements or first-party relationship with either of the companies named above, and since the deal has since closed with Slice it would be difficult for anyone to allege damages.

And if all this disappears, then yes, someone did attack me legally over it. I don't like the business culture that has built up around this kind of thing -- reputation is important, so let's defend it with lots of lawyers and NDAs, but it's too much effort to be up front about business practices that might give us a bad reputation. That's bullshit.

I totally agree. But, and this is a very big but: companies would no longer be open to potential acquisition partners during the due diligence phase of an acquisition if professionals in this space would talk publicly (or even at all) about what they find.

I'm seriously conflicted about this because I too have seen some extremely horrible stuff in the last couple of years, some of which I'm quite sure would rock the world orders of magnitude worse than what unroll.me has been up to and that was secured roughly in the same way (or maybe even worse) and with data best qualified as 'radioactive'. I do sign NDAs and I stick to them religiously but it is very hard at times to do that. Even so I understand that I'd make life miserable for those that employ me if I'd ever break an NDA.

Yep. Working in Systems as I do, my word that I'll keep my employer's secrets is pretty precious. Still, we share war stories over libations with our peers. These stories have value; they're how we know as a community what products to use and what employers to seek or to avoid. While I didn't intend for this to get quite the audience that it's getting, I will own up to having shared the story.

My boundary, and the legal boundary that NDAs (even despite what is written in them) are generally held to is "trade secrets." I would hope that everything in my post is three or more years out of date, and would no longer qualify as such.

We're in immoral waters here: 1) NDAs prevent most humans from getting closer to the truth. 2) Selected audiences (the drinking friends you share details with) know that companies x, y, and z are scammers and criminals, but most don't. 3) As a consequence, companies that are immoral and fronted by the most skilled marketing liars thrive too much.

I say not doing evil to the rest of mankind trumps protecting the evil few.

Leaking systems that work seems like the moral road?

Couldn't you leak anonymously?

I don't believe in that. For one, there is no such thing as anonymity to begin with, for another, I think if you do a thing like that you should stand by it.

Plus if you do it anonymously it's easy for the company to spin it as "hit pieces" which is what Uber does.

Is this answer on their FAQ an outright lie, then?


> we don't store any of your emails on our servers.

Either way, I just deleted my Unroll.me account and revoked access to my Gmail account. I don't think there's anything the company can do to ever get me back as a user.

I guess it's not a lie since they store it on Amazon's servers?

I'm not sure I can answer that in detail, or that it hasn't changed since the details were originally shared with me.

That might also be a case of an article that's written from the point of view of one feature ("what happens if I delete") and not what's going on under the hood. There are other references to deleting data stored with unroll.me – e.g., when you go through the delete steps you need to do it in a particular order so that data on their side is removed, as discussed in another comment thread.

They store them on Amazon's...

In terms only of capabilities, that makes me wonder a lot about Gmail. I don't see anything there that they couldn't do if they wished to do it on a far grander scale.

Granted, I tend to think the people who run Gmail are more honest than that, but if someday the wrong people retired and others took over or what have you, I wonder just how suddenly that could change?

Gmail doesn't need to sell data to anyone; they use it for the rest of the Google suite – google.com, AdSense, YouTube, DoubleClick, and all the other properties they own.

In fact, it would be a stupid idea for them to sell any of that data directly to a third party. Instead they package it in user-friendly (marketer/advertiser-friendly) ways to capitalize. Some of these are shady and I'm not a fan, but overall I think this approach is fine.

The problem happens when you sell user data to a 3rd party.

Here's an example: Let's say you start an email newsletter about travel. You get millions of subscribers. Then you start putting ads on your email. Maybe sometimes even send sponsored messages. This is kind of annoying but not "unethical".

On the other hand, the same company could take the whole email list and sell it to a bunch of travel agencies. Then all the million users who subscribed suddenly start receiving spam emails from these travel agencies. This is unethical because they literally "sold" your email address.

Of course this is more of an extreme example, but the pattern is the same.

> "Instead they package them in user friendly (marketer/advertiser friendly) ways to capitalize."

Yes, it is called Gmail Sponsored Promotions or "GSPs." Depending on the audience they can apparently be quite effective. [1]

[1] http://marketingland.com/gmail-sponsored-promotions-everythi...

Who knows? Maybe at one point people will realize the value of local mail storage and end to end cryptography.

There are so many factors involved that it becomes unreasonable quickly.

Your main problem is going to be getting everyone to use it. If you converted 25% of the people using email today to an end-to-end encryption system, they could either email only others within that 25%, or, any time they send or receive an email from the other 75%, it's not going to be encrypted the entire way.
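The network effect above can be put in back-of-envelope terms: if a fraction p of users have adopted end-to-end encryption and correspondents pair at random (a simplifying uniform-mixing assumption, not a claim about real email graphs), only p² of conversations are protected the entire way.

```python
def fully_encrypted_share(p: float) -> float:
    # Both endpoints must have adopted e2e encryption for the
    # conversation to be protected end to end: probability p * p.
    return p * p

for p in (0.25, 0.50, 0.75):
    print(f"{p:.0%} adoption -> {fully_encrypted_share(p):.2%} of random pairs")
```

At the 25% adoption figure from the comment, only about 6% of random sender-recipient pairs would be fully encrypted, which is why partial adoption buys so little.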

Do you only use one device?

Using multiple devices does not preclude one from using a server and end-to-end encryption.

The reason I ask is about private key movement. I'm curious how you share that across devices. It's the biggest issue in e2e encryption imo.

Just curious if you do anything novel there.

I don't do anything novel. I have my private key on three devices.

How do you get that private key onto each of your devices? This is a real issue for a lot of less technical users.

Unless they're using an HSM, that would most likely be a matter of just copying a file.

With an HSM, it would have to be marked as exportable (bad for security), or to happen via some proprietary HSM to HSM cloning method endorsed by the vendor.

That said, I don't see that much HSM usage outside of the government or their contractors.

Just so you know, you have been quoted in Gruber's article. Be careful about what you say on public forums, especially since your profile gives your contact information.

Thanks. I'll let it stand for now. I haven't been a part of that company for several years now, and did not have any legal agreements or first-party relationship with either of the companies named above, and since the deal has since closed with Slice it would be difficult for anyone to allege damages.

On top of that, it should be very clear that everything I said is hearsay at best. If I had known the attention this would receive, I would have been clearer about it.

No, but you did just point a whole pile of nasties at a very juicy and poorly secured target.

I would hope that Slice secured it after the acquisition.

You can't be sure of that.

I wonder how expensive it would be to keep full text of all modern multi-gig mailboxes anyway.

Tech has become a cesspool of slimy founders and unbridled capitalism – this needs to stop for the greater good.

In 20 years I haven't seen a time where this wasn't the case.

Seed money is the first to take the risk and deserves the majority share of profit

VCs (series A, B, C) are putting in the most money and bringing big hitters to the board and advisors. They clearly deserve the majority share.

Founders do all the work and it's their idea so they deserve all the money.

The second-generation leaders productize, operationalize, and bring legitimacy to the company, so they deserve all the money.

Whichever group has the leverage forces the table to tilt their direction.

It doesn't matter how good the potential is, how sure the victory is, how close the first breakthrough customer is, if you don't trust someone, or there's a slimy/smarmy vibe then just walk away. It's not worth putting in years of effort to have to resort to contract lawyers to get paid.

It boils down to this: Capitalism is not the goal of society, it is a tool.

Somewhere in the past few decades we've conflated the two – and a larger portion of our population believes that capitalism is the goal. It's not; it's a way to achieve our goals. It is efficient, it is effective, and it will always have little to no morals and consolidate in the hands of the few. That is not a judgement of the system; it is an assessment. No different than stating a hammer will work well with nails and poorly with screws.

We as a world society (and particularly as an American society) need to refocus on what our goals for society are, and actively decide when to use and when to rein in specific tools to achieve those goals.

Absent a focus on goals, our tools become our goals, and we get the results we're seeing today.

As sympathetic as I am to anti-capitalist rabble-rousing, your comment comes off as a canned micro-rant which doesn't relate in any substantial way to the parent.

Your comment is unnecessarily dismissive and inaccurate.

I'll avoid getting into an internet argument, and just leave this quote here.

> Tech has become a cesspool of slimy founders and unbridled capitalism – this needs to stop for the greater good.

This was the thread the comment was posted in and it's entirely the topic of discussion.

In the future try to choose more productive ways of describing people's views and engaging in discussion than as 'rabble-rousing' 'rants'.

What has changed is speed and scale.

If you want to understand what the implications are you need to spend time not with technologists but with ecologists. In nature there is a reason the apex predator doesn't evolve predatory advantages at a faster rate than its prey evolves defensive advantages. These rates grow or shrink in lockstep depending on resource availability. If they don't the ecosystem collapses.

I agree, but it won't until the bubble pops, and even that won't sort out the monopolists. As long as it's centered around VCs with more money than sense or morals, tech is going to continue unabated in its transformation into Wall Street 2.0

There is a phrase which I live by - If it's free, you're the product.

What bothers me is that this phrase becomes a thought-terminating cliche.

Too often, it's used to shut down all conversations around corporate malfeasance re:privacy, so the industry doesn't get better, we all just move on to the next big story. And victims are blamed and shamed. "Your fault for using a free service, what did you expect?" vs. "This is unacceptable behavior, let's force a change."

Not to mention, so many don't understand free vs. non-free. Are there ads? Are there optional purchases that keep the company going? As someone else mentioned here, unroll.me showed ads, which would lead users to believe their usage was being subsidized by those ads – and Slice's About page on its website says nothing about using unroll.me as a data source; it claims to use its own shopping app.

That's all well and good - and many people realise and accept this - but the degree to which you're the product can clearly vary wildly. That's the real issue.

Of course it varies. It's up to you to decide/deduce/infer what the "cost" of using a free service likely is. Understanding that there IS a cost is really the first step, one which a good portion of the population seems to not understand.

To be fair, Unroll.me shows you ads in their emails, so a user may think that's how they monetize and be OK with that. It's another thing altogether when the company sells all of your email data directly to buyers.

I wish more people thought like that. With everything being free, it's really hard to actually charge people for something.

Well, this site is free...what's the catch?

This is clearly and explicitly content marketing that attempts to fill YCombinator's venture capital deal funnel.

We come for startup and tech news. YC is a startup and tech funding company that needs to have its portfolio reach us as their customers/employees/investors/etc.

we're definitely being watched and analyzed :)

Thanks for the heads up. If anyone else wants to delete their account you can follow these instructions: https://unrollme.zendesk.com/hc/en-us/articles/200165526-How...

... This is why I treat equity and ipo as monopoly money.

It's ok that we pay you a sub-market salary because you get great IPO and equity upside.

Yeaaaah no.

It's comparative to the "espresso machine in the office" perk.

When you bump into your colleagues in the morning you have an extra talking point.

With all due respect, you were a consultant for 8 months at Returnpath and know nothing about why that deal didn't happen, and you certainly weren't important enough there to know anything about the equity structure of the company. Returnpath is also a data company that buys companies for data collected from services provided to the user. Ask Josh Baer. That's why they bought his company.

Also spreading unfounded rumors about data storage practices you know zero about is really irresponsible.

Suggestion: delete Unroll.me account, fill reasons with "Other", and then "Privacy! https://news.ycombinator.com/item?id=14180463"

You might have been and possibly still are under an NDA from the acquisition process. I'm not sure it is worthwhile detailing all of this in a public forum.

And if they signed up to try Unroll.me they might have violated it already!

Probably (and hopefully) only a matter of time until someone starts to work on an Open Source version of Unroll.me.

This is a good idea. I'd love a version of UnRoll.me that I could host on my own servers.

Til then, I wrote a script that'll let you unsub from everything at once before you close your unroll.me account.


So, should I change my email address because all of my emails were read and archived by Unroll.me on their servers?

AND I can tell you as a co-founder of the company this is 100% false.

Am I correct in thinking this is class-action lawsuit-able?

Slice (unroll.me) is owned by Rakuten, who also owns a whole family of companies including Ebates, Rakuten Marketing (a fairly large adtech company), Viber, Buy.com, and lots of others. Slice's data has all sorts of interesting applications, both within the Rakuten family and to third parties like Uber. If you're a Slice/unroll.me user, I'd bet that a lot of your online experience is shaped by the data you share with Slice. A lot of Rakuten's other properties actually share office space with Slice in San Mateo, so there's obviously plenty of opportunity for collaboration. :-)

The irony is that Rakuten also owns a significant (12 percent?) stake in Lyft. Pretty funny that one Rakuten property was selling data to Uber who used it to hurt a second Rakuten property.

From their own site: "Unroll.Me is a free service", emphasis added.

If it's free for you, you're the product.

>If it's free for you, you're the product.

So hypothetically, if you paid $5 a month for this service, you would be confident they were NOT selling your information?

Since the answer is obviously no (and in fact, purchasing behavior is the juiciest stuff to sell, be it Comcast, Target, etc), then this tired trope is meaningless.

> So hypothetically, if you paid $5 a month for this service, you would be confident they were NOT selling your information?

No. A statement's truth doesn't imply its inverse [1].

It simply means if a company has employees, and they're receiving paychecks, and the money is not coming from you, then it's definitely coming from someone else.

If you are giving them money, it doesn't mean they're not also getting money from somewhere else. But they're less likely to need to do that, especially if it would upset their paying customers.

[1] https://en.wikipedia.org/wiki/Denying_the_antecedent
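The fallacy linked above can be checked mechanically. The following sketch enumerates all truth-value assignments and shows that "free → you're the product" (F → P) holds while "not free → not the product" fails in exactly one case:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

# Assignments where "free -> product" (F -> P) is true but the
# inverse "not free -> not product" is false (denying the antecedent).
counterexamples = [
    (free, prod)
    for free, prod in product([True, False], repeat=2)
    if implies(free, prod) and not implies(not free, not prod)
]
print(counterexamples)  # -> [(False, True)]
```

The lone counterexample, `(False, True)`, is exactly the case under discussion: a service you pay for that still treats you as the product.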

This is implying that you're able to prove that all free products sell/profit from your data.

I think the idea is to help you think about the free services that you use and your privacy, not be a perfect way to describe every free service.

If X then Y doesn't imply anything about not-X, but that doesn't mean it's less useful for describing X.

It's moot anyway because the answer is out there publicly and no heuristic is needed to resolve it.

Whether I'm paying for a service is orthogonal to what its ToS allows them to do with my data. When I'm not paying for it, you can be pretty confident the ToS explicitly allows it, with even money on its being phrased in abstrusely arcane legalese. They have to make money somehow...

But if you aren't paying then they must be making money elsewhere.

In the UK, O2 has a line in their terms and conditions that pretty much says they're allowed to sell your data to third parties.

Now it's obvious why I've got people calling me asking about my recent loan that may have had PPI, or that traumatic car crash I was in a few years ago – even though I've never had either in my life. It's my bloody carrier!

If you use Google, shop in a supermarket, use a mobile phone, have the internet, your data is being collected and, in most cases, being sold to "chosen partners for marketing purposes". Everyone's doing it because they can make money from it.

While it's generally true, I'm not sure that's really an excuse for deceptive behavior.

The saying isn't meant to be an excuse for companies, it's a reminder to end users to pay attention what they're signing up for.

Situations like this one are exactly why the saying became popular.

Like I said below, the most obvious expectation was that they would monetize with ads, which is why people didn't think twice about this.

There's a difference between ad supported businesses and business that actually directly sell user data behind the scenes.

Equating all ad-supported businesses with this case is not really fair, because the types of businesses you're talking about here are not actually literally selling you out. They are simply pushing ads at you on THEIR platform, which YOU agreed to use. Sure, there are lots of shady things going on in this department as well, but it's a completely different game than what this looks like.

Based on this article it looks like they took your data and actually sold it to a third party, this is different from simply displaying ads on their platform. They literally sold you. And it happened OFF of the platform you signed up for.

You're describing my assumptions exactly! I incorrectly assumed that the occasional ads for Dashlane, HelloFresh, etc. in the "Daily Rollup" were how they monetized their service. But hey, turns out they're straight up selling my emails to third-parties. I've deleted my account and revoked their access on my Google account.

It's trite, and aside from belonging in the big dustbin of Hacker News cliches like linking to XKCD Standards, 'Just because you can doesn't mean you should' and 'Conflating Causation and Correlation' in this case it obscures more than it illuminates.

A product may be free - and you may still be happy to be the product if you think your attention is being sold, or that they plan to upsell you onto a premium plan.

'If you're not a paying user' doesn't immediately lead you to 'They're going to scan my email and sell the data to fucking Uber' and shouldn't require the user to scan the ToS / rack their brains for every nefarious bit of fuckery the company might conceivably use the data for.

How is it nefarious for Uber to use anonymized data to come to conclusions like:

1. Emails with subject [xxx] are opened more often than emails with subject [yyy].

2. Lyft does more business in Scottsdale than we expected.

3. 25% of people who use ridesharing use multiple services and 75% are loyal to one service.

4. 33% of people who don't use ridesharing services also don't use traditional taxi services.

"Nefarious" is a strong word.

> 'If you're not a paying user' doesn't immediately lead you to 'They're going to scan my email and sell the data to fucking Uber' and shouldn't require the user to scan the ToS / rack their brains for every nefarious bit of fuckery the company might conceivably use the data for.

I agree with the first half of what you said and disagree with the second. That's precisely the value of the original saying: it's not there to be an excuse for a company, it's there to spread awareness and remind people that they should get informed about just _how_ they're being productized _before_ they have cause to regret using the service.

If more of us scanned the ToS carefully, we might catch the nefarious bits of fuckery in time and pressure the company to change.

Does the privacy policy say your data will be sold to third parties? Great, find out what data, to what third parties and questions like that.

Since the privacy policy explains in clear English that your data will be sold it would be more deceptive if they didn't sell your data.

And since no one actually reads the privacy policy, users likely expect that the only data they're retaining is about email subscriptions, and that it will only be used internally and not sold to 3rd parties.

It's still unethical.

If someone has strict privacy needs, why wouldn't they read the privacy policy? Read it or don't read it, it's your choice. But if you choose not to, don't complain that the terms you agreed to don't work the way you arbitrarily expected them to. That's on you, it's not unethical that the terms don't work the way you want them to.

So you've read every TOS for every iTunes, or Android, or Windows update, ever? And if not, you'd be fine with handing over the keys for your house if one of them had a sentence in there that transferred it to Apple/Google/MS?

If you take out a loan from a loan shark, you should probably know what to expect. That doesn't mean that the loan shark isn't breaking the law when you get roughed up for non-payment.

Also, this is not really the loan shark situation.

People who go to a loan shark know exactly what they're getting into: borrowing money.

People who signed up for unroll.me signed up because they got sick of all the spammers and wanted an easy way to get away from all that.

Most people, including me, thought they would somehow monetize with ads or something like that, but never thought they would sell our info to third parties like this. So no, it wasn't at all obvious what to expect.

It would be an interesting challenge to popularize free services as loan-sharking your personal information.

You have to be naive to give a service access to a trove of your private data and expect them to just leave it there...

Are we really at the point where people are being called naive for trusting a company to act ethically? It's no wonder people outside of tech hate us.

As a general rule, if you give an organization any kind of advantage over you, sooner or later someone in that organization will abuse it.

The saying "power corrupts" is more correctly expressed as "power attracts the corruptible".

Here's the link to revoke perms: https://myaccount.google.com/permissions

I just disconnected from one or two services which had access to my gmail (reasonably so).

Thanks. Why the fuck does the SwiftKey keyboard need access to all my emails?

The innocuous, benefit-of-the-doubt reason would be to improve their prediction/autocorrect.

I'm sure it's buried somewhere in the Terms of Use. Contemporary terms of use are a classic dark pattern. If one wanted to set up a useful ML project: take Terms of Use as input, spit out the highlights and the buried ledes, such as user data being sold.

Wasn't there a project similar to that featured on here at one point? I don't think it used sophisticated machine learning or anything, but it used basic keyword search to produce "plain English" summaries of ToS. I remember it being somebody's hobby project; I can't seem to find it now.

Here it is: https://tosdr.org.

This is why I didn't and never will trust third-party products/services getting access to my inbox and reading my emails wholesale in order to mine data or provide value.

Paribus is another such service I'm aware of that requires you to open up inbox access to it (the pull model).

There is nothing wrong with reading your emails with your explicit consent, but I believe a push model like TripIt/Kayak's previous one (send email receipts to trips@tripit/kayak.com) is a safer way to keep your privacy from being violated and abused.

I was a little creeped out when--long after I deauthorized the app and enabled 2FA on my Gmail account--I got an email from them saying "We've found 141 new subscriptions". I wonder if that was just marketing spam, or if they have a weird way of accessing my email still.

I believe Google lets you view all apps with access to your account. Check that, change your password, and you should be good.

Oh, that's what I meant by "deauthorized the app". I removed it on the Google side, and shortly after even got an email from Unroll.me saying that it no longer had access. So I was surprised to see an email a couple months later saying they'd found more stuff to unsubscribe from. It could have been a bug or just a message they sent to everyone who'd unplugged their email, but was quite jarring.

What do you guys think of Mixmax? Heard of people liking their service for making appointments within Gmail, and e.g. receiving read receipts. From their privacy policy:

> [...] Mixmax may securely access or store your name, your Gmail email address, your Gmail emails and other conversations, and your Gmail contact list [...] We may anonymize your Personal Information so that you are not individually identified, and provide that information to our partners.

https://mixmax.com/privacy.html https://mixpanel.com/privacy/

I agree, the unroll.me thing is a whole other story here that should be dug into deeper.

Seems to be a recurring theme for services that scrape your email inbox. For example, see https://context.io which is a popular service for building these kinds of apps. They clearly state that the free version is funded by collecting anonymized data from the end users' email inboxes.

Just another reminder that nothing is free =)

Hahaha, welcome to the future ...

Me too. I'm pissed.

Yes, thanks. I just deactivated unroll.me and deleted my account. Make sure you change your email password and deny them access from your email provider as well.

"kind of"?

> Their whole value proposition is to help people control their own privacy and now I kind of feel betrayed..

Really? I never got that. While I'm also a little unsettled by the selling data part, the value prop was always pretty clearly simplifying unsubscribing en masse, which necessarily involved handing over access to the contents of your emails.

BTW: unroll.me's parent company Slice has the exact same business model.

Did you pay money for unroll.me? Did you read the entirety of the TOU/EULA? How can you be so naive? If you're not the customer you're the product.

This is nonsense. What prevents you being the product even if you pay? Companies can still sell your data.

If unroll.me had made it clear they were making their money by selling every one of your emails in plaintext, they'd never have signed anyone up.

> Wow I think this unroll.me thing is the real scandal here.

How is it a scandal? Competitive intel is a normal and widespread thing and it's not a scandal that people don't read a privacy policy that says, "We may collect, use, transfer, sell, and disclose non-personal information for any purpose."

> it's not a scandal that people don't read a privacy policy that says, "We may collect, use, transfer, sell, and disclose non-personal information for any purpose."

That companies are allowed to have such a non-privacy policy damned well should be. The number of privacy policies I'd have to read on a daily basis to function on the Internet is ludicrous and it is only because of my job working for companies like that that I know that those privacy policies exist in the first place.

This jumped out at me, too. Here's how Unroll.me describes their service:

  Clean up your inbox
  Instantly see a list of all your subscription emails. 
  Unsubscribe easily from whatever you don’t want.
It's not clear at all that they are reading people's emails and selling the data.

What you read on the public site is the USP for the users that install the unroll.me app. They are not the economic buyers, the ones who pay for the service. Those are the people that buy the user's aggregate data. They get in-person pitches on confidential slides, because if their USP leaked to the public, nobody would install and use the app.

"We may collect, use, transfer, sell, and disclose non-personal information for any purpose. For example, when you use our services, we may collect data from and about the “commercial electronic mail messages” and “transactional or relationship messages” (as such terms are defined in the CAN-SPAM Act (15 U.S.C. 7702 et. seq.) that are sent to your email accounts.

We may collect and use your commercial transactional messages and associated data to build anonymous market research products and services with trusted business partners. If we combine non-personal information with personal information, the combined information will be treated as personal information for as long as it remains combined.

Aggregated data is considered non-personal information for the purposes of this Privacy Notice."


To be sure, CAN-SPAM was not aimed at end users so much as defining what they can complain about, which is not much, and the law is toothless besides.

"if their USP leaked to the public"

That just happened, at last.

Slide deck 1 of 44: "Learn how your competitors are doing."

"It's not clear at all that they are reading people's emails and selling the data"

They don't think it's any of the customer's business to know that, and it's counter to the company's interests to publicize it.

Noticed this as well...

Here's an open source Google Script that lets you easily unsubscribe from bulk emails in Gmail. Source on Github. https://www.labnol.org/internet/gmail-unsubscribe/28806/

I had revoked unroll.me's permissions right after, via Google's connected apps & sites. Amusingly, in order to delete your unroll.me account[1] you have to give them the very same permissions again.

[1]: https://unrollme.zendesk.com/hc/en-us/articles/200165526-How...

If they sell your receipt data to Uber, who else might they sell it to? An insurance company? A hedge fund? Your bank?


It would be much bigger news if they were selling data that could be used against the consumers on an individual basis.

"Hmm, 90% of this anonymous guy's rides start/end at this residential address, which the public property deed says is registered to Mr. John Smith. I wonder who this anonymous guy could be."

...and 10% of John's rides are to an address whose only tenant is a drug addiction rehab clinic, but he apparently doesn't work there because 50% of his rides are from a tech company's headquarters to his home late at night. So it looks like John may have a little bit of a drug problem.

You can find out a lot of personal information about someone by tracking where they go on a regular basis.
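As a toy illustration of how little it takes, here is a hedged Python sketch of the inference described above. The trips, addresses, and the property-record lookup are all made up:

```python
from collections import Counter

# Hypothetical anonymized trips for one rider ID: (start, end) addresses.
trips = [
    ("482 Office Park", "12 Elm St"),
    ("482 Office Park", "12 Elm St"),
    ("12 Elm St", "Rehab Clinic, 9 Oak Ave"),
    ("482 Office Park", "12 Elm St"),
]

# Made-up public property records: address -> registered owner.
deeds = {"12 Elm St": "John Smith"}

# The most common trip endpoint is a good guess at "home".
home, _ = Counter(end for _, end in trips).most_common(1)[0]

# Join against public records and the "anonymous" rider has a name.
owner = deeds.get(home)  # -> "John Smith"
```

One frequency count plus one public dataset is all the de-anonymization required here.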

Yep, I once plotted my Google location history and played this game looking at my "hot spots". Even if you work at a large company and don't go to special places, it's very easy to gather insight by looking, for instance, at where a person goes on certain dates: he spends Christmas in this village, he stays home on Valentine's Day, etc.

Given sufficient data, nothing's really anonymous. That's a bullshit hedge.

Especially if, e.g., Facebook notification emails count as transactional. Group memberships alone massively narrow things down. Throw in a few business notifications and it's game over.

Even without that angle, I find it absolutely scandalous that a company is able to do this, even with the T&Cs permission of their users. Surely at some point this is going to bite them in the behind? The possibilities of it going massively wrong seem endless. Then again - common sense and the law rarely seem to intersect.

The problem is likely suable entities. I'd imagine there are a fair number of corporate cut-outs in shady schemes like this, with the expectation that if there are ever lawsuits, a sacrificial faux-company goes bankrupt and absorbs the damages.

Is there an actual law that would prohibit them from selling un-anonymized emails or do we just have to trust the company's sense for ethical behavior?

They would never know who "your" bank is. Reading a few paragraphs near the top of the privacy policy could have cleared up the initial misunderstanding about the nature of the service and also any FUD about how the data is used.

Presumably your bank is the one sending you statement emails...

A friend recommended unroll.me once. I carefully read their ToS, and it was clear that they get full read access to your whole Gmail inbox.

Now we know what they are doing with it, and how. I hope they get shut down in the aftermath of this story.

Why on earth would a service get shut down for allowing consenting adults to opt into an agreement that allows anonymized data to be shared in a compliant way?

They wouldn't 'get shut down' for it, but if the service was transparent about what they were doing people would likely unsubscribe/cease to sign up. It really depends if anyone is prepared to take them to task on it.

> 'consenting adults'

Are you consenting if you don't know what you're consenting to?

> anonymized data

Widely understood that anonymized data is bullshit, which is why the term "de-anonymized" exists. It's especially bullshit when there are no 'standards' as to what constitutes it.

> shared in a compliant way

What does that even mean?

Deciding not to read the terms of service isn't the same as being in a situation without transparency.

The beef industry would also collapse if all the meat-eaters who thought killing animals wasn't ideal in principle had to watch the cow get killed before every meal.

A couple years ago, I almost signed up for unroll.me but I stopped myself. It didn't seem worth it to give some service full access to my gmail account, when I could just spend a few minutes unsubscribing manually.

https://www.slice.com/ This Slice?

That headline "Your inbox security is our top priority" sure seems hilarious now. Also, I wonder why the article refers to it as Slice Intelligence? Is that their official company name on the paperwork? Is it to portray this shadowy intelligence gathering agency for hire? Both?

If anyone is looking to kick unroll.me, but still has a problem w/ massive subscription piles, I wrote a quick tutorial on Medium about how to unsub from everything at once w/ their product (so you can delete it once & for all):


> lede

It may be of interest to you, but it isn't the lede of the story. The lede is a pattern of behavior by the CEO, of which that is one element.


OP is implying the "real story" (i.e. the most important part of the article) is not what the article begins with

I agree. I'm saying that the OP missed the "real story" and is focusing on one detail of it.

We're all aware of his pattern of behavior, and hope to one day see the death of Uber. However, it's not new news. That's why it's not the true lede.

From: Me To: support@slice.com

Please delete my account and all my data.

Every company with large resources does this. NYT gains more clicks and views with this angle.

I used to recommend unroll.me to people but eventually found that relentlessly unsubscribing from things worked better anyway. I assumed they were making money from ads, but should have known better.

I have a few simple rules I follow to hit inbox zero...works quite well:

1. Unsubscribe Relentlessly

2. Use Keyboard Shortcuts or Gestures

3. Snooze Important Emails

4. Use a To-Do App

(I wrote it up in more detail here: https://shift.infinite.red/how-i-achieve-inbox-zero-every-da...)

From the article, explained:

At the time, Uber was dealing with widespread account fraud in places like China, where tricksters bought stolen iPhones that were erased of their memory and resold. Some Uber drivers there would then create dozens of fake email addresses to sign up for new Uber rider accounts attached to each phone, and request rides from those phones, which they would then accept. Since Uber was handing out incentives to drivers to take more rides, the drivers could earn more money this way.

To halt the activity, Uber engineers assigned a persistent identity to iPhones with a small piece of code, a practice called “fingerprinting.” Uber could then identify an iPhone and prevent itself from being fooled even after the device was erased of its contents.

This really doesn't match up with how the conversation/outrage is playing out on Twitter right now. People seem to be interpreting this as "Uber continues to track your location after you have deleted the app," when what really happened seemed to be "If you delete Uber and then reinstall it on the same phone, Uber knows that it's the same phone."

See for example this Tweet, with hundreds of retweets and lots of verified replies:


"This is like a holy trinity of privacy disaster: 1) secret tracking that 2) persists after users delete app 3) in knowing violation of rules"

Uber was tracking people after they left their rides, and it's unclear whether they ever stopped.


OK, but that's different from tracking people after they've deleted the app.

I'm genuinely curious how that would even work on the technical level. As an app developer, I'm not making the connection here as to how iOS would even allow that.

Edit: Read up a bit more on it. Turns out it was the practice of fingerprinting and tracking after re-installs, not after an uninstall. TechCrunch provided a better technical description: https://techcrunch.com/2017/04/23/uber-responds-to-report-th...

To me it seems like this is mischaracterized to make it sound worse than it is. Can someone explain why people are making a big deal about this practice?

It's because it's fashionable to beat the dead horse that Uber is a terrible company led by a terrible man. I personally am no fan of Uber or Travis, but I do get disgusted sometimes when the media hypes certain perceptions to an inappropriate degree.

So by all means, continue to investigate the seemingly terrible and anti-women culture and the fraudulent theft of technology from Google. But like you said, don't mischaracterize other facts to make them sound more terrible than they really are.

Because it violated their agreement with Apple and accessed private APIs, infringed on user privacy, and they geofenced the behavior to try to sneak it past app review. It's another example of Uber knowingly being evil.

Well for one, it's against Apple's rules to do this. And just because it's "industry practice" doesn't mean you get a free pass.

That's true, but there's a bit of live by the sword, die by the sword here. Uber is no stranger to the power of propaganda with their campaigns on the sharing economy, unions, regulations, etc.

Because we all expect journalistic integrity from Uber. Uber provides a service people need. Journalism is blogspam garbage.

You think that Uber is a service that people need? This sounds like exactly the kind of company that should be held to a high standard.

If we were to judge companies solely through a moral lens, sure.

But in the direct interest of capitalism, and indirectly consumers, it's terrible to restrict important businesses.

Important businesses are the ones that need restrictions the most. "Too big to fail" doesn't work, for society or consumers.

But how do you judge Uber if you can't trust journalistic integrity? The very people you trust to think for you are unqualified, which leads me to believe your opinion is as well. This seems to be the problem with fake news.

You're not addressing what I'm saying. What I said has nothing to do with journalism

Is there a specific journalistic integrity issue going on with this article?

Nope. He's just being a contrarian clown.

"A lie will go round the world while truth is pulling its boots on."

C.H. Spurgeon, Gems from Spurgeon (1859)

Well Churchill, of all people would know it pretty well.

One of the risks of refusing to talk about services is that you're at the mercy of games of telephone. People describe what you're doing to journalists who carefully dilute anecdotes and details to obscure their exact source. Then you don't comment directly on the system because it's a secret, and here you are.

The problem for Uber is that they /are/ scummy. They proudly bend every possible rule to their advantage. It's easy to believe the worst about them.

Jesus. That guy is a Reuters "reporter" as well. Stuff like this is what encourages the concept of 'fake news'.

The concept of fake news rose from one guy making shit up for ad money. The whole ordeal about how fake news is this sinister plot to disinform is a disinformation effort in itself. There's a big difference between reporting while misinformed and deliberately disinforming to harvest views.

What I'm saying is that misinformed reporting causes people not to trust the news at all, and lets cynical people (Trump) label all journalism as fake news.

I think it takes a certain kind of individual to trust everything they read at face value without getting multiple points of view on an issue and researching their own facts. Not saying we shouldn't hold reporters accountable to higher standards, but we live in a time where everything is rushed and pumped out as "content" for ad revenue.

Ah, I get it now. The author took a shortcut.

Basically, they created a unique 'fingerprint' of the iPhone. It was unique enough that even if you reinstalled the app, the fingerprint would still be the same. This was done, ostensibly, to prevent people from scamming them by reinstalling the app and coming back as new users. But they already have the phone number, so I don't understand the point.

Phone numbers can be trivially changed.

In the article this is in the context of fraudsters buying used phones to fake rides in China and take advantage of incentive programs to make money. So they want to track these devices as they change hands.

Phones can change ownership too, so even if you can identify the phone it is dangerous to assume that you have identified a person.

If you can buy a new phone for $5 and the referral bonus is $20, there's $15 arbitrage to be made just by buying up as many phones as you can.

So I think their goal is not so much about identifying people, but identifying devices in order to prevent this loophole.
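The arbitrage here is simple arithmetic. A toy Python sketch using the made-up numbers from this thread (it ignores resale value, ride costs, and detection risk):

```python
def referral_profit(phone_cost: float, bonus: float, n_phones: int) -> float:
    """Profit from claiming a new-user bonus on each cheap phone.

    Toy model: each phone yields one fake "new" account and one bonus.
    """
    return (bonus - phone_cost) * n_phones

# $5 per used phone, $20 referral bonus, 100 phones:
profit = referral_profit(5, 20, 100)  # -> 1500
```

Which is why blocking by device identity, not just by account or phone number, is what closes the loophole.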

But can you buy (even a stolen) iPhone for USD 5?

The question is whether you can buy it and sell it again at less than a $20 loss.

Yup, no question. I'm not claiming Uber's approach is flawless, I'm just describing what I believe they were doing and why.

> "to be clear, a # of companies practice 'fingerprinting,' and it is fully breaking the App Store rules. But also very clever fraud detection."


Wonder why Mike Isaac gave that clarification immediately to his Twitter base but didn't put it in the relevant section?

Could it be because the article intentionally glosses over complex details in order to pump a specific narrative... hmm. \s

Sorry to burst the 'specific narrative' bubble, but it's probably not that. Just look at the change:

Here's the change in question: http://newsdiffs.org/diff/1383350/1383404/https%3A/www.nytim...

changing "tracking" to "identifying and tagging" and changing "even after its app had been deleted from the devices, violating Apple's..." to "even after its app had been deleted and the devices erased — a fraud detection maneuver that violated Apple's..."

In a really long article like this, which is probably under some time pressure to publish, there are almost always things that seem clear to the author but aren't to the reader. This is a standard clarification bug fix, and the tweets came over an hour after the article was published, enough time to gather feedback and realize the need for clarification.

At least in this instance, the only specific narrative being pumped is the one that journalists are always pumping a specific narrative on touchy subjects.

The tweet responses:

> @MikeIsaac 32 minutes ago > Since the line about fingerprinting is being misinterpreted(though it is explained later in piece) adding language up top to better explain.

> @MikeIsaac 31 minutes ago > appreciate Technical community's concerns about how It is presented. Uber was not tracking location after device wipe (which I never said).

> @dangillmor 30 minutes ago > What exactly were they tracking? Not entirely clear (at least to me).

> @MikeIsaac 29 minutes ago > ID-ing devices. so if I steal a phone and wipe it, they can still determine I had that phone and used it to defraud uber, using other data

That's a clever media hack. Using provocative headlines and misleading lead to get clicks and shares, but using a separate medium (Twitter) to get away with it.

Clever, but it's disappointing that even NYT is turning into this madness.

They've also updated the article text now: "To halt the activity, Uber engineers assigned a persistent identity to iPhones with a small piece of code, a practice called “fingerprinting.” Uber could then identify an iPhone and prevent itself from being fooled even after the device was erased of its contents."

Note that this was at least 4 hours after the outrage on Twitter started. Seems like a very intentional, well-calculated strategy indeed.

> Note that this was at least 4 hours after the outrage on Twitter started. Seems like a very intentional, well-calculated strategy indeed.

That comment seems a bit disingenuous. I.e., it's entirely possible it takes a journo 20 seconds to post a correction to a Twitter account he/she controls and 4 hours/days/weeks to get his/her editors to sign off on the same correction and push the change to the news website.

Large news sites like the NYT have editing procedures and internal hoops to go through. This isn't just joe shmo's blog that is updated at a whim. I've written freelance articles with editing periods of months, you can imagine that it's a lot harder when it's news.

Kind of reminds me of the "motte and bailey"[0]. The misleading but technically accurate claim gets all the play and all the reaction. The author goes on Twitter and says "golly gee I didn't mean for you take it like that, all I really meant was [much weaker claim that wouldn't have gotten all this attention in the first place]."

The correction bounces around but never takes hold the way the initial claim does and people quietly go on believing their initial interpretation. Sad.

[0] http://rationalwiki.org/wiki/Motte_and_bailey

It's wrong to assume malice.

100% sure that all decent banking apps use device fingerprinting. 100% sure that it is not breaking the rules, and that it is really important that they keep doing it.

While you're right that a lot of FinTech applications do use fingerprinting, it is absolutely against the rules. It's rather annoying from a mobile security perspective but given the rampant abuse of persistent device identifiers on Android, I understand and appreciate Apple's stance here.

> While you're right that a lot of FinTech applications do use fingerprinting

Do they really? [Citation needed] very much here. Which fintech app fingerprints devices? What would even be the point of doing that. You can persist a token in the keychain for that which is enough unless you are devious.

Why would a banking app need to use device fingerprinting?

Perhaps to identify a device from which a fraudulent transaction occurred in the past?

Because you don't want any phone in the world to be allowed to access any bank account in the world by just giving a name and password.

Fingerprinting is a form of 2 factor authentication, it's easy to perform and it's relatively efficient against fraud.

If they're doing this on iOS, which is where it's interesting (in that it violates Apple's policies), they have a perfectly good 2-factor solution already present -- your finger.

As if the used-phone market doesn't exist.

Your fingerprint is never sent to the app.

The first time you use an app, you enter your username and password, and that is stored in the Secure Enclave, which not even the operating system has access to.

When the banking app requests validation, you use your fingerprint to authenticate and the Secure Enclave sends the username and password to the app. The fingerprint scanner is connected directly to the Secure Enclave.

When you sell your phone, you go through the process of erasing it; the encryption key is destroyed and your fingerprint is no longer valid.

Instagram does it on Android. They place a small file called .profig.os on external storage that is left there even if the app is deleted.

External? So change the SD card and it's gone?

Near the bottom of the article:

To halt the activity, Uber engineers assigned a persistent identity to iPhones with a small piece of code, a practice called “fingerprinting.” Uber could then identify an iPhone and prevent itself from being fooled even after the device was erased of its contents.

I also interpreted "track" as "report geolocation data," but that's not what the reporter means, and honestly the reporter's meaning is more consistent with, e.g., "this website is tracking users" or "Do-Not-Track".

What goes around comes around. Reminds me of that time when Uber would issue throwaway credit cards and burner iPhones to people, so that they would order and cancel Lyft rides...

>" Some Uber drivers there would then create dozens of fake email addresses to sign up for new Uber rider accounts attached to each phone, and request rides from those phones, which they would then accept. Since Uber was handing out incentives to drivers to take more rides, the drivers could earn more money this way."

Could someone explain the logic behind how a driver requesting rides benefited them? Did the drivers fake the ride and pay for it themselves? Was there a cash incentive where they were reaping enough to offset paying for the fake rides themselves and still profit handsomely? Is that correct?

Yes, in the earlier days in each city, they (just pulling numbers from thin air) do something like pay a minimum $20 for each trip if you complete 5 trips within an hour without cancellation. Helps to kickstart the driver supply.

Thanks for the explanations.

Interesting and somewhat ironic to think that Uber had to put countermeasures in place against drivers engaging in their own questionable version of "growth hacking."

Give a referral code to your other phone. That phone now has a credit for its first Uber ride or first 20 dollars, something like that. Then take the ride on your driver phone. You get paid from the second phone's credit but don't spend any money yourself.

I think they're referring to Keychain items surviving an app deletion. That quietly stopped working in a recent iOS update.

It was in one of the 10.3 betas but was removed. I don't think it can be deleted reliably without losing data if iCloud keychain is enabled, e.g. another device might still have the same app or share the app group.

Was the keychain item surviving app deletion a feature that was dropped or an undocumented feature?

If it "quietly stopped working" I'd have to assume the latter.

Thanks. I just remember that when I stored items in the keychain as the documentation recommended, then deleted the app off an actual device for testing purposes and reinstalled it on that same device, all those items would still be there. So I (wrongly, in hindsight) assumed it was the behavior Apple desired, since otherwise other developers would have complained.

I believe that is the desired behavior with the Keychain.

Pretty sure this is about persisting data after the entire device has been wiped. Not just the app removed and re-installed.

Could be via other apps using Uber's native SDK writing to a common key store, à la Google sharing auth between apps on iOS.

Such key stores are wiped on erase. They more likely used the wifi MAC address.

Keychain items used to persist and be shared; this was a very recent change in iOS to "fix" that.

It can be backed up to iCloud but in the context of scammers they're likely not using the same iCloud over and over.

Is it against the rules to track via the MAC address?

The Wi-Fi MAC address is obscured from native code.

It didn't use to be, and it's unclear what the exact timing of this code is.

Apple made the standard API return garbage in iOS 6, and calling the API would probably trigger an analysis error in iTunes Connect, so if they were getting the MAC address it was via much sneakier means.

Can't one simply change the MAC address, like you can on a PC?

Maybe if you're jailbroken? But if you're not, then no way.

I believe I tried long long ago on iOS and couldn't get it to work, but I don't know if I'm remembering correctly.

I have heard that it is quite possible. Supposedly, if you measure enough characteristics of the phone, the combination of those characteristics is enough to uniquely identify it. You may also be able to pool the information you measure with measurements taken by other apps.

There are quite a few SKUs, but not enough to make the phone unique by itself. You'd ultimately need something more special to do so, such as the MAC address.

It's not clear whether such code would work on the latest iOS version, but maybe. They probably used a private API, obfuscated in the compiled binary so that Apple's automatic analysis would fail to catch it.

> They probably used a private API to do so

My understanding was that Apple made those APIs return garbage anyway, so more hacky methods were required.

No, they fingerprint the phone like browser fingerprinting.

Unique settings, apps installed etc.

Very hard to have a non-unique setup with enough data points.

No? Did you work on this code and know for sure?

Let's look at roughly what's available:

iPhone model (2 orders of magnitude of possibilities)

Device storage -- adds entropy on top of the iPhone model, but still not much

Device name -- easily changeable by scammer, so not enough

iOS version -- changes over time, not great for a long term fingerprint but might help short term

IP address -- OK for short-term attribution, but not against scammers. People in China very often have multiple SIMs, so even relying on the carrier isn't enough

Cell phone carrier -- same as above

Other apps installed -- as of iOS 9 you have to pre-declare what you want to be able to query, and that's subject to App Store review. It does give a fair bit of entropy, but it can change at any moment. And if they're wiping the device constantly, they might not be installing any apps at all.

In advertising / web, you want to attribute across sites / installs on a short time basis. You have plugins and their unique version numbers, OS versions and all their attributes, browser versions, fonts installed, etc. Way more variation than iPhones.

Defeating scammers who erase their phones constantly is actually much harder, and likely needs something a bit more unique.
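One way to make the "orders of magnitude" framing above concrete: each roughly independent signal contributes entropy in bits, and the bits add. The cardinalities below are illustrative guesses, not real measurements:

```python
import math

# Rough, illustrative cardinalities for the signals listed above.
signals = {
    "iphone_model": 100,   # ~2 orders of magnitude of possibilities
    "storage_tier": 4,
    "ios_version": 10,
    "carrier": 50,
}

# Independent signals: entropies (log2 of cardinality) simply add.
bits = sum(math.log2(n) for n in signals.values())
print(f"combined entropy ~= {bits:.1f} bits")

# Distinguishing ~1e9 devices needs about log2(1e9) ~= 30 bits,
# so these coarse signals alone fall well short of uniqueness.
print(f"needed for 1e9 devices ~= {math.log2(1e9):.1f} bits")
```

Under these assumptions the listed attributes sum to under 20 bits, which is why the comment argues something more unique is needed against determined scammers.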

Each variable you mentioned is not unique, but put them all together and now you're talking. Then apply a heavy dose of statistics and machine learning on top of THAT.

Especially since the behavior/activity of the phone could be suspicious as well.

You also don't need to be 100% accurate all the time. The point is to minimize the damage done to you by scammers, not reduce it to zero, which is impossible.

Doubt it. You can't use those methods to identify users if the phones are just being used to mine new accounts.

I'd assume the miners just wipe a phone clean, reinstall Uber & create a new account.

There are no special settings and no variability in installed apps in that case.
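That objection can be shown with a toy fingerprint: two freshly wiped phones of the same model and iOS version produce identical attribute sets and therefore collide. The hashing scheme here is a naive stand-in, not Uber's actual method:

```python
import hashlib
import json

def fingerprint(attrs: dict) -> str:
    """Naive device fingerprint: hash a canonical view of the attributes."""
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two scammer phones, both factory-wiped before creating new accounts.
wiped_a = {"model": "iPhone7,2", "ios": "10.3", "apps": [], "name": "iPhone"}
wiped_b = {"model": "iPhone7,2", "ios": "10.3", "apps": [], "name": "iPhone"}

# Identical fresh state => identical fingerprint, so this kind of
# fingerprinting can't tell the two "new accounts" apart.
print(fingerprint(wiped_a) == fingerprint(wiped_b))  # True
```

Which is why persistent hardware identifiers (or something surviving a wipe, like the keychain behavior discussed upthread) matter so much more than soft attributes in this scenario.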

>request rides from those phones, which they would then accept.

Riders and drivers are randomly assigned; I'm not sure you can choose your driver. Not sure whether all these drivers in China formed a huge group to benefit each other.

Having looked through Uber's iOS app pretty extensively - that fingerprint methodology is easy to bypass.

It's an APP that has been on your phone at one point. An APP by a company that is peak (maybe "valley" would be better) SV when it comes to ethical standards. Uber probably has a fingerprinting system on par with Google's or Facebook's, and maybe one that puts them to shame (or paints them in a better light).

I imagine they wouldn't have much difficulty tracking you through ad tech, another APP, or some other Cult of Free system that is willing to sell database access.
