"They spent much of their energy one-upping rivals like Lyft. Uber devoted teams to so-called competitive intelligence, purchasing data from an analytics service called Slice Intelligence. Using an email digest service it owns named Unroll.me, Slice collected its customers’ emailed Lyft receipts from their inboxes and sold the anonymized data to Uber. Uber used the data as a proxy for the health of Lyft’s business. (Lyft, too, operates a competitive intelligence team.)"
I am an unroll.me user, but had no idea they sell user data to companies this way.
Their whole value proposition is to help people control their own privacy, and now I kind of feel betrayed.
The founders of unroll.me were pretty dishonest, which is a large part of why the company I worked for declined to purchase the company. As an example, one of the problems was how the founders had valued and then diluted equity shares that employees held. To make a long story short, there weren't any circumstances in which employees who held options or an equity stake would see any money.
I hope you weren't emailed any legal documents or passwords written in the clear.
although every day someone would still email with the question "why aren't you free like unroll.me"... sigh.
they are encrypted and can only be decrypted by "scan" and "action" (delete, trash, etc.) jobs. the job servers are not exposed to the outside and can only be accessed over the private network via ssh using access keys, and only from a specific node which holds those keys. the keys are password protected, and access to that specific node is restricted to a set of known public IP addresses. the database and job servers are separate machines of course, and the database servers are likewise only accessible within the private network.
the only thing that's publicly exposed is a load balancer.
to access anything else we log in to the "gateway" instance which we access by ip only and it does not have any domain name associated with it.
with all that – I am very open to ideas about protecting that further.
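one way to make the access pattern described above (a single gateway node in front of the private network) explicit and auditable is to encode it in the SSH client config. the host names, IPs, and key paths below are made-up placeholders, just to illustrate the shape:

```
# ~/.ssh/config -- hypothetical bastion setup (names/IPs are placeholders)
Host gateway
    HostName 203.0.113.10             # reached by IP only, no DNS name
    User ops
    IdentityFile ~/.ssh/gateway_key   # passphrase-protected key

Host job-* db-*
    User ops
    ProxyJump gateway                 # internal hosts are only reachable via the bastion
    IdentityFile ~/.ssh/internal_key
```

with a setup like this, every path into the job and database servers funnels through one node, which is also the natural place to log sessions and enforce the source-IP allowlist.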
This gives you something fundamental to compete on.
now, whether it's valuable enough to justify the price depends a lot on how you use your email. we've got users managing 3-5 accounts with hundreds of thousands of emails each, and they use our labeling/organization more than removal. think of it as a way to act upon a group of emails no matter the size of the group.
(and I kinda think our website is not really good at communicating this – our traffic mostly comes from the Android app right now and we've been putting website work off. who knew!)
You plainly and simply ask for €8 per month per account.
That's simply a ridiculous amount of money for 99% of people. What you have but we can't see is part of the problem, not trust; the price is just not worth what you offer. So don't complain about "not a single new customer from 50 clicks".
and it's even scarier with iCloud, for example – they don't have OAuth, so people need to enter their passwords to scan/clean. (they do have "app-specific" passwords, but it looks like people have a hard time figuring those out.)
It's not a perfect solution but it's an option to consider
my day job is in ecommerce (I work as a product manager at FastSpring) and I used to work on CleanMyMac at MacPaw – had to work with trust in both. it's somewhat unexpected but people who are buying software for themselves usually don't care about PCI compliance, audits, and other artifacts of "institutional validation". they care about a "norton secured" badge, proper language, recommendation from a person they know, a review at the website they read, "that green thing with the lock in my browser".. we're now at the phase where we are trying to find the right combination.
just to be clear – it's very different from project to project and depends on the audience. what I'm saying is that we're making decisions mostly emotionally, based on our prior experience, and relying on an internal "thermometer" to tell us whether what we're seeing is trustworthy.
Having said that, I deal with independent audits in my job, and they're not all that reassuring.
How does an independent audit detect out-of-band taps (swapped binaries, repurposed archives/backups, mirroring, etc.) on infrastructure the auditor wasn't monitoring before the audit? Logs? More importantly, amortized or not, the customer eventually pays for all this activity, which at the end of the day is more fluff than substance in terms of what the customer can actually verify.
In the end, doesn't all this come down to just another form of marketing?
Please note that I recognize there are many scenarios where an independent audit would add value. I just don't think it adds anything that social validation doesn't already add, when considered from the perspective of a consumer to whom the infrastructure behind the service is unavoidably opaque.
Also, it's only been 30 minutes since your first post, and 50 is a small sample size.
> I haven't been a part of that company for several years now, and did not have any legal agreements or first party relationship with either of the companies named above, and since the deal closed since with Slice it would be difficult for anyone to allege damages.
And if all this disappears, then yes, someone did attack me legally over it. I don't like the business culture that has built up around this kind of thing -- reputation is important, so let's defend it with lots of lawyers and NDAs, but it's too much effort to be up front about business practices that might give us a bad reputation. That's bullshit.
I'm seriously conflicted about this because I too have seen some extremely horrible stuff in the last couple of years, some of which I'm quite sure would rock the world orders of magnitude worse than what unroll.me has been up to and that was secured roughly in the same way (or maybe even worse) and with data best qualified as 'radioactive'. I do sign NDAs and I stick to them religiously but it is very hard at times to do that. Even so I understand that I'd make life miserable for those that employ me if I'd ever break an NDA.
My boundary, and the legal boundary that NDAs (even despite what is written in them) are generally held to is "trade secrets." I would hope that everything in my post is three or more years out of date, and would no longer qualify as such.
I say not doing evil to the rest of mankind trumps protecting the evil few.
Leaking systems that work seems like the moral road?
> we don't store any of your emails on our servers.
Either way, I just deleted my Unroll.me account and revoked access to my Gmail account. I don't think there's anything the company can do to ever get me back as a user.
That might also be a case of an article that's written from the point of view of one feature ("what happens if I delete") and not what's going on under the hood. There are other references to deleting data stored with unroll.me, e.g. when you go through the delete steps you need to do them in a particular order so that data on their side is removed, as discussed in another comment thread.
Granted, I tend to think the people who run Gmail are more honest than that, but if someday the wrong people retired and others took over or what have you, I wonder just how suddenly that could change?
In fact it would be a stupid idea for them to sell any of that data directly to a third party. Instead they package it in user-friendly (marketer/advertiser-friendly) ways to capitalize on it. Some of these are shady and I'm not a fan, but overall I think this approach is fine.
The problem happens when you sell user data to a 3rd party.
Here's an example: Let's say you start an email newsletter about travel. You get millions of subscribers. Then you start putting ads on your email. Maybe sometimes even send sponsored messages. This is kind of annoying but not "unethical".
On the other hand, the same company could take the entire email list and sell it to a bunch of travel agencies. Then all the million users who subscribed suddenly start receiving spam emails from these travel agencies. This is unethical because they literally "sold" your email address.
Of course this is more of an extreme example, but the pattern is the same.
Yes, it is called Gmail Sponsored Promotions or "GSPs." Depending on the audience they can apparently be quite effective. 
Your main problem is going to be getting everyone to use it. If you converted 25% of the people using email today to an end-to-end encryption system, those users could either email only others within that 25%, or any email sent to or received from the other 75% wouldn't be encrypted the whole way.
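The adoption math above is worth making concrete. Assuming senders and receivers are drawn independently from the same population (a simplifying assumption, not a claim about real mail graphs), the fraction of conversations that can be fully end-to-end encrypted shrinks quadratically with adoption:

```python
def e2e_coverage(adoption: float) -> float:
    """Fraction of conversations where both parties use E2E encryption,
    assuming sender and receiver are independent draws from the
    same population."""
    return adoption * adoption

# With 25% adoption, only ~6% of conversations are fully protected.
print(e2e_coverage(0.25))  # 0.0625
```

This is why end-to-end email schemes face such a steep network-effect hill: even substantial adoption leaves most traffic unprotected on at least one leg.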
Just curious if you do anything novel there.
With an HSM, it would have to be marked as exportable (bad for security), or to happen via some proprietary HSM to HSM cloning method endorsed by the vendor.
That said, I don't see that much HSM usage outside of the government or their contractors.
On top of that, it should be very clear that everything I said is hearsay at best. If I had known the attention this would receive, I would have been clearer about it.
Seed money is the first to take the risk and deserves the majority share of profit
VCs (series A, B, C) are putting in the most money and bringing big hitters to the board and as advisors. They clearly deserve the majority share.
Founders do all the work and it's their idea so they deserve all the money.
The second-generation leaders productize, operationalize, and bring legitimacy to the company, so they deserve all the money.
Whichever group has the leverage forces the table to tilt their direction.
It doesn't matter how good the potential is, how sure the victory is, how close the first breakthrough customer is, if you don't trust someone, or there's a slimy/smarmy vibe then just walk away. It's not worth putting in years of effort to have to resort to contract lawyers to get paid.
Somewhere in the past few decades we've conflated the two, and a large portion of our population believes that capitalism is the goal. It's not; it's a way to achieve our goals. It is efficient, it is effective, and it will always have little to no morals and consolidate in the hands of the few. That is not a judgement of the system, it is an assessment. No different from stating that a hammer will work well with nails and poorly with screws.
We as a world society (and particularly as an American society) need to refocus on what our goals for society are, and actively decide when to use and when to rein in specific tools to achieve those goals.
Absent a focus on goals, our tools become our goals, and we get the results we're seeing today.
I'll avoid getting into an internet argument, and just leave this quote here.
> tech has become a cesspool of slimy founders + and unbridled capitalism - this needs to stop for the greater good
This was the thread the comment was posted in and it's entirely the topic of discussion.
In the future try to choose more productive ways of describing people's views and engaging in discussion than as 'rabble-rousing' 'rants'.
If you want to understand what the implications are you need to spend time not with technologists but with ecologists. In nature there is a reason the apex predator doesn't evolve predatory advantages at a faster rate than its prey evolves defensive advantages. These rates grow or shrink in lockstep depending on resource availability. If they don't the ecosystem collapses.
Too often, it's used to shut down all conversations around corporate malfeasance re:privacy, so the industry doesn't get better, we all just move on to the next big story. And victims are blamed and shamed. "Your fault for using a free service, what did you expect?" vs. "This is unacceptable behavior, let's force a change."
Not to mention so many don't understand free vs. non-free. Are there ads? Are there optional purchases that keep the company going? As someone else mentioned here, unroll.me showed ads, which would lead users to believe their usage was being subsidized by those ads - and Slice's About page on its web site says nothing about using unroll.me as a data source, it claims to use its own shopping app.
It's ok that we pay you a sub-market salary because you get a great IPO and equity.
When you bump into your colleagues in the morning you have an extra talking point.
Also spreading unfounded rumors about data storage practices you know zero about is really irresponsible.
The irony is that Rakuten also owns a significant (12 percent?) stake in Lyft. Pretty funny that one Rakuten property was selling data to Uber who used it to hurt a second Rakuten property.
If it's free for you, you're the product.
So hypothetically, if you paid $5 a month for this service, you would be confident they were NOT selling your information?
Since the answer is obviously no (and in fact, purchasing behavior is the juiciest stuff to sell, be it Comcast, Target, etc), then this tired trope is meaningless.
No. A statement's truth doesn't imply its inverse.
It simply means if a company has employees, and they're receiving paychecks, and the money is not coming from you, then it's definitely coming from someone else.
If you are giving them money, it doesn't mean they're not also getting money from somewhere else. But they're less likely to need to do that, especially if it would upset their paying customers.
It's moot anyway because the answer is out there publicly and no heuristic is needed to resolve it.
Now it's obvious why I've got people calling me asking about my recent loan that may have had PPI, or that traumatic car crash I was in a few years ago, even though neither has ever happened to me. It's my bloody carrier!
If you use Google, shop in a supermarket, use a mobile phone, have the internet, your data is being collected and, in most cases, being sold to "chosen partners for marketing purposes". Everyone's doing it because they can make money from it.
Situations like this one are exactly why the saying became popular.
There's a difference between ad supported businesses and business that actually directly sell user data behind the scenes.
Equating all ad-supported businesses with this case is not really fair, because the types of businesses you're talking about here are not literally selling you out. They are simply pushing ads at you on THEIR platform, which YOU agreed to use. Sure, there are lots of shady things going on in this department as well, but it's a completely different game from what this looks like.
Based on this article, it looks like they took your data and actually sold it to a third party, which is different from simply displaying ads on their platform. They literally sold you. And it happened OFF the platform you signed up for.
A product may be free - and you may still be happy to be the product if you think your attention is being sold, or that they plan to upsell you onto a premium plan.
'If you're not a paying user' doesn't immediately lead you to 'They're going to scan my email and sell the data to fucking Uber' and shouldn't require the user to scan the ToS / rack their brains for every nefarious bit of fuckery the company might conceivably use the data for.
1. Emails with subject [xxx] are opened more often than emails with subject [yyy].
2. Lyft does more business in Scottsdale than we expected.
3. 25% of people who use ridesharing use multiple services and 75% are loyal to one service.
4. 33% of people who don't use ridesharing services also don't use traditional taxi services.
"Nefarious" is a strong word.
I agree with the first half of what you said and disagree with the second. That's precisely the value of the original saying: it's not there to be an excuse for a company, it's there to spread awareness and remind people that they should get informed about just _how_ they're being productized _before_ they have cause to regret using the service.
If more of us scanned the ToS carefully, we might catch the nefarious bits of fuckery on time and pressure the company to change.
It's still unethical.
People who go to a loan shark know exactly what they're getting into: borrowing money.
People who signed up for unroll.me did so because they got sick of all the spammers and wanted an easy way to get away from all that.
Most people, including me, thought they would somehow monetize with ads or something like that, but never thought they would sell our info to third parties like this. So no, it wasn't at all obvious what to expect.
The saying "power corrupts" is more correctly expressed as "power attracts the corruptible".
I just disconnected from one or two services which had access to my gmail (reasonably so).
Paribus is another such service I'm aware of that requires you to open up your inbox access to them (the pull model).
There is nothing wrong with reading your emails with your explicit consent, but I believe a push model like TripIt/Kayak's previous one (send email receipts to trips@tripit/kayak.com) is a safer way to avoid having your privacy violated and abused.
> [...] Mixmax may securely access or store your name, your Gmail email address, your Gmail emails and other conversations, and your Gmail contact list [...] We may anonymize your Personal Information so that you are not individually identified, and provide that information to our partners.
Just another reminder that nothing is free =)
Really? I never got that. While I'm also a little unsettled by the selling data part, the value prop was always pretty clearly simplifying unsubscribing en masse, which necessarily involved handing over access to the contents of your emails.
If unrollme made it clear they were making their money by selling every one of your emails in plaintext they'd never have signed up anyone.
Clean up your inbox
Instantly see a list of all your subscription emails.
Unsubscribe easily from whatever you don’t want.
We may collect and use your commercial transactional messages and associated data to build anonymous market research products and services with trusted business partners. If we combine non-personal information with personal information, the combined information will be treated as personal information for as long as it remains combined.
Aggregated data is considered non-personal information for the purposes of this Privacy Notice."
That just happened, at last.
Slide deck 1 of 44: "Learn how your competitors are doing."
They don't think it's any of the customer's business to know that, and it's counter to the company's interests to publicize it.
It would be much bigger news if they were selling data that could be used against the consumers on an individual basis.
You can find out a lot of personal information about someone by tracking where they go on a regular basis.
Even without that angle, I find it absolutely scandalous that a company is able to do this, even with the T&Cs permission of their users. Surely at some point this is going to bite them in the behind? The possibilities of it going massively wrong seem endless. Then again - common sense and the law rarely seem to intersect.
Now we know what they are doing and how. I hope they get shut down in the aftermath of this story.
> 'consenting adults'
Are you consenting if you don't know what you're consenting to?
> anonymized data
Widely understood that anonymized data is bullshit, which is why the term "de-anonymized" exists. It's especially bullshit when there are no 'standards' as to what constitutes it.
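A toy sketch of why "anonymized" is such a weak promise: two datasets that each look anonymous on their own can often be joined on quasi-identifiers to re-identify rows. Everything below (names, fields, values) is fabricated purely for illustration:

```python
# "Anonymized" purchase records: no names, just a hash plus
# a couple of innocuous-looking attributes.
anonymized_receipts = [
    {"user_hash": "a1", "city": "Scottsdale", "signup_week": "2017-W03", "rides": 12},
    {"user_hash": "b2", "city": "Oakland",    "signup_week": "2017-W05", "rides": 3},
]

# A second, "harmless" dataset (e.g. scraped public profiles)
# that happens to share the same quasi-identifiers.
public_profiles = [
    {"name": "Alice", "city": "Scottsdale", "signup_week": "2017-W03"},
    {"name": "Bob",   "city": "Oakland",    "signup_week": "2017-W05"},
]

def reidentify(receipts, profiles):
    """Join the two datasets on (city, signup_week) to attach
    names back to the supposedly anonymous hashes."""
    index = {(p["city"], p["signup_week"]): p["name"] for p in profiles}
    return {r["user_hash"]: index.get((r["city"], r["signup_week"]))
            for r in receipts}

print(reidentify(anonymized_receipts, public_profiles))
# {'a1': 'Alice', 'b2': 'Bob'}
```

Real re-identification attacks use richer quasi-identifiers (ZIP code, birthdate, gender famously suffice to uniquely identify most Americans), but the mechanics are exactly this kind of join.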
> shared in a compliant way
What does that even mean?
The beef industry would also collapse if all the meat-eaters who thought killing animals wasn't ideal in principle had to watch the cow get killed before every meal.
That headline "Your inbox security is our top priority" sure seems hilarious now. Also, I wonder why the article refers to it as Slice Intelligence? Is that their official company name on the paperwork? Is it to portray this shadowy intelligence gathering agency for hire? Both?
It may be of interest to you, but it isn't the lede of the story. The lede is a pattern of behavior by the CEO, of which that is one element.
OP is implying the "real story" (i.e. the most important part of the article) is not what the article begins with
Please delete my account and all my data.
I have a few simple rules I follow to hit inbox zero...works quite well:
1. Unsubscribe Relentlessly
2. Use Keyboard Shortcuts or Gestures
3. Snooze Important Emails
4. Use a To-Do App
(I wrote it up in more detail here: https://shift.infinite.red/how-i-achieve-inbox-zero-every-da...)
At the time, Uber was dealing with widespread account fraud in places like China, where tricksters bought stolen iPhones that were erased of their memory and resold. Some Uber drivers there would then create dozens of fake email addresses to sign up for new Uber rider accounts attached to each phone, and request rides from those phones, which they would then accept. Since Uber was handing out incentives to drivers to take more rides, the drivers could earn more money this way.
To halt the activity, Uber engineers assigned a persistent identity to iPhones with a small piece of code, a practice called “fingerprinting.” Uber could then identify an iPhone and prevent itself from being fooled even after the device was erased of its contents.
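The article doesn't say how Uber's fingerprint was computed, but the general technique is just hashing a bundle of device attributes into one stable identifier. A minimal sketch, with entirely hypothetical attribute names; the controversial part in Uber's case was choosing attributes that survive a full device erase:

```python
import hashlib
import json

def device_fingerprint(attrs: dict) -> str:
    """Hash a set of device attributes into one short identifier.

    The attribute keys here are hypothetical stand-ins; a real
    fingerprint would use whatever stable signals the platform
    exposes (or, as alleged here, signals that persist across
    an erase).
    """
    canonical = json.dumps(attrs, sort_keys=True)  # stable ordering
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

fp = device_fingerprint({
    "model": "iPhone6,2",
    "storage_gb": 64,
    "carrier": "ExampleTel",
})
# The same attributes always yield the same identifier, even after
# the app is deleted and reinstalled.
```

The flip side, discussed further down the thread, is that the attributes have to be both stable and high-entropy, which is harder than it sounds on relatively homogeneous hardware like iPhones.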
See for example this Tweet, with hundreds of retweets and lots of verified replies:
"This is like a holy trinity of privacy disaster: 1) secret tracking that 2) persists after users delete app 3) in knowing violation of rules"
Edit: Read up a bit more on it. Turns out it was the practice of fingerprinting and tracking after re-installs, not after an uninstall. TechCrunch provided a better technical description: https://techcrunch.com/2017/04/23/uber-responds-to-report-th...
So by all means continue to investigate the seemingly terrible and anti-women culture and the alleged theft of technology from Google. But like you said, don't mischaracterize other facts to make them sound more terrible than they really are.
But in the direct interest of capitalism, and indirectly consumers, it's terrible to restrict important businesses.
C.H. Spurgeon, Gems from Spurgeon (1859)
The problem for Uber is that they /are/ scummy. They proudly bend every possible rule to their advantage. It's easy to believe the worst about them.
Basically, they created a unique 'fingerprint' of the iPhone. It was unique enough that even if you reinstalled the app, the fingerprint would still be the same. This was done, ostensibly, to prevent people from scamming them by reinstalling the app and coming over as new users? But they already have the phone number, so I don't understand the point.
In the article this is in the context of fraudsters buying used phones to fake rides in China and take advantage of incentive programs to make money. So they want to track these devices as they change hands.
So I think their goal is not so much about identifying people, but identifying devices in order to prevent this loophole.
could it be because the article intentionally glosses over complex details in order to pump a specific narrative ... hmm. \s
Here's the change in question:
changing "tracking" to "identifying and tagging" and changing "even after its app had been deleted from the devices, violating Apple's..." to "even after its app had been deleted and the devices erased — a fraud detection maneuver that violated Apple's..."
In a really long article like this, which is probably under some time pressure to publish, there are almost always things that seem clear to the author but aren't to the reader. This is a standard clarification bug fix, and the tweets came over an hour after the article was published – enough time to gather feedback and realize the need for clarification.
At least in this instance, the only specific narrative being pumped is the one that journalists are always pumping a specific narrative on touchy subjects.
The tweet responses:
> @MikeIsaac 32 minutes ago
> Since the line about fingerprinting is being misinterpreted(though it is explained later in piece) adding language up top to better explain.
> @MikeIsaac 31 minutes ago
> appreciate Technical community's concerns about how It is presented. Uber was not tracking location after device wipe (which I never said).
> @dangillmor 30 minutes ago
> What exactly were they tracking? Not entirely clear (at least to me).
> @MikeIsaac 29 minutes ago
> ID-ing devices. so if I steal a phone and wipe it, they can still determine I had that phone and used it to defraud uber, using other data
Clever, but it's disappointing that even NYT is turning into this madness.
Note that this was at least 4 hours after the outrage on Twitter started. Seems like a very intentional, well-calculated strategy indeed.
That comment seems a bit disingenuous, i.e. it's entirely possible it takes a journo 20 seconds to post a correction to a Twitter account he/she controls, and 4 hours/days/weeks to get his/her editors to sign off on the same correction and get the change pushed to the news website.
The correction bounces around but never takes hold the way the initial claim does and people quietly go on believing their initial interpretation. Sad.
Do they really? [Citation needed] very much applies here. Which fintech app fingerprints devices? What would even be the point of doing that? You can persist a token in the keychain for that, which is enough unless you're being devious.
Fingerprinting is a form of 2 factor authentication, it's easy to perform and it's relatively efficient against fraud.
The first time you use the app you have to enter your username and password, and they are stored in the keychain, protected by the Secure Enclave, which not even the operating system has direct access to.
When the banking app requests validation, you authenticate with your fingerprint and the stored credentials are released to the app. The fingerprint sensor is connected directly to the Secure Enclave.
When you sell your phone, you go through the process of erasing it; the encryption key is destroyed and your enrolled fingerprints are no longer valid.
I also interpreted "track" as "report geolocation data," but that's not what the reporter means, and honestly the reporter's meaning is more consistent with, e.g., "this website is tracking users" or "Do-Not-Track".
Could someone explain the logic behind how a driver requesting rides benefited them? Did the drivers fake the ride and pay for it themselves? Was there a cash incentive where they reaped enough to offset paying for the fake rides themselves and still profit handsomely? Is that correct?
Interesting and somewhat ironic to think that Uber had to put countermeasures in place against drivers engaging in their own questionable version of "growth hacking."
Pretty sure this is about persisting data after the entire device has been wiped. Not just the app removed and re-installed.
I believe I tried long long ago on iOS and couldn't get it to work, but I don't know if I'm remembering correctly.
It's not clear whether such code would work today on the latest iOS version, but maybe. They probably used a private API to do it, obfuscated in the compiled binary so that Apple's automated analysis would fail to catch it.
My understanding was that Apple made those APIs return garbage anyway, so more hacky methods were required.
Unique settings, apps installed etc.
Very hard to have a non-unique set up with enough data points.
Let's look at roughly what's available:
iPhone model (2 orders of magnitude of possibilities)
Device storage -- increases entropy with iPhone model but still not that much
Device name -- easily changeable by scammer, so not enough
iOS version -- changes over time, not great for a long term fingerprint but might help short term
IP address -- short-term attribution ok, but not against scammers. People in China very often have multiple SIMs, so even relying on the carrier isn't enough
Cell phone carrier -- same as above
Other apps installed -- as of iOS 9 you have to pre-declare what you want to be able to query, and that's subject to App Store review. It does help give a fair bit of entropy. This also can change at any moment. But if you're wiping the device constantly, they might not be installing any apps.
In advertising / web, you want to attribute across sites / installs on a short time basis. You have plugins and their unique version numbers, OS versions and all their attributes, browsers version, fonts installed, etc. Way more variation than iPhones.
Defeating scammers who erase their phones constantly is actually much harder, and likely needs something a bit more unique.
Especially since the behavior/activity of the phone could be suspicious as well.
You also don't need to be 100% accurate all the time. The point is to minimize the damages done to you by scammers, not reduce it to 0 which is impossible.
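The attribute list above can be turned into a rough entropy budget: each attribute contributes about log2(number of plausible values) bits, and the sum bounds how many devices you can tell apart. The value counts below are guesses for illustration, not measurements:

```python
import math

# Guessed number of plausible values per attribute (not real data).
attribute_cardinality = {
    "iphone_model": 30,
    "storage_size": 4,
    "ios_version": 10,
    "carrier": 20,
    "other_apps": 50,
}

# Each attribute contributes ~log2(n) bits if values were uniform
# and independent (a generous assumption; real signals correlate).
bits = sum(math.log2(n) for n in attribute_cardinality.values())
combos = 2 ** bits  # upper bound on distinguishable devices
print(f"{bits:.1f} bits -> ~{combos:,.0f} combinations")
```

Even under these optimistic assumptions the total lands around 20 bits (~1.2 million combinations), well short of what's needed to uniquely distinguish tens of millions of iPhones, which is the thread's point: beating wipe-and-reinstall scammers probably requires some signal more unique than public device attributes.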
I'd assume the miners just wipe a phone clean, reinstall Uber, and create a new account.
There are no special settings or variability in installed apps in that case.
Riders and drivers are randomly matched; I'm not sure you can choose your driver. Not sure whether all these drivers in China formed a big group to benefit each other.
I imagine that they wouldn't really have much difficulty tracking you through ad tech, another app, or some other Cult of Free system that is willing to sell database access.