I know of at least one startup that does this and uses a "machine learning algorithm" to find details of particular things in your inbox.
That "algorithm" is a guy in their office. He reads your email. He searches it for common keywords, uses some regexes, but ultimately reads your email, then copies bits out into the system.
I'm all for bootstrapping with things that don't scale, but this example was a bad-faith use of Gmail access. I'm glad I don't use Gmail so couldn't accidentally give a random guy access to my inbox.
It's very hard to accidentally give a random guy access to your Gmail inbox. Doing so would require you to opt in to a dialog clearly and explicitly stating that you are giving said permission to a developer.
You're dismissing the observation that users habitually click Accept or Continue when prompted with a dialog. Sure, you can blame this on the users being lazy, but the habit becomes ingrained when everything they access has a dialog, especially when it contains terms of service that would be twenty pages long in paper form (slight exaggeration). I can't even count the number of conversations I've had with people after observing this behavior. So many users inherently trust that what they're agreeing to is not only safe, but widely accepted. After all, why else would the service be so popular and have so many users? "Someone out there had to make sure this was legit before me."
I'm suggesting a UI feature like the one GitHub has when deleting repos: you have to type in the full name of the repo, or in this case maybe type "I UNDERSTAND" in order to proceed. This could be a browser plugin, maybe...
Access to my personal email would be pretty much security game over for me as far as I can tell. Other people might feel otherwise.
Breaking news: Third parties scan full transaction histories and balances from bank accounts of users who sign up for financial planning services (e.g. Mint).
The WSJ really knows how to write misleading, incendiary headlines about tech companies. Like, just read that comment section. Gotta stoke that techlash. Plenty of people have plenty of real reasons to hate Google, but hit pieces designed to drive the non-technical (and apparently some HN readers) to think "Oh no, my Gmails are being sold and read by people"--I guess it's too effective of a strategy to pass up. Just an utter shame.
There is a real problem here: lots and lots of users glaze over the "This app can manage (view, send, delete) all of your emails" permission setting without thinking about it. I mean, even I double-checked my permissions to make sure I hadn't accidentally given my email access away to a random plugin. What do you do about normal users (like my parents) who will gladly click Next and Confirm on every single popup in front of them, without even attempting to read it?
Yeah. I was the first employee at a startup that would probably have been the subject of this article if it were still around.
dons flame suit
We used OAuth to gain access to users' email. We were exceptionally explicit about this—it was the entire point of the service! We only looked at header data but technically we had access to the full text of the email—Google's permissions weren't granular enough for our users to only grant us access to headers.
That was one of our requests to the Gmail team when they announced the Gmail API. I wrote a blog post about its deficiencies from our perspective and one of the engineers on the Gmail team reached out. Mostly, it was just way too slow so we stuck with IMAP.
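For the curious, header-only access over IMAP looks roughly like this. This is a minimal sketch, not our actual code; the user_email and access_token values are assumed to come out of the OAuth flow:

    # Sketch: fetch only message headers over IMAP using an OAuth token.
    # Assumes user_email and access_token were obtained via Google's OAuth flow.
    import imaplib

    def fetch_headers(user_email, access_token, max_msgs=50):
        # Gmail's IMAP endpoint accepts OAuth tokens via the XOAUTH2 mechanism.
        auth_string = f"user={user_email}\1auth=Bearer {access_token}\1\1"
        imap = imaplib.IMAP4_SSL("imap.gmail.com")
        imap.authenticate("XOAUTH2", lambda _: auth_string.encode())
        imap.select("INBOX", readonly=True)

        _, data = imap.search(None, "ALL")
        headers = []
        for msg_id in data[0].split()[-max_msgs:]:
            # BODY.PEEK[HEADER] downloads headers only and leaves the message
            # unread; the body is never fetched, even though the grant allows it.
            _, msg_data = imap.fetch(msg_id, "(BODY.PEEK[HEADER])")
            headers.append(msg_data[0][1].decode(errors="replace"))
        imap.logout()
        return headers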
I was more-or-less running the ingest and storage systems at the time. We were pretty careful and data wasn't shared with any other companies.
IIRC we could access any mailbox/folder in the account.
edit: To be clear, the OAuth prompt clearly says you are granting access to read emails. Yeah, people don't read, but I don't believe we were doing anything wrong.
edit2: Our media coverage (Techcrunch etc) touted the fact we had access to your email—that was the selling point of the entire service! Amazing to me that OAuth has suddenly become evil.
We use the read-only OAuth scope for FWD:Everyone. The security story is actually very good:
- Tokens can only be used from servers that have been registered with Google, so even if a startup has its OAuth tokens stolen, the attackers can't really make requests on its behalf.
- Token access has per-user and per-app rate limits that are configurable in the Google console.
- With read-only OAuth access, there isn't really any value to attackers. E.g. if anyone tried to reset your bank passwords, it would be immediately obvious because they wouldn't be able to delete the password reset emails.
I'm currently (i.e. today) writing a Gmail Add-on that only requires read-only access to the currently open thread rather than requiring it for your entire inbox. (Basically read-only access is granted when you click the icon to activate the add-on within the currently open thread.)
This is something that's become possible in the last six months and is probably a slight improvement from a security perspective, but even the baseline level is pretty solid. The tech industry has yet to see any large OAuth-related security breaches, and frankly we may just never see that given the combination of the good security story and the limited value to attackers.
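For reference, asking for only the read-only scope is a one-line difference in the consent request. A rough sketch using the Python Google client libraries (the credentials.json filename is just a placeholder for your OAuth client file):

    # Sketch: request only the gmail.readonly scope, so the consent screen
    # shows a read-only grant and the token can never modify or delete mail.
    from google_auth_oauthlib.flow import InstalledAppFlow
    from googleapiclient.discovery import build

    SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

    def list_recent_message_ids():
        flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
        creds = flow.run_local_server(port=0)  # user sees the consent screen here
        gmail = build("gmail", "v1", credentials=creds)
        resp = gmail.users().messages().list(userId="me", maxResults=10).execute()
        return [m["id"] for m in resp.get("messages", [])]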
Thanks for sharing some details on this. While I'm sure it won't help me convince others that this actually happens, it's still good to have a reference.
When you sign up for a service that does shit with your email and give it access to your email account, of course the service can read your email.
This seems pretty obvious to me. There are many useful services that do this (SaneBox for example) but DUH how do you expect them to do anything intelligent with your email if they can't read it?
This is all predicated upon the user granting access to their email account though, so this is roughly equivalent to saying "People you sent a letter to can read your letter" ... duh that's why you did it.
Apparently, I struck a nerve. The attitude of developers with regard to data flowing through their systems demonstrates some of the most remarkable lapses in common sense and historical understanding I have ever seen.
Electronic Mail. The term carries a lot of baggage, because of the societal concept of Mail.
In the U.S. at least, one's mail IS sacrosanct. The Postal Service, being the message handlers that "set the standard," as it were, for other logistics and postage businesses to be measured against,
DO NOT
MESS
WITH
THE
MESSAGE,
beyond setting limits on transiting parcels to facilitate smooth operation of the system.
Emphasis mine.
They do not "maintain state" about message history either.
They do not scan to detect trends. They do not try to "sanitize" that data to sell to marketers.
Just because there ISN'T a codified rule DOESN'T mean it's a good idea to go around playing with expectations literally over a century old. If you call yourself a Mail provider of any sort, don't be surprised that you get blowback when people find you looking at their correspondence.
Stories (entertainment, not journalism) are filled with people eavesdropping on messages BECAUSE it is exceptional behavior. It wouldn't be worth writing about otherwise.
This type of behavior seems to be especially problematic in the tech world because there is seemingly no cost, and no visibility to the non-tech-savvy user that it's happening.
Stop treating the user's data like it's yours. The system is yours. It would have no worth if they weren't using it.
Playing word games and hiding behind the legalese has worked thus far, but the tech literacy of the general populace IS increasing. And if the general populace likes anything, it's common sense and cognitive resonance. Tech will be beaten with the stick once the tech literacy is there.
The privacy reckoning will come if tech doesn't get its act together and GET WITH THE LAST TWO CENTURIES.
Yes, that's the point. When granting access to an app, a person doesn't assume the app will copy out their email so that it can be read or analyzed by humans.
I feel like everybody has been brainwashed into thinking Google is awesome, and I'm looking at it from a distance wondering how they did this... Tech conferences? Yearly Google I/O PR?
Somehow people are treating this ad company like they're the good guys.
I don't think that was ever really true, it's just the depth and scope of their activities over the years has made it pretty clear that they are no better than any other faceless corporation.
In fact they are probably worse, given how much personal information they hold on people and how willing they are to ignore local laws and regulations.
No thoughts on the Labs comment, but I think going public has a huge impact because every Google employee's compensation is tied to the stock price (through stock grants). It's very hard to focus on non-monetary goals in that environment.
I'm a little surprised at this sentiment because they are also very often vilified here. For pretty much everything they do you can find a few comments about how what they are doing is evil.
I remember the golden age -- when it seemed like every week there was a Lifehacker.com post about a cool way to use and customize Gmail, GReader, etc. I gladly guzzled the Kool Aid.
It was orders of magnitude more effective than any marketing I've ever seen Google do. Ethics aside, you could argue that Lifehacker writers should have gotten Google CMO-level stock grants.
It appears you can block app access by going to the "Apps with account access" page under 'Sign-in & Security' on your Google account page. I just checked on one of my accounts and there are a few apps like WordPress with basic profile access.
This page is hard to find and it's even harder to parse the meaning of the current settings, let alone track changes to them over time.
It shouldn't be that hard to find. Google seems to occasionally send out a link to the security checkup tool that has a summary of all the authorized apps / logged in devices [0]. Judging from the other items I'm seeing there, authorizing any apps for reading email would likely be highlighted in red.
I don't understand the negativity in this thread. The whole point of an email-based service is to read users' email. Should we be categorically rejecting a class of application?
How much value is privacy vs. value of those services?
> Should we be categorically rejecting a class of application?
What you mean, "we"?
I categorically reject any third party getting their nose in my email, including Google.
> How much value is privacy vs. value of those services?
Correcting the question to "is the loss of privacy worth the value provided", again, my answer is absolutely not.
On this point, though, it is less about 'privacy' than it is trust. How much do you trust some random startup techbros with more entitlement than sense with everything in your inbox? Not that that's every startup, but it is enough of them.
But in general, I frankly don't understand why anyone invites random third parties to read their mail. That's crazy to me. Maybe select family, or (if I were way richer) agents with a contractual relationship.
>>But in general, I frankly don't understand why anyone invites random third parties to read their mail. That's crazy to me. Maybe select family, or (if I were way richer) agents with a contractual relationship.
It's ignorance, plain and simple.
Most people (even those with tech experience) don't understand, or care about, the reach that the companies they rely on have.
Convenience comes at a price, always.
Even if Google were to simplify their ToS and app permission notices as much as possible, a good portion of users would blindly click 'o.k.' and move on.
I use my Google/Gmail account to log into multiple apps. All the apps have some of these three permissions: (1) View your basic profile info, (2) View your email address, (3) View your phone numbers.
Are these permissions strictly enforced? I know that Google employees can read my emails (under some specific cases), but can third party app developers also read my emails?
Yes, each of those items corresponds to an OAuth scope. If you try to make an API call for data that isn't covered by the OAuth scopes you have access to, you'll just get an error.
Developers can add broader OAuth scopes to their apps at any point, but if they do then all their users will need to re-authenticate and will see that the app now requires additional permissions.
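To make "you'll just get an error" concrete, here's roughly what happens if an app holding only the profile scopes tries to touch mail. A sketch assuming the Python Google API client; profile_only_creds stands in for credentials granted without any Gmail scope:

    # Sketch: scope enforcement in practice. A token granted only profile/email
    # scopes cannot call the Gmail API; Google rejects the request outright.
    from googleapiclient.discovery import build
    from googleapiclient.errors import HttpError

    def try_to_read_mail(profile_only_creds):
        gmail = build("gmail", "v1", credentials=profile_only_creds)
        try:
            gmail.users().messages().list(userId="me").execute()
        except HttpError as err:
            # Fails with HTTP 403 "insufficient authentication scopes".
            print("Blocked by scope enforcement:", err.resp.status)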
No. For this app to get access to your email you would have to click through a prompt saying "this app can read all of your email". The profile, email, and OpenID scopes are just profile info. Still sometimes more info than is needed by the relying party (you don't really need to know my gender), but tightly scoped and enforced by any OAuth provider.
The apps that require permission to read your email do ask for it. They generally provide services like:
- Act as a desktop mail client
- Backup your SMS messages to Gmail
- Search your email for forgotten registrations and allow you to cancel
Ideally it should be clear from the context that the app requires this permission. But the problems that plague mobile app and browser extension permission systems exist here as well. The stakes may even be higher.
I've been trying to explain this to my wife for a while and struggle with it, lol. I get all grumpy and old-man-like during tech commercials nowadays, saying "pssh, that isn't AI!" The other idea that is hard to comprehend is just how much data people have. They most likely don't really know what a 'data center' is or the scale of it. They don't see all of the apps/programs we use the way we do; it is magic to them, and the consumers are constantly getting 'swindled' by a bunch of hype-driven buzzword mumbo jumbo.
My SO's response is generally along the lines of 'I'm fine with it, because it isn't in the hands of anyone I know. If you go out and buy that information, that's creepy and scary because you know me.'
I've started coming to terms with the fact that I won't be able to convince her of anything about privacy, because she isn't a technical person, and just like most users, she won't care how, why, or at what cost, so long as she can do what she wants.
You could always offer to take off all the window curtains/shutters in the house, take down all fences around the property, and leave all windows/doors unlocked. Your SO should be fine with that since it's likely that most of the people that pass by are not people they know.
This does nothing to help convince people that data privacy is important. Physical privacy and data privacy are two very distinct ideas that have very different implications in people's lives. Please don't conflate the two as if they're equally important and you can't value one without having to value the other.
Sadly, you really need to have your data privacy abused to understand the potential: accounts emptied, being stalked, secrets posted online. But then only the victim gets it, and everyone else goes happily on their way.
Care to elaborate on how they are ultimately different? Sure, there are superficial differences (they could see you poop vs. you emailing someone), but how is knowing exactly where someone has been, knowing the vast majority of their digital correspondence (content and recipients), knowing their purchasing history, knowing their social preferences, etc. (and putting it all together), ultimately any different from being able to watch someone in their home 24/7?
Most users of these products are non-technical, among other things. They have no idea that they should even consider that something like that is happening.
> I've been trying to explain this to my wife for a while and struggle with it, lol.
A losing battle, it seems. I find people can't make the pattern connections about good and bad hygiene. So explaining one example doesn't prevent similar behavior in another, even for really "obvious" stuff. Kinda drives me a little crazy.
Technical users or not, the implications need to be MORE CLEARLY spelled out.
Putting it on the user to figure out the implications of some obscured click-through garbage text isn't fair.
There is a big difference between, say, a 3rd-party application using a Google API to work with anonymized keyword data from a Gmail account VERSUS an actual person being able to browse someone else's Gmail inbox.
If Google makes it possible for some 3rd party asshole to peruse the gmail inboxes of whoever uses the service, that should be UPFRONT and VERY CLEARLY STATED.
> If Google makes it possible for some 3rd party asshole to peruse the gmail inboxes of whoever uses the service, that should be UPFRONT and VERY CLEARLY STATED.
Within the last year Google changed their OAuth policies so that you need approval from Google before you can publish an app that uses OAuth. One of the requirements for this is linking to your privacy policy, where you need to say how the data is going to be used.
That said, there's a difference between understanding what privilege you are granting and understanding the implications of that grant.
For instance, Google help pages [0] just talk about "Full account access" and "View your basic profile information." What about apps that can view your calendar? That's in between. What information do those apps actually see? What can somebody do with the information that I might not like? These are hard questions to answer with the information Google gives you.
You grant an application permission because it asked, in order to fulfill a request to handle one particular thing about one particular email. Instead, you grant access to your entire email account, never get notified of which emails the application accessed, and get upset because the application exceeded what you intended to grant.
I don't think Google is a villain here, except for the lack of fine-grained access control common to all cloud apps.
For example, I had enabled IFTTT to record something to a Google Sheet periodically. I didn't realize that in doing so, I gave the company unfettered access to my entire corpus of data.
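For what it's worth, a narrower Drive grant does exist: the drive.file scope only covers files the app itself created or was explicitly handed, rather than your whole Drive. Whether a given service requests it is up to the developer. A rough sketch of appending to a sheet under that scope (the function and IDs here are illustrative):

    # Sketch: write to a spreadsheet using the narrow drive.file scope instead
    # of full Drive access. Only files this app created/opened are reachable.
    from googleapiclient.discovery import build

    def append_row(creds, spreadsheet_id, values):
        # creds is assumed to carry only https://www.googleapis.com/auth/drive.file
        sheets = build("sheets", "v4", credentials=creds)
        # Appends one row; fails if the spreadsheet isn't covered by drive.file.
        sheets.spreadsheets().values().append(
            spreadsheetId=spreadsheet_id,
            range="A1",
            valueInputOption="USER_ENTERED",
            body={"values": [values]},
        ).execute()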
This is getting quite irresponsible and sad.
The reason nothing too terrible (apart from election tampering) has happened yet is simply because the current market demand for developers prevents us from engaging in malicious behaviour.
However, once there's a market downturn, this data will be used for criminal purposes in no time. The malware industry in Eastern European countries with good engineering talent exploded in the 90s after their economies collapsed.
But who cares right?
I feel like this level of naivety, powering interoperability with brute openness, is just plain stupid and will lead to a backlash, or risks an all-out witch-hunt against developers and intellectuals in general.
That risk is becoming unacceptable in my opinion, and it is worth starting to regulate our profession a bit.
> The reason nothing too terrible (apart from election tampering) has happened yet is simply because the current market demand for developers prevents us from engaging in malicious behaviour.
That doesn't seem to be how humans work. Groups that want to do malicious stuff are able to hire people too (e.g. election tampering, as you mention).
They'd probably have to direct their hiring efforts differently though. Probably towards known areas of, er... scum and villainy? ;)
Seriously though, they managed to hire people for the election tampering, so at least it shows it can be done.
Yes it can be done, but the probability of malicious players finding someone competent enough is quite low. It's unlikely you would help a shady organization when you have a high salary and a nice life.
However, this also means that security relies on the state of the market rather than on good code.
That "algorithm" is a guy in their office. He reads your email. He searches it for common keywords, uses some regexes, but ultimately reads your email, then copies bits out into the system.
I'm all for bootstrapping with things that don't scale, but this example was a bad-faith use of Gmail access. I'm glad I don't use Gmail so couldn't accidentally give a random guy access to my inbox.