I wrote something that allows storing encrypted files in a Git repository.
The main difference is that, instead of encrypting a full file, it uses a template mechanism to encrypt just the values. I wrote a quick start that covers it further, and I've been meaning to write a blog post that explains the use case in more detail. Anyway, check it out here:
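To give a rough flavor of the value-level approach: only the secret values in a file get transformed, so the rest of the file stays diffable in Git. This is a minimal sketch of the idea, not the actual tool; the `secret(...)` / `enc(...)` markers are hypothetical, and base64 stands in for real encryption purely for illustration.

```python
import base64
import re

# Toy "cipher" for illustration only -- a real tool would use AES/GPG here.
def toy_encrypt(plaintext: str) -> str:
    return base64.b64encode(plaintext.encode()).decode()

def toy_decrypt(ciphertext: str) -> str:
    return base64.b64decode(ciphertext.encode()).decode()

# Template: only values wrapped in secret(...) get transformed; the rest
# of the file remains readable, so diffs in the repo stay meaningful.
TEMPLATE = """\
db_host: localhost
db_user: app
db_pass: secret(hunter2)
"""

def encrypt_template(text: str) -> str:
    return re.sub(r"secret\(([^)]*)\)",
                  lambda m: "enc(" + toy_encrypt(m.group(1)) + ")",
                  text)

def decrypt_template(text: str) -> str:
    return re.sub(r"enc\(([^)]*)\)",
                  lambda m: toy_decrypt(m.group(1)),
                  text)

encrypted = encrypt_template(TEMPLATE)
print(encrypted)  # only the db_pass value is transformed
print(decrypt_template(encrypted))
```

The payoff of encrypting per-value rather than per-file is that a commit touching `db_host` produces a readable diff even though `db_pass` stays opaque.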
It also discloses whether someone is a customer or not. Possibly en masse. Problems:
1) Aids phishing attacks against Coinbase and customers
2) Oftentimes harmless tidbits of information can be combined to form non-harmless information. In this case, disclosing email, name, and the fact of being a Coinbase customer, or not, seems minor on its own. However, combine it with some other dataset (let's say emails/passwords taken from an unrelated site), and now it would be easier to break into accounts without setting off warning bells, since you already know who is a user or not.
Dismissing the information disclosure strikes me as akin to the "it's only harmless metadata" argument of the NSA. As we have already seen in many reports, "metadata" can be surprisingly powerful.
I would argue that using a personal email and filling in your full name on Coinbase, who CLEARLY state you have no expectation of privacy in this regard, is effectively the same as publicizing the information.
If one cares about the privacy aspect, then don't use an email that is tied back to you in any way, and certainly don't fill in your personal information.
While I don't find Coinbase's response here reassuring, if you work with a business whose bizmodel is "people can send money to your email address" then it becomes essentially impossible to stop someone from verifying that your address exists.
First, the vast majority of attackers are more "smash and grab" than "stealthy jewel theft." They really don't care about leaving tracks; they are going for volume. Want to phish people for Coinbase creds? Email a mass of people. Have a list of usernames/passwords from a data breach? Attackers have automated tools that will try them against thousands of websites. It's more expensive and time-consuming for them to leverage minor info disclosures to narrow down their attack than to simply brute the crap out of everything. The economies of scale devalue the info disclosure.
Second, you are making an apples-to-oranges comparison. The boolean "is/is not a Coinbase user" provides a single data point, and is far less valuable than hundreds if not thousands of data points about who is communicating with whom, and for how long. The single piece of meta-datUM from Coinbase pales in comparison to the meta-datA of phone logs.
The second point is a bit of a straw-man. I never meant to imply that this Coinbase disclosure and the NSA metadata are proportional in terms of severity; just that they are structurally similar. The point is that small bits of information can become surprisingly big with the right analysis and effort.
That first point strikes me as irrelevant here. Smash and grab is what you do when your probability of success and/or your take size is small.
But if you know somebody has a lot of money, then the rational amount of effort to apply goes way up. That's why stealthy jewel thieves are stealthy.
Since the whole point of Coinbase is to contain money that, from other BTC sites, appears to be easily stolen and easily laundered, I think a set of known Coinbase accounts could well be worth the effort.
Do you publish that you have a Coinbase account? That's the issue. Now these people are valuable targets for spear phishing and other attacks on their e-mail accounts because it's known that they have hot access to at least some amount of Bitcoin. Without that information, an attacker is shooting blind.
So this sort of leak or enumeration basically reduces the (admittedly tenuous) degree of security afforded by one's privacy.
Last time a politician was worried about non-zero probabilities, the U.S. invaded Iraq. I mean, if changing the probability that someone's home gets broken into is our standard of practice nowadays, there are a lot of companies that would have to close down today.
I reckon non-anonymous bitcoin holders are at greater risk than the average person with money in the bank, since draining the account of the former is a relative cinch once the keys are divulged. The whole crime could be completed within a few minutes.
That much may be true, but I'd consider that an inherent risk associated with using Bitcoin without using a pseudonym. Maybe I'm naïve but I have to assume people who care about such things are already tracking IP addresses directly from the Bitcoin network swarm itself for later investigation...
If an employee of either company steals from me, I'd expect them to be easily caught. If Coinbase decided to steal 1k from everyone and then shut down, that would be crazy, since the people behind Coinbase are very well known and in SF; I accepted that risk when I signed up. If someone inside Patelco decides to steal from me, that's a heavily regulated financial establishment; I don't think that person could get away with it, or that Patelco wouldn't reimburse me.
If someone who is not an employee of either company manages to steal funds from me just based on that screenshot, then there is some other security issue somewhere else, and it was bound to happen sooner or later.
Probably something like this, which I don't really see how you'd protect yourself against. Like getting hit by a car running a red-light.
I wasn't saying they would steal from you. I was referring to you trusting their competence to do their job right and not be social-engineered by some hacker into giving them access to your accounts.
Thank you for the downvote.
You can find people's G+ profiles if you guess the email correctly. I wouldn't be surprised if LinkedIn, Facebook, etc. had the same type of thing. I do think the Coinbase API should be rate-limited or unreplayable, but I'm _much more_ interested in where the email-list input data came from. My email wasn't in this alleged partial list, but if it were, I'd like to know where they got my email from to begin with, because the source of that email list is the real problem IMHO.
I will say this though: Coinbase, please make sure there is absolutely no api call that returns banking/CC info!
Do it enough and you get a CAPTCHA; do it more and you get banned by IP. I would assume that the limits are set to a point where it's very difficult to enumerate the database in any reasonable timespan.
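The escalation being described is essentially a sliding-window rate limiter with two tiers. Here's a minimal sketch; the thresholds (20 lookups/minute before a CAPTCHA, 100 before an IP ban) are made-up numbers, since the real limits at any given site aren't public.

```python
import time
from collections import defaultdict, deque

# Hypothetical thresholds -- real services tune these to make database
# enumeration impractically slow.
CAPTCHA_AFTER = 20   # requests per window before requiring a CAPTCHA
BAN_AFTER = 100      # requests per window before banning the IP
WINDOW = 60.0        # sliding window, in seconds

_hits = defaultdict(deque)

def check_rate(ip: str, now: float = None) -> str:
    """Return 'ok', 'captcha', or 'banned' for one lookup from this IP."""
    now = time.time() if now is None else now
    hits = _hits[ip]
    hits.append(now)
    # Drop entries that have fallen outside the sliding window.
    while hits and hits[0] < now - WINDOW:
        hits.popleft()
    if len(hits) > BAN_AFTER:
        return "banned"
    if len(hits) > CAPTCHA_AFTER:
        return "captcha"
    return "ok"
```

An attacker trying to enumerate emails hits the CAPTCHA tier almost immediately at these settings, which is the point: the cost per probe goes up long before the database yields anything useful.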
In a not-yet-authenticated state, your code should do everything it can not to tell a potential attacker something they don't already know. Being able to check whether an account exists, and being able to read off the full name associated with it, are not the worst problems in the world (though knowing the full name could make phishing attacks a bit more convincing), but if you take the attitude of never telling an attacker anything, no matter how innocuous you think the information is, then you are less likely to accidentally let something sensitive slip due to a bug.
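The classic application of that attitude is the login form: return the same message whether the account doesn't exist or the password is wrong. A minimal sketch (the `USERS` dict stands in for a real user store; production code would compare salted password hashes, not plaintext):

```python
# Illustrative only: a real system stores salted hashes and compares them
# in constant time. The point here is the uniform error message.
USERS = {"alice@example.com": "correct-horse"}

def login(email: str, password: str) -> str:
    user_exists = email in USERS
    password_ok = user_exists and USERS[email] == password
    if password_ok:
        return "welcome"
    # A leaky version would say "no such account" when user_exists is
    # False, handing an attacker a free account-enumeration oracle.
    return "invalid email or password"

print(login("alice@example.com", "wrong"))   # invalid email or password
print(login("nobody@example.com", "wrong"))  # same message: nothing leaked
```

Because both failure paths produce identical output, an attacker probing the form learns nothing about which addresses are registered.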
SMTP is the API for checking whether an address exists at any mail provider. Start sending a mail; if the server doesn't tell you right then that there's no such mailbox, you can abandon the connection without sending a message through. No CAPTCHAs there, either.
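Concretely, the probe stops after the RCPT TO command and never reaches DATA, so nothing is delivered. A sketch with Python's `smtplib` (the probing hostname and sender address are hypothetical, and, as noted, many servers deliberately accept every recipient, so a 250 reply is only weak evidence):

```python
import smtplib

def mailbox_probably_exists(address: str, mx_host: str) -> bool:
    """Probe an MX host for `address` without ever sending a message.

    Caveat: catch-all domains and spam appliances accept every RCPT,
    so a 250 here is only weak evidence that the mailbox exists.
    """
    with smtplib.SMTP(mx_host, 25, timeout=10) as smtp:
        smtp.helo("probe.example.com")   # hypothetical probing host
        smtp.mail("probe@example.com")   # hypothetical envelope sender
        code, _ = smtp.rcpt(address)
        # 250/251 = recipient accepted; 550 = no such mailbox. We close
        # before DATA, so no mail is actually delivered either way.
        return code in (250, 251)
```

This is exactly why the countermeasures described below (deferred rejection, unconditional 250s) exist: they make the RCPT reply code useless as an existence oracle.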
Many mail servers delay account checking for that very reason.
They take in all mail: instead of telling the sending MTA that the account doesn't exist, the message is accepted and sent to /dev/null. A bounce message may be generated, but the automated MTA won't see it, as the message will be carrying invalid sender information.
Many mail relays, especially spam appliances like Barracuda, will always give a 250 OK for the email address provided, to prevent exactly this trick. They then toss the message as spam, or bounce it, once they receive it.
You should fault Sendgrid as they specifically have a policy NOT to perform this change of email request (from the article).
SendGrid can also change their systems so that phone support personnel can NOT perform this change, or can perform it only with approval from a supervisor.
Sendgrid being in the business they are in should also know that they are susceptible to these types of attacks and what they can lead to (many, many systems which can have password requests sent to email addresses).
I don't know... mistakes happen. There seems to be little to gain from faulting SendGrid; it would just force them to take some kind of action, such as terminating the rep. I think I'd prefer the rep remain employed, because I trust they'd never make this mistake again. Also, now all the other reps know to avoid it.
EDIT: May I ask what can be gained from faulting SendGrid in this case?
I agree with you that terminating the rep is not interesting, and I think you're mistaken if you feel like that's what anyone thinks will solve this problem.
Actions SendGrid could take:
* Make it impossible for their front-line support staff to change the email address on file. If you want that -- which should be extremely rare! -- you talk to a high-level manager who is competent at authenticating you.
* Send the email that says "hey, we're going to change your email address now" with a lead time to allow for the possibility that, even after your authentication, you've been conned.
* Make a phone call to the phone number on record, too.
You ask what's gained by faulting SendGrid, because you take it as a given that they will make these changes. But that's not how blame works. The blame serves a function of ensuring those changes by holding them accountable for their current problems.
It doesn't change the fact that running code that's downloaded over HTTP, unverified by any kind of signature, as root is a very bad idea, and just having it there in the official documentation for a package whose purpose is managing systems gives me a very poor impression of the project's security goals.
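The minimum fix is to verify the download against a digest published over a separate, trusted channel (an HTTPS release page, or better, a GPG signature) before executing anything. A sketch of the checksum half with `hashlib`; the installer bytes here are a made-up stand-in, and in practice the expected digest would come from the signed release page, never from the same untrusted download:

```python
import hashlib

# Stand-in for the bytes fetched over the network.
downloaded = b"#!/bin/sh\necho install\n"

# In reality this hex string is copied from a trusted source (e.g. the
# project's HTTPS release page); it is computed inline here only so the
# example is self-contained.
EXPECTED_SHA256 = hashlib.sha256(b"#!/bin/sh\necho install\n").hexdigest()

def verify(payload: bytes, expected_hex: str) -> bool:
    """Return True only if the payload hashes to the published digest."""
    return hashlib.sha256(payload).hexdigest() == expected_hex

if verify(downloaded, EXPECTED_SHA256):
    print("checksum ok; proceed to run the installer")
else:
    print("checksum mismatch; refuse to execute")
```

A checksum fetched over the same plain-HTTP channel as the script adds nothing, since an attacker who can tamper with one can tamper with both; the digest (or signature) must travel over a channel the attacker can't rewrite.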