I recently got locked out of my Amazon account because I made a large purchase after not ordering anything for ~6-7 months. During the reset process, they tried really hard to get me to set a phone number 'for account security'. From what I could tell from their documentation, it's not even just used for 2FA, it's literally just a way to prove my identity if I need to reset my password.
I refused, and then a few days later Amazon called me up to reconfirm the order anyway, even though I had never given them my number. Their entire account recovery process from that point on was based on me having access to information that was already listed on my account, which the hacker would 100% have had access to. It was all just security theater; literally the only thing that mattered was that I had access to my email and a phone number.
Fastmail (to its credit) allows you to have 2FA without a recovery number, but it requires you to add a recovery number, activate a real 2FA app, and then delete the number. At least it doesn't (as far as I know) use the number on its own for account recovery.
Twitter's CEO got hacked because Twitter trusted phone numbers as identity, and they still haven't changed the policy, because collecting phone numbers is fun or something.
In theory, 2FA over SMS is better than nothing. In practice, it trains customers to be insecure and should be avoided. It trains customers to think that identity verification over text is OK. In practice, you can't trust companies not to use it for advertising, or to start using it as identity verification in the future. In practice, there are very, very few legitimate reasons why a company should ever need my phone number, and pretty much none of them have anything to do with security. 99% of your users should be using a 2FA app instead of a phone number.
Companies like Twitter should be shamed for misusing security information this way, but they should also be shamed for using insecure authentication methods. I'm convinced that 5 years from now, we're going to look back at SMS authentication the same way we looked at serving login pages over HTTP.
It seems crazy to me that recovery numbers are a thing. I mean I'm sure it helps reduce customer service load, since people just recover over sms rather than trying to call and get their account re-activated, but it is so insecure.
However, as a user, I go back to the idea that I can't think of many companies I trust to only use my phone number as 2FA and not as identity verification. So I'm skeptical that it is good to train users to trust SMS 2FA, because those same users will probably not be able to distinguish between 2FA and identity verification when they sign up for other services. It is better to teach users a simple rule (never give out your phone number) than a complex rule (only give out your number for this specific use-case).
The other big thing I just can't get past is that nearly everyone today has a smartphone that will run a 2FA app, and that even users who don't have a smartphone would be better served by getting codes delivered to their email. So sure, it's better than nothing. But there are even better options that exist that aren't that hard for us to switch to.
In practice, even if you know you're only going to use SMS for 2FA, I now lean towards saying you shouldn't use SMS at all. Treat email like the backup SMS option, and just get rid of phone numbers entirely.
Maybe the dynamics of that change for some developing countries? But Twitter, Facebook, and Amazon all know what country I live in. If they want to offer an SMS option for India because of some extenuating circumstance I can't think of, they should still have the good sense to at least discourage SMS verification for accounts that are based in the USA or Europe.
What do they do when their smartphone dies? The phone company makes sure your new phone has the same phone number, but you lose 2FA tokens in apps.
And I have no idea where the recovery codes that I printed on paper are since I last moved.
> even users who don't have a smartphone would be better served by getting codes delivered to their email
But then isn't that just one factor instead of two factors, because both their password for the service and their email password are just "something you know"? I'm assuming if they have no cell phone, they don't have a second factor to secure their email either.
It should be right next to the document that explains to your loved ones how to get into your password manager, if you ever get hit by a bus.
> The phone company makes sure your new phone has the same phone number
This is exactly why 2-factor SMS is insecure. You mention later that email is something you know, instead of something you have. In the same way, if a company can transfer my number to a new phone without access to the original phone, then it's not really something I have.
The ease of number transfers is the problem. The reason 2FA tokens aren't stored online and secured with a password is that they are designed to be something you have, not something you know.
For comparison, switching your number to Verizon only requires information that you know (account numbers, a SSN), so it's just extra steps around a less secure password that you can't change or set yourself.
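The "something you have" property comes from where the secret lives: a TOTP code is just an HMAC over a shared secret and the current 30-second window, and that secret stays on the device after enrollment. A minimal RFC 6238 sketch (SHA-1, 6 digits, using the RFC's published test key):

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    counter = int(at // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The secret is shared once at enrollment (the QR code) and then lives
# only on the device -- there is no carrier-side "transfer" path for it.
secret = b"12345678901234567890"  # RFC 4226/6238 test key
print(totp(secret, at=59))  # prints 287082, the RFC test-vector value
```

Nothing here can be "ported" the way a phone number can: stealing the code requires stealing the device (or the enrollment secret itself), which is exactly the property SMS delivery gives up.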
> But then isn't that just one factor instead of two factors
Expanding on the above -- yes, email is often going to be just another account secured with another password. In practice, hacking two accounts is often harder than hacking one, and in practice, I suspect breaking into someone's Gmail account is harder than stealing their phone number. Google offers much more comprehensive 2FA options than most other companies, and their automated security alerts also tend to be better.
But there's no reason for us to debate over how secure email is.
The situation we have today with companies like Amazon/Facebook/Twitter is one where I can already request a password reset without SMS. Companies are scared of strict 2FA methods because customers get locked out of their accounts. Very, very few of them are willing to take that risk, so email will virtually always be an option. SMS is being added on top of that system -- it's not replacing it.
Here's Twitter's account recovery help page:
> If you do not receive anything back, get help with Twitter via SMS or use the email password reset option.
So if you consider email to be a weak link in identity verification/2FA, adding SMS verification as a secondary option alongside email still doesn't do anything to increase your security. In fact, even if SMS was as secure as email, forcing you to monitor two authentication methods instead of just one would still be less secure.
I'm not advocating email is perfect, I'm just advocating that SMS is less secure than email, and that since companies are already comfortable trusting email, they can continue to rely on that.
Of course if you really want to set up 2FA to literally be 'something you have', then you need to accept that things you have can be lost. And if you're not willing to make that compromise, at least email accounts are harder to hack than phone numbers, because the most common email providers are probably more resistant than Verizon to social engineering attacks.
> Twitter [...] So if you consider email to be a weak link in identity verification/2FA, adding SMS verification as a secondary option alongside email still doesn't do anything to increase your security.
I agree, and your comment's parent (my comment's grandparent) specifically went out of its way to agree: "as you said, many big tech companies say they have 2FA, when they really are just giving you two ways of logging in, where one of those ways is incredibly insecure."
> switching your number to Verizon only requires information that you know (account numbers, a SSN) [...] hacking two accounts is often harder than hacking one, and in practice, I suspect breaking into someone's Gmail account is harder than stealing their phone number
I think the number of people who go to the trouble of using like, a Yubikey or something for their Gmail but won't use it for anything else is vanishingly small. People opting for password + SMS 2FA (NOT the SMS 1FA that let @jack get hacked) are probably using the same thing for their email.
I'm sure it's true that it's easier to steal someone's phone number than break into their Gmail account, but afterwards you can go into Verizon's physical store with your physical government-issued ID and get your phone number back. That's not an option with a Gmail account.
No one is saying any of these are perfect, and everyone agrees SMS is less secure than email. The question is whether password + SMS 2FA is less secure than email 1FA (doubtful), and whether password + email 2FA with no account recovery pathway is workable (definitely not).
Let's agree that out of password + SMS 2FA, password + email 2FA, and email + SMS 2FA, the first is the weakest, because SIM-jacking is terrifyingly easy and people choose terrible passwords. Just for account recovery, though, email + SMS 2FA still provides security benefits over email 1FA (you can guarantee a second factor, even if it's a weak one, whereas you have no idea how strongly or weakly their email account is protected, you're just assuming) and usability benefits over email + TOTP apps/Yubikeys/paper backup codes.
Agreed, but I can't think of a single company, anywhere, that offers what you're talking about. Everyone offers SMS and email as separate options, both of which separately unlock your account.
If Twitter, Facebook, or Amazon required both email and SMS access to recover an account, I'd agree that there could be some value there. But (to the best of my knowledge) they don't. So the debate over whether or not SMS verification is better than nothing is hard for me to indulge, when (again to the best of my knowledge) virtually no company is using SMS account recovery in a way that provides real value over 1FA email.
Maybe Lyft is an example? But the last time I used Lyft, I'm pretty sure I could get access to my account with only my phone, no password/email required. I'm not 100% sure Lyft even requires an email to sign up.
> That's not an option with a Gmail account.
I've never been in this scenario, so I'll have to take your word for it, but this seems strange to me. Could I really not fax or mail a government ID to Google to get access to my account?
Assuming this is right though, we again run into the same problem.
I lose access to my password and email. Is a company comfortable letting me reauthenticate with only an SMS message?
If yes, then we have 1-factor authentication over pure SMS.
If no, then we have to be comfortable with the idea that losing your email/password might mean losing your account, or going through a complicated recovery process involving government IDs.
I think you're right. I thought Vanguard or my bank did, but no, it's email or SMS plus personal info like SSN, birthdate, zipcode (LOL, information that no one has on me, thanks Equifax!).
> Could I really not fax or mail a government ID to Google to get access to my account?
To what address? Just plop it down at 1600 Amphitheatre Pkwy? I've never heard of Google offering any account support whatsoever for a free private Gmail account, have you?
I do personally know people who have just given up on accounts they lost access to (they claim they didn't forget the password) and just created a new Gmail account. Not the most technically literate people, but still, that's who support is supposed to be for. But it's a free service, so.
> then we have to be comfortable with ... going through a complicated recovery process involving government IDs
What? The whole point I've been trying to get across is that, unlike TOTP or email, you can get your phone number back through a "complicated recovery process involving government IDs", which is an advantage. That's not a tradeoff to be comfortable with, that's an upside.
> I lose access to my password and email.
Why would we design for this? Unless there's a particular reason to think password and email are likely to be lost simultaneously (which I can't think of, unlike, say, a smartphone TOTP app plus a phone number on the same device), we should either design for losing any combination of 2 auth methods, or not worry about combos.
By contrast, it could make sense to me to design a system so that you can lose any 1 of 3 things (e.g. password, email, phone number) but still be able to log in with the remaining 2. But you're right that most services are effectively just email 1FA, and many are SMS 1FA too, which we all agree is utterly broken.
I don't know about the USA, but here in Europe you have to confirm/prove that you own the number being moved from one operator to another.
Edit: I suppose phone numbers can at least be changed, but that doesn't really solve the problem for 2fa as it might for SS numbers.
What is this supposed to mean? What is the point of having a phone number if you never give it out?
I suppose you can make outbound calls only with own number sending turned off, but that sort of diminishes the purpose of owning a phone number?
In other words, if a company tells you they need your number for account security, refuse to give it to them. Of course you can give out a phone number for general contact purposes.
I think it's easier to teach people that phone numbers can't be used for security, period, than to teach them that phone numbers can be used for security in one very specific case -- especially since companies like Facebook/Twitter rarely use terms like 2FA when asking users for their number. They just say stuff like, "you'll use this to gain access to your account" or, "we'll use this to help keep your account secure."
It wasn't my intention to make it sound like users should never tell anyone their number for any reason, but I can definitely see how it might read that way.
Alice is tricked into visiting fakebank.example a site which looks exactly like Alice's real bank. She enters her username and password, the site says it will send her an SMS, behind the scenes Alice's details are plugged into realbank.example automatically, triggering an SMS to Alice from her real bank.
Alice types the SMS code into fakebank.example, it stalls a little bit, very common for banks, meanwhile it plugs the SMS code into the realbank.example site and successfully logs in as Alice. Cool. The fakebank.example site finally gives an error. "Code 418. We're sorry, this service is temporarily wormhole phasers galactic transwarp. Please try again later". Alice is annoyed but decides to try again in an hour. By then her account will be empty.
WebAuthn / U2F work fine here, because they don't give Alice the opportunity to mistake this for her real bank, the FIDO token will cheerfully mint credentials for fakebank.example which are no use to log in as Alice on realbank.example. There's no "I'm sure" step, no "Actually this is my real bank", no opportunity for human error to betray her. The bad guys still annoy Alice with their bogus site, but they don't get working login credentials.
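The reason the relay fails is that the browser binds the signed assertion to the origin it was actually collected on. A simplified sketch of the relying party's origin check (real WebAuthn also verifies a signature over a hash of this exact JSON, so the field can't be forged; names follow the clientDataJSON format):

```python
import json

EXPECTED_ORIGIN = "https://realbank.example"

def verify_client_data(client_data_json: bytes, expected_challenge: str) -> bool:
    """Simplified relying-party check. The browser -- not the user --
    fills in `origin`, and the authenticator's signature covers this
    JSON, so a response relayed from fakebank.example can never claim
    to have come from realbank.example."""
    data = json.loads(client_data_json)
    return (data.get("type") == "webauthn.get"
            and data.get("challenge") == expected_challenge
            and data.get("origin") == EXPECTED_ORIGIN)

# What the real bank receives if Alice was phished: the browser stamped
# the attacker's origin in, so verification fails and the relay is useless.
phished = json.dumps({"type": "webauthn.get",
                      "challenge": "abc123",
                      "origin": "https://fakebank.example"}).encode()
print(verify_client_data(phished, "abc123"))  # False
```

An SMS code carries no equivalent of the `origin` field, which is exactly why Alice can be tricked into retyping it on the wrong site.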
I had an account with Bank of America and a prepaid AT&T plan. I was about to move abroad so I went to my local bank office and asked if there was anything I needed to do to prepare for this. "Nothing in particular" I was told.
After moving abroad I opened another bank account in that country and tried to transfer my money over to said account. However, in order to transfer the money out I had to enroll (I wasn't already) in SMS 2FA. As AT&T prepaid does not have roaming, I wasn't able to enroll using it. And only American numbers were accepted. I got on the phone (a different one) with AT&T to see if roaming could be enabled. "-Nah, there is no roaming for prepaid, only for postpaid. -Can you sign me up for that? -No, only in a store"
Now I weighed my options. I could fly back to the US to resolve it, if I could get another visa. I could fly to Mexico or Canada where there is roaming. In the end I went with the advice the bank's support gave me: I enrolled a friend's number. This does not feel secure at all, but what are you gonna do?
Moral of the story: If you move abroad, better make sure your sim supports roaming, because you're gonna get locked out of stuff otherwise.
If it's truly always 2FA, including two-factor password reset (need the SMS verification code in addition to the email verification code), then it's probably better than nothing. But otherwise, it's actually a major exposure you're opening up for yourself, especially if you otherwise have good password practices (no re-use, and ideally pseudo-randomly generated from a password manager).
The postcard contained a call-to-action to visit a URL. Fortunately, I was suspicious and went there in an incognito window, because it redirected to use a signed-in facebook session to send themselves a message, presumably to harvest more data about me.
I wrote a review on the product condemning this practice and Amazon removed the review as "not about the product". Quite frustrating - seems Amazon just doesn't care about customer privacy.
The way Amazon reviews works is that it combines reviews of the same product (even different variations of the same product) so mentioning anything about an individual seller in "ask a question" or "write a review" is meaningless as it applies to the whole site. It makes sense why they would take this action.
That's a real shame, because if anything I suspect sellers on Amazon have become less trustworthy in general, not more.
(One for Amazon sharing the address, one for the seller sending the unsolicited card, once more for the Facebook message without data collection consent.)
SMS for recovery is deeply flawed, but it's 1) better than nothing and 2) better than any current alternative for the vast majority of people.
I mostly agree with the other points (although I don't think I'm the only one with Wi-Fi calling and SMS, even abroad).
...at which point the attacker has already taken over all your important online accounts and changed them to a different phone number.
Also, you're specifically talking about SIM-jacking. The vast majority of people have upgraded phones, or broken or lost and then replaced a phone, far more times than they have been SIM-jacked. What do they do about their TOTP tokens in those scenarios?
For the vast majority of people, that's a dealbreaker, in spite of SMS being deeply flawed, as the comment you originally replied to said.
First, I disagree that only high-value targets are at risk. I think, in general, targets are at risk for SIM-jacking. You can be a target because you're Jack Dorsey, but you can also be a target because you suddenly get swept up in a Twitter controversy, or because you run a corporate social media account, or because an ex-partner decides they want revenge over something.
A lot of our security protects against untargeted attacks because they're more common, but targeted attacks can't be completely ignored.
The second problem I have is that high-value targets use Facebook, Twitter, and Amazon, and they should be able to use these sites without worrying that their accounts are by-default insecure. We're not talking about a local library account, these are the biggest tech companies in the world. The president of the United States uses Twitter.
I think that taking a strong stance on security is important in that scenario.
I'm assuming (hoping) that Trump's account is secured with a phone-number that isn't public and that can't be SIM-swapped. But who knows, Dorsey's wasn't.
If it was harder to transfer a number to a new phone it wouldn't be that bad to use that method of 2FA for non-critical services.
Telcos never made any claims about tying your identity to your phone number - they just move information around.
Blaming telcos is like blaming a hammer for not being big enough when you're trying to drive a screw with it.
It's not even really identity management. It's just way too easy to transfer a phone number. A simple protocol, like sending a text or email to the contact information on file and then waiting 24 hours to complete the transfer, would be enough to stop 99% of fraudulent number transfers.
Instead they process these transfers instantly, often using very little information to verify that the request was legitimate. You can often sweet-talk the customer service reps with a simple, "I forgot what information I used to sign up." They process that request, and then it ruins your life and haunts you for the next 5 years.
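The notify-then-wait idea above could be sketched as a tiny state machine (everything here is hypothetical, just to make the proposal concrete):

```python
PORT_DELAY_SECONDS = 24 * 60 * 60  # the proposed waiting period

class PortRequest:
    """Hypothetical carrier-side flow: notify the contact info already on
    file, then refuse to complete the transfer until the delay elapses,
    giving the real owner a window to cancel a fraudulent port."""

    def __init__(self, number: str, requested_at: float):
        self.number = number
        self.requested_at = requested_at
        self.notified = False

    def notify_owner(self, send_message) -> None:
        # send_message stands in for the real SMS/email delivery hook
        send_message(self.number,
                     "A transfer of your number was requested. "
                     "Reply CANCEL within 24 hours to stop it.")
        self.notified = True

    def can_complete(self, now: float) -> bool:
        return self.notified and (now - self.requested_at) >= PORT_DELAY_SECONDS

req = PortRequest("+15555550123", requested_at=0.0)
req.notify_owner(lambda num, msg: None)
print(req.can_complete(now=3600.0))   # False: only an hour has passed
print(req.can_complete(now=90000.0))  # True: past the 24-hour window
```

A determined attacker could still social-engineer the cancellation contact info first, but instant processing removes even this minimal speed bump.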
From the security side, this is incredibly frustrating. Very often there's someone in product management who insists that users love SMS as a second factor. That it's great because users don't have to use a special app or transfer it when they change phones. They will be backed up by an engineer who swears that SMS is great because using a TOTP app is an unacceptable corporate imposition on their personal device.
Meanwhile, someone in finance quietly sobs as they pay the Authy or Twilio bill, but nobody seems to care about the opex budget.
This is not made easier by the hardware complexities. What if the user has a slightly dated PC (USB A), an iPhone pre-USB C, and a modern Mac?
Though, if sim-swapping is too costly to do en-masse, then I guess it might be an effective 2fa method for an average user. But not for ones likely to be targeted.
Don't they need a phone call to identify that it's really you by matching your voice print against Alexa recordings? ;-)
The available alternative today is a 2FA token, which has to be set up and requires some savvy, plus a number of one-off security codes which are issued at account creation and which the user must then figure out how to store securely, indefinitely. Forcing this on the average Twitter user is obviously a nonstarter.
1. Account resets performed because you asked nicely and maybe provided some laughably easy-to-obtain information, and
2. Showing up at the Twitter office in San Francisco with notarized copies of your birth certificate, latest utility bill, Social Security Card, etc.
And that happy medium will doubtless be both vulnerable to a determined targeted attack and a sufficient PITA for some users that they'll end up losing access to an account.
In Amazon's case, the real security is the credit card info. If you tried to add a new address you would need to confirm the credit card info, even if you sim swapped the account owner.
Then why can I reset my password with it?
The spam issue is separate. Twitter can ask a user to give them some kind of proof that they're not creating duplicate accounts. There's nothing in that process that requires them to then take that info and make the account less secure. Once they've verified the user is human, they could just throw the number away -- instead, they use it for account recovery.
Even with Amazon:
> In Amazon's case, the real security is the credit card info.
I was able to recover my account with just an email address and a phone call. I'm pretty sure you're right that an attacker couldn't enter a new address. I'm not sure whether re-entering credit card info is required to use Amazon locker. I'm also pretty sure it's not required for digital purchases like music and movie rentals, although I guess it wouldn't be as big a deal for Amazon to refund those later.
But this goes back to the same question: if the real security is the credit card information, then why is the SMS recovery option there? It's not secure, we have to rely on other mechanisms on top of it to keep people from doing really harmful stuff. Amazon still won't let you set up a 2FA method without a phone number.
SMS 2FA has nothing to do with spam, it's just bad security. You can do spam prevention without incorporating the number into 2FA or account recovery.
The company behind Blur is called Abine. Their core service, providing masked cards and phone numbers, requires American citizenship and a fee. But IMO, it's well worth it.
because it provides identity.
Being a unique number that is tied to a single individual, it can function as a proxy for identity. This, obviously, assumes you are operating like the average user.
And even then, biometrics can't usually differentiate between twins.
For twins, there is arguably no way for software to demonstrate identity, ever.
Social security? Proves you're holding a card.
Biometrics? Proves you're one of multiple with these exact genetics.
Etc. I literally cannot think of a way to definitively and authoritatively tell twins apart in software.
Although, I wouldn't want to give the public key to google/amazon/facebook/twitter :)
Or simply access that key of yours and use it?
The public/private key only prove you hold the keys, not that you are you.
Not identity, just proves you have access to the keys.
They are better than the consequences of getting locked out of my bank account because someone stole my phone number -- especially since the first thing someone who SIM-swaps will do is change the recovery number to point to a separate phone they own. I'll have to go through the exact same recovery steps, just with the added pressure of having money siphoned from my account.
Most 2FA app setups come with one-time backup codes that you can write down on paper and stick inside a safe or your wallet. If you don't think customers will do that, and you're still worried about account recovery, using email instead of SMS is preferred. For all the crap I regularly give Google, their security team is top-notch, and I generally trust most people's Gmail accounts to be more secure than their phone numbers.
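Generating those one-time backup codes is trivial for a service to do; a sketch using Python's `secrets` module (the code format and alphabet here are my own assumptions, not any particular service's):

```python
import secrets

def generate_backup_codes(n: int = 10, length: int = 8) -> list[str]:
    """One-time recovery codes: shown once for the user to write down;
    the server should store only a hash of each and invalidate a code
    after a single use."""
    # Alphabet skips lookalike characters (0/O, 1/l/I) for easy transcription
    alphabet = "abcdefghjkmnpqrstuvwxyz23456789"
    return ["".join(secrets.choice(alphabet) for _ in range(length))
            for _ in range(n)]

for code in generate_backup_codes():
    print(code)
```

At 8 characters over a 31-symbol alphabet, each code has roughly 40 bits of entropy, far beyond what online guessing can touch if attempts are rate-limited.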
The only scenario I can think of where a service couldn't rely on email for account recovery is if you yourself are an email service like Fastmail.
For online only businesses there’s no escape hatch.
However, substitute 'bank' out for any online-only service and the problem persists, because any reasonably smart attacker who SIM-swaps my account will still immediately change the phone number to point to themselves. If my Amazon account gets hacked, and the attacker goes into account recovery and changes the phone number, Amazon still needs a non-SMS based escape hatch for me to get the account back. The problem hasn't gone away.
So why not just use the non-SMS escape hatch all the time?
In Amazon's case, if they don't have your phone number they'll do account recovery over email. That's almost strictly safer, and just about as convenient as SMS.
I tried making a Twitter two months ago because I was having trouble with my phone and needed to access T-Mobile's social media-only support team.
Within five minutes of making the account, I was banned for "suspicious activity" and required to enter my phone number "for security purposes". But the only reason I made the account is because my phone wasn't working...
Emailed support and was told that they could not make an exception for me because I had broken some vague unnamed rule. Then I said the magic words, "This is clearly a ploy to collect phone numbers for data aggregation purposes," and within the next 24 hours my account was unlocked, accompanied by a very salty and accusatory email.
I have family and friends that share photos on their private Instagram accounts and nowhere else. I didn't have an account, so I installed the app on my phone, signed up, verified email and mobile number, and followed the family accounts.
The next day I open the app only to be prompted to log in again, which it won't accept (password stored in a password manager). A few minutes later an email arrives from Instagram saying that my account has been locked due to suspicious activity and demanding that I provide a photo of myself holding government issued ID. Their helpdesk refuses to do anything until I do.
I used a couple of throwaway addresses to get the kids logged into some Microsoft gaming service. Within two days a message arrived saying your account was compromised and used for spamming, we need a phone # to make it great again. The kicker: the throwaways were on my own domain and my own mail server.
Fucking morons, they don't even bother to lie convincingly.
I keep wondering, how will they be able to verify? I didn't use my real name, and I used a throwaway email address. So why do they need my ID? I could submit any ID I want; they still wouldn't know it's me.
This to me just sounds like a ploy to gather government issued information, to delete my account from a stupid game...
During log in, they already verified I own the email, by allowing me to sign in with password and email, and then input a code sent to said email address.
I wonder what they'd do if you sent them a clearly fake image of a passport or a driving license, with some swear words in name fields. I wouldn't be surprised if they banned the account. Or what if you put "Sir General Data Protection-Regulation" as your name?
If so, is it possible they have access to your name via your past money transactions?
Why do they suddenly not care anymore? My mom constantly calls me paranoid now for not wanting to do things like upload all of my photos to Google cloud.
I.e. their position back then was as ungrounded as their current position is.
At a basic level, you could think of it as providing the content for the audience - the inventory that is sold to advertisers. Twitter as we know it would not exist if all the users were tweet consumers, and none were tweet producers.
T-Mobile is a great example of a business providing customer service over Twitter.
The sceptic in me thinks this is more about advertising and less about security.
The same was done to me. The thought that the point of this was anything other than taking my phone number for advertising never occurred to me. It seemed very obvious, and I hope the EU, as the only political actor that sometimes seems to care about people, will do something about this.
Do you think this would still work?
I've been wanting to get a twitter but I've been put off by the phone number "requirement" for a long time. Every time I attempt to make an account or log into an old one it requires phone number verification. (I can't get burner phones where I am).
Would you mind sharing the anonymized content of said email?
post the email
Your account was locked due to a violation of the Twitter Rules (https://twitter.com/rules), specifically our rules around abuse: https://support.twitter.com/articles/20169997.
Your account is now restored. Please note that further violations may result in the locking of your account again or permanent account suspension.
They refused to take responsibility for the error.
Honestly, I wouldn't be surprised if the pipeline was set up wrong to match against the wrong field. These matching pipelines are just Hadoop jobs, written in Scalding (a Scala library for Cascading) and orchestrated/scheduled by Apache Aurora.
Internally Twitter generally operated the way you would expect them to as an external user. User data and privacy was taken seriously and no data was given to third parties etc.
Source: I worked at Twitter in 2015 as an engineer on ads (on the programmatic ad buying side and partner integrations).
That being said, it would have been incredibly easy for a single engineer to make this mistake (code review probably should have caught it, but maybe it looked just close enough to the right data source), and it would have been extraordinarily difficult to discover.
I've not worked in years at a place that wouldn't understand the importance of PII. Not that it doesn't happen, but let's not mince words here - this was wilfully done.
I don't mean to defend Twitter in any way, but I could easily see this being an oversight or a mistake.
It's entirely feasible to me that this was a mistake; I think people who assume this was deliberate are ironically putting more trust in tech companies than they should.
Most of the world is held together by duct tape, fastened by people who don't understand the systems they're fixing or maintaining. I don't think that tech companies are an exception to that rule.
The JIRA ticket was probably just something vague like "add support for phone number matching to tailored audience matching pipeline", likely created by a manager on the ads infra team. Context would have already been assumed. Given that these are simple data pipelines, there likely wasn't a design document specifically calling out the fields to match against for this task.
At Twitter it was also possible to deploy these Hadoop jobs without checking in code. They would need to run as the main ads system service accounts, but most ads engineers should have had the ability to deploy such a job.
As I mentioned earlier, the fragility of this part of the ads infrastructure I observed in 2015 makes me believe that a mistake is entirely possible here.
Example: Hadoop job writes some output file to HDFS, a different job reads files from a particular location on HDFS and processes them. If no files exist there must not have been anything to process right? But it could have also been the case the first Hadoop job failed which nobody noticed subsequently.
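A minimal sketch of that failure mode (hypothetical code, not Twitter's actual pipeline): a downstream job that treats "no input files" as "nothing to process" silently swallows upstream failures, while checking for an explicit success marker (like the `_SUCCESS` file Hadoop jobs conventionally write) turns the silent gap into a loud error.

```python
import os

def process_naive(input_dir):
    """Silently does nothing if the upstream job never wrote its output."""
    if not os.path.isdir(input_dir):
        # Looks like "nothing to process" -- an upstream failure goes unnoticed.
        return []
    return sorted(f for f in os.listdir(input_dir) if not f.startswith("_"))

def process_checked(input_dir):
    """Fails fast unless the upstream job marked its run as complete."""
    marker = os.path.join(input_dir, "_SUCCESS")
    if not os.path.exists(marker):
        raise RuntimeError("upstream job did not complete: missing " + marker)
    return sorted(f for f in os.listdir(input_dir) if not f.startswith("_"))
```

With the naive version, an upstream failure and "legitimately no data today" are indistinguishable, which is exactly the fragility described above.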
Anyways, it could have been an engineer by mistake, an engineer trying to get promoted and increasing revenue numbers, or an action at the direction of management. Don't rule out the first option though...
As a total shitshow?
They should be deeply fined.
I can't admit to anything I don't know about :)
I agree that in a well designed system and process 2FA numbers should not be accessible (like passwords). However, I've seen this as a common practice in quite a few startups.
They admit to using 2FA phone numbers for advertising purposes.
How is this not wrongdoing?
That they believe it was a mistake doesn’t make it right doing.
I'm very inclined to believe it was an unfortunate mistake. A mistake that probably should not have been possible. I'm not sure why or how it was.
Intention should have an impact on punishment / reparation outcomes, but it shouldn’t necessarily impact guilt.
E.g. maybe they at some point had dozens of databases for user data, all maintained by different teams and grown organically. And then had a big effort to merge them into a single system with a rationally designed schema. And maybe they ended up coalescing all phone numbers of each user from all sources together, with some kind of annotations for why each phone number was collected. And then it's very easy for somebody to screw up the code that's supposed to filter out security-only phone numbers when ingesting data into the ads targeting system.
The above is a totally theoretical scenario, but it should be easy to come up with lots of other ways for this to happen by mistake rather than intentionally. Don't think companies make these kinds of mistakes? Both Facebook and Google had issues with plaintext passwords this year. And it should be obvious that neither had anything to gain from that.
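To make the theoretical scenario concrete, here is a purely hypothetical sketch (the field names and the "collected_for" annotation are invented, not anyone's real schema): after merging many databases, records from an older source might lack the annotation, and a sloppy default quietly defeats the filter.

```python
merged = [
    {"user": 1, "phone": "+15550001", "collected_for": "profile"},
    {"user": 2, "phone": "+15550002", "collected_for": "security"},  # 2FA-only
    {"user": 3, "phone": "+15550003"},  # legacy record, annotation missing
]

def eligible_for_ads_buggy(record):
    # Bug: a missing annotation defaults to "profile", so legacy records
    # (which may well be security-only numbers) slip through the filter.
    return record.get("collected_for", "profile") != "security"

def eligible_for_ads_safe(record):
    # Safer rule: only numbers explicitly collected for the profile qualify.
    return record.get("collected_for") == "profile"

buggy = [r["user"] for r in merged if eligible_for_ads_buggy(r)]
safe = [r["user"] for r in merged if eligible_for_ads_safe(r)]
```

The buggy allow-list includes user 3, the safe one doesn't; the diff between the two is one default value and one comparison, which is about the size of mistake code review can miss.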
If there is some sort of opt-in to use your phone number for ad targeting purposes they have done an incredibly good job of hiding it in the UI.
I have a hard time believing that Twitter didn't know this.
I've never worked on the same scale as FB or Twitter and I know that this is a no-go area.
Nice use of passive voice there, mofos.
just so people know that Twitter at least has standards
Assuming accidents occur at some fixed rate per unit of data collected, it's not "funny", it's explainable: the more data one has, the greater the chance of things going wrong because someone uses data that is there but should not be used for that purpose.
That happens when systems grow so large that they cannot reasonably be understood by a single person.
I got phone calls from google cloud platform even though I am certain I didn't give them my number. They even managed to find my real name even though the only place I used it was for the payment method in the billing account, with a fake name on the google account. This is way too much information to give to their internal sales reps.
Does this mean the matches they've already made on these identifiers are still active?
They say they inadvertently did it, which may be true. But if they have the data it can be abused.
Goes for all those "omg think of the terrorists" data collection plans as well.
This is exactly why I am a huge fan of the "privacy by design" parts of GDPR and the fact that the regulation places a heavy emphasis on how the data is used in addition to who sees it. It has helped to crystallize the discussion about privacy as not just "what data" but "data + usage".
I like that engineers are increasingly required to think about not just what the data is but also how it's being used. When the act of connecting data sources has ethical impact, engineers can't be agnostic.
Any information that you provide to any company or government will be “misused.”
Opt out on https://twitter.com/personalization
(details and screenshot: https://twitter.com/simpleoptout/status/1178290986868297729)
I went through the recovery process just to make sure that I would be able to access my account and was asked the month and year I opened my gmail account.
I have had my gmail for 17+ years and I have no idea when I opened it, and now if I ever lose my phone I will be locked out of my account.
Does anyone know where I can see when I opened my account?
I travel year round and rarely keep a phone number for more than a few months. 2FA has increasingly become a problem for me as more and more companies force it on me. Paypal, for instance, is completely unusable for me; I have been locked out of my own account numerous times.
Twitter knows it is a rubbish security system, but they do it because it is another channel to send their garbage.
Twitter would go head over heels trying to remove that one field from their sign up flow to improve the funnel. In fact they will copy what their competitors are doing to understand things better. But when it comes to security or using data, anything goes unless they absolutely have to address it for legal/compliance reasons.
Just the state of affairs of tech companies these days.
I do believe them that they don't know who was affected and that this probably wasn't approved by the top brass (in that sense being a kind of accident), but that doesn't speak well to their technical or legal competence.
"Data harvesting," marketing and advertising are far from the only business reasons why a company might want to offer a free service.
From a business perspective, a free service might
- build goodwill for the company
- satisfy legal or government requirements
- save money on billing while increasing the value of other goods and services from the company
- enable the company to qualify for certain grants or subsidies
- support the creation and improvement of goods or services (such as software) which the company uses
- enable the company to influence industry standards
- help to acquire future paying customers (e.g. by offering a free student or limited version which can be upgraded with additional paid features)
- help to compete against other companies that charge for the same service
- act as a loss leader to encourage sales of compatible companion products or accessories
I never gave facebook my number even though prompted for it literally every time I logged in.
Twitter's actions make everyone less secure. The next time an online service asks me to enable 2FA to protect my account, I'll have to consider whether the potential for abuse of my 2nd factor information is worth the additional risk to my account.
This doesn't happen accidentally; someone had to engineer, test, and build a system that performed reliable targeting.
This is no different than a chemical plant "accidentally" building a conduit a quarter mile and dumping waste into a river. Of course they know.
However, I have made mistakes like that while programming all the time... for example, having massively more data written out to disk than I had intended. You are never going to make a mistake in a factory that increases its electricity consumption by 1,000,000x, but I have made errors that caused a million times more data to be written out to disk than planned.
Maybe not a million, but a thousand times? Sure. Just short some heavy machinery and watch the amperes light up the day like a second sun as they rush out to pour into the ground. Factories have breakers to prevent that, you'll say. Turns out software projects don't have fuse equivalents, even though they should. The reason is that bleeding electricity is expensive (and dangerous to people, which from the company's POV means "even more expensive"), whereas data leaks aren't. I hope the latter is going to change.
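A software "fuse" of the kind described above could be as simple as the following sketch (names and the byte-budget policy are invented for illustration): instead of letting a buggy job write a million times more data than planned, trip once a per-run budget is exceeded, the way a breaker cuts a short circuit.

```python
class DataFuse:
    """Hypothetical output fuse: refuses writes past a per-run byte budget."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.written = 0

    def write(self, sink, chunk):
        # Trip before the budget-breaking write lands, like a breaker
        # opening the circuit before the wiring melts.
        if self.written + len(chunk) > self.budget:
            raise RuntimeError(
                "fuse tripped: write would exceed budget of %d bytes" % self.budget)
        self.written += len(chunk)
        sink.append(chunk)

sink = []
fuse = DataFuse(budget_bytes=10)
fuse.write(sink, b"12345")  # fine: 5 bytes used
fuse.write(sink, b"12345")  # fine: exactly at the 10-byte budget
```

A third write of even one byte raises instead of silently inflating output; the hard part in practice is picking budgets that catch the 1000x bug without tripping on a legitimately busy day.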
If you step into the street in front of a truck because you think the truck is farther away than it turns out to be, it's an error. If you are playing baseball, and you estimate the vector to intercept a ball rolling along, and you touch your glove to the ball but fail to actually catch it, it will be classified as an error. You made an estimate of what the circumstances were; it was wrong, and you acted on purpose.
Likewise their definition of mistake uses error a couple of times. https://www.dictionary.com/browse/mistake
I get it though, there is a subtle difference. I'm just saying both terms mischaracterize (intentionally) what I think was likely a very intentional, knowing and calculated action by Twitter. I'm asserting without proof that there was no inattention as in your first example, and no miscalculation as in your second example.
The baseball example is problematic because "error" has a game-specific meaning just like "run" and "base," and whether you commit a mistake or an error, they will both be recorded as errors.
Recently I read an article saying that programmers should reserve "error" for the mathematical usage of "distance from correct result" (as in, behavior of the program), and refer to their bugs and other mistakes as "blunders" instead. I have to think on it more but I like the overall direction of the distinction.
I think we can file this under 'dark patterns' and finally realise Twitter is attempting to be as abusive as Facebook.
Or I could email them to unblock it (my account has no email address associated with it)
So perhaps one team forgot to secure it properly (based on role authorisation) and another team saw the field and didn't think it was something they shouldn't use (maybe an OKR or some target was pushing them toward eagerly using whatever they could get).
Obviously nothing happened accidentally at a coding level, but at a business-constraint understanding level a mistake may have been made.
If Twitter doesn't have common data classifications, and everyone with a keyboard and an idea has access to the database without review, they are probably verging on criminal negligence. I understand it at a company with just a couple of employees; this shit happens all the time. I don't understand it at a publicly traded company that would be in big trouble with a class-action data lawsuit.
Besides that, there are cycles of product reviews with directors and VPs.
How many times have you, during the process of doing work, said "Oh hey, I think this might be a nice feature to work on for the next 3-4 days" and just not done any of your normally scheduled work?
Twitter probably has a serious BI department, so I'd imagine they have ways of getting at various data vectors for analysis, and BI folks usually complain loudly when you block data that lets them "cohort" usage data, to the point where it can be easier to just open the door for them on a read-only instance.
They mistakenly thought this is fine. They made an error in judgement. Etc
However, you are right in saying that this violates the GDPR, because the data was collected for a different purpose, and using it for another reason like advertising is not allowed.
I am totally fed up with GDPR, its expectations of small/micro companies, and the false sense of security it gives to users.
SHAME on Twitter, shame on FB, shame on the stupid EU.