Personal information and ads on Twitter (help.twitter.com)
464 points by coloneltcb 6 days ago | 224 comments





Virtually the entire security industry agrees that using phone numbers for account security is an antipattern because of sim-jacking, and yet swaths of the biggest tech companies in the industry do it anyway.

I recently got locked out of my Amazon account because I made a large purchase after not ordering anything for ~6-7 months. During the reset process, they tried really hard to get me to set a phone number 'for account security'. From what I could tell from their documentation, it's not even just used for 2FA, it's literally just a way to prove my identity if I need to reset my password.

I refused, and then a few days later Amazon called me up to reconfirm the order anyway, even though I had never given them my number. Their entire account recovery process from that point on was based on me having access to information that was already listed on my account, that the hacker would have 100% had access to. It was all just security theater, literally the only thing that mattered was I had access to my email and a phone number.

Fastmail (to its credit) allows you to have 2FA without a recovery number, but only via a workaround: you have to add a recovery number, activate a real 2FA app, and then delete the number. At least it doesn't (as far as I know) use the number on its own for account recovery.

Twitter's CEO got hacked because Twitter trusted phone numbers as identity, and they still haven't changed the policy, because collecting phone numbers is fun or something.

In theory, 2FA over SMS is better than nothing. In practice, it trains customers to be insecure and should be avoided. It trains customers to think that identity verification over text is OK. In practice, you can't trust companies not to use it for advertising, or to start using it as identity verification in the future. In practice, there are very, very few legitimate reasons why a company should ever need my phone number, and pretty much none of them have anything to do with security. 99% of your users should be using a 2FA app instead of a phone number.

Companies like Twitter should be shamed for misusing security information this way, but they should also be shamed for using insecure authentication methods. I'm convinced that 5 years from now, we're going to look back at SMS authentication the same way we looked at serving login pages over HTTP.


I think it is important to separate criticisms of 2FA over SMS from criticisms of companies who say they have 2FA. I think even in practice 2FA over SMS is definitely better than nothing; it's a lot harder to both guess someone's password and put in the work to hijack their phone number. But as you said, many big tech companies say they have 2FA, when they really are just giving you two ways of logging in, where one of those ways is incredibly insecure.

It seems crazy to me that recovery numbers are a thing. I mean, I'm sure it helps reduce customer service load, since people just recover over SMS rather than trying to call and get their account re-activated, but it is so insecure.


I am sympathetic to claims that 2FA over SMS is better than nothing, because technically it is completely true. You're right.

However, as a user, I go back to the idea that I can't think of many companies I trust to only use my phone number as 2FA and not as identity verification. So I am skeptical that it is good to train users to trust SMS 2FA, because those same users will probably not be able to distinguish between 2FA and identity verification when they sign up for other services. It is better to teach users a simple rule (never give out your phone number) than a complex rule (only give out your number for this specific use-case).

The other big thing I just can't get past is that nearly everyone today has a smartphone that will run a 2FA app, and that even users who don't have a smartphone would be better served by getting codes delivered to their email. So sure, it's better than nothing. But there are even better options that exist that aren't that hard for us to switch to.

In practice, even if you know you're only going to use SMS for 2FA, I now lean towards saying you shouldn't use SMS at all. Treat email like the backup SMS option, and just get rid of phone numbers entirely.

Maybe the dynamics of that change for some developing countries? But Twitter, Facebook, and Amazon all know what country I live in. If they want to offer an SMS option for India because of some extenuating circumstance I can't think of, they should still have the good sense to at least discourage SMS verification for accounts that are based in the USA or Europe.


> nearly everyone today has a smartphone that will run a 2FA app

What do they do when their smartphone dies? The phone company makes sure your new phone has the same phone number, but you lose 2FA tokens in apps.

And I have no idea where the recovery codes that I printed on paper are since I last moved.

> even users who don't have a smartphone would be better served by getting codes delivered to their email

But then isn't that just one factor instead of two factors, because both their password for the service and their email password are just "something you know"? I'm assuming if they have no cell phone, they don't have a second factor to secure their email either.


> And I have no idea where the recovery codes that I printed on paper are since I last moved.

It should be right next to the document that explains to your loved ones how to get into your password manager, if you ever get hit by a bus.


This is a reasonable concern, and users should be aware of the risks behind real 2FA. But if you really dig into this, it starts to fall apart.

> The phone company makes sure your new phone has the same phone number

This is exactly why 2-factor SMS is insecure. You mention later that email is something you know, instead of something you have. In the same way, if a company can transfer my number to a new phone without access to the original phone, then it's not really something I have.

The ease of number transfers is the problem. The reason 2FA tokens aren't stored online and secured with a password is that they are designed to be something you have, not something you know.

For comparison, switching your number to Verizon only requires information that you know (account numbers, a SSN)[0], so it's just extra steps around a less secure password that you can't change or set yourself.

> But then isn't that just one factor instead of two factors

Expanding on the above -- yes, email is often going to be just another account secured with another password. In practice, hacking two accounts is often harder than hacking one, and in practice, I suspect breaking into someone's Gmail account is harder than stealing their phone number. Google offers much more comprehensive 2FA options than most other companies, and their automated security alerts also tend to be better.

But there's no reason for us to debate over how secure email is.

The situation we have today with companies like Amazon/Facebook/Twitter is one where I can already request a password reset without SMS. Companies are scared of strict 2FA methods because customers get locked out of their accounts. Very, very few of them are willing to take that risk, so email will virtually always be an option. SMS is being added on top of that system -- it's not replacing it.

Here's Twitter's account recovery help page[1]:

> If you do not receive anything back, get help with Twitter via SMS or use the email password reset option.

So if you consider email to be a weak link in identity verification/2FA, adding SMS verification as a secondary option alongside email still doesn't do anything to increase your security. In fact, even if SMS was as secure as email, forcing you to monitor two authentication methods instead of just one would still be less secure.

I'm not advocating email is perfect, I'm just advocating that SMS is less secure than email, and that since companies are already comfortable trusting email, they can continue to rely on that.

Of course if you really want to set up 2FA to literally be 'something you have', then you need to accept that things you have can be lost. And if you're not willing to make that compromise, at least email accounts are harder to hack than phone numbers, because the most common email providers are probably more resistant than Verizon to social engineering attacks.

[0]: https://www.verizonwireless.com/support/local-number-portabi...

[1]: https://help.twitter.com/en/managing-your-account/forgotten-...


You know, your reply sounds reasonable, but if you really dig in, it starts to fall apart.

> Twitter [...] So if you consider email to be a weak link in identity verification/2FA, adding SMS verification as a secondary option alongside email still doesn't do anything to increase your security.

I agree, and your comment's parent (my comment's grandparent) specifically went out of its way to agree: "as you said, many big tech companies say they have 2FA, when they really are just giving you two ways of logging in, where one of those ways is incredibly insecure."

> switching your number to Verizon only requires information that you know (account numbers, a SSN) [...] hacking two accounts is often harder than hacking one, and in practice, I suspect breaking into someone's Gmail account is harder than stealing their phone number

I think the number of people who go to the trouble of using like, a Yubikey or something for their Gmail but won't use it for anything else is vanishingly small. People opting for password + SMS 2FA (NOT the SMS 1FA that let @jack get hacked) are probably using the same thing for their email.

I'm sure it's true that it's easier to steal someone's phone number than break into their Gmail account, but afterwards you can go into Verizon's physical store with your physical government-issued ID and get your phone number back. That's not an option with a Gmail account.

No one is saying any of these are perfect, and everyone agrees SMS is less secure than email. The questions are whether password + SMS 2FA is less secure than email 1FA (doubtful), and whether password + email 2FA with no account recovery pathway is workable (definitely not).

Let's agree that out of password + SMS 2FA, password + email 2FA, and email + SMS 2FA, the first one is the weakest link, because SIM-jacking is terrifyingly easy and people choose terrible passwords. Just for account recovery, though, email + SMS 2FA still provides security benefits over email 1FA (you can guarantee a second factor, even if it's a weak factor, whereas you actually have no idea how strongly or weakly their email account is protected, you're just assuming) and usability benefits over email + TOTP apps/Yubikeys/paper backup codes.


> Just for account recovery, though, email + SMS 2FA still provides security benefits over email 1FA

Agreed, but I can't think of a single company, anywhere, that offers what you're talking about. Everyone offers SMS and email as separate options, both of which separately unlock your account.

If Twitter, Facebook, or Amazon required both email and SMS access to recover an account, I'd agree that there could be some value there. But (to the best of my knowledge) they don't. So the debate over whether or not SMS verification is better than nothing is hard for me to indulge, when (again to the best of my knowledge) virtually no company is using SMS account recovery in a way that provides real value over 1FA email.

Maybe Lyft is an example? But the last time I used Lyft, I'm pretty sure I could get access to my account with only my phone, no password/email required. I'm not 100% sure Lyft even requires an email to sign up.

> That's not an option with a Gmail account.

I've never been in this scenario, so I'll have to take your word for it, but this seems strange to me. Could I really not fax or mail a government ID to Google to get access to my account?

Assuming this is right though, we again run into the same problem.

I lose access to my password and email. Is a company comfortable letting me reauthenticate with only an SMS message?

If yes, then we have 1-factor authentication over pure SMS.

If no, then we have to be comfortable with the idea that losing your email/password might mean losing your account, or going through a complicated recovery process involving government IDs.


> I can't think of a single company, anywhere, that offers what you're talking about

I think you're right. I thought Vanguard or my bank did, but no, it's email or SMS plus personal info like SSN, birthdate, zipcode (LOL, information that no one has on me, thanks Equifax!).

> Could I really not fax or mail a government ID to Google to get access to my account?

To what address? Just plop it down at 1600 Amphitheatre Pkwy? I've never heard of Google offering any account support whatsoever for a free private Gmail account, have you?

I do personally know people who have just given up on accounts they lost access to (they claim they didn't forget the password) and just created a new Gmail account. Not the most technically literate people, but still, that's who support is supposed to be for. But it's a free service, so.

> then we have to be comfortable with ... going through a complicated recovery process involving government IDs

What? The whole point I've been trying to get across is that unlike TOTP or email, you can get your phone number back through a "complicated recovery process involving government IDs", which is an advantage. That's not a tradeoff to be comfortable with, that's an upside.

> I lose access to my password and email.

Why would we design for this? Unless there's a particular reason to think password and email are likely to be lost simultaneously (which I can't think of, unlike, say, smartphone TOTP app + phone number), we should either design for losing any combination of 2 auth methods, or not worry about combos.

By contrast, to me it could make sense to design a system so that you can lose any 1 of 3 things but still be able to log in with the remaining 2 (e.g. password, email, phone number). But you're right that most services are effectively just email 1FA, and many are SMS 1FA too, which we all agree is utterly broken.


> For comparison, switching your number to Verizon only requires information that you know (account numbers, a SSN)[0], so it's just extra steps around a less secure password that you can't change or set yourself.

Don't know in the USA but here in Europe you have to confirm/prove that you own the number being moved from one operator to another.


Yeah, this is basically the same thing that happened with Social Security numbers. Companies promised only to use them to verify they had the correct credit file, but +/- 90 years later they're a de facto ID that no longer has any security value.

Edit: I suppose phone numbers can at least be changed, but that doesn't really solve the problem for 2fa as it might for SS numbers.


> It is better to teach users a simple rule (never give out your phone number)

What is this supposed to mean? What is the point of having a phone number if you never give it out?

I suppose you can make outbound calls only, with caller ID turned off, but that sort of diminishes the purpose of owning a phone number?


Never give out your phone number [for security purposes].

In other words, if a company tells you they need your number for account security, refuse to give it to them. Of course you can give out a phone number for general contact purposes.

I think it's easier to teach people phone numbers can't be used for security, period, than to teach them that phone numbers can be used for security in one very specific case -- especially since companies like Facebook/Twitter rarely use terms like 2FA when asking users for their number. They just say stuff like, "you'll use this to gain access to your account" or, "we'll use this to help keep your account secure."

It wasn't my intention to make it sound like users should never tell anyone their number for any reason, but I can definitely see how it might read that way.


Against Live Phishing, both SMS 2FA and TOTP are ineffective. If anything the SMS 2FA might actually falsely reassure naive users that they're not being phished.

Alice is tricked into visiting fakebank.example a site which looks exactly like Alice's real bank. She enters her username and password, the site says it will send her an SMS, behind the scenes Alice's details are plugged into realbank.example automatically, triggering an SMS to Alice from her real bank.

Alice types the SMS code into fakebank.example, it stalls a little bit, very common for banks, meanwhile it plugs the SMS code into the realbank.example site and successfully logs in as Alice. Cool. The fakebank.example site finally gives an error. "Code 418. We're sorry, this service is temporarily wormhole phasers galactic transwarp. Please try again later". Alice is annoyed but decides to try again in an hour. By then her account will be empty.

WebAuthn / U2F work fine here, because they don't give Alice the opportunity to mistake this for her real bank, the FIDO token will cheerfully mint credentials for fakebank.example which are no use to log in as Alice on realbank.example. There's no "I'm sure" step, no "Actually this is my real bank", no opportunity for human error to betray her. The bad guys still annoy Alice with their bogus site, but they don't get working login credentials.
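The relay works because an SMS code carries no information about which site it was typed into, while a FIDO assertion is bound to the origin. A toy model of that difference (illustrative only; real WebAuthn uses per-site public-key credentials, not a shared HMAC key):

```python
# Toy model: an SMS code is origin-blind and survives relaying; an
# origin-bound assertion signed for fakebank.example fails verification
# at realbank.example.
import hashlib
import hmac

DEVICE_KEY = b"secret-held-by-the-token"  # illustrative stand-in for a keypair

def sign_assertion(challenge, origin):
    # The authenticator mixes the origin the user is *actually* on
    # into the signed material.
    return hmac.new(DEVICE_KEY, challenge + origin.encode(), hashlib.sha256).digest()

def bank_verifies(challenge, assertion):
    expected = hmac.new(DEVICE_KEY,
                        challenge + b"https://realbank.example",
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

challenge = b"nonce-from-real-bank"
genuine = sign_assertion(challenge, "https://realbank.example")
relayed = sign_assertion(challenge, "https://fakebank.example")  # phishing page

assert bank_verifies(challenge, genuine)
assert not bank_verifies(challenge, relayed)
```

Alice never gets a chance to hand the attacker something the real bank will accept, because the origin check happens in the protocol rather than in her judgment.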


I have a story of how 2FA over SMS is worse than nothing (written as a throwaway, because I don't want a target on my back).

I had an account with Bank of America and a prepaid AT&T plan. I was about to move abroad so I went to my local bank office and asked if there was anything I needed to do to prepare for this. "Nothing in particular" I was told.

After moving abroad I opened another bank account in that country and tried to transfer my money over to said account. However, in order to transfer the money out I had to enroll (I wasn't already) in SMS 2FA. As AT&T prepaid does not have roaming, I wasn't able to enroll using it. And only American numbers were accepted. I got on the phone (a different one) with AT&T to see if roaming could be enabled. "-Nah, there is no roaming for prepaid, only for postpaid. -Can you sign me up for that? -No, only in a store."

Now I weighed my options. I could fly back to the US to resolve it, if I could get another visa. I could fly to Mexico or Canada, where there is roaming. In the end I went with the advice the bank's support gave me: I enrolled a friend's number. This does not feel secure at all, but what are you gonna do?

Moral of the story: If you move abroad, better make sure your sim supports roaming, because you're gonna get locked out of stuff otherwise.


I disagree. SMS 2FA is much worse than nothing, unless a service has no password reset option via SMS alone. I believe many services do allow this, including Gmail. Barring password re-use (which admittedly is common), it's way, way easier and faster to spend 15 minutes discovering someone's phone number and calling their provider to request a SIM swap than trying to bruteforce their password via the service's login page/API.

If it's truly always 2FA, including two-factor password reset (need SMS verification code in addition to email verification code), then it's probably better than nothing. But otherwise, it's actually a major exposure you're opening up for yourself, especially if you otherwise have good password practices (no re-use, and ideally pseudo-randomly generated from a password manager).
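A reset policy like the one described, where recovery only succeeds when both channels check out, is simple to state. A sketch (function names and the in-memory code store are illustrative):

```python
# Sketch of a two-channel password reset: stealing the phone number
# alone is not enough, because the email code is also required.
import secrets

def issue_codes():
    """Issue independent one-time codes over each channel."""
    return {"email": secrets.token_hex(4), "sms": secrets.token_hex(4)}

def can_reset(issued, presented):
    """Allow the reset only if *every* channel's code matches."""
    return all(
        secrets.compare_digest(issued[ch], presented.get(ch, ""))
        for ch in ("email", "sms")
    )
```

The contrast with most real services is that they treat the channels as alternatives (either code unlocks the account), which reduces security to the weakest channel instead of requiring both.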


I recently had a third-party seller from an Amazon purchase send me a marketing postcard. Item was "fulfilled by Amazon", so transmitting my address to the seller was not needed to fulfill my order.

The postcard contained a call-to-action to visit a URL. Fortunately, I was suspicious and went there in an incognito window, because it redirected to a page that used any signed-in Facebook session to send them a message, presumably to harvest more data about me.

I wrote a review on the product condemning this practice and Amazon removed the review as "not about the product". Quite frustrating - seems Amazon just doesn't care about customer privacy.


> I wrote a review on the product condemning this practice and Amazon removed the review as "not about the product". Quite frustrating - seems Amazon just doesn't care about customer privacy.

The way Amazon reviews work is that reviews of the same product (even different variations of the same product) are combined, so anything you say about an individual seller in "ask a question" or "write a review" applies to every listing of that product site-wide, which makes it meaningless. It makes sense why they would take this action.


Do you know if there is a way to "report a seller" or something like that? I was unable to find any option for this in their support system.

I was going to say yes, and also show you how to review a seller, but I'm shocked to report that apparently Amazon removed any way to directly review or comment on a seller as of their latest site design. So I suppose there isn't outside of emailing or contacting Amazon support directly.

That's a real shame, because if anything I suspect sellers on Amazon have become less trustworthy in general, not more.


Ah, I poked around. They still have a way to review sellers if you click on the seller from the Orders page, but clicking the seller's name on a product page no longer allows you to rate the seller.

If you or the seller are in the EU, you could report this to the data regulator.

(One complaint for Amazon sharing the address, one for the seller sending the unsolicited card, and one more for the Facebook message sent without data-collection consent.)


You can review the seller as well as the product.

For the 98% of people who aren't high-value targets (CEOs, journalists, etc), a phone number is a perfectly adequate second factor.

Especially in the absence of a universally available, universally usable alternative. The user experience of TOTP authenticator apps, paper recovery codes, and (so far) U2F tokens just isn't there yet, and the negative impact to individuals from being inadvertently locked out by the loss of their second factor is massive.

SMS for recovery is deeply flawed, but it's 1) better than nothing and 2) better than any current alternative for the vast majority of people.


But... the experience of SMS as a second factor is also miserable. You don't have access to it when you're abroad, and you can lose access to a phone number just the same as any other second factor. In fact it seems more likely, because there are other unrelated reasons the phone company might take the number from you, in addition to the possibility of physical loss. And if you need to change phone numbers, it is not easy to remember all the services that have the old number stored as a factor, so you can update them all.

But you can also gain back a phone number, by bringing your new phone to the phone company's physical location and presenting ID. That's not really an option with TOTP tokens.

I mostly agree with the other points (although I don't think I'm the only one with Wi-Fi calling and SMS, even abroad).


> But you can also gain back a phone number

...at which point the attacker has already taken over all your important online accounts and changed them to a different phone number.


Not without your password. The comment I was replying to specifically referred to the phone number as a 2nd factor. This excludes services like Twitter where a phone number can be the sole factor used to reset your password (which is awful security, obviously).

Also, you're specifically talking about SIM-jacking. The vast majority of people have upgraded phones, or broken or lost and then replaced a phone, far more times than they have been SIM-jacked. What do they do about their TOTP tokens in those scenarios?


That depends on the way the phone company operates surely? You can’t do it if you are abroad, or if it’s a prepay SIM and the phone company doesn’t know your identity. Some telcos will give your number to somebody else if you don’t connect with the network for too long.

Sure, you're totally right that in some cases, physically losing a phone also means loss of that phone number. But in 100% of cases, physically losing a phone means loss of those TOTP tokens.

For the vast majority of people, that's a dealbreaker, in spite of SMS being deeply flawed, as the comment you originally replied to said.


And, unless you're willing to say "tough noogies" if someone loses access to their token/app/codes you're back to the original problem of reliably verifying identity and ownership, especially in a virtual setting where telling someone to show up at a physical location with all their paperwork is not really viable.

I think the 1Password experience for 2FA is pretty good, even on mobile. Literally just have to paste the code.

It’s a shame people don’t know how to print out paper tokens and keep them safe.

To a good first approximation, nobody has a printer.

I assume everyone does have a pen. You don’t even need to write down all the provided recovery codes; just one will do.

How would I, a human being with no knowledge of cryptography or application security, know that?

I have two objections to this.

First, I disagree that only high-value targets are at risk. I think anyone can become a target for SIM-jacking. You can be a target because you're Jack Dorsey, but you can also be a target because you suddenly get swept up in a Twitter controversy, or because you run a corporate social media account, or because an ex-partner decides they want revenge over something.

A lot of our security protects against untargeted attacks because they're more common, but targeted attacks can't be completely ignored.

The second problem I have is that high-value targets use Facebook, Twitter, and Amazon, and they should be able to use these sites without worrying that their accounts are by-default insecure. We're not talking about a local library account, these are the biggest tech companies in the world. The president of the United States uses Twitter.

I think that taking a strong stance on security is important in that scenario.

I'm assuming (hoping) that Trump's account is secured with a phone-number that isn't public and that can't be SIM-swapped. But who knows, Dorsey's wasn't.


You can be targeted just for ending up in a database leak.

... until you become a medium-value target for someone (maybe in a business-related dispute), and now your email is easily hackable

I actually blame AT&T and Verizon for this. They make it FAR too easy to transfer a number to a new phone. 2FA aside, losing control of your phone number can cause all sorts of damage.

If it was harder to transfer a number to a new phone it wouldn't be that bad to use that method of 2FA for non-critical services.


Then blame the government, because telcos didn't always allow number transferring. Then consumers complained and the government made it mandatory.

*Citation needed*


They didn't allow number transfers between carriers, but they did always allow number transfers within carriers AFAIK. Perhaps there was a time when numbers were hard tied to SIMs and I'm too young to remember?

Is it really the telcos' fault, though? It seems that telcos suddenly found themselves pushed into the business of identity management and proofing without having any intention of doing that.

Telcos never made any claims about tying your identity to your phone number - they just move information around. Blaming telcos is like blaming a hammer for not being big enough when you're trying to drive a screw with it.


It's definitely not like that at all.

It's not even really identity management. It's just way too easy to transfer a phone number. A simple protocol, like sending a text or email to the contact info on file and then waiting 24 hours to complete the transfer, would be enough to stop 99% of fraudulent number transfers.

Instead they process these transfers instantly, often using very little information to verify that the request was legitimate. You can often sweet-talk the customer service reps with a simple "I forgot what information I used to sign up." They process that request, and then it ruins your life and haunts you for the next 5 years.
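The delay-and-notify protocol suggested above is simple enough to sketch (the in-memory store and names are illustrative; a real carrier system would persist the request and actually send the notification):

```python
# Sketch: notify the contact on file, then refuse to complete the
# port-out until a 24-hour hold has elapsed, giving the real owner a
# window to object.
HOLD_SECONDS = 24 * 60 * 60

pending = {}  # number -> timestamp the transfer was requested

def request_transfer(number, now):
    pending[number] = now
    # Real system: send an SMS/email to the contact on file here.
    return "notification sent; transfer on hold"

def complete_transfer(number, now):
    requested = pending.get(number)
    return requested is not None and now - requested >= HOLD_SECONDS
```

Nothing about this prevents legitimate ports; it only removes the "walk in, sweet-talk a rep, walk out with the number" path that instant processing allows.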


If you hold down Alt/Option on the Fastmail security page you can enable 2FA without a phone number. Their helpful support person cautioned me to not lose my 2FA device or recovery codes :)

> Virtually the entire security industry agrees that using phone numbers for account security is an antipattern because of sim-jacking, and yet swaths of the biggest tech companies in the industry do it anyway.

From the security side, this is incredibly frustrating. Very often there's someone in product management who insists that users love the SMS factor. That it's great because users don't have to use a special app or transfer it when they change phones. They will be backed up by an engineer who swears that SMS is great because using a TOTP app is an unacceptable corporate imposition on their personal device.

Meanwhile, someone in finance quietly sobs as they pay the Authy or Twilio bill, but nobody seems to care about the opex budget.


How does FIDO fit in here?

It's a good idea. Inconveniently, it's also one that most consumers are unfamiliar with and not really prepared to implement.

This is not made easier by the hardware complexities. What if the user has a slightly dated PC (USB A), an iPhone pre-USB C, and a modern Mac?


It's not for 2fa that phone numbers are required. It's to deter spam accounts.

Though, if SIM-swapping is too costly to do en masse, then I guess it might be an effective 2FA method for an average user. But not for ones likely to be targeted.


> It was all just security theater, literally the only thing that mattered was I had access to my email and a phone number.

Don't they need a phone call to identify that it's really you by matching your voice print against Alexa recordings? ;-)


Alexa, who am i?

Apple always asks for a phone number, and I don't think you can sign up for Google without one. Yes, it drives me crazy. I travel often and stay abroad often. I use different SIMs, so my phone number changes; even if they wanted to contact me via the phone number to send an SMS or whatever, they often can't. I can sign up for an internet-run number, but the entire point is I don't want to have to deal with a phone number in 2019. People don't call me anymore. If they want to talk they use one of the 150 messaging services that are free.

What practical ways are there for a company like Twitter to verify your identity for purposes of account restore verification without using some increasingly unreliable piece of info like phone or social security numbers?

The available alternative today is token 2FA, which has to be set up and requires some savvy, plus a number of one-off security codes which are issued at account creation and which the user must then figure out how to store securely, indefinitely. Forcing this on the avg Twitter user is obviously a nonstarter.
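For what it's worth, the one-off security codes mentioned here are cheap to implement on the service side: store only hashes of the codes and burn each one on use. A sketch (names are illustrative):

```python
# Sketch of single-use recovery codes: the plaintext codes are shown to
# the user exactly once; the service keeps only their hashes.
import hashlib
import secrets

def generate_codes(n=10):
    codes = [secrets.token_hex(5) for _ in range(n)]          # shown to user once
    stored = {hashlib.sha256(c.encode()).hexdigest() for c in codes}
    return codes, stored

def redeem(stored, code):
    h = hashlib.sha256(code.encode()).hexdigest()
    if h in stored:
        stored.discard(h)   # burn the code: each one works exactly once
        return True
    return False
```

The hard part, as the comment says, is not the implementation but getting an average user to store those ten strings somewhere they can find years later.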


It's certainly a conundrum. Depending upon the type of account involved, there has to be some reasonable medium between

1. Accounts/resets performed because you asked nicely and maybe provided some laughably easy-to-obtain information and

2. Showing up at the Twitter office in San Francisco with notarized copies of your birth certificate, latest utility bill, Social Security Card, etc.

And that happy medium will doubtless be both vulnerable to a determined targeted attack and a sufficient PITA for some users that they'll end up losing access to an account.


The phone number is not used for individual account security; in most cases it is there to increase the cost of creating accounts. For services like Gmail, this is an important step in cutting down on spam.

In Amazon's case, the real security is the credit card info. If you tried to add a new address you would need to confirm the credit card info, even if you sim swapped the account owner.


> The phone number is not used for individual account security, it is there to increase the cost of creating accounts in most cases.

Then why can I reset my password with it?

The spam issue is separate. Twitter can ask a user for some kind of proof that they're not creating duplicate accounts. Nothing in that process requires them to then take that info and make the account less secure. Once they've verified the user is human, they could just throw the number away -- instead, they use it for account recovery.
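If duplicate prevention really is the goal, the "verify, then throw the number away" idea is workable: keep only a one-way fingerprint of the number for dedup. A minimal sketch (the key, normalization, and in-memory store are all invented for illustration; this is not how any real service is known to do it):

```python
import hashlib
import hmac

SERVER_SECRET = b"example-secret-key"  # hypothetical per-service key

def fingerprint(phone_number: str) -> str:
    """Keyed hash of a normalized number; usable for dedup, useless for ads."""
    digits = "".join(ch for ch in phone_number if ch.isdigit())
    return hmac.new(SERVER_SECRET, digits.encode(), hashlib.sha256).hexdigest()

seen = set()  # stand-in for a persistent store of fingerprints

def verify_new_signup(phone_number: str) -> bool:
    """True if the number hasn't been used before; the raw number is never stored."""
    fp = fingerprint(phone_number)
    if fp in seen:
        return False  # likely a duplicate account
    seen.add(fp)
    return True
```

Because only the HMAC survives, the number can't later be repurposed for account recovery or ad matching without the user re-entering it.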

Even with Amazon:

> In Amazon's case, the real security is the credit card info.

I was able to recover my account with just an email address and a phone call. I'm pretty sure you're right that an attacker couldn't enter a new address. I'm not sure whether re-entering credit card info is required to use Amazon locker. I'm also pretty sure it's not required for digital purchases like music and movie rentals, although I guess it wouldn't be as big a deal for Amazon to refund those later.

But this goes back to the same question: if the real security is the credit card information, then why is the SMS recovery option there? It's not secure, we have to rely on other mechanisms on top of it to keep people from doing really harmful stuff. Amazon still won't let you set up a 2FA method without a phone number.

SMS 2FA has nothing to do with spam, it's just bad security. You can do spam prevention without incorporating the number into 2FA or account recovery.


Have you ever heard about Blur? It lets you mask your true identity by offering masked email addresses, credit cards and phone numbers.

The company behind Blur is called Abine. Their core service which is providing masked cards and phone numbers requires American citizenry and a fee. But IMO, it's well worth it.


>biggest tech companies in the industry do it anyway.

because it provides identity.


No it doesn't, it just provides proof that you currently have access to that phone number. Phone numbers are not identity.

Most people aren't walking around with one time phone numbers, they have a phone number that's shared by family, friends and co-workers that will consistently resolve to the same individual whenever someone wants to connect.

Being a unique number that is tied to a single individual, it can function as a proxy for identity. This, obviously, assumes you are operating like the average user.


There are plenty of attacks where some random person can get hold of any phone number for a few seconds. It is not a proxy for identity, as it's available on demand for the exact people trying to impersonate you.

Regular people don't "have" or control any phone number, telcos do.

So, then, the only option is multi-factor biometrics. Everything else is just "not identity", right?

And even then, biometrics can't usually differentiate between twins.

For twins, there is arguably no way for software to demonstrate identity, ever.

Social security? Proves you're holding a card.

Biometrics? Proves you're one of multiple with these exact genetics.

Etc. I literally cannot think of a way to definitively and authoritatively tell twins apart in software.


I have a public/private key-pair from my local government. Comes with my passport and is guaranteed by the gov to represent a single person only.

Although, I wouldn't want to give the public key to google/amazon/facebook/twitter :)


In the twin example, what is to prevent your twin from taking your documentation and receiving a public/private key in your name?

Or simply access that key of yours and use it?

The public/private key pair only proves you hold the keys, not that you are you.

Not identity, just proves you have access to the keys.


What happens when you have a "proper" second factor and lose it?

The consequences of getting locked out of my bank account because I've lost my 2FA method are pretty bad -- that would be a major inconvenience.

They are better than the consequences of getting locked out of my bank account because someone stole my phone number -- especially since the first thing someone who SIM-swaps will do is change the recovery number to point to a separate phone they own. I'll have to go through the exact same recovery steps, just with the added pressure of having money siphoned from my account.

Most 2FA app setups come with one-time backup codes that you can write down on paper and stick inside a safe or your wallet. If you don't think customers will do that, and you're still worried about account recovery, email is preferable to SMS. For all the crap I regularly give Google, their security team is top-notch, and I generally trust most people's Gmail accounts to be more secure than their phone numbers.
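For what it's worth, those one-time backup codes are straightforward to implement. A hedged sketch (the code format and storage are invented, but storing only hashes and consuming each code once is the usual pattern):

```python
import hashlib
import secrets

def generate_backup_codes(n=10):
    """Return (codes for the user to print, hashes for the server to keep)."""
    codes = [secrets.token_hex(4) for _ in range(n)]  # e.g. '9f3a1c2b'
    stored = {hashlib.sha256(c.encode()).hexdigest() for c in codes}
    return codes, stored

def redeem(code, stored):
    """A backup code is valid exactly once."""
    h = hashlib.sha256(code.encode()).hexdigest()
    if h in stored:
        stored.discard(h)  # one-time use: remove the hash on success
        return True
    return False
```

Note the server never retains the plaintext codes, so a database leak doesn't hand out working recovery codes.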

The only scenario I can think of where a service couldn't rely on email for account recovery is if you yourself are an email service like Fastmail.


A big difference is your bank (likely) has a physical branch where you could show up with your ID card, debit card, etc. The inconvenience can be addressed directly if there's an emergency.

For online only businesses there’s no escape hatch.


In my specific case, I use an online-only bank. But I'm being petty with that objection, my bank still has enough information about me that I could verify my identity via a combination of a scanned ID card, proof of residence, etc... In general, point taken, that's a good distinction for most people.

However, substitute 'bank' out for any online-only service and the problem persists, because any reasonably smart attacker who SIM-swaps my account will still immediately change the phone number to point to themselves. If my Amazon account gets hacked, and the attacker goes into account recovery and changes the phone number, Amazon still needs a non-SMS based escape hatch for me to get the account back. The problem hasn't gone away.

So why not just use the non-SMS escape hatch all the time?

In Amazon's case, if they don't have your phone number they'll do account recovery over email. That's almost strictly safer, and just about as convenient as SMS.


> safety and security purposes

I tried making a Twitter account two months ago because I was having trouble with my phone and needed to access T-Mobile's social-media-only support team.

Within five minutes of making the account, I was banned for "suspicious activity" and required to enter my phone number "for security purposes". But the only reason I made the account is because my phone wasn't working...

Emailed support and was told that they could not make an exception for me because I had broken some vague unnamed rule. Then I said the magic words, "This is clearly a ploy to collect phone numbers for data aggregation purposes," and within the next 24 hours my account was unlocked, accompanied by a very salty and accusatory email.


Twitter isn't alone in this sort of invasive activity.

I have family and friends that share photos on their private Instagram accounts and nowhere else. I didn't have an account, so I installed the app on my phone, signed up, verified email and mobile number, and followed the family accounts.

The next day I open the app only to be prompted to log in again, which it won't accept (password stored in a password manager). A few minutes later an email arrives from Instagram saying that my account has been locked due to suspicious activity and demanding that I provide a photo of myself holding government issued ID. Their helpdesk refuses to do anything until I do.


Microsoft does the same with their accounts.

I used a couple of throw-aways to get the kids logged into some MS gaming service. Two days later a message arrives saying your account was compromised and used for spamming, and we need a phone # to make it great again. The kicker: the throw-aways were on my own domain and my own mail server.

Fucking morons, they don't even bother to lie convincingly.


After the whole Blizzard and Hong Kong incident yesterday, I tried to delete my Blizzard account. Despite having never submitted a copy of my ID, they're now asking for one to verify that this is indeed me.

I keep wondering, how will they be able to verify it? I didn't use my real name, and I used a throwaway email address. So why do they need my ID? I could submit any ID I want; they still wouldn't know it's me. To me this just sounds like a ploy to gather government-issued information, just to delete my account for a stupid game...

During login, they already verified I own the email by letting me sign in with email and password and then enter a code sent to said email address.


This is ridiculous. So a government ID is required to delete your account, but if you start spamming offensive messages in the chat, you'll likely get the account banned very quickly.

I wonder what they'd do if you sent them a clearly fake image of a passport or a driving license, with some swear words in name fields. I wouldn't be surprised if they banned the account. Or what if you put "Sir General Data Protection-Regulation" as your name?


Did you ever buy anything on their platform?

If so, is it possible they have access to your name via your past money transactions?


Don't think I ever bought anything, think I only used it for Hearthstone.

What the heck happened to our parents' advice of "never reveal your personal information online"? I was being force-fed this lie since birth?

Why do they suddenly not care anymore? My mom constantly calls me paranoid now for not wanting to do things like upload all of my photos to Google cloud.


Back then it was new and potentially dangerous, so they wanted to protect you as a kid. Today, it's just convenient, so why can't you just upload photos of all your kids to Google and share the album with them?

I.e. their position back then was as ungrounded as their current position is.


Facebook has done this to me multiple times as well. However i think my activity could very well be considered suspicious: I signed up with a fake name and gave no other information, then followed a load of people.

I'm not sure I like the trend of these sites trying to collect identity information. They might justify it with spam/scam/fraud protection, but they don't really have an ethical reason to do so.

Instagram is amusing. You can remove an email address from your account. And its still valid for logins many weeks after the fact.

Yup, this just happened to me too.

I'm sort of amazed your account was unlocked instead of you just being shown the door, especially since, as a company policy, letting users like you through is likely to result in very little revenue. They're much better off just targeting the tech-illiterate folk who can be easily swayed into clicking through ads.

Twitter has put some effort into the "customer service" use case. https://www.forbes.com/sites/shephyken/2016/04/30/how-to-use...

At a basic level, you could think of it as providing the content for the audience - the inventory that is sold to advertisers. Twitter as we know it would not exist if all the users were tweet consumers, and none were tweet producers.

T-Mobile is great example of a business providing customer service over Twitter.


I tried making a new account two or three months ago. Twitter sent me a verification email, which I confirmed, but they immediately requested phone verification as well - even though I had never been able to use the account in any way. I refused and never opened an account because of that.

The sceptic in me thinks this is more about advertising and less about security.


>Within five minutes of making the account, I was banned for "suspicious activity" and required to enter my phone number "for security purposes".

The same was done to me. The thought that the point of this was anything but taking my phone number for advertising never occurred to me. It seemed very obvious, and I hope the EU, as the only political actor that sometimes seems to care about people, will do something about this.


> Then I said the magic words, "This is clearly a ploy to collect phone numbers for data aggregation purposes," and within the next 24 hours my account was unlocked, accompanied by a very salty and accusatory email.

Do you think this would still work?

I've been wanting to get a twitter but I've been put off by the phone number "requirement" for a long time. Every time I attempt to make an account or log into an old one it requires phone number verification. (I can't get burner phones where I am).


This happened to me only two months ago, I assume it's still a possibility. Ask them if they like the idea of you constantly making new alt accounts until one finally gets through.

> accompanied by a very salty and accusatory email

Would you mind sharing the anonymized content of said email?



I don't have a Twitter account, and I agree that Twitter (and all the other platforms) uses misleading tactics to collect data on its users. To play devil's advocate, though: Twitter genuinely has a problem with troll accounts, doesn't it? Adding a phone number requirement could reduce their number. I have the impression that some people are holding Twitter accountable for the way its platform affects public discourse; this could be primarily a tactic to curb fake accounts.

This seems like a dark pattern; requiring the use of an external life invasion platform to access actually decent tech support.

I tried asking T-Force if we could continue the conversation over email, but they misunderstood or something and sent me an account validation form via email but then just continued the conversation over Twitter. I'm not sure if they have been given an avenue for support outside of Facebook and Twitter.

I have changed my phone to opt-in only by setting it to "do not disturb" mode permanently. If you're not in my contact list, my phone doesn't ring and you can only leave a message. It's mostly great. I may try the Call Screen feature I noticed the other day as an alternative too.

I wonder if requiring a phone number for access could create ADA compliance liability for web services.

> accompanied by a very salty and accusatory email.

post the email


" Hello,

Your account was locked due to a violation of the Twitter Rules (https://twitter.com/rules), specifically our rules around abuse: https://support.twitter.com/articles/20169997.

Your account is now restored. Please note that further violations may result in the locking of your account again or permanent account suspension.

Thank you,

Twitter"

They refused to take responsibility for the error.


That doesn't sound salty coming from an automated system.

Surprisingly, the tailored audience targeting had very few tests and certainly no end-to-end tests when I was there. Once, the whole targeting/matching pipeline was down for three days and nobody noticed (I uncovered this).

Honestly, I wouldn't be surprised if the pipeline was set up wrong and matched against the wrong field. These matching pipelines are just Hadoop jobs, written in Scalding (a Scala library for Cascading) and orchestrated/scheduled by Apache Aurora.

Internally, Twitter generally operated the way you would expect them to as an external user. User data and privacy were taken seriously, and no data was given to third parties, etc.

Source: I worked at Twitter in 2015 as an engineer on ads (on the programmatic ad buying side and partner integrations).


If Twitter didn't distinguish between phone_2fa and phone_identifier then they really shouldn't be in business.

I don't recall how user data was stored or accessed, but I'm certain there will have been separate fields, or at least a flag indicating whether the person opted out from being targeted by their phone number.

That being said, it would have been incredibly easy for a single engineer to make this mistake (code review probably should have caught it, but maybe it looked just close enough to the right data source), and it would have been extraordinarily difficult to discover.


Not a chance. It's never a single engineer: code gets its PR checked by another engineer, and the Jira will be specific about any PII, probably written by committee, all of whom know the importance of the data. Don't pin this crap on a single nebulous engineer.

I've not worked in years at a place that wouldn't understand the importance of PII. Not that it doesn't happen, but let's not mince words here: this was wilfully done.


Your comment made me audibly laugh at the notion that most companies would have a committee checking PR and Jira tickets for PII. I've worked at plenty of companies, even ones at the scale of Twitter and larger, that don't approach anything even remotely close to that level of sophistication. I've seen audits uncover precisely what the GP comment is talking about. IME, it's not at all uncommon for someone to send an email saying "hey can I get a dump of usernames and phone numbers" and some naive engineer dumps it into a CSV file and sends it to whoever. Hell, most of the places I consulted at don't even consider phone numbers to be protected PII.

I don't mean to defend Twitter in any way, but I could easily see this being an oversight or a mistake.


I bet if we could get a hard percentage of companies that have strict access rules for engineers around even just sensitive data in general, let alone PII, that would easily be <50%.

It's entirely feasible to me that this was a mistake; I think people who assume it was deliberate are ironically putting more trust in tech companies than they should.

Most of the world is being held together by duct tape, fastened by people who don't understand the systems they're fixing or maintaining. I don't think tech companies are an exception to that rule.


fwiw, Google at least has policies around how to handle PII in support tickets, as well as how to handle PII in bugs reported against public-facing-ish software (like Chrome). That's not to say there can't be bad actors or lapses due to poor training or inappropriate behavior, but the tools & policies exist.

I get your perspective and skepticism, I really do. I have no incentive to defend Twitter. I cannot say whether this was done deliberately or not, but it absolutely could have been a mistake by a single engineer at Twitter.

The Jira will just have been something vague like "add support for phone number matching to tailored audience matching pipeline", likely created by a manager on the ads infra team. Context will have been assumed. Given that these are simple data pipelines, there likely won't have been a design document specifically calling out the fields to match against for this task.

At Twitter it was also possible to deploy these Hadoop jobs without checking in code. They would need to be run as the main ads system service accounts, but most ads engineers should have had the ability to deploy such a job.

As I mentioned earlier, the fragility of this part of the ads infrastructure I observed in 2015 makes me believe that a mistake is entirely possible here.

Example: a Hadoop job writes an output file to HDFS; a different job reads files from a particular location on HDFS and processes them. If no files exist, there must not have been anything to process, right? But it could also be that the first Hadoop job failed, which nobody subsequently noticed.
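That ambiguity ("empty output" vs. "upstream failed") is what Hadoop's _SUCCESS marker convention exists to resolve. A sketch using the local filesystem in place of HDFS (the function name and return values are invented):

```python
import os

def check_upstream(output_dir):
    """Distinguish 'upstream produced nothing' from 'upstream never finished'.
    Completed Hadoop jobs conventionally write an empty _SUCCESS marker file
    into their output directory; if it's missing, alert instead of silently
    treating the directory as empty input."""
    if not os.path.exists(os.path.join(output_dir, "_SUCCESS")):
        return "upstream-failed-or-missing"
    # Ignore marker/metadata files (by convention, names starting with "_").
    data = [f for f in os.listdir(output_dir) if not f.startswith("_")]
    return "genuinely-empty" if not data else f"process {len(data)} files"
```

A downstream job that checks the marker can't confuse a crashed producer with a quiet day.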

Anyways, it could have been an engineer by mistake, an engineer trying to get promoted and increasing revenue numbers, or an action at the direction of management. Don't rule out the first option though...


Yes, they should employ stronger vetting processes for new engineering hires.

> Internally Twitter generally operated the way you would expect them to as an external user.

As a total shitshow?


Is Tailored Audiences not a third party?

The feature is part of the Twitter ads platform, just like Facebook's custom audiences. A third-party partner, on behalf of an advertiser (or the advertiser themselves), can upload a list of identifiers (here, phone numbers) to match against; the matched users form the audience. Note that an audience can only be used if it matched at least 500 users.
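In principle the matching step is simple. A toy sketch of an audience build with that 500-user activation threshold (data shapes and names are invented, not Twitter's actual code):

```python
def build_audience(uploaded_ids, user_directory, min_matches=500):
    """Match advertiser-uploaded identifiers (e.g. phone numbers) against
    known users; activate the audience only if enough users matched, so a
    tiny upload can't be used to single out individuals."""
    matched = {user_directory[i] for i in uploaded_ids if i in user_directory}
    if len(matched) < min_matches:
        return None  # audience too small to activate
    return matched
```

The minimum-size rule is an anonymity measure, though as noted elsewhere in the thread, a partner can still learn which segments of its own list matched.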

Hopefully the FTC sees this comment and fines Twitter a couple billion for privacy violations.

That doesn't make any sense. I have not seen any evidence of wrong doing or improper use of data [edit for clarification: I'm talking about my literal observation in 2015]. Happy to speak on the record.

Then you admit that Twitter has a deficient mechanism for dealing with PII? You can't blame this on a single engineer: it's the product manager, the technical lead, the engineer, and the legal mechanisms around PII that led to this breach. An engineer working on ad tech should never have had access to the PII used for security. Just a shit show all around.

They should be deeply fined.


I have zero insight into how the 2FA phone number was stored or how it could have been accessed.

I can't admit to anything I don't know about :)

I agree that in a well designed system and process 2FA numbers should not be accessible (like passwords). However, I've seen this as a common practice in quite a few startups.


> have not seen any evidence of wrong doing

They admit to using 2FA phone numbers for advertising purposes.

How is this not wrong doing?

That they believe it was a mistake doesn’t make it right doing.


Exactly. "Whoops, we did a thing that accidentally made us money," is not the most confidence-inspiring excuse. One, that's a very convenient accident. And two, if procedures around private data are so sloppy that one small accident can do this, that strikes me as the kind of negligence that a regulator should punish vigorously.

Yes that is wrong-doing but I have not seen any evidence during my time there that such a move would have been intentional.

I'm very inclined to believe it was an unfortunate mistake. A mistake that probably should not have been possible. I'm not sure why or how it was.


Intention or not it is still wrong.

Intention should have an impact on punishment / reparation outcomes, but it shouldn’t necessarily impact guilt.


Intention doesn’t matter.

Obviously writing an ads targeting system wasn't an accident. But why did it end up using data it wasn't supposed to? It seems unlikely that it was expressly written to work against data they weren't supposed to use for that purpose. It's much more likely that they had multiple data sources for email addresses and phone numbers, and had cross-contamination between the data sources.

E.g. maybe they at some point had dozens of databases for user data, all maintained by different teams and grown organically, and then had a big effort to merge them into a single system with a rationally designed schema. Maybe they ended up coalescing all phone numbers for each user from all sources, with some kind of annotation for why each phone number was collected. It's then very easy for somebody to screw up the code that's supposed to filter out security-only phone numbers when ingesting data into the ads targeting system.

The above is a totally theoretical scenario, but it should be easy to come up with lots of other ways for this to happen by mistake rather than intentionally. Don't think companies make these kinds of mistakes? Both Facebook and Google had issues with plaintext passwords this year, and it should be obvious that neither had anything to gain from that.
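To make that failure mode concrete (all names and fields here are invented): if each stored number carries an annotation for why it was collected, the correct ads-ingestion step is one predicate away from the buggy one, and nothing crashes when that predicate is forgotten:

```python
# Hypothetical merged user store: each number is annotated with its purpose.
users = [
    {"id": 1, "phone": "+15550100", "purpose": "2fa"},
    {"id": 2, "phone": "+15550101", "purpose": "profile"},
    {"id": 3, "phone": "+15550102", "purpose": "2fa"},
]

def ingest_for_ads(records):
    """Correct: only numbers the user provided for profile/contact purposes."""
    return [r["phone"] for r in records if r["purpose"] != "2fa"]

def ingest_for_ads_buggy(records):
    """The easy mistake: the purpose filter is simply missing."""
    return [r["phone"] for r in records]
```

Both versions run, both produce plausible-looking output, and only an audit comparing counts against consent records would notice the difference.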


Twitter has exactly one source of users' emails/phone numbers- the user providing them.

Sorry, but I'm not clear on what point you're trying to make. I may provide a phone number for the express purpose of securing my account, but that doesn't mean I've authorised use of that data for marketing. Users are not liable for a company's misuse of their data (which in this instance is possibly in contravention of Twitter's privacy policy).

The point I'm trying to make is that nobody authorized the use of their data for marketing. As far as I'm aware the only phone numbers they have are ones submitted for 2FA purposes, I don't see how they could have possibly accidentally mixed them up with some other source of phone numbers.

If there is some sort of opt-in to use your phone number for ad targeting purposes they have done an incredibly good job of hiding it in the UI.


They seem to have been allowing phone numbers as a search parameter when matching result sets for their 'trusted partners', which means they reveal which users match without technically giving the phone numbers to the partners. Very poor opsec, as it allows an advertising partner to further segment its proprietary database with users' Twitter profiles (how much data is available there, public or private, I don't know).

I have a hard time believing that Twitter didn't know this.


Ah ok. We’re in violent agreement. I actually thought you were implying the opposite. My bad.

If they didn’t have other data sources for this, they would have shut down the now useless ads targeting features.

There are other identifiers that can be used for targeted advertising besides email addresses and phone numbers.

Twitter could easily have dozens of sources of this data. What makes you think this is the only one?

Bollocks. A phone number is an ideal identifier to match on; they've done exactly the same as FB: pretended it's something for security, then gone 'oops'.

I've never worked on the same scale as FB or Twitter and I know that this is a no-go area.


> We recently discovered that when you provided an email address or phone number for safety or security purposes (for example, two-factor authentication) this data may have inadvertently been used for advertising purposes, specifically in our Tailored Audiences and Partner Audiences advertising system.

Nice use of passive voice there, mofos.


They should just take it one small step further and blame it on the data itself. “We discovered that telephone numbers had synced up with advertising profiles. Twitter regrets that the data correlated in this manner.”

"Mistakes were made"

Also >> a version of an industry-standard product

just so people know that Twitter at least has standards


Didn’t this almost-exact same thing happen with Facebook? It’s funny how frequently these kinds of “accidents” happen. They seem to occur with the kinds of frequencies that you might hear a manager or developer say something like, “meh, what’s the worst that could happen?”

> Didn’t this almost-exact same thing happen with Facebook? It’s funny how frequently these kinds of “accidents” happen.

Assuming that accidents occur with a specific percentage of data collected, it's not "funny", it's explainable: the more data one has, the greater the chance of things going wrong because someone uses data that is there but should not be used for that purpose.

That happens when systems grow so large that they cannot reasonably be understood by a single person.


I am still waiting for the same article to be written about google so they can fix their shit too.

I got phone calls from Google Cloud Platform even though I am certain I didn't give them my number. They even managed to find my real name, even though the only place I used it was the payment method in the billing account, with a fake name on the Google account itself. This is way too much information to give to their internal sales reps.


You don't need to. If you've ever filled out a form anywhere online to access an article, it's likely you opted in to having your data sold to broker agencies, who sell it for lead gen. The same goes for magazine subscriptions, credit card applications, even debt collectors, etc.

I think instead of "frequently," it happens "consistently." I mean, how many social media companies can we say have not abused these phone numbers?

I think the difference is that Facebook got caught and called out, whereas here it appears Twitter identified this as an issue and decided to publicly disclose the privacy violation rather than sweeping it under the rug. Is disclosure of this sort of breach mandated by GDPR?

At least Twitter is admitting that information collected for one purpose (two factor authentication, account recovery) should not be used for another unrelated purpose (advertising and use by "business partners.")

"At least Twitter is admitting that they've heard of the concept of privacy"

" we have addressed the issue that allowed this to occur and are no longer using phone numbers or email addresses collected for safety or security purposes for advertising. "

Does this mean the matches they've already made on these identifiers are still active?


Any advertiser that made the match previously will likely have it stored somewhere, probably tied to the Twitter handle, so yeah, those identifiers are still active, just not within Twitter.

Raise hands everyone who's surprised by this...

They say they inadvertently did it, which may be true. But if they have the data it can be abused.

Goes for all those "omg think of the terrorists" data collection plans as well.


Say what you want about the morals of their data collection practices, but at the very least they found it, deemed it inappropriate, removed it (hopefully), and apologized.

Agree, I think that's the only good thing here.

On Twitter/FB, people can look up your account by phone number. This is really fucking stupid, because I only added my phone number for two-factor authentication. So now I have to choose between security and people finding my dumb throwaway accounts.

You can disable this functionality in the privacy settings for Twitter-- not sure if Facebook allows it, though. Either way I totally agree that it's a dumb feature to have turned on by default.

It can be disabled (and at least in Twitter's case, I remembered it asked me if I want to enable it).

I get so annoyed by restaurants that want my phone number, even for takeout, "so they can let me know my food/table is ready". Even worse are the stores that don't offer paper receipts: want a receipt, provide an email or phone number...

I don't understand why people use Twitter to begin with. It's what, 280 characters of text per tweet? But to load the webpage, it wants kilobytes or even megabytes of scripts and media objects, and it doesn't even work with JavaScript or media disabled. To simply show a text message! There's no graceful failsafe mode to display a few bytes of text. This makes one conclude that Twitter is a data gathering and marketing business disguised as a messaging platform. Anything they do as a company should be very, very suspect.

"...in an effort to be transparent, we wanted to make everyone aware" The fact that the company published this hidden in the help site speaks of how little commitment to that oath they actually have. It doesn't even have a date! If you read the reference to "As of September 17, we have addressed the issue" you don't know when it happened. I guess it might be industry practice, I don't know, but still... it leaves a great deal to be desired for an apologize.

I imagine some PM and engineering team on the marketing product realized that the phone number was available in the user's database entry, and so... why not expose it indirectly for marketing? It'll make a ton of money!

This is exactly why I am a huge fan of the "privacy by design" parts of GDPR and the fact that the regulation places a heavy emphasis on how the data is used in addition to who sees it. It has helped to crystallize the discussion about privacy as not just "what data" but "data + usage".

I like that engineers are increasingly required to think about not just what the data is but also how it's being used. When the act of connecting data sources has ethical impact, engineers can't be agnostic.


This should really be taught as a fundamental rule of modern human society by now:

Any information that you provide to any company or government will be “misused.”


Sure, I guess the column was named "twitter_mobile" instead of "twitter_mobile_2fa".

To see which Twitter advertisers are targeting you (by email address, phone number, mobile device ID, or Twitter username), go to https://twitter.com/settings/your_twitter_data/audiences and click "Request advertiser list."

Opt out on https://twitter.com/personalization

(details and screenshot: https://twitter.com/simpleoptout/status/1178290986868297729)


huh, I wonder if there is a way to get the audience segments? It says I'm part of over 2500 audiences, but currently being targeted by 0 advertisers.

Recently Google started prompting me to confirm logins on my phone every single time I log in.

I went through the recovery process just to make sure that I would be able to access my account and was asked the month and year I opened my gmail account.

I have had my gmail for 17+ years and I have no idea when I opened it, and now if I ever lose my phone I will be locked out of my account.

Does anyone know where I can see when I opened my account?

I travel year round and rarely keep a phone number for more than a few months, so 2FA has increasingly become a problem for me as more and more companies force it on me. PayPal, for instance, is completely unusable for me; I have been locked out of my own account numerous times.


You could check the date if you still have the "Welcome to gmail" email that I assume was sent.

Twitter asks for a phone number for security and then sends SMSes to tell you you have a direct message or to wish you a happy birthday.

Twitter knows it is a rubbish security system, but they do it because it is another channel to send their garbage.


It is also funny because providing a phone number is initially optional. It becomes mandatory when someone tries to log in to your account and fails a couple of times. Your account gets suspended (reason: suspicious activity) until you provide a phone number. Bullshit system that is unfortunately popular.

I mean, when Facebook came out with a similar admission last year, Twitter should have at least checked that they weren't doing a similar thing. But I guess that would have made them culpable had they discovered the same issue. So, just wait out till some employee "accidentally" discovers it!

Twitter would fall over themselves trying to remove that one field from their sign-up flow to improve the funnel. In fact they will copy what their competitors are doing to understand things better. But when it comes to security or using data, anything goes unless they absolutely have to address it for legal/compliance reasons.

Just the state of affairs of tech companies these days.


Did anyone else genuinely laugh out loud while reading this? It's so devoid of useful information (we don't know how this happened or at least won't tell you, and we don't know who is affected or how many people are affected) other than "it was an accident". It's just so ridiculous.

I do believe them that they don't know who was affected and that this probably wasn't approved by the top brass (in that sense being a kind of accident), but that doesn't speak well to their technical or legal competence.


I don't understand; isn't this literally the point of running a free service? Harvesting data and using it for marketing purposes?

The point of any service, free or otherwise, should be to deliver some kind of benefit to its users. If it doesn't do that, it has little reason to exist.

"Data harvesting," marketing and advertising are far from the only business reasons why a company might want to offer a free service.

From a business perspective, a free service might

- build goodwill for the company

- satisfy legal or government requirements

- save money on billing while increasing the value of other goods and services from the company

- enable the company to qualify for certain grants or subsidies

- support the creation and improvement of goods or services (such as software) which the company uses

- enable the company to influence industry standards

- help to acquire future paying customers (e.g. by offering a free student or limited version which can be upgraded with additional paid features)

- help to compete against other companies that charge for the same service

- act as a loss leader to encourage sales of compatible companion products or accessories

etc.


In this case, Twitter was lying (or more generously, not being entirely truthful) with what they were using the data for. Some users may have chosen not to give up their phone number or email if they had known it would be used for advertising in addition to account security.

When signing up for a "free" service, I basically assume all data entered will be used for advertising / marketing purposes. This is a safe assumption to make.

I don't disagree, but I think we should still be able to get upset when a free service uses the data for something other than what they said it would be used for.

It's why I love GDPR. Since it requires explicit, opt-in consent, I can just register or visit a site and don't worry much - abuse of my data is a bigger risk to the service than it is to me.

Sounds good on paper. Reality is GDPR isn't going to stop an "error" from happening.

I just have a hard time imagining anyone wouldn’t expect that, maybe I’m getting paranoid though.

I never gave facebook my number even though prompted for it literally every time I logged in.


In fact, prompting every time is also a red flag. They wouldn't be so desperate if providing the number was for your benefit as they claim (for "security", etc). The real reason they're so desperate is because it's for their benefit.

> Harvesting data and using it for marketing purposes?

Twitter's actions make everyone less secure. The next time an online service asks me to enable 2FA to protect my account, I'll have to consider whether the potential for abuse of my 2nd factor information is worth the additional risk to my account.


FIDO tokens (for U2F or WebAuthn) don't give the relying party anything valuable. Even if the service literally published everyone's public key parameters, it would make essentially no difference to anything. They wouldn't even stop being useful for authentication.

You don't have to do this.

Yes but the user should in principle have some control over what data gets used for such purpose and what doesn't, it is their data.

Not allowing users to opt out of data collection is actually in breach of the GDPR, so in the EU you actually cannot run a business for the sole purpose of harvesting data.

I don't like these corporate press release titles either but the submitted title, "Twitter misused phone numbers and e-mail to target advertising", seems tendentious. What's an accurate, neutral title we can use above?

"misused" seems to be the only word with affect, but seems consistent with and descriptive of the facts in the press release. I don't think "Personal information and ads on Twitter" is very descriptive - I prefer the previous title.


This sort of shitty behavior will not change until it becomes expensive.

"When an advertiser uploaded their marketing list, we may have matched people on Twitter to their list based on the email or phone number the Twitter account holder provided for safety and security purposes. This was an error and we apologize."

This doesn't happen accidentally, someone had to engineer, test and build a system that performed reliable targeting.
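To make that concrete: the kind of matching the press release describes is a deliberate join between an advertiser's list and account records. Here's a minimal, entirely hypothetical sketch of what such a pipeline looks like (advertisers typically upload hashed identifiers rather than raw ones; all names and data here are invented):

```python
import hashlib

# Hypothetical sketch of matching an advertiser's marketing list
# against account records -- including contact info that users
# provided "for safety and security purposes".

users = [
    {"handle": "@alice", "email": "alice@example.com", "phone": "+15550001111"},
    {"handle": "@bob",   "email": "bob@example.com",   "phone": "+15550002222"},
]

def normalize(value: str) -> str:
    # Identifiers must be canonicalized before hashing or nothing matches.
    return value.strip().lower()

def sha256(value: str) -> str:
    return hashlib.sha256(normalize(value).encode()).hexdigest()

# The advertiser uploads hashes, not raw emails/phones.
advertiser_list = {sha256("alice@example.com"), sha256("+15550009999")}

# The platform joins the uploaded hashes against hashed account fields.
matched = [
    u["handle"]
    for u in users
    if sha256(u["email"]) in advertiser_list
    or sha256(u["phone"]) in advertiser_list
]
print(matched)  # ['@alice']
```

The point is that every step here (normalization, hashing, the join itself) has to be designed, built, and tested. None of it happens by itself.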


Yeah this is hilarious. I feel like tech companies get away with this because the internet is more "mysterious".

This is no different than a chemical plant "accidentally" building a quarter-mile conduit and dumping waste into a river. Of course they know.


I don't think it's equivalent. In the physical world, it's easy to make a mistake that destroys the entire thing (e.g. causes a power plant to explode), but it's hard to make a mistake that causes the system to do something completely random and undesigned. Like, you can't make a mistake with a weed whacker that turns it into an airplane.

However I have made mistakes like that programming all the time... for example, having massively more data written out to disk than I had intended. You are never going to make a mistake in a factory that increases its electricity consumption by 1,000,000x but I have made errors that caused a million times more data to be written out to disk than planned.


> You are never going to make a mistake in a factory that increases its electricity consumption by 1,000,000x

Maybe not a million, but a thousand times? Sure. Just short some heavy machinery and watch the amperes light up the day like a second sun as they rush out to pour into the ground. Factories have breakers to prevent that, you'll say. Turns out software projects don't have fuse equivalents, even though they should. The reason is that bleeding electricity is expensive (and dangerous to people, which from the company's POV means "even more expensive"), whereas data leaks aren't. I hope the latter is going to change.


You really don’t think that a chemical company dumping waste into a river is worse than using phone numbers to target ads? Deep in your heart of hearts? Are you sure you don’t have an axe to grind against tech companies and perhaps that’s clouding your moral calculus?

He wasn't saying they are just as bad as one another, he was saying the excuses are just as unbelievable.

They never said it was an accident. They just said it was an error, which is true! They did err.

Why call it an error? There was no undefined behavior; someone actually implemented it and tested whether it worked properly. Why not call it a mistake? This was a mistake. Calling it an error makes it sound like it was not on purpose.

They did call it a mistake, later in the text. Not appreciably different from an error, in my opinion.

If you step into the street in front of a truck while reading Hacker News on your phone, it's a mistake. You wouldn't have done it had you been paying attention.

If you step into the street in front of a truck because you think the truck is farther away than it turns out to be, it's an error. If you are playing baseball, and you estimate the vector to intercept a ball rolling along, and you touch your glove to the ball but fail to actually catch it, it will be classified as an error. You made an estimate of what the circumstances were; it was wrong, and you acted on purpose.


I'm not saying dictionary.com is the be-all and end-all of dictionaries, but their definition of error uses the word mistake several times. https://www.dictionary.com/browse/error

Likewise their definition of mistake uses error a couple of times. https://www.dictionary.com/browse/mistake

I get it though, there is a subtle difference. I'm just saying both terms mischaracterize (intentionally) what I think was likely a very intentional, knowing and calculated action by Twitter. I'm asserting without proof that there was no inattention as in your first example, and no miscalculation as in your second example.

The baseball example is problematic because "error" has a game-specific meaning just like "run" and "base," and whether you commit a mistake or an error, they will both be recorded as errors.


I was wondering about this as well, ie. if there is a qualitative difference between error and mistake. (English is not my first language, so can't really tell.)

IMO "mistake" implies an incorrect choice, whereas "error" implies an incorrect result. A mistake may or may not cause an error (and an error may or may not be caused by a mistake). There seems to be an element of "could have been done differently" in "mistake" that is absent in "error".

Recently I read an article saying that programmers should reserve "error" for the mathematical usage of "distance from correct result" (as in, behavior of the program), and refer to their bugs and other mistakes as "blunders" instead. I have to think on it more but I like the overall direction of the distinction.


Accidents happen all the time there though, like when they accidentally lock your new account minutes after you create it, and all but force you to give them your phone number in order to reinstate it.

Same happened to me. However, I did file a support request, and 2 days later they agreed to let me in without a phone number. 2 days.

I think we can file this under 'dark patterns' and finally realise Twitter is attempting to be as abusive as Facebook.


I haven't worked there in a while, but you would not believe the volume of fake account creation they get. Spammers. Fraudsters. Jerks creating a second (or 20th) account to bypass restrictions. And more recently, state actors trying to manipulate things. So I am inclined to cut them some slack on the, "Please prove you're a real user" stuff.

A week or two ago anything I did with my hacker news account took me to google to solve a recaptcha. (I had blocked google when accessing this site, thanks umatrix)

Or I could email them to unblock it (my account has no email address associated with it)


Depending on what you mean by "a week or two ago", this was probably because of a botnet attack HN was under in mid-September, when we resorted to turning on recaptcha for logins for a while. We know why people hate this and want to avoid it in the future, but defending HN against attack took priority.

I'm not sure about processes at 'big' companies like Twitter but in my experience if an inexperienced developer sees a field in a DB or API it's fair game to use it even if inappropriate, and this could be the source of the mistake.

So perhaps one team forgot to secure it properly (based on role authorization) and another team saw the field and didn't think it was something they shouldn't use (maybe an OKR or some target was pushing them in the direction of being eager to use whatever data they could get).

Obviously nothing happened accidentally at a coding level, but at a business-constraint understanding level a mistake may have been made.


I work at a big company; every field in a database of user info is tagged with a data classification. Everyone knows the rules about which classification can be used for what. Products go through a review of data use and storage (i.e. some data should be encrypted, and encrypted correctly, not with some hand-rolled hash).

If Twitter doesn't have common data classifications and everyone with a keyboard and an idea has access to the database without review, they are probably approaching criminal negligence. I understand it at a company with just a couple of employees - this shit happens all the time. I don't understand it at a publicly traded company that would be in big trouble with a class-action data lawsuit.
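For anyone who hasn't seen this in practice, field-level classification is simple to sketch. This is a hypothetical toy (all names invented), but it shows the idea: every column carries a classification tag, and a caller only gets the fields its declared purpose is cleared for:

```python
from enum import Enum

# Toy sketch of field-level data classification. A real system would
# enforce this at the data-access layer, not in application code.

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    SECURITY_ONLY = 3  # e.g. a phone number collected for 2FA

FIELD_CLASSIFICATION = {
    "handle": Classification.PUBLIC,
    "bio": Classification.PUBLIC,
    "email": Classification.SECURITY_ONLY,
    "phone": Classification.SECURITY_ONLY,
}

def fields_allowed_for(clearance: Classification) -> list:
    """Return the fields a caller may read, given the highest
    classification its declared purpose is cleared for."""
    return [field for field, cls in FIELD_CLASSIFICATION.items()
            if cls.value <= clearance.value]

# An ad-targeting job cleared only for PUBLIC data never sees the phone.
print(fields_allowed_for(Classification.PUBLIC))  # ['handle', 'bio']
```

With something like this in place, an ad-targeting pipeline reaching for a SECURITY_ONLY field fails review (or fails at runtime) instead of quietly shipping.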


Pretty sure there was a product manager who drove the project. This person needs to understand where the data is coming from and where it flows to. This person is also responsible for checking with legal to see that everything is in line.

Besides that, there are cycles of product reviews with directors and VPs.


I could buy that marketing/sales snuck around dev's oversight to manually get a copy of that to use - possibly by enlisting a poor junior dev to do an export - but there was definitely intent.

How many times have you, during the process of doing work, said "Oh hey, I think this might be a nice feature to work on for the next 3-4 days" and just not done any of your normally scheduled work?


That shouldn't be possible, though. If you classify data in your database as PII and classified, but then let anyone with door access in there, you're really doing something wrong. To add to that, what are the chances the team that owns that database didn't notice a significant uptick in requests from a new caller? I'd say a program like this is probably enough to tip over a service if it's not coordinated.

I agree that it'd be nice if devs couldn't see this data at all, but perhaps the backups are less restricted than the live DB or it's standard practice for folks to run BI/marketing queries against prod there?

Twitter probably has a serious BI department, so I'd imagine they have ways of getting at various data vectors for analysis, and BI folks usually complain loudly when you block data that lets them "cohort" usage data, to the point where it can be easier to just open the door for them on a read-only instance.


Facebook blocked accounts I created with my Tutanota addresses within minutes. Real names, but I hide a few things. I set up one for the private email-only domain I use through Tutanota and it finally worked. But they continue to show a banner asking for a phone number.

It was an error (as in it was against the law and they got caught), it wasn't an accident. Read what they wrote not what they're implying. I promise you their lawyer vetted it.

It’s a mistake or an error, not an accident.

They mistakenly thought this is fine. They made an error in judgement. Etc


Whatever information they already have, I don't mind them using to target advertising. I just don't want them to share it with anybody, and I'd like them to have as little information as possible. Does anybody actually care about ads? It's obviously desirable to see none, but if I am shown some I don't care whether they're targeted. I would only care about somebody tracking me and leaking data about me to other parties.


Is this against GDPR?

That's a rhetorical question, I hope? In case it isn't: yes, that's against the GDPR, which requires that when data is collected, the purpose of the collection is stated and consent is obtained for the use of the information for that purpose. No consent -> violation of the law.

This is not entirely correct, GDPR does not necessarily require consent, see https://en.wikipedia.org/wiki/General_Data_Protection_Regula...

However, you are right in saying that this violates the GDPR, because the data was collected for a different purpose, and using it for another reason like advertising is not allowed.


In this context it does.

I wonder whether they will be hit under the GDPR for this... hopefully...

That bloody rogue engineer again? What is it with that person. Job hopper too, first Volkswagen, then Facebook and now working at Twitter. That's what you get for skipping reference checks.

Ok, so what about GDPR now? Now every EU citizen can sue Twitter? Compensation?

I am totally fed up with GDPR, the expectations it places on small/micro companies, and the false sense of security it gives users.

SHAME on Twitter, shame on FB, shame on the stupid EU.


Why would somebody ever give their phone number to Twitter, Facebook, or any other service for that matter? You lose anonymity when you give away your phone number.

You can't make a Twitter account without a phone number. Well you technically can, but it will be instantly banned upon creation until a phone number is provided.

I've had a Twitter account for years (to contact customer support). They keep asking me for a phone number but I never provide it. I never got banned.

This is a recent-ish change that only applies to new accounts.

I still don't get it, I just signed up for a new Twitter account and didn't have to provide a phone number. Was there an official policy change from Twitter? Do you have a link?

As far as I'm aware there was never any sort of official policy announcement. They just started banning any new account without a phone number a few minutes after creating it at some point. I personally experienced it a couple of times and there was a big thread about it on here a few months ago[1]. Maybe they changed it back again?

[1] https://news.ycombinator.com/item?id=19487304



