Remember that phone numbers are only 10 digits long, so brute forcing all phone numbers is totally doable.
Considering that, if you implement any flow that involves checking if a phone number is already in use, then you are effectively leaking to an attacker a list of every phone number that uses your product.
It's interesting to wonder why only 5M accounts were affected by this exploit, especially if it's brute-forceable. IIRC this vulnerability was widely known for months, at least, before it was fixed, so I can't imagine nobody in the know had access to the resources/botnets necessary to enumerate every account.
Have only 5M accounts linked their phone numbers on Twitter? That's less than 2% of their total accounts (~290M). I don't know what the industry average is for linking phone numbers, but this seems like an exceptionally low ratio.
What percent of mobile numbers do you think are associated with twitter accounts? I don’t know, but it wouldn’t surprise me to find out they had to try 500M or more numbers to find 5M accounts.
Independent of Hollywood, some American cars just might do that. Maybe not in such an impressive manner, but I've been through so many Dodge transmissions and Ford's reputation here is even worse.
Joking aside, the 5M figure probably came from targeting like this, such as choosing a few area codes with high tech populations and testing the ~10M phone numbers for each area code.
Rate limiting should be used to mitigate this, although I suppose a botnet could overcome that to some extent proportional to the size of the botnet.
And for anyone who didn't read TFA, this incident goes well beyond leaking what phone numbers use the product, it leaked the usernames associated with each as well.
Rate limiting is not meaningfully useful. For a service we ran, we regularly had botnets with 100k+ IP addresses making one request an hour to endpoints, which absolutely decimated the backend but hit no limits that a real user wouldn't also trigger. Even with a couple of requests an hour, you could enumerate the entire phone number space in a very short period with that botnet.
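To put rough numbers on that (a back-of-envelope sketch; every figure here is an assumption for illustration, not a measurement):

```python
def enumeration_hours(num_ips: int, requests_per_ip_per_hour: float,
                      search_space: int) -> float:
    """Hours a botnet needs to try every number in the search space once."""
    return search_space / (num_ips * requests_per_ip_per_hour)

# 100k IPs at a couple of requests per hour each (well under any
# per-IP rate limit), against the naive 10-digit space:
slow = enumeration_hours(100_000, 2, 10**10)     # 50,000 hours, ~5.7 years

# The same botnet at one request per second per IP -- still modest
# per node, but 100k requests/sec in aggregate:
fast = enumeration_hours(100_000, 3600, 10**10)  # ~28 hours
```

The aggregate rate is what matters: even at the polite couple-per-hour rate, a targeted subset like one area code (~10M numbers) falls in about two days, and nothing stops each node from sending far more than that.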
There are "residential proxy services" offering exactly this, and you only ever pay for bandwidth. Using 100,000 unique non-datacenter IPs will only cost you a few thousand dollars as long as you're only sending tiny API requests.
And this is a service offered by a registered Israeli company that gets formal agreement from the "bots" to route traffic through them. Very shady, but a totally legal service used by a lot of data collection agencies for price tracking on Amazon, getting data from LinkedIn, etc.
How do you defend against such an attack? Putting a service behind something like Cloudflare means it won't be brought down, but it will still leak the phone numbers' existence, no?
Don't leak whether or not the phone number belongs to an account. All failed login attempts should be some form of "Invalid login" regardless of whether or not it was an attempt against an actual account or not.
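A minimal sketch of that rule (Python; the account store and hashing here are illustrative, and a real system would use a slow KDF like bcrypt or argon2 rather than bare SHA-256):

```python
import hashlib
import hmac

def _hash(password: str) -> str:
    # Illustrative only; use a proper password KDF in production.
    return hashlib.sha256(password.encode()).hexdigest()

USERS = {"+15551234567": _hash("hunter2")}  # hypothetical account store

def login(phone: str, password: str) -> str:
    stored = USERS.get(phone)
    ok = stored is not None and hmac.compare_digest(stored, _hash(password))
    # Identical message whether the account doesn't exist or the password
    # is wrong, so the endpoint can't be used as an existence oracle:
    return "OK" if ok else "Invalid login"
```

The same rule applies to reset and signup flows: "If an account exists for that address, we've sent instructions."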
Usually you'd try to make the effort/cost no longer worth the data with minimal user impact. For instance, text/email the inputted address with the result instead of displaying it to the requestor through the browser
Or if this functionality needs to return the value, require an authenticated user and impose rate limits based on reputation (which could just be account age)
For instance, Facebook and Twitter used to tell you which profile a phone number belonged to when you put it in the search box (maybe it was this issue). You could restrict that to authenticated users that were 30 days+ old and impose rate limits per day on top of that. A regular user could still look up a few numbers per day but someone enumerating phone numbers would need lots of 1 month old accounts (more effort/cost)
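As a sketch, that gate is just a couple of checks (the 30-day floor and 5-per-day budget are made-up numbers from the example above):

```python
# Hypothetical reputation-based gate for a phone-number lookup feature.
MIN_ACCOUNT_AGE_DAYS = 30   # assumed age floor
DAILY_LOOKUP_LIMIT = 5      # assumed per-account daily budget

def may_lookup(account_age_days: int, lookups_today: int) -> bool:
    if account_age_days < MIN_ACCOUNT_AGE_DAYS:
        return False  # account too new to use the feature at all
    return lookups_today < DAILY_LOOKUP_LIMIT
```

At 5 lookups per day per account, enumerating even one ~10M-number area code would require a large fleet of month-old accounts running for months, which is exactly the effort/cost increase described.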
I guess I was thinking more like "limiting the number of attempts" than "limiting the number of attempts over time" -- take time out of the equation (but then NAT causes trouble). But even so, you're right: as the threat landscape approaches the size of the result set, it breaks down no matter what.
That has some problems. If you limit the total number of attempts globally then the feature is effectively disabled, every botnet and script will blow through the attempt budget and real users can't use it. Global limits and IP address limits are not useful, and because we're assuming the user is unauthenticated (using the password reset), we have no other way of distinguishing good traffic.
Captcha comes to mind, but that's a cat-and-mouse game in the age of machine learning (not to mention actual humans working for a bad actor). Cloudflare seems to be on the cutting edge with their newest challenge mechanism, but good vs bad is somewhat distinct from human vs script.
My wife was in charge of security at MySpace back when MySpace was still a thing and there was one occasion that the MySpace team was manually feeding images to a suspected human acting as a bot. As I recall it became clear to both sides that there were humans on the other end and it ended with a picture of a scantily-clad woman and a response of “very funny.”
It's typically smaller though: not every phone number is allocated, and many are in sequential groups. Some are special-cased; you don't need to search any number matching `****555***` in North America, for example, which cuts down on the search space quite a bit.
Try the math, this is a good problem to work through. The position of the 5 doesn't impact the search space like that. 10% of the 10 digit numbers start with a 5. 10% of the 10 digit numbers end with a 5. 5... in your example shouldn't be 1%.
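Working the math through (assuming NANP format NPA-NXX-XXXX, where the first digit of both the area code and the exchange must be 2-9, and 555 is reserved as an exchange; the real allocation rules have more exceptions than this):

```python
# Search-space arithmetic for NANP 10-digit numbers.
all_ten_digit = 10 ** 10                        # naive upper bound

# 8 choices for the first digit of NPA and NXX, 10 for the rest:
nanp_valid = (8 * 100) * (8 * 100) * 10 ** 4    # 6.4 billion

# Excluding the reserved 555 exchange removes exactly one NXX of the
# 800 valid ones in every area code:
without_555 = nanp_valid - (8 * 100) * 10 ** 4

fraction_removed = 1 - without_555 / nanp_valid  # 1/800 = 0.125%
```

So carving out 555 trims 1/800 of the valid space (0.125%), while the first-digit rules alone already shrink the naive 10^10 down to 6.4 billion; the really big reductions come from only probing allocated blocks.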
Maybe they should store salted hashes of phone numbers.
The purposes of phone numbers:
1. Verify you are not a bot: no need to store anything except TRUE once verified.
2. 2FA: well, use something better than SMS, but if you must, store the hash and make me enter my number for the 2FA each time. Compare with the hash and then send the SMS.
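A sketch of that scheme (Python; the per-user salt and iteration count are illustrative). One caveat worth stating up front: a phone number has only ~10^10 possible values, so even a salted, slow hash can be brute-forced per user; the salt mainly prevents bulk precomputed reversal of a leaked table.

```python
import hashlib
import hmac
import os

def enroll(phone: str) -> tuple[bytes, bytes]:
    """Store only (salt, digest); discard the plaintext number."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", phone.encode(), salt, 100_000)
    return salt, digest

def verify(phone: str, salt: bytes, digest: bytes) -> bool:
    """Check a number the user just re-entered against the stored pair."""
    candidate = hashlib.pbkdf2_hmac("sha256", phone.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

At 2FA time the user types their number, `verify` checks it, and only on a match does the service send the SMS to the number just entered, so nothing recoverable sits in the database.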
Hashing numbers has other implications, like support impact (some folks don’t know their own phone number), preventing the ability to offer SMS updates in countries that need it (or to reactivate that feature in national emergencies for countries that SMS support was pulled from), as well as making potential marketing, data mining, satisfying legal requests, and future feature development harder.
So your suggestion is a good one for a privacy-conscious service that doesn’t already depend on (or that is unwilling to relinquish) unhashed numbers, but it probably isn’t in the nature of twitter to seek to protect user data at the expense of existing or future features, even after leaks like this.
Non-geeks dislike the hassle of 2FA enough as it is, having to enter their phone number every time too sounds like it would hurt adoption quite significantly.
With technology like FIDO Passkey built into newer phones (both iOS and Android), I see passwordless multi-factor attested auth becoming the standard for most services very soon. Then, users will have to do even less to get more security.
Already doable with e-mail addresses. Doing this with just a phone number is not really a problem; it becomes a problem when you can link the phone and email. Discovering a phone number in itself is nothing more than pressing random numbers and seeing who answers.
So after forcing users to enter a phone number to continue using twitter, despite twitter having no need to know the user's phone number, they then leak the phone numbers and associated accounts. Great.
But it gets worse... After being told of the leak in January, rather than disclosing the fact that millions of users' data had been open for anyone who looked, they quietly fixed it and hoped nobody else had found it.
It was only when the press started to notice that they finally disclosed the leak.
That isn't just one bug causing a security leak - it's a chain of bad decisions and bad security culture, and if anything should attract government fines for lax data security, this is it.
The whole announcement reeks of "Stop hitting yourself!"
What scum. They had lots of chances to fix this, the first one being not collecting phone numbers in the first place. They chose to do that, and then they didn't adequately protect it, and now they're oh so very surprised that someone might be doxing their most vulnerable users.
If anyone is harmed by this, Twitter should be held liable.
They didn't just fail to protect the phone numbers. They actively and illegally used them to market services outside the purpose for which the numbers were gathered.
I know the answer is money in politics, SV culture, etc. But it's a near certainty twitter will continue as they do, and in 2 weeks everyone will move on.
Maybe they get a small boo-boo in the form of a symbolic fine, managers scramble for a bit, and then the whole thing happens again and again.
Because twitter users care more about the convenience twitter provides than they do about the risks to their privacy and security from using twitter. I suspect most have no idea what the risks are, or have only a very limited idea of some of them. Maybe if they had a better understanding of the risks they'd close their accounts and move to something new, but I doubt there'd be enough of them to cause twitter to invest in securing the unnecessary amounts of data they collect.
This sort of thing will only be fixed when we hold companies accountable for failing to protect customer data through regulation with many rows of sharp teeth.
Twitter is vulnerable, the most vulnerable of the big social media sites, it seems. The Musk deal has fallen through, and it seems like Musk was not the only one to lose confidence in Twitter. It could easily go the way of Myspace. How many active users does Myspace have these days?
They also refuse voip numbers. I am now at 20 back and forth emails with Discord support explaining I do not own a cell phone. They are seriously suggesting I buy one just to use Discord.
Yeah. I used to live in a semi-rural area with no mobile phone coverage, and the insane level of disbelief from places when you tell them "I have no mobile phone" was a real problem. Including banks, and other utilities. :(
Perhaps if you paid for discord. I happily pay for nitro because I see value in supporting discord. Still had to give them my number despite already paying them. I'd be happy about that sort of regulation.
I usually don't do ads, but there is a tool called SMS PVA where you can rent phone numbers specific to a service for a one-time confirmation. You usually get a working one on the first try.
I can't even count how many companies suggested that I should 'just get a phone number' to use their service.
> The FTC says Twitter induced people to provide their phone numbers and email addresses by claiming that the company’s purpose was, for example, to “Safeguard your account.”
> ...
> But according to the FTC, much more was going on behind the scenes. In fact, in addition to using people’s phone numbers and email addresses for the protective purposes the company claimed, Twitter also used the information to serve people targeted ads – ads that enriched Twitter by the multi-millions.
So you're right, it wasn't for "no reason", but it also wasn't just for fraud and spam prevention, security, or any of the other lies Twitter told users.
They no longer use it for ads, so the value now is just fraud and security.
> if it's just to prevent bot signups, why keep it on file at all?
I mean, you need the actual number for 2FA. I guess maybe you could hash it after some amount of time just for blocking bots? You couldn't just discard it or one number could create unlimited bots.
Multiple companies have been caught using information for ads that they said they wouldn't, and Twitter have already proven that they're not trustworthy.
I have seen too many services that ask phone number for account recovery purposes and then end up using it for other purposes for which the user didn't consent. Given how insecure SMS OTP is, I try not to enable that if I can avoid it. Then, on top of it, bugs like this make the service behave like a globally accessible open reverse-directory of mobile numbers to names.
How is twitter notifying users? Has anyone posted screenshots of this notification? I want to know where this notice will appear.
Not defending them but I think a major reason why Twitter (and for example Gmail nowadays) is asking for phone numbers is to decrease spam accounts (which is of course a good thing in itself).
As I said, not defending them. They are likely doing dozens of other things as well. But using phone numbers is a quite effective method of hindering spam/bot account creation: in most countries in Europe, at least, getting a prepaid SIM requires ID nowadays. Not that Twitter would go as far as to inquire into the ownership records of phone numbers... but/so you could still go and buy 100 SIM cards if you wanted to, though it'd be way more expensive than just spawning new email addresses.
No spammer ever buys sim cards in store with ID.
5sim.net apparently has direct SS7 access and nearly infinite numbers and offers bulk purchases for receiving SMS. Even for countries like Germany, where ID authentication is mandatory to get a phone number. They have thousands of +49 numbers.
Costs only a few rubles. If you convert it to euros it’s between 1-10 cents, depending on the service and country.
The bottom line is: IDs for sim cards are useless.
Oh, that's interesting. I wonder how they get past regulation in countries like Germany as you said. I'd assume they'd have to be registered as an official operator there?
We consistently have to go through data-protection reviews and limit the purposes the collected data can be used for. This seems like either a blatant miss in process, or willful ignorance where $150m is under the EXPECTED value of the marketing rewards.
I think you will see more of this class of attack.
Lots of companies have various 'forgot my username'/'forgot my password'/'trying to sign up for a new account with a new email address but existing phone number'/'add a friend by email or phone' flows. It's very easy to accidentally leak some info that shouldn't be leaked while implementing such a flow, since you are peering into the users database querying by email/phone/other identifier while the user hasn't properly authenticated yet.
Yes. The proper way to implement this flow is to ask for the information, and then present the exact same result screen regardless of the actions taken. Any additional information or action should be done exclusively through the contact information you have on record.
And make sure the response takes constant time. Otherwise the slower response likely corresponds to a real phone number, if the backend synchronously does more work, such as sending a recovery email. The backend would need to be really slow, however, for the signal to be strong enough to be useful.
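Combining the two comments above into one sketch (Python; `lookup`, `send_recovery`, and the 0.5s floor are all hypothetical stand-ins for the real backend):

```python
import threading
import time

RESPONSE_FLOOR_SECONDS = 0.5  # assumed padding target; tune to the backend

def request_reset(identifier, lookup, send_recovery) -> str:
    start = time.monotonic()
    account = lookup(identifier)  # may or may not find an account
    if account is not None:
        # Fire-and-forget so the caller can't time the email/SMS send:
        threading.Thread(target=send_recovery, args=(account,)).start()
    # Pad every response up to the same floor duration:
    time.sleep(max(0.0, RESPONSE_FLOOR_SECONDS - (time.monotonic() - start)))
    # Identical screen either way:
    return "If that account exists, we've sent recovery instructions."
```

Padding every response up to a fixed floor is simpler and usually more robust than trying to make every code path take identical time.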
No, the binary information too is a privacy concern. For example, one could enter a coworker's phone number to confirm that the coworker has a 4chan account. This isn't good.
> If you operate a pseudonymous Twitter account, we understand the risks an incident like this can introduce and deeply regret that this happened. To keep your identity as veiled as possible, we recommend not adding a publicly known phone number or email address to your Twitter account.
First time I've heard a company actually say this. It's obvious to people who understand a bit about tech and security, but not obvious to the layperson. Twitter actually deserve a tiny amount of credit for giving practical advice that reduces harm to users in the event of a breach.
No, that's just shifting the blame onto the user. If they are asking for something as sensitive as a mobile number, then they need to protect it properly.
They ask for a mobile number to verify you're a real human, then they say "Ha it's your fault you gave us a sensitive mobile number". 99.9% of users only have one mobile, and have no idea how to get an alternate number, so they just give the number they have.
Even so, it's the first time I've seen a company actually imply to the public in plain English that they can't protect private info, rather than maintain a facade of security that doesn't actually exist.
As you point out though, if Twitter requires a phone number to sign up and 99.9% of users use their personal number, then Twitter are basically saying "our security sucks and if you want an account you have no alternative...".
Some interesting corollaries:
- Are there any services that will sign up to twitter on behalf of users? (and would they work or would it be merely shifting trust from Twitter to a potentially less trustworthy party?)
- I wonder if Twitter could consider not requiring personal info at sign up so as to avoid this dark UX
I signed up for twitter a couple weeks ago to follow some Ukraine folks. They didn't require a phone number, and I just double-checked that my account doesn't have one.
So you have a well-established account from years ago that doesn't have a phone number. Congrats. Now try to get a new account to protect your identity.
Except for a long time they shut down accounts without a phone number under the pretense of "suspicious activity". For some reason, these suspicions could be immediately allayed only by providing your phone number.
Being forced to do something and later being advised not to do that thing out of deep concern for my well-being? Yeah, that's the Twitter UX vibe: the most self-regarding, passive-aggressive person you know, in software form.
Twitter often FORCED users to enter a valid phone number by locking accounts, and then verified that it was active by comparing it against other accounts. To this day there is no way to remove the phone number or disassociate it from an account. Please do not oversimplify the offense; it does not do justice to the issues cited.
Two days ago, I tried to create an account tied only to an email. During account creation, the wizard suddenly inserted an additional step and required me to enter a phone number.
I realise though that this is possibly an anti-spam measure (which I'm in favour of), since I connected through Tor when creating the account. But this procedure stands in stark contrast to the advice given in the article.
Perhaps Twitter needs to make it easier to create accounts anonymously and stop virtue signaling (i.e. suspending accounts created over its Tor onion service).
With pseudonymous usage of public services, information minimisation is a mantra that needs to be followed religiously: it maintains operational security against private user data being disclosed by external hackers or rogue insiders.
I’m six months in and they haven’t asked for a phone number yet. I dread the day when they do. This is where proficiency in the Twilio API comes in handy.
When I started liking "too many" tweets I got hit with it, and my mobile carrier (Canada, btw) refused to deliver text messages from Twitter, so I could never get verified.
Lucky you. I can't create another twitter account as my number is on a network unreachable by their SMS system. Worst of both worlds for me as when that number was on another network they could verify. So leaked number that I cannot even use to verify a second business account :-(.
Virtue signaling? Preventing completely anonymous accounts doesn't seem to fit the colloquial definition of that; I always assumed it meant taking an action simply for social signalling that has no benefit to you otherwise.
How about the fact that Twitter recently launched an official onion service, yet users claim that when attempting to create an account with email over it, the account is locked for 'abuse' in short order?
I certainly understand why you want to use Tor to create a Twitter account, I guess the disconnect is you seem to feel it is fundamentally and obviously wrong to prevent this, but it does seem fairly clear why you'd offer a service to allow logins yet not signups. And in any case, can't speak to why an individual account got banned
$5k seems embarrassingly low for something with such horrendous impact. Potentially allowing for doxing and, because phone numbers are the linchpin of much 2FA and consumer-facing telco security is generally lax, total user hijacking across multiple platforms. What an absolute disaster.
I have found many far more serious bugs, even at larger companies, that have paid me under $500. No one feels security researchers' time is even worth that of the internal engineers creating the bugs.
Anyone have any idea how many of these bounties are collected by people who actively look (seems like a hard way to make a living) vs., say, people with some knowledge who stumble across the issue and wouldn't otherwise take the time to properly report it (might convince me to take a couple of hours)?
Turkish law enforcement authorities have abused Twitter's login system over the past several years. If an anonymous Twitter account was criticizing Erdoğan, they would try to log in, attempt a password reset, choose the phone-number option, and Twitter would show the last two digits of the phone number.
They also kept a list of known people who publicly criticized Erdoğan but without using any bad words, so they were unable to open a criminal case against those people directly.
Then they would match the probable phone numbers (last two digits) from Twitter against these known people's phone numbers. If there was a match on the last two digits, they opened a criminal case.
And then that person would be visited by police officers in the morning, detained for several hours, and made to attend hearings for 3 years, roughly once every 4 months. He also had to hire a lawyer, for about 5 minimum salaries.
In the end he probably wins the case if he is not the owner of that Twitter account, and Erdoğan pays around 1x the minimum salary to the defendant's lawyer.
Pretty disgusting they don't have a way to check whether they leaked my personal information, which, let's not forget, they screamed and stamped their feet to force me to hand over in the first place.
I never wanted to give you my phone number, Twitter. You demanded it.
Well yeah. Some accounts could be two. If I see language like that in a headline, I pretty much ignore it. It's like when I see the word "may" in a headline. "New wonder drug may cure cancer." That isn't even news.
That's not unusual for a security bug; it's not like this stopped people from using the app in a way that they'd loudly complain about or that would show up in metrics.
Given they didn't think it was exploited they must have pretty poor logging and analytics around that part of their infrastructure. Someone managed to abuse it millions of times and they didn't know about it even after they'd fixed it and knew exactly where to look for abuse.
I said this before years ago about Signal, Robinhood and Coinbase [0] and right now it's 2022 and SMS 2FA is still being used despite SS7 attacks, SIM swapping, one-click zero-day SMS attacks as found in Pegasus and sophisticated SMS phishing attacks. [1]
Really, one needs to think twice about logging into any service that offers ONLY phone-number 2FA, and this should be a wake-up call.
Twitter really should get a massive multi-million dollar fine for this breach.
It's always hilarious: Whenever any company is caught not taking X seriously, the first thing they do is issue a press release that starts with "Here at COMPANY, we take X very seriously!"
A story an old coworker of mine often told was about the CEO at a previous company he had worked for. This guy was apparently pretty scummy in general, but one time he got threatened with a lawsuit for sexually propositioning his secretary.
He settled that issue with an under-the-table payout, but the first thing he did after that was to send out a stern memo to all staff warning them that "we will tolerate ABSOLUTELY NO sexual harassment at this company!"
You can pretty much read a list of company values to find out exactly the things they do only for show.
The companies I've worked for have always ignored any stated values as soon as it costs them money or gets in the way of making money. Which is, you know, always.
> When we learned about this, we immediately investigated and fixed it. At that time, we had no evidence to suggest someone had taken advantage of the vulnerability.
> In July 2022, we learned through a press report that someone had potentially leveraged this and was offering to sell the information they had compiled. After reviewing a sample of the available data for sale, we confirmed that a bad actor had taken advantage of the issue before it was addressed.
Yikes. Sounds like they either didn't dig deep enough to see if it was exploited or they don't keep records long enough to be sure.
This link is not particularly relevant, as it talks about how the phrase "no evidence" is used within a specific community and that community has little overlap with the community which writes press releases after security incidents.
Security incident response teams do not have the same strange distinction between "real" evidence and the non-published non-peer-reviewed evidence which cannot be relied on or even really mentioned.
Probably the latter: all companies operating in the EU have had short (e.g. 30-day) retention policies on anything user-identifiable (e.g. HTTP logs) for a while now.
But if they didn't keep sufficient logs, they should have alerted the users back then, not now.
AFAIK there are exceptions to the 30-day window for many purposes: taxes, law enforcement, "critical business functions", etc. Tax records, which can contain quite personal PII, need to be kept for ~7 years in the US, for instance. Anything that needs to go to law enforcement stays around until the court case is over, which can be longer.
For security reasons, IP addresses need to be available in plain text. There is no time limit on how long you can store the data, but you need to be able to justify why.
No, that's not valid at all! You must remove any trace of your ability to reverse-engineer the IPs. Hashing isn't sufficient, since it's so easy to run over the whole IPv4 space. This is one of the trade-offs.
You could probably make the argument that you need to store http logs with cleartext IP addresses for more than 30 days for operational security and fraud detection reasons. I would certainly consider 180+ days of cleartext IP addresses quite necessary to be able to react to any security or abuse incidents.
You can if the hash collides within the IPv4 address space, i.e. it's a hash of fewer than about 16 bits. Enough to let you roughly see if something fishy is going on, but you can't reverse-engineer to any specific IP, only to a set of about 64 thousand.
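Concretely, something like this (Python sketch; the salt value and 16-bit truncation are illustrative):

```python
import hashlib

BUCKET_BITS = 16  # 2**32 / 2**16 ~= 65,536 IPv4 addresses per bucket

def ip_bucket(ip: str, salt: bytes = b"rotate-me-per-retention-period") -> int:
    """Keep only a 16-bit salted hash of the IP: enough to spot one
    'address' hammering an endpoint in aggregate logs, but any single
    bucket value maps back to roughly 64k candidate addresses, not one."""
    digest = hashlib.sha256(salt + ip.encode()).digest()
    return int.from_bytes(digest[:2], "big")  # truncate to 16 bits
```

Rotating the salt each retention period also prevents buckets from being correlated across periods.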
That isn't good enough. Taking that hash and old request data, combined with your current request logs, is enough to de-anonymize a significant portion of those logs, making you non-compliant.
Data from the past few days we do have a legitimate interest in: protecting our network. If someone is spamming us we need to be able to find out who did it, and the only way to do that is with deanonymized logs to begin with. At least in my workplace we have worked with the DPA to ensure that we are in compliance, and there is no issue in keeping around 7 days of IP logs without further anonymization. All our long-term logs are hashed below the bit minimum, and that can't be paired with old request data as easily, since we strip all but major version identifiers from User-Agents, for example.
If something uniquely identifies someone, it's considered PII, and a salted (but still useful) hash of the IP address is exactly that, at least under GDPR. That means you would need to throw away the salt and have a different salt for every instance. At that point, you might as well replace it with a random string, and that isn't very useful.
"In the context of the European GDPR the Article 29 Working Party has stated that while the technique of salting and then hashing data “reduce[s] the likelihood of deriving the input value,” because “calculating the original attribute value hidden behind the result of a salted hash function may still be feasible within reasonable means,” the salted-hashed output should be considered pseudonymized data that remains subject to the GDPR."
Under CCPA, I think that is enough; HOWEVER, businesses must implement business processes that specifically prohibit re-identification. So again, not useful at all in this case.
The question should be is IP address a PII or not. Under CCPA and GDPR it is, but only if it “identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.”
Out of curiosity, why is it only 5M and not 500M? You would think the same vulnerability applied to every server, not just one server or one cluster, if they are using automated deployments.
Doing it slowly over time to collect the information without raising an alarm, rather than twitter noticing a massive uptick in password resets that don't go through?
"We have no evidence that this was exploited" is a standard psychological trick they pull in vulnerability announcements to give an unfounded impression that it hasn't been exploited.
I always wonder who "we" refers to in that usage, legally speaking. Does it refer only to a subset of employees / board members who are authorized to speak for the company? Because then even if someone analyzing logs sees something damning, if middle management is trained to stop that knowledge from reaching the top, then those speaking for the company can continue saying "we" didn't know it.
Tech needs regulation like the finance industry in this regard. Regulation that can push responsibility for breaches up the chain. There must be ways to escalate and if something is seen and reported but not acted on, then liability goes upwards.
CEOs in Finance and Banking do A LOT of compliance work, and it does catch a lot of problems.
really? what do you mean 'middle management is trained to keep that from getting to the top'? intentional malfeasance?
where I work people are trying their best but dealing with complex systems, memories, and methods of communication. because of this, security issues are sometimes missed, sometimes poorly communicated, and sometimes poorly remediated.
I guess, to the poster's claim, "I have seen this happen" is an existential claim, not a universal one.
Fwiw, I've ended up being "middle management" at a large company, with a deep technical background, and I'm trained and incentivized to report, escalate, inform, communicate, share, and otherwise ensure it's addressed up the bloody wazoo. I get slapped on the hand for not communicating / informing enough, never for communicating too much. Over 2 decades, I've never seen my executives try to cover something up. "Manage the narrative", sure, but that's largely about how they craft a sentence, not about not reporting.
However, I have also witnessed corporate culture in other places (as an embedded consultant) where each layer is terrified of the layer above, and each layer is heavily punished for reporting "bad news". They were institutionally set up to fail project deployments, as risks are not escalated and they proudly plunge forward. I'm not sure how much was top-down knowingly obstructed to hide stuff, so much as everyone was electroshock-therapied into avoiding the bad experience. Taking the most cursory look at the most basic logs and saying "whee, no evidence of exploit!" would be par for the course :-/
Probably not 'trained' as much as 'heavily incentivized'. Nobody wants to be the messenger that gets shot for bringing bad news. Much easier to cover up and tell the big boss what they want to hear as long as you can.
This certainly happens. If you speak to a corporate lawyer about a potentially sensitive issue, they will encourage you to use the phone, not put anything in writing, and not tell anybody, especially not higher-ups in the company, until you sort things out with them first.
Seems both ethically questionable and maybe not the best strategy for the individual if they're being instructed to keep information to themselves instead of passing it up the chain in the company. Is that intended to keep just that employee responsible for whatever mess?
Right, but how so? A person or company can get into trouble with things being written down or made known to others. Having a lawyer consider it first is legally prudent and is entirely reasonable and common advice given out to any person (don't speak to police/regulator/other party/internet/newspaper/etc before consulting your lawyer). If you think that's ethically sound advice for a person, then what changes the calculus for a corporation?
> and maybe not the best strategy for the individual if they're being instructed to keep information to themselves instead of passing it up the chain in the company. Is that intended to keep just that employee responsible for whatever mess?
Probably less instructed to keep it to yourself, more encouraged to stick to "official" reporting channels, and then when you do that or come into contact with such issues by other means, more encouragement to use the phone.
And it completely depends on what it is as to the intention, I guess. Initially, so that the lawyers are able to consider and advise. But sure, you aren't paying the lawyer, so they are only taking care of your interests insofar as they coincide with the company's interests. So if you had a concern that you would be held responsible for a legal problem, or are a victim of a criminal or civil legal matter involving the company or another person in it, then I would say you should consider discussing that with your own lawyer.
It means silos and information hiding are baked in — as a matter of corporate culture — at least in part to preserve the option of plausible deniability for statements like Twitter’s.
It would be more honest to say "We aren't able to determine whether it was exploited" which could better brace potentially impacted users for the possibility they might be affected.
This is a relatively benign case but the same language is used in other breaches when people should be taking measures like freezing their credit or reviewing financial transactions.
The only thing that could happen with the data would be that it is exploited.
The only thing that happens to stolen cars is not going to the Taliban.
These are not even similar in nature. They aren't saying "the data was stolen". They also aren't saying "the data was available for exploit; we are unable to determine if that occurred."
What if they never looked for evidence of unauthorized access? They wouldn't have any!
This is the same as modern science and medicine frequently using the academic phrase "no evidence" when what they mean is that there has been no investigation.
It's more like saying "I left my car in a shady neighborhood unattended for 72 hours with the doors open and the key left in the ignition, but I haven't been keeping track of the mileage or the fuel level, so I'm not aware that anyone used it while I was away."
Nothing would have stopped someone from using it. Probably best to assume that they have.
You can make positive assertions, though. E.g. the attack might have been simple, in which case it's possible to produce indicators that cover 100% of variants. Or it could have been complex, and indicators either don't cover every possible attack or they produce a large number of false positives.
Another thing to mention would be how far into the past they were able to look. E.g. in this case they found out that the bug was introduced in 2021; were they able to inspect logs covering all of that period, or did they only have limited logs/other evidence, making it impossible to know whether anyone used this opportunity or not?
How about we don’t use terse language and a short blog post to describe a complex thing and instead talk about what happened, what you did to investigate, WHY you couldn’t determine if it was exploited, and what the heck you intend to do about it? How about some facts and transparency? How about some real honesty?
> instead talk about what happened, what you did to investigate, WHY you couldn’t determine if it was exploited, and what the heck you intend to do about it?
This will be read by optimistically 1% of people, the rest will just catch the summary. This way, you at least get to write the summary.
Well, “after investigating by <insert actual efforts taken here>, we were unable to find evidence it was exploited” would be a good start, as it would indicate some effort was put into disproving the hypothesis.
It provides close to nothing, because it doesn't indicate whether there was no evidence because there could be no evidence – you keep no logs – or whether there was no evidence despite copious information being kept that definitely would have shown it if it was exploited.
“We have no evidence” strongly implies some sort of extensive forensic dance was performed, and was fruitless. “We have no way of knowing” sounds much more like epistemological resignation. “Evidence” is a pretty loaded word to use.
"We have no way of knowing" may not be correct statement. There could always be a way to know that you may have missed. It would be inhuman to claim "we have no way of knowing" in circumstances like this.
But then you might as well just assume everything is compromised, at all times, even if there's been no announcement. They could just not be telling you.
Which is maybe not the worst strategy, but it's going to be pretty exhausting.
I'd suggest that instead we should just expect and enforce a certain amount of openness and honesty from companies when they fuck up in this way, so we can make informed decisions.
Well, yes - this is the dilemma which is not resolved with empty platitudes, even though "you can't prove a negative."
In the US and elsewhere, there are already some penalties for covering up a problem, and they should be expanded commensurately with the potential harm.
I mean, in practice what it tends to mean is the logs only had a 3-month TTL, so it really could be either way. "No evidence" implies there is at least a place there could have been evidence, they looked, and didn't find any, which is a weak but nonzero update towards it having not happened. It would be nice if they clarified exactly what they checked.
> "no evidence" implies there is at least a place there could have been evidence, they looked, and didn't find any
Yeah I'd never assume that any of that is true. Sure, there probably are ways twitter could find out if something has been being exploited like evidence in server logs or new batches of accounts showing up for sale on the black market, but I wouldn't trust that they looked for them, or that they looked very hard, or that the person making press statements was told about it either way.
If a company has a financial incentive to not find information it's weird to assume they'd seriously look or be trusted to be honest about what they found.
Other than being a psychological trick, what purpose could pointing out the lack of evidence at the time serve? Instead they could have written something like "We found the problem in 2021 and promptly fixed it. We first learned that it had been exploited in 2022."
That is not a normal statement if it is your company's fault the question even came up.
"We left a giant tub filled with cyanide completely unsupervised in front of our door for months. We have no evidence that it was used to murder someone."
"We left our gun outside, unsecured, but no one has complained they were shot with it and we didn't detect any fingerprints on it when we finally noticed it wasn't locked up properly"
"We left a giant tub filled with cyanide completely unsupervised in front of our door for months. We have no evidence that it was used to murder someone."
No one would say that second sentence; if you don't have evidence of something, you don't state that, because the set of objects and events that didn't happen is infinite.
"We left a giant tub filled with cyanide completely unsupervised in front of our door for months. We have no evidence that someone accidentally fell into it, an animal died in it, it was used in a bank robbery, someone's cell phone slipped it in............"
"That person owns a gun legally, we have no evidence that he used it to murder someone"
I wonder, if you destroy all the evidence this was exploited, can you still claim you don't have any evidence this was exploited? Asking for opinions from non-lawyers only please
Don't currently have? Sure. The quote says "At that time, we had no evidence" so I think that would be harder to argue. You could maybe make the case the statement means: At that specific moment we didn't have any evidence because we already destroyed it. But it certainly implies they mean they had not found any before that point in time.
Works the same way with government. The "I am not aware of ..." is a great trick for when your organization is intentionally siloed. The folks who get subpoenaed are left out of detailed info. It's a complete non-statement.
I could bring up examples across both sides of the aisle. It's all a big game.
haha. I am a lawyer, so sorry, but while you might be able to claim that, you are legally and ethically obligated to also divulge the intentional spoliation of the evidence.
It would be a lot more convincing if they said they put a team on to it to investigate extensively and didn't find anything indicating it was exploited.
Absence of evidence IS some evidence of absence if you look thoroughly. It sure isn't anything of the kind if you haven't actually tried to gather the evidence or are aware of giant holes in what you were able to gather.
Saying there is an absence of evidence (of a leak) isn't useful by itself unless they also indicate whether that is evidence of absence (of a leak). I.e., they should indicate whether it is likely that they would have caught it if a leak had occurred (e.g., via extensive logging).
Provide some level of detail on how they looked for evidence. "We have no evidence" could mean "we didn't bother looking for evidence" or "we looked extensively for evidence, but didn't find any." In fact, the company has an incentive not to keep logs or collect evidence, specifically so they can truthfully claim they don't have any evidence of a breach.
It's not a trick. Incident response (not vulnerability announcement) is all about evidence. If you can't prove it, it didn't happen. They can probably still take precautionary measures, though, which the announcement is part of.
That's why I referred to it as a psychological trick.
They should be open and forthcoming about their level of confidence, instead of using the least worrying language they can offer while remaining technically correct.
You seem to believe "we had no evidence to suggest someone had taken advantage of the vulnerability" implies "we looked for any evidence of it". It doesn't – not in this case, nor in any similar situation.
Yes I wonder about this as well. Say Musk had good reasons to suspect some private information was at risk and Twitter kept denying anything was going on. No matter how minor the actual impact would be in the end, this would not paint Twitter in a favourable light especially in a legal battle where Musk claims Twitter held back vital information.
The page isn’t loading for me and I notice Twitter itself is either slow or not loading at all right now. I also see a spike in reported problems for Twitter on DownDetector.
You know... in the last major tech bust, downsized teams working on oversized software didn't have thousands of production services to maintain. What's a company with 10k services and 10 languages going to do when it comes time to patch security vulnerabilities? Or merely keep them from emerging?
It's one of the many reasons why I don't like to associate my phone number with an account for 2FA and such... Or any other information that they don't need (like name, etc...).
I think that Google recently forced most accounts to give a phone number even if you don't use 2FA (probably for ID purposes). That's one reason why I like this service: https://www.emailnator.com/, instead of using my own gmail address for signups.
Anonymity is going down the toilet really fast in the US...
> We will be directly notifying the account owners we can confirm were affected by this issue. We are publishing this update because we aren’t able to confirm every account that was potentially impacted, and are particularly mindful of people with pseudonymous accounts who can be targeted by state or other actors.
So they may contact you, or may not. It would be nice if this gets added to something like haveibeenpwned
This is why Managers and PMs should not be deciding priority of security betterments. I've never worked at, or heard of, a company that adequately incentivizes or takes posthoc corrective actions for EMs/PMs around long term consequences or brand threats. They're tragedies of the commons of sorts.
> "At that time, we had no evidence to suggest someone had taken advantage of the vulnerability. "
This sounds misleading or incompetent. If someone was harvesting data, then logs would indicate how many such login attempts were being made per second/minute/hour/day, and the activity would spike on certain days, at certain times, and in certain geographical areas, suggesting this kind of activity was going on.
Even if the attacker was really careful, spreading their activity over long periods of time and routing it via multiple geographical areas, the overall activity would still show an uptick compared to before the bug was introduced and after it was fixed.
I find it highly unlikely that a company of the size of Twitter could not ascertain from their internal data that a bug like this was exploited or not.
I'm disappointed and growing hopeless about the state of software engineering at these companies that this sort of issue is not caught in engineering design documents, during development/debugging, or during code reviews. Any competent engineer should have the sensibility to have seen that the implementation they've designed or programmed leaks private information in some scenarios.
Or, perhaps the UX design team intentionally decided that mentioning the Twitter username associated with the email address would be a "helpful" piece of info to present at this point in the login/signup flow. In this case, too, the design team should have known that privacy far outweighs any potential helpfulness.
Tying identity to a phone number is one of those things that solved an immediate need (2FA) but it's riddled with so many issues and concerns that the only reason we're still doing it is because the alternatives are a huge step up in complexity and user frustration.
It's why I've been relatively ok with everything Apple has been doing here. Someone needs to drag us into the modern age of authentication and it hasn't been any standards body. They can write specs all day but unless they can get players to adopt them then they're worthless. It's Netscape 3.0 all over again.
Another nail in the coffin for phone numbers? Useless for security, most calls are scams now (friends/family use other tech to communicate), and now leaked enough that everyone knows everyone's number anyway.
I thought this was going to be a tongue in cheek announcement to wit-
A chaos actor with malintent executed a social engineering attack and thereby acquired sensitive private data from several million active human accounts with the goal we believe to misclassify humans as bots and thereby thwart the actor's own publicly but impulsively stated goal of acquiring Twitter and becoming custodian of this data. This actor is still at large though we expect to see him in Delaware Chancery Court in September where he will be punished for his impulsive chaos.
>If you operate a pseudonymous Twitter account, we understand the risks an incident like this can introduce and deeply regret that this happened. To keep your identity as veiled as possible, we recommend not adding a publicly known phone number or email address to your Twitter account.
I'm so sick of this kind of victim blaming, you're forced to add a phone number to use twitter.
Not only do they block one-time numbers, Google Voice numbers, etc. -- they claim you CAN sign up with just an email account, let you, and then 30 minutes later automatically lock your account and tell you the only way to verify it is with a number. I was setting up an account a week ago for a client and I eventually gave up, because I was sick of being lied to by their UI.
No mention of that fact that 'use another phone number' is quite an expensive thing to do in countries where a phone number has an annual fee of hundreds of dollars.
Suddenly 'use twitter securely' has gone from 'free' to 'hundreds of dollars a year'. Perhaps they should announce this as a price change instead?
Many "IOT" providers give physical numbers for almost no cost, and they provide physical SIM cards for the service. The aren't VOIP so aren't blocked by twilio, etc for use with Twitter and other services.
You need a plan to have a number because it's difficult/impossible to get a number allocated to you as an individual. If we assume "hundreds" means >=$200/year, then the maximum monthly payment we can have for that not to be true is $16/mo. The absolute cheapest phone plans I could find in the US that weren't for alarm systems were $15/mo on mvnos like mint. In practice, I suspect few people are paying less than $25-$30 a month, or "hundreds" a year for their numbers.
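The break-even arithmetic in that comment checks out; as a quick sanity check (the $15/$25/$30 figures are the comment's own examples):

```python
# "Hundreds a year" taken as >= $200/yr: below what monthly rate is that false?
break_even = 200 / 12
print(f"${break_even:.2f}/mo")  # $16.67/mo, quoted above rounded down to $16

# Annual cost of the cheapest plan found ($15/mo on an MVNO like Mint)
# versus the more typical $25-$30/mo range:
for monthly in (15, 25, 30):
    print(f"${monthly}/mo -> ${monthly * 12}/yr")
```

So only the very cheapest MVNO plan ($180/yr) ducks under the "hundreds a year" bar, and typical plans land at $300–$360/yr.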
Prepaid phone plans in the USA charge you a monthly rate just like subscription plans do. They may also charge you for usage, though that appears to have fallen off compared to the past.
Some years ago I looked into prepaid pricing and determined that it was significantly more expensive than a subscription plan at even my almost-never-use-it levels of phone use. (At that time, pricing was based on (1) a reasonable per-use rate, which would have been very cheap; combined with (2) a high flat fee charged on any day you used any feature of the plan, which already nullified any price advantage; and (3) a requirement to add funding to the plan every month, regardless of whether you had an existing balance.)
1. You need to top up by €20 at least once a year to keep your account
2. You may sign up to an offer, which will deduct a portion of a top up each month to activate the offer (e.g top up by > €20, the phone company takes €10 for unlimited texts, or €20 for unlimited data).
3. If you don't top up as required by your offer, you fall back to a state as if you had no offer
4. If you have no offer there's fixed fees of like 20c/sms and €0.50/min of calls, €2/day for 100mb of data
We used to have pre-paid plans like that in the US, but they've fallen out of favor in the last 10-15 years. They were complicated to use, and very expensive: many MVNOs had rules such as requiring at least one top-up a month to keep the line active, and money used to top up had time limits before it'd expire.
Now pre-paid is often just paying for a month before usage rather than after usage. Even cheaper providers like Mint sign you up for 3 months at once, which can get expensive if all you want it for is just satisfying Twitter.
It's true in the USA if you stick to the big providers... Ring up t-mobile and say 'I'd like a line with 0 minutes and 0 GB of data, just to receive verification texts for Twitter' and they'll probably quote you $200 a year or so...
Tracfone is the company you want for this sort of thing. A SIM costs $0.99 (requires an unlocked phone, of course) and you add $15 to the account to get 500 texts. (I think you can do this with cash at a place like Walmart.)
It is expensive if you need to keep the plan around, but Twitter doesn't seem to regularly send SMSes to the phone number, so you probably don't need to pay beyond the first month.
Why the arbitrary limitation to the “big providers” - you can get a basic Tello plan with SIM for $5/month prepaid - and they’re a T-mobile MVNO so it’s a T-Mobile number.
Yes, I meant that even somebody who wants to keep their identity unattached to Twitter (and thus not risk doxxing after a Twitter data leak) can't do it in India, even if they have the money to afford it.
No, technically every SIM gets activated only when the mobile phone provider gets a copy of the user's documents and a verification call comes from the mobile company's service center to an existing number of yours or your family's (and you verify your document details). If you don't have an existing number they can reach, they make you bring documents to an official store. There are no pre-activated SIM cards.
Mostly, like in any other country, this happened because they found bad people were using pre-activated SIM cards for terrorism.
My existing phone number is now 14 years old, same provider, prepaid. I have been required to submit updated KYC about 4 times in these years.
> I'm so sick of this kind of victim blaming, you're forced to add a phone number to use twitter.
I had some old accounts that did not require a phone number.
At least until I wanted to enable TOTP 2FA.
At which point the numbnuts at Twitter would not just let me "just" enable TOTP; I was forced to provide a phone number (which, to add insult to injury, for a long time they refused to accept because they would only send messages to a limited number of carriers).
The company entity requires blaming others. It can't blame itself, otherwise stakeholder value is affected. If you want to blame anyone, blame the environment that allows these types of actions by companies, or simply stop using them.
BTW, no Twitter account is "ours". If it was, we could download everything (friends and all) and move it somewhere else. Twitter needs to take ownership of all data on their platform - user accounts included. Trying to separate them into different entities is ridiculous.
These are cogent points and I completely agree not admitting fault seems the playbook for publicly traded companies.
It’s unfolding in real time with Tyler Technologies and we’ll have to see how it plays out. Intelligent institutional investors are pouring money into a company that is responsible for leaking millions of supposedly confidential CRIMINAL RECORDS and is trying to blame JudyRecords for finding their mistake.
Again it goes to show we don’t really own anything that turns digital, and no safeguards are guaranteed. The only recourse is legal action, which is, IMHO, going to bankrupt Tyler or force numerous spin-offs to pay the class action awards from the CA State Bar… and potentially hundreds more. [0]
The environment is one of no consequences when hiding behind a corporate banner, for most intents and purposes. Choose who you work for wisely.
It might have some PR speak sprinkled, but it’s genuinely good advice, put more bluntly:
“We can screw up, if it’s important enough for you to stay anonymous you should get a separate phone number and email”
That is a good tip with every company. If you want better security, have less trust in the services you’re using.
This goes to what victim blaming is. Yes. It would be great if the victim lived in a better world. But sometimes extra caution could help them now without waiting for the entire world to change.
In Germany and other countries you have to show government ID to get a GSM number. Phone numbers are like bank accounts: strongly linked to official name and identity.
There is an exceptional difference you left out. In criminal situations, the criminal is punished, there is a deterrent. What is the deterrent here? Without a deterrent, there is a moral failure.
If you operate a pseudonymous account anywhere, you should always assume there's a slight possibility that one day your identity is known.
I think it's not far-fetched to think that in the future, malevolent governments will have access to whatever things we may have posted and use it against us.
It can be triggered for opaque reasons. My account dates to February 2007. I was prompted for a phone number a few years ago and given no other options to recover the account. Burner & VOIP numbers that work for many other things, including SMS verifications, were rejected.
I suspect the reason was some rapid changes in my IP address in a short period, together with a lot of Twitter tabs open – whose constant background requests often seem to trigger, for me, some sort of Twitter-side connection-slowing. (Their own shoddy, high-weight design makes my normal usage pattern look like a DoS attack to them.)
So your style of usage, more so than your account age, is likely why you've been spared their arbitrary phone-number inquisition.
I don't get the definition of "publicly" here. Does it mean something on Internet, or include numbers I tell people in-person? If the former, not so many people put their number online I suppose...
When I created an account, they blocked it 30 seconds later (before I had done literally anything) and would only unblock it upon me adding a phone number. Google suggested that this was common practice by them at the time.
Yes. They will let you sign up with just an email but after few minutes of activity your account will be locked and they will demand phone number verification.
All of them. You don't need to provide one on sign up, but your account will be soft banned typically in a couple of hours until you provide one. So it's a requirement that they aren't forthcoming about.
A year or so ago, I created an account and followed ten or so people (no tweets at that time). When I went to log in the next day, it wouldn't let me log in until I attached a phone number. As I understand it, that was a relatively common occurrence.
And, this is just one of many examples of a deep, deep dishonesty at the core of Twitter Inc's operations:
Pretending they're not requiring something when in practice, a giant proportion of their userbase faces it.
Pretending anything changes when you click 'See This Less Often' on some annoying feature.
Constantly undoing a user's preference for 'Latest' over algorithmic 'Home'.
Claiming they don't "soft-ban" but absolutely, verifiably, hiding some users' content from others who have explicitly followed them.
Implying there's some effective "appeal" process for arbitrary and often clearly erroneous moderation decisions – when instead it's just designed for coercing compliance, including the simulated "voluntary" deletion of tweets, under penalty of losing your account indefinitely.
Slurring & hiding replies with no hint of offense as "potentially offensive".
Describing tweets as "unavailable" when (often) all you have to do is click to see them – wasting users' time.
Offering "Show additional replies" even when there's nothing more to show – again wasting users' time.
Tip: If you email(anonymously ofc) twitter support that you do not have a phone number to receive the OTP for verification during account creation, they generally approve your request.
Isn't this a widely known and very old trick? I'm pretty sure I've even seen YouTube tutorials, and non-techy people discussing that there is a way to find a person's Twitter account by their number. This article makes it sound like something recent that was only available for a short time and quickly fixed. Doesn't seem like that at all.
Twitter by default lets you compose, but not send, a direct message to someone who doesn't follow you. Then Twitter leans on you to give them your phone number, and won't send it unless you do.
I've stopped using twitter.com for consumption of tweets and only user nitter.net now. It works most of the time. If your use case for twitter is similar to mine, read-only, it may be useful for you as well.
Facebook had a very similar information leak just a couple of years ago. It is amazing these companies seem to learn very little from each other when it comes to protecting personal information.
This abbreviation is not in the article (nor is the number). And the HN headline now says "5M" which is maybe a more common abbreviation for "million".
So what you’re saying is that you discovered a vulnerability that leaked the private information of your users, said absolutely nothing for 6 months, then finally came clean, but only because you were forced to because people were selling data on the deep web.
Please take your “sorry” and shove it where the sun doesn’t shine. You don’t “take our privacy seriously”. This is utterly ridiculous and unacceptable, and in a fair world you would be punished heavily for it.
Edit: an earlier version of this comment criticised Twitter for not doing an investigation earlier to uncover the fact that a leak occurred. This accusation was based on me misreading the press report - see one of the child comments for details. I’ve removed that part of the comment.
The methods to scrape numbers from social media have been published on YouTube for ages now. They share those numbers publicly because they themselves run services that share user data with other companies openly... Twitter (for example) is used as an authentication service with Disqus, an online comment service, and a few other apps too, which could easily save/track sensitive ID data across comments on multiple sites, unbeknownst to the user – a really shady overreach if that is indeed the case. These numbers are gathered under the guise of security, but they are used for entirely different purposes.
I think the real fault is in them forcing users to enter this type of data to begin with, because that makes the only options to surrender your data to them or to not use the app at all.
It would be interesting to see if numbers from verified accounts were included in the leak, that would be very telling.
They said don't add a publicly known phone number to your account, so you have to create a Google Voice account that you'll never use except for account credentials like this. But Twitter will probably ban you for not using a real phone number. Or, you'll reuse that phone number across other accounts until one of them gets hacked and that phone number sold on the dark net, and now it's a public phone number again.
I'm thinking out loud about various other options that could be utilized: a private 256-character key? You could also store it in an (Azure) key vault, so that it's easily accessible to you from other devices as well. I hope social media companies become open to more secure alternatives, but security seems to be an afterthought for them.
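As a sketch of that idea (purely illustrative – no service accepts such a credential today, and the function name is made up), Python's `secrets` module can generate a key of the suggested length:

```python
import secrets
import string

# Alphabet produced by token_urlsafe: A-Za-z0-9 plus '-' and '_'
URLSAFE = set(string.ascii_letters + string.digits + "-_")


def generate_recovery_key(length=256):
    """Generate a URL-safe random key of exactly `length` characters.

    token_urlsafe(n) yields roughly 1.33 characters per byte of
    entropy, so requesting `length` bytes overshoots, and we trim
    the result down to the requested length.
    """
    return secrets.token_urlsafe(length)[:length]
```

Even trimmed to 256 characters, that's on the order of 1500 bits of entropy – incomparably more than a 10-digit phone number. The hard part, as the comment notes, is storage and recovery, which is what the key vault would be for.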
Strange you say that. I’m six months into my pseudonymous account and they haven’t tried to extort my phone number. It’s like they know from my behavior that I don’t want to be doxxed by Twitter Inc. I signed up using a VPN and a weird email address, and used an ad blocker.
I think your experience is irregular. A while back I was forced to create an account just to report an impersonator and they insta-suspended it for "suspicious behavior" until I provided a phone number. I asked around and heard uniformly "oh yea, twitter does that".
There are many different levels of shadowban apparently. You can be excluded from being able to trend, or gain followers, or to have post visibility at all based on what I've observed. It mainly gets triggered by complaining about Twitter or a favored sponsor... Twitter considers those things censorable, but not upsetting violence and shocking pr0n for some strange reason... ugh.
And this is even when you visit the post (from someone else) which you reply to while being signed out of your account and your IP address?
Most shadow bans show the posts if you visit the profile directly, but make them invisible when viewed as replies to someone else's post, or while signed out and using a different IP address. Might be worth checking. Not asserting that this is the case with Twitter, though.
>If you operate a pseudonymous Twitter account…we recommend not adding a publicly known phone number to your Twitter account.
Dear Twitter,
We need a phone number to be able to use Twitter longer than a week otherwise we get blocked for “suspicious activity” (which is entirely bullshit - logging in from the same IP is not suspicious).
So what should we do? Go to AT&T and open a new line? Jokers.
I was just able to remove my phone number from my account settings and wandered into a Fred Sanford-level of junk data -- Twitter had me identified as a female (I'm male), had "interests" tied to me for both "Alexandria Ocasio-Cortez" and "Ben Shapiro" (they're most certainly not), and had my languages as "French" and "Indonesian" (I know only English). Bad digital hygiene.
having worked in the data industry, this sounds about right. Digital fingerprinting is certainly real, but I was way more paranoid about what I thought companies knew about me before working in the industry. the data quality across the board is dogshit. Even for the best companies doing B2B data like D&B and Zoominfo which are talked about as being better than most of the others - it's still mostly dirt.
Data right now is typically bought and sold with an expectation that most of it is crap. it's faster to buy and process 5000 dirty items that probably has a few good leads buried within it than to find leads manually / naturally or broadcast random advertising. (I left the industry in 2020 and my NDA expired in 2021)
Data quality is typically assessed at the "Does this data field have a value for this line item" level. That means data vendors are financially incentivized to make shit up about you as much as they can get away with. think about it for a second, these companies are selling themselves as the source of truth. the actual accuracy does not matter, and the better you are then the less data your customers buy. the data goes stale faster than the accuracy of the data becomes relevant
Did you like a post about a fresh baked baguette that had #french as one of the 100 tags associated with it? congrats, you're french now. it's not exactly this ridiculous, but you get my point
there are some verification focused services - like they take a list of emails and check if they are valid email addresses. Some use fine print to say they are only validating whether an address has valid email FORMATTING, and make no claim about whether or not the email will bounce. Verifying whether the email address actually belongs to the person it claims to is not part of the deal.
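To make the "format-only" point concrete, here's a quick sketch. The regex is a generic illustration I made up, not any particular vendor's check; the domain is deliberately fake. A syntactically perfect address at a domain that will never deliver mail still passes:

```python
# "Format-only" email validation: checks the string's shape, says nothing
# about whether mail to it would actually deliver. Regex is illustrative,
# not RFC 5322 complete.
import re

FORMAT_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def looks_like_email(addr: str) -> bool:
    return FORMAT_RE.fullmatch(addr) is not None

print(looks_like_email("ceo@totally-made-up-domain.example"))  # True: valid *format* only
print(looks_like_email("not an email"))                        # False
```

Checking deliverability would require an SMTP probe or a bounce, and checking ownership would require the recipient to act, which is exactly what these services don't do.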
it's nearly an impossible task, because you have no actual source of truth to verify it against. So data vendor A and B give you different results for the same search - now what? you have to manually research and see who's "right" or "more recent".
even if it looks like good data, it might be stale. For example, company size, revenue, C level email addresses, etc all change over time.
so if a customer wants cleaner data - you basically charge them to pump the dataset through Mechanical Turk or Upwork or something to have people try to verify things manually. Datasets can be large though and this gets expensive, so it tends to be better to just buy the crap data for cheaper and figure it out yourself
I have a conspiracy theory that these verification services are behind a lot of the phone spam today. they are just checking if your phone number is valid, they dont actually care if you answer.
> data vendors are financially incentivized to make shit up about you as much as they can get away with
Exactly this. But they can get away with basically anything. Worst case for them is they show you a premium ad you aren’t interested in. Best case is they guess correctly
Had a similar issue with Prime Video - it kept displaying only Indian suggestions even though I only visited India for a short time. I don't remember how I corrected it.
Three-dot "More" > Settings and Privacy > Privacy and Safety > Content You See > Interests
"These are some of the interests matched to you based on your profile, activity, and the Topics you follow. These are used to personalize your experience across Twitter, including the ads you see. You can adjust your interests if something doesn’t look right. Any changes you make may take a little while to go into effect."
I hate politics, follow neither of those people, nor have ever knowingly clicked on content about either. I'm browsing geeky product-management tweets, '80s pro wrestling, music, and random tech.
They recently (early this year) onboarded a few million kids with the Minecraft account migration, and a lot of those new accounts will have been flagged for "suspicious activity" and asked for a mobile number to verify who they are.
> To keep your identity as veiled as possible, we recommend not adding a publicly known phone number or email address to your Twitter account.
I had to look up whether this was actually official communication, since it sounds like a kafkaesque fever dream, but yes it's real.
Tech companies have been doing everything in their power to get your number and email, using them for advertising, and deliberately disallowing non-regular phone numbers. And now suddenly you're being gaslit that it's your fault for complying with their demands.
The icing on the cake is the vague "if your phone number is publicly known" stuff. Well yeah, every single phone number is publicly known because the space is enumerable. Even if it weren't, almost everyone's number is harvested and resold by gray-market data brokers. Sounds like they want to muddy the waters and make it sound like a targeted vulnerability when in reality it is indiscriminate.
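To put rough numbers on the "it's enumerable" point, a back-of-the-envelope sketch. The botnet size and request rate are illustrative assumptions (loosely echoing figures mentioned elsewhere in this thread), not measurements:

```python
# Why "publicly known phone number" is nearly redundant: the whole
# 10-digit space is tiny by brute-force standards. All rates below are
# hypothetical assumptions for illustration.
total_numbers = 10 ** 10          # full 10-digit space, upper bound
per_area_code = 10 ** 7           # numbers under one area code
botnet_ips = 100_000              # assumed botnet size
reqs_per_ip_per_hour = 2          # low enough to duck most rate limits

rate = botnet_ips * reqs_per_ip_per_hour          # lookups per hour
hours_full_space = total_numbers / rate
hours_one_area = per_area_code / rate
print(f"full space: {hours_full_space:,.0f} h (~{hours_full_space / 24 / 365:.1f} years)")
print(f"one area code: {hours_one_area:.0f} h")   # ~2 days per area code
```

Even at a rate limit no human would ever hit, a single area code falls in about two days, and the real assigned-number space is far smaller than the 10^10 upper bound.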
Amen. Google is asking me to add 2FA to an account for work, and there's no way to do so except from phone numbers or Google Authenticator which I'd rather not use. It's the only service that doesn't let me use something like Authy for OTP.
You know Google Authenticator is just an implementation of the TOTP open standard right? There are plenty of alternative apps that will give you the same number to key in...
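For anyone surprised by this: TOTP is small enough to sketch in a few lines, which is why so many apps interoperate. A minimal implementation of RFC 6238 (built on RFC 4226's dynamic truncation), checked against the RFC's own test vector:

```python
# Minimal TOTP per RFC 6238 -- the open standard behind Google
# Authenticator, Authy, 1Password, etc. Same secret in, same code out.
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, period=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = int(t) // period                          # moving time counter
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T=59s
rfc_secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(rfc_secret, for_time=59, digits=8))  # 94287082
```

Any app that speaks this standard will produce the same code from the same shared secret, so nothing about it is Google-specific.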
Totally understandable that GP wouldn't know that; from what I recall of logging in to a Google account back when I did that at all often (a few years ago, but relatively recently), Google does its best to hide that.
(If you want it, mine's another recommendation for Authy.)
Yeah, I did not know you could use any TOTP app besides Google Auth, but even that's not an option presented to me anymore, now that I'm checking.
If you’re on a Mac, in safari you can just right click on the QR code and set it up in keychain, then you can just auto fill from safari, no need for a third party app.
That is not entirely true. If I remember correctly, if you select Google Authenticator as your option, it will display a QR code. You can then scan the QR code, and the OTP information in that payload can be pasted into your app of choice. (That's how I got mine into 1Password.)
Replying to everyone who said to use Google Authenticator. I in fact did fall for the devious wording that implied no other Auth app would work but never fear, even that is not an option for my account now that I'm checking. The only available options are physical security keys (which I lack), phone numbers (which I won't disclose), and tapping a notification on an Android phone (which ties me even more into the Google ecosystem and I'd rather not pick). I'll appreciate any comments if anyone knows how to get a TOTP going with Authy for Gmail, don't assume I know everything and I'm willfully ignoring it!
2-step verification can be turned on by going to https://myaccount.google.com/ and selecting "Security" and then "Signing in to Google". The "2-step verification" option finally leads to the point where a phone number is asked for to enable SMS-based verification. Only after enabling SMS-based verification is it possible to enable an authenticator app (TOTP) or some other options.
At least I couldn't find another way to enable TOTP, i.e. without enabling SMS first.
I have the 2FA for my work account in 1Password (if that's reasonable is another discussion) so there should be a way to use something else besides Google Authenticator or phone number.
Google will allow you to use (and they prefer, and you should too) a Security Key. I use this wherever I can. I don't Tweet, but if I did it would be secured with Security Keys. I have Facebook only inside a single container on one machine, secured with Security Keys. And so on for many services. More services should do WebAuthn.
Also, you can't really use Twitter with just an email. Sooner or later, anti-bot misidentifies you, locks you out, and asks you to "verify" by entering a phone number.
While no passwords were exposed, we encourage everyone who uses Twitter to enable 2-factor authentication using authentication apps or hardware security keys to protect your account from unauthorized logins.
So it actually does not imply adding a phone number, which is seemingly what you have tried to imply with the cut-off quote provided.
> "it actually does not imply adding a phone number"
It actually does.
The sentence you quoted contains the link "enable 2-factor authentication", which goes to a page where adding a phone number is the FIRST method described.
"There are three methods to choose from: Text message, Authentication app, or Security key..... If you don’t already have a phone number associated with your account, we’ll prompt you to enter it."
Anyone else annoyed by the growing use of the word "impact" to speak increasingly passively?
People are so afraid to make a claim nowadays, even if it's obviously true. They speak of "impacts" or that something will be "impacted". But they seem to want to avoid saying who or what will be impacted.
"I was impacted by today's layoffs."
"We expect there to be impacts to website traffic."
These meaningless words do nothing except to say "something has happened" which puts the reader in the mindset of having to unravel a mystery.
Anytime you write it's your job to make yourself understood. I don't want to have to be Encyclopedia Brown to figure out what you're trying to tell me.
Orwell's "Politics and the English Language" really should be required reading for all high school students. Personally, I re-read it every few years - chronic exposure to terrible English makes the bad habits grow back, so you need to pull the weeds regularly.
Exactly. They're giving the least information possible to formulate a coherent headline which is technically accurate. If they told the truth in the headline, it would get WAY more clicks. These are clicks they don't want.
"we recommend not adding a publicly known phone number or email address to your Twitter account."
This is effectively impossible. You can't reliably create a Twitter account without a phone number: it sometimes lets you, but the account is then blocked within 24 hours until you add one.
It's insulting that Twitter should lie about that.
IME, numbers they have classified as VOIP, or otherwise not a consumer or business cell service, are disallowed. Skype numbers do not work, and I have had a spotty experience with Google Voice numbers as well.
Is this going to be the thing that gets Elon Musk off the hook for his billion dollar fine for backing out of the deal?
They had a breach and actively hid it for an extended period of time. Obviously both sides have good lawyers, but it's hard to see how this doesn't hurt Twitter in the legal battle over the Musk deal unwinding.
Truly. It is infuriating dealing with the phone number rigamarole.
Why does X company require me to use a certain phone number/IPv4 address/2FA? It doesn't improve security, it does not protect against sybil attacks. The reason is vendor lock-in and data collection.
It's not worth dealing with this crap to access another time-wasting/brainwashing app.
At the same time, there is no shortage of users here willing to give lip service to these backwards practices.