This matters because creating a new email account for recruiting is trivial, yet creating a new phone number for recruiting is not. Most phones are not dual-SIM, and you want to have a phone on you in case the recruiter or future employer calls. Hence the rate at which accounts can be identified via phone number should be much higher, which makes it dramatically easier to find somebody.
The nasty part is even if you rename your account to your "first-name middle-name" or an alias, you could be found out via phone number search. So simply renaming your account no longer ensures random recruiters can't just find your profile. Your FB name could be "giant blue monkey" which prevents a regular name search but would still be identifiable via phone number search.
Apparently, when a woman uses a pseudonym on Facebook, there's a fair chance it's because of a nasty/stalkerish ex that she would rather get away from.
I am actually in favor of a transparent society (a la David Brin), but we have to grow up a lot and handle such cases before the advantages that come with it can even be contemplated.
I suggest you read The Transparency Society by Byung-Chul Han. It's a brutal 50-page indictment of the hypercult of transparency and its effects on the human soul, on political discourse, and on traditional values like truth and beauty. Might change your view on the costs of transparency.
Speaking of homomorphic encryption and Facebook:
- first of all, Three Body seems like the most boring videogame ever developed, not sure how people could actually be believably playing that.
- the characters are pretty much caricatures of stereotypes (the cop), or just plain uninteresting.
- the massive exposition/infodump chapters killed the immersion for me, especially the ones written from the point of view of the other side of the conflict: it really felt like the author was getting towards the end and wanted to Explain All The Things, but couldn't find a subtle way to do so within the narrative, and therefore decided to just vomit it all in a single spurt.
I was really looking forward to it but found it disappointing, won't read any more from this author.
I would argue that the next generation coming up will have little to no _need_ for privacy. When you grow up with nanny cams in your bedroom, privacy per se isn't even valued, let alone expected.
IMO it doesn't sufficiently deal with the problems of the "full transparency" ideal that Han points out, though.
Also men do this...
True story. A stalker found my blog via my Instagram account and commented some far-out-there shit on it, referring to a conversation that we never even had: "Told you that you weren't dead." WTF!?
Now, it's pseudonyms, pseudonyms everywhere; except, of course, where it serves a necessity to have your actual name in use, such as on LinkedIn, but even those will be discarded as soon as it's no longer necessary. (Necessity, here, meaning seeking gainful employment.)
That being said I did once call my male first grade teacher "mom" so maybe I'm not a good example
We do this all the time for students that can't/won't use their legal names in school for a number of reasons.
There are quite a few lines of work where this might not be a bad idea, e.g. police and government employees in general.
The "Growing up" reminded me of this song: https://www.youtube.com/watch?v=5ja-mHeYAKM
IMHO, it doesn’t make sense to blame “society” for being pre schoolers, any more than it makes sense to blame individual humans for becoming mentally ill. Both are just failure modes of our mental hardware (our tribal-status instincts in the societal case.) The only solution to either that would “stick”—what I would interpret “growing up” to mean—is to remake ourselves without those failure modes.
Anything less might help individual humans who get some sort of maintenance treatment for their failure modes; but our society as a whole will still be a function of interactions between both people who have treated those failure modes in themselves, and those who have not yet (because e.g. they aren’t aware yet that they have a problem; or don’t see it as a problem; etc.)
Mind that when I mentioned society as a whole, this is inclusive of its individual parts: there is no society without individuals, and as such any judgment on a society is a judgment on its parts. That being said, I know it is a naive view, as no one can condemn a society completely.
As for the bioengineering to try and weed out the "bad wiring", I am not even sure we are close to identifying it clearly, as recent science articles about the brain and the rest of the nervous system seem to indicate.
In Sweden, you can request a pre-paid SIM be sent to you in the mail.
To be fair, it's not entirely a "burner" SIM, as it's associated with your personnummer but it's a lot more convenient than queuing in a shop (if that's not your thing).
Maybe your country has a carrier that does the equivalent? :)
 - https://webbutik.comviq.se/kontantkort/checkout/comviqcart/p...
"Google Voice is only available in the United States. To use Google Voice, sign up with a US-based phone number."
I keep blocking spammy numbers, and they keep creating new ones.
Also, the comparison with a domain name is not exactly fair in this case; it would make more sense to compare it to an IP address.
Not really rocket science, but the VOIP industry is stuck in the 1990s when it comes to security practices related to their core offering. It's a terrible state of affairs!
Public social networks are poison, they’re inimical to privacy, and we need to get off of any that harvest personal information. Telling people to “quit FB (and other similar services)” may come off wrong, but for a long time so did “quit smoking.” I get it, I’d love to have cigarettes that are healthy and delicious, but until then... quit smoking. Maybe someday public social media will be safe and sane, but until then... quit.
I don't think it's an unhelpful answer to a complex problem. I think it's the right answer.
"Only" oversells the option. There's also:
* mix fake/mining-hostile data in with real data as needed for social media profiles
* use public institutions to enact consumer protection policies
Both of these have their own difficulties and limits, of course, but so does ditching social media entirely.
Phone numbers change. This is less frequent now that we have mobile numbers, but they still change. Email addresses can also change as people move through different phases of life. Hell, email seems to be becoming as unreliable as calling, since unsubscribing from mailing lists you never joined doesn't seem to work any more.
I get it. Facebook is evil.
By the way: I haven't actually used facebook since 2012...
The random "How are you are you single wanna talk about sex with me?!!???" messages are bad enough. I certainly don't want those as phone calls.
Those on the list that are actual Friends might have my phone number. Might not, though, since there is no point (I moved countries). In any case, they know not to freaking call me unless it is important or you have reason or you want me to answer my door.
That's only true for you if you choose it to be true for you.
Not all of these groups would have my phone number, but facebook isn't exactly a telephone service at its core.
I could include folks I've been to school with, folks I met in language class, a few folks I've worked with, and so on. I don't want all of these folks to call me, though, and I'd rather some didn't have my number. I also don't fully dislike these folks.
More seriously, though, it would keep my social circle smaller. I met my spouse online around 10 years ago, and this sort of caution you ask about would mean I wouldn't have my life.
Right out of "Myths programmers believe about phone numbers" which was on the HN front page a week back. Has everyone forgotten about landlines? Even cell numbers are not permanently assigned to a single person: especially in jurisdictions where pay-as-you-go accounts are the majority.
And I don't put my mobile number on FB at all.
Actually it's really straightforward with Twilio and you can set it to forward to a regular number. Works in most countries/regions.
I used it to give my wife a US number for clients (she's a freelancer).
(They could have always searched you by name, if you're worried about your identity then it makes sense to not connect your phone number to FB)
OT, but I had a not-so-nice experience with that. Some years ago I got myself a data-only phone number/SIM card for my 3G dongle.
It turned out that the number was reused, and the previous owner of that number had subscribed to a pay-for message service for upcoming events (concerts, movies, etc.). The messages kept arriving at my data-only dongle, and I didn't notice for a while. Had to pay for them, though.
I don't remember how I eventually managed to turn that off (the software for accessing/manipulating that part of the dongle was Windows only).
I create (and destroy) new phone numbers all the time. Via the command line. I can route and forward and block them any way I like, to and from my existing phone (which is single SIM) or to ... nowhere.
 I have a "ring forever" TwiML bin that I like to use.
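For anyone curious what that routing looks like under the hood: Twilio numbers are driven by small TwiML XML documents that tell the platform what to do with an incoming call. Here is a minimal sketch, built with only the Python standard library (the phone number is a placeholder, and the exact verbs to use depend on your setup), of TwiML that either forwards a call to your real number or rejects it outright, i.e. routes it to nowhere:

```python
import xml.etree.ElementTree as ET

def forward_twiml(target: str) -> str:
    """TwiML that forwards an incoming call to `target` (a placeholder number)."""
    response = ET.Element("Response")
    dial = ET.SubElement(response, "Dial")
    dial.text = target
    return ET.tostring(response, encoding="unicode")

def reject_twiml() -> str:
    """TwiML that sends a call to nowhere by rejecting it."""
    response = ET.Element("Response")
    ET.SubElement(response, "Reject")
    return ET.tostring(response, encoding="unicode")

print(forward_twiml("+15550100000"))
print(reject_twiml())
```

You would paste a document like this into a TwiML bin (or serve it from a webhook URL) and point the purchased number at it; swapping the document swaps the behaviour without touching the number itself.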
Doesn't Google Voice solve this problem? Or perhaps there are restrictions I don't know about. Of course, you still have to remember to create a burner phone number before giving out your contact info...
Only sort of. I wouldn't be surprised if Facebook rejects Google Voice numbers and will only accept a real live carrier number. It seems pretty easy to do, given how often my GV number is rejected by sleazy services I'd like to give a throwaway (e.g. peoplesearch opt-outs).
However, I'm not going to personally test this theory by attempting to give a panopticon like Facebook even my semi-throwaway GV number.
That would solve the issue voiced by OP
If it never occurred to you that this could happen, you would not take steps to prevent it.
Data hygiene is the responsibility of both users and social media corporations.
I still wonder why people rely on the kindness of strangers (the people who run the social networks) trying to make money off them, and then cry foul when they are treated like garbage.
- Don't submit data you find sensitive.
- Use bogus credentials where feasible
- Deny app access to your mic, location, contacts, camera & calendar
- Delete your account if you have substitutes for the features Facebook offers
You have to realize that privacy is not a priority for most governments, so unless it's explicitly against the rules, I suspect they won't do much.
That's Facebook's job, to make it clear when it happens.
I don't know the actual text of the law, but I guess even if you are friends with someone, you don't have any right over any of their personal data, so they wouldn't be able to use it unless you previously accepted that usage (being matched with someone you know using that phone number).
Source: Number articles over many years of laws being broken, Facebook.com still works and the POTUS is still the POTUS.
That's confusing. Is Palantir regarded as better or worse than Google?
Your role in your job there could cross the line of what some people would consider ethical, and Facebook's malicious behavior is causing employees who work there to be viewed in a similar light.
This case is a small but clear oversight: one team (Security) set up a necessary 2FA option; another (Growth) re-used information attached to a profile without context. Both teams have clear objectives but should have clearer lines when edge cases like these appear. Two remarks on that: 1. clarity in a large organisation and 2. prioritisation.
1. Overall, Facebook teams need clearer demarcation, but every company in the world has far, far worse practices, so as soon as you try to interview, you recoil in horror at practices anywhere else — and that's what they are willing to tell you before you join.
The internal discussion is probably split between many debates; I've never been very good at anticipating issues around security, but there were probably a dozen philosophical questions like:
- phone numbers and SMS are not safe from MITM attack, the company should not accept them at all; vs. other options like a device are too selective, demanding, etc. so if people are happy and their threat model doesn’t include MITM SMSs, the company should offer that as an option;
- this is the only piece of information that is “User only” and that visibility option was removed because it used to lead to abuses; vs. we can monitor abusive use of a visibility feature even if that’s extra work, more technical plumbing that could lead to more internal abuse;
- there is no identified threat actually unblocked; if there were, our bug bounty would have caught it; vs. we do not have to limit Security to known threats, and a "feel" for bad practices should be trusted as a sign there is a threat in there that the company should respect even if we can't isolate why.
Knowing what to do as an individual contributor when you have gods fighting over your head can be daunting; you want to have a clearer picture that, say “Only me” will be a visibility option for longer and not replaced by “Hide that from anyone, even having access to the account to prevent an access even from escalating into a worse security threat” or that when it’s replaced, this piece of information won’t be missed or excluded.
Anyone who has built large data schemas will be familiar with how tricky changes like that can be when done without coordination. Anyone who follows visibility of information on Facebook has noticed a lack of clear purpose: more nuanced options appear and disappear because there's a tension between simplification and curating interests.
2. Overall, working for Facebook feels like you are dealing with a fire, an earthquake, a zombie invasion, a revolution and a flood at the same time — and the public only seems to care about electricity shortages. And when you look into internal numbers about who cares about any of the above, the flood seems like a big deal, no one cares about electricity, but somehow you know that the fire and the zombie invasion are far worse. Facebook is the only place where managers are very clear that the fire will destroy your water pump much faster than the water rises, and that zombies are actually quite slow — and you cannot prevent earthquakes, only deal with the aftermath — so they want you to deal with them in a specific order: #1 Extinguish the fire, #2 Automate the water pumping for the flood, into the fire-prevention stock, #3 Delegate the dyke-building, #4 Once you have a plan for that, expand the dykes to protect from zombies, #5 Schedule a town hall for after the physical security of everyone is guaranteed, because talk is better than a revolution, #6 Imagine what seismographs could be like (a network?) and how they could prevent bad things, given how fast earthquakes are. Nothing about electricity, because it escapes everyone's mind at this point.
I once had a task that was about preventing thousands of crimes from happening; it was #3 on my list. That felt wrong, but my manager explained how, if #1 and #2 were not done, I couldn't do #3. It felt very strange. #2, in particular, was very debatable: I reached out to a friend of mine, a lawyer outside the company and probably one of the top 10 people to decide whether something like #2 was ethical. My friend told me that he had far bigger issues to deal with. So I did #2 reluctantly; I did it first because it made #1 easier. In the meantime, #1 was cancelled without my manager telling me. I had asked someone else to do #3 and he got a massive promotion.
Two years later, the press was up in arms because thinking about #7 was presumably unethical. #7 is about making sure that vulnerable users were even more protected than they were on Facebook (while no other platform did anything for them) and the press really objected to vulnerable users being on Facebook at all. The most widely circulated OpEd on the topic explicitly didn’t care for them being protected: that they were on Facebook at all was the problem. As a former employee, I knew why they really needed to be there: it is their only source of needed social life.
My experience was a little extreme, but it’s quite representative.
Take the recent appeal to have more community monitoring:
- Facebook notices, years before anyone else, external agents using social media to spread inflammatory messages; they understand that they won't be able to prevent the gutter press from spreading them, so they appeal to institutions, because those carry editorial authority and local understanding that Facebook can't have.
- That is dismissed as interference, and Facebook is mocked for knowing nothing about the free press. In reaction, Facebook publishes articles on polarisation that clearly point at external sources; they ask researchers to measure how much the News Feed bridges that gap and helps moderate the worst messages. The article is summarised clearly with graphs by internal comms. The article is summarised in the press as: Facebook is pouring gas on the political fire.
- Facebook anticipates that astroturfing will get worse, at an exponential rate, and decides to enforce strict “authentic identity” rules to cut most of it; it also starts efforts to identify “fake news”, and explicitly connects these efforts to political manipulation. Both efforts are openly disparaged by people who spread false information and who openly ignore that Facebook has a clear handling process for people who don't want to be found for legitimate reasons. Political parties gladly finance negative attack ads that are the main source of inauthentic, false coverage.
- Facebook gets signals that human censoring is not scaling; details become increasingly worrying. Facebook ramps up its AI research program to identify increasingly elusive inauthentic users and messages; the program is ignored, or only presented as an Orwellian effort by “the Borg”. Mentions of issues in human reporting are completely overlooked by the press.
- Facebook realises that scaling its community enforcement won't work, because they don't know how to manage those teams and the third-party companies are treating them like lab rats at best. It asks for improvements in working conditions; nothing happens, or rather there are systematic executive-level Me-Too scandals. Facebook fires said companies out of desperation. Instant backlash because ‘Facebook fired journalists’. Facepalm, partial decision reversal. Silence from the press, which honestly is a relief at this point.
- Major progress on the front of automated community enforcement. Facebook is the first to identify several threats to democracy (Cambridge Analytica is banned in 2014; everyone finds Trump funny when he asks for Russia's help, while Facebook Security reveals suspicious behaviour to the FBI). Unsurprisingly, Facebook is blamed for acting as a Good Samaritan; internal debate on whether to come clean publicly, or only tell law enforcement. Law enforcement is clearly dependent on electoral results, so coming clean publicly proves important… but extremely costly for the company brand. Should the company sacrifice the little goodwill it has left among the press now, to prevent current threats, or keep it for a worse crisis?
- No surprise: political parties don't like being targeted as bad actors and defend themselves by empowering lunatics and doubling down on a constant barrage of incendiary news. Community enforcement is completely overwhelmed by its own scale and size, and catastrophic situations emerge. No one notes that Facebook has offered several solutions, from institutional standards to automated detection and visibility control; everyone just blames the company for its subsidiaries. The company is just the enemy of everyone at this point. Facebook has two options: not having any community enforcement, or trusting suppliers that have repeatedly lied to them. The third one is what many employees are working on: automation.
Your question is: why wouldn’t they leave? Answer: many do. Drama is hurtful no matter how you understand the whole story. Whether those who stay are more confident, or less reliable in their ethical stand is debatable.
If you care more for technical problems, I'm happy to explain why facebook.com/ads/preferences is the best implementation at the moment of user control over dark data brokers. It's insufficient, but it helps people identify threats, and reporting can be implemented from there in a way that no other company will let you have, not without the transparency of Facebook.
1) "As long as there's someone worse we don't have to worry"
2) "working for Facebook feels like you are dealing with making profit, pleasing management, pleasing partners, progressing your career and going home early at the same time — and the public only seems to care about privacy violations."
I’m also not saying that large or influential companies should not be held to higher standards; they absolutely should, and they are where I come from. I’m simply saying that, if you consider problems where Facebook made a bad decision (a minority of the scandals) those issues trickle down to two systemic problems: clear, non-contradictory internal guidelines and prioritisation. Facebook employees are trained to recognise both. When they consider other options, they would often see companies where both are significantly worse. Other companies have simply not been through a decade of excruciating oversight by the international press. Those who have are not managed by someone who is nearly as willing to admit his fault as Mark.
It doesn’t mean that those companies are not better options for ex-Facebookers: they often are; or that they would not make the world a better place by joining them and advocating for higher standards: they often would. Those companies typically should be held to a lesser standard because they have less of an overall impact. But, as an employee, if you want to prevent problems like those that you regret being a witness to at Facebook, leaving is hard, because more often than not you can easily see the rest of the world as worse. If you come with the expectation, gained from working at Facebook, that any minor issue will be twisted into a scandal, most other companies feel very wrong.
You can see that by looking at how many people are above ex-Facebookers at the companies that they join: it’s unusually few. That’s because they rarely trust too many layers to make the right call.
Happy to give more examples, or to argue that the most visible and debated issues are not the most relevant. Even happy to say that this is a problem, but I don’t see it as an internal problem.
However, I wonder if you'd agree that maybe the main reason for Facebook's problems is basically their desire to overconnect the social graph. All those moderation problems wouldn't be there if the news feed had stayed a simple chronological summary of updates by friends and family instead of an algorithmically generated mess. Reddit AFAIK doesn't have problems as huge as Facebook's, and the reason for that is that communities tend to moderate themselves pretty well, and moderating the communities themselves (e.g. banning drug sale groups) scales way, way better. In a sense FB itself kinda acknowledges that with its recent emphasis on Groups.
On the other hand, interpersonal connections between real-life friends and families (what Facebook sold to its audience and the way it keeps the users on the platform) barely need any moderation at all. It's quite unlikely your uncle James starts promoting antivax (or child suicide) in your family, and even if he does, he can either be contested (not letting a spiral of silence form) or banned/muted/unfriended/etc. Unfortunately, that's not how FB works: it baits you with friends and family and switches to an engagement-optimised cesspool.
It feels like the hyperconnected, algo-driven feed is actually the root of most facebook troubles, despite (obviously) generating massive revenues.
Yes, because Facebook would not be a usable service. This is not a joke: the raw feed is really bad. Unusable. Spam-folder-on-steroids bad. If you care about the Facebook employees' mindset: “My News Feed should be chronological” is about three times worse to them than “I’m just looking for a technical co-founder, I’m an ideas guy” is to the HN crowd.
> Reddit AFAIK doesn't have problems as huge as Facebook's, and the reason for that is that communities tend to moderate themselves pretty well,
Reddit puts significant effort into massaging its own feeds. It’s less visible, but quite significant, mainly around abuses by large coordinated groups; they’ve talked about it extensively. They have fewer issues because a subreddit community has clear values (_News_ values immediacy; _Politics_, controversy; _WritingPrompts_ values long comments more than total upvotes), which allows them to tailor their algorithm per context rather than per person. Well, they do the per-person part too, but it’s significantly messier. This part is hidden because there’s a lot more content than you can see on Reddit, so you see a lot of good things, no matter the order. There is less good content from your friends simply because you don’t have a million of them, so your Facebook News Feed is a lot more sensitive to clues. Reddit also has a lot more input information with upvotes; people really don’t understand that their feed would become massively better if they clicked on Like — including professionals who build recommendation engines for a living and complain about not having good data.
Finally, if you think that Reddit is a welcoming community without issues, I can easily guess your gender. That’s a big part of what Facebook empowers.
> interpersonal connections between real-life friends and families (what Facebook sold to its audience and the way it keeps the users on the platform) barely need any moderation at all
This is not true: anti-vaxxers (and before them, MLM) are a massive hindrance to their families; usually they get ignored now, but being able to hide them (and dynamically distinguish problematic posts from important, non-MLM updates) is a key feature of the News Feed. The most common, and occasionally biggest, pain points that we’ve measured have been where friends and family merged, from your lame dad barging into a let's-go-to-the-club thread to gay people still in the closet liking posts about flamboyant things.
What you might be trying to say is that there is a real problem in merging all your aspects of life into one context. That’s definitely true. Without going into drama-prone topics, I speak several languages and that was not taken into account at all when I joined; my cousin routinely complained that she didn’t understand why I wrote in English “all the time”. There was some progress (thanks in very small part to my impulse) there.
Raising consciousness around those issues was part of what I did more generally. One effective and clear solution for that was Groups, which finally, for the last two years, got their place in the sun with a dedicated, empathetic team in _Engagement_, rather than being a subsidiary of Pages, which were a subsidiary of Ads. I don’t have internal knowledge, but from public communications from Facebook management, I’m guessing they are growing much faster than Reddit, with a similar product.
> generating massive revenues.
Not really. The family stuff is great because it gets people to post more: they feel like they can share if they see similar things in their feed. That’s empowering people, which Facebook believes in as a core value, but it’s not making much money. If Facebook wanted to print dollars, they’d go full Video.
The money comes from basic stuff: age, gender, location, family status (age of children) and interests; language, too: you’d be shocked to see how many ads are shown to people who just can’t read them. The money comes from “Custom audiences”, which is essentially “upload the emails of your users, and we’ll find them on Facebook”, and “Similar audiences”, which is an augmentation of that; both let you advertise to those groups separately. That you love sharing lame puns with your uncle, or photos of playing with your nephews, isn’t going to attract anyone’s crazy CPA. If you click on ads about nappies, that’s a really good signal, though.
Honestly, Facebook is eating other people’s lunch in ads because they get very basic things right: separate your customers from non-customers. They don’t advertise diamond rings next to an article about the war in Congo, like the NYTimes does, or stilettos to burly football players. They remember which ads work for you, and show more of those. In my case, I’m in the market for a nice leather bag: Facebook knows that and shows me a lot of them. They are certainly making more than the average $50 by helping me generate a list of a dozen nice options for my birthday.
Marketing (outside, possibly of political advertising) hasn’t really moved to crazy Orwellian stuff at scale. If, one day, posting landscape in black-and-white is correlated to you liking yogurt, maybe… but for now, it’s probably easier to ask your local supermarket.
This seems critically relevant to the issue at hand - can you explain it in a little more detail please?
When I was working there, I worked on custom visibility options (things like lists of friends, a feature that hardly anyone used; home city and languages had been occasional “Smart lists”). Those options were unsupported or became discontinued in some cases. There was a significant mental load to having more visibility options than ‘Friends‘ and ‘Public’, and very few reasons (as in: active users) to support more. One option that I felt really corresponded to a lot of people’s needs was “Friends except Acquaintances”: ‘Acquaintances’ had become a really good shorthand for “drama-prone” relations. Yes, you are friends with them, they can see some activity from you, just not everything, and they have no reason to think you’ve excluded them. It was discontinued and I never figured out why.
I can imagine the “Only me” option being changed because someone, either working on an easier audience selector or trying to make bad-behaviour detection faster, might overlook the need to have some information on the site not shared with your friends. The impact of visibility options on every aspect of the site is nightmarishly complicated, and genuinely hard to keep in mind clearly. I’ve dedicated a decade to that, and I struggled. People who are officially very smart (Math Olympiad laureates, Mensa types: not that those are proof of social smarts, but they should understand formal reasoning really well) struggled. If you include blockages, it gets really hairy. If you include bad actors and impersonation, you will lose your sanity. The alternative that I suggested (that information is probably what hackers would look for, so hide it even from an authenticated user) is probably more likely, given the overall privacy-consciousness of the company. In that case, that information changes status even more.
I’m not saying that kind of blatant or casual disregard for nuanced privacy control is not bad. God knows I raised hell before I joined the company and after about list support, and more. I even convinced a friend to join because he made a great tool to manage your list of friends. He joined the company before me, helped me a lot internally — but never even mentioned his tool to anyone internally because there was no appetite for it, inside or outside the company. Just to tell you how much: do you remember the fiasco that was Google Plus Circles? Well, after _that_ blew up, I had low expectations. This was lower.
The company moved to Groups, non-friends contextual entities, and that was I think a lot better in many ways.
- Facebook employees consider that Cambridge Analytica was caught, abused their power, lied under oath and was never able to leverage Facebook Custom Audiences anyway; short of using police forces (and Facebook really should not have that kind of power), there wasn’t much more that Facebook could have done;
- As a consequence of that, there was a movement to control abusive pages in politics, and everyone was clamouring for a clamp-down. It took 24 hours for Facebook to ask for ID from the moderators of all large political pages, because they realised that any of those could be GRU operatives, and 5 minutes for everyone to find this deeply objectionable. :facepalm:
- Apps sending their users’ actions to Facebook Analytics when those actions are heartbeats, blood pressure, migraines and period timing: Facebook employees consider that Analytics is a service for people to target their own users based on their own classification. There are some possible issues with it (the classic being ethnic discrimination in real estate) but no one at Facebook cares about your period. Facebook could enforce more strongly that the apps communicate with their users about the Analytics tool, but that’s 100% going to blow back around the common theme that Facebook is abusing its power. You’ll have virulent op-eds outraged that they dare threaten to de-platform feminist apps about empowering women and their health. I’m happy to bet real money that the people asking for more control will be the same ones protesting against it, less than a week later.
_Catch-22_ comes to mind.
The abuse potential of this is far greater than some people assume.
Then after a while the nag started popping up with my phone number already filled out.
Really fucking creepy and not okay.
SMS verification may increase linkability, but what about a YubiKey or something similar?
Telegram is so aggressively anti-privacy that I tried around 5 numbers I own and only 2 of them worked.
Though of course, every time I created a new account there it was quickly blocked, only allowing me to unblock it by providing a phone number. That might have to do with a number of privacy-enhancing browser extensions I've got installed.
That said, thanks to GDPR, I'm relatively confident that they've actually deleted my phone number when I asked them to after verification.
The potential for abuse by spammers is too high. They not only would know who they're calling, but also all the public info on their Facebook profile and whatever else they can stitch it with.
Someone has registered a Facebook account using my email address (apparently they do no verification at all) and I can't stop the spam ....
I'd be surprised if Facebook ever deletes anything like this voluntarily. Most likely "deleting the number" just means hiding it and using it only for ad targeting and more subtle forms of profile matching.
They'll pay a fine, perhaps, but it's not like Mark Zuckerberg will be lying in a gutter pissing blood.
Regardless, you can't put the cat back in the bag. Bad actors will still have scraped your number, or way too much personally identifying information, anyway.
I should add that removing the phone number didn't actually delete the number from my profile (who could have known?). You have to go into the Mobile tab and remove the phone number again.
All this despite me reluctantly adding my phone number just for 2FA. Dark patterns throughout the website.
I just checked my FB settings, and the article is correct: the most you can restrict your phone number to is just friends.
Why then are the top comments about the inescapability of employers looking you up by phone number?
Unless you have your employer as a friend on Facebook, just change the setting to friends only.
My understanding is that on this setting, Facebook only allows someone to find your Facebook profile if they:
1. are your friend on FB and
2. have your phone number.
That seems quite reasonable to me.
(No idea whether this is true either way and I'm no longer on Facebook to be able to check; just trying to clarify).
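If that understanding is right, the lookup rule amounts to a conjunction of two checks. A minimal sketch, with entirely hypothetical data structures (nothing here reflects Facebook's actual implementation):

```python
# Hypothetical sketch of a "friends only" phone lookup: a searcher can
# resolve a phone number to a profile only if they are a friend of the
# profile owner AND supplied that owner's exact number.

def can_find_by_phone(searcher_id, phone, profiles, friendships):
    """Return the matching profile id, or None if the friends-only
    visibility rule blocks the lookup."""
    for profile_id, profile in profiles.items():
        is_friend = searcher_id in friendships.get(profile_id, set())
        if profile["phone"] == phone and is_friend:
            return profile_id
    return None

profiles = {"alice": {"phone": "+15551234567"}}
friendships = {"alice": {"bob"}}  # bob is alice's friend; mallory is not

print(can_find_by_phone("bob", "+15551234567", profiles, friendships))      # found
print(can_find_by_phone("mallory", "+15551234567", profiles, friendships))  # blocked
```

The point of the sketch is just that both conditions must hold at once: a stranger with your number, or a friend without it, gets nothing.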
For the vast majority of people who aren't on TC/HN, it's a surprise. Once info is leaked, it doesn't get unleaked. It's not just this one incident, it's a pattern of Facebook abusing user trust, chronically launching new features that leak private or semiprivate information by default, and destroying the culture of security industry-wide by showing users that account recovery phone numbers are not safe to use.
I can keep my data out of Facebook, but "friends and family" usually aren't careful and get tricked into opening up all that info.
It seems as though if a group of people all had the same app installed on our phones--like Signal, or something similar--we could bypass the phone number system, and set up our own audio-chat sessions over the data connection. If anyone else wanted to talk, they would also need to have the same app, or use its API, and add my contact information. That would add quite a bit of friction to person-to-person communication, but I kind of want that now, as it would also add equal friction to robot-to-person communications. But it isn't exactly easy to get a mobile data network connection without also buying in to the phone network addressing system.
Everywhere I look, in the realm of security breaches and vulnerabilities, a large number of exploits use the public switched telephone network, or its addresses, to compromise the security of the people forced to use them by network effects. Many of the others exploit email addresses on the large providers--apple.com, gmail.com, yahoo.com, hotmail.com--which are popular due to ease of use, and the difficulty involved in getting mail delivered from a private domain to someone using addresses from those providers.
I get the impression that if I set up a phone to ring on a SIP request to 'email@example.com' instead of something like 'firstname.lastname@example.org', it would never get robo-dialed calls. But it would also never get calls from people I might want to speak with, because their phones can only call phone numbers. What's the way out of this trap? How do we stop jackasses from jumping into the middle of any private communications we might wish to have, in a way that is also easy enough that nontechnical relatives can use a prepackaged setup for it?
I was a user since 2004, deleted last year, and never looked back.
My policy of never giving out my phone number to an American company proves wise once more.
They're not joking when they say this isn't new.
I remember a situation while I was still in high school (2011-2012 maybe?) in which someone tried to prank me by sending me SMS from a number I didn't have. I figured out exactly who it was using a simple Facebook search. The best you could do even then was to set it to friends-only, and the prankster didn't do that.
All the comments refer to being findable by strangers and stalkers via phone number, but that’s not the case.
There's something seriously wrong with society if an app owned by a malicious tech company is considered fundamental to the human experience.
Surprisingly, it didn't generate discussion or upvotes like this particular submission.
What will it be tomorrow? Someone should keep a running list, so we can actually see how many things like this they do in a single year.
Who in their right mind wouldn't want two-factor authentication?
Particularly as other sites were offering it (and haven't changed their terms), and a significant portion of the tech press were saying it's a good idea — and it plainly is a good idea, of course.
For example I know nothing and don't care about car stuff. I just leave the car at the mechanic, pay, and have to trust they did a good job.
Seriously, just delete your account. No one needs Facebook, even those who make up ridiculous excuses for why they can't leave.
For a while, it was extra-creepy whenever I did a semiannual log in, because pop-ups would appear, and in a very distinctive "Audrey II" voice, scream "FEED ME [your data], SEYMOUR!" But the last time I tried it, I didn't see that. Can't tell whether they gave up, or already got the data from other people and just don't need me any more.
Delete your history, starting from the oldest thing you can see. Un-like everything you liked. Un-tag everything you are tagged in. Remove your photos. Replace profile pic with a public domain image. When you finally get to zero, log out and delete all cookies and block all bugs and trackers. Then start increasing the time between log-ins. When you get the nag e-mails to come back, go on just long enough to turn off nag e-mails. Wean yourself off until you don't care about being on Facebook any more.
Then you can use Facebook on infrequent occasions, to tell all your still-trapped friends about all the other ways to contact you that have lower latency, better signal-to-noise ratio, and more privacy. If you just delete, that leaves a you-shaped hole in Facebook that the company could fill with a placeholder. Your character might become an NPC that the game-master could control to manipulate the other players.
Facebook, and Facebook especially (although Google too), must be regulated by world governments, and citizens must be protected from it through GDPR-like rules.
Don't put the 2FA number in the search index, just don't do it. By all means make it convenient to copy it there, but for the love of god do not put the 2FA number in the search index.
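The design point here can be made concrete: store the 2FA number on the account record, but have the search-index builder copy only an explicit allowlist of fields, so the number can never leak into lookup by accident. This is an illustrative sketch under that assumption, not anyone's real schema:

```python
# Illustrative sketch: the account record holds the 2FA recovery number,
# but the index builder only ever copies explicitly searchable fields,
# so the number cannot end up in people-search by default.

SEARCHABLE_FIELDS = {"name", "username"}  # 2FA phone deliberately excluded

def build_search_index(accounts):
    """Map searchable field values to the set of account ids holding them."""
    index = {}
    for account_id, record in accounts.items():
        for field, value in record.items():
            if field in SEARCHABLE_FIELDS:
                index.setdefault(value, set()).add(account_id)
    return index

accounts = {
    "u1": {"name": "Jane Doe", "username": "jdoe", "twofa_phone": "+15550001111"},
}
index = build_search_index(accounts)
print("+15550001111" in index)  # the 2FA number never reaches the index
print(index["jdoe"])            # username lookups still work
```

The allowlist inverts the failure mode: a new field added to the account record stays out of search until someone deliberately opts it in, rather than leaking until someone notices.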
It really is an option. It really is the best option. It isn't flippant or sarcasm.
Facebook has done a sufficiently good job of intertwining itself into people's lives that abandoning it would mean losing touch with family and friends, missing out on invitations to events, excluding oneself from valued discussion groups, and more.
What people resent is that Facebook seems to hold them to ransom like this, now that it's made them so reliant on it.
So, sure, people can just leave, fine. But don't diminish the fact that for many people, quitting Facebook would have serious drawbacks.
If people don't like the way things are then they have to be the change. Facebook is not going to change, as its business model is based on selling its users. In the absence of taking responsibility for themselves, or of Facebook choosing to change, many default to arguments for government regulation. Bringing in the use of force like this creates problems significantly worse than the privacy issues of the original voluntary Facebook use.
But you just don't stimulate constructive discussion by asserting that the only rational path is quitting, whilst not factoring in the costs.
This is not a simple case of "watching someone stick their hand on a hot stove and complain over and over that it hurts", because the alternative action you advocate just causes pain of a different kind.
Most of what you're saying is valid, except the claim that it's simple and obvious to fix.
If it were that simple, we wouldn't need to have this discussion at all.