Hacker News
Fake_contacts: Android app to create fake phone contacts, to do data-poisoning (github.com/billdietrich)
499 points by karlzt on Feb 27, 2021 | hide | past | favorite | 338 comments

Recently Apple added a feature to iOS that lets you grant an app access to only selected photos. The user can respond positively to an access request while allowing the app to see only a subset (or none) of their actual photos.

It would be a very useful feature for Apple to do the same for contacts: the app would think it's getting access to your contacts, but would only actually receive a subset of them, and be none the wiser. This would be a tremendous boon for privacy.

"Recently Apple added a feature to iOS that lets you grant an app access to only selected photos."

What we really need to see from Apple is a permissions index in the app store that allows me to inspect, and consider, the permissions that an app will request before installing that app.

I shouldn't have to install the app (or do laborious research online) to discover what permissions it will attempt to utilize and which of them are required to function.

It would be trivially easy to list that in the app store, for each app.

> and which of them are required to function.

On the iOS App Store, no optional permission can be required for an app to perform its basic functions - that's a store policy, and it's generally well enforced. Obviously if your app's function is mapping, GPS can be required to use those features (but only at the user's discretion - i.e. while running or all the time, fine or coarse), but the app can't just refuse to launch without it.

Tell that to Citizen, which refuses to operate without location enabled and, even worse, refuses to operate with coarse location. And since it's a free app, there's no place on Apple's site to report this bad behavior.

For those who, like me, are not in the know: I think the parent comment is about an app that used to be called Vigilante.


> Citizen is a mobile app that sends users location-based safety alerts in real time.[1][2][3][4] It allows users to read updates about ongoing reports, broadcast live video, and leave comments.[1][2] The app uses radio antennas installed in major cities to monitor 911 communications,[5] with employees filtering the audio to generate alerts.[5] In March 2020, Citizen added the COVID-19 digital contact tracer SafePass. The app is currently available for iOS and Android devices[6] in 20 cities,[7] including New York City, the San Francisco Bay Area, Baltimore, Los Angeles,[8] Philadelphia,[9] Detroit,[10] Indianapolis,[11] Phoenix,[12][13] Cincinnati,[14] Chicago, Minneapolis, Saint Paul, and Cleveland.[15]

Yes, they are extremely aggressive. Also, their payment screen about "start for free" leads to a $199 payment after a short two-week trial.

All these permission choices should be invisible to the app. If I say no contacts, the call should succeed but with a zero-length response. It shouldn't be possible for apps to say "you have to agree to this or I won't run." I run the software, and as the root user I should control what data the software can use.

> It shouldn’t be possible for apps to say you have to agree to this or I won’t run.

It's not - that's a violation of the App Store TOS. That's also not what's happening here - you can use Clubhouse without allowing contacts access, but you can't invite someone to the closed beta without allowing it.

GP means that it shouldn't be technologically possible, not just that it shouldn't be possible as a matter of policy.

The policy solution clearly doesn't work in all scenarios, because Clubhouse is still on the store. But an on-the-fly permission model that allowed the user to deny the permission invisibly, or share a subset of their contacts, would completely solve the problem regardless of whether or not Apple was effective at moderating.

Apple could still do whatever moderation they wanted to reduce annoyances for the end user, but the sandboxing approach would catch any apps they missed or refused to moderate.

This would also solve the problem where an app legitimately needs some access to contacts to run, but doesn't need access to the entire list. Clubhouse does need access to some contacts to invite someone to the beta, but it does not need access to the entire contacts list, and there's no reason for it to have the ability to tell whether or not a user is providing the full list.

> that's a violation of the App Store TOS.

Not if the app still functions, with deliberately reduced functionality. What I want is for the app to be unable to tell the difference between being denied permission and simply having no data (or being sent fake data).

They must know that I have disallowed access in that case.

> If I say no contacts the call should succeed but with a zero Len response.

Actually I would take it further and say that I should be able to define its response, or have it render a random but plausible template response. Otherwise a zero-length response makes it too obvious that you didn't grant permissions.

I once had an app yell at me for not giving my GPS permission, but then yell at me again when I enabled a mock GPS on Android. It really shouldn't have been able to know I was mocking location.

Or even, as a service, fake data - feed it a fake location and a fake contact list, full of 202-555-1234-type numbers. I always put fake data into web forms, and it's a sign that I don't truly own my phone that I can't do the same for local software.
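A minimal sketch of what "simulated data" for contacts could look like, assuming numbers are drawn from the NANP block 555-0100 through 555-0199, which is reserved for fictional use so no real subscriber ever gets hit (the name lists here are made up for illustration):

```python
import random

# Toy name pools; a real implementation would use a larger corpus.
FIRST = ["Alice", "Omar", "Priya", "Sven", "Maria"]
LAST = ["Nguyen", "Garcia", "Okafor", "Larsen", "Kim"]

def fake_contact(rng: random.Random) -> dict:
    """Build one plausible-looking contact. Numbers fall in
    555-0100..555-0199, the range reserved for fictional use."""
    name = f"{rng.choice(FIRST)} {rng.choice(LAST)}"
    number = f"+1-202-555-{rng.randint(100, 199):04d}"
    return {"name": name, "phone": number}

rng = random.Random(42)
contacts = [fake_contact(rng) for _ in range(100)]
```

Seeding the generator makes the fake list reproducible, so the same poisoned contacts can be handed to every app that asks.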

Like, I want a pop-up: this application is requesting your location data - shall we give the real data, no data, or simulated data? Same for contacts, photos, installed apps, etc. Not saying that would solve all the problems, but it would be user-centric in a way the privacy conversation just isn't.

> Shall we give the real data, no data, or simulated data.

Or even a selected subset of real+fake data. (With the ability to define sets that could be reused across applications.)

Sometimes I want the app to have access to some of my contacts, but not all of them. For example, my work contacts but not my personal friends, or vice versa. Or simply "my contacts who told me they were also using this app". Plus a hundred fake contacts to poison the data.

Giving fake location data could create real problems. Suppose you do this then forget. If it’s a safety or navigation app, you tell the phone to give it fake data, then you forget and maybe use the app much later. Now you’re using a service that thinks you’re in a different location.

One of the examples given here was an app that gives you safety alerts. A navigation app might give you useless directions. There are a thousand ways this could go horribly wrong.

I suppose iOS could present some warning, but that might interfere with the UI, or be misunderstood.

This works fine for contacts, but what should happen when I deny the microphone permission? Should I be able to send Shazam a monotone beep or even worse a sample of random sounds that can't even be filtered out?

I didn't realize iOS doesn't have that. Google Play shows each app's permissions on the listings page.

I'm not sure the permission index would be very useful.

Most iPhone chat apps for example work perfectly fine with zero permissions granted yet provide the option to send pictures, invite contacts, use mic/camera, send gps location, etc if a user is so inclined. With a permissions index, you would likely end up with the majority of apps listing all permissions and users would simply ignore it.

+1 and a filter you can use on related permissions when searching for apps

They have added that, but it's written by the app developers so you still can't trust what they claim they're gathering from you.

I think Apple's app reviewers have tools to analyse which APIs and permissions an app tries to access, to check this.

Under Android, apps can be respectful if they want to. For example, the Discord Android app will fall back to the system file picker if you block storage permissions. But since there's no incentive to be privacy-respecting, very few other apps I've used let you do this.

I don't see what the point is.

"Data poisoning" gives companies a bunch of fake contacts... on top of all your real ones?

Who cares? So they send some e-mails to addresses that don't exist or something? So it takes up an extra 1% of disk space in their database?

If you could share an empty address book then that would actually preserve the privacy of your contacts. But this doesn't do that.

I don't get it.

Bad data makes it less valuable for resale. It's an attack on the market that these things operate under.

Can also be used as a canary trap.

> Can also be used as a canary trap.

Can you please explain how this can operate as a canary?

Edit: another post explains that the method is if the bogus data end up an a data leak, but that would require keeping track of bogus submissions and generating new data for each company where you create an account. Then you’d have to cross reference like crazy. Am I missing something simpler?

I know at least one person who has their own personal family domain set up so that his family members can just create new email addresses specific to the vendor when shopping online (for example, 'amazon@familyrobinson.com' and 'bestbuy@familyrobinson.com' ). Then all their shopping emails just get routed to his domain. Being able to track which company leaked or sold an email address seems like another benefit in addition to catching all the marketing emails.

Gmail has this feature baked in: append a + sign and any string to the username, e.g. username+ycombinator@gmail.com. Mail to such addresses is delivered to your regular inbox. I started doing this for the exact same reason as mentioned above, but you can obviously do more than just create honeypots. You do have to ignore the fact that it's Google, though...

There already exists at least one popular JS validation framework that removes Gmail (and other) subaddresses by default in its "normalizeEmail" method: https://github.com/validatorjs/validator.js/blob/master/src/...
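The stripping logic amounts to something like this (an approximation of what such normalizers do to Gmail addresses, not validator.js's exact behavior):

```python
def normalize_gmail(address: str) -> str:
    """Mimic a normalizeEmail-style default: for Gmail addresses,
    drop the +tag and ignore dots in the local part, collapsing
    all aliases back to one canonical mailbox."""
    local, domain = address.lower().split("@", 1)
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0].replace(".", "")
        domain = "gmail.com"
    return f"{local}@{domain}"
```

So any service running this on signup collapses `username+ycombinator@gmail.com` back to `username@gmail.com`, defeating the honeypot tag before it's ever stored.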

I do this. A small annoyance can crop up when trying to log in to a service with your + modified email address. Hmmm, what did I append after the plus? If you can't remember that, you can't use the "forgot my password" function either :)

Usually soluble, but irritating.

This is basically a mechanism that would allow for a honeytrap on steroids. https://blog.finjan.com/honeytokens-used-to-track-cybercrimi...

> that would require keeping track of bogus submissions

You don't have to keep track of the submissions, because you can generate them with a reversible algorithm. Basically, use a word-list method https://github.com/bitcoin/bips/blob/master/bip-0039.mediawi... but have the list consist entirely of people's names (or generate it from a corpus of known accounts with something like https://github.com/minimaxir/textgenrnn to make the entries harder to spot).

> generating new data for each company where you create an account.

yes https://arxiv.org/abs/2006.15794

Basically, treat the data from the various email honeypots as a "numbers station", but instead of using it to prime encryption keys, use it as a form of steganography. To do this entirely anonymously, the next step would be to publish on a public blockchain or anonymous service, so that the owner's device (the one that generated the emails originally) can upload a signed statement proving it was the phone that was pwned and which app was the offender.

A similar idea seems baked into a couple of crypto initiatives https://coincentral.com/sentinel-protocol/ but fundamentally we're talking about an anonymous reputation system modeled after how swarms operate to gossip risk.

It would be necessarily stochastic in nature because you'd be depending on the 3rd parties to send emails a bit at a time, but if you get a deluge of phones all reporting the same app, you can assume fairly confidently that app has been compromised. Punishment (Brand reputation, sanctions by app store) for being compromised would encourage better security.

This could (and would need to) be operated at the hardware level and orchestrated by the OS and OS provider. This is the kind of thing Apple and Google could do as part of their privacy initiatives around "differential privacy" https://venturebeat.com/2019/12/21/ai-has-a-privacy-problem-...

What, you think that deep learning chip on your phone is there to make cute avatars?

Well, since it uses names (last and first) that all start with Z, most of these crooked outfits could survive losing the 0.5% of "real" names they'd drop by filtering out the Z. Z. names.

Not an expert in guerrilla cyber-warfare, but isn't that the whole point of this sort of poisoning? If enough people do this, the cost of those bouncing emails becomes prohibitive. That's my speculation; it would be great to hear from someone who knows the domain better.

Oh! I know! I work for a large company. At one point we sent so much junk mail that we were the US Post Office's #1 customer. We have started to send junk emails, too.

There is a Swedish company (non-profit? activist group? I don't know) called Spamhaus. They partner with ISPs to help block spam. Their process is something along the lines of:

1) Inject fake email addresses into lists of email addresses that are bought/sold WITHOUT user consent.

2) Wait until someone UNAUTHORIZED sends them spam.

3) Tell the ISPs (and anyone else who will listen) to STOP processing emails from the companies that sent the spam.

4) The ISPs block those companies, because Spamhaus is reputable and REALLY good at finding spam; the ISPs also save money by not having to process the spam.

We accidentally got hold of a bad batch of email addresses several years ago, and we spent MONTHS trying to fish out the bad ones and overhaul our email authorization process. It cost us tens of millions of dollars.

Also, WHY were we sending that spam?!?

Even if you make your contact list 99% bounces and 1% real (and every user of the app does the same), I don't see how this becomes a problem for the app's operator. Remove a contact after 1-2 bounces and you're golden.
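The cull the parent describes is a few lines of bookkeeping on the operator's side, which is why bounce-only poisoning is cheap to clean up (a sketch; the class and names are mine):

```python
from collections import defaultdict

MAX_BOUNCES = 2  # cull an address after this many hard bounces

class MailingList:
    """Minimal model of a sender's hygiene loop: count hard bounces
    per address and drop offenders, so dead fake contacts are
    cleaned out within a send or two."""

    def __init__(self, addresses):
        self.active = set(addresses)
        self.bounces = defaultdict(int)

    def record_bounce(self, address: str) -> None:
        self.bounces[address] += 1
        if self.bounces[address] >= MAX_BOUNCES:
            self.active.discard(address)
```

After two hard bounces an address silently drops out of `active`, and the 99% of fake entries cost the operator almost nothing.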

Fair. But are they still golden if this needs to be done for all contacts of all users?

If they bounce they are extremely fast to cull.

Who bounces any more?

Everyone except nutters. Google bounces. Microsoft too. I don't know of any provider that doesn't. Surely someone somewhere does not bounce but they are in the minority.

You don't bounce emails; you pre-bounce them and clean up your list. This is part of any sensible data engineer's process.

More helpfully, salting their DB with real emails but fake contact info requires a more durable hygiene process, and often isn't worth the effort for data-driven shops.

You serve a variety of email domains that validate as deliverable, then you accept emails and report the sender, which hurts their deliverability.

what is prebounce?

My guess is they are referring to one of those services that check the validity of an email address. A false signal from one of these services prevented me from signing up for some random website with a .name domain the other day.

Email them from a throwaway domain and ip, toss out hard bounces from the list, so you don't poison your SMTP reputation?

Pretty nifty side point:

> If enough people do this the cost of those bouncing emails would become prohibitive.

This idea got a ton of attention in the early days of tech and led to what's known as proof of work: see Bitcoin. The primitives of BTC show up in a lot of interesting areas.

It will be interesting to see if these fake contacts show up in a leak somewhere someday. Almost like how people do myname+yourcompany@gmail.com, we could create similarly fake contacts to see who is selling or leaking data.


Yeah I've been using yourcompany@mydomain.tld for ages to track who's sold or fumbled my data. Since haveibeenpwned I can even approximately separate the two groups. Surprisingly up to now nearly all incidents (that I know of) have been breaches. Not that it makes it any better.

Same here. Though people usually give me weird looks when I do this IRL and ask me "Is your email address really ourcompany@yourdomain.tld"?

I've had those weird looks too, and I even got into a nice conversation with an employee at a T-Mobile store, but some people don't even bat an eye.

I'm surprised it has such a harsh name. Years ago I wondered what would happen if people just made up random data, e.g. derived from their own personal data, thinking about crawlers and automated personal-data processing. But I'd just call it creating garbage data, because that's what it is. Eventually it will be impossible for an algorithm to distinguish between real and garbage data. (And probably not only for an algorithm.)

But I agree with you, it's probably the wrong approach. Personally I've deleted quite a lot of accounts and uninstalled bloated apps. In addition I use tools that actually set additional boundaries, but I'd prefer it if the apps weren't so data-hungry in the first place.

I think the trick is that different users might create "identical garbage" so that contacts match on the backend.

Hm ok, that's a nice trick though

The vast majority of phone calls I receive are spam calls from people/robocallers to whom I did not give my phone number - but apparently someone else did. I don't want people sharing my phone number with random other people.

Nobody had to give them your phone number.

They just dial numbers at random. Phone numbers aren't sparsely distributed. There are entire area codes that are essentially fully utilized.

I thought the new challenge-response system between providers (SHAKEN/STIR?) was supposed to take out 99.9% of this. I guess I'm off to see whether they have actually fully implemented it between providers.

Edit: looks like this summer is the mandated deadline. From the Wikipedia article on STIR/SHAKEN: "As of 2019, SHAKEN/STIR is a major ongoing effort in the United States, which is suffering an 'epidemic' of robocalls.[1] Both the Canadian Radio-television and Telecommunications Commission and the Federal Communications Commission are requiring use of the protocols by June 30, 2021.[2][3]"

In addition to that, you probably don't come into contact with your fake contacts very often... it's just smoke and mirrors to make users think they have privacy...

Better to use canary email addresses that actually go somewhere you can detect incoming emails. Then it would be useful.

Clubhouse requires contact list in order to get invites, which are required to sign up right now.

I get why they are doing this, and it caused me to share my contacts with them.

However, I resented it and it put me immediately in a defensive posture with the product and company.

There is no possible way to trust a company with your contact list and Apple should make it how Photos works now--where you can select which data to share. There are some folks I don't even want to possibly find in a social app.

I mean this is why they do it. You knew it was wrong, you knew they were going to take that data and mine it, and you still said sure.

Are you writing that to emphasize the urgency for the government to pass legislation to rein in unregulated online casinos as they continue refining their dark patterns? (I.e., without legislation, these companies will keep finding more and more sophisticated ways to get the user to act against their own interest.)

Or do you mean to imply that a practical approach to reining in unregulated online casinos is to spread the message of "Just Say No" in web-forum comments to the ostensible addicts?

Or to be fair, something else entirely? My point is I can't tell without context there whether you are sympathizing with the user ("ah yes, something needs to be done because they've found your weak spot"), or chastising them for not having the force of will to resist dark patterns.

Edit: clarification

Not the poster you are replying to, but I stopped feeling empathy for people who complain about lack of privacy, yet willingly give up their data to non-essential services that ask for it with all the proper disclosures.

If you agreed to share all your contacts to listen to "musical tweets", I don't see why you'd be complaining. You willingly made a trade-off.

Social status is a hell of a drug. Clubhouse is a place where people like Elon Musk and famous successful scientists and businesspeople hang out, so all the hustler startup get-rich people want to be on board. It's exclusive; it's just for fancy iPhone users. Finally an elite place where you can only get in by invite - most cannot resist. If they miss out on the bandwagon, how can they call themselves an early adopter on the bleeding edge? What will their friends think of them? Almost as if they used Android or something.

I wish it were more clear what exact info we were giving to the app (not just generic "contacts" or "photos") and when the app is receiving that info.

I know more about coding than 95% of my friends and I still don't fully understand the depth of info that I transmit to an app when I agree to give it permissions on my iOS device.

E.g., if I give Whatsapp access to my photos on my iPhone, does that mean all of the photos that are stored on my iPhone, including screenshots and hidden photos, are uploaded to Whatsapp servers? Does it upload when I take a new photo or when I open the Whatsapp app?

So if, in this case, Whatsapp is indeed pulling all of a user's photos, including hidden photos, to their servers, I imagine many people would not want that to happen. So 1) I'd want to know ahead of time exactly what will be pulled and 2) ideally, I could have a way to use the app without giving it the keys to everything.

... willingly give up other people's data.

Exactly. What a jerk right?

“Oh noes I had to give up all of my contact’s personal information... but I got into that beta!!”

Correct. I've been a member for going on a year now and I have scores of invites I don't appear to be able to send because I won't share my contacts. Not that I care enough to invite people, but it's a dark pattern to even require it.

I have heard there's a way to share invites without sharing contacts, but I haven't cared enough to even do a cursory search on that.

Unsync your contacts from whatever service provider you're using, make sure they're gone, go ahead and share the (now empty) contacts with Clubhouse, get the invites, then revert everything back?

That means that, more than likely, Clubhouse has our details even if we have no desire to be part of it.

It'd be fun once they have an EU presence.

I think they had a wave of people join from Germany either earlier this month or last month, so I imagine there are already plenty of Europeans on the app. Plus, doesn't GDPR apply even if there's just one user who resides in the EU?

Having European data subjects is enough ground to ask them to abide by the GDPR. But suppose they won't: then you'd have to go to a European court, which could rule however it likes, but it can't do much to a company that has no money or people in the EU to collect from.

Yup, I've been wondering just how inept national and local governments seem to be at regulating a global internet. It seems that if we want a global internet, we'll eventually have to come up with a more coherent form of global governance.

Fomo is a helluva drug.

And this tells me there's a need for another step up for this app: to not only poison the contacts, but to temporarily 1) back up => 2) delete => 3) share the poisoned list => 4) restore contacts.

So we can share the list, but they'll never get our real contacts, only trash data. If enough people use it, maybe they'll stop.

But wouldn't this company have to periodically review your contacts, to slurp up new ones?

Yup, probably their next move would be to require constant access to contacts list and check whenever the app runs.

The next move on this side would be to keep contacts in a separate app from the standard Android/Apple one, and then make calls, texts, etc. from there.

If only there weren't so many sociopaths running these companies... sorry, wrong planet

They do it because all the successful social apps need to make contact discovery easy. The ones that don't use this trick - the ethical ones - we don't hear so much about; maybe they don't succeed.

> They do it because all the successful social apps need to make contact discovery easy.

Signal does it with hashes, which it doesn't store anyway.
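The classic hashed contact-discovery scheme looks roughly like this (a sketch, not Signal's actual implementation; note that the phone-number space is small enough to brute-force plain hashes, which is one reason Signal later moved discovery into secure enclaves rather than relying on hashing alone):

```python
import hashlib

def discovery_token(phone: str) -> str:
    """Truncated SHA-256 of a normalized phone number: the client
    uploads these tokens instead of raw numbers."""
    normalized = "".join(ch for ch in phone if ch.isdigit())
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

def intersect(client_phones, server_tokens):
    """Return the client's contacts whose tokens the server has,
    i.e. which contacts are registered users."""
    return [p for p in client_phones if discovery_token(p) in server_tokens]
```

The server only ever sees tokens, and discards them after computing the intersection, so nothing durable about the address book needs to be stored.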


There are quite a few that have not done it. I don't think it's necessary for success at all.

HN seems to be doing pretty well, and it's never done this sort of thing, as far as I know.

Reddit never did it during their growth phase, instead they provided their own seed content.

Metafilter has never done anything unethical to my knowledge.

There are many, many successful social networks which have not performed unethical contact harvesting and other shady things.

Clubhouse raised money at a billion dollar valuation. Hacker News specifically and Metafilter aren’t in the same stratosphere

But still, why do they need to steal your address book? They could offer to let you spam your contacts without demanding it. Is the profit contingent on selling the address-book data? To the point where they won't let you invite more people (help them grow!) without it?

The pushback is minimal. A lot of the pushback possibly comes from people who are going to be upset by many things; specific Reddit communities and Hacker News are good examples of that. If these demographics are unlikely to be happy with your social product's privacy and its dark (or non-dark) patterns, catering to them makes no sense.

I don't know anyone outside some geeky sites, and only one person personally, who cares about any of this. Some do casually say it's lame. But it's not going to be a deciding factor for using the app.

To add to the point about the whims of the geeky communities: some companies escape it more than others. Airbnb doesn't get much grief for spamming Craigslist users in its early days. Compared to the negative talk about Uber, Facebook, etc., they also get nowhere near the same criticism for the way they incentivize negative aspects of their platform.

All of this to say: there's no real downside if money and power are the primary goal.

> I don’t know any one outside some geeky sites and only one person personally who cares about any of this. Some do say lame casually. But it’s not going to be a deciding factor for using the app.

My experience is completely different from yours. Out of the dozens of people I've spoken with about this stuff, I can't remember a SINGLE PERSON who didn't express dissatisfaction with at least one of: lack of privacy and potential willy-nilly snooping by CompanyX employee; arbitrary blocking and post removal without good cause; bad interface design; low quality of content.

I don't go fishing for it either, it just happens in conversation, although I sometimes am the first to broach the subject of social networks.

Sorry, I was exaggerating and didn't make clear what I meant. Yes, people care. A lot of people will say something or other. Most won't actually refuse to connect their contacts to Clubhouse, though.

So I see the same dissatisfaction. I find it to be closer to a slacktivism level of caring for most people, though. If they stop using an app, or don't use it from the get-go for some reason, it usually doesn't align with how they're using other apps. So technically they didn't use something because of the dissatisfaction you described - but they aren't applying that even remotely consistently.

I think it is just habit and culture, and both can change very quickly. There are many examples throughout history, e.g. civil rights movement.

What are you trying to say?

That because they have a lot of VC money riding on it, they have to do "growth hacking" in order to justify the funds and grow quickly enough to satisfy the investors?

Well, I guess I have to agree.

I assumed the OP meant successful social apps as in successful to the point of being known by at least some average people. Metafilter and Hacker News are both very niche and tiny.

Hacker News doesn't have the same business model as the others, either. It exists to help its namesake incubator, and it succeeds at that. Harvesting contacts etc. wouldn't benefit Hacker News. Hacker News could lose a decent amount of money yearly, without any prospect of breaking even, and still be run.

Most other apps of any kind can’t be run that way, including Reddit and Metafilter.

Perhaps it is just a fact that we have to accept that large (1M+? 10M+? 100M+?) social networks cannot remain sustainable without abusing their users. That would mean we can benefit from building smaller, sustainable communities for ourselves and those we care about. I'm surprised it's not happening already, to be honest.

With today's technology, you can spin up a community website for, e.g. your family or your organization for the price of basic Web hosting and have all the perks of connecting without the downsides of e.g. your data being harvested and reviewed by anyone at CompanyX.

Sure, you have to do your own security, but the big social networks aren't impervious either. And you gain the advantage of not having your account randomly disabled or spamfiltered or shadowbanned.

It won't protect you from NSA or FBI, but I don't think most people care about that. On the other hand, people I've spoken with are aware and do care about snooping by CompanyX employees.

The more I think about it, even as writing this comment, the more I can see that we are very close to rapid disruption in the social network space.

Perhaps it is my family being minority immigrants with other caveats. Snooping by employees is rarely done. It most likely isn't going to mess you up too badly either.

On the other hand, the number of people I know who are either deathly afraid or just normally afraid of the government and agencies like the ones you mentioned is high. That is definitely a minority of people; my family and extended family aren't common cases.

I don't see either as big deals though. Not enough to have people not stick with bigger apps that have network appeal.

I don't see disruption happening - just more of the same in different clothing. I'm a pessimist, though.

In my case, I don't even remember giving them permission to use my contacts, yet I got accepted because one of my contacts sent me an invite.

I might have given them permission without realizing it, but what could've also happened is that they saw my phone number in someone else's contact list, and assumed we were contacts.

You probably didn’t share, as I didn’t. I believe the contacts permission is only required if you want to share an invite, not to accept one.

Clubhouse can bite me.

I refuse to use tooling from shitbags who try to extort me into compromising others' privacy for shiny toys.

I know other shops do it, as if that makes it OK.

I remember signing up for Facebook back in the day. They tried to get me to share something about my email contacts list. That just made me not use Facebook instead. Unfortunately, everyone else didn't seem to have a problem with it.

Facebook literally had a box on their web site asking for your email address and email account password, so they could log in to your webmail and scrape your contacts.

Normal people value their social standing and their relationships, bragging rights etc. higher than abstract principles. It's only loners who will resist. Popular people will be on board because they manage their brand and image instinctively. Wannabe popular too.

App Store guidelines forbid using the Contacts for anything except the intended purpose: https://appleinsider.com/articles/18/06/12/apple-disallows-d...

Do we give CH the benefit of the doubt =p ?

In any case, I also hope (and expect) Apple to implement better controls for sharing contacts.

EDIT: Typo

Huh, so Clubhouse is explicitly breaking Apple's rules.

Surely Apple knows this, but is allowing it because... it's mega-popular?

What's the point of having rules if clawing your way to popularity by leveraging their violation is deemed permissible?

How are they breaking the rules? It seems like they are using it for the same purpose they prompt permissions for.

They create ghost profiles for those contacts, just in case that user ever signs up. That’s fucking garbage and they should be ashamed of doing that, let alone immediately removed from the App Store.

This seems like the shady type of thing lawmakers should pass laws against

It's the sort of thing Apple should ban to protect its claim that it is more private than Android.

It really is.

I keep buying into the Apple ecosystem because of their stance on user privacy. Sure, they aren't perfect; but they are miles ahead of their competitors.

I see this as a key problem of our times. Social convention used to have a stronger impact on behavior. Now it isn't enough for behavior to be disdained, it must be flagrantly illegal.

Growth by any means necessary. It seems like there are tens of thousands of apps that each act like their own data bureau, compiling dossiers on billions of people, just because it makes money. Maybe a few percentage points of value lost as a slap on the wrist every now and then. I feel that, in this scenario, rather than a better carrot, we need a better stick.

When was the time when social convention had a stronger impact?

Social convention has always had a strong impact. Where I live, people will cut you in line if you leave a space big enough for them to fit and it's perfectly ok. Where I'm from, if someone did that, you'd end up with an angry mob and probably a fist in your face.

Social conventions are always stronger than the law; at least in person.

Really? Goddamn that's some dirty tactics. Guess I won't even look up what they actually do. FB already got its hooks in me when I was young and stupid, never again.

It's disingenuous of them to say they "have to" do contact upload. Why can't I type in a phone number to invite? Completely hostile. Consequently, I have invited nobody.

Same here. It also seems to burn through battery more quickly than other apps.

An app that recreates party lines on POTS burning through battery is unfortunately unsurprising!

When you leak your contacts, you harm others, not just yourself.

This, among other reasons, is why I never give out the number of my SIM card, or my residential address, et c, to anyone. They're just going to click "allow" and give it to a thousand shady companies, starting with Facebook.

I never give people data I don't want stored in my shadow profile.

> When you leak your contacts, you harm others, not just yourself.

As Eben Moglen puts it, privacy is ecological, not transactional.

See http://snowdenandthefuture.info/PartIII.html

Here's how to get around Clubhouse uploading contacts. We shouldn't have to do this, but here we are.

1. Disable contacts for all your configured accounts.
2. Add a dummy Gmail account; enable contacts.
3. Add the invitee to the dummy account.
4. Give contacts access to Clubhouse.
5. Send the invite.
6. Remove contacts access.
7. Re-enable the contacts you disabled in step 1.

0. Don't use Clubhouse because it adds no value?

When you run a business, you have to go where the people are. If my customers are there, I have to be there.

I’d think that depends on the business. What is the engagement like on clubhouse? Do you participate or just have a presence?

This is the only reason I have to touch Facebook. It’s icky.

I did the same and I’m still annoyed at myself.

Clubhouse is pretty shit, really. So I sold my soul and got nothing in return.

Thanks for sharing this.

I have similar feelings about the product, but am curious to hear your reasons in detail first if you'll share them.

The one thing that got me interested is them using a photo as the app icon. Intriguing. Maybe there's some fun to be had. The rest was of no real interest to me. Silly, but here we are.

Trivialities aside, the content is not for me. It's either some self-help thing or a get-rich-quick scheme. And I don't care about either.

Worse though is the content delivery. They talk so much and say so little. Horrible.

It really is this:

> Clubhouse is C tier people listening to B tier people talk about A tier people

And here I am, a D tier person not wanting to be part of this circlejerk.

Thanks for that feedback.

I noticed the app icon as well. I wondered who it was, but didn’t look into it.

I’m in agreement about the content.

The “entrepreneurship” culture on Clubhouse seems to be either VC / media worship or this hustle thing.

I joined a "Real Estate Money" group that was pitching $500 investments in shared AirBnB properties. It had all the calling cards of a scammy “investment” group. A mix of cheerleading and leadership-enriching sales to suckers.

I spent some time last night in some social groups, and while entertaining from a sort of shock-value perspective, they were not well moderated. They would not work with bigger audiences.

Andrew Sorkin interviewed Bill Gates on Clubhouse on Friday. My eyes widened when I saw that, and I questioned my initial assessment of the product and its velocity. Then I realized I was thinking of Aaron Sorkin. Andrew is some mainstream media journalist. I doubt Gates gave a hoot about the medium.

I think VC, media and “influencers” all slept on TikTok and are now overcompensating with involvement on Clubhouse. It’s better suited to them anyway, and it matches the atmosphere of conferences.

It is the type of activity PG labeled “playing house” and is perhaps well suited as a stand in for puffery, fakery and “influence.”

Regarding content delivery, it's like you read my mind. I'd describe Clubhouse content, even the purportedly serious stuff, as loquacious word salad.

Clubhouse content forces listeners to be beholden to linear progression. So you lose the most valuable thing about the audio content found in podcasts: editing, and people who have developed the skills to be engaging.

Clubhouse lacks the crucial “trick play” of podcasts, where you can option out of minutes of content (and ads) at any time.

So anyhow, I regret sharing the address book with them and took my lumps ITT for admitting as much.

However, I don’t regret trying it. Critically evaluating nascent platforms and technology is what I do.

> Andrew Sorkin interviewed Bill Gates on Clubhouse on Friday. My eyes widened when I saw that, and I questioned my initial assessment of the product and its velocity. Then I realized I was thinking of Aaron Sorkin. Andrew is some mainstream media journalist. I doubt Gates gave a hoot about the medium.

Funny, when I read that, my eyes equally widened, until your next sentence corrected the interviewer’s name in my head.

I believe Andrew runs DealBook on the NY Times, but I’m curious how the interview was conducted?

Bill Gates is on record some years ago, saying that “no iPhone for me” when asked if he used an iPhone. This was around the time Windows Phone lost the mobile market to iOS and Android, and since ClubHouse is iOS-only, I’m wondering how Gates was able to take part, unless he’s changed his mind since he was asked that question.

I have an old iPhone with an empty address book for testing dodgy apps that require contacts access; I use that for sending Clubhouse invites. OTOH, Clubhouse seems to work fine on my primary phone, where I haven't given it contacts access.

For Android I can recommend "Shelter"[1], which lets you set up a work profile so you don't have to share your contacts, files, etc. Downside: if you already have a work profile, it does not work (Android allows only one work profile).

[1] https://f-droid.org/en/packages/net.typeblog.shelter/

nice find.

Is there a list of known existing "big brother" apps? Or is it just as good to look at app permissions to figure this out?

If your invitees don't also have a spare iPhone, what's the point of inviting them? They'll have the same problem with no workaround?

You don't need to grant the Clubhouse app access to your contacts to use it. ATM, that's only needed to invite people.

I was kinda confused at first to see the top suggestions were all Doctor’s Offices. Then I figured it out.


First I have to keep a burner number with a real sim card for things that require signup, now I have to keep a burner phone with no contacts?

I would never sign up for or use a service that has such an invasive requirement. I only use my Google Voice number for anything public, even dating. Spam and robocall it all you want; surprisingly, I never receive many such calls.

It's not really understandable. It should be opt-in, with an option like: "I would like these 10 people who are important to me to be on the list you look at, not these other 400 people whose phone numbers I've taken down at some point."

They did something bad and yet here we are. I don't know what Clubhouse is, but I'm somewhat tempted to look it up. Marketing: successful. (I won't, in an attempt to counter that effect of growing due to negative publicity, but I find it noteworthy how well it works.)

What about dividing your contacts into circles and only give permission to a specific set?

Sure, as long as it’s possible to create a circle containing only one contact, the way giving permission to access photos now works on iOS.

Google+ anyone?

Or a feature implementation which would essentially mean: "select fake/random data instead".

Fake/mock GPS (w/o telling the app that it's fake unlike what Android does), fake contacts etc.

Should you not tell your contacts that you gave their details to Clubhouse?

I mean, unless you're a newbie to the internet, how is this possible?

> Clubhouse requires contact list in order to get invites, which are required to sign up right now

How is this GDPR compliant?

I think this is a wording issue if you haven't used Clubhouse.

You don't need to share contacts in order to get invited, like you don't have to do it to use the platform. You have to do it to invite others (like your friend that you told about Clubhouse) after you are already on the platform, so that is not regulated by GDPR.

It is a shitty user experience and I also want Apple to control this at the OS level. Let me select which contacts if I want to do it at all.

> How is this GDPR compliant?

It isn't, really, but the question of whom to prosecute is complicated. Clubhouse gets the contact list data from you, the user. Usually, somewhere in the ToS, there is a clause where you confirm that you have the right to share all the data you share with Clubhouse. That means that first and foremost, you as a user are responsible.

If you are a non-commercial user using Clubhouse from your private phone, what you do with your private contacts isn't covered by GDPR; private use is an exception. However, as a consumer, European legislation protects you from surprising and unusual terms, which this might be. Legislation might also protect all your contacts. However, this is a question that still needs to be litigated in court, and I don't remember any decisions around that problem (WhatsApp basically has the same constellation).

If you are a commercial user, because this is your work phone and your contacts are colleagues, business partners, customers, things are quite different. You are, as a data processor, responsible for how you pass on your contact list. You better make sure that you are allowed to do that (because you have a GDPR-compliant reason like legal obligation, contractual obligation with your customer, assent or legitimate interest) and that your contacts have been informed about what you are doing beforehand. Also, you then need a written contract with Clubhouse about the data being passed along, about how it will be used and protected, etc. Also, passing along the contacts to Clubhouse must be necessary for a predetermined, well-defined reason that can be considered more important than your contacts' right to privacy.

So as a private person, you might get away with using Clubhouse. As a company, employee, self-employed, state official, whatever, you are probably in hot water, because surely you didn't do all the required things. But for Clubhouse this might not be a problem, because as current case law stands (imho, iirc, ianal, ...) Clubhouse isn't the party that did something wrong there.

On Android if you use Work Profile your work contacts are in a separate partition and can only be accessed by approved company apps. This works really well for gdpr compliance with dual-use (company & mobile) devices.

I see the point, but if I upload my contact list, is the non-compliance mine (I didn't ask permission from each one of my contacts) or Clubhouse's (they asked me to do it)?

It should be glaringly obvious to Clubhouse that they don't have the right to even store most of this data, let alone use it for anything.

So even if you are at fault, I can't imagine that would help them a lot, if some data protection authority looked into this.

Both: yours for sharing, Clubhouse's for storing.

It isn't, but in addition to the (valid) arguments the other commenters make about Clubhouse not having any assets in Europe (thus making enforcement of any kind of penalty nearly impossible), the majority of the data protection agencies are also completely incompetent at enforcing the GDPR even against companies that they can collect from.

Why would you want to be GDPR compliant?

Because in the European Union it is a regulation, and you (as a company) can be fined if you are not compliant.

I recommend having a look over the Wikipedia page on the subject:


If you’re not subject to the EU (i.e. don’t have any offices, servers, etc. in the EU) I don’t see how the GDPR is relevant: non-EU citizens generally aren’t subject to the laws of the EU.

Then you cannot have EU customers. Or make wire transfers through the EU.

You can also forget vacation trips in EU.

If thoroughly enforced, which is currently not the case.

"The GDPR also applies to data controllers and processors outside of the European Economic Area (EEA) if they are engaged in the "offering of goods or services" (regardless of whether a payment is required) to data subjects within the EEA, or are monitoring the behaviour of data subjects within the EEA (Article 3(2)). The regulation applies regardless of where the processing takes place. This has been interpreted as intentionally giving GDPR extraterritorial jurisdiction for non-EU establishments if they are doing business with people located in the EU."

Source: https://en.wikipedia.org/wiki/General_Data_Protection_Regula...

Countries or groups of countries don't get to impose their law on other countries.

That's called colonialism, and Europe is supposed to have given it up.

I am not a lawyer, and I don't claim I understand the legal mechanisms involved. I don't even claim GDPR is perfect.

But, as I see it, the EU is protecting its citizens. If you want to do business with EU citizens you must abide by EU regulations. It's that simple. I don't get how this came to be all of a sudden about colonialism. Any business is free to stay out of the EU.

And any EU citizen is free to not do business with a company outside the EU.

Do you think the EU laws should apply to people selling things to EU citizens while they are on vacation in other parts of the world? If someone from Germany travels to Brazil and buys something from a store, are they required to abide by EU rules?

If someone from the EU leaves the EU digitally to buy something in another country, it isn't up to the seller to enforce EU rules.

Unless you have an entity (either yourself or your business) under EU jurisdiction, you don't have to follow their rules.

There's an asymmetry of information and power in the relationship between a business and a citizen. Governments, generally, attempt to mitigate this asymmetry. Hence, we have consumer protection laws, GDPR and the likes.

While these solutions may be incomplete, or imperfect, having none is definitely worse.

> If someone from the EU leaves the EU digitally to buy something in another country, it isn't up to the seller to enforce EU rules.

> Unless you have an entity (either yourself or your business) under EU jurisdiction, you don't have to follow their rules.

Please _do_ read the link I already posted in a previous comment [0]. It clarifies many things, but I don't want to paste too much content here.

[0]: https://en.wikipedia.org/wiki/General_Data_Protection_Regula...

I am not sure what you are trying to argue here. I am not making any moral claim about whether a GDPR-type regulation is good or bad. I am simply saying that the EU saying the law applies outside their borders doesn’t make it so.

If I am a US citizen living and working in the US, and break the GDPR by storing data illegally from visitors to my website from the EU, the EU can certainly try to fine me or issue a summons or whatever they want to do.

However, there exists no extradition treaty for this law, and there would be no way for the EU to enforce judgement.

Yeah, this is the really frustrating thing about conversations about the GDPR: whatever you think about how companies should act, legislation doesn’t really matter unless there’s some way the government can take retributive action against those who ignore it. When someone asks about what this mechanism is, you inevitably get a whole host of people assuming you dislike the legislation.

This article basically confirms my suspicion that this provision is basically unenforceable:


> If you want to do business with EU citizens you must abide by EU regulations.

No, no more than if I want to do business with Saudis I'm liable for punishment if I drink a beer.

But that's not really a good analogy (not that analogies are proof). A better analogy would be you selling beers in Saudi Arabia.

I urge you to read this, it should clarify things:

Applicability outside of the European Union:


They claim it applies. That doesn't make it so:


I wonder when the USA will follow suit?

If some of your users are in the EU you need to be GDPR compliant.

This is what the law says, but I don’t understand how this is expected to work: without some kind of treaty from the US government, the EU has no way to make US companies comply.

The US and EU have a treaty specifically about enforcing each other's laws. (More accurately, the nations that comprise the EU are individual signatories to such treaties.)

Source? This lawyer seems to think that there’s no applicable treaty.


Here's the one between the US and the largest economy in the EU:


There is no legal mechanism, because such mechanisms exist mostly for criminal law and for civil and public debt collection. So the EU maybe cannot use most of the enforcement mechanisms, except one: you can be fined some amount of money, creating a public debt which can then be collected if there is a treaty about such collections.

Have you not heard of extradition treaties?

For example, that's what the US is using on Kim Dotcom.

There's a slew of individual things that can be done. EU companies can be prevented from doing business with a (willfully) noncompliant company. Wire transfers going through the EU and other operations can be blocked. And, of course, the service itself, its apps, its sites, its traffic, can be blocked from accessing the EU internet (or being accessed from it).

That's not even getting into international pressure levers.

I don't know that we've seen any of those kinds of actions yet, but they're clearly on the table if a company breaking the rules became a real "problem".

The thing is, if you're just completely avoiding doing any business with the EU, having any EU customers or users, and just not touching the EU with a 1000 mile pole and avoiding the GDPR in such a fashion - well, then there's no reason to go after you. The legislation has done its job.

> And, of course, the service itself, its apps, its sites, its traffic, can be blocked from accessing the EU internet (or being accessed from it).

In other words, the EU can attempt to extend its internet regulations over the rest of the world by implementing a China-style firewall. Well, we'll see if that happens.

It is more akin to the US Sanctions. You don’t have to abide. If you do trade with sanctioned countries, you should not do any kind of business with the US, or pay a hefty penalty.

Here’s a case example, BNP Paribas dealings with sanctioned countries. https://www.wsj.com/articles/bnp-agrees-to-pay-over-8-8-bill...

If you're operating a business that interacts with customers in the EU, GDPR applies.

I thought US companies had to agree to Privacy Shield if they wanted to be considered GDPR-regulated.


Why any US company would voluntarily agree to this is beyond me, unless one of its EU customers insisted on it.

The EU says it applies but, AFAICT there’s no legal mechanism by which it applies.

Here’s a lawyer’s take on this: https://tinyletter.com/mbutterick/letters/you-re-not-the-bos...

To avoid substantial financial risk.

Has the EU sued and won against any company who is not located in the EU?

That's not a good test, because the law is still relatively new, and it takes a while for litigation to make its way through the system. We also don't necessarily know who has settled out of court.

Would you like to be a test case for us?

I would love to be a test case on it, but I am not in a position to.

I'd be extremely interested in a company that doesn't operate in the EU being brought to court, and in which other countries, if any, would be willing to help the EU enforce those judgements.


On a side note, Germans are obsessed with Clubhouse.

They are in California. They can give the finger to the GDPR. It's irrelevant to most people in the world.

People tend to forget that it is not applicable. For instance, nothing I build will ever comply with it, regardless of users that might be in Europe.

Clubhouse has no duty to obey European law.

The question is: why do you think they need to be compliant?

This is not how it works. If you make it available to EU users, you have to comply with GDPR (at least when it comes to those users' data).

For the same reason WhatsApp's new T&Cs don't really change anything for EU users.

However, I don't think the collection of contacts is actually illegal under GDPR, considering WhatsApp does exactly this too. And it's huge in Europe, much bigger than in the US. If they haven't gone after WhatsApp for this, they will probably not do so for Clubhouse.

If they don't do business there they don't have to comply. Making it available doesn't count

Just like I don't have to comply if I have EU users on a service; I am in the United States. Europe cannot enforce their laws here. It's just the same as if Saudi Arabia tried to enforce their laws here. They carry no weight.

That is what makes the GDPR insignificant. It applies to Europe, not the rest of the world. The cookie warnings for the vast majority of the internet are stupid and unnecessary.

So call it illegal in Europe, but who cares?

It honestly is maddening how many people care about the GDPR that don't need to

There's many EU things that take effect with vendors outside the EU. Like software sales: Try to buy a license for a software package from the EU (or with an EU payment card) and you will always be hit with VAT at the rate of your country :( Even if the company is US based only. With the exception of really small ones I guess. In the above case it's annoying for us :) But in the case of GDPR it's good IMO.

Anyway the EU says it applies but I agree they don't really have much in the way of enforcement capability with companies that have no presence here. Though they could ask Apple/Google to remove it from the store I suppose.

And of course most companies do have a presence here. All multinationals do, and even the smaller ones. Even if it's just a sales office.

Most American companies don't though. They can safely ignore european laws

And also choose not to operate in the nations whose laws they are flouting, in most cases. EDIT: a few weeks ago, EU posters here were describing how ERCOT was preventing access to the company's public-facing website, citing not wanting to comply with GDPR.


Not exactly on topic, but historical context maybe: Long ago (early 90s?) when it was guessed/assumed that intelligence agencies were scanning emails, emacs was still among the best ways to read and send email. So emacs provided a handy function to append a random list of "hot" words to each outgoing email in the signature, just to degrade the signal-to-noise of such surveillance.

It's still there today, and you can see the output via M-x spook.

That used to be the case on usenet too - people would put attention-grabbing words in .signature as "NSA Food" - to overwhelm the NSA data capture algos. It seemed like a futile gesture even at the time, but particularly poignant looking back from a post-Snowden world.

The real poignancy is the shift in hacker political views. Call it post-software-is-sexy world. Those usenet sigs were by hackers who lived in a world where software engineer or programmer were social reject code words. That world changed after geeks came into money. Suddenly but soon thereafter, paranoia about privacy was rewarded by tinfoil hats. (And then yes, years later, came along this guy called Snowden.)

Such an interesting context. Thanks for sharing this. I appreciate the nostalgia poetics of this today.

Remember: some apps check for what apps are installed on the device, and if they see this installed they can deduce you're poisoning the well.

Also if you want to research obfuscation and how it thwarts surveillance, check these:






>> some apps check for what apps are installed on the device

I can't believe that's allowed by the OS - seems like a horrible policy.

Probably should be removed, but I have seen it used legitimately sometimes. Some apps for things like contact syncing will tell you there are other apps for CalDAV and such, and check whether you already have them installed so as not to show the message.

Nextcloud and DAVx5 by chance?

Agreed. I'd like to see a source or reference for this.

But the app in the original article doesn't even work on Android; it is an iOS app. The link you provide is about Android, right? (Still concerning, however.)

Seems you have too many HN tabs open at the same time. But the article I linked shows the study done on how apps read this information. It went both ways for Android and Apple at the time; not sure if much has changed.

Yeah sorry :)

If they saw this app installed, what might they actually do about me or my contact list?

They could just flag you as someone who poisoned the well and ignore you I suppose. Remember: bad actors go after low hanging fruit and tend to ignore privacy-aware folk and those doing anti-surveillance.

Remove all contacts that first name and last name start with Z.

Docs say that they prefix every first & last name with Z so that would be a start.

Also: check for contacts with weird country-code prefixes that don't match the country the user is based in
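Taken together, the detection heuristics suggested in this subthread (Z-prefixed first and last names, country codes that don't match the user's region) could be sketched like this. This is a hypothetical filter, not anything a real app is known to run:

```python
def looks_poisoned(contact, home_code="+1"):
    """Apply the thread's heuristics: Fake_contacts reportedly prefixes
    both first and last names with 'Z', and generated numbers may carry
    a country code unlike the user's own."""
    first, _, last = contact["name"].partition(" ")
    z_prefixed = first.startswith("Z") and last.startswith("Z")
    foreign = (contact["number"].startswith("+")
               and not contact["number"].startswith(home_code))
    return z_prefixed or foreign

def scrub(contacts, home_code="+1"):
    # Drop everything the heuristics flag; false positives are the
    # scraper's cost (a real "Zack Zimmerman" gets dropped too).
    return [c for c in contacts if not looks_poisoned(c, home_code)]
```

Which is exactly why a predictable naming scheme weakens the poisoning: anything the generator does consistently, the scraper can filter consistently.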

I guess they may decide to not sell your data. Which is actually a good thing.

This is a common technique in the mailing list industry. It's called "salting". You add fake names, but real email addresses, street addresses, or post office boxes. You then monitor what shows up in these places addressed to "Mr. Fake Name". It's how mailing list companies monitor who is using their lists and helps control misuse.
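A minimal sketch of salting, with made-up names, addresses, and function names: each list buyer gets a unique fake subscriber, so mail arriving at that address identifies whose copy of the list was used or resold.

```python
import hashlib

def make_salt_entry(buyer, domain="example.com"):
    """Generate a fake subscriber unique to one list buyer: a
    fictitious name, but a real, monitored mailbox."""
    token = hashlib.sha256(buyer.encode()).hexdigest()[:10]
    return {"name": "Mr. Fake Name", "email": f"salt-{token}@{domain}"}

def identify_leaker(received_address, buyers, domain="example.com"):
    """Mail arriving at a salt address reveals which buyer's copy
    of the list it came from."""
    for buyer in buyers:
        if make_salt_entry(buyer, domain)["email"] == received_address:
            return buyer
    return None
```

Real list brokers presumably use less conspicuous addresses and names, but the mechanism is the same: per-recipient markers that survive copying.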

A general term for this is a "copyright trap" [1]. Map makers for example often add small, fake features to be able to tell if another map was copied from theirs.

[1] https://en.wikipedia.org/wiki/Fictitious_entry

Have you worked in this industry? Curious about more details of tricks from various list makers/sellers.

I have bought lists. I'm working on a system to manage lists.

I seem to remember CyanogenMod having a per-app sandbox feature around 2013 that returned blank info from a virtual root.

Like many point out, this isn't data poisoning, especially if there aren't metric-breaking honeypots around the web seeding these services with enough noise to make these collection practices useless, which there are not.

A more effective alternative might be hashing real contacts to generate seeds of complete but false profile information. Apps thinking they got the mother lode wouldn't be able to assign confidence to any results they didn't have duplicates of, and slowly over time, groups who used this would become worthless.
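The hashing idea above could look something like this sketch; the name pools, the per-device secret, and the scheme itself are all invented for illustration:

```python
import hashlib
import random

# Invented name pools for illustration.
FIRSTS = ["Alex", "Sam", "Jordan", "Casey", "Riley", "Morgan"]
LASTS = ["Smith", "Jones", "Garcia", "Chen", "Patel", "Novak"]

def fake_contact(real_number, device_secret="per-device-secret"):
    """Derive a complete but false profile deterministically from a real
    contact: repeated scrapes of this one device stay self-consistent,
    yet never corroborate data harvested from anyone else's device."""
    seed = hashlib.sha256((device_secret + real_number).encode()).digest()
    rng = random.Random(seed)
    name = f"{rng.choice(FIRSTS)} {rng.choice(LASTS)}"
    number = "+1" + "".join(str(rng.randrange(10)) for _ in range(10))
    return {"name": name, "number": number}
```

The determinism is the point: a scraper can't distinguish these from real contacts by looking for inconsistency across scrapes, and without duplicates across devices it can't assign confidence to anything.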

I remember that too; it was great. That feature disappeared at some point though - it's not in Lineage OS these days as far as I've found. I recall it made some apps crash, but as far as I could tell, only those that weren't robust enough to handle being fed junk data. I'm not sure why that feature disappeared.

EDIT: my guess is that a later Android update broke the existing Cyanogenmod code and no one was maintaining it.

There's XPrivacy framework that runs on top of Magisk or XPosed (not sure how it works now). I remember it allowed you to give very fine-grained permissions to apps and poison the data as well, with fake contacts, location, etc.

Back in the day it required a lot of tinkering to set it up, and would likely make your OS pretty unstable.

Apps using contacts is a $#%$ing anxiety attack for me. The scum companies don't care. They just want more leads. But for me, it's this fear that they're going to spam my exes and old roommates and bosses and professors and landlords and everyone who ends up added to my contacts.

Signal did that to me last week. This person I'm not on speaking terms with got Signal and it added us and announced to each other we were on it and put our empty conversation onto my list of convos.

Phone contact lists are a complete $&^*ing disaster and Apple needs to make it far more clear what specific contacts I share access to.

Not to be unkind but I suppose most people are not really traumatised by merely seeing someone's name, even if they're not on speaking terms with that person. It probably falls on the side of convenience for the vast majority. For the Signal org, it's possibly even an existential issue, since it helps them counter network effects in the incumbents. It's hard to expect them not to do it, then.

Having said that, I think it would be nice for Apple to implement what you describe.

> but I suppose most people are not really traumatised by merely seeing someone's name

I mean there are cases where that can be devastating.

"Ohai here's your old abusive ex, here's a chat box just for good measure, good luck!".

There are people who I'd never ever want to be within a textbox and tap away from accessing me, for any reason, period.

You can get restraining orders in the physical world; the digital world, however, has no boundaries when the apps themselves are too stupid, defined by programming code that ignores real-world logic. I wouldn't expect an app to understand a 'court order', but that's a real human construct. How do we design against that in the digital space, when you are so accessible that, if you have a crazy dude following you, you're basically forced to retreat because there are no effective measures or guards against this?

Well, a couple of things:

(a) You can't take seeing their name, but you keep them in your contacts? Don't you occasionally scroll past it with a call button right there, which is just as easy to hit and puts you in touch with them? How is this any different? Seems a bit silly.

(b) As far as I know, research suggests hyper-avoidance is not a good way to resolve trauma. So I'm not convinced by the idea that this is harmful, especially when you can control it through (a).

A contact list often operates as a database of which number belongs to whom, for screening incoming calls. It can be a security tool.

In iOS and Android, incoming call blocks are in a separate database and explicitly not the contacts database.

You can generally block calls by number, without having them as a named contact.

I do see Waterluvian's point though. You might still have business with them yet you don't really want to deal with them otherwise. Knowing who this SMS or call was from can be helpful rather than blocking the number outright.

Then again, seeing their name when installing Signal and figuring "oh hey they have signal too" seems no less weird to me than seeing their name in my phone book and thinking "oh hey they have a phone too". If that really sets you off... that seems unlikely. So I don't really get this subthread, even if I see the general point that you might not want to be reminded of certain people on a regular basis (for me, installing a phone number-based social application is not a monthly occurrence).

> You can't take seeing their name, but you keep them in your contacts?

If I start getting abusive calls or texts from a usual suspect, I want to know who it is. My carrier-level number blocking resets every couple of years, and I cannot remember everyone's phone numbers.

Even if you don't keep them in your contacts, the connection tracking can be problematic if they keep you in their contacts.

"But what if you didn’t give Clubhouse access to your contacts, specifically because you didn’t want all or any of them to know you were there? I regret to inform you that Clubhouse has made it possible for them to know anyway, encourages them to follow you, and there isn’t much you can do about it... I got followers who weren’t in my contacts at all — but I was in theirs."


Why do you have the authority to dismiss many's experience of a feature? Because you can think of a way you would handle it and you've read some things?

Because we're all here talking about how things should be designed, which often inherently requires fulfilling some needs at the expense of others? Not quite sure how you expect those decisions to be made without people gathering to discuss the relative merits of each approach.

If you're about to tell me we should just implement every user request that they claim is of 10/10 importance to them personally, then I'm not even sure what to tell you. Have you taken all of a few seconds to consider what happens when two people make conflicting requests? Then we're back to evaluating things and discussing them again. How arrogant of us.

I appreciate the implied authority you've given yourself to be the conversation police, though.

In my case it wasn't traumatic, exactly. More, targeting.

There was an individual that I kept in my contacts, you see, for the sole purpose that if he ever called me, I'd know to let it go to voicemail. We had been close long ago, but he stopped living in consensus reality and wasn't interested in treatment. I considered him disturbing but not immediately dangerous, just someone I didn't want to reconnect with.

When I installed Signal, he got the notification that I had done so, and immediately messaged me, along the lines of "Oh hey, you still exist! And I guess by the timing of this install, you must be at [security-focused event] this weekend, yeah? Hey let me tell you about my latest harebrained scheme..."

I understand that Signal needs to do that sort of connection to work behind the scenes, but they don't need to generate an alert on the guy's lock screen about me.

"Did this cause trauma" is not the bar we're trying to set here, any level of anxiety caused by tech companies misusing contacts is bad.

> Not to be unkind but I suppose most people are not really traumatised by merely seeing someone's name, even if they're not on speaking terms with that person.

Domestic abuse, harassment/sexual harassment, stalking etc are all more common than they should be.

I've got a dead friend that I'm reminded about every time I open signal. "DeceasedFriend is on signal!". No, no he is not.

I'm sure I could clear it, but I don't really want to yet.

On the whole, I still like the feature.

I'm sorry about your friend. I've had similar experiences with tech products, but I tend to think that unexpected reminders (of any kind) are all part of the process of dealing with loss. That hyper-avoidance seems an unhealthy route, popular though it is in modern discussions about emotionally difficult subjects.

Yep. I can't claim to know how everyone else responds to these things.

The Signal example isn't the worst. It's a mutual connection. It's not like they're emailing hundreds of people saying "Waterluvian wants you to get on signal!"

What's to stop them from doing that when they get sufficiently desperate? I don't even own my contact lists. They seem to grow on their own with anyone I've ever emailed.

Signal does it for anyone in your address book, not just mutuals.

Your "anyone I've emailed" example is a great reason not to use the same service you use to host your email to host your contacts.

Personally I would never in a million years sync my contacts to Google, which I assume is what you mean here (most people use gmail).

Probably. Contacts have been confusing. I've had my Gmail list. My phone. What's on my SIM card. My Sony contact list...

I had a really infuriating time trying to clean them all up many years ago and I've just tapped out.

Same here. I recently went to LineageOS and use fastmail for email/contacts/calendar. It's been wonderful.

The problem I have with WhatsApp is even worse than with Signal: not only do they nudge me to start a conversation with that customer, to whom I only wanted to appear super-stern and rigorous, but they also send them my profile photo and my name!

My business name is not my private name! At least let me remain under whatever name they have for me in their address book; don't give them more information.

Signal shows contacts (and just bare phone numbers as well) inside the app which have not been in my contact list for years (but once were).

And this is how Signal suggests doing it https://support.signal.org/hc/en-us/articles/360007319011#io...:

> Remove someone from your Signal contact list

> Contacts must be blocked in order to be removed from your Signal Contact List. To learn how to block someone, click here.

I wish Telegram had a setting for "Block everyone in my contacts list". Unfortunately it only seems to have the reverse.

Does Signal share contacts the same way others like WhatsApp do?


> Signal clients will be able to efficiently and scalably determine whether the contacts in their address book are Signal users without revealing the contacts in their address book to the Signal service.

Note that this SGX thing is broken seven ways from Sunday, but in principle, yes, they have some security measures here. We just have to trust that they won't crack their own SGX environment, and (regardless of SGX's security) that Intel won't generate an identical MRENCLAVE for anyone else but with additional logging code running inside.

This is the best system I know of anyone running, by the way. Threema, Wire, etc., nobody else has this (but then neither requires a phone number, so...). I also don't know of a better way to do phone number matching than having a trusted third party that bakes their private key into chips and verifies that you're really talking to the code you think you're talking to. The upsides of DRM technology!
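As background on why anything as heavy as SGX gets reached for at all: plain hashing of phone numbers is not enough, because the phone-number space is small enough to enumerate. A toy illustration, with a made-up number and function names of my own invention:

```python
import hashlib

# Naive "private" contact discovery: upload SHA-256 of each number.
# This offers little privacy: a server can reverse the hashes by
# brute-forcing the (small) space of plausible phone numbers.
def hash_number(number: str) -> str:
    return hashlib.sha256(number.encode()).hexdigest()

uploaded = hash_number("+15551234567")

# Server-side reversal: try every number in a known range.
def brute_force(target: str, prefix: str = "+1555123", digits: int = 4):
    for i in range(10 ** digits):
        candidate = f"{prefix}{i:0{digits}d}"
        if hash_number(candidate) == target:
            return candidate
    return None

brute_force(uploaded)  # recovers "+15551234567"
```

Scaled up, enumerating all ten billion ten-digit numbers is well within reach of commodity hardware, which is why hashing alone doesn't make contact uploads private.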

This makes me wonder if anyone has set up canary emails or phone numbers in their phone contacts.

"This makes me wonder if anyone has set up canary emails or phone numbers in their phone contacts."

We (rsync.net) have a handful of dummy/fake users in our database whose emails we monitor. The email addresses are cryptic and random and use a different domain, etc.

We should never see an email sent to one of these "canary" email addresses and, so far, we have not.

I am also aware that many of our customers sign up with service-specific email addresses, using the '+' character ... something like john+rsync@mydomain.com.

I personally have a rich and well developed pseudonym that I use for all online non-governmental transactions but in some rare cases I need to use my actual name and email - and in those cases I create '+' aliases.
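A canary-address scheme like the one described can be sketched roughly like this (the function names and domain are hypothetical): generate unguessable local-parts, seed them into the data you want to watch, and alert if mail ever arrives for one.

```python
import secrets

def make_canary(domain: str) -> str:
    """Generate a cryptic, random canary address (illustrative scheme)."""
    return f"{secrets.token_hex(8)}@{domain}"

def is_canary_hit(recipient: str, canaries: set) -> bool:
    """Any mail addressed to a canary means the address list leaked."""
    return recipient.lower() in canaries

# Seed a few canaries into the database being monitored.
canaries = {make_canary("canary-example.net") for _ in range(5)}
# In the mail pipeline: if is_canary_hit(to_addr, canaries): alert an operator.
```

Using a separate domain, as the parent comment notes, keeps the canaries from being filtered out by anyone sanitizing a single well-known domain.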

I've noticed a bunch of spammers starting to strip out anything after the + and before the @. This is why I've long used a catch-all e-mail domain (subdomain.example.net) where I can put anything I want to the left of the @ sign, and no one is any the wiser about my real e-mail address.

Is there some service where I can easily create unlimited custom email addresses for a flat monthly fee? I want to use a unique email for each new website/service. That would go a long way to solving some data leak/privacy problems. The problem with a custom domain is that I have to maintain it, right? I want a service which I don't have to maintain. I used to use new Yahoo accounts, but they are a hassle and recently they disabled free auto-forwarding.

You can do this with any email provider that supports a catchall. I personally use fastmail and have been very happy with it. You don't need to 'create' the accounts, you just set it up so that *@yourdomain.com goes to your catchall. When signing up for a new service, you pick a unique/random email. Then you know unambiguously where each email in your inbox came from.

I personally use the website as the email (example, if HN required an email it would be hn@mydomain.com) to make it easier to filter. But this can be gamed/guessed, to be more secure it is better to generate an actual random email for each site and store it in your password manager.
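The two naming schemes above (guessable per-service vs. random) can be sketched as follows; the domain and function names are purely illustrative:

```python
import secrets

DOMAIN = "mydomain.example"  # your catch-all domain (hypothetical)

def readable_alias(service: str) -> str:
    """Guessable but easy to filter, e.g. hn@mydomain.example for HN."""
    return f"{service}@{DOMAIN}"

def random_alias() -> str:
    """Unguessable; record the service -> alias mapping in a password manager."""
    return f"{secrets.token_hex(6)}@{DOMAIN}"
```

With a catch-all configured, neither alias has to be created in advance; both simply arrive in the same inbox, tagged by their local part.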

Mozilla has such a service: https://relay.firefox.com/

I also remember seeing one Show HN recently that offered similar functionality, but couldn't find it via search. The problem is that if the e-mail alias provider becomes popular enough, their subdomains are soon disqualified from being used when registering to sites.

Protonmail allows wildcard emails from 1 custom domain if you pay for the ~$5/mo plan. No maintaining a mail server, just point your MX records to their servers.

Catch-all support starts at €8 at ProtonMail.



I have been using this since 2002. You don't even have to set anything up - just make up addresses on the fly. It's pretty awesome.

I believe Fastmail supports that.

The robocall epidemic has pretty much made the notion of "canary phone numbers" useless.

What do you mean by "canary" in this context? How do you detect that the canary is dead?

I assume that the "canary being dead" ~= "an adversary added the contact to their watch list". But I don't think you can detect that.

The best you could do is to add a random physical address hoping that you can detect physical surveillance (which is probably not realistic anyway).

It is like signing up with an e-mail +suffix for services, or the non-existent streets on digital maps; if you come across your fake contact elsewhere, you know that information has been shared.

it is trivial to strip suffixes off of aliased email addresses

If you control your own email routing, by using your own mail server, Google Workspace, Microsoft 365, etc, you can choose whatever convention you want.

How would you know to strip everything after my first name?

I wouldn't care about the people using their own mail server

I would just strip everything after a + sign
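The stripping being described here really is a one-liner; a sketch of Gmail-style +suffix normalization (illustrative only):

```python
def normalize_plus(addr: str) -> str:
    """Strip a +suffix from the local part: john+rsync@x.com -> john@x.com."""
    local, _, domain = addr.partition("@")
    return f"{local.split('+', 1)[0]}@{domain}"

normalize_plus("john+rsync@mydomain.com")  # "john@mydomain.com"
normalize_plus("anything.here@sub.example.net")  # unchanged: no '+' present
```

Note the second case: on a catch-all domain with an arbitrary separator, there is no marker to strip on, which is the limitation of this approach.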

Can't strip it if it's not there. Everything after my name is the comment, and you have no way to know.

"it is trivial to strip suffixes off of aliased email addresses ..."

This actually is not a bad point to make ... it would, in fact, be simple to strip +aliases but ... economically I don't think it makes any sense.

You'd have to have a high-level decision maker dictating an engineering fix in order to increase email authenticity by ... 0.01%?

... and that assumes that the "engineers" down the chain understand how '+' works in email to begin with and have somehow communicated that back up to management.

My response here is that I think this discussion is naive, as the data brokers themselves already do it.

So who cares about what some engineer at a random new business thinks.

Aliasing isn't new. So this isn't a cat and mouse game that just got started.

What is the equivalent to that in the fake phone contacts domain? I guess removing people with the +21 country code would work for this particular approach, but otherwise...?

Good question, hmm. I think it's just a different strategy with phone contacts.

A data broker primarily wants the social graph, to build a user profile keyed to a phone number and show ads later on. Those people won't typically be texting or calling you with spam and ads; they'll just match the number and contacts up with information shared in other apps, so that ads in your normal internet browser or in ad-supported apps are more targeted.

So if an erroneous contact never logs in, that's of no consequence to them, and searching to exclude such numbers is less interesting and less likely than it is with sanitizing emails.


Detection would require a call/SMS/email. The idea would just be to detect whether your leaked data has been acted upon.

I create a new email for most services I use (I run my own email), but I hadn't thought of this! Thanks for the idea.
