Most Americans don’t realize what companies can predict from their data (theconversation.com)
165 points by pseudolus 36 days ago | 110 comments



I’m increasingly thinking that targeted ads are eerily similar to Isaac Asimov’s psychohistory[0]: you cannot reliably predict individual behavior, but with enough of the right data you can reliably predict how a large enough population will act.

This is why, individually, we often feel that they’re off the mark, or that we’re savvy enough to ignore the ads or the political or other targeting. But as others have pointed out, the data is out there, and ‘they’ have infinite tries to get it right. More importantly, it already works today. And because it impacts everyone, we as individuals also get impacted in indirect and subtle ways, e.g. when a friend of ours raves about a new toy she bought without ever realizing that she chose this product over another because of all the ads she never clicked.

[0] https://en.m.wikipedia.org/wiki/Psychohistory_(fictional)


Great analogy.


I’m not thrilled about the amount of information being hoovered up, but....the predictions being made with it aren’t terribly impressive.

Facebook should know a great deal about me, but the advertising categories it puts me in are either completely obvious (based on locations and group memberships that I’ve explicitly told it about), or bonkers. It currently thinks I’m part of multiple wildly incompatible religions and political groups and am interested in a weird collection of abstract concepts (“decay”?).

Amazon thinks that my interests are dominated by textbooks and vacuum cleaners (if only!)

Twitter has correctly sussed out that someone who mostly follows scientists might be interested in science....or dogs.


> the predictions being made with it aren’t terribly impressive

So what? They still have the data and can refine their methods tomorrow. Today their predictions might be low quality, but they can retry as many times as they want. The problem is not the predictions they are making today; it's the many predictions (or inferences) they are able to keep making in the future.

> political groups

Remember that some types of advertising are not targeted. Some political or brand advertising is intended to reach "all voters", or maybe a very broad category like "every Californian of voting age". Branding campaigns don't care if you're interested in e.g. vacuum cleaners. They just want you to think of their name first every time you happen to think of or hear about vacuum cleaners.

edit: (Multiple contradictory groups could be pushing ads at your (very general) demographic.)

> science....or dogs

Many scientists like dogs?

> Amazon thinks that my interests

No, they think that showing you textbooks and vacuum cleaners has a greater chance of increasing their revenue, according to various statistical models. Targeted advertising isn't about targeting what you are interested in. It's about letting other people target you with what they think they can sell you.

edit2: Of course, it could also be a terrible model trying to use data in stupid ways. I'm just suggesting that there are many plausible explanations.


Sure; it's obviously a mix of clever and stupid.

One of the more memorable political groups was the Sandinistas. I have never been to Nicaragua, so....limited relevance. I actually just looked (increasing my likelihood of being targeted again?), and was surprised to learn that they're still around; I had no idea they outlasted Iran-Contra.

As for Amazon, I agree that P(bought a vacuum) is a good signal for P(will eventually buy another vacuum). Still, if they have this detailed psychographic model of me, you'd think they could do slightly better than what is essentially a zeroth-order prediction.


> I have never been to Nicaragua, so....limited relevance

Ok, that's probably some sort of error.

> you'd think they could do slightly better

It depends on how they made their model. With machine learning, it's easy to build something that does improve whatever high-level metric they were trying to optimize (like "revenue"), but gives essentially random results for many individuals. This is especially true for anything that wasn't fully covered in the training data; a handwritten digit recognizer may work wonderfully on average while giving essentially random output when presented with anything that wasn't covered by the MNIST dataset. Sometimes this can be especially fragile[1]!

[1] https://medium.com/@ageitgey/machine-learning-is-fun-part-8-...
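To make that concrete, here is a minimal sketch of the failure mode (using scikit-learn's small built-in digits set rather than the real MNIST, purely for illustration): a classifier with excellent aggregate accuracy still hands out confident-looking labels for pure noise.

    # A model that looks great on its aggregate metric can still give
    # essentially arbitrary answers on inputs unlike its training data.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    print("in-distribution accuracy:", clf.score(X_test, y_test))   # high

    # Feed it pure noise: it still assigns a digit to every input,
    # often with high "confidence".
    noise = np.random.RandomState(0).uniform(0, 16, size=(5, X.shape[1]))
    print("labels for random noise:", clf.predict(noise))
    print("max probability on noise:", clf.predict_proba(noise).max(axis=1))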


I do modeling for an advertising recommendation engine (not at Amazon). We do controls against both human-made rulesets and random, and we consistently outperform both by a big margin. However, click through rate is always really low, and a model that massively increases revenue still serves irrelevant content most of the time, and still seems random from the POV of the user. The point of ML in advertising recommendation is to guess better in aggregate, not get it right all the time.
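A back-of-the-envelope illustration of that point, with entirely made-up numbers: doubling a sub-1% click-through rate is a large win in aggregate, yet more than 99% of what any individual sees is still irrelevant to them.

    # Hypothetical rates, just to show "better in aggregate" vs
    # "still looks random to any individual user".
    impressions = 100_000_000              # ads served per day (assumed)
    revenue_per_click = 0.50               # dollars per click (assumed)

    for name, ctr in [("random", 0.004), ("model", 0.008)]:
        clicks = impressions * ctr
        print(f"{name}: {clicks:,.0f} clicks, "
              f"${clicks * revenue_per_click:,.0f}/day, "
              f"{(1 - ctr):.1%} of impressions ignored")
    # The model doubles revenue, but ~99% of impressions are still
    # irrelevant to the person who sees them.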


What about human models plus a baseline likelihood of clicking on any ad?


When the machines can predict high click-through rates, humans will be obsolete.

What is even scarier is when ads don’t aim to generate click-throughs at all; rather, success will be measured by “subject A saw this propaganda” correlated with “subject A subsequently performed this action.”


Well, basically yes. Cambridge Analytica more or less measured how stupid you were and rationed out appropriately outlandish lies.

What I’m asking is whether we can establish your baseline likelihood of clicking any ad, and then use human-made models to determine placements. My hypothesis is that naive human models only lose out because they miss the low-hanging fruit of avoiding people who never click anything.


What if the reason click-through rates are so low is that marketing creates little demand, and there is only a relatively low rate of unexpressed demand?

In my case, when I want something I will go find it. I spent three months looking for reviews of small, efficient cars before deciding on a VW Polo TDI. The day I went in to buy one, the salesman was only interested in selling stock they had on the floor that day (it was some kind of stocktake clearance), and none of those were TDIs. He was extremely rude too. So I walked out of that dealership and bought a Mazda 2.

Then I got three months of ads on various sites advertising Mazda 2.

Never any ads for small, fuel efficient cars while that was what I was searching for on Google. The magic of targeted advertising completely failed to work.

For the next few years, any ads for small efficient cars will be absolutely useless because I have no need for another one.

And for some reason I am getting lots of ads for women’s clothing. I suspect this is based on assumptions about men in long-term relationships looking for a bit on the side, but once again those ads miss their mark: I have never bought women’s clothing and I do not visit sugar daddy dating sites, so I have no idea why they think I should be interested.

It’s not just “people who don’t click anything,” it’s “people who aren’t interested in what you are selling,” and “the thing you are selling is not an impulse buy.”

Nobody buys a second economy car as an impulse buy. A good deal on a 600mm Nikon F Mount might catch my interest though (but they’re all the same price as my car, so I don’t necessarily have that kind of money)


> I’m not thrilled about the amount of information being hoovered up, but....the predictions being made with it aren’t terribly impressive.

Conspiracy theory: what if they intentionally throw random garbage in there so you don't get paranoid? The things they want to hit you with will still be there, but they'll be surrounded by misses, and you're inclined to think "wow, they sent me an ad for a new dishwasher just as soon as mine broke, but they also sent me an ad for cat toys knowing full well I'm allergic to cats"? The dishwasher is still a great hit, but less suspiciously so.


> Conspiracy theory: what if they intentionally throw random garbage in there so you don't get paranoid?

That's not a paranoid conspiracy theory (and certainly not something that should be down-voted to oblivion). It's well documented that Target did that exact same thing to throw off its customers, so they wouldn't be weirded-out by its pregnancy-prediction algorithm.

https://www.forbes.com/sites/kashmirhill/2012/02/16/how-targ...:

> “Then we started mixing in all these ads for things we knew pregnant women would never buy, so the baby ads looked random. We’d put an ad for a lawn mower next to diapers. We’d put a coupon for wineglasses next to infant clothes. That way, it looked like all the products were chosen by chance.

> “And we found out that as long as a pregnant woman thinks she hasn’t been spied on, she’ll use the coupons. She just assumes that everyone else on her block got the same mailer for diapers and cribs. As long as we don’t spook her, it works.”


Predicting a pregnancy sounds so creepy and personal, but it actually seems like it should be one of the easiest life events for a retailer to predict.

The story plays up changes in consumption (scented->unscented lotion), but there are also whole categories of products that are used only by pregnant women (e.g., maternity clothes, pre/perinatal vitamins). Another huge swath of the store is devoted to infants (clothes, diapers, toys, formula). There are also really strong demographic priors (women only, 18-40 or so), though you could fine-tune that age bracket much more precisely with socioeconomic status (credit score?) or zip code.
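A toy Bayes update with invented numbers shows how far a single pregnancy-specific purchase moves the needle once you start from a demographic prior:

    # Toy numbers, purely illustrative.
    prior = 0.04                 # assumed share of shoppers in the target demographic who are pregnant
    p_buy_if_pregnant = 0.30     # assumed: 30% of pregnant shoppers buy prenatal vitamins
    p_buy_if_not = 0.005         # assumed: 0.5% of everyone else does (gifts, etc.)

    evidence = prior * p_buy_if_pregnant + (1 - prior) * p_buy_if_not
    posterior = prior * p_buy_if_pregnant / evidence
    print(f"P(pregnant | bought prenatal vitamins) ~= {posterior:.2f}")   # ~0.71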


> Conspiracy theory: what if they intentionally throw random garbage in there so you don't get paranoid?

No, it's more like random exploration in a multi-armed bandit algorithm. Randomness is used from time to time to detect categories they might otherwise have missed. If you don't take a risk, you don't win.

https://en.wikipedia.org/wiki/Multi-armed_bandit
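A minimal epsilon-greedy sketch of that idea: exploit the best-known ad category most of the time, but show something random occasionally so categories the model has never tried can still be discovered. (Category names and click rates are made up.)

    import random

    categories = ["textbooks", "vacuums", "dog toys", "cameras"]
    true_ctr = {"textbooks": 0.02, "vacuums": 0.01, "dog toys": 0.05, "cameras": 0.015}  # hidden from the algorithm
    shows = {c: 0 for c in categories}
    clicks = {c: 0 for c in categories}

    def choose(epsilon=0.1):
        if random.random() < epsilon or not any(shows.values()):
            return random.choice(categories)                                 # explore
        return max(categories, key=lambda c: clicks[c] / max(shows[c], 1))   # exploit

    for _ in range(10_000):
        c = choose()
        shows[c] += 1
        clicks[c] += random.random() < true_ctr[c]

    print({c: (shows[c], round(clicks[c] / max(shows[c], 1), 3)) for c in categories})
    # Most impressions end up on the best category, but the 10% random
    # exploration is why users still see the occasional head-scratcher.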


That's not a conspiracy theory; chaffing to obscure intent and capability is a real thing.


Then if your worst paranoid nightmare does come true, you'll be rounded up with whatever random group is getting sent to the camps.

On a more realistic note, you'll just identify yourself as technically savvy, and be targeted on that.


Amazon has emailed me multiple times to answer questions about an item I never bought (in the email they literally say "as the owner of" which I'm not). If they can't even get that right then I suspect they have a lot of data quality issues which would result in bad predictions.


Something you should keep in mind is that companies like Amazon are all well aware of the anecdote about Target and teenage pregnancies. And the lesson they learned from that story is they should conceal from the user the full extent of what they know about that user, to avoid creeping people out.

If you receive 10 product recommendations, perhaps only one of them is actually targeted with a very high degree of confidence and the other 9 are basically noise added to deliberately deceive you, to lead you to believe Amazon knows less about you than they really do.
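If that were the goal, the mechanism would be trivial; a hypothetical sketch (the function and product names are invented):

    import random

    def padded_recommendations(high_confidence_item, catalog, k=10):
        """Hide one well-targeted item among k-1 decoys so the slate looks unremarkable."""
        decoys = random.sample([p for p in catalog if p != high_confidence_item], k - 1)
        slate = decoys + [high_confidence_item]
        random.shuffle(slate)
        return slate

    catalog = ["dishwasher", "cat toys", "lawn mower", "wine glasses", "diapers",
               "tent", "desk lamp", "garden hose", "yoga mat", "kettle", "drill"]
    print(padded_recommendations("dishwasher", catalog))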


There’s a distinction between what these companies can predict about you vs what they can roll up into products that other entities want to pay for.

For example, a ton of advertisers will predetermine some targeting criteria based on simple demographics, brand loyalty or rewards program data, etc. and then commit to it for a whole ad campaign, even if it means leaving money on the table by not electing to dynamically shift into more precise targeting.

For these clients, no amount of fancy predictive capability or algorithmic targeting will matter, they just don’t care. So Facebook or whomever just offers them big, sloppy and easy-to-conceptualize segments or buckets of users. They just want aggregate intelligence anyway, so nobody in the transaction cares much if it’s wrong about your age bracket or general TV interests.

This doesn’t mean Facebook is unable to produce far more alarming forecasts of your behavior, or assign you to categories based on things like political activism, privacy conscientiousness, or detect personal life details like your location trail, purchases, etc.

The really scary stuff just doesn’t end up being surfaced in connection to lowest common denominator adtech products.


Low cost drives ordinary predictions. The average prediction is probably a few milliseconds of in-memory operations at best. No network calls. Certainly no I/Os to disk. Even with more work, you're targeted because you're among the best matches within your zip code. The best result does not imply a good result. Ordinarily, ads reflect what an advertiser specified, not what the platform thinks is most likely to be successfully sold to you.


What if the data is used to find whistleblowers? Or predict them? I'm sure there are plenty of corporations and governments that would be willing to pay for such services.

https://www.abc.net.au/triplej/programs/hack/how-team-of-pre...


Came here to say this, based on my own experience; an Amazon software engineer agreed.

You can do lots with data, but in the end, much of the time it's uselessly applied.


Totally agree. I wonder if they ever had a control group that they showed things at random, just to measure how good their tech is.


Dunno about Facebook/Amazon/Twitter, but Google uses a control group on every single experiment. They also usually have holdbacks (control groups over time, where a small user population does not receive a new feature, to measure performance as people get accustomed to it) and ablation experiments (where a previously-launched feature is turned off for a small population of users to verify the expected loss in performance).

Advertising has always been a business of extremely large numbers. The chance that a given user will purchase a product because of an ad is imperceptible; aggregated over hundreds of millions of ads shown, imperceptible changes add up to large increases in sales. People tend to forget just how bad ads were before Google & Facebook; it's not that the latter have solved advertising, it's that TV, print, and billboards were so terrible that it didn't take much to improve markedly over them.
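For what it's worth, that kind of control/holdback bookkeeping is commonly implemented as deterministic bucketing on a stable user id, so each person's assignment is sticky over time. A generic sketch (not any particular company's system; the experiment name and percentages are made up):

    import hashlib

    def bucket(user_id: str, experiment: str, buckets: int = 1000) -> int:
        # Hash experiment name + user id so assignment is deterministic and
        # independent across experiments.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % buckets

    def arm(user_id: str, experiment: str) -> str:
        b = bucket(user_id, experiment)
        if b < 10:       # 1% long-lived holdback: never receives the feature
            return "holdback"
        if b < 505:      # ~49.5% control
            return "control"
        return "treatment"

    print([arm(f"user{i}", "new_ranker") for i in range(8)])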


Possibly. I used to work for a smaller ad company, and one of the tests we did was random ranking of ads. The problem is that it can be a very expensive test to run, so you wouldn't run it on anything except a tiny fraction of traffic.


The thing that makes me increasingly concerned is the possibility of an entity using this data-surveillance NOT so they can sell us more crap, but for ulterior malicious purposes.

We already got a taste of what this can mean with Cambridge Analytica.

But what if some hate group (or other extremist org) with deep pockets decided to buy up and use such data in more sinister ways, targeting individuals or organizations at large scale, developing "Stasi style" dossiers to use as leverage for future actions?

The information would not need to be "perfect" but it could get increasingly more accurate over time depending on how much attention they focus on their targets.


Just imagine if it's some tech guy, maybe unemployed, tired of being broke, who just needs to pay rent. That's what's concerning: people don't even need to have a cause, they can just be desperate. The government will be tracking the people who have a cause. You can go to the government to get help against those people, because those people have something to lose. What about the people with nothing to lose? More and more people are out of work or underemployed, but I'm pretty sure the number of people who need to pay their bills remains constant.

All of a sudden all this data starts to make well-off people look more like meal tickets. Imagine how easy it would be to get money out of that rich-looking lawyer who's having an affair, or the well-off-looking doctor who voiced some views about blacks that her hospital, and the local NAACP, might find interesting.

This economy, combined with massive data retention and security breaches, will make for some real perverse incentives in the future. We could conceivably get to the point where all you'd need to be is some guy with internet access who needs to pay rent by the end of the month.


Right, and what if you can also fabricate the evidence used for the blackmail. Image generation techniques are improving year over year, and we've gotten to the point where generating a video of a person saying anything you want, in their voice, with their face, is coming closer to the realm of home computing power. Anyone with the time could focus on hacking social media accounts or even just faking a screenshot. The media doesn't need much inclination or evidence to start the slander.


Yeah, what I'm getting at more is the stuff that doesn't make it to the level of the media, but some guy could still use to make money. The media won't care about some random lawyer having an affair, but the lawyer's wife would. The media won't care about some random doctor who talked about how she could off the sub-human blacks if she wanted, but it'll be about 5 seconds before lawyers and, more importantly, medical review boards, start looking into statistics at her hospital.

You have to be prominent before the media cares, but there are plenty of people with money who are NOT prominent, and I suspect they'd make very tempting targets. In fact, I'd bet the people at that level would actually be more likely to pay up.


> wholinator2: Right, and what if you can also fabricate the evidence used for the blackmail.

I have a suspicion this threat and the OP's are quantitatively different at the present time. If unscrupulous companies correlate an effective network with PII, they could do a lot of mischief on a massive scale, and many people would be largely unaware. Sell your data, use your data as training data for some other bad action, etc.

Making fakes requires an audience of, possibly discerning, humans to find the fakes convincing and then take some action.


If you haven’t seen The Lives of Others[0], you probably should. It speaks pretty directly to what the parent is proposing.

[0] https://en.m.wikipedia.org/wiki/The_Lives_of_Others


>The thing that makes me increasingly concerned is the possibility of an entity using this data-surveillance NOT so they can sell us more crap, but for ulterior malicious purposes.

Like causing a measles outbreak?

https://www.oregonlive.com/clark-county/2019/02/measles-outb...


> General interest data | .... political leanings, magazine and catalog subscriptions .... preferred {celebrities,movie genres,music genres} ... {Bible,New Age/organic} lifestyle ...

If they wanted to get people's attention, this list should have included "preferred pornography" and maybe even "the other type of pornography that is viewed when your spouse's cell phone and your cell phone are in different zip codes."


I wrote about this relatively recently: basically, there’s now enough data and enough good systems out there that companies can start predicting what you’ll do next.

This has been a thing since credit scores. However, now it’s to the point where they can even mimic your voice and predict how you’ll respond to situations.

We are walking dangerously and blindly into a nightmare right now, and no one seems to realize it.


Seems like it. I tend to watch a specific type of video at a specific time of day on YouTube. Even though I watch tons of other videos on the same account, YouTube can make a pretty good prediction at that time of day and present me with the videos I'm going to watch. I find it so useful!


I've mentioned it before: watch any recent video of Yuval Noah Harari and he almost always talks about the concept of 'the hackable human'. Forget about what data they have; soon they will know more about you than you do. It has far-reaching effects, but let's hope humans find a way to keep outsmarting technology.


If data collection can be sifted to discover behavior of groups...

And groups of people can have that behavior correlated to cultural and ethnic and racial factors...

Then, technically, isn't all of Silicon Valley violating the Civil Rights Acts of the 1960s? For example, let's say black males statistically swipe phones a certain length and certain time... doesn't this mean ads targeting them can be engaging in disparate impact?


Maybe. If such a swipe gesture discrepancy existed, ML could pick up on it as a proxy for race, despite having no concept of race and despite no human directing it to do so. One example I've heard of is lending software learning to use zip codes as a proxy for race, then systematically denying loans to minorities.
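A small synthetic illustration of that proxy effect (all data invented): the protected attribute is never given to the model, yet its decisions end up skewed because zip code carries much of the same information.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000
    group = rng.integers(0, 2, n)                   # protected attribute; never a feature
    zipcode = (group + (rng.random(n) < 0.2)) % 2   # proxy: ~80% correlated with group
    income = rng.normal(50 + 10 * (1 - group), 15, n)

    # Historical approvals were biased against group 1 beyond what income explains.
    approved = (income + 20 * (1 - group) + rng.normal(0, 10, n)) > 55

    X = np.column_stack([income, zipcode])          # features: income and zip code only
    model = LogisticRegression().fit(X, approved)

    pred = model.predict(X)
    for g in (0, 1):
        print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")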

Much more egregious than this, though, is that for years Facebook was apparently allowing realtors to target only certain ethnicities. This was a case of deliberate, human-driven discrimination, and as far as I know nobody has been held accountable for it. So far the tech industry has proven itself pretty good at getting around the law.


> Then, technically, isn't all of Silicon Valley violating the Civil Rights Acts of the 1960s? For example, let's say black males statistically swipe phones a certain length and certain time... doesn't this mean ads targeting them can be engaging in disparate impact?

Yes, and lawyers have had, and will continue to have, a field day in every instance that they can prove this is true.


Funny how the brain works.

People do realize something is happening. It’s why so many think Facebook is “listening” to their conversations and then showing ads for “products I’ve never searched for but then talked about with a friend”. No, FB inferred you would buy it because your friend just did.

It’s hard to comprehend the effects of data collection. Which, of course, makes it even more powerful.


Just yesterday I read a post on Reddit where the OP was wondering about an online ad being mere coincidence or some deep data-collection plot.

Per the story: they had purchased ice cream at the grocery store using a credit card "never used for online purchases", and then at home they saw an online ad for that very brand/flavor of ice cream. This raised alarm bells, hence the post to sanity-check.

Sometimes it's like we shouldn't fear the Terminator but the access terminal in our pocket. Other times it seems like both, or neither.


That isn't an "access terminal" in your pocket, but a fully-fledged computer and sensor platform operated by hostile actors.

The ice cream is straightforwardly due to the full purchase data being backhauled to the surveillance companies. The real wtf here is assuming that a card not being "used on the Internet" affects anything. People want to cling to this really weird "if I don't see it, it can't be happening" model, as opposed to realizing that the surveillance industry is based around operating without your involvement or consent.


How would FB know what my friends have purchased? I've experienced the "ad that's way too specific to a conversation held just moments ago to be mere coincidence" phenomenon several times. Each incident has involved the kind of product that neither of us owned, that none of our other friends would ever buy, and that was dull/boring enough that no one would have a reason to post or chat about it on Facebook.


Why were you talking about this terrible-sounding product?

You're part of a cohort that the brand thinks would buy it. Or at least talk about it.


Is this the only reason the current house of cards stays up and functions? I think so.

Almost no one realises what companies predict and infer from their data, or the extent of its collection. Once they see some of the surface effects they start calling it creepy or scary.

If people ever start realising the true extent, expect a backlash, surely?


> If people ever start realising the true extent, expect a backlash, surely?

No. Since it has been going on for so long, people, when they realize it now, will rationalize:

• To not have realized it for this long, I must have been stupid.

• I am not stupid.

• Therefore, I must be OK with what is happening.

And they will come up with numerous ridiculous rationalizations – fake reasons to be OK with the current situation, all to avoid admitting to themselves that they did not realize it.


Yes it is. And I always mention it on this board and others whenever people start concluding that "people just don't care about privacy."

It's not that they don't care, they just don't understand the true implications of someone like Google or Facebook having tracking pixels on websites all over the web, or tracking wherever you go, and the thousand ways in which that data could be misused by them, by their partners, or by people stealing that data from those companies.

I've noticed from other older stories that even pro-surveillance politicians don't understand what they are pushing for, as some of them were later "shocked" to discover that those very powers could also be used to gather information on them. And then they started singing a different tune about the surveillance powers spy agencies should be given.


> It's not that they don't care, they just don't understand the true implications

Just to give you a data point, since you drew that conclusion: I'm fully aware of the tons of mechanisms websites use to track you and all of their implications. I am not concerned about it at all.


> If people ever start realising the true extent, expect a backlash, surely?

I'm not sure. We seem to be in a sharing culture where we want to broadcast on the internet where we are, who we're with, what we're eating, what we think about current events, etc.

And it seems to be a source of pride to publicly identify with political groups and social movements.

So, I feel like people freely share much of this info.


Everything about every person, living or dead, is known in near real-time.

Seisint (bought by LexisNexis) was being used to solve cold cases in the mid-2000s, just by using fragmentary data and sifting through millions of demographic profiles to see who matched.

[FWIW, that "What data brokers know" table is pretty good.]

--

If we choose to protect people's privacy and give individuals control over what is publicly known about them, we'll need to encrypt demographic data at rest.

Meaning translucent database strategies, just like how password files are salted and then hashed: you need your own password (or key) to get at your own record.

Meaning using universal identifiers. Like implementing Real ID.

It is counterintuitive that identifying (cataloging) everyone is how we protect everyone. But if there's another way, I haven't heard of it.
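A much-simplified sketch of the translucent-database idea (a real design would use a proper KDF, per-record salts, and encryption of the record body too): the key under which a record is filed is a salted hash of something only the person presents, so the table cannot be browsed by name.

    import hashlib, os

    SALT = os.urandom(16)   # system-wide salt; per-record salts would be better

    def record_key(national_id: str, passphrase: str) -> str:
        material = SALT + national_id.encode() + passphrase.encode()
        return hashlib.sha256(material).hexdigest()

    store = {}

    # Filing a record requires the person's id and passphrase...
    store[record_key("ID-1234567", "correct horse battery staple")] = {"blood_type": "O-"}

    # ...and so does retrieval; there is no index of names to scan.
    print(store.get(record_key("ID-1234567", "correct horse battery staple")))
    print(store.get(record_key("ID-1234567", "wrong guess")))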

--

There will be some upsides to finally having one master identifier.

Data quality will dramatically improve.

Truly portable health records.

Nearly 100% accurate voter registration (& eligibility).

The government "census" will be just running a report.

We'll daylight all the bad data broking actors.


Setting aside the ethical/political implications of a universal identifier for everyone, your description makes me wonder about the technical implementation of the ID itself.

My first thought was whether it's possible to design the syntax of the IDs, so that they're not just sequential or random, but have some inherent properties that make them easier to organize, i.e., for sorting/categorizing. Kind of like Open Location Code [0], but for people. Since they should be immutable (same ID for lifetime), I suppose it could encode birth date/location, or maybe genetic "markers".. (Edit: On the other hand, that would by itself be a leak of private data..)

Once that's globally practiced (easier said than done!), there could be a searchable database of all registered individuals on the planet. I could see the practical advantages of having such a system, but it sure does have a hint of dystopian future.

[0] https://en.wikipedia.org/wiki/Open_Location_Code
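As a thought experiment only (and with the caveat noted above that any such structure leaks personal data by construction), an identifier with embedded, sortable fields might look like:

    # Thought experiment: embed coarse, sortable fields (birth year, region)
    # plus a trivial checksum. Not a real scheme.
    def make_id(birth_year: int, region: str, serial: int) -> str:
        body = f"{birth_year:04d}-{region.upper()}-{serial:07d}"
        check = sum(ord(c) for c in body) % 97
        return f"{body}-{check:02d}"

    print(make_id(1987, "NLD", 123456))   # e.g. "1987-NLD-0123456-<check>"
    print(make_id(2003, "BRA", 42))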


Layer in the fact that on Facebook (and other ad platforms) advertisers can import lists of people by name/contact info to target specifically, using data from brokers, and it gets creepier. I'm less bothered by an algorithm that tracks me and serves ads as long as I'm aggregated in the result tracking, but when they're tracking/analyzing me as a specific person it's super creepy.

Edit: If you check out your Facebook > Ad Preferences you can see which companies have added you specifically to their ads by email/phone number. My list has a ton of car dealerships and real estate brokers.


I think the primary problem is glossed over a bit in the article. Even though most people are unaware of the kinds of information gathered about them, 36% are "Somewhat comfortable" or "Very comfortable" with the kinds of profiles being built. This seems (to me) a depressingly high number, one that indicates that there will be no grassroots rebellion against the surveillance economy.


> For example, data about a mobile phone’s past location and movement patterns can be used to predict where a person lives, who their employer is, where they attend religious services and the age range of their children based on where they drop them off for school.

Is it me? I don't consider any of these to be particularly sensitive information.

Where you live: We used to have these things called "phone books" where they listed the name, address, and phone number of everyone in town. The world didn't collapse.

Who their employer is: My name and picture are listed on my employer's public website. Not exactly hard to find.

Where they attend religious services: I don't, but of the people I know who do, nobody has ever considered it something they need to hide. Many would want to tell you and ask you to join them.

Age range of their children: So? If somebody knows you have a kid aged 5-10, then they can...what?

I mean, I still try to limit how much information I expose online, but if anything this makes me less worried rather than more.


That's a rehash of a number of silly 'nothing to hide' pseudo arguments.

Phone books were not instantly searchable in bulk all across the globe. Who your employer is, is not important on its own, but knowing who your employer employs, and being able to access that information in bulk and within a couple of milliseconds, gives a lot of power to outsiders. I should know, because I use that power regularly for my work, and trust me, lots of people we read up on would do better to keep a much lower profile online. It does not benefit their employers either.

Whether you attend religious services or not has been used to target (and kill) people in the past, and if you are willing to extrapolate a bit, was used for mass murder.

The age range of your children may not be so important, the fact that you have children may be, depending on your station in life.

The fact that you personally have not been inconvenienced by any of this - yet - is not a datapoint worth recording.


> Whether you attend religious services or not has been used to target (and kill) people in the past, and if you are willing to extrapolate a bit, was used for mass murder.

Right, it was, long before the ability to aggregate this information en masse existed. The fundamental problem there is that there are people who want to kill other people because of their religion. Taking away information from them doesn't solve the problem, it just delays it a little. It's a form of security by obscurity.

If anything, they can still go and target the actual place of worship itself, which is not hidden. If all they care about is killing people of a certain religion, they don't need to know who they are particularly, just go to the place of worship for that religion.

The larger point here is that if we can't trust the authority figures, the government, the corporations, etc., it doesn't matter how well we hide. The problem of a fascist government can't be solved by hiding. It has to be solved by enacting a government that's not fascist.

Edit: This is not a "nothing to hide" argument. The information listed is not the sort that is private information. Your address is not a special secret that only you can know. There are numerous legitimate reasons that the government or a corporation might need to know your address. The government needs to collect taxes and keep property records. Companies need to deliver products that you order. All of that is going to require sharing your address.


That went well for all the Dutch Jews in WW2, where the government also held detailed records about religious affiliation. Sometimes it is not about what is now, but about what will or might be later.


Exactly. People forget the role of IT ("data processing") at all of our peril.

https://en.wikipedia.org/wiki/IBM_during_World_War_II


Using this[1] map created from the Dutch census data (which included religious affiliation), the Nazis were able to very efficiently send about 3/4 of the black dots on that map to be murdered in the camps.

Collecting that census data probably seemed reasonable at the time. The new use case that needed a new way of aggregating and presenting the data happened later, almost as an emergent property. The only way to prevent problems that haven't been invented yet is to minimize the attack surface: if the records don't exist, they can't be misused.

[1] https://www.verzetsmuseum.org/uploads/archive/museum/topstuk...


> Right, it was, long before the ability to aggregate this information en masse existed.

No, it was not, please educate yourself:

https://jacquesmattheij.com/if-you-have-nothing-to-hide/

> The fundamental problem there is that there are people who want to kill other people because of their religion.

Yes, that is the fundamental problem. And technology makes that much easier to achieve. Of course the religious idiots in Algeria murder people. As do the religious idiots in many other places. But they tend to be a lot less effective than what can be achieved when one harnesses the engines of progress: industrialization and automation.

> Taking away information from them doesn't solve the problem, it just delays it a little. It's a form of security by obscurity.

That is dangerously naive. No, it does not 'just delay it a little', it is the difference between mass genocide and a slow trickle.

> If anything, they can still go and target the actual place of worship itself, which is not hidden.

They can and they do, but this is not about any one adversary, it is about all adversaries, and that includes the ones who would have zero compunction about going after places of worship and everybody who visits them. Now, given a bit of tech and a place of worship, you can target all those who ever went there long after the fact, even if they decide to stay away from the place of worship because they are on to the would-be murderers.

> The larger point here, is that if we can't trust the authority figures, the government, the corporations, etc. It doesn't matter how well we hide.

If not for the ability of people to hide other people, the number of Jews today would be substantially smaller than it is. Really, again, go and educate yourself.

> The problem of a fascist government can't be solved by hiding.

No, but individuals can stay alive by hiding.

> It has to be solved by enacting a government that's not fascist.

Unfortunately, Fascist governments will happen every now and then; saying that they shouldn't is denying the fact that they will. Several countries are on the path to Fascism today; whether they will get there or not is anybody's guess. But if they do, I guarantee you that all this tech will make the next pogrom a lot more effective than the previous one.


> Unfortunately, Fascist governments will happen every now and then; saying that they shouldn't is denying the fact that they will. Several countries are on the path to Fascism today; whether they will get there or not is anybody's guess. But if they do, I guarantee you that all this tech will make the next pogrom a lot more effective than the previous one.

And, unfortunately, information such as what religion you are can and will be recorded. Even if it's just someone standing outside a place of worship and watching who goes in and out or the records of the church/synagogue/mosque etc itself. Saying that it shouldn't is just diverting resources into a futile attempt to stamp out information that cannot be contained.

Fascist governments don't just "happen"; they are not a force of nature like a hurricane or an earthquake. They are the result of conscious decisions by human beings, which are within our power to control.


> Saying that it shouldn't is just diverting resources into a futile attempt to stamp out information that cannot be contained.

You can't stamp out information and you can't contain it. But you can create awareness of the effect of information in the aggregate and that is what this article is doing. You are taking the defeatist position that there is nothing we can do about this so we should all just fall in line and give up our data. I disagree.

> Fascist governments don't just "happen", they are not a force of nature like a hurricane or an earthquake. They are a result of conscious decisions by human beings, which are within our power to control.

No, they are not within 'our' power to control. There were many 'good' Germans in WW II. And yet, their government happened and there wasn't a thing they could do about it.


Indeed. There is the concept of the Good Nazi: just a regular woman or man living in Germany, who goes on with their life, doing nothing to stop the events leading up to and including genocide.

In China today you have to cheer on the party, or you will get docked social credits. Contrast that with the strongman leading the US now, who is a demented serial failure. Things would be much muddier if he had the ability to undertake the slow and steady march towards persuasive authoritarianism, like the current Chinese leadership.


The most frequently heard words in the aftermath of WW II in .de were 'Wir haben es nicht gewusst' ('We did not know about it').

https://de.wikipedia.org/wiki/Zeitgen%C3%B6ssische_Kenntnis_...


In the end, something was done about it, was it not? Germany is not currently fascist, so clearly it is within the power of humans to reverse it.

Yes, it wasn't easy, but hopefully we have learned something from the last time that we can use to make it easier the next time?

If it is not within our power to control...then how was it controlled? Did the Nazis just lose interest and give up?


They were defeated, tried, and hanged in Nuremberg, by judges from all over. One of the most important things we learned was that following orders is not an excuse.


Or given cozy jobs in the US rocket program.


"Germany is not currently fascist, so clearly it is within the power of humans to reverse it."

Germany as a whole is not ruled by fascists, but there are a significant number of neo-nazis, fascists, fascist sympathizers, bigots and racists in it. They've even done surprisingly well in some elections and control parts of the government. It's not inconceivable that some day there may be a Fourth Reich.


It is almost as though there is a race on to see who will go into history as the next country to be embarrassed for decades. I'm surprised to see Germany in that list to begin with and it is a pity that there are still plenty of people who refuse to learn from history.


> Yes, it wasn't easy, but hopefully we have learned something from the last time that we can use to make it easier the next time?

You didn't notice the irony that you are the one who insists on not learning from it, did you?

> If it is not within our power to control...then how was it controlled? Did the Nazis just lose interest and give up?

You are seriously telling us that you cannot distinguish 6 million jews killed from no jews killed?

Or is "controlled" a word that you use to describe anything that at least one person survived in the end?


What lesson do you think should be learned? That when freedom is threatened, we should cower, run away and hide? Because that sure isn't what stopped fascism the last time. Bullies need to be fought, not capitulated to.


> That when freedom is threatened, we should cower, run away and hide?

How about that when freedom is threatened, we should not be telling the enemy all our secrets?

> Because that sure isn't what stopped fascism the last time.

Except that is exactly what is stopping fascism all the time. What you are doing is looking only at the cases where fascism happened, and then concluding that--surprise--fascism wasn't prevented. That's exactly the same logic as saying that "vaccines haven't stopped the disease last time", pointing to the exceptional cases where vaccination didn't lead to immunity, and using that as an argument against the effectiveness of vaccination.

> Bullies need to be fought, not capitulated to

So, which bullies are you fighting right now?


> How about that when freedom is threatened, we should not be telling the enemy all our secrets?

Who is talking about secrets? What I have been saying all along is that none of this information is particularly secret or sensitive. Your address is not a secret. Your employer or the fact that you have children is not a secret. Your religion is not a secret. Even Jews, who have more cause to be concerned than anyone, do not go to great lengths to hide their religion, nor should they. If anything, many outwardly wear markers of their religion in public, and why should they live in fear and hide as you would have them do? Instead of telling them to hide themselves, how about we stop people who would attack them for their religion?

As you go about interacting with people in the public sphere, you will necessarily expose certain pieces of information about yourself. That is unavoidable. You want to live in a place where the government collects taxes and pays for roads and police and such? Well, then the government is going to need to know your address. You want to have public school for your kids? Well guess what, there's going to have to be a record somewhere that you have kids. You want to go to church in an easily accessible building in a public space and not in a secret underground bunker? Well, people are going to be able to see that you go to that church. Deal with it. Unless you want to live alone in the forest as a hermit, a certain amount of information-sharing is necessary to participate in society.

The fact that that information can be collected and aggregated is also unavoidable. What happens next, the actions that particular people or groups take with that information, THAT is what is avoidable, and where our efforts should be concentrated. The fact that there is a database of people's addresses is not a problem. If there is a goon squad rounding up people at those address, the goon squad is the problem.


> Who is talking about secrets?

We both are.

> What I have been saying all along is that none of this information is particularly secret or sensitive.

What is the point of forcing this binary distinction that doesn't exist in reality? There is no binary distinction between "secret" and "public" in reality. There is only information, and effort required to obtain it. "Secret" is the label we commonly use for the end of the spectrum where it's close to impossible to obtain information, "public" was traditionally something along the lines of "can be obtained with medium to no effort".

Just because some information isn't impossible to obtain, doesn't mean that the distinction between "can be obtained with medium effort on a single person" and "can be trivially obtained on everyone" is of no consequence to how information can be used and abused.

> Your address is not a secret. Your employer or the fact that you have children is not a secret. Your religion is not a secret.

Except they all are, to a degree. That is, they are all not necessarily known to you exclusively, but that doesn't mean that there is no difference between everyone knowing their neighbour's religion and a company having a database of the religion of every single person on the planet. This binary distinction that you are insisting on is what is preventing you from seeing the actual structure of the world, and the actual consequences of aggregation of information in the wrong hands.

> Even Jews, who have more cause to be concerned than anyone, do not go to great lengths to hide their religion, nor should they. If anything, many outwardly wear markers of their religion in public, and why should they live in fear and hide as you would have them do?

I would not have them do anything; that's just a problem with your authoritarian attitude. You are only seeing the options of forcing people to do A or forcing people to do B. But the non-authoritarian solution to this problem is to give people the individual choice between A and B. No one wants to force anyone to hide their religion. But everyone should have the option to do so if they wish, and not be forced to reveal it because that is what you want.

> As you go about interacting with people in the public sphere, you will necessarily expose certain pieces of information about yourself. That is unavoidable. You want to live in a place where the government collects taxes and pays for roads and police and such? Well, then the government is going to need to know your address. You want to have public school for your kids? Well guess what, there's going to have to be a record somewhere that you have kids.

So? How is that relevant to the unnecessary collection of information?

> You want to go to church in an easily accessible building in a public space and not in a secret underground bunker? Well, people are going to be able to see that you go to that church.

See what I wrote above about that useless binary distinction. There is no logical argument connecting "I want to go to church in an easily accessible building in a public space" to "therefore, we must accept that someone is going to archive high-resolution footage and use it to build a detailed profile of me and everyone else visiting that church" or whatever other abuses would be possible with that information that you insist to categorize as "public".

> Unless you want to live alone in the forest as a hermit, a certain amount of information-sharing is necessary to participate in society.

OK. Again: How is that relevant to the unnecessary data collection that happens in addition to that?

> The fact that that information can be collected and aggregated is also unavoidable.

... because? I mean, it sounds like you are saying that it is in principle impossible to ever prevent people from collecting information? As far as I can tell this is a completely unsubstantiated claim, one that is also obviously contradicted by reality, where we prevent people from collecting information just fine all the time.

> What happens next, the actions that particular people or groups take with that information, THAT is what is avoidable, and where our efforts should be concentrated.

... because? You are just making those claims, but what is your justification for that?

> The fact that there is a database of people's addresses is not a problem. If there is a goon squad rounding up people at those address, the goon squad is the problem.

Yeah, I got that that is your claim. But just making unsubstantiated claims is not convincing.

I assume that you would agree that a goon squad that wants to round people up but doesn't know where to find them would not be particularly successful with their goal, right? And a goon squad that does know where to find the people and thus does indeed round them up ... would have worse real-world consequences, wouldn't it? And the difference between the two is that one knows the addresses, the other doesn't, right? So, how do you justify then that the difference that decides whether people end up rounded up or not is not a problem?

Mind you, I agree that the existence of a goon squad that wants to round up people is a problem. I am only addressing your claim that the availability of data is not a problem, even though it obviously makes a difference in the consequences.


> I would not have them do anything; that's just a problem with your authoritarian attitude. You are only seeing the options of forcing people to do A or forcing people to do B. But the non-authoritarian solution to this problem is to give people the individual choice between A and B. No one wants to force anyone to hide their religion. But everyone should have the option to do so if they wish, and not be forced to reveal it because that is what you want.

> See what I wrote above about that useless binary distinction. There is no logical argument connecting "I want to go to church in an easily accessible building in a public space" to "therefore, we must accept that someone is going to archive high-resolution footage and use it to build a detailed profile of me and everyone else visiting that church" or whatever other abuses would be possible with that information that you insist to categorize as "public".

You seem very eager to ascribe powers to me that I do not possess. I am not forcing anyone to do anything. My will is completely irrelevant. I'm saying something like "It's cold outside. If you go out there you will be cold". I'm not forcing people to be cold. I am telling them that it is cold. Do you see that?

If you would like to have a completely private religion that exists only in your head that you never tell anyone about, go ahead. Knock yourself out. I don't see how I could stop that even if I wanted to, which I don't. It makes no difference to me.

Acting in the public sphere reveals information about you. That is a fact. It is not a matter of a creepy guy with a thin mustache sitting in a van outside your church with a high resolution camera. That is not even necessary. Plenty of legitimate interactions will expose your information.

Let's say you want to subscribe to your church's newsletter. A perfectly normal thing that anyone would want to do. Well, now you, your church, your church's email provider, your email provider, your church's ISP, your ISP, all have access to the information that you go to that church. If they use some third party software like Mailchimp, then Mailchimp has that information as well. If they outsource the management of that newsletter to their web design firm, then they also have that information. If they want to fund it by having the local auto repair shop or funeral home place an ad in the newsletter, then maybe they have that information as well. They did not illegally collect it. They did not unnecessarily collect it. It was absolutely necessary for them to have that information for them to be able to do the thing that you asked them to do. But now the information exists.

This applies to most information that's collected. If I want Google to give me directions somewhere, they necessarily need to know where I am and where I'm going. If I want to be able to make calls from my cell phone, that cell phone needs to ping a tower that is in a place that gives information about where I am at that time. If I visit a website, the website is going to have access to the fact that a certain IP address with a certain browser configuration visited it.

Yes, some nefarious actor like a fascist government could force them to hand over that information, or a fascist terrorist group could hack into their database and steal it and commit acts of terror, but if you want the information to not exist at all, then things like your church newsletter are also not going to be able to exist. Most people want to be able to do things like receive a newsletter from their church, so making that not exist isn't really an option. Hence we need to focus more on controlling our government by making sure that fascists do not come to power, and prosecuting people who would harm others moreso than we need to worry about the fact that information exists in a database somewhere because we put it there to serve some purpose that we wanted.


Stop with the strawmen, please. It is getting silly.


A couple of million people and their descendants would like a word with you.

It's all fun and word games until it's your relatives that end up in a gas chamber somewhere. So yes, sure, 'something' was done about it. But that something did not magically resurrect the people that didn't make it.

And as for having learned something from the last time, it does not look like we have learned all that much.

> If it is not within our power to control...then how was it controlled? Did the Nazis just lose interest and give up?

I take it these are rhetorical questions that need no answer.


>following someone around takes time. If you photograph them you have to develop film (and you won't know if the photos turned out until then). And while it may have been legal, social norms said don't stalk people

It also required a minimum ratio of one person to one target to capture such information. The barriers to justifying round-the-clock surveillance of someone were far higher when such activity entailed paying for round-the-clock manpower. An individual would need to have made very wealthy or very determined enemies in order to demand such close attention but technology has lowered these barriers such that literally everyone with a smartphone is effectively under round-the-clock surveillance.


> That's a rehash of a number of silly 'nothing to hide' pseudo arguments.

We shouldn't be so dismissive of the "so what" attitude that is pervasive in this never-ending stream of data collection articles. Many are using their opinion of "great harm" to support their cries for oversight; I think it's only fair for others to use their opinion of "no harm" to counter it. The GP didn't say they had nothing to hide; they said what was collected was harmless, and that opinion, especially given the tradeoff in services received, is widely held and it's OK.


>Phone books were not instantly searchable in bulk all across the globe.

I think this is something that's overlooked. Think back to the 80s. A lot of things that are legal but would be creepy are very time intensive - following someone around takes time. If you photograph them you have to develop film (and you won't know if the photos turned out until then). And while it may have been legal, social norms said don't stalk people - don't expect a social life if you follow people around photographing them.

Now what would have been a week long horror montage is accomplished with a single web search and perusal of their Instagram.


This isn't a 'nothing to hide' argument, because there is a fair trade of information, not compulsory extraction. Also, the information is not publicly available.

And the counter-argument is a FUD theory that all information should be banned because it can potentially be misused.


> This isn't a 'nothing to hide' argument, because there is a fair trade of information

You can have the one with or without the other.

> Also the information is not publicly available

Until the next hack or subpoena.

> And the counter-argument is a FUD theory that all information should be banned because it can potentially be misused

That is nonsensical and a strawman argument to boot.


> You can have the one with or without the other.

Then that is not a fair trade.

> Until the next hack or subpoena.

Here's the FUD again.

> That is nonsensical and a strawman argument to boot.

If something doesn't make sense I can explain. Where is the strawman?


> here, the FUD again

It's only FUD if it does not happen.

https://www.cnbc.com/2018/10/16/facebook-hack-affected-3-mil...

And tens of thousands of other examples besides that one.

> If something doesn't make sense I can explain. Where is the strawman?

Nobody said 'all information has to be banned'. You made that up.


I believe we disagree on whether the state should serve the people or the other way around.


How exactly is Facebook spying on me on every damn website without ever even telling me about it, let alone asking me, and with giving me nothing in return, a fair trade?

How exactly is Facebook pressuring people into using their services by establishing a quasi-monopoly on the social graph a fair trade?


If you were getting nothing in return you wouldn't be using it. I believe that Facebook should be transparent about the kinds of inferences they make about you, but they should also be able to deny you service if you don't agree.

Whether Facebook is a monopoly is a different matter, orthogonal to the fact that they offer you a valuable service in return for advertising, which is, imho, a fair trade, as long as both sides are clear about the consequences.


> If you were getting nothing in return you wouldn't be using it.

I am not using it, obviously. So, how is that a fair trade?

> Whether facebook is a monopoly is a different matter, which is orthogonal to the fact

No, it is not orthogonal because a monopoly inhibits price discovery, so your argument is essentially: Because you are willing to pay the price that the monopoly demands, it's a fair trade.


Putting all this info together used to be much more labor intensive, basically infeasible to do at scale. The scale problem is solved, and now companies can mine that data for profit.

Maybe it's not that scary in a Western society that's generally considered free, but imagine that power in a fascist state.


In general, when you're in the majority, this information doesn't matter much, since you're washed out in the crowd. When you're in a minority, however, it can be used for nefarious purposes by the right group. For example, the KKK might like to know all the black people who live in predominantly white neighborhoods so they can set a few crosses on fire. Or maybe you live in a fundamentalist Christian town and would prefer your neighbors not know you're not actually Christian, since they'd harass you (and the cops would help).


Is the KKK going to search through a marketing database to find out where black people live, or are they going to just drive around the neighborhood and see which houses black people go into?

The information flows both ways. We could also use location data to see who regularly attends KKK meetings and who was at the house at the time a cross was burned in the yard and then arrest them, which is ultimately going to be more effective in stopping things like that from happening.


>Is the KKK going to search through a marketing database to find out where black people live, or are they going to just drive around the neighborhood and see which houses black people go into?

The intelligent ones will figure out the fastest way to get what they want, or maybe some foreign government interested in influencing an election will send them a list of targets.

>The information flows both ways. We could also use location data to see who regularly attends KKK meetings and who was at the house at the time a cross was burned in the yard and then arrest them, which is ultimately going to be more effective in stopping things like that from happening.

Attending a KKK meeting is not illegal, and turning off your cell phone during the crime is easy.

Furthermore, the police are not angels but merely people. If you make it easy for the police to get this information, they will abuse it against those they don't like - like the FBI threatening Martin Luther King.


> The information flows both ways.

But the willingness to commit crimes and to hurt people does not. If one side is willing to be violent towards the other, but the other is not, then the symmetry in information flow does not result in a power balance.


"Is the KKK going to search through a marketing database to find out where black people live, or are they going to just drive around the neighborhood and see which houses black people go into?"

You're assuming that every "black" person can be identified by what they look like. To many racists, however, ancestry matters a lot: if you happened to have too many (or maybe any) black ancestors, they would consider you black (or at least "non-white"), no matter what you look like.

Take a look at the elaborate Nuremberg Laws[1], which decreed who was or was not legally considered a Jew. This was done based on ancestry as well as religious affiliation. It was done with 1930s and '40s paper records and technology, but it could be done much more efficiently with the computers and digital records of today.

Incidentally, the Nazis' views on race were greatly influenced by the eugenics movement in America and by groups like the KKK.

[1] - https://en.wikipedia.org/wiki/Nuremberg_Laws


This is a narrow, reductionist view that misses the scope and scale of the issue by making it personal when it is not personal.

Nobody is interested in a random individual, so looking at this on a purely personal level is pointless.

It's the ability to do this en masse - to 'collect' and 'collate', to analyze and drill down to 'people of interest' - and the power that gives the data holders, that makes it toxic and ominous.


Now pretend all this is public and someone stalks you. Maybe it is your abusive ex, who occasionally sends you creepy messages and harasses anyone you date.

It isn't like we have many protections in place to keep the bad folks from the good. And I'll add that phone books didn't have everyone in town - only the person who paid for the line. For many people, you had to look up a family member's or roommate's name to get their phone number and address. Your name and picture might be public, but most people's are not. You might not want folks in your conservative town to know which brand of religion you follow. You might not want those folks to know you go to a gay bar most Saturday nights, either.


If someone is stalking and harassing you, we need the police to arrest them. Hiding your location data from Google isn't going to help that. The problem there is that an individual is breaking the law. I doubt very much they are finding your location because it's stored in an advertising database somewhere.

If you can't openly practice your religion or sexuality in a town, that is a failing of the people of the town. Either stand up for yourself or move to a town with better people. Again, this is not a problem that can be solved by hiding. And living your life having to tiptoe around core parts of your identity so that idiots won't be rude? That is a miserable way to live.


> Again, this is not a problem that can be solved by hiding. And living your life having to tiptoe around core parts of your identity so that idiots won't be rude? That is a miserable way to live.

And yet, it has been and is being used successfully by people. Sure, it is miserable, but it is a lot less miserable than being killed, or being out of a job without the money to move and without a social support network.

As long as you are not the one funding every oppressed person's move to a better place, your argument is simply authoritarian bullshit: you are trying to dictate how other people solve their problems, for purely ideological reasons and without consideration of the costs to them.


If someone is stalking and harassing you, we need the police to arrest them.

You do realize that this isn't really happening? I mean, I had someone physically following my car. I ran red lights. I stopped at the jail instead of going home - and they weren't even going to file a police report until they realized they knew who it was. Schools cover up rapes. They often do little to nothing with protection orders. I have, however, wound up in the "wrong neighborhood", been told it was a crack neighborhood and that "we've found out that white people do it too". If you want folks to be arrested, perhaps have a functional police force.

Or that folks don't always have the means to "move someplace better". Or, you know, that some of the people involved are 15 years old and don't actually have control over this stuff. Or worse, you think it is freaking normal.

Until society comes down hard on such things or provides actual relief to folks in bad situations, perhaps you shouldn't look down on the wide swaths of society that hide from a stalker, hide their religion, or hide their sexuality.


> Either stand up for yourself or move to a town with better people

That is extremely optimistic. You're right that it's a "people" problem, but not one that's easily solved in the near future. You can't just say: this wouldn't be a problem if there weren't assholes around. Well, some people are assholes, so this is a problem.

You should also be able to leave your wallet in the park without it getting stolen, but you can't, so you don't. You don't move away as a result of that; that would be stupid, and the major cities of the world would be empty.

Also, we shouldn't forget that one of the misuses we have already seen of this type of information is law enforcement agents tracking (ex-)partners.

Location data may not necessarily be secret, but it can be private.


True, some people are assholes. And if we just hide from them and never confront them, they will continue being assholes because in their experience, being an asshole has worked out well for them, so why would they stop?


So, where are you right now confronting assholes that are bullying other people?

It is really cheap to say that other people should just confront the people who are assholes towards them, in order to justify that it's not your problem to deal with.


It is really cheap to demand that everyone else fight your battles for you. I mean, sure, I'd love to play kindergarten teacher and run around making sure everyone is nice to each other, but I've got my own life here to live, you know? Actually, I wouldn't love to do that, it sounds exhausting, and if you won't do it on your own behalf, why should I? For my part, I treat everyone as fairly as I know how and I expect others to do the same.


> It is really cheap to demand that everyone else fight your battles for you.

No one is demanding that. You are demanding that people fight these battles even though they don't want to. So, if your demand is that the battle should be fought, it is up to you to fight it, or to keep your demands out of other people's lives.

> but I've got my own life here to live, you know?

You might be surprised, but other people have their own lives to live as well. So stop telling them that they should be fighting a battle they don't want to fight, just because you are ideologically opposed to how they deal with the problem they are facing.

> Actually, I wouldn't love to do that, it sounds exhausting

So, what is your justification for demanding that other people do this exhausting work?

> and if you won't do it on your own behalf, why should I?

No one says that you should. But if you don't, then don't tell them how they have to deal with their problem.


> For example, data about a mobile phone’s past location and movement patterns can be used to predict where a person lives, who their employer is, where they attend religious services and the age range of their children based on where they drop them off for school.

Well, as far as Google can tell, I warp instantly from Vancouver to Calgary every day when I switch from Wi-Fi to mobile data, and back again when I switch back. I'm guessing it has to do with my phone company, because Google always gives me results for Calgary and shows my location as Calgary based on my IP address.
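As an aside, here is a minimal sketch of the kind of inference the quoted sentence describes: guessing a "home" location from nothing more than timestamped location pings, by bucketing the night-time coordinates and taking the most common one. Everything here - the sample data, the rounding precision, the night-time window, and the guess_home helper - is a hypothetical illustration, not anything a real ad platform is known to run.

    # Hedged sketch: infer a likely "home" from timestamped pings.
    # All data and thresholds below are made up for illustration.
    from collections import Counter
    from datetime import datetime

    # Hypothetical pings: (ISO timestamp, latitude, longitude)
    pings = [
        ("2019-03-01T23:40:00", 49.2827, -123.1207),
        ("2019-03-02T02:15:00", 49.2826, -123.1209),
        ("2019-03-02T09:05:00", 49.2609, -123.2460),  # daytime: likely work
        ("2019-03-02T23:55:00", 49.2828, -123.1206),
    ]

    def guess_home(pings, night_start=22, night_end=6, precision=3):
        """Return the most common night-time location, rounded to roughly 100 m."""
        counts = Counter()
        for ts, lat, lon in pings:
            hour = datetime.fromisoformat(ts).hour
            if hour >= night_start or hour < night_end:
                counts[(round(lat, precision), round(lon, precision))] += 1
        return counts.most_common(1)[0][0] if counts else None

    print(guess_home(pings))  # -> (49.283, -123.121), downtown Vancouver

The same trick with daytime hours gives a likely workplace, and a weekday-morning cluster near a school hints at kids' drop-off, which is presumably the sort of chaining the article is referring to.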


I'm curious, who do data brokers sell this data to, and how is it eventually used?

For instance, if they know where I live and who my family is, do real estate companies buy data on people who died recently, so they can contact my family and try to get a deal on my house?


Question: if Google explicitly asked whether you are OK with it using such info, would that be OK with most people?


They don't because it doesn't provide them any value in their day-to-day lives.


That's one reason why most people don't care about privacy.



