tl;dr: if it happens, there will be a 5-year retirement period. We have quite a while to deal with this, it seems. This isn't a nothing-sandwich, but it's pretty close (unless your company is a something.io)
The cognitive dissonance is in the voters and users.
Even right here on HN, where most people understand the issue, you'll see conversations and arguments in favor of letting companies vacuum up as much data and user info as they want (without consent or opt-in), while also saying it should be illegal for the government to collect the same data without a warrant.
HN is filled with folks that wrote the code in question, or want to create similar products. And they hate to have it pointed out that these tools may cause harm so they thrash around and make excuses and point fingers. A tale as old as this site.
Explaining that modern technology is user-hostile and destructive to society is nowhere more on-topic than on Paul Graham’s ego blog. While it might be true to say the site is “for” robber barons, there are a lot more users here than the ones you described.
>The cognitive dissonance is in the voters and users.
People really need to learn to say “no,” even if that means an inconvenience. “Your personal information might be shared with our business partners for metrics and a customer-tailored experience.” No thanks. “What is your phone number? So I can give you a 10% discount.” No thanks. “Cash or credit?” Cash, thanks. “Login with Google/Apple/blood sample?” No thanks.
However, that's not at all a cognitive dissonance. Fundamentally, there's a difference between governments and private companies, and it is fairly basic to have different rules for them. The government cannot impinge on free speech, but almost all companies do. The government cannot restrict religion, but to some extent, companies can. Etc.
Of course, in this case, it's understandable to argue that neither side should have that much data without consent. But it's also totally understandable to allow only the private company to do so.
There is fundamentally a difference between corporations and the government, but it's still a cognitive dissonance. These aren't the laws of physics - we chose to have different rules for the government and corporations in this case.
There are plenty of cases where the same rules apply to both the government and corporations.
There isn’t a single intellectually honest harm associated with the majority of app telemetry and for almost all ad data collection. Like go ahead and name one.
Before you answer with some vague demographic and bodily-autonomy stuff: you know, if you’re going to invoke “voters,” I’ve got bad news for you. Some kinds of hate are popular. So you can’t pick and choose which popular stuff is good and which popular stuff is bad. It has to be by some objective criteria.
Anyway, I disagree with your assessment of the popular position. I don’t think there is really that much cognitive dissonance among voters at all. People are sort of right not to care. The FTC’s position is really unpopular when framed in the intellectually honest way, as it is in the EU: “here is the price of the web service if you opt out of ads and targeting.”
You also have to decide if ad prices should go up or down, and think deeply: do you want a world where ad inventory is expensive? It is an escape valve for very powerful networks. Your favorite political causes like reducing fossil fuel use and bodily autonomy benefit from paid traffic all the same as selling junk. The young beloved members of Congress innovate in paid Meta campaign traffic. And maybe you run a startup or work for one, and you want to compete against the vast portfolio of products the network owners now sell. There’s a little bit of a chance with paid traffic but none if you expect to play by organic content creation rules: it’s the same thing, but you are transferring money via meaningless labor of making viral content instead of focusing on your cause or business. And anyway, TikTok could always choose to not show your video for any reason.
The intellectual framework against ad telemetry is really, really weak. The FTC saying it doesn’t change that.
> There isn’t a single intellectually honest harm associated with the majority of app telemetry and for almost all ad data collection. Like go ahead and name one.
You’ve already signaled that you’re ready and willing to dismiss any of the many obvious reasons why this is bad. But let’s flip it. What intellectually honest reason do you have for why it would be wrong if I’m watching you while you sleep? If I inventory your house while you’re away, and sell this information to the highest bidder? No bad intentions of course on my part, these things are just my harmless hobby and how I put bread on the table.
In my experience, literally everyone who argues that we don’t really have a need for privacy, or that concerns about it are paranoid, or that there’s no “real” threat... well, those people still want their own privacy, they just don’t respect anyone else’s.
More to the point though, no one needs to give you an “intellectually honest” reason that they don’t want to be spied on, and they don’t need to demonstrate bad intentions or realistic capabilities of the adversary, etc. If someone threatens to shoot you, charges won’t be dropped because the person doesn’t have a gun. The threat is extremely problematic and damaging in itself, regardless of how we rank that person’s ability to follow through with their stated intent.
> What intellectually honest reason do you have for why it would be wrong if I’m watching you while you sleep? If I inventory your house while you’re away, and sell this information to the highest bidder?
This is an interesting idea, but it's a pretty far analogy from app telemetry or ad data collection. If you're really saying, "would it be wrong for me as a camera app developer to collect the videos end users record?" I suppose the answer would really be, "It depends." Like that's what Instagram does, it collects videos end users record. But without their permission? I guess not, no, but that's pretty obvious. The same would be true if you made firmware for security cameras, which happened to be pointed at my bedroom. I suppose if you asked for permission, and I granted it, go ahead - if you didn't ask for permission, I would be surprised why you would need to collect the videos as a firmware developer. The house inventory thing is the same tack - are you talking about, does it make sense for Amazon to sell my purchase history, or something? I guess they asked for permission, go ahead... Nobody forces me to use Amazon or whatever.
Instagram, Amazon, etc. do the things they do with permission. And I don't think anyone who is fully informed is surprised by what the transactional attribution data they collect is for. There's lying by omission, which is bad, but that is an issue of leadership and education. Everyone in the EU still chooses telemetry and free over no telemetry and a paid service, when it is spelled out to them. It's too bad that leadership has to be taken in that form, but there's no alternative in the regime they built there.
If this is just a competition over the leadership and education of laypeople, so be it, but this real life experiment keeps happening, and the people who try to inject drama into ad telemetry keep losing, so I really don't think it's just about lying. There is a real lack of harm.
> reason that they don’t want to be spied on
Nobody forces you to use Instagram. If you think ad data attribution is a form of spying, go for it. Delete the free social media apps. I don't use them. I don't have Instagram, TikTok, etc. I spend less than 10m a week watching something on YouTube. I don't even have a TV in my house. Do you see? They are not enriching your life.
> In my experience literally everyone who argues... well those people still want their own privacy, they just don’t respect anyone else’s.
In my experience this is pure projection. I respect when people don't want to give permission to Instagram to collect ad telemetry when they choose to not install the app. Of course, you say these things on the Internet, but you, you personally, are not going to migrate off of Gmail, which does all the same things. This is all really about vibes, about vibes being vibesy against social media, but not vibes being vibesy against Gmail, which would be a major inconvenience to say no to, and it would suck to have to pay $35/mo for e-mail - at the very least!
So basically your argument is everything is fine because consumers can opt out. Another tired old argument where even the people saying it don’t really believe it.
You can’t even rent a hotel room without giving them an email and a phone number they don’t need and are looking to sell. If this works for you, the person at the counter probably faked it rather than arguing with you. Some people will be happy when menus disappear and you need to install an app. What happens when you can’t check out of the grocery store without the requisite likes-and-subscribes? What happens when your flashlight app has a 37-page ToS that says they reserve the right to steal your contact list for the purposes of selling flashlight apps? All is well because you can see in the dark, and no one makes you choose anything? Well, I hope there’s healthy competition amongst the manufacturers of your pacemaker, and that they don’t inform your insurance company that your health is deteriorating.
If you’ve got no sense of right or wrong beyond what is legally permissible, just exercise your imagination a bit to look at the likely future, and ask yourself if that’s how you really want to live.
All you have to do is tell me how you are harmed by your email being sold for marketing by a hotel. The flashlight thing sounds like a bad actor that doesn’t have anything to do with anyone’s opinions about privacy, and it doesn’t sound like it has anything to do with Meta or YouTube. I’d be most interested in you naming a specific harm in something their app telemetry does.
I’m harmed because I did not consent to it, and that should really be enough for you. What intellectually honest reason do you have that it’s ok to coerce others into things that they don’t want?
> There isn’t a single intellectually honest harm associated with the majority of app telemetry and for almost all ad data collection. Like go ahead and name one.
The harm is the privacy violation. App telemetry needs to be "opt-in", and people should know who can see the data and how it's being used.
Can you define a harm suffered by the people that the FTC represents? What about the EU beneficiaries of the GDPR? This is sincere, it is meant to advance to a real and interesting conversation.
I think privacy violations are a harm in themselves, but you seem to have already dismissed this issue, so I'll move on. How about behavioral manipulation via microtargeting, economic harm via price discrimination, reselling of the data via monetization to unscrupulous aggregators or third parties, general security reduction (data and metadata sets could be used for APT, etc), or the chilling effect of being tracked all the time in this way?
> How about behavioral manipulation via microtargeting...
I don't know. Ads are meant to convince you to buy something. Are they "behavioral manipulation?" Are all ads harmful?
> ...economic harm via price discrimination...
Should all price discrimination be "illegal?" This is interesting because it makes sense for the FTC and for anti-trust regulators to worry about consumer prices. Price discrimination in software services - the thing I know about - helps the average consumer, because it gets richer people to pay more and subsidize the poor.
> reselling of the data via monetization to unscrupulous aggregators or third parties
"Unscrupulous" is doing a lot of work here.
> ...general security reduction...
Gmail and Chrome being free and ad-subsidized has done a lot more for end-user security than anything else. Do you want security to be only for the rich? It really depends how you imagine software works. I don't know what APT stands for.
> chilling effect of being tracked all the time in this way?
Who is chilled?
I guess talk about some specific examples. They would be really interesting.
Not everyone, but almost... and it's the same in other places (it was already the case in Buenos Aires when I went there a few years ago). And of course when you tell people that there are better alternatives, many of them don't want "another app"... (but then they install one full of trackers hoping to get some kind of prize at the local supermarket).
It isn’t cognitive dissonance, the state does lots of things we’re not supposed to do. Like we’re not supposed to kill people, but they have whole departments built around the task.
Should the state do surveillance? Maybe some? Probably less? But the hypocrisy isn’t the problem, the overreach is.
The FTC is bipartisan, no more than three of the five commissioners can belong to the same party. The present report was unanimously voted by all five.
I don't know if you've been watching but the FTC has actually been extremely proactive during this cycle. Lina Khan is an excellent steward and has pushed for a lot of policy improvements that have been sorely needed - including the ban (currently suspended by a few judges) on non-competes.
It is disingenuous to accuse the FTC of election pandering when they've been doing stuff like this for the past four years consistently.
This begs the question of agency authority, which is manifestly not resolved. You will find that the elections’ results will affect the eventual resolution of the question of the unitary executive quite dramatically.
The problem seems deeply fundamental to what it means to be a human.
On one hand, there's a lack of clear leadership, unifying the societal approach, on top of inherently different value systems held by those individuals.
It seems like increasingly, it's up to technologists, like ones who author our anti-surveillance tools, to create a free way forward.
In the matter of corporations vs. governments, if you tally up the number of people shot, it's clear which of the two is more dangerous. You would think Europe of all regions would be quick to recognize this.
I don't like corporations spying on me, but it doesn't scare me nearly as much as the government doing it. In fact, the principal risk from corporations keeping databases is giving the government something to snatch.
Who is arguing for corporations to wage war? What an absolutely insane strawman. What I am arguing against is letting governments grant themselves the ability to spy on their own populations on an unprecedented scale, because governments "waging war" (mass murder) against their own people is a historically common occurrence.
It seems entirely reasonable/consistent that we would allow some capabilities among publicly sanctioned, democratically legitimate actors while prohibiting private actors from doing the same.
In fact, many such things fall into that category.
Why is the burden of proof on the users? Why shouldn't the burden of proof be on Clearview? They should be required to know that a person is in a place they can legally operate before doing so.
If identification of Dutch citizens is part of the product for services rendered by Clearview AI, they should be punished severely for it. Same goes for any country, or person, that doesn't want to be a part of their scheme.
Since they don't operate directly in the EU, there is not much else they can do aside from collaborating with other countries' DPAs to ban any EU company from integrating with them (per the article, currently only companies in the Netherlands are banned).
Even the fine itself is a bit problematic because it looks unenforceable: they don't operate in the EU and thus aren't subject to EU law.
However, if it were discovered that the user images were not only retrieved by scraping publicly available information but also involved data brokerage or other forms of personal-information selling, then everyone involved throughout that chain could be fined.
Neither do you want to normalise organisations violating the rights of a given state's citizens with impunity because they operate from outside of their jurisdiction and refuse to engage. If there's no other recourse than to arrest the members of the (for all intents and purposes) criminal organisation if they step foot in the affected state then so be it.
Why not? Sends a strong message; and also forces employees to think about what they are doing instead of passing the buck.
This is also not like some stupid patent dispute or DMA compliance argument. These employees are directly responsible for stockpiling personal identities of millions of people for the express purposes of making the surveillance efforts of their government easier. That's a very political action, which is directly aggressive against a country's citizens, and they should feel that.
There are 27 countries in the EU with different motives, morals, interests, etc. Just because you agree with the decision of one country in one instance doesn't mean you would agree with them all. But once you give them these powers, it's impossible to take them back. It's a bad slippery slope.
Just to be clear here, your issue is with the number of member states in the EU?
In other words you would be fine with Canada arresting employees of Clearview if they tried to enter the country after Canada deemed them profiting members of an organization that was breaking the law in Canada?
If I go to Saudi Arabia after having called for their leader to be executed on X for his treatment of women, yeah, probably not a good idea to go.
If I go to Iran after saying Khamenei deserves a rocket to his mansion, yeah, probably not a good idea to go.
If I go to Europe after having run a multi-million dollar scheme affecting European countries by white-labelling services from North Korea (legal in Brazil), and I'm a Brazilian citizen and know Brazil almost never extradites, yeah, probably not a good idea to go.
If I go to the US with my two 12 year old brides from Niger, yeah, probably not a good idea to go.
In addition to the examples you mention, if you become involved in doping at an international competition in which US American athletes were competing, it's probably not a good idea to travel there: https://en.wikipedia.org/wiki/Rodchenkov_Anti-Doping_Act_of_...
Is it really so unreasonable to expect that countries prosecute people who commit crimes against their citizens with expectation of impunity the next time they visit their country?
You're proposing an alternative where people can just commit crimes with no recourse from the victims simply because a border exists somewhere and they commit the crimes on one side of the border.
Would you expect it to be reasonable for Canadian citizens to be shooting at Americans on the border and it unreasonable for American authorities to arrest them if they came to America?
> Would you expect it to be reasonable for Canadian citizens to be shooting at Americans on the border and it unreasonable for American authorities to arrest them if they came to America?
Physical violence is incomparable to working at a company that did something that would be illegal in a certain jurisdiction. Should the person maintaining Clearview's website be arrested in the EU simply because they work there?
What do you suggest as an alternative? That we live in a world where people can evade legal prosecution simply by incorporating or working for a company that incorporated?
Why isn't violent crime comparable to stalking crime? Why is it socially acceptable to hoard personal information about someone and pictures of them as long as someone does it under the auspices of a business but it's creepy and weird to do it as a lone-wolf stalker type? Maybe they're both terrible and creepy business models and the EU is right to prosecute anyone who does it, articles of incorporation or not.
I'm not sure, but I don't think "I did it from abroad" should just make it OK. The whole point of the GDPR is that personal data is valuable and important. Would you feel the same if Clearview were instead taking people's money?
I think you'll find most people would be more than happy to see a few scummy C-suites landing behind bars. I certainly welcome it and can't wait for the day when these psychopaths actually get punished for their greedy behaviour.
You're getting downvoted, but it's an interesting idea. Violating the GDPR is illegal.
You can break your home country's laws when you go abroad and it's usually OK. You can smoke cannabis when you visit the Netherlands* from Ireland, for instance, and go back home to Ireland without worry.
Violating GDPR is illegal. It's acceptable to arrest people who do things that are against the law. And if, say, I write a lambda that runs hourly and violates the GDPR from my home in California, and then take a holiday to the Netherlands while the lambda is still running, should I be immune from arrest? The offense is still ongoing in that instance.
If we truly take privacy seriously then this should be treated like a crime. If I had something that scammed people in Europe and then holidayed in Europe I'd expect to risk arrest. Or is that somehow less important than violating people's privacy?
* (It's actually technically still illegal, but that's a different story. Gedoogbeleid, the Dutch tolerance policy, is weird.)
Arresting tourists for crimes they did not commit is hostage taking and could be considered an act of war.
The US is willing to prison swap terrorists with Russia, we definitely wouldn’t tolerate some EU country (that we spend billions of dollars defending) arbitrarily arresting tourists so they can hold a foreign company hostage.
Anyway I think you're right that the US would strongarm EU governments in to getting their way (look at privacy shield, etc.) but I still think "you're allowed to continue breaking our laws that affect people in our country while you visit us because it happens to be running on a computer you left at home" is a weak defence.
We’re talking about a xx-million-dollar dispute between allied countries; it’s not a reasonable method of conflict resolution to start throwing people who work for X company into cages until the EU gets its way.
> what do you mean “did not commit”
It’s standard around the world that employees are not held personally responsible for the crimes of the corporation they work for.
Edit: if we’re talking about an individual US citizen that’s found guilty in the EU, then the EU will go through the extradition process to have them arrested.
This isn't a dispute between the two countries, it's a dispute between the law enforcement of one country, and the people they're accusing of breaking the law from another country.
> It’s standard around the world that employees are not held personally responsible for the crimes of the corporation they work for.
Is it really so simple? Is that all the cartels are missing to avoid prosecution from the US gov't: simply incorporating in their home state? Of course not.
Imagine a company that offered death by drones. You tell 'em who you want killed and they mail a package containing a drone that pops out and kills the person when it's delivered. Would it be reasonable to say, "Yeah, we can't arrest anyone from that company when they come to our country, because they incorporated in another jurisdiction"?
If the EU wants to arrest someone they can submit an extradition request, which the US will approve or deny after reviewing. The EU can also already arrest individuals who are found to be criminals.
You are suggesting a totally new weapon for EU law enforcement which is to imprison individuals who are not found guilty of a crime because they work for a company that owes the EU money. That sounds a bit insane to me, I think if the EU wants to collect their fine they should find a more diplomatic approach that does not equate to a literal war crime [1]
The flaw in your line of thinking is that it is legal and very commonplace to arrest people who are suspected of committing a crime.
We're talking about people who are suspected of committing a crime in the EU. Should they set foot in the EU, the EU is free to arrest suspects of a crime, and they can get their day in court.
You probably are posting this as a joke, but without a clear technical solution to this problem, flooding the industry with bullshit data seems like a great avenue.
I have a silly standup joke along these lines, about how I'd Google crazy things like "circus lawyer" or "giraffe mitigation tactics" to throw the algorithm off every now and then.
My friend is a thriller writer and is convinced he’s on some FBI list. He’s googling stuff such as “how to dissolve a body with quicklime” and all sorts of other fun stuff while researching for his books.
The quicklime method shouldn't be particularly fast, at least that's what my chemical intuition says (Ca(OH)₂ is barely soluble in water). What a bad name!
In the most general context it means "with the characteristics of the living" (as seen through a middle ages lens).
In the context of "quicklime" the quick refers to the heat of the reaction when making lime for slaking on walls, etc.
"Quick" historicaly has been applied to plants and animals (alive), rivers and streams (moving), coals, fires, quicklime (burning, heat producing, glowing), to speeches and pamphlets (Lively, full of vigour or sharp argument), to tastes, to smells, and more.
The full blown Oxford English Dictionary entry for quick is a lengthy one, multiple cases and variations over a page and more.
That was the idea behind certain applications and add-ons that would browse around popular websites and randomly click ads so that marketers couldn't tell your actual interests from fake ones.
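A minimal sketch of that noise-injection idea, in the spirit of those add-ons. Everything here is illustrative, not any real extension's code: the search endpoint and the decoy terms are made up.

```python
import random
import time
import urllib.parse

# Hypothetical decoy-traffic generator: periodically issue nonsense
# searches so trackers can't separate real interests from noise.
DECOY_TERMS = [
    "circus lawyer",
    "giraffe mitigation tactics",
    "antique submarine parts",
    "competitive napping league",
]

def decoy_query(term: str) -> str:
    """Build a search URL for a decoy term (endpoint is an assumption)."""
    return "https://www.example-search.com/?q=" + urllib.parse.quote(term)

def run_decoys(n: int, min_wait: float = 30.0, max_wait: float = 300.0) -> None:
    """Emit n decoy queries at randomized intervals (a real add-on
    would actually fetch the pages; here we only print the URLs)."""
    for _ in range(n):
        print("decoy:", decoy_query(random.choice(DECOY_TERMS)))
        time.sleep(random.uniform(min_wait, max_wait))
```

The randomized wait matters: a decoy request fired on an exact schedule is trivially filtered out, which is part of why (as the reply below notes) this strategy is weaker than it looks.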
Unfortunately that strategy is deeply flawed and dangerous because nobody cares if the data they have on you is accurate or not. They still can, and still will, use it against you at every opportunity. Every scrap of data they have, accurate or not, can be used to hurt you.
The only way to flood data brokers with garbage data that can't hurt anyone is to fill it with entirely fictitious people who somehow can't be mistaken for any actual people. Even that runs the risk of hurting real people though. For example, an insurance company might go to a data broker and ask for the number of people within a certain neighborhood or zip code who bought fast food more than once a week in the last year and how many have a gym membership. If the number of frequent fast food buyers is higher than it was last year and/or the number of gym members is lower, the insurance company might decide to raise the rates of every single member within that neighborhood or zip code. Even fake people could skew those numbers if their fake data said they lived in those zip codes or neighborhoods and ate out a lot or didn't have a gym membership. Indirectly, the fake person is mistaken for a real one in that community.
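That aggregate-skew effect can be shown with a toy model. The zip code, field names, and records are all invented for illustration; real brokers obviously hold far richer profiles.

```python
# Toy model: fake profiles injected into a broker's dataset still move
# the aggregate statistics that real residents get judged by.
records = [
    # (zip_code, frequent_fast_food, has_gym_membership, is_fake)
    ("90210", True,  False, False),
    ("90210", False, True,  False),
    ("90210", True,  False, True),   # injected fake profile
    ("90210", True,  False, True),   # injected fake profile
]

def fast_food_rate(rows, zip_code):
    """Fraction of profiles in a zip code flagged as frequent fast-food buyers."""
    in_zip = [r for r in rows if r[0] == zip_code]
    return sum(r[1] for r in in_zip) / len(in_zip)

real_only = [r for r in records if not r[3]]
# The fakes push the neighborhood's "risky" rate from 50% to 75%,
# and a rate-setting model would pin that on the real residents.
print(fast_food_rate(real_only, "90210"))   # 0.5
print(fast_food_rate(records, "90210"))     # 0.75
```

The broker never needs to match a fake profile to a real person for harm to occur; it's enough that the fake lives in the same statistical bucket.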
The best way to deal with data brokers is to regulate them with strong data protection laws. Anything you give them risks hurting someone and gives them another data point to sell.
I doubt it, since nobody is being denied housing or services. Health insurance companies have plenty of data to back up their practice. Your zip code might be the single most important predictor for longevity (https://time.com/5608268/zip-code-health/).
More importantly, your insurance company is never going to tell you that that's why they raised your rates. You're just going to see a high bill. Same way that a potential employer isn't going to tell you that you didn't get the job because of something you said on social media 14 years ago, or because the information they got from a data broker says you drink a lot. You just get ghosted.
That's the problem with surveillance capitalism. Even as all that data increasingly impacts your life you're almost never aware that it's happening and have no ability to appeal or correct the record.
Isn't something like regulation with strong data protection laws a bit late at this point? It seems fair to say that most people alive have already been scooped up in one large data breach or another.
And that data has been made public likely in some form, and is probably replicated to dark corners of the planet.
Don't get me wrong, regulation on these industries seems like a no-brainer, but it seems unlikely to remediate the damage already done.
That's kind of true. Preventing the sale of it will make it harder for it to be used against you. Even if scammers can still buy or download your data from the darkweb your future employers and the companies you interact with are a lot less likely to go that far to get their hands on it, so all that data being out there will impact your life less and less. Even better, fewer places will be collecting new data about you. Your social security number and date of birth don't really change, but your income, medical conditions, home address, spending habits, sex life, and location history do.
You can never know what might prejudice someone else against you. Maybe you get flagged as being gay when you aren't, or as holding certain religious or political views that you don't. Extremists, activists, and protestors can go to a data broker and buy up lists of people to harass or attack. Data brokers have already been caught collecting data on people who visited Planned Parenthood locations and selling that data to anti-abortion groups.
You could be incorrectly flagged as having more money than you do, causing companies to charge you more than they charge your neighbors for the exact same items. Discriminatory pricing has been happening for a very long time. Just using a different browser can cause prices for some online services to change. (https://www.bostonglobe.com/business/2014/10/22/online-shopp...) For example, Apple users might be seen as having/spending more money and so the prices they get for hotels and airfare can be higher. Increasingly, brick and mortar stores have been trying to get in on the action too. (https://link.springer.com/article/10.1057/s41272-019-00224-3)
Say you have a browser extension that randomly visits sites and clicks on ads. Maybe it clicks a bunch of ads for alcohol or marijuana. Maybe it clicks on ads for mental health services, addiction/recovery services, or suicide hotlines. That data can be used against you in court during a divorce/child custody case. It might make a company less likely to hire you. It might cause your health insurance company to charge you more.
Maybe it clicks on ads for DUI attorneys and suddenly your auto insurance rates go up. The company isn't going to tell you that's why. They might not even know why; their algorithm just decided you were higher risk than before.
Any data for sale, accurate or not, is going to be used against you. The people paying data brokers for information about you aren't doing it because they want to help you. They want to help themselves at your expense. And it's insane how many people are buying up that data and using it whenever they feel it might give them even the smallest advantage. Companies are using that data to decide things like how long to leave you on hold when you call them. (https://www.nytimes.com/2019/11/04/business/secret-consumer-...)
That has been my strategy for the last decade or so. Unless I have a solid reason to, I never use my real name when placing orders, and generally never the same fake name twice. I always use a virtual credit card, and if it's a non-physical product I don't even use my real address. I have some old phones I throw prepaid SIM cards into when I need to do number confirmation. The goal is to create as little consistent, linkable data about me as possible, and at least generate some noise in all these data-broker collection processes.
I do the same. I worry that eventually someone's going to need to see my driver's license and refuse me because my ancient account info doesn't match.
"It says here that this shipment is for Firstname Lastname at 1 Main St, Yourcity, born January 1st in the same year as you. Your license has a different address and different birth day and month, so you're not the same person."