"There is no valid counterargument to the abstract idea. The fact is that if a company had better data and analyzed it more completely, they could obviously produce better experiences for their users."
I think this is something that you don't need hindsight to understand is wrong. Modern ad-driven companies work to maximize 'daily user interactions', not quality of experience. A lot of people think these are strongly correlated, but I think optimizing for that is likely to drive your products in bad directions for users.
There is a simple counterargument to the phrasing: could does not imply would.
When you have relatively little information about your users, the two goals "make the user happy" and "make the user engage more with the product" are approximately the same goal. With little data, making the user happy and driving engagement are effectively indistinguishable.
But when you have a lot of data about your users' emotions and behavior, the two goals become separate. A typical business is going to pursue the second goal, not the first. They're not charities.
It's sort of like an archer shooting at two neighboring targets. With fuzzy vision, they look like the same target, but with clearer vision, the archer sees them separately. The users want the archer to hit one target, but the archer will aim for the other. So, the users may be better off if the archer's vision is a little fuzzy, to make the targets blend together.
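The divergence described above can be made concrete with a toy model (a sketch with invented item names and scores, not real data): rank items by a coarse signal versus by exact engagement, and watch the winners split apart.

```python
# Toy model of the two-targets point, with invented names and scores:
# each item has a true "happiness" score and a true "engagement" score.
items = {
    "thoughtful essay": {"happiness": 9, "engagement": 6},
    "useful answer":    {"happiness": 8, "engagement": 7},
    "outrage bait":     {"happiness": 2, "engagement": 10},
}

# With little data, the company only sees a coarse signal -- here, how
# many of an item's scores clear a threshold -- so the two goals blur.
def coarse_signal(scores):
    return sum(1 for v in scores.values() if v >= 5)

pick_with_little_data = max(items, key=lambda k: coarse_signal(items[k]))
pick_for_happiness    = max(items, key=lambda k: items[k]["happiness"])
pick_for_engagement   = max(items, key=lambda k: items[k]["engagement"])

# Under the fuzzy signal, the business's pick coincides with the
# user-friendly pick; with precise data, pure engagement wins instead.
print(pick_with_little_data)  # → thoughtful essay
print(pick_for_happiness)     # → thoughtful essay
print(pick_for_engagement)    # → outrage bait
```

The point isn't the numbers, which are made up; it's that sharpening the measurement is what makes the two objectives separable in the first place.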
To add another example:
In the abstract, the fact is that a government with absolute monitoring and enforcement power over its citizens could obviously move faster to improve society and address injustices.
However, like you said, could does not imply would, and most people would say that giving governments absolute power to monitor and enforce their whims on citizens is a terrible idea.
They could do a better job at doing whatever it is that solely-for-profit companies do!
Which may include "provide a better user experience", but may also include activities such as:

- "sell the data of their customers to third parties for profit"
- "make ads for unnecessary products even more accurately targeted"
- "more effectively scrape the contact books of their customers to provide better sales leads"
- "use individual data in custom pricing models to offer higher prices, or decline service, to potential customers who are statistically less attractive"
- "more accurately predict where individuals associated with government regulation might try to constrain the company's business activities"
> I think this is something that you don't need hindsight to understand is wrong. Modern ad-driven companies work to maximize 'daily user interactions', not quality of experience. A lot of people think these are strongly correlated, but I think optimizing for that is likely to drive your products in bad directions for users.
Unfortunately that is also not a counterargument to "the abstract idea": this shows that companies don't produce better experiences, not that they couldn't produce better experiences. In fact I think, much as I am on your side on this, that the evidence shows that companies with more information can and do provide better experiences—for just long enough to normalise the privacy violation and lower expectations, at which point they turn from their users to the true audience of advertisers and other, uh, curious parties.
> Apple is going to realize very soon that it has made a grave mistake by positioning itself as a bastion of privacy against Google, the evil invader of everyone’s secrets. The truth is that collecting information about people allows you to make significantly better products, and the more information you collect, the better products you can build. ...
How the times change. Not a whiff of Facebook. Snowden who? Google was still mostly not evil.
From the more recent article:
> I argued in my essay that Tim Cook had conflated privacy with security. He may have. But in the five years since 2014, the following fact has become absurdly clear to me: there is no difference between privacy and security. Security is an illusion, just like the lock on your front door. ...
In time we'll all come to realize that privacy is merely a special case of security. What can't be seen can't be attacked.
Until then, a lot of damage will have been wrought on people too distracted or naive to care.
I can hardly wait. I can't tell you the number of times I've been lambasted for asserting that very thing.
It's a bit amusing, though, that the original piece wasn't far off. I've heard the 2015 MBP was the best laptop they've made, and since then there have been a number of missteps. I was unfortunate enough to personally experience the awful lack of a tactile Escape key on one of the newer MBP + Touch Bar laptops for about a year.
A computer company forgetting how a keyboard works. Really now.
Obviously, gathering usage statistics from hardware / software analytics wouldn't have captured the fact that there are touch typists who would be quite displeased by (a) the loss of the tactile escape key, and (b) the downward quality of the keyboards in general.
When the author writes the following, I wonder if he is still struggling today to make that connection:
> The building of tools to aggregate private information in order to ostensibly improve user experience has in fact, at scale, caused strange and negative things to happen. Some of these things are threatening totally unrelated social constructs like democracy, addiction, and human decency.
Those constructs are absolutely all related to private information, and in a predictable way: disrupting each of those fields can be very profitable. Disrupting democracies at scale is profitable to governments. Addiction has been profitable for all of human history. Training people to believe that human decency can be expressed through which brands of products they buy or do not buy is incredibly profitable for certain brands that sell a certain "ethos", while simultaneously convincing buyers that any other sort of direct action (protests, legislation) is not necessary, because the market can make meaningful change if people would only choose to buy from more ethical companies.
The problems we're currently facing as a society, due to the motivations of powerful corporations being misaligned with the best interests of the general public, are not unique to tech either. It's long past time to acknowledge that these outcomes are neither strange nor unpredictable.
Consider the world where privacy is the absolute king:
- Much harder to solve crimes as police work is thwarted at every turn
- Much easier to plot terrorist attacks/mass shootings/etc
- Can't set up cameras/audio recording on your own property/phone/car/etc. without the consent of those recorded (even if they are criminals); otherwise you are breaking the law.
- Hard to create companies that deal with data because of the high legal barrier you have to clear
- Much easier to hide your past mistakes, making things like "background checks" less useful
A world where privacy is king has a lot of problems and a lot of avenues of abuse. Before you grab your pitchforks, I do think there is a reasonable compromise somewhere in the middle, but some of the privacy "purists" I see on this site have some really extreme views, in my opinion.
Not unpopular in general, just unpopular among specific groups (including HN).
I agree with you, generally. We give up certain freedoms in order to live in a civilized society as opposed to living in the woods as hermits, and I would rather err on the side of more safety / less privacy than vice versa. But there’s no right or wrong; it’s just your personal philosophy. Some people believe that without absolute privacy, the government will start imposing surveillance on everything and we will live in a dystopian, Minority Report, social credit score future. It’s the same reason a lot of people want the right to bear arms; they want the (perceived) ability to balance against a slippery slope scenario with the government or other large organizations exerting too much control.
companies that deal with data
Here the problem is the opposite - it's now not only possible but typical for private entities to accumulate massive amounts of previously private information about individuals, far beyond what the most powerful totalitarian governments ever could. We got there very quickly and our societal and legal norms and frameworks have not caught up. What's to be done about this?
The only one I would even want to bend is the "can't set up cameras/audio recording on your own property" bit. I'm not sure how that would work: in a privacy-centric world, wouldn't the person you are recording simply avoid the entire area, to protect their privacy?
This is exactly what has happened with the tracking scripts from Facebook and Google. They provide some service, but the tradeoff is that you have to let them spy on your users.
So the counter question is: "Can you not just avoid google - to protect your privacy?"
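A minimal sketch of why the embedded-script tradeoff works this way (everything here is invented for illustration: the `Tracker` class, the cookie id, and the URLs). Every page that embeds a third party's script triggers a request to that third party, carrying a persistent cookie and the page's address, so the third party, not any single site, accumulates the browsing trail:

```python
from collections import defaultdict

class Tracker:
    """Hypothetical third-party analytics endpoint (illustrative only)."""
    def __init__(self):
        self.history = defaultdict(list)  # cookie id -> pages seen

    def collect(self, cookie_id, referer):
        # Fired once per page load on every site that embeds the script;
        # the cookie identifies the browser, the referer names the page.
        self.history[cookie_id].append(referer)

tracker = Tracker()

# Three unrelated sites all embed the same script...
for page in ["https://news.example/politics",
             "https://shop.example/cart",
             "https://health.example/symptoms"]:
    tracker.collect(cookie_id="user-123", referer=page)

# ...so only the tracker ends up holding the user's full cross-site trail.
print(tracker.history["user-123"])
```

Which also suggests why "just avoid Google" is harder than it sounds: you'd have to avoid every site that embeds its scripts, not just Google's own properties.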
I personally find people like Richard Stallman extremist to the point of considering their positions to be cult-ish or religious. But I think people like that serve as a valuable counterbalance to the people at the other end of the spectrum who don't care about software freedom and are purely pragmatists (or who actively try to thwart FOSS).
At the very least, when bad things happen, and the people on the other side say "we never could have foreseen this", we can point to RMS and say, "yes, you could have".
The difference between Stallman and Fox News is that Stallman's extremism is well reasoned and consistent while Fox News is only consistent in serving the interests of "the man behind the curtain". Stallman offers a base for logical reasoning while Fox News offers a bunch of positions only good for uncritical acceptance.
Nobody said society was or should be perfectly "safe" and it's not something we should optimize for. We should optimize for individual liberty and the rule of law. (I'm an American, for what it's worth). As for rebuttals, here's what I would consider for each of your points as a potential balancing counterpoint:
> "much harder to solve crimes as police work is thwarted at every turn" - there are reasons police require warrants to obtain information. This should continue to be the case. Warrants make their job harder, of course, but it also preserves individual freedom. This balancing act is at the crux of our society and always will be. I lean towards making it harder for the state if I have to choose.
> "much easier to plot terrorist attacks / etc" - see above point about security and liberty. We are not optimizing for complete safety, and any open society will run into these issues. The alternative is not something I would like to consider, personally. The tools that might solve these issues are also readily used to suppress dissent and empower the state. Best to make it difficult to employ them.
> "can't set up cameras on your property" - this is an individual rights issue not a privacy issue. You have the right to do this on your property.
> "hard to create companies that deal with data" - I don't think this is particularly hard right now, and we could use a few more laws about this to prevent third parties from wielding more power than the state without any recourse to the individual. In the end, it's not as much about privacy but rather about rights of the individual to appeal their being "on a list" or discriminated against as a result of being arbitrarily added to the list. If you want a concrete example of this, I'd say let's talk about Credit Ratings agencies and the power they wield in the United States.
> "much easier to hide your past mistakes" making things like "background checks" less useful. - I'm pretty sure it'll still be damned near impossible to really bury this kind of stuff without significant means. Furthermore, as a hypothesis, I'd argue the general benefit to society of not having a giant database is a net positive and the outliers / background check avoiders are probably very very few statistically. Leave blacklisting to the legal system where a felon, sex-offender, etc are tagged for life but at least it's done via laws. (see the previous HN post about a company tracking which bars you've been kicked out of)
I don't feel that my views here are extreme. Mostly I'm left feeling quite powerless to do anything without being tracked and monetized by somebody (at a minimum), and worse - potentially having my real-world life altered because of some random algorithmic detection that I have not opted into, did not realize, and cannot control or appeal. That feels weird, and I think there's a middle ground between the two that we can continue to investigate.
A combination of views on optimization as well as trust in government to uphold law in the way one expects likely determines a lot about where one falls in the continuum of individual privacy vs. access to information on this. I would argue for checks and balances.
When something is wrong and a waste of time (in the sense that the author should have known better), that's dumb.
When something is wrong but not obviously so, and it helps us summarize existing opinions, or guide exploration, and/or otherwise advances the state of our understanding, it's not dumb.
For example, a wrong scientific hypothesis or a wrong startup thesis is not always dumb; they are only dumb when the author should have known better, but chose not to do the homework. Incidentally, this might be the easiest way to fail a YC interview - demonstrate that you think the homework is beneath you.
It's ok to be wrong, just make sure it's "the right kind of wrong".
He steals "kudos" even. If you hover over a little icon on his website to try and find out what it does, he'll record it and publish it as praise of his article.
Oh! I just realized Dustin is the guy behind Svbtle.
Now with how horrible ad-tracking has gotten, and the decline in Google's products (Inbox especially, for me) the shine is wearing off quickly. I'm strongly considering abandoning the Google platform as completely as possible whenever my Pixel 2 bites the dust.
Having spent a few weeks on the phone (from a Samsung S7 before), the biggest difference I feel is that the iPhone is no longer nagging me to interact with it the same way my old one did. No blinking notification light, no Samsung pestering me to accept some changes to their terms and conditions etc.
I really hope that there's a real market out there for people who want to own their phones and control their destinies. I really don't want to live in a world where my only choice is which feudal lord I serve.
If there is, I haven't found anyone servicing it. This is why I've been reduced to building my own smartphone. That appears to be the only way I can keep control over my own machine.
>And: (2) If, after Snowden, Experian, and countless other examples of leaks, you think security is going to protect your privacy, you are either ignorant or insane.
This feels incoherent. Security is exactly what Apple is leaning on, e.g. by designing iMessage so they themselves can’t read the content.
And just like Google, they are also asking you trust them, e.g. to not release an update that uploads all of your plain text messages to Apple.
What’s different is the size and nature of the attack surfaces, how distributed vs. centralized they are, and how each company agrees to use the data they do legitimately collect.
There are interesting, persuasive arguments to be made in favor of the different approaches out there, but I think it’s important to have conceptual clarity.
The algorithm is code, put in place by someone. It is not magical, it is not hard to predict, and it doesn't run around gaining sentience.
Personifying the "algorithm" is idiotic, and it removes all the guilt from the operator, who is the one who fine-tuned all the trade-offs.
Think of what he describes as the algorithm as a person who works as your personal assistant. If that person uses all the info on your life to help you, they are a good assistant. If instead they use that information in ways you didn't know about, just like Google does, that person is a criminal. Yet Google gets a pass because it wasn't them, it was the algorithm.
See how dumb and detrimental (to everyone but Google) it is to speak of the algorithm as an entity?
This is basically right; the only thing making it wrong is the question "better for whom?"
I think that there is. It's hard to explain on short notice, but it's the intersection of the abstract concepts of model overfitting, precognition based on data (such as pregnancy), and what happens when the products are not built to serve the user whose data powers them.
Everyone wrote dumb stuff years ago. Move on with your life.
Edit: Aftermath. He deleted the blog post. It's still in the Wayback Machine, I suspect. https://news.ycombinator.com/item?id=1441914
Although if someone has invented a mouthfeelier fork in the meantime and Curtis has written about it, I'd like to read that piece as well.
There is nothing about the original article that makes it a 'thesis', or a 'piece', or even an 'essay' by any standard. I was scrolling up and down wondering if I'd missed something, expecting some seriously detailed research. It is being given an inflated opinion of itself.
'Privacy vs. User Experience' is a 500-word opinion piece. It starts off like this:
> Apple is going to realize very soon that it has made a grave mistake by positioning itself as a bastion of privacy against Google, the evil invader of everyone’s secrets. The truth is that collecting information about people allows you to make significantly better products, and the more information you collect, the better products you can build.
There is no thesis or structure here, it's a fucking blog post. There are no references, there is no research. It's opinion.
In my personal opinion this follow up is even more dumb than the earlier post it criticises.
Edit: if I was to offer something more constructive, this follow up post should be called 'Famous Last Words'.
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
But it's not an essay. It's a blog post. There are no references or sources. There is no way for you to take that post and research the underpinnings of its conclusion. It is wholly self-referential.
The original post is a hot take, pure opinion. It's not even a noteworthy post; it's just part of the Svbtle echo chamber and has inflated worth.
There is no essay in either the original post or the apology for it, and similarly so there is no thesis. Language changes and a thesis now is commonly accepted as a significant body of work with academic significance and a grounding in empiricism or at least extensive research.
Going back to the Ancient Greek (and beyond) roots of 'thesis' to claim an innocent usage of the term is a monumental stretch.
essay: "A short piece of writing on a particular subject."
thesis: "A statement or theory that is put forward as a premise to be maintained or proved."
Sure. And it's also an essay.
> It has a strong thesis.
As in, the gist/central meaning is good/strong.
What I think that you think he said:
> It is a strong thesis.
As in, PhD thesis.
In any case, it's high-flown language for a blog post.