Pretty sure the mechanics as detailed in the article are factually wrong. Google does not sell personal data. It collects all the personal data it can and then stockpiles it, where it becomes an asset that the company can use to build products and charge rent (largely from advertisers, but also from consumers). Selling an extremely valuable proprietary asset to competitors would be a really dumb move - the money in today's economy is in owning monopoly assets and charging rent for them.
The actual mechanism behind the behavior the author observed is that the user went to Google, searched for a product, and then visited the product's website. The product's website contains a tracking beacon that associates a unique cookie for the user with the visit. The website's owner can then choose to run ads on Facebook or Twitter that are shown only to people who previously visited the website. This program is well-known among Facebook advertisers (it's apparently quite profitable to them), but Facebook understandably doesn't publicize it much to the general public.
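Mechanically, the product site's side of that is tiny: a 1x1 "pixel" request that sets or reads a cookie and records the visit. A toy sketch in Python (endpoint names and details are invented for illustration, not Facebook's actual pixel code):

    # Toy retargeting beacon (hypothetical, not Facebook's real pixel).
    # The product page embeds <img src="http://localhost:8000/beacon?page=/boots">;
    # the beacon sets/reads a cookie and records "this browser saw the boots page",
    # which an advertiser could later use to build a retargeting audience.
    import uuid
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    visits = {}  # cookie id -> list of pages seen (stand-in for the audience store)

    class Beacon(BaseHTTPRequestHandler):
        def do_GET(self):
            # Reuse the visitor's cookie id if present, otherwise mint a new one.
            uid = None
            for part in self.headers.get("Cookie", "").split(";"):
                if part.strip().startswith("uid="):
                    uid = part.strip()[4:]
            if uid is None:
                uid = uuid.uuid4().hex

            page = parse_qs(urlparse(self.path).query).get("page", ["?"])[0]
            visits.setdefault(uid, []).append(page)

            self.send_response(200)
            self.send_header("Set-Cookie", f"uid={uid}")
            self.send_header("Content-Type", "image/gif")
            self.end_headers()
            self.wfile.write(b"GIF89a")  # stand-in for a real 1x1 transparent GIF payload

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Beacon).serve_forever()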
Agree that it'd be cool if the person involved could actually control who and where it went to and was compensated for the use of it, but that'd likely require a fundamental change in how the web works.
Google does sell personal data, a little at a time. If I make an ad that targets young dog owners in Seattle, then when someone clicks that ad I know Google thinks they're a young dog owner in Seattle. Google sold me data about that person.
I guess, but that's basically the same information that you can get through the referer header in HTTP or by A/B testing different print ads. If you put up flyers in a certain neighborhood telling people to visit a URL, and you put up flyers in a different neighborhood with a different URL, then when somebody visits your site you know where they live.
This is of a dramatically different scale from the information Google has for its own purposes.
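To make the flyer analogy concrete, a toy sketch (the URL-to-neighborhood mapping is invented): the path of the incoming request alone tells you which flyer the visitor saw.

    # Toy illustration of the flyer example: each neighborhood gets its own
    # landing URL, so the visit itself leaks roughly where the visitor lives.
    FLYER_PATHS = {
        "/promo-a": "Capitol Hill",
        "/promo-b": "Ballard",
    }

    def infer_neighborhood(request_path: str) -> str:
        return FLYER_PATHS.get(request_path, "unknown")

    print(infer_neighborhood("/promo-a"))  # -> Capitol Hill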
> "I would be a lot more OK with this if Google were only using that data about me themselves instead of letting anyone with a little money use it too!"
that would be better, but google would still be subject to attacks by adversaries and (secret) subpoenas by governments.
much better would be if they did their analytics on user data in real time and stored only the results, discarding the source data right away.
IMHO this'll end up being one of the most influential research developments of the last few years, but it's only a couple years old at this point and needs a lot of supporting infrastructure (which'll have to be done by people outside of Google - the mothership has no economic incentive to support this) before it really works well.
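For the "analyze in real time, keep only aggregates" idea, a minimal toy sketch (my own illustration, not how any Google pipeline actually works):

    # Toy aggregate-and-discard analytics: each event updates running counters
    # and is then thrown away, so no per-user history is ever retained.
    from collections import Counter

    class AggregateOnlyAnalytics:
        def __init__(self):
            self.query_counts = Counter()  # the only thing we keep
            self.total_events = 0

        def ingest(self, user_id: str, query: str) -> None:
            # Update aggregates; deliberately never store user_id or the raw event.
            self.query_counts[query] += 1
            self.total_events += 1

        def top_queries(self, n: int = 3):
            return self.query_counts.most_common(n)

    stats = AggregateOnlyAnalytics()
    stats.ingest("alice", "hiking boots")
    stats.ingest("bob", "hiking boots")
    stats.ingest("carol", "wireless headphones")
    print(stats.top_queries())  # [('hiking boots', 2), ('wireless headphones', 1)]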
Not that I'm saying we shouldn't be careful with our personal data, but:
There's some serious mental gymnastics going from "young dog owners in Seattle" to selling somebody a dossier with their location, search & browsing history (all of which you can turn off).
The mental gymnastics is in defining "selling data" as "selling all data Google has about you". This isn't a binary where it doesn't count if it's partial.
Google provides selective contextual data about individuals to advertisers who target those individuals in exchange for money. Those advertisers are 3rd parties. Google sells data to 3rd parties. End of.
They don't need to be handing over the full database for it to be a sale of data.
Fine, how about a subset? Prove that an advertiser has seen one instance of my exact location, or an instance of my search or web history.
90% of the 'valuable contextual data' is that I searched for something related to an ad's keywords. The other ~10% is targeted data. Targeting anonymised, generalised demographics/keywords which billions of people fit into does not class as selling private data.
I'm not arguing to trust Google, nor am I denying that there is massive data brokerage in the advertising industry - just that there is a big difference between targeted ads and selling data.
> Prove that an advertiser has seen one instance of my exact location, or an instance of my search or web history.
With all the subsidiaries and partner companies - and even just shell companies - out there, why should we be doing such gymnastics in order to support the advertising industry? If a company bought access to some data, what's to say they couldn't eventually re-sell it to another company with another slice of that pie... or have that data inherited on acquisition?
That seems like an awful lot of trouble to go to just to sustain an industry that really isn't providing a lot of value.
> that there is a big difference between targeted ads and selling data.
I'm not arguing that the data received by the advertiser through targeted ads is anywhere near as valuable as selling a dossier full of an array of information about a person, but... they're both examples of selling data. Unless you're redefining English language verbs... data is being sold. About a person. To a 3rd party.
There's a difference in scale and value, but there's no difference beyond that.
Presumably when you "turn off" various Google features that data is no longer used for ad targeting. For Google to do otherwise would invite a scandal. (I wouldn't expect all copies of the data and its backups to be deleted immediately since that would be a lot of work for the tape robots.)
Where's your evidence for this? Are you saying Google violates their privacy policy billions of times per day (every search, location update, page view)? (i.e. fraud)
That's one of the concerns I've heard raised over ad-supported email providers that target ads based on the content of your email.
One way that could alleviate this but still allow targeting, although it might be too expensive to implement in practice, would be to only allow ad targeting based on attributes of the viewer that are either very broad (e.g., gender, income quartile, age group, geographic region) or that correlate highly with use of the advertised product and do not correlate well with non-users of the product.
For example, targeting "young dog owners in Seattle" would be OK on the "young" part and the "in Seattle" part because age group and region are whitelisted broad categories, but would not be OK on the "dog owners" part unless the product was in a category that dog owners use but people who are not dog owners do not. Dog food, for example, or a dog walking service.
The idea is that if someone ran an ad for dog food targeting "young people in Seattle", rather than "young dog owners in Seattle", the respondents are going to largely be dog owners, and can reasonably expect that they are outing themselves as such to the seller. So allowing using "dog owners" in the targeting doesn't really give away any more information on the responders. The targeting just serves to not waste the time of people who don't own dogs seeing ads for dog food.
If, on the other hand, "dog owner" could be used on the targeting for unrelated products, someone could target "young dog owners in Seattle" with an ad for something that almost everyone wants, and the people who respond out themselves as dog owners without having any way to reasonably know that they are doing so.
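A rough sketch of what such a rule could look like as code (the attribute whitelist and the product-relevance map here are invented for illustration):

    # Hypothetical targeting policy: broad demographic attributes are always
    # allowed; narrow attributes are allowed only when tied to the product
    # category being advertised, so responders don't reveal anything unexpected.
    BROAD_ATTRIBUTES = {"age_group", "gender", "income_quartile", "region"}

    # Narrow attribute -> product categories for which it is considered self-evident.
    RELEVANT_NARROW = {
        "dog_owner": {"dog_food", "dog_walking"},
    }

    def targeting_allowed(attributes: dict, product_category: str) -> bool:
        for attr in attributes:
            if attr in BROAD_ATTRIBUTES:
                continue
            if product_category in RELEVANT_NARROW.get(attr, set()):
                continue
            return False
        return True

    audience = {"age_group": "18-34", "region": "Seattle", "dog_owner": True}
    print(targeting_allowed(audience, "dog_food"))    # True: dog ownership is self-evident here
    print(targeting_allowed(audience, "phone_case"))  # False: "dog_owner" rejected for unrelated product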
Some might opine that you did not so much buy that data as buy the benefit of using that data. Afterwards, Google still has the data they used and you still don't. This might be a distinction worth discussing.
I propose that it would require a minor change in web browsers only. Consider that we have 'incognito' and 'private browsing' sessions. The implementations are slightly different between browsers (e.g. Firefox appears to allow sharing between tabs in a window; Safari keeps each tab completely isolated), but the principle is the same. Modify the browsers so they're always in 'private' mode, and to only permit sharing once the bill is paid.
Pageviews within an individual incognito session still share cookies. If they didn't nearly every login on the web would break. When you create an incognito session just for one specific task and then close it, this doesn't matter. When you're always in incognito mode you're back where you started: a site that you're always logged into will have all the data from any site you visit with its tracking beacon.
Arguably, they already do. They pay us in services such as communications software, social software, search, etc.
Also, this is very common in some countries/industries. In the U.K. if you want to buy insurance or open a bank account etc, you’ll often use a comparison website, and they will likely give you an incentive (such as free cinema tickets) for choosing a service through them. This is paid for directly by the affiliate revenue. Cashback sites also exist that do this through affiliates. Credit cards do this with air miles, cashback and other rewards too.
The argument is that the intangible costs of the current scheme (life under total surveillance) make it desirable to force a direct dollar-value recognition of people's personal data.
A while back I made a similar argument[1]. I argued that some high percentage (e.g. 90%) of revenue from targeted ads should be required to be passed through to viewers of those ads. The possibility of the targeted ad market drying up completely is an acceptable outcome of this policy. Make everyone go back to non-targeted advertising!
I don't have a problem with the theory of targeted ads; in fact, as a consumer it can actually be useful. Say you start looking for a pair of boots, then see that a brand you like but that was out of your budget has a sale on, so you buy the boots you really wanted but couldn't afford. That's useful for a consumer.
The problem is that targeted advertising mostly just sucks. Last year I bought wireless headphones from Amazon, and every time I visited for the next two weeks I got recommendations to buy wireless headphones. So even Amazon can't do it properly. What if I'd browsed on Amazon and bought from another store?
The other day I saw an ad on Twitter for a wine company; in the "why am I seeing this ad?" section it said it was targeted to males (yup), aged 35 and above (no), who live in the United Kingdom (no). And to top it off, I don't even drink.
The problem is targeted ads mostly don't work: their targeting is so simplistic (see something you like, then get blasted with ads for it over and over) or assigns you to such a general bucket (age range and location) that it isn't useful. Even "likes" aren't that useful for targeting - just because I "like" a brand doesn't mean I want to be blasted with ads from them or am going to buy every product they sell. I like ThinkPads, but that doesn't mean I'm going to buy every new ThinkPad that is released.
You are not the IRS - you don't have a right to compensation every time your data changes hands, just for each unique copy.
That's like me paying an author for a book, and then the author demanding I pay if I give (or sell) the book to my friend.
I know what you are thinking: "but in this case copies of your data are being resold" in which case I would agree that each party has to buy their own copy from me, or, if they resell to each other, they have to delete the copy they previously bought from me.
I haven't thought it through much, but on the face of it I like the idea of receiving some compensation whenever my data is _resold_, plus a way for me to see _where_ my data has been resold, maybe even a notification.
It's a bit like a songwriter receiving a royalty whenever their song is played on the radio or in public (see https://www.prsformusic.com/).
Obviously this would be a challenging thing to implement.
1. a musician joins one or more "guilds" and pays dues. there are three major musicians guilds in the US.
2. a public venue pays subscription fees to one or more guilds.
3. the venue is licensed to broadcast music by any of the guilds' members.
4. the guild divides monies from venues proportionally amongst its members, with proportions based largely on the musician's popularity.
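In rough terms, step 4 is just a weighted split of the pooled fees - a simplified sketch (real performing-rights formulas are considerably more involved):

    # Simplified sketch of step 4: divide pooled venue fees among members in
    # proportion to a popularity weight.
    def split_royalties(pool: float, popularity: dict) -> dict:
        total = sum(popularity.values())
        return {artist: pool * weight / total for artist, weight in popularity.items()}

    print(split_royalties(10_000.0, {"big_act": 80, "mid_act": 15, "small_act": 5}))
    # {'big_act': 8000.0, 'mid_act': 1500.0, 'small_act': 500.0}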
that model wouldn't work very well for what you're imagining.
firstly, it wouldn't give you the access to information that you want about who has your data.
secondly, unlike music, user data for advertising purposes doesn't really have differences in value; to an advertiser, data about one person is just as valuable as data about another person. any differences in value have to do with the type of data rather than the source of the data.
"user data for advertising purposes doesn't really have differences in value; to an advertiser, data about one person is just as valuable as data about another person."
That's not really true - there are certain coveted demographics that are worth more than others. In general your value to an advertiser is proportional to how much money you're likely to spend on their product. People with a lot of money are worth more than people without lots of money; people who are likely to spend a lot of money are worth more than people who will save it all; young people are worth more than old people (they have more years of purchases ahead of them); socially connected people are worth more than loners (they influence the buying decisions of others); and people who are easy to influence are worth more than cynics. Also certain industries have very high revenue potential, while others don't; this is why a click on [mesothelioma] or [auto insurance] can fetch over $50, while a click on [f2p games] might be a dollar or two at most.
In old-line (print & TV) advertising they could only determine this in broad strokes (which is why you'd hear about the coveted 18-34 demographic, or pay more for primetime TV than daytime TV). On the Internet you can divide this up in much greater detail.
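In crude terms, what an advertiser can rationally pay per click is bounded by conversion probability times the value of a converted customer, which is why the verticals above diverge so sharply. Illustrative numbers only:

    # Back-of-envelope: a rational bid per click is bounded by
    # P(conversion) * value of a converted customer. Numbers are invented
    # purely to illustrate why some verticals bid ~$50/click and others ~$1.
    def max_rational_bid(p_conversion: float, customer_value: float) -> float:
        return p_conversion * customer_value

    print(max_rational_bid(0.02, 3000.0))  # e.g. an insurance lead: ~$60 per click
    print(max_rational_bid(0.10, 15.0))    # e.g. an f2p game install: ~$1.50 per click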
5. The guild samples very sparsely so smaller musicians receive no compensation
6. Smaller musicians often arrange their own events and thus have to pay licensing fees for the privilege of playing their own music, which, since the guild doesn't sample the event, is not returned to the musician
I am not the IRS - but I also have not agreed to waive compensation for my data when it is collected by first parties or when it is resold to third parties. Specifically, I have individual rights that have never been knowingly waived to these behemoth data vacuums and yet they continue to violate my right to privacy and track me.
In fact, I refuse to negotiate with FB over my data, I don't participate in that service (or most other social media outlets). If they offered a reasonably priced way to keep up with family on FB without harvesting and tracking everything I do then I would be interested and potentially subscribe to such a service.
> you don't have a right to compensation every time your data changes hands, just for each unique copy
This is just a matter of law, unless there's a hidden moral assumption behind what you're saying. Your comparison to the IRS makes the point quite easy to draw: laws -- say, laws about handling of PII -- can be changed.
Similar to the leader of a scam sect who pays you in spiritual enlightenment: you might find it to be a reasonable deal, if you even recognize it as such, but ultimately it is not a solid foundation for a stable society.
The usual question arises: What would a civilized solution to the problem look like?
Edit: Having an unhinged competition where the winner is whoever manipulates the masses most effectively without getting caught doesn't seem like a good way to get to a better future to me.
I like to pay in dollars - can I pay for their services in dollars and not be tracked everywhere I go?
Advertising is a negative value add on pretty much all modern interactions, it needs to go away and stop imposing such a steep cost on consumers for the atrociously marginal value it delivers to producers.
Affiliate comparison sites are a perfect example - they skew the market and don't include companies or plans that aren't in the affiliate scheme, so you rarely end up with the actual best value through them.
Just the sort of conflicting loyalties that leads to insurance, pension and mortgage misselling scandals.
More generally the exchange seems to have become one of being scammed out of £10 of data, tracking and inferences for every £1 of value received. It's years since it's felt like a fair and equitable exchange - like search in exchange for simple keyword ads without tracking.
This. Your data is already a currency. By visiting and consuming content you’re effectively paying for that content with whatever data they’re collecting.
Maybe this transaction needs to be somehow more explicit?
This is the point: if we are already paying (implicitly) for these services, make me the one who can control it and give me the choice to opt out if I don't want to.
I know this is a really unpopular opinion, but what I don't get is why all this fuss over personal data? So what if Facebook records what sites you've been to? If I visit Starbucks, when I go to order my latte the owners/baristas remember my name and face and maybe even what drink I ordered - you certainly don't mind that, right?
People aren't normally sensitive to these things: for example, the government on average takes 1/3 of people's incomes (thousands, tens of thousands of dollars) without the slightest hint of complaint. But when a company records where you've been online (at $0 cost to you) it's all of a sudden a disaster?
I genuinely struggle to understand this. If someone could explain it, that would be interesting.
Others explained, let's do a simple experiment instead: I see you are posting as "thorwasdfasdf". Why don't you instead not only use your real name, but also post your salary, profession, address, current and past lovers, intimate messages with them, what you like, what you don't, fights you had over electronic messaging...
What's the hold up? Is it not the same as the barista at Starbucks knowing your name and what coffee you like?
(Also, I don't get the comparison to taxes. You do realize that taxes pay for your infrastructure, safety, health - at least indirectly through regulation, even if your country does not have public health care - and justice system? But that's not really relevant to the topic here.)
I think there is a huge difference between having this info publicly accessible where people can judge and mock you for it, and simply having it used statistically to sell you things. I don't care what the computer thinks about me.
I don't think your statement works. First, the data is not just stored as part of a statistic. There are people who have access to the raw data (not to mention unintentional leaking). Secondly, the corporations harboring that data might not just use it to "sell you things". And then that alone would be worth a discussion, because how far is someone allowed to go to make you buy something?
Imagine that an entity knows your worst fears, deepest desires, biggest perversions, most sensitive insecurities, the people you love, the people you hate, the things you covet.
Now let's pretend that they know how to use this information to change your opinion. And not just your opinion, but the opinion of however many people fit their targeted persuasion campaign. Then understand that the moment the data is sold to another group, they have the same power. The new group might want to do things more sinister than sell you designer jeans.
Finally, realize that the information will last longer than you will. It'll probably be your legacy.
I feel like this is a bit of superfluous fear mongering. We are somewhat in control of ourselves, and should be expected to take responsibility for our actions. And this is just another way of using ads to attempt to manipulate us. Over time we will learn to adjust to the influence of ads, just as we have done before. I don't trust the government to fix this entirely, so I think it's more important to learn to identify these persuasions so they don't influence our actions.
Where's the fear mongering? This has already happened. Look at fake news. Look at Russia manipulating people. Ads work; there is no way to adjust to their influence other than refusing to allow them into your life at all. Even then, there are many ads you can't avoid online or in real life.
A couple of other answers are a bit over the top, so I’ll give it a go too.
When you go to Starbucks you make an active choice to give the baristas your name (or any name) without them checking it’s actually yours.
What companies like Facebook and Google do is record your every move online with no effort or repeated decisions on your part on what to share with them. They, and whoever they sell your data to, can then build a profile around whatever aspect of your interests they choose, which will consist of associated risks and gains. That’s quite scary to me because they probably know more about some things about me than my closest friends and family do.
If someone without good intentions, someone who wants something from me, someone I’ve wronged, someone who could just gain something from me has that data they can use it to manipulate me.
I'll give it a shot. Yeah, so you don't give a crap about being tracked everywhere, having your every inclination, desire, thought, intention, or deed recorded. You're fine with them knowing everything about you, about your family, friends, associates, acquaintances, and all their doings and preferences too. But then one day you wake up and you realize that your entire existence is being mined to corral you into a corridor of sameness, conformity, and malaise. You don't think anything edgy or dangerous, don't do anything remotely controversial or embarrassing. They mostly track you to monetize your sameness, your complete and utter lack of originality, your total lack of value to them except as cattle to be fed a diet of carrots and sticks; ads and special offers. You don't mind the digital eyes stalking you everywhere from the shadows. You don't mind having your complete existence watched over for what is, for now, seemingly innocuous if slightly annoying cajoling to buy more crap.
But you live a charmed life without imagination. Without dangerous ideas. Who wouldn't hurt a soul. To you, your life couldn't possibly matter. To you, you're one of the herd. You like it. Blending right in.
Enjoy your charmed life of complete and utter indifference to the digital dystopia being erected around you, beyond your control. You probably won't do anything edgy or interesting, because you're caught in that loop that they love you to be in. Inured and desensitized, caught in a self-reinforcing loop made by the background of your mind, constructed by your subconscious that knows that you're being watched - all the time.
For your sakes I hope you never stumble on a crime, disagree with prevailing political opinions, or speak up against the powers that be. Or that you look like someone who might. Or are close to someone who might. Better steer clear of sex toys, risky behavior and embarrassing maladies, live your bland existence under the watchful eye of the overlords who seem harmless, if only a little greedy.
Don't live your life any way that makes you unique and don't make yourself a target! Don't bother to use your imagination! Keep your lips off that whistle! And make sure to vote, or not, because who the fuck cares how society treats its citizens, amirite?
And don't bother looking back to the expectation of privacy that, just a generation ago, was some kind of normal. Or to the governments and power structures that wielded tyranny over people and engaged in genocide, to the documents and bills of rights that were put in place to restrict that tyranny, and to how thoroughly the founders understood that people just have a right to be free. But lattes!
I think what you're saying is that Facebook makes it easier for the government to come after you if you blow a whistle. But if the government came after you, I think they'd be able to get you if they really wanted you.
And basically, that FB monitoring reduces our willingness to share who we really are (at least the edgy non-conformist stuff)
For sex toys and other non-conformist stuff, sign out of Facebook and use Google in incognito mode. You don't want that stuff on Facebook anyway.
No, what I am saying is that we have totally normalized criminal stalking in the digital age. If you think incognito mode is going to save you, I suggest you dig deeper. And turn off your location history and high-accuracy mode! And WiFi! Seriously, you should basically assume your cellphone is a surveillance device, because it is.
I like it for sure, but let me take devil's advocate position! :)
You've got a value scale telegraphed through your essay: edgy and dangerous is much more what a human should be rather than safe, complacent, comfortable.
I think it's something you desire to be/have, and that's fine, but you wouldn't say that unless you felt that this is something far, far away from what you are now. You must feel that you are unimaginative and more of a consumer than a producer. I wouldn't blame you for feeling that way...one of the neat tricks of our system is that we are supposed to individuate deeply (dangerously, rowr!), however every single way to do it is co-opted by some marketing identity. Try going to do some yoga in just ordinary pants not branded as a "lifestyle."
If you feel like a non-entity in this way, it's good to have an enemy. A "man" to keep you down. This is the "they" in your essay. And indeed, he runs rampant through all of our thoughts.
But in reality, they is you. You participate in this internet economy. You take the "free" things that are offered. What I mean is that this system serves you more than you want to admit: it gives you a reason to fail to find out who you really are.
Because, be honest, do you really want to be this edgy and dangerous person you are putting forth as an ideal? Are those people the best dads to their kids, the best wives to their husbands? Isn't there also goodness in the individual who shows up at the median of all these increasingly insidious schemes to harvest data?
Maybe, just maybe, this character shows up as so bland and useless because her attention is focused elsewhere. She doesn't care that she's tracked. Her opinions are bland because she doesn't strive to "cut a profile" in a place which would be visible to observers in the system.
It could just as well be that true freedom is in that.
And further, who is more free: the one caught and victimized by one of these totalitarian webs you describe (and they are scary), or the one who is so savvy about avoiding the webs, the mistakes, the many trip wires that eventually lead to disgrace, prison, whatever.
So I have been caught. So what? Was that the purpose of my life? To adroitly avoid the traps of my enemies? I suggest it's possible to not care about them. About what they do every moment.
I think life may lie in a completely different direction.
Others have already provided good answers. I'll add my 2 cents:
Assume that the companies are ethical with your data today - that they only use algorithmic techniques to figure out which advertisements to show you.
What guarantee do you have that these companies will be around for the next 20 years? What guarantee do you have that somebody won't acquire these companies when their stocks are languishing - and that the new masters over this data will not abuse it?
Who owns Yahoo mail now? Who owns MySpace now? What are they doing with all that data? What incentives do they have to not abuse it?
The usefulness of your personal data has an extremely long half-life. I'd guess it is much, much longer than the average life-span of an internet company.
The appropriate analogy would be if it were a regular occurrence for these baristas to be jumped after a shift and the contents of their brains - your face and latte order - written down and sold to other criminals the world over.
I don't mind the big bad gobmint taking my money, I wish they'd take more to be honest. Public infrastructure needs improving and safety nets need strengthening. On the other hand, advertisers don't need my purchase history, or any information about me, and I'd rather not let them stockpile that as even if they were pure and morally righteous, they can't be trusted to make secure databases (no one can).
"Your data is worthless, everyone's data is priceless."
Taxes are unrelated, and people at your coffee shop recognizing you is nothing compared to the lengths Facebook and Google go to for vacuuming up any and all personal data.
I've been thinking that instead of paying individuals, the government should levy a tax on PII that is stored. A $ per person identified/fingerprinted would be interesting.
This would allow ethical companies like Firefox and Signal to operate at much lower margins than Chrome and Messenger.
I like this. One worry I have with paying individuals is that it becomes predatory for poorer people. If you struggle come back-to-school season, it might be tempting to let companies vacuum up your data for a quick buck.
Sundar Pichai touched on this recently, criticizing Apple for making privacy a "luxury good" (of course, Google doesn't give you privacy at any price). We should make companies value our privacy in the first place, not commoditize it.
> Sundar Pichai touched on this recently, criticizing Apple for making privacy a "luxury good" (of course, Google doesn't give you privacy at any price).
Heh, yes, clever spin on his part: it's expensive primarily because the default is actors (like Google) working so hard against it.
I think that enforcing strong privacy creates a "herd immunity" similar to how vaccines work. Collecting vast information on individuals allows companies to work in the disinterest of the public.
I also think that services that have become essential to receive public services like email should be regulated as such, with availability and cost guarantees. That's a whole digression on eInfrastructure though....
That sounds self-defeating: auditing it for compliance would expose the data to the government and everyone involved in the compliance layers. Not to mention PII can be composite. Knowing you are 26 and in, say, New York City doesn't tell much. Now add "once worked at a Quiznos in 2010" and "born July 19th" and it starts narrowing things down further.
This is not a new movement, and Tim Berners-Lee's SOLID is good but not the first attempt to address this.
The first I heard of something like this was in the Berkman Identity Group, which grew into the Identity Commons. At the time the movement was called Vendor-Relationship-Management (VRM), an inversion of the CRM practice where customers controlled the information and could be incentivized in various ways (discounts, paid, etc.) to share that information for agreed upon durations.
This movement was one of the drivers for the link contract work done as part of the Extensible Data Interchange (XDI) OASIS technical committee.
I don't know if this is the first instance of this kind of movement, but it was the first I heard of it.
Doc Searls reports another recent development: VRM is alive and well, under a new acronym that plays on B2B, Me2B.
Source: http://blogs.harvard.edu/vrm/
I used to support this point of view, but after the Facebook VPN scandal [0][1] I think this approach will do more harm than good. It's a good way to make sure only poor people lose their privacy, since those who can afford it will use other platforms that charge a fee in exchange for a private experience (say, no ads, or better privacy defaults).
Arguably, this is already happening with Apple, positioning themselves as privacy-aware, but charging a huge markup for it.
Why would they pay you for your data? How much is it worth?
If you click on 0 ads per year, you generate $0 for Facebook and therefore your data is worth $0.
For a personally targeted (using your data) ad click on FB, you're looking at ~$3.25.
For a web-targeted ad click on a niche website, you're probably looking at $0.50-$2.50.
In other words, your "data" is worth $0.75-$2.75 per click of revenue.
FB spends 17% of revenue on direct costs and another 18% on marketing. Let's ignore R&D improvements and assume 35% of costs to serve that revenue.
Profit per click for FB = $0.49-$1.79 per click.
Please consider that these clicks must have real intent behind them. In other words, there should be at least 10% chance you'd actually buy that product.
In other words, your data is not worth that much money.
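Spelling out the arithmetic above (these are rough estimates, not official Facebook numbers):

    # Back-of-envelope, using the figures given above.
    fb_personalized_cpc = 3.25                       # ~revenue per personally targeted click on FB
    generic_cpc_low, generic_cpc_high = 0.50, 2.50   # ~revenue per web-targeted click elsewhere

    # "Value of your data" per click = uplift of personalized over generic targeting.
    data_value_low = fb_personalized_cpc - generic_cpc_high
    data_value_high = fb_personalized_cpc - generic_cpc_low
    print(data_value_low, data_value_high)           # 0.75 2.75 -- the "$0.75-$2.75 per click" above

    cost_share = 0.35                                # 17% direct costs + 18% marketing
    profit_low = data_value_low * (1 - cost_share)
    profit_high = data_value_high * (1 - cost_share)
    print(profit_low, profit_high)                   # roughly 0.49 and 1.79, matching the figures above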
...
On the flip side, FB should offer a paid ad-free version of their services but who honestly would pay $10/mo for FB + Instagram etc.?
> On the flip side, FB should offer a paid ad-free version of their services but who honestly would pay $10/mo for FB + Instagram etc.?
IIRC they don't make anywhere near $10/user/month. If you gave me the option to pay, say, $25/year for ad-free, tracking-free, and it was implemented in some way that I could trust, then I might be willing to participate with FB again. I'm hoping, actually, that someone will come along with exactly this business model so that we can have social networking for friends and family without the dark patterns. Needs to get everyone on board, of course, which is the hard part.
There's an adverse selection issue. Let's say you make $10/month/user from ads, and offer the option to pay $10/month to not have ads. Users with less money are less likely to take you up on that, and are also less valuable as an advertising audience, so you start bringing in much less per month per ads-only user.
Advertising effectively allows a company to charge richer users more, which you lose when you switch to a flat fee.
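A toy model of that adverse-selection effect (all numbers invented):

    # Toy adverse selection: the users most likely to pay for the ad-free tier
    # are also the ones advertisers value most, so total revenue can fall once
    # the paid option exists, even though some users now pay directly.
    users = [
        {"ad_value_per_month": 25.0, "will_pay_10": True},   # affluent, heavily targeted
        {"ad_value_per_month": 12.0, "will_pay_10": True},
        {"ad_value_per_month": 6.0,  "will_pay_10": False},
        {"ad_value_per_month": 3.0,  "will_pay_10": False},
    ]

    before = sum(u["ad_value_per_month"] for u in users)
    after = sum(10.0 if u["will_pay_10"] else u["ad_value_per_month"] for u in users)
    print(before, after)  # 46.0 29.0 -- revenue falls despite the new paid tier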
It seems like a chicken & egg problem, in some ways. A business that was built more traditionally than a unicorn could still make good money at $10/user/month, certainly it's possible to build Facebook-level social networking without employing nearly as many developers as they do. But without the network effect, the value is not there. I don't want to pay to be by myself.
Maybe it could work as a SaaS, selling distinct social FB-style closed social networks to families, with a future option to interconnect those.
Probably someone has already tried this and failed. I'm not an idea guy, just a coder ;-)
I view it kind of like Craigslist. If I put something there for free, tons of people are interested. If I charge even a dollar, the interest drops by an order of magnitude.
Well, that's better than I remember :-). Or I'm thinking worldwide average. I'd probably still do it if the software was good enough and the network complete enough.
I agree. The whole point is that they are already providing a service to you in exchange for your data—a service that is quite expensive to run. If you want to sell somebody your data, that's fine. Pick how you want to be paid. Services or cash.
Your data is worth whatever someone will pay. It isn't just about ad revenue, it's about the market research that becomes a behavioral futures market. This app gives you a personal data value estimate (average it's a couple hundred dollars/year):
https://app.fastgarden.io/assessment
That is how much the market values data, but markets are not the ultimate decider of worth. Additionally, different participants in markets place different values on the goods being traded, otherwise trade would be useless - the participants consuming our privacy right now value that privacy incredibly cheaply.
I think I looked that up like 4 years ago and found a few platforms where you could resell your data - thing is it wasn't worth much, like you could make 4 bucks a month with regular smartphone usage.
Even if the data we generate is profitable it's only so when aggregated (e.g. across thousands / millions of users) so the value of a given person's data is heavily diluted.
Then again, maybe it isn't worth that much because the price is being set by the people collecting the data and not by the users - if there is a paradigm shift in which people become aware that they're basically working for free, maybe the value would increase?
(I used to work at a company that was basically harvesting some users' GPS data - via an SDK, without their explicit consent - and using it to produce consumer studies for retail companies, so it made me think.)
I strongly disagree in trying to actually cost out the usage of our personal data. I think the right to privacy should be protected and that the extreme compromise of your personal thoughts is abhorrent. A market around this data would just lead to a normalization of this behavior into the socially acceptable realm. Similar to how we don't allow people to sell themselves into slavery we should prevent people from selling off their personal history.
> Do you remember when you were looking for good pair of hiking boots on Google and almost imminently you start seeing promoted posts about nice shoes discounts on Facebook and Twitter.
I don't think this happens?
> Basically what happens is that Google without your (explicit) confirmation collects all history of you search requests and sell it to another company.
This definitely does not happen. Google doesn't sell search history to anyone else. It may be the case that Google itself remarkets things to you. But I don't think so, since how would they get paid in that transaction? Through affiliate links? Doubtful.
Instead, I think what actually happens is that people make a search, they click a link, and target of that link (be it Amazon or some other store) associates their profile with interest in a particular product, then remarkets to that profile on Facebook.
Worth noting that this is why Andrew Yang justifies a tax-funded Universal Basic Income proposal in the US: if corporate research makes money off citizens' data, then citizens should be compensated accordingly.
Why not sell our data and use that money to buy non-free services? That way we can at least control what we give, and services will compete not only on quality of service but also on price.
Be careful, as you legitimize a practice that has been very secretive and dishonest in tech, and in many cases should be considered criminal.
I suggest that we're tremendously outclassed in understanding how this all works, and how to find angles or "evil" in it.
Also, it could be somewhat similar to what we get with class-action settlements that are just a minor cost of doing business to let the wrongdoer off the hook, while paying some middleperson to facilitate that.
I’d rather be given the option to just straight up pay for a service with assurance that my personal data isn’t being used. That is what I would like to “make them” do.
I read a statistic that claimed Facebook would only need a single payment of around $4 from its users to operate at the same margins without selling their data. It seems like presenting users with the option of paying $10 for premium privacy could end up making companies more money. I don't use FB, but I'd sure use Google products more often if the option were available. Maybe we're edge-case users, but I surely don't feel like one. That said, it's confusing that the option doesn't exist, because from a purely capitalist perspective - the only perspective these companies have - it seems to make sense.
Could we just not let anyone use our personal data? I don't need to, nor do I want to, lease my personal data to Google or Facebook. I very much want them not to use my personal data.
Note that I use neither Google search nor Facebook, but both Google and Facebook still collect my personal data, for example from my use of WhatsApp, my Android phone, the Google Play Store, etc. That really has to stop.
>Note that I use neither Google search nor Facebook, but both Google and Facebook still collect my personal data, for example from my use of WhatsApp, my Android phone, the Google Play Store, etc.
That's a non-sequitur... WhatsApp is Facebook. The Google Play Store is Google. You are using Facebook's and Google's services. Whether you're using Google search or "Facebook" (the product, not the company) is irrelevant.
Start by figuring out how much YOUR data is worth - that's the hardest part of all this. Individuals will need to create scarcity like the big ad networks do today, otherwise a market flooded with copies of personal data will have very little value. https://app.fastgarden.io/assessment
The initial concern of "the service would have to be paid for" is insurmountable to this generation.
If the average grandma and high-school kid had to pay per query to Google in return for ad-free search (Google did have this product with CSE), what minuscule percentage of users would pay?
Brave's Basic Attention Token tackles this problem in a logical manner.
If anyone is interested in some long-form writing, Vi Hart posted a very well-reasoned article about the same subject (and much more besides) a few weeks ago: https://theartofresearch.org/ai-ubi-and-data/
In theory I agree, but the somewhat depressing realization is that your data is not especially valuable in and of itself.
The marginal value of your data with respect to 10,000,000 other people's data is almost (but not quite) nothing. The value is in the whole.
It might work well if significantly large groups of people got together and collectively sold their data - say 100,000 or more. That would be kind of cool: "hey, we are 100,000 people with X demographics and are willing to share Y data for Z dollars."
Doing it on an individual basis may be a symbolic victory, but that would likely be it. "Personal" data has a huge asymmetry in value. There is some Danny Kahneman in here somewhere - I bet people would value their own data 10x what they value a random person's at.
In my opinion people would on average refuse to sell their data individually because what it is actually worth would almost certainly be lower (probably much lower) than they'd imagine, and that sounds like it would make people sad not happy.
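To put a rough number on the "marginal value" point, here's a toy model that assumes dataset value grows sublinearly with size (the square-root form is purely an assumption for illustration):

    # Toy illustration: if a dataset's value grows sublinearly, e.g. V(n) = k * sqrt(n),
    # then any one person's marginal contribution to a 10M-person dataset is tiny.
    import math

    def dataset_value(n_users: int, k: float = 10.0) -> float:
        return k * math.sqrt(n_users)

    n = 10_000_000
    marginal = dataset_value(n) - dataset_value(n - 1)
    print(f"whole dataset: ${dataset_value(n):,.0f}")       # about $31,623
    print(f"your marginal contribution: ${marginal:.4f}")   # about $0.0016 -- nearly nothing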
The thing is, without those companies your data is absolutely worthless; users are not holding any cards. To own your data, put it in RSS format and let competitors bid to license it from you.
(Also, the author doesn't seem to know how Google and FB work.)
Under the spirit of capitalism, if privacy is such a popular product that so many consumers want, then why not make it available for them to purchase?
For Google users, for example, the free tier would allow each user account 5 searches per month that will not be tracked and recorded, so essentially users will have an option, right before the moment of each search, to indicate whether this specific search is going to be "incognito" or "invisible" (perhaps by just ticking a checkbox).
Higher tiers offering greater amounts of private searches per month will simply cost more to purchase, should those consumers desire. Ran out of private searches? Top it up just like prepaid phone credit! This way those users who are extremely privacy conscious will have a solution to protect their own privacy, while Google has invented a new revenue stream and can even increase profits from their valuable services.
The next generation Capitalism 2.0 will now enjoy privacy becoming a new cash cow industry overnight. The question then will be how much are people willing to pay for an unlimited account?
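As a sketch, the quota could be as simple as a prepaid counter per account (the tier size and top-up mechanics here are the hypothetical ones described above, not an actual Google product):

    # Sketch of the prepaid "private search" quota imagined above.
    class PrivateSearchQuota:
        FREE_TIER_PER_MONTH = 5  # hypothetical free allowance

        def __init__(self):
            self.remaining = self.FREE_TIER_PER_MONTH

        def top_up(self, searches: int) -> None:
            self.remaining += searches  # bought like prepaid phone credit

        def search(self, query: str, private: bool) -> str:
            if private:
                if self.remaining == 0:
                    return "quota exhausted: top up or search normally"
                self.remaining -= 1
                return f"private search for {query!r} (not logged to your profile)"
            return f"normal search for {query!r} (logged, used for ads)"

    q = PrivateSearchQuota()
    print(q.search("hiking boots", private=True))
    print(q.remaining)  # 4 private searches left this month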
I think the hard truth is that offering privacy at a premium will still detract total net revenue. Nobody really cares about a single person's search history. It's the conclusions you can extract from hundreds of millions of peoples' histories that makes data collection so powerful. The more people you allow to opt out from that collection, the less powerful your ad network becomes and the less big money spenders you will catch as a result. One big money spender can easily spend as much as ten thousand individuals with a single campaign. It's just not likely that there will be enough people willing to pay compared to giving them the services for free in exchange for their data.
> For Google users example, free-tier will allow each user account 5 searches per month that will not be tracked and recorded
You can already open an incognito / private browsing window if you want to search without it contributing to your profile. On the other hand, modern search works better because it can build that sort of profile.
(Disclosure: I work for Google, on ads. Commenting only as myself.)
> On the other hand, modern search works better because it can build that sort of profile.
Google has become too incompetent to even take the query (all of it, without ignoring half of it) into account. I don't know what Google search has become better for because it sure isn't finding things.
This seems… very dystopian. Would we be okay with e.g. landlords placing cameras in the bathrooms of apartments for rent unless the tenant paid an extra fee? What if it were framed as a discount and targeted low-income housing?
Why should we allow our privacy to be commoditized, instead of enshrining it as a basic human right?
It's not mandatory. Everything else would still remain the same for free users just like what we already have, this is only for those who prefer not to be included in the data so now they will have the option to pay instead.
Obviously we cannot expect Google to provide their services for free as it's very costly to maintain such global infrastructure and their team of engineers, therefore in turn Google would need user data in order to provide advertising products and generate the required revenue. We must pay for using Google services in one way or another, and right now we are paying it by giving up our personal data.
In the end this will simply serve as another payment option for those who value their privacy more than cash. I suspect the majority of the population would still remain on the free tier. However, with more options available, at least the public will no longer complain about potential privacy issues, and Google can also take much of the unnecessary heat and focus off their back. There are just so many different ways Google can tweak and optimize these offers to satisfy public demand for privacy.
The current advertising in exchange for privacy model obviously does not work for all consumers so it's time to evolve.
You're presupposing that privacy is a commodity, but I'm suggesting that it should be more sacred than that.
> In the end this will simply serve as another payment option for those who value their privacy more than cash.
The problem with this is it makes privacy a luxury. Should those living paycheck to paycheck have to choose between their privacy and finding things on the Internet?
If privacy is a commodity, then this is fine. But then, so is the status quo: privacy is currency to be exchanged for services, and companies should stretch that currency as far as possible. There is no sound moral argument against what Google/Facebook/etc are doing. The current advertising model may not work for everyone, but who cares? The market has spoken.
If privacy is a human right, then this is obviously insufficient. We can't allow companies to exploit people who can't afford it by making it opt-in; we need to strictly enforce it for everyone.
Privacy is not an extra or a luxury; it is a right. If we keep up this commoditization process, in a few years we will be talking about an anti-discrimination price, or a true-information price.
The main reason why not is that it is a matter of trust. Just because you are paying is no guarantee that you aren't also the product. Just look at how long cable remained ad-free.