Meta enforces purpose limitation via Privacy Aware Infrastructure at scale (fb.com)

This looks very much like it was created (at least partly) in response to the Digital Markets Act, which prohibits companies regulated under it ("Gatekeepers", https://digital-markets-act.ec.europa.eu/gatekeepers_en) from combining data across different services without the user's consent.

Fines for DMA violations run up to 10% of global revenue (not profit) for the first violation and up to 20% for repeat violations, plus other penalties and remedies. There are also ongoing periodic penalties of up to 5% of average daily revenue until the company is brought into compliance.
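
A minimal sketch of what those caps amount to (illustrative only; the caps come from Articles 30 and 31 of the DMA, and actual fines are discretionary amounts up to them):

    def dma_fine_cap(global_annual_revenue: float, repeat: bool = False) -> float:
        """Cap on a one-off fine: 10% of worldwide turnover, 20% if repeated."""
        return global_annual_revenue * (0.20 if repeat else 0.10)

    def dma_daily_penalty_cap(global_annual_revenue: float) -> float:
        """Cap on the periodic penalty while non-compliant:
        5% of average daily worldwide turnover, per day."""
        return (global_annual_revenue / 365) * 0.05

    # A gatekeeper with $100B in annual revenue:
    print(dma_fine_cap(100e9))           # 10,000,000,000 cap, first violation
    print(dma_fine_cap(100e9, True))     # 20,000,000,000 cap, repeat violation
    print(dma_daily_penalty_cap(100e9))  # ~13,700,000 per day of non-compliance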

My impression is that the DMA is taken quite seriously by Big Tech, especially given that it's clear they're its direct targets.


It feels like with fixed-value fines, most businesses just consider fines a cost of doing business, but damn, "10% of global revenue" is insanely dangerous.

The USA would be better off moving away from fixed-cost fines and toward dynamically calculated fines based on individual and organizational wealth. That way the law binds the poor and the wealthy alike. It would also erode the saying that "fines are the cost of doing business".

Sure - if you want zero innovation.

If you're a large enough entity operating in multiple jurisdictions around the world, you're exposed to such a byzantine labyrinth of laws and regulations that it's impossible not to be in violation of some rule or law, somewhere, at all times.

Regulators know this. They use discretion to decide when and how to enforce these rules. Some are seldom enforced, others enforced with reckless abandon depending on the administrator, market conditions, political winds, etc.

Every regulated financial institution, for example, and I mean every single one, will be fined for something if it is big enough and has been around long enough.

Basing fines on worldwide revenue is ridiculous: it ensures absolutely no risk-taking happens, because innovation happens at the margin. Tomorrow's massive industry is today's grey area.


The industry has spent decades moving revenue worldwide via complex financial engineering to avoid taxation. This is the end result of that innovation.

I can't believe that basing fines on the wealth of the criminal could possibly lead to zero innovation.

There's a difference between the risk-taking of doing something new and innovative and the risk-taking of riding the line between legal and illegal. I, for one, would much prefer an environment where businesses were a little more afraid of breaking the law to exploit people/environments/other businesses/etc.


“Up to” gives a nice bludgeon to hit the repeat or clearly malicious offenders.

It’s a response to the “insanely dangerous” actions of the industry.


Unfortunately, it's very unequal: a firm with a 5% profit margin would be hit way harder by "10% of global revenue" than a firm with a 50% profit margin.

That’s likely considered when assessing the fines.

I'd like to see "10% of revenue or 100% of profit, whichever is greater" (scaled accordingly, i.e. 20%/200% for repeat offenses).
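
A quick sketch of what that proposal computes (a hypothetical rule from this thread, not anything in the DMA):

    def proposed_fine(revenue: float, profit: float, repeat: bool = False) -> float:
        """Hypothetical rule: 10% of revenue or 100% of profit, whichever
        is greater, doubled for repeat offenses."""
        return (2 if repeat else 1) * max(0.10 * revenue, 1.00 * profit)

    # Two firms with the same $10B revenue but different margins:
    print(proposed_fine(10e9, 0.5e9))  # 1.0e9: at 5% margin the revenue term binds
    print(proposed_fine(10e9, 5e9))    # 5.0e9: at 50% margin the profit term binds

Note how the greater-of rule scales automatically with margin, which addresses the 5%-vs-50% objection above.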

10% of global revenue is practically unenforceable and would result in major tech companies simply ignoring the fine and pulling out of the EU.

"Forcing" companies to pull out in this manner would instantly ignite a trade war between the US and the EU.

This is a no-win situation for the EU - if WhatsApp is shut down in the EU, for instance, it will cause massive problems, loss of life and money, etc. Whatever new solution arrives on the scene will struggle to turn a profit and probably be even less compliant with regulations for many years due to the sheer amount of regulation that has slowly accumulated.


> if WhatsApp is shut down in the EU, for instance, it will cause massive problems, loss of life and money, etc.

Please explain how WhatsApp shutting down in the EU would cause "massive problems, loss of life and money" rather than e.g. Signal having two weeks of instability due to 200M new users and that being the end of it?


Countless people rely on WhatsApp. Historically, when WhatsApp has gone down even for a short time, we have seen massive disruption and some loss of life.

Emergency services in some locations rely on WhatsApp, and many businesses use WhatsApp to communicate with customers. You'd have to rebuild your whole network on another app; Signal may be that app, or the ecosystem might simply fracture.


So what do you think the solution is then?

If the fines are small, people complain that they have no teeth and that corporations just see them as a cost of doing business.

If the fines do actually have teeth, we have your comment here.

What would you consider acceptable?


"Acceptable" is debatable. Is the company genuinely trying to fix the issue? Can they demonstrate it (provide PRs, resourcing information, updates, etc?)

The real solution is not over-regulating the internet. This is how we end up with the comparatively barren tech landscape of the EU, or the trend towards authoritarianism in Brazil, where a massive platform was banned simply because some small and questionable rulings were ignored.

The innovation aspect really bothers me. As a SWE, you definitely feel the development burden of so much regulation; it is huge, and that's with the might of a large company and everything that brings to support you. I can't imagine startups will ever even attempt to meet these regulations.

My own experience dealing with regulation in big tech makes me believe this is one of the chief reasons the EU has experienced technological stagnation. Maybe good in the short term, very bad for humanity in the long term.


> I can't imagine startups will ever even attempt to meet these regulations.

Most of the DMA targets only "gatekeepers" (i.e. big tech), not small startups.


Call me a skeptic but this whole charade about privacy aware infrastructure seems more likely to be about plausible deniability rather than genuine compliance.

If it’s truly such a complex and monumental effort for a global company like Meta to adhere to these standards, how the hell does the EU plan to detect non-compliance?

Many of us here are keenly aware how easily private data can be laundered through ML models, obfuscating provenance beyond reversibility.

Incentivizing whistleblowers by sharing a % of the fine could be a good start, but as far as I'm aware no such provision exists in the DMA.


> Call me a skeptic but this whole charade about privacy aware infrastructure seems more likely to be about plausible deniability rather than genuine compliance.

It actually is that difficult. I have worked at a few big tech companies now on these projects and it would be insane to expect a startup to do something similar.

I do not think these regulations were crafted with any semblance of care for feasibility or implementation burden.

> If it’s truly such a complex and monumental effort for a global company like Meta to adhere to these standards, how the hell does the EU plan to detect non-compliance?

These sorts of regulations are often enforced by demanding internal audits.

Most of the evidence used against Meta in fines has actually been disclosed by Meta.


Ambitious effort! Was this motivated by regulation?

It would be interesting to compare the capabilities and policy challenges of at-scale data privacy with patterns in single-node systems like SELinux and AppArmor, which have historically been daunting.

Sqrrl's (now Amazon) work on Apache Accumulo produced tools for access-control plumbing in large datasets: https://accumulo.apache.org/

> Every Accumulo key/value pair has its own security label which limits query results based off user authorizations.
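
A minimal Python sketch of that cell-level idea (conceptual only: Accumulo itself is Java, and its real API expresses this with ColumnVisibility expressions checked against a scanner's Authorizations; the helper below is invented):

    import re

    # Toy cell-level visibility: each label is a boolean expression over
    # authorization tokens, e.g. "pii&(legal|audit)". Python's & and | work
    # on booleans, so substituting each token with True/False lets us eval.
    def visible(label: str, auths: set) -> bool:
        expr = re.sub(r"\w+", lambda m: str(m.group(0) in auths), label)
        return eval(expr)  # fine for a toy; a real system parses the expression

    # Every key/value pair carries its own security label, as in Accumulo.
    table = [
        ("user123 profile:email", "pii&(legal|audit)", "a@example.com"),
        ("user123 profile:bio",   "public",            "hello"),
    ]

    def scan(auths: set):
        return [(k, v) for k, label, v in table if visible(label, auths)]

    print(scan({"public"}))                  # only the bio cell
    print(scan({"public", "pii", "audit"}))  # both cells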


Likely for compliance with the EU's Digital Markets Act (DMA), based on what I've seen other impacted companies build.

Meanwhile, I can't use Threads without tying it to Instagram.

And from there, in turn, FB: https://webapps.stackexchange.com/questions/108777/prevent-i...


Or Facebook without Messenger (not that I particularly want/like either).

Lovely that their blog with privacy propaganda has a cookie banner that is not compliant with any privacy law in any way. Says everything about their efforts, I guess.

It was a trap. Clickbait to get every privacy-minded tech enthusiast on their site. Now, simply because you interacted with the page, Facebook, in their opinion, gets a blank cheque to track you wherever they wish, on- and off-site.

Having actually worked for Meta in both security and privacy capacities, I guarantee you that it's really not that conspiratorial.

No one wrote this article with the intention of "trapping privacy-minded tech enthusiasts."

I mean no offense, but this sort of thinking (that an engineering blog is attempting to attack you) is unhinged. There is not some grand conspiracy. Companies like this are not the shadowy, highly-competent and absolutely evil entities you think they are. They are barely functional to begin with.


Yup, also work in big tech and confirm this.

One really just has to think through the situation rationally, even assuming the greediest of intentions:

> Clickbait to get every privacy-minded tech enthusiast on their site

Turns out the market of privacy-minded tech enthusiasts is tiny, and they hate clicking on ads. Trying to cajole this group into giving you money is like pulling teeth.

Understood.

Let's deploy the same set of company resources and effort on the other 99.99% of people in the marketplace, increase some efficiency by like 0.1%, and make waaaayyyy more money.


Having worked elsewhere, this. Every part of it. Especially the "barely functional".

Different parts of the company working together is hard/rare enough. Them conspiring together... forget it.


“Never attribute to malice that which is adequately explained by stupidity.”

https://en.m.wikipedia.org/wiki/Hanlon%27s_razor

Also, I don’t think that the parent comment was being serious.


I was indeed not very serious, and neither is the comment I would write in response to this:

-- Ah yes, Hanlon's razor, one of the CIA's more successful PsyOps. --

But then I was shocked to learn that the razor's namesake, Robert J. Hanlon, actually did work for the CIA, and now I don't know what to think.

https://wydaily.com/obits/2019/04/09/robert-j-bob-hanlon-70-...


The ratio of "people who have opinions about what google/meta/etc might be doing" vs "people who have actually worked privacy/security in google/meta/etc" is abysmally low.

Most of what's said by people who actually know what they're talking about is drowned out by low-effort, conspiratorial, semi-intellectual laziness.


Yeah, this is the main reason I stopped using Reddit when I entered the industry.

Taking it a step further: I frankly don't think normal people are positioned to make any decisions or hold any opinions strongly about tech. They are so misled by journalism it's not even funny.

My doctor friends feel similarly about medicine and how it's reported on (and the populace's common opinions on medicine). The average person/voter is immensely misled in basically every field they themselves are not an expert in.


> I mean no offense, but this sort of thinking (that an engineering blog is attempting to attack you) is unhinged. There is not some grand conspiracy.

“You know, since we're trying to Z but don't have Y, we could probably use X to get Y…” said no inventive engineer ever.

No conspiracy needed. This happens.


X to get Y happens.

A tech company using a blog to get whatever imaginary consent from random anonymous privacy-aware individuals is so many levels of unhinged that it makes absolutely no sense whatsoever.


The company wouldn't. Someone retroactively realizes they have the data, and then it does.

I'm certainly not saying it happened, or will happen, here. I'm saying it definitely happens.

This is why in regulated industries, there's an emphasis on "data minimization". Much like the principle of least privilege, but applied to whether you're letting your people or systems be exposed to it in the first place.

It's easy to follow a least-privilege policy if there's an actual technical control, not just an agreement, and even easier if the control is "I never had it, didn't derive it, and made sure I couldn't if I wanted to".

If you aren't collecting it for any use, even inadvertently, you can't retcon it into availability for alternative uses.
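
As a toy illustration of the "technical control, not just an agreement" point (a minimal sketch with invented names; Meta's PAI described in the article annotates and enforces purposes across entire data flows, not single fields):

    from dataclasses import dataclass, field

    @dataclass
    class PurposeLimitedField:
        # Structural control: code running under a non-allowed purpose never
        # receives the value, instead of merely promising not to use it.
        value: str
        allowed_purposes: frozenset
        access_log: list = field(default_factory=list)

        def read(self, purpose: str) -> str:
            self.access_log.append(purpose)  # auditable trail of every attempt
            if purpose not in self.allowed_purposes:
                raise PermissionError(f"purpose {purpose!r} not permitted")
            return self.value

    email = PurposeLimitedField("a@example.com", frozenset({"account_security"}))
    print(email.read("account_security"))  # OK, e.g. sending a login alert
    try:
        email.read("ads_targeting")        # never an allowed purpose
    except PermissionError as err:
        print(err)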


> Someone retroactively realizes they have the data, and then it does.

This simply isn't within the realm of reason.

Engineers at Meta have far more impactful problems to solve than attempting to reverse engineer the browsing habits of the 12 privacy-sensitive tech enthusiasts reading their engineering blog.

From a ROI/time perspective, it is far in the negative for a single junior Meta engineer to spend even 10-20 minutes investigating this. It literally is not worth anyone's time.


Can you explain?

An “accept”-only cookie notice isn't generally permissible in the EU.

They may be serving more compliant versions based on geolocation, though.

> To help personalize content, tailor and measure ads and provide a safer experience, we use cookies. By clicking or navigating the site, you agree to allow our collection of information on and off Facebook through cookies.


Many reasons:

- They are not asking for consent, there is just an ok button. - They assume consent when you navigate further on the site, this is not valid consent. - Consent needs to be for specific, well defined purposes. “help personalize content, tailor and measure ads and provide a safer experience” are three purposes in one and none of them are well defined. - They are probably already setting the cookies in your first request, when you have not seen any information yet (did not check)


What a strange article and initiative by Meta. So we're expected to believe that Meta is doing all this engineering out of some newly discovered concern for user privacy, and ignore the decades of blatant privacy violations, for which they've been fined numerous times? I suppose they've decided that the cost of this effort is less than the fines they would have to pay otherwise, so it probably makes business sense. It doesn't hurt as PR fodder to balance the negative press either.

But let's not be fooled. Advertising and user privacy are a zero-sum game. Adtech is a giant business today precisely because they've violated user privacy since the beginning, taking advantage of the fact that the average web user is either unaware of what they're giving up, or they just don't care. All these supposed privacy initiatives by adtech corporations are simply an answer to increased regulation and public awareness. Otherwise they would happily continue siphoning everyone's data without thinking twice about it. They actually still do for areas of their business that are not under the spotlight yet: shadow profiles, data broker transactions, etc.


My guess is that adherence to the ever-shifting sand dunes of local privacy regulations across the entire planet was such a burden that Meta has been forced to come up with a global, flexible, and broadly compliant solution.

Reminds me of jurisdiction-specific tax (etc.) configuration in ERP systems :)

Meta does correlate with privacy; you just have the correlation inverted.

You have privacy settings on almost all Meta apps. They have tools to make it hard to download personal pictures without uploader consent. They respond to privacy concerns constantly.

Maybe that's why you are confused: you read or listen to their responses to privacy concerns, and you focus on the concerns rather than the responses.


"They have tools to make it hard to download personal pictures without uploader consent"

What tools are you talking about? Facebook images can be saved with a plain right-click ("Save as"). Privacy rules can keep your photo from appearing on your profile, but tag a friend and their friends get to see it. A few years ago Facebook would show you the first picture of every private album if you went to a generic URL (photos?id=1234). Back in those days photo URLs were sequential, with hundreds of numbers skipped between photos, but it was easy enough to take the first photo of a private album and sequentially fetch every photo.

If you put your photos on Facebook, at some point someone you didn't realize had access to your photos.


You are right, Facebook still allows downloads, but WhatsApp, Instagram, Stories, and that kind of thing block downloads.

This resonates with me. I’ve found big tech to always give me reasonable, sometimes excessive control. I’ve also found “privacy” to be a thought-terminating device that prevents granular discussion of who needs what.

I also personally find it weird that so many people highlight the opposing nature of business and government as if it wasn’t enshrined in law.


> I suppose they've decided that the cost of this effort is less than the fines they would have to pay otherwise

The goal of any corporation is to generate profit for shareholders. It's precisely the role of the law to ensure that this is compatible with the public interest.


They're doing the bare minimum and solving that problem with technology. As if well-intended regulations could drive civilization forward and benefit all. Strange, isn't it?

> violated user privacy since the beginning

But why is there an expectation of privacy on the web, anyway? When browsing the web you're utilizing hundreds or thousands of other peoples' infrastructure, going to sites that are public or semi-public (aka "public spaces"), and otherwise interacting with strangers in shared spaces.

In the physical world, there is no common expectation of privacy in public spaces. If you go to a shopping mall, you may be observed by either the mall itself or its client stores, and this is not a "violation of your privacy". You elected to go to a public space, knowing you might or would be observed by any other occupant of that space. If you go to your friend's house, you may be observed by their security cameras, if they have them. This is also not really a violation of your privacy. You could also have a conversation with your friend about not being on their cameras, and maybe come to an agreement--presumably because they trust you. This isn't something that scales to hundreds of strangers, much less thousands or millions.

I think the core issue I see with the stance you appear to hold is this: privacy is not a fundamental feature of anything. Any interaction with another person involves the exchange of information, and the amount of information exchanged is neither fair nor necessarily finite. How much information is exchanged is not just a function of the purpose of the interaction. It's also determined by the abilities of all parties to observe and analyze the interaction. Trying to argue that privacy should be a fundamental right seems to ignore the obvious: that forcing any particular level of privacy requires making sacrifices that significantly impede the quality of interactions, especially those involving parties that do not necessarily trust each other.

If you care about privacy, interact with other privacy-minded individuals and establish a common set of norms amongst yourselves, and acknowledge that those norms will be different for every group. This is sane and defensible.

What isn't is demanding that the world adhere to your standards of privacy, while claiming that it's some sort of right that should be protected by someone other than yourself.

Understand that privacy cuts both ways: most of the world is apparently fine with having no expectation of privacy in public places--online or offline. They're okay with this because it makes antisocial behavior harder to hide, and makes pro-social behavior easier to verify. If everyone demanded there be an expectation of privacy in those public places, the world would be very different, and not in a good way. Distrust would be the norm, and cooperative endeavors would be bogged down by the endless need to make sure those you're working with are indeed who they say they are.

If you want good examples of what it looks like when privacy in public spaces is the norm on a large scale, look no further than the cryptocurrency world. Not exactly a shining example of a "pro-social culture".


> In the physical world, there is no common expectation of privacy in public spaces.

Oh, please. This is a false equivalence if I ever read one.

When you're in public nobody is building a profile on you. Nobody is tracking where you go, what you buy, how you behave, and what your interests are. Nobody is then storing this data and getting rich from it in perpetuity.

If privacy online were limited to companies knowing just your IP address and general location, and not storing this data, _that_ would be equivalent to public spaces in the real world. The fact it's not, and companies actively try to gather as much information as they can about you, often in scammy ways that force governments to come up with new regulations, where getting fined for violating them is just the cost of doing business... is beyond nefarious and beyond any reasonable defense.

> privacy is not a fundamental feature of anything

Huh? Privacy has been a fundamental human right for millennia, codified as part of many legal jurisdictions.

> Any interaction with another person involves the exchange of information, and the amount of information exchanged is neither fair nor necessarily finite.

Again, the difference is in how this information is used. The keywords here are _reach_ and _consent_. We implicitly consent to sharing a limited set of information when interacting with another person or in public, but when that information and its use goes far beyond what we implicitly agree to, then this needs to be clarified, and explicit consent must be given.

> Understand that privacy cuts both ways: most of the world is apparently fine with having no expectation of privacy in public places--online or offline.

This boils down to two things: a) most of the world doesn't understand the implications of the transaction they're a part of when they use online services. The use of their data is hidden behind cryptic privacy policies (which are often violated and don't explain the full picture), and data collection happens in obscure ways without any consent.

And b) even if they're aware of this, most users simply accept the transaction because "they have nothing to hide", or the value they get from the service is greater than their privacy concerns.

Users are often never given the option of an alternative business model, and even when they are, the data collection machine never stops. Companies know that the allure of getting something for "free" is far more profitable than putting up a paywall, and so they justify extracting as much value as they can from their users' data. We're stuck in an endless cycle of having "free" services where the business transaction is far more valuable to the company than its users.

And it's in the companies' best interest to keep the users, the general public and governments in the dark as far as the extent and details of how value is actually extracted. This is the most vile business model we've ever invented.

> If you want good examples of what it looks like when privacy in public spaces is the norm on a large scale, look no further than the cryptocurrency world.

What privacy? The blockchain of most cryptocurrencies is a public ledger where all transactions are visible to everyone. Transactions are done almost exclusively via exchanges that follow strict KYC regulations. So it's pretty easy to identify the person behind any transaction.

Following your logic, should we ban all cash transactions, since they're even more private than cryptocurrencies?

You have a twisted sense of the world, friend. My guess would be that you work for one of these companies, and this is how you rationalize the harm you're putting out in the world.


> Oh, please ... is beyond nefarious and any reasonable defense.

I don't know. This seems overly reductive. People build profiles on you all the time, just by looking at you. What you wear, how you talk, how you hold yourself--these things all present a large amount of information to anyone willing and able to observe it.

Companies invest large amounts of money into understanding customer movements in their stores, to better optimize product placement and things like checkout flow. Even sole proprietors will observe their customers' patterns to get a better understanding of how their store is performing. If a worker in a cafe sees the same person come in multiple times and order the same drink each time, is that not also "building a profile" on said customer?

A public IP address and a general location is what... maybe 12 bytes of information? You broadcast a lot more than 12 bytes of information when you pass through a public place, especially when you consider all the possible forms of information you could be broadcasting. You're being overly reductive to try to make your point, and I don't really buy it.

> Huh? Privacy has been a fundamental human right for millennia, codified as part of many legal jurisdictions.

Being "codified as part of many legal jurisdictions" does not make something a "fundamental right". To take a slightly extreme example, treating women as property is "codified as part of many legal jurisdictions" in the world today, and yet it would be insane to use that to argue that "treating women as property is a fundamental human right".

You may believe that privacy is a fundamental human right, but it's not because it's been codified in "many legal jurisdictions". The logic just doesn't follow.

> Again, the difference is in how this information is used. The keywords here are _reach_ and _consent_. We implicitly consent to sharing a limited set of information when interacting with another person or in public, but when that information and its use goes far beyond what we implicitly agree to, then this needs to be clarified, and explicit consent must be given.

When did "reach" and "consent" come into this? Why are these "keywords" here? Merely _existing_ in the world is consenting to sharing "private" information with other people (or even things!) in the world. It's unavoidable. I'd argue further that it's not just avoidable, but _necessary_ to the building of a functioning society--especially any when governed by any form of democracy. It's not a matter of consent, it's merely a fact of life.

You don't explicitly consent to any level of information sharing when you go out into the world, but some amount of it is unavoidable. In fact, I'd even go so far as to argue that it's impossible for you to ever know how much information you share when you venture into the world. You can never know how much someone else can deduce about you from any given interaction, because that would require knowing what only another person can know. You also can never know how far that information might spread--because again, this would require knowing others' intentions _before ever interacting with them_, which is impossible.

> This boils down to two things: a) most of the world doesn't understand the implications of the transaction they're a part of when they use online services. The use of their data is hidden behind cryptic privacy policies (which are often violated and don't explain the full picture), and data collection happens in obscure ways without any consent.

I would largely agree with this, because I believe this is how "public spaces" already function. Data collection always happens without your consent. If I pass you on the street and note that you're wearing an Apple Watch, I have obtained information about you that you did not (explicitly) consent to share with me. I can draw reasonable conclusions from that information that might inform me of other aspects of your life that you also did not (explicitly) consent to share with me. This is how public spaces work!

> And b) even if they're aware of this, most users simply accept the transaction because "they have nothing to hide", or the value they get from the service is greater than their privacy concerns.

Again, also agree with this, because I think it's simply true. The "data" that you're sharing with a lot of these companies would be worthless to you yourself (because the value comes from correlations across many data points). The fact that it has value _to a given company_ when it might be worthless to you yourself is part of what gives that company's services value--even to you.

As an example (in an area I'm familiar with), if no company ever collected information about how their website was used (e.g. with a product like Datadog or Sentry), they would be faced with a much larger cost to improve their business. The testing burden would be far higher, they would have no insight into whether or not their product actually works for real users, they would only receive a fraction of the number of bug reports they would receive through automation, etc. These are all things that reduce the cost the company has to pay in terms of keeping the product in working order and increasing its feature set. These services exist for _very good reasons_, and they're pervasive because they provide a large amount of value to their customers. Which kind of gets to your next point:

> Users are often never given the option of an alternative business model, and even when they are, the data collection machine never stops.

Users are given the option of alternative business models all the time. I've lost count of how many "privacy-preserving" services I've seen that promise to never collect your data, or never share your data, or whatever. I've also lost count of how many of them have died a slow death, because a) people don't care about that as a value proposition, b) they don't improve their services as quickly as their competitors, c) they're more costly than their competitors, and d) they just suck as services because they fail at basic usability.

There are significant upsides for _all parties_ when it comes to certain kinds of data collection (e.g. the telemetry data mentioned above, flow analysis in stores, etc.). These things come with no downside other than some bogeyman of "oh noes they have my data".

> Companies know that the allure of getting something for "free" is far more profitable than putting up a paywall, and so they justify extracting as much value as they can from their users' data. We're stuck in an endless cycle of having "free" services where the business transaction is far more valuable to the company than its users.

Again, fully agree, but I also understand _why_ this is the case. There are _significant_ upsides to many kinds of data collection, and those in favor of "enhanced privacy" to this day _cannot_ provide viable alternatives. The fact that it's extremely hard to build a service on a paid subscription that does not subsidize its development/research/feature set through data collection of some kind should be ample evidence that this is the case.

> And it's in the companies' best interest to keep the users, the general public and governments in the dark as far as the extent and details of how value is actually extracted. This is the most vile business model we've ever invented.

Kind of agree with this, except the last bit. I don't see why it's vile at all. Certainly it is in some cases (see the general trend of enshittification), but I don't think the only answer is "make everything _more_ private". That only increases distrust, which is the last thing we need right now.

> What privacy? The blockchain of most cryptocurrencies is a public ledger where all transactions are visible to everyone. Transactions are done almost exclusively via exchanges that follow strict KYC regulations. So it's pretty easy to identify the person behind any transaction.

Okay, here's a Bitcoin transaction[1]. Please identify who the participants are.

> Following your logic, should we ban all cash transactions, since they're even more private than cryptocurrencies?

No, why would we do that? That doesn't follow from pointing at cryptocurrency as a place where minimization of trust and maximization of privacy are the norms. This just seems like you're throwing out whatever other bad things you can attach to the strawman you've constructed from my argument.

> You have a twisted sense of the world, friend. My guess would be that you work for one of these companies, and this is how you rationalize the harm you're putting out in the world.

Wildly incorrect, as well.

But here's the thing... I'm not the one saying "when I go out in public, all people can get from me is the equivalent of a public IP address and a general location". I'm trying to point out that when I interact with the world, there's depth to the interactions that I literally _cannot_ know, and that I'm sharing information with the world that I'm not even aware that I'm sharing.

I'd argue that taking the world of public spaces filled with (very real) people who have as much depth (or more) to their lives than your own, and viewing that as "just a bunch of public IP addresses and general location information" is the truly twisted view of the world. People aren't cartoon cutouts, and places full of real people are also full of a wealth of information that is essential to the functioning of our societies.

The answer isn't to take all that information away--it's to get better at using it, for whatever definition of "better" we can all manage to agree on.


I won't reply to all your points, but I'll say this: I don't think I'm being overly reductive as much as you're being dismissive about the lengths tech companies go to build a profile of their users. The major difference between someone building a mental profile based on my interaction with them in the real world and the online profile companies build based on my digital footprint is that the real world profile is not stored perpetually and shared, or rather sold, behind my back to countless interested parties that will later use it to psychologically manipulate me, whether that's to get me to buy something I didn't need, or to change my mind about a political or social topic. Additionally, my real world profile is helpful in building tangible relationships, and I have full control over what I choose to share with the world. I may not be aware of all the information I'm sharing, but I'm certainly in control of it.

This is what privacy essentially is. Consent and reach _are_ keywords, since I get to decide what and how much I share publicly, and what I keep private. I can at any point stop interacting with whoever I want, at which point they only have a fading mental profile of me. This is not the case with digital profiles, since companies use dirty tricks to access as much of my data as possible, routinely exceed their boundaries, and, most crucially, they keep this data and continue profiting from it in perpetuity. Even principles like the right to be forgotten of regulations like the GDPR are not sufficient to put an end to this, since once my profile is sold on the data broker market, it is forever part of it, and will likely end up back in the hands of the company that claims to have deleted it.

This is why I maintain that these practices are vile and nefarious, and why we have far too few and far too lenient regulations to control them.

Re: tracing the Bitcoin transaction, I don't claim to be able to do this personally, just that it's technically possible. There are companies like Chainalysis that do this kind of service, and there are many examples of law enforcement agencies tracking individuals using these techniques. I'm only pointing out the flaw in your argument that cryptocurrencies are some kind of privacy standard, and extrapolating it to cash money, which would presumably foster even less of a shining example of a "pro-social culture".

In any case, I've exhausted my arguments on this topic, and it's clear we have very different views about privacy, so let's agree to disagree. Cheers!


> Consent and reach _are_ keywords, since I get to decide what and how much I share publicly, and what I keep private. I can at any point stop interacting with whoever I want, at which point they only have a fading mental profile of me.

I'll just point out that I think this is where I find the flaw in your argument: You _don't_ get to decide what and how much you share in public. You might try, but you will never succeed, because it's impossible to not share what you don't realize you're sharing. I think this is why I find your argument entirely unconvincing: if you're incapable of knowing the extent of what you're sharing with other people in public, informed consent is impossible by definition. And if you don't know the extent of what you're sharing, how can you know how far that information can reach?

Sure, being online makes collection and archival of what you share easier, but "more privacy" doesn't address the root cause, because the root cause is that you cannot be fully aware of what context you share with others in the world--both online and off. I don't think this is a problem, because I think this kind of unintentional information sharing provides all kinds of benefits to society at large. It's the kind of thing that drives "gut feelings" and other forms of intuition, and we'd be impoverished if everyone adopted the kind of privacy-maximizing standpoint you're advocating above.

But opposing viewpoints are good, so I'm not super concerned if I fail to convince you of anything, really. Cheers!


Like putting lipstick on a pig.


