A vending machine doesn't make notes of who buys its products. A vending machine doesn't keep logs of its transactions or of the payment methods used. A vending machine does not make "recommendations" of what the consumer might like based on their past purchases. You give the vending machine a dollar, it gives you a soda. The transaction between consumer and vendor is done, and no tangible evidence of the transaction remains, other than the exchanged goods.
But we haven't figured out a way to scale the vending machine model yet, because when the machine gets stuck and fails to dispense its product, the consumer gets an attendant to help or kicks the thing a few times in frustration and walks away, because the lost dollar isn't worth the hassle. If someone paid $1500 for their laptop, however, they're going to want that convenience of being able to prove they purchased the laptop from XX retailer and be compensated if the thing is a lemon.
I'm not sure we can solve this problem completely. But we might do better if we actively aspired to make our services more like vending machines.
EDIT: Yes, I'm aware that some vending machines these days can be paid by credit card. My university even had one where you could pay using your Student ID, which was linked to the balance allocated for your meal plan.
I think the real problem is doing the same thing at a distance. At a store you know where to find the entity you purchase from in case of issues, and you get a receipt at the same time you purchase. You can sort of get the same thing with an individual face-to-face, but only if there is established trust in some form (the buyer has to trust they can find the seller if something goes wrong, the seller has to trust the buyer isn't using counterfeit money, etc.). It's pretty hard to trade at a distance without revealing something about yourself. At the very least, there's going to be a middleman who will want to know something (a lot) about you if they're going to provide you services (P.O. Box, shipping company, etc.). They want to make sure customers stay honest so they don't get in trouble when customers scam each other or make an illegal trade.
It all comes down to trust. I think all the retailers, online and off, are blatantly violating that trust when they sell our data off, but not enough people seem to care or have a better alternative, so there are no consequences for retailers. New law is usually the last thing I want to see, but it seems to be the only way to introduce a consequence for sellers at this point.
Haha, sure we have, it just isn't as profitable as the current solutions, which are to collect as much data as you can and use that information for secondary income, such as selling it to marketers or using it for "analytics" or "targeting".
Service builders just have to care about designing their systems like this. Every single person in this thread who works with or for one of these information collecting companies is responsible for the current state of the world.
> If someone paid $1500 for their laptop, however, they're going to want that convenience of being able to prove they purchased the laptop from XX retailer and be compensated if the thing is a lemon.
That can easily be solved, either by using GNU Taler as Stallman describes, or by issuing cryptographic receipts of a purchase, whereby the vendor digitally signs the receipt so they can recognize it later if presented, but internally they just don't store a record of that transaction, or any information about who purchased it.
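A minimal sketch of the "signed receipt, no stored record" idea. This is not GNU Taler's actual protocol; all names here are illustrative, and it uses a symmetric MAC rather than a public-key signature for brevity (so only the vendor itself can recognize the receipt, which matches the "recognize it later if presented" scenario):

```python
import hashlib
import hmac
import json

# Hypothetical vendor signing key; in practice this would be stored securely.
SECRET_KEY = b"vendor-signing-key"

def issue_receipt(item: str, price_cents: int, date: str) -> dict:
    """Sign a receipt at the point of sale. Nothing about the buyer is
    recorded, and the vendor keeps no copy of the receipt itself."""
    payload = json.dumps(
        {"item": item, "price_cents": price_cents, "date": date},
        sort_keys=True,
    ).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_receipt(receipt: dict) -> bool:
    """Later, the vendor can recognize its own receipt without ever
    having stored a record of the transaction."""
    expected = hmac.new(
        SECRET_KEY, receipt["payload"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, receipt["tag"])
```

The receipt itself is the only artifact: the buyer holds it, and the vendor's database stays empty. A tampered receipt fails verification.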
It's not a hard problem to just not store personally identifiable information, you just don't store it.
What is a hard problem is convincing people in these industries that they should care, that they should protect user privacy by not collecting the information in the first place.
Well technically the vending machine itself doesn't note this, but I've increasingly seen vending machines which accept credit cards. Thus your CC issuer knows where you were, when you were there, and can even make some inferences about what was bought based on the price.
Personally, I keep an "entertainment" budget which I withdraw in cash, both for privacy reasons AND budget reasons. (Staring down into your wallet at cold hard cash makes you hesitate before buying that cup of coffee or other sundry)
I've heard good things but I have no idea about the details.
merchant sees NO account information but has the final four digits of the PAN [CC number] as an account identifier to use as a mechanism for tracking volume/activity per account
I like the GNU Taler model mentioned in the article where the payer decides whether they want to be identified, but the payee is always identifiable. Combine this with a crypto-signed receipt from the payee and you have a system that better respects privacy.
Somewhere there must be an intelligent balance.
In practice though this is mostly biased towards the revenue of the company selling the goods, not towards your health requirements.
Thank god for Amazon profiling them, they must have been starving before.
To take your laptop example, though: you may trust the brand, but you also expect it to last. Therefore, your relationship is with the brand AND the product. You will need to service that laptop, and they, as the vendor, will need to set parameters around what services they will and will not provide in which circumstances. They have a kind of "contract" with you that doesn't necessarily extend beyond you. In fact, there are some situations with a laptop where giving it away to the wrong person could be a crime.
My only question is: if we could "solve" this problem, would we actually want to? Do we really want to make every interaction we have into some kind of transaction, with no obligations beyond payment for goods? I don't think so. I think most of our consumer protection laws have been developed to protect consumers from the considerable costs that such a system will create - warranties, lemon laws, etc. Critically, all aimed at improving the amount of trust between brands and consumers.
Instead, I think we probably need to more carefully define what kind of trust is required, and what the covenants of brand/consumer trust actually include regarding data. GDPR is an admirable model, and probably a step in the right direction, but certainly not a perfect effort.
Is its purpose to provide an oasis of snacky goodness to those who happen to encounter it while passing through? If so, then yes, all it needs is dollar in, item out.
I'd argue, though, that vending machines in places where they are regularly encountered by the same people over and over again, such as office buildings, dorms, and schools, have a higher purpose.
They are to provide those who live or work near them a snack safe space. A place they can count on to take a quick break from their work day and find their favorite snacks.
A vending machine that could keep track of regular customers and their snacking habits could better fulfill that purpose. For instance, suppose Carol from accounting only ever buys Tab, and the machine is running low on Tab. Dave from marketing likes Dr. Pepper best, with Pepsi a close second, and only rarely buys anything else.
Dave tries to buy the last Tab (perhaps by mistake). The smart vending machine could recognize that it would be better to reserve the last Tab for Carol, and could tell Dave, "I'm sorry Dave, I'm afraid I can't vend that to you. Would you like a Dr. Pepper or Pepsi instead?".
The future didn't give us the flying cars and jetpacks we were promised, but at least it can give Carol from accounting a reliable Tab supply at work.
>"I'm sorry Dave, I'm afraid I can't vend that to you. Would you like a Dr. Pepper or Pepsi instead?"
I realise this was humour, but you framed the problem exactly as I see it. I don't feel safe if I don't know Hal's code, especially if he's talking to all the other Hals (be they other vending machines or information-aggregating corporations). And what are the odds that these things will be open source?
Maybe Dave and Carol need to try some new soda flavors once in a while!
The cat is out of the bag. Information technology makes this data accumulation inevitable. Likely even for those who walk around with a face covering all day, pay only cash, and don't use smartphones. By doing those things you actually make yourself more noticeable, in a lot of ways, and are still trackable.
There are some cryptoanarchist ways to achieve practical privacy, to all but the most dedicated governments. Yet, those technologies have severe downsides for both individuals and society at large.
There's no obvious way out of this conundrum. The concept of the Participatory Panopticon is one proposed solution, and is less top-down, but not necessarily any less capricious than centralized authority.
The best safeguards are to create strong institutions, informed citizens, stable economies, and all the usual boring social reform stuff that tech geeks hate and think is mushy, unchangeable, hard to study, and likely irrelevant.
They're not wrong, of course. And your comment is very similar: the problem isn't the technology, it's the impetus of society that needs to change.
You can care plenty about these issues but not see an actual way to have impact such that your concerns are addressed. If society isn't caring, then either society doesn't see why there's a concern, or society recognizes that atomically it does not have agency to address the issue, and the people who do have that agency are the ones that don't give a shit.
For a subset, sure, maybe a good PR push gets people sufficiently inflamed about a particular topic, but does that mean that key problems that aren't 'viral' enough aren't deserving of a proper solution? If "society needs to care more" is the answer, then that's the implication. But that's obviously wrong.
But here, the government is probably the biggest offender. You'll pry their databases and TSA inspections from their cold, dead hands.
My point was simply that we can't skip the stable society part and sit in an ivory tower of activism.
As another pointed out, the article covers this topic, explicitly.
>The best safeguards are to create strong institutions
I would replace 'strong' with 'transparent and honest'. Further, calling it a losing battle is a bit defeatist.
>Yet, those technologies have severe downsides for both individuals and society at large.
Out of curiosity, what are some of these downsides?
Not one part of that comment is true.
The cat’s not out of the bag. Nothing “permanent” has happened. A law could be passed tomorrow requiring that all of our personal data be erased. You might not like it and it would put you out of business.
And that part where you say "best safeguards are to create strong institutions..." sounds strangely like "the 4 dog defense" article that's currently on HN. You just skipped the other arguments:
1. We don’t have your data
2. If we had your data, it would have taken work to collect it.
3. If it took work to collect it, it's ours, since you should have protected it.
4. If you didn't protect it, the only answer is to form strong institutions to protect the information formerly about you, but which now belongs to us.
Such a law would be politically viable in only the most enlightened of governments and societies. Which goes back to the parent's point of "strong institutions, informed citizens, stable economies, and all the usual boring social reform stuff".
In the article Stallman explicitly proposes deleting such video recordings after a timeout, and forbidding facial recognition without prior cause.
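Operationally, a retention timeout is simple to enforce. A sketch, with an assumed 30-day window and file layout (the article doesn't specify either):

```python
import time
from pathlib import Path

# Hypothetical policy; Stallman's proposal leaves the actual timeout to law.
RETENTION_DAYS = 30

def purge_old_recordings(directory: str, retention_days: int = RETENTION_DAYS) -> list:
    """Delete recordings older than the retention window.
    Returns the names of the files removed."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for path in Path(directory).glob("*.mp4"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return removed
```

Run from a daily cron job, this guarantees footage simply ceases to exist after the window, rather than relying on someone remembering to delete it.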
How many times have we heard of large institutions suffering data breaches or failing to delete data that they are supposed to delete? Once it is accumulated, all bets are off.
Track users' location data? Then your location must be public record & realtime. Etc.
At the end of the day, this is about information asymmetry. If the general public loses privacy, then so should those who benefit from that information, in order to level the playing field.
This could be said for literally any transaction, ever. The bank knows how much money you have, but you don't know how much money the teller has. You don't know which books your librarian has out or what the bartender drinks. They are also citizens with their own right to privacy.
I'm all for transparency and oversight, but information asymmetry will always exist. Some industries have regulatory structures around that asymmetry to prevent abuse, HIPAA for example, and others have not.
Also, why settle for a level playing field? Why not high privacy for citizens and high transparency for governments and large orgs?
Or walk around in grey suits and bowler hats with apples in front of our faces
The real question is personal and harder to answer: "Do I really need this information technology? Is it making my quality of life any better than my ancestors'?"
The more I ponder this question, the more I'm convinced I could live an equally happy and fulfilling life outside of the majority of consumer technology that has been introduced in the last 20 years. I benefit from a career creating that technology, but I'm partaking in it less and less once I clock out.
I think the deeper problem is that we no longer even pretend to value people of character.
Information is power, and if principled people sacrifice their access to information, then unprincipled people will hold more of the world's power.
So the danger of encouraging principled people to opt out of parts of the internet is that they will become less powerful, and unprincipled people more powerful.
What if access to information necessitates handing over information about oneself, which gives other entities power to influence one? Eg: One can browse for information, but companies will track browsing activity, and use that towards targeted persuasion.
I'm being subtly guided towards products and services all the time through myriad sneaky marketing techniques I'm completely unaware of. I know enough of them to appreciate how much I don't know.
I agree that it's a good ideal, it's just incredibly naive to think that it will be achieved. Even if legislation were passed, it would very likely make things worse, in that the government wouldn't stop surveilling - they would just do it even more covertly.
Our democracies have lots of problems, but they basically work, and on issues like this, the main issue is apathy.
I’m a firm believer that if people are informed and they care legislation like the above could be enacted.
People are apathetic because their democracies don't basically work. Elections and occasional referendums are very crude kinds of representation and are easily manipulated by those who have a lot of cash. Legislation is great, but the reality is that you're on camera almost all the time in a large city like London, and no politician is going to lay themselves open to easy attacks by proposing to dismantle that infrastructure.
Bentham's panopticon was a design for a prison, a mill "for grinding rogues honest." Now the whole society is constructed around that idea; you have legal rights but the implicit premise is that people are rogues. You're not free.
Is it? Or is it the notion that if you scream about something you don't like enough, someone else will fix it, through legislation, so you can stop worrying about it? And, of course, that legislation will work perfectly and never be abused or radically reinterpreted a generation down the line.
I'm of the belief that if you don't like something, then don't partake in it - short of it being physically forced on you.
This notion is often called collective action, and it’s how most of the social change of the last couple hundred years was achieved in the US (and elsewhere), from black people and women attaining the right to vote, to the end of child labor, to gay marriage, to the existence of weekends.
This 'screaming' is the basis of representative democracies; the system of government that we live under.
> if you don't like something then don't partake in it - short of it being something being physically forced on you.
In every country I'm aware of, surveillance of at least some physical space is non-optional, and could be called 'forced'. The dichotomy of "physically forced" and (by implication) 'voluntary' doesn't seem useful. Very few of the current and hypothetical problems with surveillance are because it's physically-forced on people; it's that a cost-benefit equation is engineered to make it practically irresistible.
Let’s say you’re a poor high-school student. To participate in classes you need to use the school issued Chromebook and an associated Google account. You could resist and not participate in class, but even given a full understanding of the privacy cost, this would be irrational and self-destructive. If you'd like to join your residents' association that may require a Facebook account, if you'd like to travel on an airplane that likely entails video-surveillance which will be used for facial recognition, if you take a job at a company with significant-IP (or not much cash), you're very likely to be working on a machine containing a surveillance-rootkit etc. etc. ad nauseam.
This article is about solutions to the societal problem of mass-surveillance (both from the state and industry). Retreating from society for your own protection (while things get worse for everyone else) is never going to be helpful for addressing the societal problem.
Perhaps you think the status quo is fantastic and shouldn't be changed. Perhaps you have the resources to live a puritanical life avoiding surveillance; I'm impressed! But the fact that you have the freedom to do so is a direct result of the collective action of others.
My 'solution' to not being tracked is that the Smartrip Card I was forced to then buy was bought with cash, and has only ever been refilled with cash.
So their systems know that card # 828272823 (made up number) has traveled here or there (I don't use it much, so there's not many trips on it anyway) but they don't know 'who' is using that card.
If they ever drop the ability to use cash to top up the card, then the card will go in the garbage bin and I'll not ride the system again.
There are many companies with some timestamped location data about you: cell phone company, email or social media, credit card company, etc.
Just a bit of that data combined with Smartrip data would de-anonymize you as the owner of that card, and all the anonymity of your metro trips is lost. For example: three or four trips where you used a cell phone at either end of the trip, credit card purchases in different locations, maps directions on your phone, and so on.
This is not far from plausible. I could see the DC Metro naively choosing to sell "de-identified" data to third parties, who can also purchase e.g. credit card data.
To be frank, though, I'm struggling to see what the fuss is about when it comes to TfL. It's their system and they can do what the heck they want with it. As their customer I am OK with them tracking me around their property.
For what it's worth, neither is RMS, if you've read about how he uses the Internet.
Let's keep building the free software tools so anyone can easily get the level of privacy they want for whatever they're trying to do. We've made so much progress but there's still so far to go.
Do not look UP to authority for hope - look AROUND at your fellow citizens working hard to bring free privacy software to everyone. That's who's going to fix this, because no one else will.
It's noble and laudable to take the same stance towards freedom and privacy that RMS does, essentially check out of the modern internet, and reduce your web usage to calling wget on a shell, but that's not a workable solution for the vast majority of people.
Privacy protection exists along a gradient. Just because one does not want to sit at the far end with RMS does not mean we just throw up our hands and say "well, I guess I have to give up all my privacy, because I can't do what RMS does."
And as I said, there's still a lot of work to do. So let's get to it and stop the nay-saying!
If you want an authority to hold them accountable, you are just shifting your trust from one entity you have little/no control over, to another entity that you have little/no control over.
Look around, not up. An authority will not solve this. We must solve it, together.
Minor nitpick, but he also uses icecat + Tor when needed
https://stallman.org/stallman-computing.html (grep for icecat)
Having everything be AGPLv3 is not perfect, but I think it would be an improvement over the current state of affairs.
AGPL is “GPL for SaaS”.
If you claim that RMS does open source in his presence, he will sternly correct you.
The big problem is that there is no license, yet, that could be revoked if they were caught lying. So you're right: we would have to build a trust model somehow.
2. Power corrupts, so the overall answer is: never.
Change from below requires organization and coordination, and those things require leadership.
Were MLK or Gandhi fascists, because they were effective leaders?
Frankly, I find more of a fascistic tinge in the stock HN response to surveillance:
- Governments will never do anything; democracy is not fit for purpose
- A ‘natural elite’ of hacker Übermensch can avoid dystopia for themselves with hacking skills and expensive crypto currencies. The masses cannot be saved as they are inherently stupid and submissive. Things will always get worse, especially for them; abandon them.
Why such fatalism? The GDPR hasn’t even hit the books yet, and it’s having a transformative effect on how large US tech companies process data for everyone. We’re just beginning to see the tide turn on public opinion on Facebook.
I’ve hope for popular interest in privacy and anti-authoritarianism yet.
Leadership + time = fascism
That MLK and Gandhi were saved from their Hitlerism by assassination by heroic ethnic-separatists?
That's how you ended up with a surveillance society in the first place, silly.
Future strong leaders will employ all of those means of deep state surveillance and invest in new ones in order to remain strong.
That cat's out of the bag and it ain't going back.
The cameras are placed in the public domain, but they can see into houses on every street. I believe Google had some issues with peering into homes and storing that data. Will cities face the same issues? The Google car only gets data when they drive down the street. These cameras will gather data 24/7 using low-lux (low light) cameras.
Instead of citizens sending data to the cloud, corporations can send code to citizen-owned devices where local computation can take place, with data only leaving via an open-source content inspection engine that enforces citizen-consent policy.
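A minimal sketch of that consent-enforcement idea, with hypothetical field names: the corporation's code runs locally, but nothing leaves the device unless the citizen's policy explicitly allows it.

```python
# Fields the user has explicitly consented to share (hypothetical example).
CONSENT_POLICY = {"aggregate_step_count", "app_version"}

def inspect_outbound(payload: dict, policy=CONSENT_POLICY) -> dict:
    """The 'content inspection engine': strip any field the user has not
    consented to. Unapproved data never leaves the device."""
    return {k: v for k, v in payload.items() if k in policy}
```

The key inversion is that the allowlist lives on the citizen's device and is enforced there, rather than in a privacy policy the corporation promises to honor server-side.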
By all rights, the Equifax data breach should have ended the company. Instead, they're likely to profit from it.
App authors would then be able to decide whether they wanted to let you use their app despite you declining to let them send/sell* your data to a third party.
* Most app creators would claim they don't sell users' data. But they need to acknowledge that when they receive free telemetry services for their app from a third party, they are doing exactly that: selling user data in exchange for a service that would otherwise have some expense (in dev time or otherwise).
It would suck if the terrorist bomb also took out the cameras themselves.
I have cameras at my house wired to a local unit. It's mostly for convenience and looking for animals, and not very serious for security; any self-respecting house prowler would want to take the recording unit from my server area so that I wouldn't have any video of who broke into my house.
that wasn't my reading of what he wrote. He wrote, "should not allow remote viewing without physical collection of the recording.". If a town has security cameras all over the town, and they all transmit recordings to a co-located server facility, there's no reason to physically collect anything - anyone with access to the servers can easily collect surveillance of the whole town for no reason.
> Put your NAS in a secure back room and you're going to be fine for practically every scenario short of a 9/11 scale attack.
Security camera systems are meant to be useful precisely in the event of a 9/11-scale attack, which has actually occurred, as well as any large-scale bomb attack that might destroy an entire facility or bury the vault with the servers under tons of rubble, when one needs the footage a lot quicker than that.
First, it's arguing against the natural evolution of technology. Centralization is a recurring feature of software systems due to its technical benefits. Data collection, organizational structure, and other considerations also affect centralization, but their existence doesn't negate the technical considerations. Whether an ATM or bus card system should be (de)centralized is partially a technical issue: what networks are available and how reliable are they, what's the cpu/mem/storage of the endpoints, how often do we update, etc.
Second, restraint of governments might be aided by making collection more difficult, but governments can deploy large groups of people for long periods, while individuals and smaller groups cannot. YouTube and cellphone cameras empower the weak, while the old technology of cameras belonging to local businesses benefits only the government (or very large groups).
 "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations." https://en.wikipedia.org/wiki/Conway%27s_law
 "There is a gap in the footage from about 9:18 p.m. to 10:39 p.m., which covers the time when McDonald was shot by Officer Jason Van Dyke on a nearby street." http://www.chicagotribune.com/news/ct-shooting-laquan-mcdona...
When I'm using that social networks interface, I put the USB stick in my computer - and my computer makes that data available to the network. Then, when I'm done using it, I turn off the interface, take the USB stick out and put it in storage somewhere. The only parts of the data that are available, are the bits I copied to the social network while I was using it, and it can have those bits while I'm away to do with as it pleases.
I truly believe this is a function and responsibility of the local operating system, not the network.
If only the OS vendors were paying attention to these issues instead of .. you know .. trying hard to be the next big social network.
The real issue is that people opt to ignore Stallman - he was right. By opting to use these services, whether you pay for them or not, you're acquiescing to being the product.
What would happen if a transportation service provider also offered a program that collects points depending on the stations used, with those points redeemable for special offers?
This way they 'need' to collect movement data to check for abuse of the system in order to offer the 'basic' functionality.
I agree with RMS, but it might be difficult to express such law without loopholes for companies.
RMS might want a law saying that whoever sold goods, couldn't also deliver them?
So you can distribute this freely, as long as you acknowledge the author.
And since it's content, not software, it's not imperative to allow modification. (Fair use still applies to critical responses)
You're being way too picky.
I think this article could very well be a valuable free-culture contribution. The chain of derivatives could go deep, and having to get permission at every step cuts it off.
This is a long-standing disagreement between RMS and those of us who support cultural freedom.
It's as if GPG was an implementation of the Clipper Chip algorithm because, you know, otherwise it might be used for ~~tax fraud~~ terrorism.
Happily there are plenty of others to choose from, too, but I'm glad that one with these overall design principles exists.
> The basic principle is that a system must be designed not to collect certain data, if its basic function can be carried out without that data.
> Frills on the system, such as [some feature conceived by PM or developer], are not part of the basic function, so they can't justify incorporating any additional surveillance.
> These additional services could be offered separately to users who request them.
The title is "A radical proposal to keep your data safe."
But this doesn't sound "radical" at all.
1. Don't collect data when unnecessary for the user.
2. Applications which perform a single function.
3. Additional "features" (i.e. more code) that add more functionality and make applications more complex are fine but they should be optional (e.g. not pre-installed or automatically added via "updates").
It really is radical to the current culture of silicon valley where every product must be "smart," collect user data to provide recommendations and "analytics."
There is little restraint by companies on what data they collect, only on access to that data once collected. It really is a radical culture shift for these companies, and all the engineers who work for them, to switch to an attitude of only collecting what is truly necessary.
Decoupling identity data from other data does not work even in theory. Moreover, it is in conflict with the ability to retract personal data: if data is anonymized, it is no longer possible to get rid of your personal records.
Homomorphic encryption is the only method that might make a dent here. Laws will be broken.
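For a flavor of what "computing on data you cannot read" means: textbook (unpadded) RSA happens to be multiplicatively homomorphic. This toy demo uses tiny, utterly insecure parameters purely for illustration; real homomorphic schemes (Paillier, modern FHE) work very differently in practice.

```python
# Classic toy RSA parameters: n = 61 * 53 = 3233, e*d = 1 (mod lcm(60, 52)).
p, q = 61, 53
n = p * q
e, d = 17, 2753

def enc(m: int) -> int:
    """'Encrypt' m with the public key (textbook RSA, no padding)."""
    return pow(m, e, n)

def dec(c: int) -> int:
    """Decrypt with the private key."""
    return pow(c, d, n)

# A party holding only ciphertexts can still multiply the plaintexts:
# Enc(a) * Enc(b) = (a*b)^e mod n = Enc(a*b mod n).
c = (enc(4) * enc(7)) % n
assert dec(c) == 28  # the product 4 * 7, computed under encryption
```

The server multiplying the ciphertexts never learns 4, 7, or 28; only the key holder can decrypt the result.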