Jaron Lanier: We're Being Enslaved by Free Information (ieee.org)
49 points by wmat on July 19, 2013 | 14 comments



Never much love for Jaron on threads like this. Personally, I find his argument makes sense and that he has been consistent, at least since he switched sides. Let me try to contextualise his main thesis.

As technologists, we are Cultural Revolutionaries (elsewhere he decries "digital Maoism"). As such, we tend to be blind to the effects of the technological changes we champion. We see the positive side, because for us it is positive. Yet there is a negative aspect which is greater, and it is insidious and universal, much like pollution, which, given the pace of development, was barely even acknowledged as a problem until the 1960s. The damage is done not at the point of dispersion; rather, toxicity accumulates in the general environment. He says the nature of the toxicity this time is the erosion of a traditional economy in which most people used to come out ahead, and that the cause is the centralisation of economic power as a necessary consequence of current information architectures.

Next, he hypothesises a solution. He wants it to be market-based, and since this is an information management problem, the hypothetical solution is architectural change. He does not go into how this might work technically, but he provides a high-level specification of sorts. If this were another thread on HN, some of us might start filling in possible design solutions. Unfortunately, I don't see it happening here. Hope to be proven wrong, though.


If you came to the comments to check whether this is worth your time: it isn't. Incoherent, vaguely Malthusian rambling dressed up with technobabble.


I wish I had read this before I read the whole article.


Jaron comes across as afraid of the future. His thesis seems to be that there are many possible horrible futures, and he imagines them to be more likely than "good" outcomes. And it doesn't read like he has any tools to reason about either one or the other. Hence the fear.

Bill Joy went through a similar phase with his infamous grey goo manifesto. I think he has come back somewhat from the brink on that.

I can appreciate the fear aspect of it. Greg Benford's Timescape [1] scared the crap out of me.

[1] http://en.wikipedia.org/wiki/Timescape


coherent, but not sensible. seemed like not much more than far-fetched idealism.

i simply cannot envision a kind of society where information would not be free but would instead be kept in a "protected" or isolated state, as he wishes. i have not dug into his philosophy deeply; however, i cannot comprehend how you would prevent information "looting".

maybe in a society free of hackers...


I came to the comments to check just that right after reading "ideas that are making us collectively poorer instead of richer, or at least less richer than we could be". There's something seriously wrong with considering these two concepts as being semantically equivalent.


I'm not sure Jaron's ideas are coherent, but as usual, there's enough coherence here to provide some hefty food for thought. Some interesting ideas, which may or may not be things Jaron is trying to say:

* Externalizing risk is a killer strategy for limited-liability corporations, since they can privatize their gains and socialize their bankruptcies, and really for anyone who can get away with it; and many more people can get away with it given sufficient compute power. (He uses Maxwell's Demon and the Landauer limit as a simile for this, but I'm not convinced that's productive, either for producing new insights that are likely to be true or for helping people understand his point.)

* Contributions to social networks and commons-based peer production (Linux, Wikipedia, Facebook) are distributed in a sort of crudely Gaussian way, but value extraction from them is instead distributed in a fat-tailed winner-take-all distribution, which will destroy the middle class. (A toy simulation of this contrast is sketched after this list.)

* We are moving our core engines of value production out of the economy. Jaron thinks this is bad, while I think it's way overdue.

* Jaron seems to be arguing that universal computers, which can trivially copy information and keep the fact that you have a copy of it secret from other people, are incompatible with any semblance of democracy. This is an interesting proposition, but it seems dubious to me.
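To make the bell-curve-vs-power-law bullet concrete, here is a toy Python simulation (my own illustrative numbers, not anything from Jaron or the article): contributions are drawn from a rough Gaussian, the value captured follows a Zipf-like power law over rank, and we compare the top 1% share of each.

    import random

    random.seed(42)
    N = 100_000  # hypothetical number of contributors

    # Contributions: roughly Gaussian - most people put in a broadly similar effort.
    contributions = [max(0.0, random.gauss(10.0, 3.0)) for _ in range(N)]

    # Value captured: Zipf-like power law over rank - a few winners take most of it.
    raw = [1.0 / (rank ** 1.2) for rank in range(1, N + 1)]
    scale = sum(contributions) / sum(raw)  # assume value captured equals value created
    captured = [x * scale for x in raw]

    def share_of_top(values, fraction=0.01):
        top = sorted(values, reverse=True)[: int(len(values) * fraction)]
        return sum(top) / sum(values)

    print(f"top 1% share of contributions:  {share_of_top(contributions):.1%}")
    print(f"top 1% share of value captured: {share_of_top(captured):.1%}")

With these made-up parameters, the top 1% of contributors account for only a couple of percent of the total effort, while the top 1% of value-extractors capture well over half of the value - which is the asymmetry the bullet describes.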


He is basically arguing that information cannot be free in the future because of the need for an information economy to replace the economy of goods and services that will soon be produced by AI/ML and automatons.

He argues that AI/ML create successful businesses (e.g. insurance providers which can use algorithms to ensure they only provide their service to those least likely to need it) that essentially radiate the risk out to society, and that, ultimately, society cannot absorb that risk.

I found this part about democracy and spying interesting:

    And so you can’t have democracy in a highly evolved information society
    if information is free. You just can’t. I mean, because you’ll be giving
    the government an infinite spying license. And it might sound like an odd
    idea, but I hope once you roll it over in your brain, you’ll start to see
    that it’s just a very simple and sensible idea.

While the ideas he presents are interesting, that may be all they are. It is difficult to imagine them being played out in real life.


And so he is saying that we should have a society where leaders misuse that information for personal gain, rather than an alternative where, with an abundance of information, we have systems that make decisions for us and we skip "elected leaders"?

I am not naive, it is just early in the morning and I am asking.


Jaron Lanier needs to take an Econ class.


Some remarks about the article:

1. you can't make information always paid-for - why? - because it costs nothing to copy it and it's cheap to store terabytes of it

2. it is impossible to monitor all copying and make payments for everything - imagine how intrusive this kind of monitoring would be and how much overhead it would add

3. the idea that everyone should have a unique online identity for tracking and payments is stupid - we'd lose all privacy, even the little we have left; we don't want to be like China, where you need to show your ID at the internet cafe

4. Yes, there is a problem with "power law distributions" of content in places like Google and the App Store, but that needs to be changed by allowing customization, list-making, and aggregation - I appreciate this insight

My rant on the subject:

I think he is fighting the future. Instead of total tracking and micropayments for everything, why don't we introduce a flat copyright tax and distribute it by popularity of content? It would involve so much less overhead - we only need to track the number of views and divide.
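A minimal sketch of that split-by-views mechanic, as I imagine it (the pool size, creators, and view counts below are made up for illustration):

    def distribute_copyright_tax(tax_pool, view_counts):
        """Split a flat copyright levy among creators in proportion to views."""
        total_views = sum(view_counts.values())
        if total_views == 0:
            return {creator: 0.0 for creator in view_counts}
        return {creator: tax_pool * views / total_views
                for creator, views in view_counts.items()}

    # Made-up example: a 1,000,000 pool and three creators' monthly view counts.
    payouts = distribute_copyright_tax(
        1_000_000,
        {"musician_a": 750_000, "blogger_b": 200_000, "filmmaker_c": 50_000},
    )
    print(payouts)  # {'musician_a': 750000.0, 'blogger_b': 200000.0, 'filmmaker_c': 50000.0}

No per-copy tracking and no per-person identity needed: the only thing that has to be counted is views.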

Instead of trying to carry the monetization of the past into the future, why not embrace free culture and go ahead with a basic human income? People create for the love of creation too, not just for money. I'd say the best creations are works of love, not just works for money.

His power-law vs bell-curve insight is great, but I think I have one that is even better (it contains his insight and goes deeper still): we need to apply two criteria for improving society:

1. differentiation - we need to maximize it by empowering creativity; why should we all see the same news? That would make for a monoculture. Instead, we should pick and choose our feeds, each one of us differently. Similarly, we're better off with unique skills, unique perspectives on life and technology, and so on. We should encourage the variety.

2. integration - this is complementary to the first. The idea is that people need to be brought together to communicate, travel, trade, and so on. As long as we are integrated, the system self-regulates and can better cope with problems.

An anecdote: what is the difference between Facebook and Reddit? I'd say that the main difference is network topology; FB works by keeping you in your bubble of "friends", while Reddit always exposes you to new people, on every page - that makes for better integration. The ability to subscribe/unsubscribe from subreddits improves personalization, and the comment history differentiates each user, because each user ends up replying to different people. That's why we see a much higher level of creativity and participation on Reddit compared to FB, and why people get involved in deeper, more thoughtful discussions there - it has more differentiation and integration.

Disclosure: I didn't invent this integration-differentiation idea. It comes from Giulio Tononi's Integrated Information Theory of Consciousness (I applied the concepts to society instead of the brain). This is a video presenting the theory: http://vimeo.com/53787308


>> 1. you can't make information always paid-for - why? - because it costs nothing to copy it and it's cheap to store terabytes of it.

If you changed the word information to data, I'd agree with you. We cannot hope to build an economy on top of charging money for the following: "I am going to the pub with Dave tonight". That data string, once released, is never coming back. But, assuming we are not yet living in a panopticon, you don't know which pub nor which Dave. If you really want that information, my authoritative interpretation will cost you. This seemingly minor nitpick helps us towards designing a solution to Jaron's problem.


I found this bit very interesting, and it seems insightful to me; if I'm mistaken, and this is utter horseshit for some reason, I'd love to hear why.

So what I’m proposing is that finance, and indeed consumer Internet companies and all kinds of other people using giant computers, are trying to become Maxwell’s demons in an information network. The easiest way to understand it is to think about an insurance company. So an American health insurance company, before big computing came along, would hire actuaries to set rates. But the idea of, on a person-by-person basis, attempting to decide who should be in the plan so that you could only insure the people who need it the least on an individual basis, that wasn’t really viable. But with big computing and the ability to compute huge correlations with big data, it becomes irresistible. And so what you do is you start to say, "I’m going to..." — you’re like Maxwell’s demon with the little door — "I’m going to let the people who are cheap to insure through the door, and the people who are expensive to insure have to go the other way until I’ve created this perfect system that’s statistically guaranteed to be highly profitable.”

And so what’s wrong with that is that you can’t ever really get ahead. What you’re really doing then is you’re radiating waste heat. I mean, for yourself you’ve created this perfect little business, but you’ve radiated all the risk, basically, to the society at large. And if the society was infinitely large and could absorb it, it would work. There’s nothing intrinsically faulty about your scheme except for the assumption that the society can absorb the risk. And so what we’ve seen with big computing in finance is a repeated occurrence of people using a big computer to radiate risk away from themselves until the society can’t absorb it. And then there’s some giant bailout and some huge breakage. And so it happened with Long-Term Capital [Management] in the ’90s. It happened with Enron, and we saw a repeat of it in the events leading to the Great Recession in the late aughts. And we’ll just see it happening again and again until it’s recognized that this pattern is just not sustainable.

Ignore all that stuff about "information" for a second, and focus on "outsourcing risk" -- I'm not saying that's a new thing to say, but how is it not true, and how is it not, uhh, vaguely important?
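Here's a toy model of that cherry-picking dynamic, with numbers I invented purely for illustration (not Lanier's): everyone has some expected annual claim cost; an insurer with enough data insures only the cheapest half of the population, and the expected cost it declines to carry doesn't vanish, it just lands on everyone else.

    import random

    random.seed(0)
    POPULATION = 100_000

    # Toy expected annual claim cost per person: most are cheap, a few are very expensive.
    expected_cost = [random.lognormvariate(7.0, 1.0) for _ in range(POPULATION)]
    total_risk = sum(expected_cost)

    # The "Maxwell's demon" insurer: with enough data, insure only the cheapest half.
    sorted_costs = sorted(expected_cost)
    cutoff = POPULATION // 2
    insured_risk = sum(sorted_costs[:cutoff])   # risk the insurer agrees to carry
    rejected_risk = sum(sorted_costs[cutoff:])  # risk pushed back onto society

    print(f"carried by the insurer:  {insured_risk / total_risk:.1%} of total expected cost")
    print(f"left with everyone else: {rejected_risk / total_risk:.1%} of total expected cost")

With this (made-up) lognormal cost distribution, the insurer covers half the people but only around a sixth of the expected cost; the other five sixths is the "waste heat" the quote is talking about.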


"So, yes, they [government] must pay. And the reason that’ll be enforced is because lawyers and accountants will be on their ass. And just to answer some obvious things, yeah, if they have a specific criminal investigation, they don’t have to tell the people in advance that they’re getting paid, because that would reveal it. Yeah, that would be under court order; that should be an exception, as it always has to be in a democracy. They will not be able to do omni spying anymore. They won’t be able to spy in advance without people knowing they’re being spied on, because the people will get money, and that’s proper. It is actually a totally reasonable solution."

So the government will have to pay us for data that they are getting for free from companies that are getting it for free? What would that even look like, and how would we get there from where we are now (the government threatening companies into forking over information they get for free from users)? And there's no mention of the free encryption services that are starting to emerge…

Idk, he starts out talking about current issues, then tries to put them in perspective by referencing 19th- and 20th-century issues, ideas, and their implementations (at this point I'm thinking "hmm, maybe there is a pattern he is going to point out that could give us insight on how we could at least be in the right frame of mind to approach these issues"), and then tries to connect that to what is going on now by vaguely identifying one issue, placing it in a vacuum separated from the issues he's brought up before, and then making the claim above. I'm lost, but maybe there's something in this direction…



