On the freedom to speak (lessig.tumblr.com)
141 points by espeed 1606 days ago | 64 comments

The whole Lessig/Palantir issue is so illustrative of the tech community's bizarre insistence on ideological purity on political issues, and consequently illustrative of why tech as an industry has so little political influence despite having money and people.

This view held by those in power that "idealists are weak" is an idea ripe for disruption.

The view was stated in a post yesterday "Freedom: The Big American Lie" (https://news.ycombinator.com/item?id=5962459):

To snub and even to wound your most zealous supporters, as Obama has done, is regarded as a mark of maturity in Washington. This is not because snubbing or wounding them is a brave thing to do, but exactly the opposite: Because the righteous attitude of the idealist is repugnant to the men of power, who know that idealists are, in fact, men of weakness, entitled to neither courtesy nor respect.

Maybe the culture of power could be disrupted if a world dominated by bankers and takers were usurped by these makers who learned to thrive in a culture of openness and sharing. Maybe this change in culture could usher in a New Camelot. Idealistic? -- yes. Possible? -- absolutely, if leading founders make solving this problem their next challenge.

The US was founded on a set of ideals (http://en.wikipedia.org/wiki/American_Dream). This country was started by founders. It can be restored by founders.

Idealists are weak in a democracy because their devotion to ideological purity makes them unable to form the coalitions that are necessary to have any political impact in a majoritarian system.

It's silly to claim that "makers" are idealists by nature. Most of the "makers" I know are engineers, and if anything engineering as a profession has a distinctly conservative, "don't rock the boat" mentality. E.g. I wouldn't call any of the civil engineers I know high-minded idealists, but rather intensely pragmatic and carefully conservative (which is kind of what you want in your bridge-builders...).

The "American Dream" bears little resemblance to the founding principles of the republic. Much of what we perceive as the "American Dream" is populist rhetoric added to the American narrative by Andrew Jackson and FDR, along with nationalist rhetoric added during the Cold War.

The country was founded by lawyers, another profession not exactly known for its idealism. The American "revolution" was a deeply conservative one, mostly seeking a return to the status quo as it was before the British, drained by wars with France, started hitting up the colonies for money. Indeed, much of the American polity viewed the radicalism of the French Revolution with skepticism and distaste (at one point Thomas Paine was arrested in France and scheduled to be executed).

The American revolutionaries were indeed conservatives in the sense that they just wanted what they felt were their rights as English citizens.

Across the Atlantic, Edmund Burke was the founder of modern conservatism. He did not want the colonies to secede and was deeply pained by it, but supported them because he felt their rights were violated. He was an unqualified opponent of the French Revolution, however, and extensively fought with Paine on this.

Edmund Burke was, in a sense, a classical liberal: he passionately defended English rights, but never really universalized them or subscribed to Enlightenment ideology. He's sometimes called "the most important philosopher who wasn't a philosopher."

Most of the American revolutionaries, by contrast, were devoted believers in universal Enlightenment ideology.

Two of the most important, Jefferson and Franklin, were also "Makers": polymaths who made contributions to many fields. Especially Franklin, who made real, major scientific advances and, in the literary sphere, founded both the American satirical and ethical traditions.

Again, both were unapologetic followers of the Enlightenment: they believed in universal natural rights, were Deists, etc.

First you said...

The whole Lessig/Palantir issue is so illustrative of the tech community's bizarre insistence on ideological purity on political issues, and consequently illustrative of why tech as an industry has so little political influence despite having money and people.

Now you say...

It's silly to claim that "makers" are idealists by nature.

BTW, in case my usage of "makers" needs clarification, I am using "makers" in the PG sense of the word (http://www.paulgraham.com/hp.html) where "makers" == hackers, painters, founders.

Successful founders find ways to solve problems.

Elon Musk founded Tesla, SolarCity, and SpaceX on a set of ideals, i.e., "principles that one actively pursues as a goal," and he has found a way to achieve these ideals that most thought were "impossible."

If Elon Musk and enough other successful founders decided to run for office, the world would be a different place.

But certainly you can make things without necessarily having to be an idealist, even in pg's sense of the term. Sometimes people just want to build, or paint, or lead, or tinker with software, or any of a million other things that might make someone a maker. But none of that requires some kind of pure ideology, and in fact the successful maker with the pure ideology is almost certainly the exception to the rule. People like Dr. Stallman, perhaps. But certainly not all "makers".

Your confusion is predicated on your conflating "the tech community" with "makers." The tech community definitely has an idealistic streak. But it is a small subset of the larger engineering world, which really does not. To the extent that you can ascribe an ideology to a profession, all those chemical engineers and civil engineers are not "hackers" or "painters" but rather "pyramid builders."

Your confusion is predicated on your glossing over that I said these makers, in direct reference to your statement about "the tech community" being idealists. And in the context of the tech community (as you framed it), the makers who are hackers/founders (as I framed it) are an even smaller subset of the tech community, and thus of the engineering community at large.

Nevertheless, it is the very idea that you can strike a "balance" between privacy and law, that allows for the existence of the Utah data center. The existence of this architecture of surveillance poses a real existential threat to your privacy. It's a technological atomic bomb in the making. The more powerful the center becomes, the more likely it is to attract the unscrupulous.

The "bizarre insistence" is not on the purity of political positions; it's on the purity of information security. The surveillance state is a rather binary thing... the architecture exists, or it doesn't. No amount of internal audits can make it any safer.

Also, you should hear Russ Tice's (the first NSA whistleblower) recent interview with Boiling Frogs Post, where he claims to believe that politicians are being blackmailed after being spied on.

I love Lessig, and it's completely stupid to attack him over this, or to believe he's part of some conspiracy. He was primarily trying to raise the incredibly fascinating and important question: "Can purely technological means protect privacy in the long run?"

It is, however, entirely reasonable to question if a corporation that took part in "Team Themis" could/would actually develop such technology.

No, purely technological means cannot protect privacy!

The point is that technology has already usurped privacy. Even if the government weren't spying on you, everyone else is. (And let's face it, the government does too, even if the NSA doesn't.)

The genie is not going back in the bottle, but perhaps we can use technology to add some arc chutes and at least try to constrain where the pixie dust is going. In the end, it will be technological means that help the people protect their privacy (and they are, after all, the only ones who can demand and truly enforce it).

Or as my father always said, "Locks don't keep bad people from breaking into your things; locks just keep honest people honest".

My point is, no, we cannot use technology to constrain this.

The solution is to stop funding concentrated surveillance programs, and to protect government/corporate whistleblowers.

We're headed in the exact opposite direction.

We are headed in the opposite direction because that's not a solution, that's sticking your head in the sand.

Supporting whistleblowers is not sticking your head in the sand.

The PRISM program & Palantir's products do not give YOU any transparency into anything.

I was referring more to the "stop funding concentrated surveillance programs" part. Just because your own government isn't spying on you doesn't mean you are not being spied on. It just means you have deluded yourself into believing you have privacy. Far better, in my opinion, to accept that privacy is no longer possible in the 21st century so that we can start figuring out how to live without it.

Who in this thread is arguing for PRISM? Certainly that's not what Dr. Lessig is asking for either.

What Lessig is asking for (see his 1999 book "Code", and his plug of Palantir) is for software that can cross correlate data from multiple sources to identify leaks of information for the purpose of copyright protection. He (and others) believe that the right software can provide privacy while "securely" storing vast amounts of data, through "immutable audit logs". The problem is, he's wrong, and this isn't technically possible.

The kind of technical naiveté that believes surveillance software can be secure and auditable is what allows software like the NSA's PRISM to develop and to concentrate too much data. When you hear Obama claim that PRISM is transparent, he's referring indirectly to flawed technical arguments.

So no, of course Lessig won't argue for PRISM. But he is arguing for measures that PRISM probably already has, measures that cannot be sufficient.

I haven't read the book itself, but it's obvious that 'immutable audit logs' don't prevent privacy violations. They would simply make those violations detectable. Enforcement would still be up to the rest of the oversight structure (whether that's institutional, external audits, journalism, special interest groups, or a combination thereof).
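The "detectable, not preventable" distinction is easy to see in code. Here is a minimal sketch of a hash-chained append-only log; the names and record structure are invented for illustration and have nothing to do with Palantir's actual design:

```python
import hashlib
import json

class AuditLog:
    """Append-only log where every entry commits to its predecessor's hash.

    Tampering with any past entry breaks the chain, so misuse becomes
    detectable after the fact -- but nothing here prevents the access
    itself, and a privileged operator can still discard the whole log.
    """

    def __init__(self):
        self.entries = []

    def append(self, record):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": digest})

    def verify(self):
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append({"user": "analyst1", "action": "query", "target": "phone_records"})
log.append({"user": "analyst2", "action": "query", "target": "emails"})
assert log.verify()

# Quietly rewriting history breaks the chain: the violation is detectable,
# but it has already happened.
log.entries[0]["record"]["user"] = "nobody"
assert not log.verify()
```

Note that `verify()` only tells you the log was altered; who looks at the log, and what happens next, is exactly the oversight question.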

And either way, technical measures that would be insufficient for PRISM may very well be sufficient for most of the rest of what government does.

As a final note, I think we tech types sometimes get far too wrapped up in preventing outcomes that are already so implausible as to be merely theoretical, and use that to oppose measures which, while imperfect, would still be useful if applied. It's "the perfect is the enemy of the good" that we see in software project management, applied to government at large.

What is "bizarre" about opposing big data surveillance exactly? We know you believe in "if you have nothing to hide, you have nothing to fear", since you've said as much in previous comments, but it seems unfair to claim that the opposing view is bizarre.

The only "bizarre" thing is the insistence on purity (or if you rather, binary logic instead of fuzzy logic), not the position itself.

Any decent politician can exploit this to derail the entire point of the ideologue, by tying the ideologue into supporting things that the majority oppose. Try to imagine an interview given to a Senate subcommittee where a hacktivist claims with a straight face that '...more people die in car accidents each day than in 9/11, so what's the big deal?'

Sometimes compromise is actually better for the people overall. Sometimes strategic retreat allows a political party to be in an advantageous position when the other side inevitably makes a misstep. On the other hand, stubbornly drawing a line in the sand risks trapping the party position in a salient, only to later be "overcome by events".

It can be OK to say "Your idea is stupid, and I don't support it. But if it's implemented it should contain at least X, Y, Z to minimize the damage your stupid idea would cause". But tech activists don't generally ever get to that point, and should their objections be overcome the final result ends up being more damaging than it otherwise might have been.

It's not bizarre at all. You give up part of your position by simply conceding that there can be a debate about the subject.

The thing is, your point is valid, too. The whole purity thing itself is not a binary thing, but rather fuzzy.

I think what's needed is both. We need credible, respected leaders who absolutely refuse any kind of blanket, surveillance-by-default architecture. We also need credible, respected leaders who say the other thing - though of course not too publicly. And those two groups had better not fight each other, but unite against the common enemy.

Conservatives tend to be very good at this sort of thing, probably because conservatism is really tribalism at its core, and so conservatives understand instinctively that they should not attack each other. Defenders of civil rights are all about the individual, so unfortunately they are more prone to in-fighting, as this thread demonstrates ;)

Agree. Some people (perhaps tech people are more so in this direction? [1]) see things as black and white and not shades of gray. God forbid you identify yourself as a fanboy of [insert idol] and go against or issue criticism of that idol. You almost certainly, in any forum, will have a hard time doing that. And lose favor.

With respect to all this Snowdengate, more people seem to have just decided that whatever the government is doing is wrong and has no benefit, and that's that. End of discussion right there. No devil in the details at all.

[1] Simplistically tech people, programmers, math people and the like are binary and deal in exact 1's and 0's. Tech people would be more likely to jump on this particular comment that I made if it didn't fit with their particular world view.

This is a good example of why I don't have the enthusiasm most of this community has for Lessig. His single issue lately is money in politics, to the extent that he seems to be blind to anything else.

The criticism which he linked to primarily pointed out that he was speaking well of a company that exists primarily to enable the surveillance state. My reading of the mention of contributions to politicians in the last paragraph was to point out that this company is contributing to people who Lessig almost certainly does not agree with politically.

But Lessig's response is entirely focused on not being tainted by any contributions.

He seems to get what the root of the criticism is: "The essence of the criticism is that Palantir is a bad company, or that it has done bad things, or that it has been funded by bad people."

But his response is "I am completely in favor of questions being raised of anyone like me (meaning people trying to push a particular public policy) about whether mentioning a company or their product is done in exchange for money."

No one ever raised this question. It's not a suspicion. I think what we'd like to know from Lessig is:

1. Do you think it's okay to engage in wholesale spying on Americans, as long as it's subject to what you consider the appropriate oversight?

2. If you don't like spying, are you speaking well of Palantir because you don't think they are involved in it, or are you just interested in the one aspect of their technology and not trying to promote the company as a whole?

...are you speaking well of Palantir because...

He is not speaking well of Palantir. He speaks well of two people in Palantir's employ, whom he knows personally, and he cites a piece of Palantir's technology as the kind of thing that the world needs more of — irrespective of whether or not the rest of their oeuvre is good, bad, or indifferent. That's it. Full stop.

He specifically says he's not touching the discussion of whether or not Palantir are good guys or bad guys. He specifically says he's not endorsing them, for compensation or otherwise. He specifically says he thinks that technology that audits every access of a piece of information is a valuable thing to have in the presence of a panopticon. And. Nothing. Else.

Any deeper reading than that is projection or fantasy. Lessig has earned being taken at his word, IMO.

> Any deeper reading than that is projection or fantasy.

Indeed, and it's concerning that Lessig is able to reply to those types of points in very clear and direct language and still have it fly over the heads of people because "PALANTIR".

Even completely ignoring the very concept of the NSA, it is clear that the government will be adopting technology to handle the governance of people.

So decide, tech peoples: Should this technology build in audit trails or not? Should this technology record what interface a government employee was using to extract this data, or not?

There are practically no government databases which can be completely innocuous in the face of a determined government insider. Don't throw the baby of technology controls on misuse of data out with the bathwater of... something(?), especially with accusations as specious as "their board member knows a guy who was on the board of... BAE! (oooooh ahhhhhhh)"

> So decide, tech peoples: Should this technology build in audit trails or not? Should this technology record what interface a government employee was using to extract this data, or not?

Yes, use the technology wherever applicable, knowing that the technology does not guarantee correctness. But definitely stop the wiretapping and the collection of (even encrypted) data.

The tech doesn't justify the program.

So decide, tech peoples: Should this technology build in audit trails or not? Should this technology record what interface a government employee was using to extract this data, or not?

It shouldn't be.

So back to OCR paper forms then? No more web forms to fill out my tax data each year? Should I have to go visit the Social Security office in person each year too?

You're attempting to relate unrelated topics and struggling with a straw man here. If your question had been: Should we use technology in disparate areas of government like tax records and social security, clearly the answer from almost everyone would be yes, of course, as the government does already (and with relatively basic access controls which work perfectly fine). Medical records for example are kept on computers in many countries, and access to them is regulated without a sophisticated system to log access, just with fairly basic access controls.

However that has nothing to do with whether we should attempt to control NSA dragnet surveillance with technology after the fact, or help to construct a surveillance state in the first place. Debating whether we should apply controls to access to say phone record data collected by the government ignores the more important question of whether that data should be collected and collated by the NSA in the first place. I'd say that collection of the data is the only point at which adequate safeguards can be put in place, not after consumption of the data - by then it is too late, and any technical safeguards can easily be put aside later.

> However that has nothing to do with whether we should attempt to control NSA dragnet surveillance with technology after the fact

It's almost like there was a reason I explicitly disclaimed NSA when I made the point you responded to.

Even besides the IRS, what about FBI, local law enforcement, ATF, Border Patrol, and all of those other Federal agencies that have arrest powers? Should the local Good ol' Boy sheriff be able to pull up your record in NCIC with no audit trail?

I don't believe you should have a record in NCIC unless there's a very good reason, just as I don't believe the US should be collecting and storing fingerprints of visitors at the border, or collecting the phone records of every American, or the GCHQ collecting almost all the data that passes through the UK and passing it back to the NSA. The mere act of collating all that information and storing it indefinitely is incredibly dangerous, and some logging of access is not going to make it safer. So this is why I reject the premise of your question.

Not collecting all that information in a central record is the best defence against misuse - if you collect it, it will at some point in the future be misused, just as Hoover, Nixon, the GDR etc misused the far more limited powers they were given.

That's not an available choice.

Not that I'm defending Lessig or Palantir (I'm definitely not a fan of Palantir), but I think the point Lessig was making is that of all the options, having one that is accountable and traceable is probably the least bad. That is to say, agencies (both public and private) are going to (and have!) use data gathering technologies; we may not be able to prevent it. The best we may be able to do is ensure that the chain of custody is unbroken, and evidence isn't manufactured, manipulated or otherwise called into doubt. That and start enforcing warrant requirements again.

That and start enforcing warrant requirements again.

I think enforcing the law (or creating new laws) and revoking the Patriot act would be far more effective than any technical measures after collection. The restrictions of the Palantir system sound about as effective as Snapchat (that is, not at all effective for a determined user). If you can read it and see it, you can spread information and use it in an uncontrolled way. That's not even considering other attacks like photographs, duplicates of the memory, hacks etc. - once the information has escaped the system into the brain of one user, there is no way to control access.

The best solution is for that information to be collected only when strictly necessary and deleted after use, not stored. That's inconvenient and less effective for spy agencies, but more effective for the privacy of everyone else. If you want to stop universal surveillance, we have to control the collection of the data - anything else is too late in the process to make any meaningful difference and will be subverted over time, as the NSA abuse of their mandate has demonstrated.

You cannot ensure such a thing.

Isn't this his response to #2?

> In both cases above, I was pointing to a type of technology. The truth or falsity of what I was saying doesn’t depend upon whether Palantir is a good or bad company. About that question, I am not, and don’t purport to be an expert.

It would be interesting to see his response to points 1 & 2, but I think we already got some idea of his answer.

IMHO, I interpreted his original reference to Palantir as "systems used for scanning databases and datamining should have an audit trail built into the core of that system".

Edit: Clarified that it's my opinion, not a direct quote.

I totally disagree with you. Your first question embodies a very non-pragmatic mindset - the first question has no "right" answer. However, rejecting corporate funding for research etc. is something that in principle makes very strong sense and transcends the sort of questions you are asking.

He seems to be wise enough to not take a black and white stance on Palantir as a company - really, no one has the right to call a company good or evil and we are ignorant of their intentions. However, his point is regardless of his perception of Palantir, corporate funding biases research and in principle thus should be rejected.

Lessig (or someone) is deleting my comment on Disqus. Here it is:


The government's claims of transparency and auditability for the NSA's PRISM program are analogous (if not directly related) to Palantir's claims. Search for "immutable auditing" below:


But even with such an audit trail to the core, it is known that it isn't sufficient:


I wager that for any given system that touts immutable auditability, there is a way to hack around it. Privacy through automated means is impossible. At best it is a kind of DRM that the NSA can easily and secretly work around if it wanted to. What we should be advocating instead is Perfect Forward Secrecy in our internet architecture, and the dismantling of PRISM and related data centers.

Prof. Lessig, in your book "Code", you are using the issue of copyright to condone the current direction of the surveillance state and offering red herrings as "balancing" compromises. Such a balance is impossible in the face of concentrated storage of (even encrypted) data by intelligence agencies. As long as the NSA can tap the wires and record information in vast databases for cold storage, we are absolutely at risk.

More technical discussion here: https://news.ycombinator.com/item?id=5966942

P.S., FTFY: "I have not, AND would not ever, accept money from Palantir..."

What type of technology would allow us to be absolutely certain that a piece of information was only used for an explicit purpose?

My first thought was a chain of custody with public/private key encryption for viewing the information. Perhaps file systems that record access?

It's analogous to media DRM. Nothing "works" if the user has admin access on the viewing device. And the only way to ensure that is a "trusted computing" chain all the way from processor boot.

Which is possible, but you have to lock out any viewers who can't demonstrate a secure chain from boot to viewing app (so you can trust the viewing app is "yours" rather than the device owners).

And there's always the analog hole. Unless you can rely on a 'EURion' (http://en.wikipedia.org/wiki/EURion_constellation)-like mark to stop recording, people can always capture data at the point of display-to-retina.

The only other approach is the after-the-fact blamethrower, where you tag each copy steganographically so that you know who was the original source of any leaked or circulated copies. That wouldn't stop a Snowden though.
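The "blamethrower" idea can be sketched in a few lines. This is a toy, assuming a crude zero-width-character tag rather than real steganography, and every name in it is invented; a real watermark would have to survive copy-paste, reformatting, and deliberate scrubbing:

```python
import hashlib

# Zero-width space / zero-width non-joiner: invisible in most renderings.
ZWSP, ZWNJ = "\u200b", "\u200c"

def copy_id(recipient):
    """32-bit per-copy ID derived deterministically from the recipient name."""
    digest = hashlib.sha256(recipient.encode()).digest()
    return format(int.from_bytes(digest[:4], "big"), "032b")

def tag_copy(text, recipient):
    """Append the ID as invisible characters: ZWSP encodes 1, ZWNJ encodes 0."""
    marker = "".join(ZWSP if bit == "1" else ZWNJ for bit in copy_id(recipient))
    return text + marker

def identify_leak(leaked, recipients):
    """Decode the trailing marker and match it against known recipients."""
    bits = "".join("1" if c == ZWSP else "0" for c in leaked[-32:])
    for recipient in recipients:
        if copy_id(recipient) == bits:
            return recipient
    return None

people = ["alice", "bob", "carol"]
doc = "TOP SECRET: the memo text."
copies = {name: tag_copy(doc, name) for name in people}

# Each copy looks identical on screen, but a leaked one points back at
# its recipient -- after the leak, which is exactly the limitation above.
assert identify_leak(copies["bob"], people) == "bob"
```

As the comment says, this attributes a leak; it does nothing to stop a Snowden who accepts being identified.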

Yes, and probably the biggest hole is simply mod'ing the surveillance software with backdoor unauditable access to the data, such that higher ups with "special secret privileges" can bypass the logs.

Who audits the software? etc.

Isn't display-to-retina capture handled by physical security? I.e., aren't most top-secret (not sure of the exact classification, but you know what I mean) facilities controlled in regards to what electronic equipment you can take inside?

Yes, good point. But it all depends on level of risk, type and amount of data etc.

You don't need any electronics for a film camera. Or any metal, for that matter. It's a risk to take such an item in, but maybe it's worth it for your use case.

And I guess if it's small amounts of text, there's always human memory (or writing on your skin?).

So you are (obviously I guess) trusting the people accessing the data to not abuse it, "helping" that trust with whatever technical countermeasures make it harder to move bulk data.

You can't. It's not a technical problem: you can always log access. The problem is trusting the logger to be configured properly and to report results correctly.
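The "trusting the logger" problem in miniature; this sketch uses invented names and stands in for no real system. The audited front door works fine, yet nothing stops privileged code from reaching the data directly:

```python
# All access is supposed to flow through query(), which records an
# audit entry. But the log only captures code paths that choose to
# use it -- a privileged insider can read the store directly.

access_log = []

class DataStore:
    def __init__(self, records):
        self._records = records  # nothing enforces going through query()

    def query(self, user, key):
        access_log.append((user, key))  # the audited front door
        return self._records.get(key)

store = DataStore({"subject42": "call metadata"})
store.query("analyst", "subject42")      # logged, as designed

# The "special secret privileges" path: same data, no log entry.
leaked = store._records["subject42"]

assert leaked == "call metadata"
assert access_log == [("analyst", "subject42")]  # the leak left no trace
```

Closing the side door means auditing the logger's own code and deployment, which just moves the trust problem up one level.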

I pointed this out on the original HN discussion:


Not specifically for general data tracking, but when I read this, the first technology that sprang to mind was Ben Adida's (http://ben.adida.net/) Helios voting system (http://heliosvoting.org/):

"When you cast a vote with Helios, you get a smart tracker to track your vote all the way to the tally. No one knows how you voted, but you can track your vote and everyone can check the tally. "

What a remarkable response. Lessig reacts to the substance, rather than the emotion of the criticism, making distinctions that need to be made, and answers the criticisms in language that's easy to understand.

He'd make a great lawyer.

Well, he is the Roy L. Furman Professor of Law at Harvard Law (http://www.law.harvard.edu/faculty/directory/10519/Lessig, http://en.wikipedia.org/wiki/Lawrence_Lessig :-).

Yes, I know. I was making a funny!

For those who are unaware, Lessig is a professor of law and ethics at Harvard, and has been a professor of law at Stanford and the University of Chicago in the past.

Now if only our lawmakers could follow suit.

Just one example of the namecalling and emotional backlashing. The ironic thing is that 6 months ago Harry Reid was called an "idiot", and afaik Reid has returned the favor to House Republicans recently.


No, he's reacting to allegations about direct ties to Palantir, and failing to respond to the criticism of such a surveillance program.

He's making the point that he has no reason to respond to such criticism, since he's neither involved nor advocates it.

He advocates Palantir's surveillance program for the purpose of copyright protection, using its technological aspect of "immutable auditability" to ensure that the program is used for its intended purpose. The problem is, he assumes such a thing is possible, when in fact it's more like flimsy DRM that the NSA can quite easily and secretly bypass.

"rather than the emotion of the criticism"

If one of his goals is to gather support for what he did among regular people, though, he does have to deal with emotion as well. Think of people sitting on juries and the care that goes into selecting those people.

Sounds like great news for Glenn Greenwald. Can we see the audit trail for Team Themis and understand more about how Palantir was contracted to be paid $3.6MM for malicious datamining to attack a US journalist?

Jeff Jonas (https://twitter.com/JeffJonas) has been working on a similar immutable audit system at IBM. It's part of his G2 Sensemaking system that started as a skunk works project in 2011...

Sensemaking on Streams – My G2 Skunk Works Project: Privacy by Design (PbD) (2011) http://jeffjonas.typepad.com/jeff_jonas/2011/02/sensemaking-...

Found: An Immutable Audit Log (2007) http://jeffjonas.typepad.com/jeff_jonas/2007/11/found-an-imm...

Immutable Audit Logs (IAL’s) (2006) http://jeffjonas.typepad.com/jeff_jonas/2006/02/immutable_au...

Yesterday’s Technology Review Story: Blinding Big Brother, Sort of (2006) http://jeffjonas.typepad.com/jeff_jonas/2006/01/yesterdays_t...

Also see...

G2 | Sensemaking Two Years Old Today (2013) http://jeffjonas.typepad.com/jeff_jonas/2013/01/g2-sensemaki...

G2 | Sensemaking – One Year Birthday Today. Cognitive Basics Emerging (2012) http://jeffjonas.typepad.com/jeff_jonas/2012/01/g2-sensemaki...

I have the feeling that Palantir and PRISM must be related in some way.

Given that "Palantir Intelligence integrates disparate data from disconnected data silos at massive scale for low-friction interaction with intelligence officers." I think it's very likely that government analysts are using Palantir to analyze data gathered with PRISM. But whether they use Palantir or a corkboard, I don't see how that makes much difference about the legality of PRISM itself.

It's because the CIA is a major backer of Palantir. Palantir put out the site https://analyzethe.us/ Tracking and analyzing is what they do.

The CIA is also a major backer of mongodb and they aren't getting called out. http://techcrunch.com/2012/09/18/mongodb-maker-10gen-closes-...

"Next in the WUO news: Department of Defense a major backer in corn foodstuff development programs. Corn farmers remain suspiciously silent about the whole thing. Do the corn farmers realize how their corn is being used???"

There's the prototypical path of government subsidy and research through universities, slowly moving out to the private sector, and then there's direct capital injection for a start-up from a "private" venture capital firm run by the CIA.
