“The Dream of Internet Freedom Is Dying” (techcrunch.com)
136 points by chris-at on Aug 5, 2015 | 42 comments



I would add that the "Secret law" also encompasses the private deals struck between various organizations that have the full force and effect of law.

For instance, I am often asked about the standards for DMCA takedowns and monitoring of content. I cannot turn to laws and cases. I have to glean information from patterns of activity. YouTube does X, and the MPAA isn't screaming mad about YouTube these days, so perhaps we also need to do X. Or, Megaupload was doing X, Y, and Z. But a dozen other websites were doing X and Y without a peep from US authorities. So I guess Z is the touchstone, even though Z isn't mentioned anywhere in any law or case and wasn't even around when the laws were written.


YouTube spent several years fighting a lawsuit from Viacom before settling last year. If whoever you're advising doesn't have Google's cash to defend themselves, don't tell them to copy YouTube's actions.


My advice goes one step further: Don't do user-generated content without an ejection plan: a means to extract all the money and shut the doors instantly should you catch the eye of the MPAA or their ilk.


In my opinion, the dream could come back.

Defeating legal threats:

- Design the server architecture so it can be migrated from one country to another to escape censorship. This is easy now: with VPSs and automated deployment, you can move the infrastructure in a few hours.

Defeating cost threats ("run your own software like a 'megacorp' "):

- Services running on cheap hardware, handling 10-100x more users per server than current high-level language implementations (back to CGIs/FastCGIs in native code, instead of PHP/.NET/Java/NodeJS/Python/Perl).

- Filesystem-less storage (start the server, load the filesystem into an optimized in-process RAM DB), so the following steps can be avoided: disk cache, filesystem tree search, multiple memory copies, etc. (a minimal sketch follows this list).

- Separate persistent and non-persistent data, so most operations don't need to hit the disk.

- Abstraction over those low-level systems, so people with only high-level skills can build a massive-scale web application.
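
A minimal sketch of the filesystem-less idea above, assuming a toy fixed-size hash table (all names and sizes here are illustrative, not from any real framework): content is read from disk once at startup, and every request after that is a pure RAM lookup.

    /* Toy sketch: load content into an in-process table at startup,
       then serve lookups from RAM only.  Illustrative names/sizes. */
    #include <stdio.h>
    #include <string.h>

    #define SLOTS 4096                 /* fixed table size for the sketch */

    typedef struct { char *key, *val; } entry;
    static entry table[SLOTS];

    static unsigned hash(const char *s) {
        unsigned h = 5381;
        while (*s) h = h * 33 + (unsigned char)*s++;
        return h % SLOTS;
    }

    static void put(const char *k, const char *v) {
        unsigned i = hash(k);
        while (table[i].key) i = (i + 1) % SLOTS;   /* linear probing */
        table[i].key = strdup(k);
        table[i].val = strdup(v);
    }

    static const char *get(const char *k) {
        unsigned i = hash(k);
        while (table[i].key) {
            if (!strcmp(table[i].key, k)) return table[i].val;
            i = (i + 1) % SLOTS;
        }
        return NULL;
    }

    int main(void) {
        /* Startup: one pass over the disk, then never touch it again. */
        put("/index.html", "<h1>hello</h1>");
        put("/about.html", "<h1>about</h1>");

        /* Request path: pure RAM lookup; no disk cache, no filesystem
           tree walk, no extra memory copies. */
        printf("%s\n", get("/index.html"));
        return 0;
    }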


Two points I disagree with:

> - Services running on cheap hardware, handling 10-100x more users per server than current high-level language implementations (back to CGIs/FastCGIs in native code, instead of PHP/.NET/Java/NodeJS/Python/Perl).

CGIs are slow even with native code, in the vast majority of cases, and FastCGIs are fast enough with properly written Python/Perl. Further, properly written Java or .NET, and even NodeJS performs well enough these days that unless you have some extreme case (you don't), you shouldn't even think about native. The tradeoff in bugs and dev time is just not worth it.

> - Abstraction over those low-level systems, so people with only high-level skills can build a massive-scale web application.

With incredibly consistent and repeatable correlation, abstractions over those low-level systems cost a lot in performance, thus contradicting your earlier points.


> are slow even with native code, in the vast majority of cases, and FastCGIs are fast enough with properly written Python/Perl. (...)

Yes, CGIs are slow because of the process spawn. As for FastCGIs: if you want a 10-100x throughput increase on the same hardware, you have to go to a lower level. That's the tradeoff, i.e. properly written native code vs. properly written high-level code.
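
To make the spawn cost concrete, a minimal sketch of the classic CGI request path, where every request pays for fork() plus execve() before any work happens (the binary path is made up for illustration):

    /* The classic CGI request path: one fork() + execve() per request.
       A FastCGI or in-process handler skips this pair entirely, which
       is where the throughput gap comes from.  Sketch only. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void serve_one_request_cgi(void) {
        pid_t pid = fork();                 /* new process per request */
        if (pid == 0) {
            /* hypothetical CGI binary path, for illustration */
            execl("/usr/lib/cgi-bin/app", "app", (char *)NULL);
            _exit(127);                     /* exec failed */
        }
        waitpid(pid, NULL, 0);
    }

    int main(void) {
        for (int i = 0; i < 3; i++)         /* three "requests" */
            serve_one_request_cgi();
        return 0;
    }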

> The tradeoff in bugs and dev time is just not worth it.

The tradeoff in bugs and dev time can be mitigated with a higher-level abstraction, e.g. using high-level libraries, so the developer avoids those risks.

> With incredibly consistent and repeatable correlation, abstractions over those low-level systems cost a lot in performance, thus contradicting your earlier points.

Not if the abstraction is in the same low-level language, e.g. using high-level libraries on lower-level languages. In the end, what higher-level languages solve is memory management, string handling, a rich library, etc. With good libraries, C/C++/Others can be safer and as simple as e.g. NodeJS.


> With good libraries, C/C++/Others can be safer and as simple as e.g. NodeJS.

C cannot - there is apparently no way to write C code that is both reasonably efficient and reasonably foolproof; if you have large-scale examples written by more than one person[0], I would love to see them.

I concur C++ should be able to, but I again request large scale examples, because I've never seen any. Have you?

> if you want a 10-100x throughput increase on the same hardware, you have to go to a lower level. That's the tradeoff, i.e. properly written native code vs. properly written high-level code.

You have to define your terms. Is Lua high or low level? Well-written Lua code that does not need millions of small objects seems to run within 2x of comparable C code when executed through LuaJIT2. I've seen well-written Perl beat well-written C in the past (the Perl was better written, but both were well written).

> The tradeoff in bugs and dev time can be mitigated with a higher-level abstraction, e.g. using high-level libraries, so the developer avoids those risks.

With a language that so easily lets you shoot yourself in the foot like C or C++, this is not the case in practice; see, e.g., every C program ever written. Can you give the examples you're thinking of so we can discuss them?

[0] djb is an army all by himself, and even djb had security bugs - although, IIRC, by sheer luck not exploitable on x86 or ARM.


> C cannot - there is apparently no way to write C code that is both reasonably efficient and reasonably foolproof; if you have large-scale examples written by more than one person[0], I would love to see them.

Not easy, sure. In that regard, I'm trying something to help in the case of C, but it is not intended specifically for the problem we're discussing (just strings and other data structures, not a "NodeJS for C"): https://github.com/faragon/libsrt
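
As a taste of the idea, here is a generic sketch of a bounds-checked string type in plain C (illustrative only, not libsrt's actual API): the library enforces the capacity, so the caller can't overflow.

    /* Illustrative bounds-checked string type.  The append refuses to
       exceed the buffer instead of overflowing it.  Not libsrt's API. */
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        char   buf[256];
        size_t len;
    } str_t;

    static void str_init(str_t *s) { s->buf[0] = 0; s->len = 0; }

    static int str_cat(str_t *s, const char *src) {
        size_t n = strlen(src);
        if (s->len + n + 1 > sizeof s->buf)
            return -1;                      /* refuse, don't overflow */
        memcpy(s->buf + s->len, src, n + 1);
        s->len += n;
        return 0;
    }

    int main(void) {
        str_t s;
        str_init(&s);
        str_cat(&s, "hello, ");
        str_cat(&s, "world");
        printf("%s (%zu bytes)\n", s.buf, s.len);
        return 0;
    }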

> I concur C++ should be able to, but I again request large scale examples, because I've never seen any. Have you?

Of course not. There were some many years ago, when servers were way slower than today. The problem is not solved for the current web, obviously. "Solved" in the sense of having some framework for putting low-level stuff in the backend without a byzantine effort. The point is about how to break monopolies, in the David vs. Goliath sense. If you have the resources, go high level.

> You have to define your terms. Is Lua high or low level? Well-written Lua code that does not need millions of small objects seems to run within 2x of comparable C code when executed through LuaJIT2. I've seen well-written Perl beat well-written C in the past (the Perl was better written, but both were well written).

Please note that I was speaking about extreme optimization: even avoiding filesystem I/O, and reducing memory copies to the minimum. So it is not about comparing C vs. Perl doing disk I/O, but pure computation. High-level languages, even with a JIT, have significant data structure size overhead and allocation issues (when handling 10^5-10^6 requests per second on a single thread), so even with properly optimized code you can get poor performance compared to low-level languages with optimized data structures (e.g. libraries using indexes instead of pointers, multiplexing buffers, allowing memory aliasing so conversions can happen in the same buffer, etc.).
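
For instance, here is what "indexes instead of pointers" can look like; a toy sketch with illustrative names: nodes live in one flat array and link by 32-bit index, which halves per-link memory on 64-bit hosts and keeps the whole pool relocatable as a single block.

    /* Index-linked list in a flat pool: links are 32-bit indexes, not
       64-bit pointers.  Toy sketch; no overflow check on the pool. */
    #include <stdint.h>
    #include <stdio.h>

    #define NIL UINT32_MAX
    #define CAP 1024

    typedef struct {
        int      value;
        uint32_t next;          /* index into pool[], not a pointer */
    } node;

    static node     pool[CAP];
    static uint32_t used = 0;

    static uint32_t node_new(int v, uint32_t next) {
        pool[used].value = v;
        pool[used].next  = next;
        return used++;
    }

    int main(void) {
        /* Build 3 -> 2 -> 1 as an index-linked list. */
        uint32_t head = node_new(1, NIL);
        head = node_new(2, head);
        head = node_new(3, head);

        for (uint32_t i = head; i != NIL; i = pool[i].next)
            printf("%d\n", pool[i].value);
        return 0;
    }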

> With a language that so easily lets you shoot yourself in the foot like C or C++, this is not the case in practice; see, e.g., every C program ever written. Can you give the examples you're thinking of so we can discuss them?

Writing backends in C/C++ is nonsense, unless you want to handle 10-100x the requests on the same hardware. If your service's revenue covers the expense, go high level. Of course :-)

P.S. some low level backend stuff:

Kore: https://news.ycombinator.com/item?id=9558196

Rwasa: https://news.ycombinator.com/item?id=9948749

Redis: https://github.com/antirez/redis (e.g. changing it in order to call the DB operations directly from the same process, without doing connections)


Attempting solely technical fixes to political problems is a delaying tactic, nothing more.

Imagine a law like this: "You must be able to positively identify any person creating content on a service you control".

Nothing you have suggested does anything at all to address that, in any way other than to try to make it harder to enforce. That method has been tried for 20 years, and it is quickly becoming harder and harder.

It's already extremely hard to buy a VPS with even pseudo-anonymity - most require credit cards, which means a law enforcement agency can retrieve your details trivially.


You can buy services with a credit card that are legal in one country and not in another. Theoretically, you cannot be punished under your local legislation for operating in other countries. E.g. you can buy/rent servers in Hong Kong (or Taiwan) to provide worldwide free-speech services. You can be blocked from mainland China for that, though, but not punished (I'm not 100% sure about this; you'll probably be able to find less extreme examples).


I know of two Icelandic outfits that accept Bitcoin and anonymous registration. Nobody can trace my VPS to me.


If you think Bitcoin keeps you anonymous in an age of ubiquitous network monitoring and growing work on blockchain analysis, then you have missed the point completely.


Someone's never heard of a Bitcoin laundry and mail2tor.


Don't you see? This is exactly the point I'm making.

I agree 100% that there will continue to be ways for extremely careful people to do this kind of thing. But the counter-measures and counter-counter-measures continue to ratchet up, and at each step more and more people are left behind.

That's why I said that attempting solely technical fixes to political problems is a delaying tactic, nothing more.


Somebody's never heard that security is a moving target.


There are a few that accept Bitcoin.


> Defeating cost threats ("run your own software like a 'megacorp' "):

If you mean "run your own email server and decentralized social service node", you don't need high performance (note that you are still reliant on a third party to host your stuff anyway...). However, email in particular requires careful setup and maintenance; it's definitely more a problem of convenience than anything else. Also, most people are not motivated enough to do that, even in the tech community, and existing social networks/IM systems have a huge network effect at this point.


No. I mean services for a worldwide audience (for which you currently need lots of datacenters), not a local thing. If you don't need high performance, go higher level, of course.


Publishing has been decentralized by the web.

Communication has been decentralized by email.

Money and contracts have been decentralized by Bitcoin.

Social hasn't yet been effectively decentralized, but it will be.

Decentralized is better when it comes to individual choice, curbing abuse of power, and resilience (no single points of failure).

But it's way worse when it comes to security. And no one has been able to decentralize security effectively yet, because a single top-down entity with an economy of scale has more resources to secure itself than you can expect from EVERY little host and their dog upgrading to the latest version of WordPress.


I think you meant to say: "Publishing has been centralized by Amazon, and email has been centralized by Google and Microsoft."


Social has been decentralized. I socialize with friends across a dozen digital platforms now.

Email, phone calls, SMS, WhatsApp/Viber/Kik/Snapchat, Facebook, Skype, Twitter. Then throw in a dozen other communities, ranging from Hacker News, to Stack Overflow, to Reddit or Imgur, and so on.

Social has been substantially decentralized. The only way it can be argued that it hasn't is if you consider any corporate ownership of a platform to be by default non-decentralization (as opposed to the availability of numerous platforms being the decentralizing aspect).


Email - logged and retained by service provider.

Phone calls - tappable, metadata logged and retained by service provider.

SMS - logged and retained by service provider.

Skype - activity logged and retained by service provider.

Twitter - activity logged and retained by service provider.

You were saying something about decentralization?


It's not all that easy or cheap to decentralize the last mile. Things were not any more decentralized when most people used dialup, and they were definitely more centralized when the main users of the internet were on a few campuses.


Absolutely true. Getting a phone tap and the equipment required to monitor an active modem connection was sufficiently onerous for local and federal law enforcement that it was reserved for active investigations of high value targets. Hell, they couldn't even be arsed to track Mitnick down when he was on the Most Wanted list, it took a phone company tech and a pissed off security analyst from California to finally bring him down.


Decentralized does not imply secure. Obviously both are desirable, but one does not imply the other. Although some will (rightfully) argue that a centralized system can never be fully secure, so you might say that secure implies decentralized at some level.

Although, to be fair, of all the above only (the old) Skype and email come close to decentralization.


I don't mean decentralized like that. Every one of those platforms is a centralized network.

I'm talking about something like WordPress, but for social: the way someone can install an open-source SMTP server, web server, blog, etc.


Generally it's worse for UX as well.


I don't have the energy or the motivation, but I would really love to make a p2p internet that could be as easy to use as BitTorrent.

Of course security is a problem, but I'm sure that making things public by default would make it easier, and security is not always mandatory; you can always use something that isn't entirely secure and use it well within its limits.



Wu users need to also accept responsibility. We handed over control to "Facebook, Twitter, GMail, Amazon, etc." as part of our Faustian bargain to get everything for free and put advertisers in the driver's seat. The invisible hand works for the customers, not the products.


Dvorak keyboard?


I think it's a side effect of the eternal September. There was a time in my life when I had to convince people that email was important. Now, it's old news and even the slowest adopters have seen the light provided by the internet.

One day, someone will invent something cooler than the internet, and no one but the super technically minded will use it, and it will be a new spring. But September will still be a few months away.


It's ironic to read about Granick advocating a free, open and decentralised Internet the same day we've been discussing how we're heading straight for AOL 2.0: https://news.ycombinator.com/item?id=10008769

The writing's on the wall.


The writing has been there for a while but many are too distracted to stop and read.


Apparently the Black Hat audience is very conservative and pro-government. And likes Keith Alexander.

I guess the author has been attending a different conference than me.


Perhaps attendees there as spies now outnumber actual enthusiasts. Perhaps they can now spy on each other and leave the rest of us alone.


I've just had a simple idea while thinking about the conflict between privacy, freedom, law and democracy, and its effect on the Internet. How about designing a new set of protocols that use the highest degree of available cryptography and obfuscation, minimizing the leaking of traceable information, but have a democratic voting mechanism at their core (Paxos-style?) for "de-privacing" sources of information/traffic should the majority of users so decide (in cases clearly violating laws in grave matters, etc.)? Direct-democracy style. How much of a pipe dream would that be? What would be the deficiencies of this approach, and how could they be addressed? Is this even technically doable in a decentralized fashion? Would apathy or malice of the general population be the main risk?

The motivation is that nobody would like to have all their private matters out in the open (which is a fact nowadays), but we need an efficient way to enforce reasonable laws (like those against child abuse, selling hard drugs, illegal arms, etc.).
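
To make the voting gate concrete, a toy sketch of just the counting logic (everything here is hypothetical; a real design would need Sybil-resistant, verifiable voting and threshold cryptography underneath):

    /* Toy "de-privacing by majority" gate: the reveal is authorized
       only past a quorum threshold.  Counting logic only. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool quorum_reached(unsigned yes, unsigned total, double threshold) {
        return total > 0 && (double)yes / (double)total >= threshold;
    }

    int main(void) {
        unsigned yes = 612, voters = 1000;     /* made-up numbers */
        if (quorum_reached(yes, voters, 0.6))  /* 60% supermajority */
            puts("reveal authorized");
        else
            puts("source stays private");
        return 0;
    }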


> but we need an efficient way to enforce reasonable laws (like those against child abuse, selling hard drugs, illegal arms, etc.)

We have that. All those crimes take place in meatspace. Which is where the police exist, too. Tell the police to do their jobs and detect actual crime, not thought-crime.


I think the main risk is mob mentality; especially polarizing stories could result in essentially innocent people having their private lives laid bare over sensationalist coverage. You could, I suppose, couple the idea with a republican idea of a senate to dampen that effect, but I suspect it wouldn't work.


We've been helping to kill that dream since we pushed for net neutrality. We played right into it. Getting to the circus on time (i.e. watching Game of Thrones after torrenting it) was so important that we had to hand over the Internet to the FCC.

We no longer route around the damage. We intend damage.


Has anybody found the speech?


Probably won't be out until whenever they decide to release this year's talks on YouTube.





