Ask HN: If we could redesign the Internet from scratch, what should it be?
90 points by sdouglas on Dec 18, 2014 | 83 comments
I ask the question purely out of interest: not just about tweaking HTTP and TCP/IP, but also about the underlying hardware, much of which is decades old. If we could completely divorce the Internet from history, what would be the best design? Would it take much more inspiration from OSI, or look radically different? Perhaps the question doesn't make sense: if we radically changed the Internet it would no longer be the Internet, much as the Internet is not simply an improved version of the phone network.



IPv6 is key. I think if we never had to do NATs we would be a lot better off.

Security. HTTP and other similar protocols without encryption by default are not appropriate for today's Internet. Similarly, we wouldn't have to use opportunistic encryption of email.

Better support for protocols other than TCP and UDP. SCTP is a great protocol that fits many application-level protocols so well. Instead, corporate IT very often sets up the firewall to simply drop anything but the two most popular protocols.
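
(To make that concrete, this is roughly the default-deny ruleset I have in mind; just a sketch using iptables, with made-up interface names, not a recommendation:)

    # A typical corporate egress policy: only TCP and UDP get forwarded,
    # so SCTP, DCCP and anything else silently dies at the last rule.
    # lan0/wan0 are placeholder interface names.
    iptables -A FORWARD -i lan0 -o wan0 -p tcp -j ACCEPT
    iptables -A FORWARD -i lan0 -o wan0 -p udp -j ACCEPT
    iptables -A FORWARD -i lan0 -o wan0 -j DROP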

Sadly, we still wouldn't have a more widespread web of trust. Once you buy into WoT, it can be an invaluable tool, but the problem is largely orthogonal to the development of the Internet: public-key crypto is just too difficult to make easy and robust. I only list this here because I believe a more widespread WoT would mean the end of CAs, government snooping, and many other evils.


NAT has become a pillar of internet security in spite of itself, so with IPv6 has to come an improvement in security: the generalization of SSL/TLS, as you said, as well as the ND and IPSec that come with IPv6. But we also need the generalization of stateful firewalls to emulate what NAT was doing for us. Ironic...


NAT does little to nothing for security. NAT != firewall, and if your security depends on keeping internal IPs secret you have a problem. You can easily firewall without NAT.


In theory we agree. But in practice there is little to no firewalling in the default configuration of home routers, BUT there is masquerading NAT, which behaves as a firewall of sorts. NAT under Linux is actually done with iptables, which is considered to be an "interface to the Linux firewall", although that's only for practical reasons, once again.

Now I do agree that my point was kind of moot because it's indeed easy to firewall without NAT. I am all for IPv6 if it's done well. I especially like the idea of finally moving away from ARP Spoofing attacks thanks to Neighbor Discovery (ND).


I meant IPSec; ND is functionally the same as ARP. Also, IPSec support is no longer required with IPv6, so IPv6 might not really help with that.


> I think if we never had to do NATs we would be a lot better off.

Multiplayer gamers everywhere just did a giant Hoo-ah to that. I think the network admins are already drunk.


The Internet got a lot of things right - in particular, it is an excellent design for evolution. None of the original hardware is still around, it's grown by orders of magnitude in both size and speed, and almost none of today's most popular applications had even been thought of in the 1980s.

Having said that, there are definite deficiencies. It's fragile in many ways: it shouldn't be possible to launch flooding DoS attacks; encryption should be baked into all the transport protocols; the ability to connect to a computer should be much more under the control of the recipient (today's firewalls are a poor workaround for a design deficiency). The routing system is pretty fragile.

The ability to handle multiple networks simultaneously, and to move gracefully between them (or use more than one at once), isn't something that was originally necessary, but it is today with mobile devices. We're working on fixing that with MPTCP, but it will take a long time to deploy universally.

So, lots of architectural issues, but it's important not to lose sight of what is good about the current network too.


> the ability to connect to a computer should be much more under the control of the recipient (today's firewalls are a poor workaround for a design deficiency).

Erm ... no, it is absolutely intentional that the core of the network doesn't keep any flow state, because that doesn't scale.


It's not fundamental to keep per-flow state to have more control over reachability. Once you step back from the current Internet architecture, there are a whole host of possibilities. Here are a couple of (rather old) papers, for example:

http://www0.cs.ucl.ac.uk/staff/M.Handley/papers/dos-arch.pdf

http://nutss.gforge.cis.cornell.edu/


Splitting the address space into "client" and "server" addresses, supposedly for some security benefit? Come on, you can't be serious about that?!

I have only skimmed the paper, but it seems to me to be a collection of bad ideas, and I can't really be bothered to read all 8 pages. If you think there is actually something useful in there, would you mind condensing that into one paragraph?


We already have "client" and "server" addresses, courtesy of NATs. NATs are a huge pain from an architectural point of view, but all the same, many people like them because they provide some measure of security as a side effect. That paper was all about trying to capture this within the architecture, and providing the same effect in depth, rather than just at the NAT itself. It was a thought experiment more than a serious proposal, but that's the way new ideas come about - someone proposes a radical-but-flawed idea, and others see some merit in the principle, but find ways to avoid the flaws. I wasn't trying to push this solution on you - just responding to the over-general per-flow state point.


Except that the stateful packet filter that sometimes comes with a NAT gateway (and that provides the security that people like) depends exactly not at all on NAT and the pain that comes with that kind of address class distinction.


Well, I don't have stats, but from personal experience (perhaps I have a bit more experience with French rather than US providers), home routers do not use a dedicated stateful firewall; they use masquerading NAT. So it would depend on the way NAT is done.


* Everything encrypted by default. Dan Bernstein has talked about a few such ideas in the past [1], and there's also the more recent MinimaLT [2].

* The ability to be anonymous baked into the hardware infrastructure [3]. In the Internet's life, people have been more "anonymous" than not. Even in the Facebook age, many still communicate under pseudonyms on the Internet (thinking they are anonymous - but unfortunately the surveillance state and all the ad-tracking have made that wishful thinking)

* Being a lot more censorship-resistant/decentralized. If the Internet were being redesigned, stuff like the Great Firewall of China shouldn't be made possible. The Internet has been such a democratizing force because people have been (for the most part) out of reach of governments' might. I can only see a strengthening of that feature as a great thing. Yes, bad things will be done too, but on balance, I think the free Internet has proven to be an overwhelmingly net positive thing.

* As others have said, much better support for modern secure standards from the Internet's stakeholders. There has just been so much reluctance from major Internet stakeholders to change things for the better. That's why I'm a huge fan of Google's recent pushes for SHA2, HTTPS, etc. (some would say "forceful" pushes, but I think that's unavoidable if we want them done within a few short years instead of decades; considering how large the Internet is today, they are bound to step on a few toes anyway).

[1] - https://www.youtube.com/watch?v=pNkz5PtoUrw

[2] - http://cr.yp.to/tcpip/minimalt-20131031.pdf

[3] - https://code.google.com/p/phantom/


Two relics that haven't (and couldn't have) scaled past the late-90s magnitude of the Internet, and yet we're still with them:

TLS: CAs. TLS/SSL is largely "security on the internet", and any of thousands of CAs could sign a cert for any of our websites. Very few exceptions apply here - to protect against this, you effectively need to be on the client (see certificate pinning in Chrome). We need something like TACK[1] here. The current state of the TLS CA system is very convenient for malicious actors of any size.
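
(To make the pinning point concrete: the usual pin is a hash of the server's public key (SPKI), not of the cert or its CA chain. A rough sketch of how one is computed with openssl; example.com is just a placeholder host:)

    # Fetch the leaf certificate, extract its public key in DER form and
    # hash it; a pinning client compares this digest against a stored pin
    # instead of trusting whichever of the thousands of CAs signed the cert.
    openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
      | openssl x509 -pubkey -noout \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -binary | base64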

Routing: BGP is an overly trusting protocol[2]. Any of thousands of ASNs can advertise a route and its peers will happily chug that traffic along to it. This would allow, for instance, a Pakistani ISP to "take over" YouTube, or more recently, Indosat to try to take over the Internet[3].

There are many things that would be "nice-to-have" fixes, but these two are pretty urgent in my opinion. They were designed for a much friendlier Internet.

[1] https://tack.io/

[2] http://security.stackexchange.com/questions/56069/what-secur...

[3] http://www.bgpmon.net/hijack-event-today-by-indosat/


TLS can work perfectly well with DANE, and BGP trusting is a non-issue once you have a TLS-like protocol to secure your communication.

I don't think BGP needs to be fixed at all, the underlying problem is inherent to the Internet, changing the protocol will only close one of the several different ways for stealing data.

About TLS: yes, we need something better. Your link currently does not work, but I've never seen a proposal that's actually better than TLS + DANE.


BGP trusting is only a non-issue if you didn't really need to access that resource anyway. Out of the infosec CIA triad, this is still an Availability fail. Your one and only available recourse is... "keep refreshing", hoping that the Gods of Routing will put your request on the right path, which in the case of intentional sabotage would probably be never[1]. Let's emphasize at this point that you don't need MiTM for this, just a rogue ASN somewhere advertising a super-quick route to $ta.rg.e.t. In most cases, that would sinkhole at least the continent. So, IMHO, still broken, even if we will be able to know when it is broken (at some point in the future). Which brings me to DNSSEC/DANE.

TLS can indeed offset its CA burdens onto DANE, when DANE becomes a thing[2] and DNSSEC becomes ubiquitous[3]. We are so far away from that, that even google hasn't bothered with DNSSEC[4]. Considering the enormous infrastructure changes that this requires, we'll hardly be getting our money's worth - still a strict top-down system: IANA/Verisign have the root-root zone/keys. Verisign "owns" .coms (root zone), as well as most (all?) other gTLDs.
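
(To make the DANE/DNSSEC mechanics a bit more concrete, the lookups are easy to try with dig; the TLSA name below is a placeholder, and whitehouse.gov is the signed zone from footnote [4b]:)

    # DANE publishes the certificate association as a TLSA record at
    # _port._protocol.hostname; substitute a DANE-enabled site here.
    dig +dnssec _443._tcp.example.com TLSA

    # With a validating resolver (8.8.8.8 is one public option), the "ad"
    # flag in the reply header means the DNSSEC chain checked out.
    dig +dnssec whitehouse.gov A @8.8.8.8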

TACK is a proposed TLS extension for certificate pinning for the masses. It doesn't solve DNSSEC problems (not its scope), but requires far fewer infrastructure updates to be implemented. It is also not perfect[5] but still a much closer/realistic goal than DANE. We need something yesterday - the green lock icon means fuck-all right now.

And finally:

> I don't think BGP needs to be fixed at all, the underlying problem is inherent to the Internet, changing the protocol will only close one of the several different ways for stealing data.

Strongly disagree. If it does close off one of the ways of stealing data, it definitely needs fixing.

    /summon moxie
[1] never -> until Humans intervene to manually block the BS route

[2] DANE Browser support matrix: [ [] ] (2d array, versions on the X axis)

[3] For this to really work for the end client, you need nearly ubiquitous support. The client must be its own recursive nameserver (don't trust your ISP), and all recursive nameservers up to the authoritative one need to speak DNSSEC as well (I think). Of course, the g/cc TLD also needs to support DNSSEC (not all do), the owners of the site must have set it up properly, etc. After all that is ubiquitous enough for you to enforce DANE for proper TLS ("green icon"), you just need to update all clients everywhere with the new rules. We're currently at step 0.1 of this process - the root zones for most g/cc TLDs are there, and that's about it.

[4a] http://dnsviz.net/d/google.com/dnssec/

[4b] http://dnsviz.net/d/whitehouse.gov/dnssec/ (for comparison)

[4c] http://www.dnssec-name-and-shame.com/ (NOISY site - be warned) Test out a few top alexa sites. You'll be surprised.

[5] It can only protect your second (and subsequent) visits to the site. If your first time hits a malicious impersonator, you're shit out of luck. Furthermore, the impersonator could "tack" its own malicious certificate for some lols when you actually get to talk to the actual target.com.


edit: [1] should be http://tack.io/, my bad. I assumed https would work.


Overall, I think the Internet has proved to be quite reliable (we've been using basically the same infrastructure for decades). The key problems mainly arise from centralization (DNS) and weak/insecure protocols (BGP, HTTP). Internet 2.0 should be p2p-based with end-to-end encryption everywhere. Not sure if that's possible to achieve with current protocols/technologies, but torrents and the Tor network are a step in the right direction imho.


One thing I miss a bit is that I cannot just run a small server somewhere, because of things like NAT. Although this "addressability" problem is slowly being solved by a bigger address space.

With the influx of low power devices, implementing distributed versions of many of today's centralized services could be much more fun.


I’m torn on the NAT dilemma. On the one hand, it obviously makes point-to-point communication harder. But on the other, for each one of us wondering why we can’t ssh directly into a home machine, there are 10 people with insecure machines protected from automated attacks by the box running NAT.

It’s probably helped at least as much as it’s hindered.


You are confusing NAT and a stateful packet filter. NAT doesn't contribute anything there, the stateful packet filter does, and a stateful packet filter works just as well without NAT, just that it's much easier to make some services accessible if you want to.


>> a stateful packet filter works just as well without NAT

True, but NAT doesn't work without statefully filtering/routing packets, and unlike generic packet filters, the use of NAT is basically a requirement for most people connecting devices to the internet.

The question is: if IPv6 had been around 30 years ago and no one had ever needed to use NAT to stick a whole address space behind a single address, how would things be different today? How long would it have taken for packet filters to become a default feature on home routers, and what would their default settings be?


Routing isn't stateful at all, and NAT doesn't need a stateful filter, it just needs connection tracking (which is also needed for a stateful filter, if you have one).

How common are stateful packet filters on home routers today? I don't really know - thanks to NAT, you can get away without one for most attack scenarios nowadays, so I wouldn't be surprised if vendors don't really bother with it. But given that connection tracking doesn't seem to be that difficult with home router hardware, I would have expected stateful packet filters in home routers as a default feature early on, with everything inbound blocked by default (and then some UPnP-like protocol for opening ports as needed, just without the stupid address collisions you get with NAT).
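
(For what it's worth, the default I'd have expected is only a few rules; a sketch with iptables and invented interface names, no NAT anywhere:)

    # Stateful default-deny inbound, without NAT: outbound connections are
    # allowed, their return traffic is matched via connection tracking, and
    # unsolicited inbound packets are dropped. lan0/wan0 are placeholders.
    iptables -A FORWARD -i lan0 -o wan0 -j ACCEPT
    iptables -A FORWARD -i wan0 -o lan0 -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -i wan0 -o lan0 -j DROP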


Well, depending on the type of NAT, isn't the end effect for certain variants that non-communicating services on local ports are rendered unaddressable from the other side?

I believe that's what GP was referring to.

(Disclaimer: can't remember which variant of cone/full this is categorized as -- I thought there was a really useful "Current state of NAT in practice" blog post that was linked a few months ago)


Not really. The non-addressability comes from the non-globally-routed ("private") addresses on the "internal" network: they may prevent someone on the other side of the planet from reaching those "internal" hosts on your local network, but that's orthogonal to NAT. You can do NAT between globally routed addresses (which would thus be reachable directly ... unless there is a (stateful) firewall preventing that!). And just because those addresses aren't routed globally doesn't mean your ISP (or whoever is connected to the "outside" link of your NAT gateway) couldn't send you packets directly addressed to your internal network, which your router/NAT gateway would just forward to your local network (once again, unless you have a (stateful) firewall that blocks those packets).

Now, it so happens that dynamic NAT also needs to do connection tracking in order to map addresses back and forth, just like a stateful firewall does, and that therefore it's easy to also implement stateful packet filtering on top of the same connection tracking state. However, there is no need to do NAT in order to do the connection tracking and the filtering based on it: you could have the exact same stateful packet filtering with the exact same security properties, just without messing with the address fields of the packets and all the bad things that result from that.


From a practical standpoint, IPv6 (and the removal of NAT) can't come alone: standalone stateful firewalls need to be generalized as well, to replace the way they are embedded in the NAT implementation inside people's modems/routers nowadays.


Huh? I don't get what you are trying to say ... yes, one usually should have a stateful packet filter at the uplink, with IPv6 just as with IPv4, with NAT just as without. What is your point?


There is a difference between theory and practice: there is no specific use of stateful firewalls in today's IPv4 Mr. Nobody home router (i.e. most routers do not use the --state option of iptables). If we move to IPv6 only, ISPs need to (and will) use --state (or equivalent). The way Mr. Nobody has a sort of "stateful firewall" nowadays is actually thanks to the popular use of masquerading NAT (i.e. iptables -t nat -j MASQUERADE). So jackweirdy is kind of right: "NAT" (in practice) has become a pillar of internet security, in spite of itself.
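
(Concretely, the typical home router setup boils down to something like this; wan0 is an invented interface name, just for illustration:)

    # One masquerading rule and no explicit --state filtering: the NAT's own
    # connection tracking is what incidentally keeps unsolicited inbound
    # traffic out, because there is no mapping to rewrite it to.
    iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE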


The internet has proven to be quite resilient. So there's not much to be changed. I can think of:

* More security baked in by default.

* Quicker implementation and standardization of features the market demands -- so Java, Flash and ActiveX could be avoided.

* Lesser reliance on central nodes (nameservers, etc), with a larger emphasis on a network of peers, which dynamically grows with actual need.

* Oh, and somehow fix the domain system. I don't think domain squatters provide any value to the network that doesn't have simpler alternatives.


Honest question on your last point: what would you offer to prevent the somewhat dark market forming around the limited number of meaningful identifiers? Using non-meaningful identifiers worked with phone numbers when they were bound to a region, but the web is global, so I suppose you would not be happy with something like newprotocol://europe.germany.berlin.101010.servicename ?


I would use the GNUnet name system:

https://gnunet.org/gns-implementation


I don't think anything designed by committee could match the flexibility and resilience of the currently evolved system. The current internet works because it's been evolving in situ for 20 years, reacting to problems and vulnerabilities as they occur. Anything starting again from scratch would be broken at first and would then also need to go through a long period of evolution.


I think this is a really good point. Like the human body, the Internet has its flaws but in many ways is incredibly well suited to its purpose. An interesting article (which I'm sure many have already read) that discusses how the more organic TCP/IP won out over the incredibly organised effort of OSI can be found here: http://spectrum.ieee.org/computing/networks/osi-the-internet...


But the internet was designed by a committee... and a committee is guiding its evolution, as you call it:

http://en.wikipedia.org/wiki/Internet_Engineering_Task_Force

http://en.wikipedia.org/wiki/Internet_Technical_Committee

And I am sure DARPA and whoever worked on the original networking protocols would horrify you with their style of design by committee...


But are the things that actually got used and flourished the things that the committee expected?


The system hasn't evolved on its own, like bacteria. It has changed over time due to new systems which were designed by people.

Knowing what we know now, would those designs be different? Undoubtedly, yes. Would they likely be better? Likewise, yes.

Anyone who's worked with protocols knows just how bad the initial design usually is.


Now that on-the-fly encryption of web pages adds virtually no overhead to the loading of a website, we could build encryption right in. The SSL business is one of the main pain points on the Internet: a more decentralised system could be designed to know whether to trust remote servers or not, effectively removing the certification authorities from the equation (CAs are an — almost — single point of failure and make a lot of profit from that monopoly).

The SPDY spec and the HTTP2 draft have a lot of good ideas too regarding the implementation of the communications.


More separation between message encryption and identity verification, i.e. something better than the CA system and "untrusted certificate" warnings.

I realise these are actually SSL criticisms but oh well.


RINA is a possible solution to a number of problems.

http://csr.bu.edu/rina/about.html http://irati.eu/nextworks/

The theory has been around for ages but it's starting to gain traction with a prototype stack released recently.

http://irati.github.io/stack/


1. DNS has some problems. I dislike some of the ways ICANN works and squatting is a real problem. Right now some of the problems of DNS are getting remedied but the influx of new TLDs is causing its own problems. I think that "Internet 2.0" will consist of a broadcast oriented, more than likely free and in some way federated DNS system.

2. The HTML/CSS/JavaScript stack is a mess. HTML as a technology is a sack of warty dicks that has evolved in far too many ways to properly define its use. CSS has slowly grown to encompass all sorts of things (because your HTML at one point was supposed to be semantic), but there are still very basic things that are unreasonably hard to do with CSS that were quite easy to do the wrong way with HTML. JavaScript is a fine language, but has become a bastardized do-everything language in ECMAScript 6 for no other reason than it's the only language you can use directly in the browser.


Append only; the ideal internet would never forget. Persistent databases such as Datomic make that feasible. Websites would be views on top of those DBs. APIs would be a thing of the past as we'd just subscribe to a service's public dataset.

Homoiconic; code would be part of the internet. Views on datasets would themselves be stored in a persistent database. Clients would simply subscribe to the code feed and could update automatically, while still being able to choose to run old clients, apps or even just particular versions of functions.

Conversational; interactions between services consist of exchanges of immutable data (just like conversations). I publish some code; you publish your interactions with that code. I publish a question; you publish an answer to that question. This has the added benefit of being able to map it onto existing encrypted communication protocols like OTR.


A User Protocol (UP). It would be similar to IP but specific to each user. Basically, one user identity for all your devices and webpages. It'd be managed on the browser side and synced across devices. A user might have and use several profiles, much like how you can send emails with several "from" addresses or browse in private mode.

It would basically eliminate all the throwaway accounts we have to create nowadays, and the security risk of reusing passwords. It might even get rid of email as we know it, since you could communicate user to user.

Developments nowadays:

- Mozilla's Persona: https://login.persona.org/ getting adopted slowly.

- Facebook/twitter/linkedin/etc login apis. Not ideal since you are tied to the place.

Pros: emails and communication, p2p, payments, development time (no difficult login).

Cons: stolen id = stolen life, privacy


The "improved version of the phone network" metaphor is the exact reason why we should thank our lucky stars for the Internet we have today.

If a global architecture of this scale were to be planned out today, every nation would have its own competing standard for civilian oversight. There would be dozens of competing commercial standards...and Sprint would change theirs every 24 months.

Copyright interests, ISP billing meters, Computer A only working with Network A...all these things would be a part of an Internet designed from the ground up instead of what we have today.

The Internet certainly has its warts, but at least it's gotten us this far.


I would have gone to Berkeley and swayed the decision to use 48-bit IPv4. https://www.youtube.com/watch?v=DEEr6dT-4uQ#t=2067


(1) IPv6, and anyone implementing NAT for it should be set on fire. Or alternatively, go back in time and make the original IP 48 or 64 bit instead of 32.

(2) IPSec implementations that work well, and are used from early in its history. Encryption should be the norm, not the exception. It's arguable that if this were done right SSL, SSH, etc. would not be strictly necessary.

(3) IPv6 mobility designed in from the start too. Nobody thought mobility was going to be this common. With a mobility protocol you could have roaming IPs, which would be really valuable for a lot of situations. There are standards for this but nobody supports/uses them.

Other than that, I think the Internet got most things right the first time. Doing more in the core would be second system effect. Anything beyond what I described above should be transport or application layer.

Multicast would be nice, but I think it ought to be implemented in L3 or above (e.g. network virtualization, BitTorrent). Multicast at Internet scale baked into the core would probably be too much cost for not enough benefit... unless someone came up with a really great way to do it without introducing tons of DDOS vulnerabilities and complexity.


Bernd Paysan is trying to do this with his net2o project: http://net2o.de/


IMO the greatest change would not be in TCP/IP and below, but in HTTP and up. If you had no knowledge of the current web, and I pitched you HTML, CSS and JS, you'd probably think it was a bad idea. Why would you use interpreted scripts over binaries if speed is mission-critical? First you encapsulate your data in a markup language only to have it parsed/converted back by the receiver: ridiculous. It also lacks the semantics to make dynamic web applications readable for machines, not to mention the XSS vulnerability in the markup language that makes server-side applications unnecessarily complex. What a weak proposal; it would never work.


I agree, there aren't that many problems in IP compared to the www stack. Imagine if we had bytecode instead of JavaScript: the web would be years ahead in terms of being able to create games, video editors and other high-performance apps in the browser. JavaScript has basically become a poor substitute for an actual assembly language.


I actually disagree. I think the "view source" menu item probably helped the web evolve faster. I think the trade-off made tons of sense at the time.


Yes, but that's the thing: it does work. And anything that you created from whole cloth, empirically, would be highly unlikely to work.


I think it's pretty clear we need to separate the "document" aspects of the web from everything else. HTML and CSS are really useful technologies for dealing with text content, but they're only one tiny part of what the web is about now. Conceptually, building web apps with HTML and CSS just does not make sense, and almost necessitates using some clunky framework built on top of JS. (And I'm not JUST talking about Facebook and the like. I think most websites with a nav bar count as "web apps", in a sense.)

Perhaps this will involve building the layout and plumbing of a website using straight (sandboxed) code, and then filling in the content using the same old HTML (or Markdown) and CSS. If the site has no content — if it's an online tool like a file converter or an uploader, for example — you could omit the HTML/CSS part altogether. Think UIKit: build up your view controllers, your tables, your data sources, and your nibs, and then provide the content in the form of HTML/CSS like before. (Of course, you don't have to work like this if you don't want to. Just throw up a blank view and keep working in HTML/CSS/JS if you prefer.)

(Yes, you can create a blank page and populate it with JS today — and indeed, this is what frameworks like Ember do — but you'd still be using divs and the like to do it. I think this is backwards. We shouldn't be using a text rendering engine to produce web apps; we should be using web apps to host text rendering views. In other words, Ember should exist on the top level, not the bottom.)

One question that will let us know if we're on the right path is, "Can we make websites that feel just as good as native apps on mobile?" Not just "good enough", but so good that nobody will even think to write native apps ever again. Same responsiveness; same support for gestures and touches; no more misclicks and accidental navigation.

Fundamentally, the web is for content, so it'll always be vital to be able to do things like hyperlink to arbitrary pages and open multiple links in succession. But now that most of the web's content is inside these overarching web apps, and also that websites are being used for purposes other than content, it's important that the tools we use grow along with the web.


If you use react.js and rcss you will be able to build your whole app with just JS. Well organized and modular. No HTML, no CSS (not directly, of course; you still have to fix CSS issues).


Yeah, I added a comment about that. It's true that web frameworks essentially work like this, but I still think it's massively backwards that we're building web apps by using JS to manipulate a text rendering engine and not the other way around. In my opinion, this is one of the primary reasons why web apps never feel as good as native.

To look at it another way, this means that HTML and CSS are sort of the web's "layout and appearance bytecode", and they're pretty mismatched for that purpose.


John Day, a protocol and ARPANET developer, answered this exact question most completely and wonderfully in his book "Patterns in Network Architecture: A Return to Fundamentals" [1].

[1] http://www.amazon.com/Patterns-Network-Architecture-Fundamen...


Some form of content-centric networking ( http://en.wikipedia.org/wiki/Content_centric_networking ) would be good to help with publishing and broadcasting over the internet, and eliminate the slashdot effect / reddit hug of death.


I wasn't familiar with the term, but that looks terrifying from a net neutrality standpoint. Deep packet inspection wouldn't even be needed to sort traffic, no?

Also, making the network content-aware to a greater degree gives me nightmares of bloated XML / WS-I style schemas making their way into core specs.

That said, your points are a definite need with the size of the modern network (and the order of magnitude difference in traffic spikes). Imho, it'd be great to have a more formalized caching system, whereby the server could tag content as recommended-cache (core page content) or not (comment threads) & invalidate asynchronously if it became necessary.

I'd be interested to see what we could do with SSDs / future technology with feasible latencies to assemble such a system.

Hypothetically, there's no reason it couldn't be incentivized via a peering-style agreement: e.g. I cached and served X amount of traffic that you didn't have to, therefore you'll pay me Y% of the cost of you serving that traffic yourself (where Y < 100% & it still makes sense for both parties).


So much was got right the first time round that I would assume we would fail-whale if we tried to redo it. In the end the internet was amenable to evolution, and that is how it should proceed, from where it is now.

I know it's not in the spirit of your question - but everyone else seems to have answered those well (DNS, https, routing at AN)


Not sure about web vs internet, but content-addressable sounds like it could potentially be a big win. Particularly if you factor in encryption, it would be pretty great if static assets + signatures could be more easily + globally cached, while encrypted personal content could go in separate packets closer to what we currently do.


- the URL notation <proto>://<bla> should lose the //

- changing the IP address or having other temporary interruptions (e.g. train in tunnel) should not force me to re-open all my SSH sessions

I have no idea what that means from a technical POV though.


Tim Berners-Lee says the double slashes were a mistake and should have been left out. http://www.dailymail.co.uk/sciencetech/article-1220286/Sir-T...


But the // is useful.

http://stackoverflow.com/a/2216721


Yes, that's a good point: protocol-relative links are useful, but IMHO they don't necessarily depend on the "//". If URLs looked like "http:google.com/bla", a different syntax would have been invented instead, e.g. ":google.com/bla".


SSH should handle temporary interruptions today, but for changing IP addresses, check out mosh! https://mosh.mit.edu/
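
(Minimal usage, with placeholder host and port, looks like this:)

    # mosh logs in over SSH, then switches to its own UDP-based protocol
    # that survives IP changes and sleep/resume.
    mosh user@example.com
    # if sshd listens on a non-standard port:
    mosh --ssh="ssh -p 2222" user@example.com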


And thus you earned my highly prized "Today's Most Valuable Posting" badge. Do you have any experience with it? Gotchas, potential security issues etc?


Mosh is great, seriously. It just works. Of course it's not nearly as battle-tested as SSH, but in practice it works very well.

I used it a lot when I ran an IRC client (I use IRCCloud now).


In addition to RINA (mentioned below), also check out http://www.ccnx.org/


No ads.

Instead, an easy-to-use and affordable micropayment system, something like flattr (but with an option for paid-only content), that can be used by Internet users to pay for content, maybe as a part of the ISP subscription.

The cynic in me envisions a world full of pay-to-win online games, but I suppose people will learn about them and they will vanish sooner or later anyway.


I actually don't mind ads anymore - more specifically, targeted ads - as long as they don't auto-play sound or get in the way. I'd rather see a few non-intrusive ads relevant to my interests, instead of paying for content myself.

I used to find ads annoying and intrusive. Maybe I'm becoming subliminally subservient to my advertising overlords, but I like to think it's the targeting that makes a difference. Also, the fact that they've become less intrusive, and I have the option to skip ads not relevant to my interests.


For a lot of people in my field (comics in particular, visual arts in general), Patreon is filling that need quite well. I'm now getting about $400/mo for drawing my comic, mostly from people paying a few pennies a page. People who have larger fan bases are making several thousand a month.

It is stupidly easy to use. Make a video pitch and a text pitch, then put a link to it on every page of your content (whether by tweaking your CMS templates or pasting the same text into every post on a site you don't control), and forget about it beyond posting a copy of the payable works you make to it.


The IP address of a computer should never be known. It should be impossible to ping a computer directly. DNS servers should be the proxies that get requests and fetch data on behalf of a client. They cache and mask DoS attacks. Clients and user-facing machines only communicate with proxies.


This "solution" is about on the same level as saying that you just have to pour more stronger lasers into a bigger thicker metal cage that contains hydrogen in order to obtain fusion reactors. That won't get you the pressure and temperatures you want yet? MOAR LAZOR!


What about M2M communication? Or server initiated communication (something like LWM2M)? How would you get around having an IP address there?


No browsers. Just a package manager and applications for different things.


Urbit is trying to answer this question. Have a look http://alexkrupp.typepad.com/sensemaking/2013/12/a-brief-int....


1) Sessionless web 2) BaseN (quantum) 3) <>Internet


Just one thing: no JavaScript, pretty please :-)


Or perhaps just a more mature Javascript?


I think I could probably live with something like TypeScript, but really, between JS and me it's a purely subjective matter.

I'm sure if I put my heart into it I would come to love JS (you know, like Stockholm syndrome ;-)


Why?


I'm a desktop and server developer. I have excellent tooling and a top notch debugger which make my job easy.

When I have to do front-end web dev, I feel like getting out of my high-end sedan and riding a skateboard: skateboarding is certainly a lot of fun for the cool kids, but I'd rather sit comfortably in my sedan.

OK ... now I guess I just have to ask you to get off my lawn :-)


The web would be semantic.


QML EVERYTHING



