Ask HN: You have one shot to redesign the Internet – what do you change?
172 points by lowercasename on June 28, 2021 | 308 comments
This was just an idle conversation we were having at work. Imagine that one day you wake up and you've been sent back in time, where you are now a researcher at DARPA in the early 1960s. You've got the influence to effect fundamental changes in the next sixty years of the Internet's history, and can make your changes any time in the next sixty years - but you know that as soon as you change one thing in history, you'll be sent back to 2021, to continue living in the world you have wrought.

How are you going to make the Internet better?




Changes I would have made in the early days:

- 48-bit static IP addresses. 70 trillion should be enough. 128 bits was overkill.

- Nodes, not interfaces, have IP addresses, so you can use multiple paths.

- IPSEC available but initially optional.

- Explicit congestion notification, so packet loss and congestion loss can be distinguished.

- Everything on the wire is little-endian, byte oriented, and two's complement.

- You can validate a source IP address by pinging it with a random number. If you don't get a valid reply, the IP address is fake. Routers do this the first time they hear from a new address, as a form of egress filtering. This contains DDOS attacks. (A rough sketch of this exchange appears after this list.)

- Routers will accept a "shut up" request. If A wants to block B, it sends to a router on the path, the router pings A to validate the source, and then blocks traffic from B to A for a few minutes. This also contains DDOS attacks. Routers can forward "shut up" requests to the next router in the path, for further containment.

- Fair queuing at choke points where bandwidth out is much less than bandwidth in.

- Explicit quality of service. At a higher quality of service, your packets get through faster, but you can't send as many per unit time.

- No delayed ACKs in TCP.

- Fast connection reuse in TCP.

- Mail is not forwarded. Mail is done with an end to end connection. Mail to offline nodes may be resent later, but the sender handles that. Mail, instant messaging, and notifications are the same thing. Spam is still possible but hard to anonymize. If you want your mail buffered, use an IMAP server at the receive end.

- One to many messaging uses a combination of RSS and notifications.

- Something like Gopher should be available early. The Web would not have fit on early machines, but Gopher would.
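
A minimal sketch of the source-validation ping, in Python pseudocode; the control-message format and the function names are made up purely for illustration:

    import secrets

    pending = {}   # src_addr -> expected cookie, kept by the router

    def challenge(src_addr, send):
        # Router side: the first time traffic arrives from src_addr,
        # ping it with a random number (the cookie).
        cookie = secrets.token_bytes(8)
        pending[src_addr] = cookie
        send(src_addr, b"VALIDATE" + cookie)     # hypothetical control message

    def on_reply(src_addr, echoed_cookie):
        # Only a host that really receives packets at src_addr can echo the
        # cookie, so a spoofed source address never validates.
        return pending.pop(src_addr, None) == echoed_cookie

    def on_validate(cookie, reply):
        # Sender side: just echo the cookie back unchanged.
        reply(cookie)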


> 48-bit static IP addresses. 70 trillion should be enough. 128 bits was overkill

The least significant 64 bits in IPv6 are used for link-layer addressing, which is how IPv6 supplants older L2 protocols (ARP etc) with NDP etc, thereby fixing (or at least improving) some significant scalability/reliability issues that larger subnets had under IPv4.


Right. The reference IPv6 packet header is only twice as long as that for IPv4 (320 bits vs 160).

What is crucial, combined with darkr's point that IPv6 takes over the link layer and so handles more of the network architecture, is that IPv6 headers are much simpler: only 8 fields, with the 6 non-address fields all in the first 64 bits, whereas IPv4 has 14 fields, with the 12 non-address fields taking 96 bits. The IPv6 fields are also better designed, with attention to what routers find most important. This simplicity gives routing subsystems more flexibility in how they can be put together and handle their workloads.
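
For concreteness, a rough sketch of the two fixed headers (assuming no IPv4 options and no IPv6 extension headers), using Python's struct module just to show the field layout and sizes; fields that share a byte are packed together:

    import struct

    # IPv6: 8 fields, the 6 non-address fields in the first 64 bits.
    # version/traffic class/flow label (32 bits combined), payload length (16),
    # next header (8), hop limit (8), then 128-bit source and destination.
    ipv6_header = struct.pack("!IHBB16s16s",
        (6 << 28), 0, 59, 64, b"\x00" * 16, b"\x00" * 16)
    assert len(ipv6_header) == 40   # 320 bits

    # IPv4: 14 fields counting options; the 12 non-address fields take 96 bits.
    # version+IHL (8), DSCP/ECN (8), total length (16), identification (16),
    # flags+fragment offset (16), TTL (8), protocol (8), checksum (16),
    # then 32-bit source and destination.
    ipv4_header = struct.pack("!BBHHHBBH4s4s",
        (4 << 4) | 5, 0, 20, 0, 0, 64, 6, 0, b"\x00" * 4, b"\x00" * 4)
    assert len(ipv4_header) == 20   # 160 bits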


This is a great list. I don't think IPSEC would have been a win; it's not a great protocol just as a cryptographic transport, and I think one thing we've learned over the last 30 years is that the end-to-end argument applies especially well to cryptographic security (just because there are so many different service models you can want, and no one design that serves all of them).


> Mail is not forwarded

Go one further, mail is pulled, not sent. One such approach is https://tonsky.me/blog/streams/. It'd be hard to prevent moats and people might still coalesce into a few popular middlemen, but at least spam would be reduced. Spam is the primary reason client-side/self-hosted mail is left to the tech-savvy.


I prefer the "headers are sent" but "messages are pulled" variation. Like downloading from NNTP.



> Routers will accept a "shut up" request

This shifts the DoS threat from servers to clients. How does the router know that the request from A to block B is legit? Just because the router can ping A? Behold, a system where you can shut down your enemies for good by paying botnets to send requests to block B.


So many conversations like this wouldn't have existed if the internet had been designed for zero-trust from day one. Imagine if all icmp / igmp had the possibility to sign requests.


It's implied that the block is ONLY for traffic TO A FROM B, via R (the router). R might get the chance to aggregate requests, maybe based on owning delivery paths that are valid?

Full quote: "- Routers will accept a "shut up" request. If A wants to block B, it sends to a router on the path, the router pings A to validate the source, and then blocks traffic from B to A for a few minutes. This also contains DDOS attacks. Routers can forward "shut up" requests to the next router in the path, for further containment."


This is slightly disingenuous. If I'm a router manufacturer, and there's a protocol that will tell me "peers such-and-such are troublemakers", and I get twenty such notifications from downstream that peer Foo is being a troublemaker, then I can automatically block all traffic from peer Foo (forget about what the protocol actually intended) and actually sell that as a security feature.


> Nodes, not interfaces, have IP addresses, so you can use multiple paths.

Operating systems would quickly develop "multi noding": binding multiple node identities to a host, and then partitioning those among the interfaces.

Separate IPs for different adapters are a good idea, and needed for NAT: e.g. a home router being 192.168.1.1 inward-facing while having some external IP.


> 48-bit static IP addresses. 70 trillion should be enough. 128 bits was overkill.

What if humanity becomes a Kardashev type-2 or an interstellar civilization? In that case the population of humans, much less computers, could easily exceed 70 trillion.


It is unlikely that we will be able to use IP as a way to create a single network between Earth and Mars much less outside the solar system.


not unlikely that we will have ip-islands connected by some DTN/sneakernet though.


"- Mail is not forwarded."

I experimented with this one over 10 years ago, pre-Wireguard. The idea was to first establish a peer-to-peer connection, e.g., via L2 overlay, then have each peer run their own smtpd. A supernode runs on a publicly reachable server at a hosting company and is only necessary for establishing a peer-to-peer connection, not for routing traffic. Each peer has an Ethernet interface, e.g., a tap device with a private address for the L2 overlay, and each peer runs their own smtpd.

As for spam, the trick is to limit the size of the overlay network. Users might belong to several L2 networks, e.g., work, home, school, etc. If the networks are kept small and the peers do not give out their email addresses to people not on the network, then it's possible to have a small, spam-free email network among the peers. If end-to-end email on small disparate networks became the norm, spammers would have to find all these small overlay networks and infiltrate every one.

As I understand it, this was the original intended design of email: smtpd's directly communicating with each other. This still happens but users do not run smtpd's. Instead they were asked to run pop clients.


> Routers will accept a "shut up" request. If A wants to block B, it sends to a router on the path, the router pings A to validate the source, and then blocks traffic from B to A for a few minutes. This also contains DDOS attacks. Routers can forward "shut up" requests to the next router in the path, for further containment.

This is an invitation to silence unwanted users by certain governments.


You might have missed the "from B to A" in the parent comment which is quite important. Moreover governments have other (legal) ways to silence users.


> Nodes, not interfaces, have IP addresses, so you can use multiple paths.

Isn't this difference academic/software? What is the crucial point here that can no longer be implemented as an abstraction? (Not a networking specialist so this might just be my ignorance speaking.)

> You can validate a source IP address by pinging it with a random number.

Good idea, but I suspect this is the type of feature that might quickly fall prey to implementation laziness/incompetence; in other words, enough implementers would neither honor it nor rely on it, and it would become unreliable.

> Routers will accept a "shut up" request

I like this idea but couldn't this system be abused by a malicious man in the middle to much more easily hinder connectivity to a targeted system? The man in the middle can fake the validation too...

> Mail is not forwarded

I don't think this is a good idea. People would quickly invent a forwarding protocol if one didn't exist. I mean, I see what you're trying to do here, but I don't think you would succeed on this one.


Re: mail - I believe what gp is saying is that mail will be sent directly - made possible by the larger IP address space. Once it’s on your device you can copy and move it wherever. This would actually match the original design of email before it became the case where everything was via pop and imap. Of course the biggest flaw with email was not expecting spam. My change would be accepting messages only from known senders or those who have done the needed “work” https://www.w3.org/2003/10/acquaintance-protocol/


>> Routers will accept a "shut up" request

> I like this idea but couldn't this system be abused by a malicious man in the middle to much more easily hinder connectivity to a targeted system? The man in the middle can fake the validation too...

I think of it more as an "I'm just ignoring these, you might as well stop sending them" request. After all, a man-in-the-middle can already just drop the packets.


> I think of it more as an "I'm just ignoring these, you might as well stop sending them" request. After all, a man-in-the-middle can already just drop the packets.

Some MITM can only inspect and forge traffic, not drop it (e.g. NSA tapping fiber)


To be able to make these suggestions means you must have significant experience and knowledge of the internet backbone. Super impressive.


> Nodes, not interfaces, have IP addresses, so you can use multiple paths.

SCTP has multi-homing, which gives you multiple paths, no?


> 48-bit static IP addresses

a static IP per device is google and facebook's wet dream. yeah apple, keep your IDFA for yourself.

also, imagine getting banned and not being able to use google anymore, at all. asking your kids to google something for you and getting them banned too.


In a great many cases, malware can grab your MAC address now. Which is, as it happens, a 48-bit globally unique address.

IPv6 addresses are a major PITA, not only are they too long, they're impossible to write and get even most technically competent people to understand. They're almost as hideous and unusable as x.400 email addresses...


I'd include some extra bits in IP addresses to eliminate NAT


The most basic problems seem to be:

1. Everything being 'free' by default drives us to ad-supported centralized services. Economics aren't a separable concern.

2. Too few IP addresses. (At least one of the pioneers, I forget which, said he pushed for longer addresses but was overruled. So the technical constraints probably did not force this.)

I'm not sure how to fix #1, but here's an approach from the 90s: https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16....


> 1. Everything being 'free' by default drives us to ad-supported centralized services. Economics aren't a separable concern.

I hate this argument. I've been online for a long time and the internet existed just fine without, for example, facebook. We don't have to accept ads but we do have to be willing to not use certain things out there.


Indeed, on the surface the argument is sound, but when you look back into the past, it's clear that's not how things have to work. The web can exist even without pervasive surveillance. It would be different, with no huge ad companies growing like cancer, but that would probably be good for information quality and communication freedom on the web.


Blockchains right now mostly are trying to do exactly this - bring economics into computation and network space. This is of course a higher level solution, but the one that is quite achievable now. Protocols like NEAR and Internet Computer can be good examples of this.

I know this is a holy-war topic and many would disagree simply from seeing the word `blockchain` used here. But to my mind this is exactly the place where this technology is beneficial. We need an automated, algorithm-controlled currency to have this type of smart internet subscription.


Yes, I only left this out because it was over the horizon at the time. I did find these papers inspiring in the 90s, and as with the Digital Silk Road paper I linked above, the ideas could have been thought of earlier (e.g. capability security goes back to the 60s): http://www.erights.org/talks/agoric/index.html


I think this is a problem with the World Wide Web, not the Internet. At the network level, per-user cost accounting and quality of service is implemented quite well, to the point of being able to pay per byte if you really want to. And access to any network beyond your own LAN is not free.

The problem of how to fund actual web application development is not much different from the problem of how to fund media development in general. Newspapers, television, and magazines alike long ago settled on some mix of premium-tier subscription services augmenting a more open, ad-funded free tier.

This is so much more of an issue today than 30 years ago not because of anything specifically about ads to fund media, but because ad profitability has been driven sky-high by individual consumer profiling that relies upon privacy invasion and surveillance of your customers. Ads back in the day would improve via voluntary focus groups and that was fine. Today, they improve by tracking every digital action a person ever takes in order to more accurately correlate interacting with an ad with making a future purchase. Volunteers for focus groups are pretty much okay with giving their opinions in return for cash. Every person on the planet is not nearly as okay with having every digital action they ever take recorded and analyzed.


In the alternate history I'm imagining, with network-layer accounting and payments from the beginning, flooding and spam were never much of a problem -- they were at most a "nice problem to have". Endpoint security became a priority from the beginning, because failures that cost money (even if not much per computer) get treated that way. Participating in the network as a full peer, e.g. running the equivalent of a webserver on your home PC, was considerably more practical and normalized. (The bigger address space of course went into this too.) And this history made other peer-to-peer software easier to develop and more practical and economically sustainable, in a virtuous cycle.

This doesn't mean it'd kill advertising. It wouldn't even kill all of the forces encouraging me to put my 'content' on centralized services even though I'm nothing like a media corp. But I suspect the right changes could've helped a lot towards a healthier computational ecosystem now.

This early architectural decision that costs are out of band didn't fundamentally make anything impossible. But when you see a lower-level problem addressed by higher-level workarounds, and you're getting to redesign the system, isn't that exactly what constitutes an opportunity?


I might have read somewhere that the original design of HTTP allowed for transactions to occur. This free point is a big one. Once Apple set the price of 0.99 for an app it really dropped the market price for software. It's funny that there is so little difference between enterprise software and consumer apps except for a monstrous difference in price point. Funny thing is, consumer apps are sometimes better.


The 402 HTTP status code is "Payment Required". At one time people envisioned micropayments etc but it never went anywhere.
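
A minimal sketch of what a 402-gated endpoint could look like with Python's standard library; the payment-token header is entirely hypothetical, since no such scheme was ever standardized:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PaywalledHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # "X-Payment-Token" is a made-up header standing in for whatever
            # micropayment proof a real scheme would have defined.
            if "X-Payment-Token" not in self.headers:
                self.send_response(402)           # Payment Required
                self.end_headers()
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"paid content")

    # HTTPServer(("", 8402), PaywalledHandler).serve_forever()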


Transaction fees are what kill this. That and moral police / "fraud" charge-backs caused by insufficient user authentication.

I also don't want to be nickel-and-dimed to death; given the transaction fees there's a huge push to subscriptions and other models. Whereas if I could instead just pay exactly what a generic ad I'd _never_ click on anyway pays to get placed (like 0.00001 dollars for all the ads on a webpage stuffed with them), I might actually pay instead of using adblock for all of the security, usability, and bandwidth-saving reasons.


There have been many micropayments schemes over the years. IIRC, there was a decent one slightly cleverly called "Millicent" developed by DEC and pushed in the early Internet days... (Then there were the various pre-bitcoin internet banks/e-cash outfits - First Virtual, eGold, etc....)


IPv4 addresses fit in a standard 4-byte integer; that might be a reason.

Quite frankly, if they didn't allocate full /24 to single entities (including localhost...), we might still have enough addresses left.


Nah, if you sort by date here you can see how quickly IANA was doling out /8's to the RIRs up until they ran out: https://www.iana.org/assignments/ipv4-address-space/ipv4-add...

Even if we hadn't wasted /8s on huge allocations to single companies, and things like 240.0.0.0/4, we'd still be basically where we are today, with v4 address space being scarce and traded as a commodity.


What's wrong with it being traded as a commodity?


>we might still have enough addresses left.

For how long?


Isn't it "127.0.0.1/8" which is allocated, not "127.0.0.1/24"?


yes, according to wikipedia[1]

IPv4 is just inefficiently allocated in general. Why does the world need 10.0.0.0/8 in addition to 192.168.0.0/16? Isn't 65k addresses enough? Is there a private organization in need of 16 million addresses?

Not to mention AMPRNet (amateur radio) owned the entire 44.0.0.0/8 up until 2019 (a portion was sold off to Amazon). That may have seemed reasonable in 1980 but now it's just plain crazy.

[1] https://en.wikipedia.org/wiki/Reserved_IP_addresses


> Is there a private organization in need of 16 million addresses?

Plenty of mobile phone networks have well over 16 million subscribers, and they typically don't have enough public IPv4 addresses for everyone. This has led to really hacky stuff, like using DOD IP ranges as pseudo-private space, or re-using private IP addresses in different regions (which can't be fun to maintain.)

Some networks have fixed the problem by using NAT64 - forgoing IPv4 altogether internally, and translating to public v4 at the edge. (Works surprisingly well, T-Mobile US has been doing it for the better part of a decade.)


It's not about number of addresses available, it's about reflection of organization hierarchy in the octets. The 10.0.0.0/8 range is useful for breaking a distributed company's network up so each major office gets a 10.x.0.0/16, and each department a 10.x.y.0/24.

This is why 192.168.0.0/16 is often used for services like libvirtd, kubernetes and docker. And the use of the range by those services makes it even more unwieldy to try and put some other LAN in there.

You can work around these considerations if you want, but many people won't. When you're the network engineer responsible for designing a company's networks, you'll be wanting to keep things simple and robust. When you're called in at 3am on a Sunday because the network is down, you better hope your ability to recover doesn't require making a bunch of subnet calculations because you decided to try and use the pool of available IP addresses efficiently.
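
A toy version of that kind of plan (the office and department numbers here are made up for illustration):

    import ipaddress

    def department_subnet(office: int, department: int) -> ipaddress.IPv4Network:
        # Office n gets 10.n.0.0/16, department m within it gets 10.n.m.0/24.
        return ipaddress.ip_network(f"10.{office}.{department}.0/24")

    net = department_subnet(3, 7)
    assert net.subnet_of(ipaddress.ip_network("10.0.0.0/8"))
    assert net.num_addresses == 256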


Honestly, just one thing, and I only need to go back to the early 90s.

Just before the Internet was opened to commercial use in the mid 90s, would've made a perpetual prohibition of advertising over the Internet. Ads are what have ruined everything.

Take just about anything unpleasant about the Internet today and it is either directly a consequence of ads, or an indirect consequence of someone trying hard to make you see ads.


Ads got us here to begin with. Without ads the internet would have been only for the rich.

There would also have been no google, no amazon, no android, no kindles. I'd still be ordering everything from the Sears Catalogue, waiting 6-8 weeks for it to arrive.


And how do you know this. Also, you are referring to the web, not the internet. I have been using the internet since the web was opened to the public. It was already growing like crazy without ads. You make some very strange assumptions. By all means, find the silver linings, but there is no basis for making a causal connection between computer networks and advertising.

Ads are not why we have conveniences. Networking technology, continually decreasing in cost, is the reason. There is no inherent connection between computer networks and advertising. The connection exists in your mind because computer network technology is dirt cheap to produce and people have resorted to sketchy behaviour to try to make money from it. [1] It would still be dirt cheap without the sketchy behaviour.

The argument that the internet would only be for "the rich" is laughable. It must come out of a Big Tech PR department. Sounds like what Facebook likes to say to the press in response to criticism. Yeah, the internet is for the rich: the people who started a small number of oversized, overtrafficked websites and will use whatever means necessary to stay on top. There is nothing egalitarian about today's internet.

[1] The number of people who became wealthy from this behaviour is relatively small, but news of their "success" is widely disseminated.


I think what they are getting at are loss-leading products they fear wouldn't have been made in that timeline.


Of course there would be amazon (widespread e-commerce, if not literally amazon.com), android, kindle. None of these are ad-driven products.

google as it exists today, an advertising company hoovering up private data while masquerading as various services, all in the name of pushing more ads, is not a net positive to the Internet. Having it not exist would be wonderful in this alternate universe.

Of course there would be search engines though. There were plenty of search engines before google, before ads, and those would've evolved along a different path from where they were in the mid 90s.


Email, uucp, nntp, and irc were basically funded by the users of the systems since it was primarily P2P systems operated by the ISPs. Most/many ISPs had local ftp mirrors of useful software, and of course provided the first http hosting.

The Internet and the web could have grown in this way indefinitely with large e-commerce sites paying their ISPs proportionally more for traffic but reaping the benefits of not maintaining brick and mortar storefronts.

The first search engines had no ads. They were public services to a large extent. Modern Google is a distributed crawler/indexer and inverted search tree for queries (last I heard). Something quite similar could have been built and sharded across ISPs.

DNS is still ad-free, supported only by the ISPs and Registrars that benefit from it. So is email. The PKI is free again. BitTorrent or ipfs could localize the costs of content distribution if media companies weren't bamboozled by DRM.


> There would also have been no google, no amazon, no android, no kindles

Oh no!

Anyway...

Sears could still have their catalogue on the internet. Just because something is online does not mean that it needs advertisements everywhere. It's about time we realize that.


I don't think so. Everybody pays for their internet connection and always has. News on paper has always been supported by ads, so it's fair if news on the internet has ads too. Amazon sells things, they would still sell them. Phone makers sell phones, and we had Symbian before Android. Wikipedia has no ads.

But I don't see how we could actually forbid ads on the internet and the legal basis for it.


No Amazon, really? Amazon ships physical goods to your house in exchange for money--they aren't dependent on advertising.

Even Google landed on advertising as a monetization technique but existed before it and easily could have been a sustainable business without it (though not the behemoth it has become).


Not the amazon of today at least.

I had supposed that without ads you wouldn't have had the dotcom boom of the 90's. No dot-com boom means venture capital wouldn't have flowed as readily. Amazon still would have started, but they burned a ton of cash in the late 90's. Without burning so much capital I don't believe they would have become as huge, or that they would have even survived.

I remember old amazon being cheaper than retail most of the time, they must have been eating that loss. In the end it worked out for them.


Google existed before ads but had no revenue. Ads are the only thing that kept it alive other than investor money.


I might have overstated a bit. Ads were obviously a sufficient revenue source, but they provide far more revenue than is necessary for the core search engine. It's possible some other revenue source might have been sufficient to support the search engine, even if not as lucrative.


It'd be kind of cool to have a search engine only for pages without advertising. Kind of like Wiby, but specifically geared for that.


It's interesting, I wonder if the internet would have been successful at all without something to monetize? Yeah, e-commerce probably would have been doable, but it would have been like the dotcom era - generic domain names running ads to try to get you to walk over to a desktop computer and type in the domain name.

Digital communities would have been really hard - a lot of online forums, I know, depend(ed) on advertising to pay for the then-expensive hosting costs. I'm a believer that Facebook now is long past its prime, but I remember how exciting it was in the early days when it was connecting me with family members and friends. Advertising is how all of those things were able to become accessible to everyone.

Maybe a better solution would have been to ban personalized advertising? A lot of the really gross behaviors with ad exchanges, data brokers, ISP interception, etc. are all to allow for highly targeted and personalized ads. Maybe you could only target ads towards "interests", which could be set at a browser level, and a user could configure if they wanted to see more relevant ads.


> without something to monetize?

Not without something to monetize. Just no advertising.

Commerce sites would exist largely as-today in this alternate universe.

Early on there was a lot of interest in micropayments for funding sites with interesting content. That pretty much all died because it was a harder problem to kickstart than just stuffing every page full of ads.

But, with ads prohibited, the Internet would've evolved differently and we'd probably have a very convenient way to pay some fraction of a penny to any site we frequent, all with zero ads and none of the toxic consequences of forcing people to view ads.


There were thriving communities like MUDs and IRC and Usenet before e-commerce and corporations moved in.


Absolutely true, but I would say that MUDs and IRC and Usenet are the definition of pre-Internet boom culture. It was a big deal when my parents were able to get online and use a browser to engage with other people. My dad could have been playing chess online via IRC and email for years, but it wasn't until he had a browser that it was within his technical comfort level.


E-commerce existed before MUDs and IRC and was contemporaneous with Usenet. Online Services like Compuserve, Prodigy, and Qlink (which became AOL) allowed people to buy and trade products like you do on E-bay.


    Take just about anything unpleasant about the 
    Internet today and it is either directly a 
    consequence of ads, or an indirect consequence 
    of someone trying hard to make you see ads. 
I largely agree but would solve it a slightly different way.

The ad-supported model became the overwhelming default because payments/micropayments were hard and scary so we spent a whole bunch of years conditioning users to believe everything on the internet had to be free.

Payments/micropayments still are hard/scary to a large extent. Large swathes of internet users literally would not dream of paying for content or community membership. Thus, nearly everything online remains riddled with ads or the consequences thereof.

So rather than banning ads, I wish we'd somehow made payments/micropayments easy, safe, and transparent and baked that right into the internet.

Think about how Patreon and Kickstarter (which are not without their flaws) have allowed people to support creators more or less directly. Now, imagine if we'd somehow baked something like that in to the internet itself.


Does this include laws about the internet? Because I think if the Computer Fraud and Abuse Act were altered somehow sufficiently that aaronsw hadn't been arrested, I'd prefer that future.


I can't immediately see a law under which he wouldn't have been arrested; that seems more like a matter of interpretation and zeal, given that what he was doing does seem like it had to be illegal, just not subject to such outrageous punishment.


Honestly most of it is pretty great. Folks who want to give it better, more reliable performance end up reinventing circuit switching, and anything involving security is difficult to solve at the IP layer.

The last big things to secure are DNS (can be done with DNSSEC), and possibly somehow mandate TLS for connections (although you definitely don't want that all the time).

One big glaring problem is BGP, which we don't really have an answer for. Whereas "just use DNSSEC" pretty much solves the last big security hole above, BGP is still difficult because you basically have to have a system to attest the path for each BGP node. AS1 can't say "I have a path of length 5 through AS2 AS3 AS4 AS5 AS6 to AS6" unless that message can be attested to by each node, but then this runs into a bootstrapping problem (e.g. how do you reach those ASes to get some sort of key without going through AS2 first?) or trusting some authority as we do for ssl certs. God knows the first thing I do on any fresh install is uninstall those root certs from any sketchy government I don't trust.

Having worked on SDN in its heyday for some of the big players in the space, there are definitely good ideas in the space, but getting to adoption is damn difficult, bordering on impossible. I don't know what it will take to oust BGP, so we're kinda stuck with it for the foreseeable future.


> One big glaring problem is BGP,

I see what you did there :D


There are only about 100k ASNs. It would be fairly easy to solve the bootstrapping problem by just preloading all the keys.
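
Back-of-envelope, assuming something like one 32-byte public key per ASN:

    # ~100k ASNs, one Ed25519-sized public key each: a few megabytes, easily preloaded.
    asns = 100_000
    key_bytes = 32
    print(asns * key_bytes / 1e6, "MB")   # 3.2 MB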


Yes that isn't too many, but right now any device can just hop on a network and start communicating (after DHCP). Not only that, but they can start communicating with any public IP address.

If people decided to take this away by adding some security directly into the IP layer (i.e. such that communicating without it is impossible, such as mandatory IPsec), I don't think the tradeoff would be worth it. Now you would have to manage all the normal stuff that comes with keys (e.g. expiration and renewal), and you may find your device gets wedged if you don't do the delicate key expiry dance correctly (i.e. you can't even connect to the site to get updated keys).

It's very easy to say "those DARPA morons not designing security was a big mistake!", but I am not convinced that the tradeoffs of solving it at the internet level (i.e. L4 and down) are worth the bootstrapping / flexibility hits.


I'd make payments a prominent, application-native, easy to use feature from Day 1, right down in the network protocol level. 100% of the transaction would be transferred from the user to the service owner without a middle-man service taking a cut (payment gateway fees notwithstanding). That way there wouldn't be any need to rely on advertising, app stores, or external subscriptions to run something on the internet.


Observation: That's how https://en.wikipedia.org/wiki/Minitel flourished and then stagnated and ultimately died when facing competition from the more "free" Internet. (Still upvoted you.)


Today I learned. That sounds like a good system. It's a shame it didn't lead to a more payments-oriented internet.


That would have just made the internet inaccessible to the poor.

I remember my dad scraping together enough money to buy us a computer in '97 with unlimited internet (aol was still selling the internet by the minute back then). As a kid who hardly ever had enough money to buy a comic, let alone a magazine, and whose local library was lacking, it was life changing being able to just look up and read about anything for free.


The price of giving free access to everyone has been very high though. We continue to pay with our eroded privacy, personal information that's frequently breached, the cost of advertising that's passed on through product pricing, the cost of giving up so much control of the internet to three or four giant companies, the sheer cost of fighting against advertising in our browsers and so on. We're practically held hostage by advertising - a downturn in ad spending leads to content and publishing companies closing down. The cost of free is strangely expensive.

Maybe it would be better to fund more useful things, like sites that give children access to knowledge through direct contributions. Like how Wikipedia is funded.


> That would have just made the internet inaccessible to the poor.

We have that problem now, apparently. Every consent decree that happens to some monster ISP includes a "low cost service for poor people" mandate.


Ted Nelson envisaged micro-payments as part of the underlying platform (he invented … waves vaguely … all of this)


I'm a huge Ted Nelson fan, but what we have is very different from what he's envisioned over the years.

(Even Xanadu is very arguably not a single concept - it's evolved significantly over time, and although Ted doesn't miss much, the power and development pace of personal/portable computers and smartphones seems to have surprised even him - we don't see many roadside Xanadu stands with the big flaming X...)


(Agree, that’s why I only waved vaguely, lol. Ted is full on.)


Javascript. Not to remove it, but to make it better from the start. Current JS feels painful to me because it feels like piling hacks on top of a poorly-designed language to make it palatable. Doing it better from the start sounds amazing.


The thing I'm most offended by: just about every generation has now dumped its favorite interface pattern onto it, mostly reused from the fancy language of the day. It has become a Frankenstein language without internal strictness or attitude (which had once been a major trait). I guess it's the language most disrespected by its developers.


I'd give it a better name, which does not involve the word "java."


And ECMAscript still sounds like a disease.


Ecma balls


let's just pick a letter and it's fine.

escript cscript mscript ascript


Let's face it, it would end up as "JScript". What we now know as "JScript" would probably become "MScript" or something.


The original name was LiveScript…


But it's such a great litmus test to instantly identify the people that don't know what the hell they're talking about ;)


It would have been nice if the original idea to have the web support multiple scripting languages had panned out. That's not a design problem, though - some form of code running in the browser would have been inevitable, and it was considered as a possibility from the beginning - but it seems inevitable in hindsight that one language would have won out and we would have all been stuck with whatever quirks and peccadilloes that language had.

Maybe if something like webassembly had come along earlier, we could have avoided all that "javascript as bytecode" nonsense.


I wish Brendan Eich's original intention to put Scheme "in the browser"[1] had succeeded. There was never any reason to invent a new browser language when perfectly good ones already existed.

[1] https://books.google.com/books?id=3b40AwAAQBAJ&pg=PT32#v=one...


I'm OP, and this was in fact my response to my question! Everyone else is suggesting these wonderful grandiose economic and social changes, and I just want a strongly typed (or, hell, even a little bit typed) Internet...


Honestly I think it couldn’t have gone a lot better...

The worst that happened to the internet is Google becoming evil. The internet circa 2000 was mind blowing.


It was mind blowing indeed. It was such an adventure building stuff on the web back then. We didn't have the choice we have today. I don't think Google became "evil". It just realised (or remembered) it's a business.


> It just realised (or remembered) it's a business.

Exactly, it became evil!


99.99% of businesses are NOT evil. But Google most certainly IS actively evil - perhaps the most evil of an increasingly evil lot. ...and I don't use the word evil lightly.


I strongly believe between evil and stupidity/incompetence the latter wins, always.


Just curious, how old were you circa 2000?


Not GP but I was 40. You?


18. I’m guessing GP was about the same age, give or take. Kind of like how SNL peaked when everyone was 16 and has been downhill ever since.


Spot on


I'd advocate building the ARPANET for an untrusted environment, with the expectation that it would be unleashed on the masses at some later point.


This. I think there are so many changes that I'd like to make, but a lot of them stem from switching from everything being public and out there to being private and with restricted access. I would only hope that assuming things would be untrusted from the beginning would lead to a more securely designed web from day one. I'm sure that's wishful thinking, but still.


TL;DR: Constants shouldn't be hard coded in the software, they should always be in a separate config

The problem is that it's the 60s. My first thought was "security", but unless you can also teach them about elliptic curves, they're going to use the security of the 1960s, which as we now know isn't very secure.

Maybe at least having security baked in would help make it easier to switch to better security later, like how ssh can use different protocols as old ones are broken. But you'd have to make sure that you were very clever about how it was implemented so that it could be switched without major changes.

Another thought is "more IP addresses", but again you are in the 60s. The computers don't have enough memory to deal with IPv6 length addresses. So again the best you can do is try to set them up with easy upgrades.

Which makes me think the best suggestion would be to teach them about Moore's Law, which of course would have a different name, and try to push for every protocol being extensible as technology grows -- make sure that more octets can be added to IP addresses without breaking them, that security is baked into everything but everything has a way of negotiating a protocol so that it can be upgraded, and that no assumed upper limit is hard-coded, so limits can always be changed.

Basically, teach them what we now know are software best practices -- constants shouldn't be hard coded in the software, they should always be in a separate config.


So you're essentially pushing against specialized IP-oriented VLSI chips in everything from NICs to switches to routers... instead they should all have been reconfigurable general computing devices?

Don't you think that would have slowed down the growth quite a bit?


Maybe. I'm just saying that we should teach people in the 60s to design for the future and make things extensible. Maybe we'd still have custom silicon, but they would be designed with an external bus to a second chip that can be easily switched out or upgraded or something as a way of dealing with future growth of address space.

Obviously it wouldn't apply to everything, but back in the 60s future growth was clearly not thought about the same way as today.


They could have had a design rule that all network endpoints must work with 32/64/128 bit IPs.

This could perhaps have been implemented in VLSI back in the 80s in such a manner that 32-bit IPs ran at full speed, 64-bit at half speed and 128-bit at quarter speed.


Yeah, perfect. If the standard had said IPs could have any bit length they would have designed accordingly.


I really don't understand what stopped ipv4 from being extended like this: w.x.y.z == 0.0.0.0.w.x.y.z

In fact, you can even ping any 32 bit unsigned integer and it will turn it into an ipv4. Try it: `ping 134744072` will ping `8.8.8.8`

So why not 64 bit? 32 bits fit in 64.

Instead we now have 2 concurrent protocols.

ps. 8*256^3 + 8*256^2 + 8*256 + 8 = 134744072
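
That arithmetic checks out; a quick check that the integer and the dotted quad are the same address:

    import socket, struct

    n = 8 * 256**3 + 8 * 256**2 + 8 * 256 + 8
    assert n == 134744072
    assert socket.inet_ntoa(struct.pack("!I", n)) == "8.8.8.8"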


Because "just add 4 more bytes of address space" wouldn't be backwards-compatible with the world's worth of hardware and software built around IPv4 anyways. There just isn't anywhere in an ipv4 packet that you could put those extra bytes that wouldn't break existing systems.

So the only real option is creating a new protocol with a new protocol ID in the header. And if we're making a new protocol anyways, we might as well design it to fix more problems than just address space exhaustion.

And that's what IPv6 is. The Wikipedia article has a lot of details on the kinds of changes it makes and why: https://en.wikipedia.org/wiki/IPv6#Comparison_with_IPv4


Because all the existing routers would break on all the new addresses. A new protocol was created specifically so that legacy equipment would never get the request.

For example, if an old router got a packet for 1.2.3.4.5.6.7.8, where would it send it, if it didn't crash just trying to read that address?


Yes, let's design the internet around keeping 90s routers running as long as possible. Excellent. Let's also design highways with a max speed of 20 MPH so we can accommodate Model T's


Funny you should mention the Model T. We did the exact same thing with roads as we did with IPv6. We created a new type of road (highways) that the Model T could not go on. But the new cars can still use the old roads, albeit slower than their max speed. This is the same as IPv6 going through carrier grade NAT.

Slowly over time the roads were upgraded, so now you can use mostly high speed roads to get between places and you don't really see Model Ts anymore because it's not convenient to avoid all the high speed roads.


A better analogy might be something like upgrading roads with the newly liberated fourth dimension; so mere three dimensional Model Ts - that people will try to use no matter your protest - are very unlikely to end up in the right place. (And not just Model Ts, but the entire twentieth century and a bit's worth of vehicles.)


Roads? Where we're going we don't need roads.


The issue is replacement cost and feasibility. As roads got better pavement (or were more reliably paved) speed limits could increase. I still have some 20-30mph dirt roads near my home here (actually quite a few), but highways were specifically made to support higher speeds (longer straight sections, in sane places smoother curves). Notably, improving road quality did not obsolete the Model T (though it did become obsolete) on its own. It just made them particularly outdated when other, faster, options were now viable.

Nothing has stopped making faster cars just as nothing (fundamentally) has stopped us from making a "better" Internet (supporting more addresses and other features). What hasn't been done is to say, "Ok, so we already have this massive hardware rollout but we're going to literally obsolete 99.99% of it overnight and cost everyone, home users, small businesses, governments, megacorps, billions or trillions to replace their hardware."

Why didn't that happen? Because it would've been stupid. Instead we ended up with what's amounted to two Internets with bridges between them. The transition hasn't been smooth, but it has been happening.


There's no way we could ever stop the world and get everybody to use a new protocol overnight.

Attempting to change the format of an existing protocol in-place in a way that existing hardware and software cannot handle, would be an effort that's dead on arrival.

The global transition to ipv6 is already hard enough, and that's with it designed as a separate protocol so that existing equipment that doesn't understand it can at least gracefully discard it instead of mishandling it.


An IPv4 address isn't A.B.C.D though. That's just a shorthand way for us write it down because those chunks are easier for us to deal with. An IPv4 address is an unsigned 32 bit integer.

The address is not the real issue. An IPv4 header contains 32 bits for the source address and 32 bits for the destination address. That's what needed to be extended. IPv4 implementations expect the destination address to be 32 bits after the source address.

Any fix you can think of to extend the address fields in the header will fall into the situation of having an additional protocol. Because changing the v4 header and calling it a v4 header would probably get your packet dropped as bad. Or have it being sent to somewhere else. Or some other undefined behavior.

So your best option is to have it identify as a new version. Yes, v4 implementations will reject your packet, but that's what you want in this case.
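
To make the fixed-offset point concrete, a sketch of how a v4 parser pulls the addresses out of the 20-byte header; there is simply nowhere for extra address bytes to go without breaking code like this:

    import socket

    def parse_v4_addrs(header: bytes):
        # Source address lives at bytes 12-15, destination at 16-19,
        # in every IPv4 implementation ever shipped.
        return (socket.inet_ntoa(header[12:16]),
                socket.inet_ntoa(header[16:20]))

    hdr = bytes(12) + socket.inet_aton("1.2.3.4") + socket.inet_aton("5.6.7.8")
    assert parse_v4_addrs(hdr) == ("1.2.3.4", "5.6.7.8")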


Not sure how to accomplish it, but advertisements should be captured in a semantically distinct way in markup. I hate ads, but I'm not saying you can't have them if you want, you just have to wrap them in the equivalent of <advertisement>...</advertisement> tags, so that there is no question about what content contains an advertisement and what doesn't. That way, it's trivial for me to block them. The flip side of the coin is that my browser will tell you if I'm blocking ads, and you can decide if you want me as a viewer or not.


Even if you could come up with an "objective" definition of what counts as an advertisement, what incentive do site owners have to be honest about it? Likewise, why would I want my browser to tell the site that I'm blocking ads?

This idea reminds me of the Evil Bit: https://datatracker.ietf.org/doc/html/rfc3514


> Not sure how to accomplish it

I am hoping that the same technology that transported me back in time and made me director of DARPA would help me solve those issues.


DNS.

Others here seem to be redesigning the entire intarwebs. I'll just pick one thing I might have been able to digest.

DNS is brilliant - but insecure, and centralised. The dependence on registrars was a huge mistake. The competition for names is an unintended consequence; the DNS created artificial scarcity, which resulted in commercial businesses that produce nothing of value.

So something like GNS, I guess. https://tools.ietf.org/id/draft-schanzen-gns-01.html


What a great question. One of two things:

A) Establish the expectation that websites "close" in the middle of the night for ~5-6 hours, local time / for each timezone. I don't know if it would best be done via cultural influence -- giving talks, writing essays, personal communication, testifying / making inroads with politicians -- or via creating some sort of protocol. The idea is to prevent the unhealthier aspects of internet binging and screen addiction.

B) Establish the expectation that internet comments are transcriptions of voice recordings. I.e. to leave a comment, you have to call a phone number and leave a message which then gets transcribed as "the comment." In order to respond or reply to a post or a thread, you have to listen to the message and tone of voice of the person you are replying to. I don't think this would fix every internet dialogue, but it'd promote healthier interactions and less division.

In my book, the largest problems with the internet are techno-cultural, not technological.


Fun fact: closing websites do exist among ultra-orthodox (esp. haredi) Jewish communities in Israel.

You can read a bit about how it works in practice on the web.

https://www.huffpost.com/entry/ultraorthodox-jews-are-co_b_1...

http://www.hareidi.org/en/index.php/Hareidi.org%27s_Kosher_I...

Background: https://www.journals.uchicago.edu/doi/pdfplus/10.14318/hau7....


Regarding A, for which timezones? Any attempt to narrow browsing time would ultimately fail as soon as the web goes global.


Good question. Ideally, each user's timezone would be required information. The server would check to see if the requesting user's local time meant that the website should be "open" for them.


Honestly this would probably just make VPN services more widely used to workaround shutdowns.


True, unless the timezone was tied to each user's accounts. The website doesn't have to "not work" it just has to return a page that says something to the effect of "we are closed for business."


So a user couldn't just change their timezone? What if they moved?

Also who is this to protect? The user? What if they work midnights or their days are backwards?


There are cases where the business shouldn't promote the use of its website at odd hours (child protection, gambling addiction and other similar things).


Exactly


A minute of silence for those with unusual circadian rhythms.


Regarding the second point, it would probably just make people say things like "you sound fat". It's still anonymous enough. In fact, people say mean things in public all the time. It seems some people are just inherently mean.


I don't disagree. The main problem I'm attempting to solve is the non-anonymous case. Even discussions on social media, where people are either "friends" or have the same shared interest, still devolve into discourse where few come away satisfied. Instead, many people feel either negatively critiqued or that other posters "just don't get it." Every time this happens, the sense of group cohesion frays a little.

I see this is as a problem of the medium of discussion more than anything else.


Lurker here. These ideas, especially #2, sound interesting for a Disqus-style (when they did embedded comments only) startup.


At least one domain where NO commercial activity is allowed. No buying, selling, advertising, trading. Companies who are convicted of mens rea transgression (along with whichever employees are guilty) lose ALL access to the internet in perpetuity.


This seems unworkable. Gaining attention is intrinsically commercial activity. Attention has value; you can't stop it bleeding into the rest of the world. Some examples and questions:

- An "influencer" gets her photo on a Wheaties box. After that, the influencer doesn't have to do overt advertising to promote the cereal, their fortunes are bound together.

- In politics, as the former US president so amply demonstrated, attention is a currency.

- What about the exchange of ideas? Can one talk about the contents of a book without selling (or discouraging the sales of) the book?

- Is any mention of brand names to be prohibited? If you can mention a brand, unless all the brand's marketing has been completely ineffective, you are selling the brand. Don't like it? Pass the Kleenex.


Until the late 90's the entire Internet was this way - even .com domains were subject to the Internet AUPs (Acceptable Use Policies) that strictly banned "commercial" use of the Internet.

AUPs lasted only a very short time after O'Reilly showed their Global Network Navigator site at Interop showcasing the capabilities of a new graphical web browser from NCSA called Mosaic that could - gasp! - display inline images along with the HTML hypertext! (Probably hard for the younger crowd here to believe, but early browsers had to open images in a separate window, sometimes with a separate image viewer helper program. Yeah, really. Mosaic was a game-changer, that made the vision of the modern web obvious to at least 1% of the people who saw it.)


> advertising

This one would be hard to enforce. Still a good idea.


Here are a few things off the top of my head (and trying not to duplicate too many things):

- Get rid of ARP - just append the LAN address to the network address like other networks. By default LAN addresses are random. (Note IPv6 enables this basically.)

- Support encrypted DNS and authenticated BGP.

- Let DNS return other metadata including the port as well as the IP address.

- Let DNS caching work. Don't misuse short DNS TTLs for load balancing.

- Ingress traffic filtering - reject source IP addresses from outside the current prefix.

- Not IP per se, but let multipath work in the LAN (and give Ethernet a TTL so that packets don't loop forever if things go bad.)

- Eliminate (or minimize) broadcasts. Use unicast/multicast for DHCP, service lookup, etc..

- Support relocation/forwarding of TCP connections so they don't break when your IP address changes.

- Fix TCP congestion control so that the data rate doesn't decrease as latency increases.

- Second adding congestion notification to TCP to differentiate between packet loss and congestion.

- Encrypt the host name in SSL/TLS.


Biggest thing I'd change: Build in the ability to do long-duration keepalives from the get-go, to avoid three-way handshakes when you want to send data later after establishing the connection.

A whole bunch of other changes to reduce latency in general - no one really quite realized how important latency was back when this stuff was designed, but in all fairness, it's almost, "how could they?"

Lastly, extend DNS to provide not only encryption and non-repudiation, but most importantly, more info, or just absorb Project Athena's Kerberos and Hesiod into DNS at first opportunity. This also allows AFS or other global distributed filesystem support, which in turn could have changed database architecture and semantics for the better. Imagine how different the net would be with a scalable global federated name and directory service that could do everything that Yellow Pages did in the 90s.

(N.B.: When I was at Chevron, we built our own version of YP that was hierarchical and multi-domain. It worked really well ("really well", as in damn near flawlessly from the Monday morning we turned it on after a coffee-fuelled weekend hacking session by one of the three true geniuses I've ever met!), seamlessly syncing the info that was in Hesiod/Athena, YP, and DNS to make as much of it as made sense available to clients of each. I've never seen anyone else ever do anything like that, even in all my consulting exposure to other big companies' network services architectures.)


Although DEC's LAVC (Local Area VAX Cluster) protocol had shown how important low latency was to those that were paying attention...


Wider IP address space at the beginning could have led to static IP deployment to the home, allowing for self-hosted websites and email service that would have obviated the need for ad-supported websites and web services. We still would have had the issue of asymmetrical home connections but that’s a separate issue.


You don't need a static IP though. You just need dynamic DNS and to use the name. And depending on the ISP you might be "static" until a long enough power outage anyway.


Brendan Eich gets to actually use Scheme for Netscape. It never takes off, because everyone hates Lisps; instead VBScript becomes standardized through IE, which is then phased out in favor of a C#-based script in the early 2000s, and we have a massively better web development environment.


Adam Curtis provides some historical context on what went wrong with the internet https://thoughtmaybe.com/all-watched-over-by-machines-of-lov...


IPv6 from the beginning.

I have a theory that NAT killed the open web. There was this idea at the beginning that everyone could host their own website, email, etc. But when you're behind a router, you suddenly have to be quite technical in order to set all that up on the computer in your room. So only (bored) technical people bother. It's possible this is the reason platforms came to dominate.


I'm not sure I'd subscribe to that theory. The port forwarding issue is a few clicks; actually getting a site to host is another issue entirely. Even if it was some application you ran on your desktop 24/7 that holds your hand through the process, NAT + DHCP aren't insurmountable.


Port forwarding + static IP. And don't be so sure that it's trivial; despite having taken a course on the network stack I'd have to do some research to figure out port-forwarding and get it set up correctly. Normal people's response would be, "What's a port? Like the holes on the back of the computer?"

In an IPv6 world you could have a free program you download that gives you an interface like Squarespace except you can host it for free on the very computer you're using to make your site. Could be totally accessible to nontechnical people. Same goes (much more powerfully) for distributed protocols like Matrix. You could have encrypted messaging platforms like Signal that don't even need a handshake server; devices talk exclusively to each other. Etc. It would be a radically different internet.


Is there any way to get there from where we are now? Seems like even with ipv6 we still have to use routers.


My understanding is it would be hard/impossible to get everyone to IPv6 at this point because there will always be a long tail of devices that aren't there yet, and we can't just jettison those from the internet

But yeah, I think even if everything magically changed to v6 overnight, we'd still be stuck with NAT to a large extent because so much is tangled up in it at this point. For one, it probably makes it easier for ISPs to bill their customers. Especially the dynamic-IP side of things.


I think it was a combination of bad actors (spam and malware) and market segmentation. (Soak the businesses)

It’s always been a somewhat niche, difficult thing to run most standard servers. (Web/mail/ftp/gopher/nntp)


There is no need to put IP stacks on non-server endpoints unless you know what you are doing and the implications. Each non-server device could establish a context with the network as and when needed to send and receive traffic, as you did in the dial-up age. Removes a huge raft of security problems, IP address shortages, DDoS, privacy/tracking.


The most toxic part of the Internet today is identity. Some of the IP address space should have worked like mobile phone numbers - paid for by subscribers and representing single identities. Even better, after the invention of RSA, the government should have backed ISPs or states issuing signing identities for a protocol level identity standard - sort of like SIM cards that would “represent” you on a single, authoritative device without the possibility of delegation.


What you're suggesting is a dystopia.


I think they were being sarcastic, given that they're writing it under a pseudonym.


Almost all of us are writing under pseudonyms.


Are cellphone numbers a dystopia?


Yeah, I keep getting solicitors and I can't easily change my number to something only my friends know. I also can't easily separate my business number from my personal number without paying for a second phone subscription, even though it wouldn't cost the phone company anything more.


Completely not that important, but I did this a year or so ago, committed 'the number' to google voice, and handed out my new one to close contacts. It took a bit for people to filter their calls correctly, and I could always answer the 'business line' regardless, but I have zero regrets about it now.


Unfortunately, using Google Voice in that manner is USA-only, and much (most?) of the rest of the world has no comparable alternative.


Just as an alternative to a different dystopia though.


My guess is that most people who use the internet by 2050 will be strongly authenticated, whereas the current anonymous internet will remain as a relic of barbaric times for most, but still in use.


I expect we'll have one or more protocols enforcing end-to-end route verification at the packet level on most Internet routers by 2050, so I'm not sure how anonymous any Internet served over that infrastructure will be. I expect bans will be much more effective, international traffic will have much more advanced filtering and blocking, and that proxying will basically be illegal without some kind of business arrangement and being subject to surveillance & ban-supporting laws (and anyway, proxying weird traffic will get you banninated by the entire "legitimate" Internet, much faster than it does now).


I'd go so far to say that by 2030, at least one OECD country will have a law requiring ISPs to enforce remote-attestation of the Secure Boot of every device accessing the internet.

Such a government could then require app stores to remove "dangerous" technologies like Tor and messaging apps that support end-to-end encryption.

Perhaps ISPs would be allowed to support "legacy" devices and OSes, but with a special "Evil Bit" set on packets, so that websites could (and in some cases would be required to) refuse access.


I agree that there needs to be some way for identity to be strongly represented on the internet, but there should be a way or places where it isn't required. This could be something that perhaps allows indemnity for user-generated content as long as the content is attributed to a person; otherwise the site is responsible. This would allow the small-time cars forum to (with identity) operate safely as they do today, but would also allow 4chan to (anonymously) operate as long as they ensure the content won't get the website operator in trouble.

The other problem you run into is ease of use. Most users of the internet can't understand PKI based certificates so I don't know how an interface could look that would provide a strong guarantee of identity but also allows my mom to use it and not get phished.


I agree on the general problem of identity. Implementation-wise, one could also think of some kind of digital numberplate, which can change so that people can't follow you around on the net, but is still unmistakably linked to you or your device.


The internet actually works really well. Its problems are human.


I mean, kind of. It does work, but under the hood there's a big fossilized mess of a technology stack that could have been better had it not been the result of many years of accumulated unrelated solutions to different, smaller problems...


Decentralisation in that you'd preferentially download 'web' content via your friends. Every house has an always-on server and networking hardware to enable selected local connections (long range WiFi?) across town with no reliance on an ISP. Think Wikipedia content living in tens of millions of locations around the world, with updates being pushed out with versioning and flagging if your extended network lacks consensus on a change.


Find a way to better decentralize it, so that it doesn't rely on single hosts(and later sites) as the source of truth for any particular function.

The original idea was that protocols would allow anyone to participate by simply making their own webpage. But dynamic IP addresses, the DNS system, and even just HTML design were out of reach for most people, so that got lost and monstrous websites under centralized control became the mediators for most people.

So if we could find a way to bake that decentralization into the protocols even more strongly while making them accessible to non-technical people, that's the change I would make.

The aim is to create a world where central platforms are not dominant, but any user can easily participate in the communication protocols without there being a central point to collect all the data or force changes from.

...of course, I have no idea how one would go about doing that, and therein lies the rub.


> So if we could find a way to bake that decentralization into the protocols even more strongly while making them accessible to non-technical people, that's the change I would make.

Have you heard of Holochain?

"Holochain is an open source framework for building fully distributed, peer-to-peer applications.

Holochain is BitTorrent + Git + Cryptographic Signatures + Peer Validation + Gossip (data propagation).

Holochain apps are versatile, resilient, scalable, and thousands of times more efficient than blockchain (no token or mining required). The purpose of Holochain is to enable humans to interact with each other by mutual-consent to a shared set of rules, without relying on any authority to dictate or unilaterally change those rules. Peer-to-peer interaction means you own and control your data, with no intermediary (e.g., Google, Facebook, Uber) collecting, selling, or losing it."


> The aim is to create a world where central platforms are not dominant

Centralization has been greatly enabled by it being legal to hoover up all kinds of user data and monetize it by selling ads or using it for ML training. Huge moats, unassailable by anyone trying to charge money directly and discouraging interest and participation even in free, volunteer efforts (since the commercial ones are already no-charge...) while encouraging players to jealously keep their users captive, avoiding open protocols and certainly not developing new ones (notice how application-level network protocol development and support started to dry up fast as FB and Google's money-printing machines really started to get going?).

I'd say the shortest path to fixing the Internet is making that illegal, especially since that activity is also horrible and dangerous for other reasons. Ideally, the same law would hamstring the credit reporting agencies and also keep banks and other financial institutions from using/selling your data.


You seem to be talking about WWW and not the Internet. The Internet itself was designed to be decentralized, in fact that was a major motivator (distributed command & control in the case of a major war cutting off large chunks of the C&C groups from each other or eliminating them entirely).

Now, it could've been done in a better fashion, more deliberately, or more broadly, but there are at least two notable early protocols with decentralization/distribution in mind: email (consisting of several protocols) and nntp. Now an individual may still access a, to them, centralized authority for sending/receiving content, but the protocols themselves were meant to support a distributed architecture.


While yes, "the internet" and "www" are different systems, the latter built on top of the former, in practice "the internet" and "www" are one and the same. When we're talking - culturally - about problems with the internet, we mean the problems with "www".


In practice they are not the same thing. And the original prompt poses sending you back to the 1960s, about 30 years before the WWW came into existence.


If we’re starting from the 60s, there’s no guarantee that by now the two would be practically the same (which isn’t a given, even now).


Tor's hidden services and the Yggdrasil network are examples of decentralized alternatives to the DNS system.


3 things: Security, Security and Security. BGP security, DNSSEC, TLS by default, etc.


Imagine if 1980s security (single DES, no public key crypto) was baked into IP though.


None of those things secure the nodes of the internet, if the nodes aren't secure, nothing is.


Decouple IP addresses from locations, to make it harder to balkanize the internet.


Replace HTML with something closer to hypercard - it would give simple interactions, database management and user interface that you need.

Failing that, Flash should have become open source and part of the W3 web standards, but opened up such that we could observe the code that's running.


I rather liked Adobe Flex programming, and thought Adobe AIR was very cool (the ElectronJS of the 2000s).


Actually, this became a thing very early on, compare Viola Net.


Oh, something minor. I’d make both domains and paths go from least significant to most significant, so com.google.mail/u/0/html


JANET in the UK did this until it adopted the internet standards in the 1990’s. Always seemed more logical to me too.

https://en.m.wikipedia.org/wiki/JANET_NRS


One of my only two attempts at an Ask HN was this specifically [0], and I've wondered about this still. Having top-level domains at the top and being able to drill down into subdomains and then folders feels a lot more intuitive. Could a browser add-on be made to experiment with this? (A rough sketch of the transform is below.)

[0] https://news.ycombinator.com/item?id=24438978
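Purely as an illustration, here is a sketch of the transform such an add-on might apply to the address bar (the reversed form is not a real addressing scheme, just a display experiment; query strings and ports are ignored for brevity):

    from urllib.parse import urlsplit

    def reverse_host(url):
        # news.ycombinator.com/item -> com.ycombinator.news/item
        parts = urlsplit(url)
        host = ".".join(reversed(parts.hostname.split(".")))
        return f"{parts.scheme}://{host}{parts.path}"

    print(reverse_host("https://news.ycombinator.com/item"))
    # -> https://com.ycombinator.news/item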


I didn’t realise how much I wanted this until now.


backlinks.

(see http://www.youtube.com/watch?v=bpdDtK5bVKk&feature=youtu.be&... by Jaron Lanier, also see Ted Nelson)


Going a bit more philosophical, I found myself thinking about the lack of backlinks and people like René Descartes (via https://en.wikipedia.org/wiki/René_Descartes) :

" Refusing to accept the authority of previous philosophers, Descartes frequently set his views apart from the philosophers who preceded him. In the opening section of the Passions of the Soul, an early modern treatise on emotions, Descartes goes so far as to assert that he will write on this topic "as if no one had written on these matters before." "


> How are you going to make the Internet better?

Anything that allows me to send files to a device of a person I know on a direct connection without a service in between and regardless of our locations in the world. Still an unsolved problem AFAIK.


If you're both connected to the Internet and NAT didn't exist, you could run a small server and he could get your files


Very easy choice: all services paid for explicitly.

Having things given away for free, only to be exploited for various purposes, is the reason these services are shit: you are not the client; the guy who pays for your data or advertising space is.


Security should have been a top priority right from the start. It is very strange that ARPA, a DOD agency that sponsored most of the Internet technology, cared so little about security.


A gun doesn't have built-in security either. But there's generally people with guns standing in front of the armory. I'd say they probably expected people with guns standing in front of the computers too.


It's a darn difficult question. The "internet" is the first time humanity got a technology for information exchange that can scale arbitrarily. It creates a complete graph that can enable data exchange between any two individuals (and of course an arbitrary additional number of devices).

How this incredible technical potential got translated into social reality says more about society than the technology[0]. If the stack of applications that has been built on top of it has become dystopic, it is because society had dystopia in its DNA. The technology simply allowed it to be expressed, so to speak.

By the same token, any technical tweak that maintained or improved this scalability would simply have led to an alternate dystopia. It may be counterintuitive but maybe the only internet that would actually be "better" would have been a more local / less scaling version. A more gradual transition might have given society time to adapt, develop some defense mechanisms and not be dominated by the lowest common denominator

[0] Keep in mind that all communication technologies of the 20th century (phone, radio, TV) quickly degenerated and never delivered the utopia initially projected


1. Prevent protocols from crossing layer boundaries. FTP, SIP, etc. are application protocols, yet they use ports and IP addresses for identification of endpoints. They break if the transport layer does something they did not anticipate (e.g. NAT). NAT is not evil, nor broken. Protocols not respecting layer boundaries are broken.

2. Make use of DNS SRV records for all services. Why must HTTP be on port 80? Why not consult DNS to resolve the port too? Pretty much related to my first point. (See the sketch below.)
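For illustration, a minimal sketch of what port resolution via SRV could look like on the client side, assuming the dnspython library and a hypothetical _http._tcp SRV record published for the domain:

    import dns.resolver  # dnspython

    def locate_service(service, proto, domain):
        # Ask DNS where the service lives instead of assuming a well-known port.
        name = f"_{service}._{proto}.{domain}"
        answers = dns.resolver.resolve(name, "SRV")
        best = min(answers, key=lambda r: r.priority)  # lowest priority value wins
        return str(best.target).rstrip("."), best.port

    host, port = locate_service("http", "tcp", "example.com")
    print(f"connect to {host}:{port}")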


Separate data and views.

Basically: servers focus on serving their data, and then it's up to the user to figure out which "renderer" they want to use to display it. Ofc defaults would be provided.

But, say, you wanted to view tweets in a table form: no problem.

Or maybe, you want to have a really wacky "whip the llamas ass" UI for podcasts: go for it.

-----

The big benefit of this is that it would allow for artistry in websites, rather than the boring old blue, black, white, grey material design.


I pick the year 1995, right as the web starts to hit the mainstream.

I'd add some kind of built-in, frictionless, privacy-respecting, user-friendly, transparent payment/micropayment system. Built on open standards so we could have multiple competing UX's and the best one(s) would win.

Basically, think about how Patreon and Kickstarter (which are not without their flaws) have allowed people to support creators more or less directly. Now, imagine if we'd somehow baked something like that in to the internet itself.

The web is 99.9999% garbage and one of the biggest reasons is because we spent nearly two decades training people that everything on the interwebs was free which meant that it had to be ad-supported which means that nearly everything has been forced to pander to the absolute lowest common mass-market denominator.

Even with some kind of "good" micropayment system, sure, most stuff would still be free/ad-supported lowest common denominator crap. I have no illusions. Just look at every other form of media that has ever existed.

However, just imagine how books or movies or whatever would look if they were de facto forced to be free for the earliest part of their existence.


Accepting but regulating centralization of the Web

I think we failed to appreciate how much the average user would need centralized services (Search & Social) to use the Web. Both of these services are around discoverability of content. Humans want a water cooler to visit and chit-chat, or an organized library to look for information.

Additionally, because accessing the Web was seen at first as "free" (outside your ISP), people would gravitate towards "free" centralized services like Facebook and Google.

This created a recipe for what we see today with the incredible power of these companies over so many aspects of our lives.

So what would I think should be different? I would have been more thoughtful about regulating these centralized services in the way the FCC regulates the airwaves & media companies. Which is even more proactive than antitrust law. It's OK that they're profitable. That's good! But we ought to avoid single companies owning the entire search / social space.


I don’t know how it would be implemented but I always thought it would have been better if there was more meaningful separation of domains, and that you need a different browser to connect to a different domain. That way the net could be divided. There could be a domain for only verified information. A domain for public use. A domain for only educational content, etc. I have no idea how it would be moderated though


Such a categorisation already existed in the form of the original top-level domain names, but it was never enforced: .gov for government, .mil for military, .com for commercial, .edu for educational facilities, etc.

For good reason, though. Who says what information is "verified", the government, the news media, the publishers? And what governments, media, and publishers get a say?

Perhaps it'd be nice for browsers to show a little indicator to confirm that a website is hosted by a government (although .gov and .mil are only valid for American government bodies, other governments could use a designated domain as their basis). There's a big difference between what's right ("climate change is real") and what the government is saying ("who knows, maybe drinking bleach is a good idea?").

Dividing the net goes straight against the idea of the internet. I don't think using a special browser for special domains is very tempting. We'd probably end up with the alt-net version of onion.to to proxy all different kinds of sites to a single application.


You're conflating authenticity with authority. It would be nice to quickly and reliably tell whether something is a government source, or a educational institution, or what have you. Whether the information is then correct, that's another matter.


Add a fallback protocol that allows for onion-protected mesh-net routing. All that is ever revealed for routing a packet is the next human-supra-organism you want that packet to be handed over to. Imagine you write a reply, "This is not a good idea", and hit send, but some benevolent leader decides this message is not a good idea. So it fails on the centralized infrastructure.

Now you wrap the address to me: Individual > Household > Street > City > Airport, into encrypted shells that only reveal the next destination upon arrival within the data-organism.

These, of course, are only valid if a public ledger certifies their long-term existence.

Your reply will take time; it will travel on land, air, water, and by all means possible. But it will reach me, I promise you that. To add plausible deniability, all you need is hostile apps that participate within the meshnet without the user's consent. To add motivation to participate, just allow the transfer of crypto-currency - a currency backed by the promise of data transfer, no matter where, no matter what.
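A minimal sketch of the "encrypted shells" idea, assuming the sender shares a symmetric key with each hop (using the cryptography library's Fernet; the hop names and the scheme itself are made up for illustration):

    from cryptography.fernet import Fernet

    hops = ["household", "street", "city", "airport"]
    keys = {hop: Fernet.generate_key() for hop in hops}

    # Wrap the payload innermost-first, so each hop can only see one layer.
    # A real scheme would also embed the next-hop address inside each shell.
    packet = b"This is not a good idea"
    for hop in hops:
        packet = Fernet(keys[hop]).encrypt(packet)

    # The packet travels airport -> city -> street -> household,
    # each hop peeling off exactly one layer.
    for hop in reversed(hops):
        packet = Fernet(keys[hop]).decrypt(packet)
    print(packet)  # the original message arrives intact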


Client certificates are required with every HTTP/S request. Back in 2021, we'd never have logged in with user/pass.


That sounds horrendous IMHO. How would it be administered, and what privacy concerns would there be?


I would make packets over the MTU truncate (with a flag indicating the truncation), adjusting the IP checksum (the same way it's currently adjusted for changes in other fields like the TTL) to match, instead of fragmenting or dropping. That would avoid all the issues with PMTU blackholes or non-initial fragments.
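Incremental checksum updates of this kind already exist for fields like TTL (RFC 1624 describes the arithmetic); a rough sketch of adjusting the header checksum when the total-length field shrinks to the MTU might look like this (the example values are made up):

    def ones_add(a, b):
        # 16-bit one's-complement addition with end-around carry.
        s = a + b
        return (s & 0xFFFF) + (s >> 16)

    def update_checksum(old_cksum, old_field, new_field):
        # RFC 1624: HC' = ~(~HC + ~m + m'), all in one's complement.
        s = ones_add(~old_cksum & 0xFFFF, ~old_field & 0xFFFF)
        s = ones_add(s, new_field & 0xFFFF)
        return ~s & 0xFFFF

    # Total-length field drops from 4000 bytes to a 1500-byte MTU:
    new_cksum = update_checksum(0xBEEF, 4000, 1500)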


The internet itself is pretty solid: other than a few technical tweaks, I think the infrastructure evolved as well as it could. One thing I'd like to see changed is a re-thought internet protocol that's more privacy focused: an IP address is an absurdly specific identifier, fingerprinting a user down to a single household in most cases. An ephemeral addressing scheme for clients that changes with every new connection would be really quite helpful, perhaps along with some safeguards that allow law enforcement to still track that ephemeral identifier to an internet connection in the case of abuse.

The web is a different story, especially social media. I'd like to make social media, and the web in general, more forgetful. "Digital natives" (second-flight millennials and Gen Z) are going to get screwed with the persistence and easy archiving of social media data. This is partially a result of the natural shift in cultural expectations that occurs over time, as well as a consequence of having their awkward-for-any-generation blunder years recorded forever. This is definitely more a legal change than a technical one, but I would mandate (1) a time span (such as 5 years) after which public social media posts must revert to author-only private unless consent is otherwise obtained, and (2) a prohibition against public mass archiving of social media posts from people who aren't public figures.

This type of mass archiving for the use of closed-off academic research libraries is acceptable, but merely going and hoovering up every public tweet or YouTube comment or Reddit post and putting it up with a public search engine shouldn't be permitted. Treat it like many countries treat the census, and only allow publicly opening up these archives far into the future (for example, the raw underlying questionnaires used for the Canadian census are not released to the general public until 92 years after collection). Different story for public figures such as politicians, but we shouldn't archive everything that everyone has said in perpetuity.


A better DOM. I know this topic is mostly about data transfer, but I'm going to complain about web UI standards. For many uses, a stateful, coordinate-based UI standard is needed. It's why PDFs proliferate: the DOM can't faithfully reproduce documents in a WYSIWYG way. It's not practical for every document/content author to become a "semantic auto-flow" layout expert: the learning curve to do semantic right is too damned long. We could have things like interactive flow-charts and ERD diagrams with our favorite GUI widgets in them if we had a decent stateful coordinate standard (and maybe the "missing" GUI idioms like combo boxes, tabs, editable grids, MDI, drop-down menus, etc. Reinventing them in JavaScript has proven a mess.)


There is only one protocol, for file transfer.

Consequences:

* There is no live user tracking.

* Access control can be user/password or ssh keys

* You always have an archive of what you read

* You always have an archive of chats

* Everything is in principle decentralized (whether it is in practice depends on whether people keep files).

* Clients are in control.


This is a really great question! I can't help but think that for some small multiple of the cost of the International Space Station or Large Hadron Collider or any other large government science experiment, the nations of the earth could have gotten together early on and funded a free satellite internet project. It would have been limited bandwidth, high latency and error-ridden but it would have been ubiquitous in coverage, much like the GPS system. And would have accelerated adoption of many of the digital cash and e-banking innovations we are seeing today. As well as gotten a jump start on the physical layer of space based internet ;)


Push for content-addressing over URLs. To be honest, I don’t know if this would hamper the development of the internet or be a good thing at all, but I would love to see what people would come up with and how it would change the web.


The Web Packaging specifications[1] are surprisingly close to turning URLs into a content-addressed system. Content-addressed networking is all about names for content, after all, which philosophically is a close match for what a URL is; it just so happens that HTTP uses URLs to resolve servers that serve the content. But with Web Package, content is signed by the origin and can be distributed via other means.

One of the primary use cases for Web Package is to let two users exchange content while offline, perhaps via a USB stick or whatnot. This isn't part of the specification, but we could begin to imagine sites that have a list of the web packages they have available for download. And we could imagine aggregating those content indexes, and preferring to download from these mirrors over downloads from the origin's servers.

I'm hoping eventually we get a fairly content-addressed network, via urls.

[1] https://github.com/WICG/webpackage
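A toy sketch of what naming content by its hash could look like, loosely modelled on the ni: scheme from RFC 6920 rather than on Web Packaging itself:

    import base64
    import hashlib

    def content_name(payload: bytes) -> str:
        # Any mirror can serve the bytes; the client re-hashes to verify
        # they match the name, so which server served them no longer matters.
        digest = hashlib.sha256(payload).digest()
        b64 = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
        return f"ni:///sha-256;{b64}"

    page = b"<html><body>hello</body></html>"
    print(content_name(page))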


I would fix the order of URLs: instead of http://news.ycombinator.com/item it would be https:com/ycombninator/news/item


I actually really like this idea (and once created a browser extension to at least make URLs look like that in the address bar), but I still think there is a benefit to having domain names distinguishable from paths, so it would be:

https://com.ycombinator.news/item

Having just checked, I see that ycombinator.news has already been registered, so maybe the lesson is that everyone should register "palindromic" domains, like com.ycombinator.com for example.


The hierarchical order and flat namespace is logical, but for privacy reasons you may not wish to make a DNS query on the full path. So we might want to leave it split:

com.ycombinator.news/item

On the other hand, you could do an incremental query perhaps and cache the path length.

But I wonder about having the protocol name in there. Couldn't DNS return the ports for the supported services and then you could pick whichever one you want?


Can we just do this now anyway?

Also I like that ycombinator becomes "ycombninator".


Laws to prevent and penalize monopolies from forming on the internet. E.g. prevent google, facebook, amazon, etc from becoming all-consuming evil entities.

That and probably mandatory native layer 3 encryption


One general change and one specific:

- A general idea: Arrange so that sending packets costs way more than receiving, just like snail mail. (At the moment, a large website or spammer pays very close to $0 per message, while individuals pay much more per byte to receive junk.) This would encourage decentralisation and nicely small web pages, and discourage spam.

- And "disappear" XML. It might be "ok" (just) as document markup, but it's a terrible for structured data and config, and for transfers.


DNS is a bit later, but: the domain-name specificity notation.

As a coworker of mine pointed out, it's a little bit ridiculous that URLs on the web look like this: `more.and.more.general/more/and/more/specific`. They should really be `more.and.more/specific`, just like bang paths[1] were.

[1]: https://en.wikipedia.org/wiki/UUCP#Bang_path


re banning ads.

I expected Xanadu. Centralized, omniscient namespace, two-way links, micropayments, etc.

We got The Web. Which eschewed all of that.

Much as I hate The Web (repeating myself), I grudgingly accept that it probably succeeded because it wasn't Xanadu.

Even if Xanadu launched, like a better AOL or Prodigy, I suspect most people wouldn't have grokked it.

Another triumph of Worse Is Better. Like PHP and JavaScript and so many others. Then hot patch it towards something less offensive.


I would probably have most nodes on some kind of overlay network contained within a jurisdiction/extradition area, with fewer and more structured interconnections across hostile national borders. I don’t think it’s reasonable to make every small business with an email or web server for its local customers and partners, have to defend against large and well funded criminals operating in broad daylight from Russia.


Just have the internet be a bunch of data streams that people can start/pause/stop any time. It would be "real time" from the beginning, and then you can just build whatever around these streams.

It'd be like a brain broadcast: just a multi-subscription stream, a bunch of inputs, and reactions to those events.


I would remove JavaScript.


I think there would always be something invented to take its place. It addresses a real need that lots of people have. If you want to remove it, you have to replace it with something else...


Encourage the app store paradigm and discourage the browser paradigm. Users download apps that they trust instead of using a browser to go directly to a URL that (a) could be fake, e.g. "whitehouse.com", or (b) installs malware on the client computer. The theory here is that app developers know how to wade through the shark-infested waters of the internet better than grandpa or grandma, and can code basic protections into the app. E.g. just because a website says click here, you don't really have to click there. Especially if a dialog pops up asking whether you want to give elevated permissions to XYZ.

Encourage the development of browsers for kids. Parents can configure their kids' computer to only run the "KidFox" browser, which has all the security features turned on: it only allows whitelisted sites that have been vetted by various agencies, denies escalated privileges, turns off all webcams, prevents remote hackers from taking over their kids' computers, etc.

Lastly, it is more of a mindset. Developers should take the attitude that every client computer, server, and database has been compromised to some degree. That is, we should have defense in depth and not rely solely on one mechanism to protect us from the bad guys.


Stretch goal: No password authentication whatsoever.

Target goal: No cleartext password authentication. (No telnet as it was, no ftp as it was, no smtp as it was, etc., yada, and so on....)

Fallback goal: Get HTTPS right far sooner, with cryptographers working on SSL 1.0 from the beginning, with funding, and eliminate HTTP as soon as possible.


Emails cost money to send. The money goes towards some good cause (maybe even towards internet infrastructure).


I think it's too soon, but security security security.

I'd push as many examples of capability-based security into the academic world as I possibly could, in the 1970s.

Alternatively, push a version of Pascal with a standard library, and drown the insane practice of ending strings with a null instead of knowing their lengths.


Not my idea, and it's not too late to change it IMHO. We use the word "free" for services that extrapolate and sell data. These services are not "free"; this small thing IMHO drives the data broker industry to a large extent.


One change is enough to transform the nature of the net: no JavaScript or other way of pushing executable code or dynamic state changes to the client; instead, extend HTTP to contain a Turing-complete query language to be executed on the server.

Pondering the implications is left as an exercise for the reader.


It would be marketing & advertising free.

As in, if you need ad revenue & marketing to support your website, you simply don't exist. You can have a website, but no advertisements or "user engagement" nonsense. You're either free or you're offline.


I've so many ideas for this question.

So so many protocols (history), but how few could we get away with, and what would they look like?*

Why don't we have constructive computational contracts for computational work?

How do we make the Internet easier to understand?

How do we manage the agency problem? We yield far too much agency as a matter of daily life, our data is not our own, our decisions are shared with barely knowable third parties.

How do we design human computer interfaces with health, especially mental health, as primary constraint?

How rapidly can the EU coalesce around a combination of a RISC-V general purpose CPU (with suitable trimmings) and a SEL4-influenced-kernel, perhaps in Rust (https://gitlab.com/robigalia)?

How do we standardise on constraints of discourse such as those pertaining to offensive language or hate speech? How do we make it easier for people to communicate with kindness? Autohinting everywhere? Like a shellcheck for human bashfulness?

VR and AR are coming very soon and without care they will be shatteringly destructive of human life. Humans addicted to computationally modeled utility functions mediated by multi-sensory computer games?

How do we embed the lore in the experience? How do we make available all the references as delightful marginalia?

What areas of Mathematics and Physics do we need to study to get ahead of our problems? Category theory is beyond trendy, what's trending? How about rigorous dimensional analysis to match the type theory, or sumthin? How do we invite the world's smartest financiers to apply and share their thought more generously?

Can we settle on a basic curriculum? What functional minimum of linguistic, mathematical, physical, visual, and other skills do we need? Is lisp or a variant the first language we should learn, and if so how should we be able to learn it? If not lisp then what? APL? Fortran? Compiler forbid, Haskell?

How do we ensure that code and documentation are always in sync? How much time will this require?

How do we guarantee a standard of professional attainment and delivery of ICT expert that is globally effective? How do we standardise how we do, not just what we do?

How best can we help each other make our Internet an even better place?

(much spelling, apologies)

*4


Symmetric connections for everyone, everywhere by default (asymmetric only when it's not feasible, i.e. mobile devices). This, I always thought, would open the door for more peer-to-peer or onion or some super-decentralized architectures.


It was pretty much always symmetric connections at the beginning. It's only the economics of ISPs that changed it.


I would have built in a protocol which could be used for decentralized ad networks in a federated fashion unlike the craziness that we see today with TOS designed to fight all competition and to selectively control the markets.


It's not entirely fair, because in the 60s basically all modern crypto primitives were missing. If I had those:

1. Encrypted onion routing on layers that betray source/dest IP.

2. eSNI on all TLS connections.

3. Privacy-focused DNS.


Better child protections. Not sure what the implementation would be though.


Ban children from Internet. Solves even more problems.


And possibly creates new ones


Nothing bad could ever be permanently fixed, because these changes were inevitable.

The problem is how laws destroy fair competition by favoring those with the most $$.

Fix that, and you fix everything else (not gonna happen).


I wouldn't change anything. Centralized control will be abused. The people I see harassing others online a lot of times are US citizens, on US domains, in US jurisdictions. Sorry.


Remove the cookie consent popups.

Or, if we're talking about the groundwork, spec cookies such that browsers must implement the cookie consent, and therefore sites can't build it in JS.


A law that requires apps and websites to ask their new members what they want from that company, and then the most repeated comment/idea/design should be applied.


You clearly have something in mind. Speak up.


Require emails to be signed by the sender cryptographically.


We have DKIM and you can safely drop non-DKIM mail. This tells you the MTA is actually sending the mail.

The problem is that these MTAs get hacked or people just hijack domains.

Heck, the barrier to buying a domain is so low that this is a legit way of spamming too.
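For reference, checking a DKIM signature is only a few lines with the dkimpy library (a sketch; "message.eml" is just a placeholder file):

    import dkim  # dkimpy

    with open("message.eml", "rb") as f:
        raw = f.read()

    # Verifies the DKIM-Signature header against the key the sending
    # domain publishes in DNS; unsigned or tampered mail fails the check.
    if not dkim.verify(raw):
        print("drop: no valid DKIM signature")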


> the barrier to buying a domain is so low

This suggests that one way of making spam unprofitable would be to require any domain which sends email to put up a bond of $100 which is forfeited if any of the big four email providers decide that the domain is sending spam.

To make this acceptable to public opinion, the forfeited bonds would go directly to popular charities, and all existing domains would be automatically grandfathered in, so there would be no extra cost for any current business or user.


> safely drop non-DKIM mail

Yes, but the customer will blame you when e-mail from a misconfigured MTA doesn't arrive (from my experience running one).

If only we'd had DKIM from the start, preventing thousands of broken servers from being set up in the last twenty five years...


PGP with good UX could've been great. Too bad it missed the mark.

S/MIME seems to work pretty well, but the paid certificates pretty much doomed its uptake. I think I still have a Startcom certificate from back in the day that has long since expired.


It's the 1960's. There's no RSA or AES or even DES.


Sure but it's a small community that hung out in person a lot. Tokens, pass phrases or one-time-pads could have been physically swapped.

There's really just a difference in thinking, "private by default for email" vs "public by default." Or "untrusted network" vs "trusted network." However you want to think about it.


ARPANET had a link between Stanford and UCLA in the 1960's. That's a five hour drive. One time pads can only be used one time. Otherwise XOR is a pretty bad cipher.


I would propose a decentralized domain registration system which could overcome all the monopoly, censorship, and bureaucracy related to ICANN.


I'd make it so you can only set the evil bit to 0.


Develop stronger cultural and technological safeguards to let people remain anonymous and unmasked, preventing the eventual erosion of privacy.


replace https:// with https: (I believe that was TBL's biggest regret too) :).


Eliminate comments.


Zero trust as a factor of the protocol itself.


Fiber internet. Having 1000 mbps up/down was so nice when I lived in a city where I had access to fiber.


Widespread adoption of trusted keystores and encryption keys, completely eliminating the need for passwords.


I had to answer this question on my computer networks exam ages ago. I forget my exact answer, but it was along the lines of “ATM is the greatest tech since sliced bread. IP is bad because connectionless blah blah… therefore http over tcp over atm better than http over tcp over ip”

I passed the course.


Email. Cold email costs $0.01 per message per address to send.

How we distinguish warm vs cold, I don't know.


Make everything little-endian; it's a small ask and would make so many lives easier.


We already have IPv6.

The only things broken on the internet are smartphones and closed IoT firmware.


>We already have IPv6

I feel like starting with IPv6 would make a dramatic difference in adoption rates today... I.e. IPv4 never gets created. Imagine a world without a single NAT device or port translation algorithm.


Why am I the only one who likes all the devices in my house having one IP address? It's no one's business whether I have 10 computers in here or 1.


There's nothing stopping you from continuing to route everything through one computer, since it's your address space you could even do NAT.


NAT would never have been a thing if IPv6 had been the starting point.


The internet is fine.

I would change the web.

I would remove JS and design browsers to natively run python instead.


Smart documents instead of web apps. And no JavaScript of course.


I think you mean the late 60's and early 70's.


Nothin free. Pay. Small amounts here and there.


Ban all commercial activity. Make that law loosely defined so it can strike down any company trying to work around it.

A centralized internet will inevitably do more harm than good.


Use IPv6 from the beginning and banish NAT.


Simple answer. No third party cookies.

And no JS. ;) /jk


Add requirement for every user to have to solve mathematical equation before being allowed to post anything. Make it hard requirement and enforce at all times.


No JavaScript


…also I’d like to add… It’s a pity @dakami isn’t around to give his response, I’d have liked to hear it.


Put AJAX back in the bottle.


Surprised there are comments about essentially trivial network-layer implementation details, and not so much about culture and UX.

As an alternative:

- Private and federated. Everyone has a personal server application which spans multiple personal computing, storage, and peripheral devices and supports federated access at varying levels of security.

- The server stores and manages a user's private data and anything else they feel like storing or sharing. (This requires unobtanium level security, but since we're imagining let's pretend this is a solved problem and see where it takes us.)

- Sharing of all kinds is user controlled and is opt-in across multiple competing federated networks. This includes social networks - with the difference that anyone can start their own network, for any purpose.

- Networks are decentralised and peer-to-peer, and do not store personal data, track, or profile users. This is a user-centric network where users own and control their data. Not an industrial data silo network.

- Users can share different interest profiles and personal details across different networks with varying levels of security and implied credibility.

- Ads are opt-in not opt-out, and defined by voluntary and informed profile and interest sharing, not involuntary and uninformed data harvesting.

- Anonymous microtransactions are a thing. Anyone can sell at scale with as little friction as possible.

- There are no cryptocurrencies and no blockchain tech, because generating random numbers with the equivalent of your own electrical substation is fucking stupid. There is a low-energy secure equivalent. (See unobtanium. Or is it?)

- A common kit of essential server apps is open sourced and community-maintained.

- Commercial and/or professional apps are available by hire or subscription. Servers have a multi-profile multi-layer security model which controls which layer of personal and/or server data outsider apps have access to.

- All paid-for apps supply full details of the schemas and file formats they use, to guarantee that users can freely transfer data to a competing app provider, so apps and services can't hold personal data hostage and have to compete on service quality, not on retention gaming.

- Hacking, malware, virus creation, phishing, and so on, are punished by deletion of personal server data and reduction to the most basic server hardware and software. For serious and repeat offenders, this is for life.

- IoT devices are treated as personal server peripherals with no external data sharing (except by opt-in.)

- Government and military networks use an expanded version of the same system. Municipal, military, and internal gov services run on separate private subnetworks which can only be accessed through authorised devices with extra ID verification, not through general public logins.

Basically it's a combination of device security, private ID (probably biometric), sacrosanct personal data protection, high user-controlled privacy, super low cost of entry for entrepreneurial service provision, squashing of local, national and international scales, and strong forcing of anti-monopolistic competition - the opposite of the current model, which seems to be about herding users into virtual pens owned by monopolists, applying various psychological patterns to control and trigger behaviour, monitoring behaviour and sentiment through minimal privacy, and having to deal with very leaky and insecure devices and systems.


A server in every home


s-expressions instead of docbook or xml.

(html (head ...) (body ...))


Call Ted Nelson


All porn on .xxx top level domain only.


Bitcoin !


Civility.


>"This was just an idle conversation we were having at work. Imagine that one day you wake up and you've been sent back in time, where you are now a researcher at DARPA in the early 1960s. You've got the influence to effect fundamental changes in the next sixty years of the Internet's history, and can make your changes any time in the next sixty years - but you know that as soon as you change one thing in history, you'll be sent back to 2021, to continue living in the world you have wrought. How are you going to make the Internet better?"

You leave it the hell alone! <g>

You leave it the hell alone -- because if you don't, upon returning to 2021, you'll discover that in addition to your wanted change -- there will be all kinds of unwanted "butterfly effects" in the world, resulting from that change, and not limited to the Internet, either! <g>

Like, propagating in and through actual reality -- not just constrained to a computer screen or virtual world!

Unwanted/unforeseen/unexpected (but mostly unwanted!) "butterfly effects" (imagine just how scary these could be if you were unprepared for them -- the scariest Stephen King novel wouldn't do them justice!) resulting from Chaos Theory (which programmers know to be actual fact -- make a small change early on in a program -- get vastly differing results later on, as the program moves through TIME...)

So if it one day happens that you magically appear (through time travel, or other plot element) at DARPA in the 1960's -- then you take a quick look, like Clark Griswold in "National Lampoon's Vacation", when he takes a brief look at the Grand Canyon (all of a few seconds!), and you appreciate all that DARPA and all of the other earlier Internet researchers did -- and you leave all of it the hell alone! <g>

Yup, sorry, nothing to see here, nothing to change here, no changes for me! Just passing by, not going to touch a single thing! <g>

You also appreciate the fact that while today's reality is a mess in many ways (and it is!) -- it could also (with Time Travel/Butterfly Effects/Chaos Theory) -- have been a much, much bigger mess(!) -- with Butterfly Effect horrors beyond your wildest understandings! <g>

https://en.wikipedia.org/wiki/The_Butterfly_Effect

https://en.wikipedia.org/wiki/Butterfly_effect

Disclaimer: The above was written for thought-provoking and possibly (depending on the reader's viewpoint!) comedy purposes only! <g> (Though it also works if read from a serious viewpoint...)


Encrypted SNI and No Paywalls


Cancel the project, Terminator 2 style


Easy, I'd shut down this website.


I want to be able to run a lan cable from the street and basicly join the worlds largest LAN. Free open access


I assassinate Osama Bin Laden and therefore prevent the patriot act and the erosion of personal freedoms in the pursuit of “national security”.


I remove most centralized control so that private companies have no power to censor, but governments could if they chose. I think private companies have vastly too much power over the internet, something that is unlikely to change.


Search for URL should be hardcoded into a new standard | Every PERSONAL IDENTITY/WEB URL should exist as its own resource cluster on the hosting server, either as a microservice on distributed Kubernetes, and provide APIs to query information | NEWS and ELECTIONS should be blockchain | Blood donation/organs should be on a global blockchain | A new data standard for open health analytics | Pollution, toxic levels of chemicals from known pollution control boards of every country, based on location, should be on blockchain.


Verifiable identity. We need to be able to verify the source of all data on the network, even if that's just proving the source IP address, or better yet the physical location it came from. People often argue that anonymity by default is a good thing, but I argue that trust of origin by default is better.

You can still strip identification for things like posting in public forums, but for everything else, knowing where shit came from is critical. From spam email to, well, everything. It baffles me that spoofing caller ID is not only possible but that some people think it's important.

Along these lines I have a pet idea that a subset of IPv6 be geo-located, meaning your latitude, longitude, and possibly altitude are encoded in the IP address. This allows routing without the huge tables in the routers. Combined with the ability to verify that a packet came from its advertised location this is very powerful for security.
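A back-of-the-envelope sketch of that pet idea: pack latitude and longitude into the low 64 bits of an IPv6 address (the 2001:db8::/32 prefix is the documentation range, and the packing scheme here is invented purely for illustration):

    import ipaddress

    def geo_ipv6(lat, lon, prefix=0x20010db8):
        # Scale degrees to 32-bit fixed point and pack into the low 64 bits.
        lat_fp = int((lat + 90.0) / 180.0 * 0xFFFFFFFF)
        lon_fp = int((lon + 180.0) / 360.0 * 0xFFFFFFFF)
        return ipaddress.IPv6Address((prefix << 96) | (lat_fp << 32) | lon_fp)

    # Roughly San Francisco:
    print(geo_ipv6(37.77, -122.42))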

One way to verify the origin of data (email for example) is not to send it, but to send something akin to a URL (preferably better than that) so we have to at least be able to request the data from somewhere rather than have it sent to us anonymously.

Unfortunately, being able to verify the source of data also enables end-to-end encryption fairly easily, and nobody in power wants the public to have anything like that...


Facebook verifies identities, yet it's still a cesspool of hate in some corners, so it might help, but it's no panacea.


It could help cut down on bots and spam. Think of a Twitter free from bots and propaganda from marketing agencies and governments.


I have gotten massively downvoted on various social media platforms for suggesting verifiable identity so I think there are some people that are just really resistant to this idea.

I don't think it would solve all problems, but it would be useful for cutting down on bots and spam for certain types of websites. Many people don't realize that a large chunk of content on sites like Twitter and Reddit (and probably HN) is propaganda from marketing agencies or hostile foreign governments. That argument you had with someone on Twitter - you could be trying to chat it up with a paid Chinese internet troll. How would you even know? There is a complete lack of good faith on the internet now, and it would be nice to think about how the internet would have been different with verifiable identity. Unfortunately, we also have a huge number of people/groups/companies that depend on the lack of verifiability, so I think this is why the idea has become so controversial.


Not necessarily as an enforced standard, but normalizing sharing your identity online earlier on might have changed some of the formational culture of the Internet. The Internet was also a dangerous place, and still is, so perhaps this is not the best idea


2-D syntax everywhere. It's a 2-dimensional binary. Suddenly the intermediate layers between binary and the higher level langs we work with day in and day out have the same form.

For example, 2d langs for HTML, CSS, JSON, others: https://jtree.treenotation.org/designer/

A 2D lang that replaces Markdown: http://scroll.pub/

You can have 2D langs for TCP/IP, DNS, HTTP, et cetera. A grid is all you need.

I figured the math out 8 years ago (https://medium.com/space-net), and we're slowly getting there. Still early days, but with a good annual growth rate. I'd be surprised if it doesn't happen. The math makes too much sense.


I don't follow. Can you elaborate? What is 2-D syntax?


1D syntax assumes a linear pass from beginning to end across the bytes of a program to build the AST. You have concepts like brackets and parens and quotation marks and colons.

2D languages have none of those. They imagine all programs as laid out on a grid. Think of a spreadsheet. For trees, you use indentation. There is not a single computer language in all the world that cannot have its semantics represented with just that 2D syntax. This realization has long fascinated me. It knocks me off my feet, the same way I feel about binary notation.

Here is a talk I gave in 2017 about 1D vs higher D languages: https://www.youtube.com/watch?v=ldVtDlbOUMA

Here is a video that makes the connection between programming languages and spreadsheets clearer: https://www.youtube.com/watch?v=0l2QWH-iV3k
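As a toy reading of the idea (my own sketch, not the actual Tree Notation implementation), parsing an indentation-only grid into a tree needs no brackets, quotes, or escape characters at all:

    def parse_tree(text, indent="  "):
        # Each non-empty line is a node; children are indented one level deeper.
        root = {"line": None, "children": []}
        stack = [(-1, root)]
        for raw in text.splitlines():
            if not raw.strip():
                continue
            depth = (len(raw) - len(raw.lstrip(" "))) // len(indent)
            node = {"line": raw.strip(), "children": []}
            while stack[-1][0] >= depth:
                stack.pop()
            stack[-1][1]["children"].append(node)
            stack.append((depth, node))
        return root

    doc = "html\n  head\n  body\n    p hello"
    tree = parse_tree(doc)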



No, but this is really interesting! Thank you so much for sharing!

I obviously have quite a passion for the idea that there's a universality to Tree Structures and some fundamental simple notation we are stumbling towards, so interesting to see others with that!

This page (https://www.tree-annotation.org/blog/no-escape.html) is close to one of the benefits, but it turns out you don't need any escape characters at all! Simple indentation lets you have blocks that cut and paste naturally with no escaping.


What is making Scrolldown more 2D than Markdown? Is it just the extendibility? The indents?

I don't think I fully understand your concept

I also noticed your example grammar often contains a lot of JS code. So it's not intended as a different language but as a data model with built-in JavaScript? Kinda like a more modern XSLT?


> What is making Scrolldown more 2d than markdown? Is it just the extendibility?

You hit the nail on the head. The extendibility is the key thing. In Scrolldown every little piece of content is in its own little scope. It's like every block is its own little file written in one of many different grammars. These all compose effortlessly. These are early days, but I expect there to be thousands of little "micro-grammars" for use by people using Scroll. For example, you might have a microgrammar for making flow charts, or making interactive observablehq/worrydream/jupyter-like documents, or quick sims, or blueprints, or audio content, or slide shows. It's sort of like web components done right.

> I also noticed your example grammar often contains a lot of js code.

The idea with Grammar is to keep refining the language and hoisting as many patterns into pure 2D/Tree languages as possible. But I have to use resources judiciously, and strike a balance between research and deliverables. Folks have started Tree Notation implementations in languages like Kotlin and Swift, and when/if there turns out to be a need to have something like a Kotlin Grammar interpreter, then at the time it would make sense to iterate on Grammar so the `javascript` blocks are instead a DSL. You can see that is a bit in progress with the `compiler` nodes; but have gone with the hacky `javascript` bits in many places just for pragmatic reasons.

> So it's not intended as different language but as data model with built in JavaScript? Kinda like a more modern XSLT?

Grammar started out as a POC but evolved into something pretty practical. Sort of an ANTLR for 2D languages. But sometimes I still just hard code the parser/compiler for a language from scratch. They are generally pretty simple.



