Engineering for Slow Internet (brr.fyi)
1008 points by jader201 48 days ago | 386 comments



A lot of this resonates. I'm not in Antarctica, I'm in Beijing, but still struggle with the internet. Being behind the Great Firewall means using creative approaches. VPNs only sometimes work, and each leaves a signature that the firewall's heuristics and ML can eventually catch onto. Even state-mandated ones are 'gently' limited at times of political sensitivity. It all ends up meaning that, even if I get a connection, it's not stable, and it's so painful to sink precious packets into pointless web-app-react-crap roundtrips.

I feel like some devs need to time-travel back to 2005 or something and develop for that era in order to learn how to build things nimbly. In the absence of time travel, if people could just learn to open dev tools and use the throttling feature: set it to 3G and see if their webapp is resilient. Please!


"I feel like some devs need to time-travel back to 2005 or something and develop for that era in order to learn how to build things nimbly."

No need to invent time travel, just let them have a working retreat somewhere with only a bad mobile connection for a few days.


Amen to this. And give them a mobile cell plan with 1GB of data per month.

I've seen some web sites with 250MB payloads on the home page due to ads and pre-loading videos.

I work with parolees who get free government cell phones and then burn through the 3GB/mo of data within three days. Then they can't apply for jobs, get bus times, rent a bike, top up their subway card, get directions.


"But all the cheap front-end talent is in thick client frameworks, telemetry indicates most revenue conversions are from users on 5G, our MVP works for 80% of our target user base, and all we need to do is make back our VC's investment plus enough to cash out on our IPO exit strategy, plus other reasons not to care" — self-identified serial entrepreneur, probably


Having an adblocker (Firefox mobile works with uBlock Origin) and completely deactivating the loading of images and videos can get you quite far on a limited connection.


You're 100% right. uBlock Origin can reduce page weight by an astronomical amount.


uMatrix (unsupported but still works) reduces page weight and compute even more


If you enable the advanced-user mode of uBlock Origin you get access to just about all the things uMatrix does.


Except for a usable UI.


How so?


It allows JavaScript on the original site domain, but turns it off for external domains, then lets you selectively turn it back on, and remembers what you've selected. Turning off JavaScript cuts out a lot of both additional downloading and CPU cycles. For impatient people it's tedious, but for minimalists it's heaven.


I strongly suspect that the Venn diagram of people who know how to minimize data usage and indigent people being given free cell phones with limited data has precious little overlap.


Yes, but "qingcharles" said he works with them, so he can show it to them.


And disabling JavaScript


Yeah and then give them thousands upon thousands of paying customers with these constraints worth caring about


This is probably a deliberate decision on the part of the government. A lot of the justice system is designed to keep people in prison.


That makes little sense. If it was a deliberate plan, it would be much more effective (and cheaper) to not provide the cellphones and data plan in the first place.


Exactly. It's easy to attribute malice where incompetence is the answer. The root problem is that there is no feedback loop: the government agency funding the program isn't properly looking at the product being delivered at the other end, or putting measures in place to stop the contractors from exploiting the program by delivering the barest minimum, and nobody is tracking the effects on the end customer.


You see this all the time: keeping up appearances, while the actual implementation is poor.


It's not deliberate by the government, and the situation is getting better. Basically, some of the telcos are being a bit greedy, giving users only 3GB/mo of data and spending all their profits on paying people in the hoods $20 a pop to sign people up.

Recently I've started to see some contracts with vastly more data, including some unlimiteds.


Just put them on a train during work hours! We have really good coverage here but there's congestion and frequent random dropouts, and a lot of apps just don't plan for that at all.


There's no need for a retreat. Chrome DevTools has a "simulate slow connection" button.


Yeah - and do they use it? Does it let them experience the joy of wanting just a little bit of text, but having to load loads of other stuff first while the connection times out? I'm afraid that to get the full experience, they actually need to have a bad connection.


My first education was in industrial design, where designers usually take pride in avoiding needless complexity and material waste.

Even when I build web services I try to do so with as few moving parts as needed and with heavy reliance on trusted and standardized solutions. If you see any addition of complexity as a cost that needs to be weighed against the benefits you hope it brings, you will automatically end up with a lean and fast application.


A little time with embedded hardware will teach you a few things too.


I lived in Shoreditch for 7 years and most of my flats had almost 3G internet speeds. The last one had windows that incidentally acted like a faraday cage.

I always test my projects with throttled bandwidth, largely because (just like with a11y) following good practices results in better UX for all users, not just those with poor connectivity.

Edit: Another often missed opportunity is building SPAs as offline-first.


>> Another often missed opportunity is building SPAs as offline-first.

You are going to get so many blank stares at many shops building web apps when suggesting things like this. This kind of consideration doesn't even enter into the minds of many developers in 2024. Few of the available resources in 2024 address it that well for developers coming up in the industry.

Back in the early-2000s, I recall these kinds of things being an active discussion point even with work placement students. Now that focus seems to have shifted to developer experience with less consideration on the user. Should developer experience ever weigh higher than user experience?


>Should developer experience ever weigh higher than user experience?

Developer experience is user experience. However, in a normative sense, I operate such that Developer suffering is preferable to user suffering to get any arbitrary task done.


The irony for me is that I got into React because I thought we could finally move to offline-first SPAs. Current trends seem to be going in the opposite direction.


SPAs and "engineering for slow internet" usually don't belong together. The giant bundles usually guarantee slow first paint, and the incremental rendering/loading usually guarantees a lot of network chatter that randomly breaks the page when one of the requests times out. Most web applications are fundamentally online. For these, consider what inspires more confidence when you're in a train on a hotspot: an old school HTML forms page (like HN), or a page with a lot of React grey placeholders and loading spinners scattered throughout? I guess my point is that while you can take a lot of careful time and work to make an SPA work offline-first, as a pattern it tends to encourage the bloat and flakiness that makes things bad on slow internet.


London internet (and English internet in general) is just so bad.

Having lived in lots of countries (mainly developing) it’s embarrassing how bad our internet is in comparison


Oh, London is notorious for having... questionable internet speeds in certain areas. It's good if you live in a new-build flat, work in a recently constructed office building, or own your own home in an area OpenReach has already gotten to, but if you live in an apartment building/work in an office building more than 5 or so years old?

Yeah, there's a decent chance you'll be stuck with crappy internet as a result. I still remember quite a few of my employers getting frustrated that fibre internet wasn't available for the building they were renting office space in, despite them running a tech company that really needed a good internet connection.


We design for slow internet; React is one of the better options for it, with SSR, code splitting and HTTP/2 push, mixed in with more offline-friendly clients like Tauri. You can also deploy very near people if you work “on the edge”.

I’m not necessarily disagreeing with your overall point, but modern JS is actually rather good at dealing with slow internet for server-client “applications”. It’s not necessarily easy to do, and there are almost no online resources that you can base your projects on if you’re a Google/GPT programmer. Part of this is because of the ocean of terrible JS resources online, but a big part of it is also that the organisations which work like this aren’t sharing. We have zero public resources for the way we work, as an example, because why would we hand that info to our competition?



Very sad. I use HTTP/2 push on my website to push the CSS if there’s no same-origin referrer. It saves a full roundtrip, which can be pretty significant on high-latency connections. The HTML+CSS is less than 14kB, so it can all be sent on the first roundtrip, as it’s generally within TCP’s initial congestion window of about 10*1400 bytes.

The only other alternative is to send the CSS inline, but that doesn’t work as well for caching for future page loads.

103 Early Hints is not nearly as useful, as it doesn’t actually save a round trip; it only works around long request processing time on the server. Also, most web frameworks will have a very hard time supporting early hints, because it doesn’t fit in the normal request->response cycle, so I doubt it’s going to get much adoption.

Also it would be nice to be able to somehow push 30x redirects to avoid more round trips.
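On early hints specifically, for the curious: here's roughly what emitting one looks like outside any framework, using Node's raw http module (recent Node versions expose this as response.writeEarlyHints; the stylesheet path here is made up). You can see why it sits awkwardly in a normal request->response cycle: the hint has to go out before the handler has produced anything.

  const http = require('http');

  http.createServer((req, res) => {
    // Emit the interim 103 immediately, before any real work, so the browser
    // can start fetching the CSS during server think-time.
    res.writeEarlyHints({
      link: '</assets/site.css>; rel=preload; as=style',  // hypothetical path
    });

    // ...slow template rendering, DB queries, etc. would happen here...

    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<html><head><link rel="stylesheet" href="/assets/site.css"></head>'
      + '<body>Hello</body></html>');
  }).listen(8080);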


By far the lightest weight JS framework isn't React, it's no javascript at all.

I regularly talk to developers who aren't even aware that this is an option.


If you're behind an overloaded geosynchronous satellite then no JS at all just moves the pain around. At least once it's loaded a JS-heavy app will respond to most mouse clicks and scrolls quickly. If there's no JS then every single click will go back to the server and reload the entire page, even if all that's needed is to open a small popup or reload a single word of text.


This makes perfect sense in theory and yet it's the opposite of my experience in practice. I don't know how, but SPA websites are pretty much always much more laggy than just plain HTML, even if there are a lot of page loads.


It often is that way, but it's not for technical reasons. They're just poorly written. A lot of apps are written by inexperienced teams under time pressure and that's what you're seeing. Such teams are unlikely to choose plain server-side rendering because it's not the trendy thing to do. But SPAs absolutely can be done well. For simple apps (HN is a good example) you won't get too much benefit, but for more highly interactive apps it's a much better experience than going via the server every time (setting filters on a shopping website would be a good example).


Yep. In SPAs with good architecture, you only need to load the page once, which is obviously weighed down by the libraries, but it is largely as heavy or light as you make it. Everything else should be super minimal API calls. It's especially useful in data-focused apps that require a lot of small interactions. Imagine implementing something like spreadsheet functionality using forms and requests and no JavaScript, as others are suggesting all sites should work: productivity would be terrible, not only because you'd need to reload the page for trivial actions that should trade a bit of JSON back and forth, but also because users would throw their devices out the window before they got any work done. You can also queue and batch changes in a situation like that, so the requests are not only comparatively tiny, you can use fewer of them (see the sketch below). That said, most sites definitely should not be SPAs. Use the right tool for the job.
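A rough sketch of that queue-and-batch idea (the endpoint and payload shape are invented, not from any particular app):

  // Collect small edits and flush them in one request instead of one request per change.
  const pending = [];
  let flushTimer = null;

  function queueChange(change) {
    pending.push(change);
    if (!flushTimer) {
      flushTimer = setTimeout(flushChanges, 2000);  // batch window
    }
  }

  async function flushChanges() {
    flushTimer = null;
    if (pending.length === 0) return;
    const batch = pending.splice(0, pending.length);
    try {
      await fetch('/api/changes/batch', {           // hypothetical endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(batch),
      });
    } catch (err) {
      pending.unshift(...batch);                    // put them back and retry later
      flushTimer = setTimeout(flushChanges, 10000);
    }
  }

  // e.g. queueChange({ cell: 'B2', value: 42 });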


> which is obviously weighed down by the libraries, but largely is as heavy or light as you make it

One thing which surprised me at a recent job was that even what I consider to be a large bundle size (2MB) didn't have much of an effect on page load time. I was going to look into bundle splitting (because that included things like a charting library that was only used in a small subsection of the app). But in the end I didn't bother, because page loads were fast (~600ms) without it.

What did make a huge difference was cutting down the number of HTTP requests that the app made on load (and making sure that they weren't serialised). Our app was originally doing auth by communicating with Firebase Auth directly from the client, and that was terrible for performance because that request was quite slow (most of a second!) and blocked everything else. I created an all-in-one auth endpoint that would check the user's auth and send back initial user and app configuration data in one ~50ms request, and suddenly the app was fast.
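The shape of that change, roughly (the endpoint and helper names here are illustrative, not the actual code):

  // Before: serialised requests, each paying a full round trip.
  async function initSlow() {
    const user = await checkAuth();                 // slow third-party auth call (illustrative helper)
    const profile = await fetchProfile(user.id);    // blocked behind auth
    const config = await fetchAppConfig(user.id);   // blocked again
    render(profile, config);
  }

  // After: one endpoint returns everything needed for first render.
  async function initFast() {
    const res = await fetch('/api/bootstrap', { credentials: 'include' });  // hypothetical endpoint
    const { profile, config } = await res.json();
    render(profile, config);
  }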


In many cases, like satellite Internet access or spotty mobile service, for sure. But if you have low bandwidth and fast response times, that 2MB is murder and the big pile o' requests is NBD. If you have slow response times but good throughput, the 2MB is NBD but the requests are murder.

An extreme and outdated example, but back when cable modems first became available, online FPS players were astonished to see how much better the ping times were for many dial up players. If you were downloading a floppy disk of information, the cable modem user would obviously blow them away, but their round trip time sucked!

Like if you're on a totally reliable but low throughput LTE connection, the requests are NBD but the download is terrible. If you're on spotty 5g service, it's probably the opposite. If you're on, like, a heavily deprioritized MVNO with a slower device, they both super suck.

It's not like optimization is free though, which is why it's important to have a solid UX research phase to get data on who is going to use it, and what their use case is.


My experience agrees with this comment – I’m not sure why web browsers seem to frequently get hung up on only some HTTP requests at times, unrelated to the actual network conditions. I.e., in the browser the HTTP request is timing out or in a blocked state and hasn’t even reached the network layer when this occurs. (Not sure if I should be pointing the finger here at the browser or the underlying OS.) When testing slow/stalled loading issues, the browser itself is frequently one of the culprits. However, the issue I am referring to further reinforces the article and the sentiments in this HN thread: cut down on the number of requests and the bloat, and this issue too can be avoided.


If the request itself hasn't reached the network layer but is having a network-y-feeling hang, I'd look into DNS. It's network dependent but handled by the system, so it wouldn't show up in your web app requests. I'm sure there's a way to profile this directly, but unless I had to do it all the time I'd probably just fire up Wireshark.


Chrome has a built-in, hard-coded limit of six (6) concurrent requests per host (for HTTP/1.1). Once you have that many in flight, any subsequent requests will be kept in queue.

Now take a good, hard look at the number of individual resources your application's page includes. Every tracker, analytics crapware, etc. gets in that queue. So do all the requests they generate. And the software you wrote is even slower to load because marketing insisted that they must have their packages loading at the top of the page.

Welcome to hell.
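If you want to see that queueing on your own pages, the Resource Timing API exposes it; the gap before requestStart (the "Stalled" bar in DevTools) is largely time spent waiting for a free slot. A quick console sketch (the 100ms threshold is arbitrary):

  // Flag resources that spent a long time waiting before the request went on the wire.
  // Note: the wait also includes DNS/connect time, and requestStart is 0 for cross-origin
  // resources that don't send Timing-Allow-Origin.
  performance.getEntriesByType('resource')
    .filter(r => r.requestStart > 0)
    .map(r => ({ url: r.name, waitMs: Math.round(r.requestStart - r.startTime) }))
    .filter(r => r.waitMs > 100)
    .sort((a, b) => b.waitMs - a.waitMs)
    .forEach(r => console.log(r.waitMs + 'ms waiting:', r.url));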


Can you point me to a decently complex front end app, written by a small team, that is well written? I’ve seen one, Linear, but I’m interested to see more


An SPA I use not infrequently is the online catalog on https://segor.de (it's a small store for electronics components). When you open it, it downloads the entire catalog, some tens of MB of I-guess-it-predates-JSON, and then all navigation and filtering is local and very fast.


Having written a fair number of SPAs and similar, I can confirm that it is actually possible to just write some JavaScript that does fairly complicated jobs without the whole thing ballooning into MB territory. I should say that I could write a fairly feature-rich chat app in, say, 500 kB of JS; minified and compressed, it would be more like 50 kB on the wire.

How my "colleagues" manage to get to 20 MB is a bit of mystery.


> How my "colleagues" manage to get to 20 MB is a bit of mystery.

More often than not (and wittingly or not) it is effectively by using javascript to build a browser-inside-the-browser, Russian doll style, for the purposes of tracking users' behavior and undermining privacy.

Modern "javascript frameworks" do this all by default with just a few clicks.


There's quite some space between "100% no JS" and "full SPA"; many applications are mostly backend template-driven, but use JS async loads for some things where it makes sense. The vote buttons on Hacker News are a good example.

I agree a lot of full SPAs are poorly done, but some do work well. FastMail is an example of a SPA done well.

The reason many SPAs are slower is just latency; traditional template-driven is:

- You send request

- Backend takes time to process that

- Sends all the data in one go.

- Browser renders it.

But full SPA is:

- You send request

- You get a stub template which loads JS.

- You load the JS.

- You parse the JS.

- You send some number of requests to get some JSON data, this can be anything from 1 to 10 depending on how it was written. Sometimes it's even serial (e.g. request 1 needs to complete, then uses some part of that to send request 2, and then that needs to finish).

- Your JS parses that and converts it to HTML.

- It injects that in your DOM.

- Browser renders it.

There are ways to make that faster, but many don't.
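For something like the HN vote buttons, the middle ground is a plain form that works with no JS at all, plus a few lines that intercept it when JS is available. A sketch, assuming the page already contains ordinary <form class="vote" action="/vote" method="post"> elements (markup and URL invented):

  // Progressive enhancement: the forms work without JS; this just skips the full page reload.
  document.querySelectorAll('form.vote').forEach((form) => {
    form.addEventListener('submit', async (e) => {
      e.preventDefault();
      const button = form.querySelector('button');
      button.disabled = true;                        // immediate feedback
      try {
        const res = await fetch(form.action, { method: 'POST', body: new FormData(form) });
        if (!res.ok) throw new Error('HTTP ' + res.status);
        button.textContent = 'voted';
      } catch (err) {
        button.disabled = false;                     // recoverable: user can click again
        alert('Vote failed, please try again.');
      }
    });
  });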


However, getting 6.4KB of data (just tested on my blog) or 60KB of data (a git.sr.ht repository with a README.md and a PNG) is way better than getting 20MB of frameworks in the first place.


Yes. It's inexcusable that text and images and video pulls in megabytes of dependencies from dozens of domains. It's wasteful on every front: network, battery, and it's also SLOW.


The crap part is that even themes for static site generators like mkdocs link resources from Cloudflare rather than including them in the theme.

For typedload I've had to use wget+sed to get rid of that crap after recompiling the website.

https://codeberg.org/ltworf/typedload/src/branch/master/Make...


Yeah, but your blog is not a full featured chat system with integrated audio and video calling, strapped on top of a document format.

There are a few architectural/policy problems in web browsers that cause this kind of expansion:

1. Browsers can update large binaries asynchronously (= instant from the user's perspective), but this feature has only very recently become available to web apps via obscure caching headers, and most people don't know it exists yet / frameworks don't use it. (See the sketch at the end of this comment for one way to approximate it.)

2. Large download sizes tend to come from frameworks that are featureful and thus widely used. Browsers could allow them to be cached but don't because they're over-aggressive at shutting down theoretical privacy problems, i.e. the browser is afraid that if one site learns you used another site that uses React, that's a privacy leak. A reasonable solution would be to let HTTP responses opt in to being put in the global cache rather than a partitioned cache, that way sites could share frameworks and they'd stay hot in the cache and not have to be downloaded. But browsers compete to satisfy a very noisy minority of people obsessed with "privacy" in the abstract, and don't want to do anything that could kick up a fuss. So every site gets a partitioned cache and things are slow.

3. Browsers often ignore trends in web development. React style vdom diffing could be offered by browsers themselves, where it'd be faster and shipped with browser updates, but it isn't so lots of websites ship it themselves over and over. I think the SCIter embedded browser actually does do this. CSS is a very inefficient way to represent styling logic which is why web devs write dialects like sass that are more compact, but browsers don't adopt it.

I think at some pretty foundational level the way this stuff works architecturally is wrong. The web needs a much more modular approach and most JS libraries should be handled more like libraries are in desktop apps. The browser is basically an OS already anyway.
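Re point 1: the closest thing available today is probably a stale-while-revalidate service worker, which serves the cached bundle instantly and refreshes it in the background (whether that's exactly the mechanism meant above, I'm not sure). Minimal sketch; the cache name and the /assets/ path filter are my own choices:

  // sw.js -- serve cached app bundles immediately, refresh them in the background.
  const CACHE = 'app-shell-v1';                      // arbitrary cache name

  self.addEventListener('fetch', (event) => {
    const url = new URL(event.request.url);
    if (!url.pathname.startsWith('/assets/')) return;  // only handle static bundles (assumed path)

    const refresh = caches.open(CACHE).then(async (cache) => {
      try {
        const res = await fetch(event.request);
        if (res.ok) await cache.put(event.request, res.clone());
        return res;
      } catch (err) {
        return undefined;                            // offline; nothing to refresh with
      }
    });

    event.respondWith((async () => {
      const cached = await caches.match(event.request);
      return cached || (await refresh) || new Response('offline', { status: 503 });
    })());
    event.waitUntil(refresh);                        // let the background refresh complete
  });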


> CSS is a very inefficient way to represent styling logic which is why web devs write dialects like sass that are more compact, but browsers don't adopt it.

I don't know exactly which features you are referring to, but you may have noticed that CSS has adopted native nesting, very similarly to Sass, but few sites actually use it. Functions and mixins are similar compactness/convenience topics being worked on by the CSSWG.

(Disclosure: I work on style in a browser team)


I hadn't noticed and I guess this is part of the problem. Sorry this post turned into a bit of a rant but I wrote it now.

When it was decided that HTML shouldn't be versioned anymore it became impossible for anyone who isn't a full time and very conscientious web dev to keep up. Versions are a signal, they say "pay attention please, here is a nice blog post telling you the most important things you need to know". If once a year there was a new version of HTML I could take the time to spend thirty minutes reading what's new and feel like I'm at least aware of what I should learn next. But I'm not a full time web dev, the web platform changes constantly, sometimes changes appear and then get rolled back, and everyone has long since plastered over the core with transpilers and other layers anyway. Additionally there doesn't seem to be any concept of deprecating stuff, so it all just piles up like a mound of high school homework that never shrinks.

It's one of the reasons I've come to really dislike CSS and HTML in general (no offense to your work, it's not the browser implementations that are painful). Every time I try to work out how to get a particular effect it turns out that there's now five different alternatives, and because HTML isn't versioned and web pages / search results aren't strongly dated, it can be tough to even figure out what the modern way to do it is at all. Dev tools just make you even more confused because you start typing what you think you remember and now discover there are a dozen properties with very similar names, none of which seem to have any effect. Mistakes don't yield errors, it just silently does either nothing or the wrong thing. Everything turns into trial-and-error, plus fixing mobile always seems to break desktop or vice-versa for reasons that are hard to understand.

Oh and then there's magic like Tailwind. Gah.

I've been writing HTML since before CSS existed, but feel like CSS has become basically non-discoverable by this point. It's understandable why neither Jetpack Compose nor SwiftUI decided to adopt it, even whilst being heavily inspired by React. The CSS dialect in JavaFX I find much easier to understand than web CSS, partly because it's smaller and partly because it doesn't try to handle layout. The way it interacts with components is also more logical.


You may be interested in the Baseline initiative, then. (https://web.dev/baseline/2024)


That does look useful, thanks!


> Oh and then there's magic like Tailwind. Gah.

I'm not sure why Tailwind is magic. It's just a bunch of predefined classes at its core.


The privacy issues aren't just hypothetical, but that aside, that caching model unfortunately doesn't mesh well with modern webdev. It requires all dependencies to be shipped in full (no tree shaking to include only the needed functions), and separately as individual files... and for people to actually stick to the same versions of dependencies.


Can you show some real sites that were mounting such attacks using libraries?


Also wonder how many savings are still possible with a more efficient HTML/CSS/JS binary representation. Text is low tech and all but it still hurts to waste so many octets for such a relatively low amount of possible symbols.

Applies to all formal languages actually. 2^(8x20x10^6) ~= 2x10^48164799 is such a ridiculously large space...


The generalisation of this concept is what I like to call the "kilobyte" rule.

A typical web page of text on a screen is about a kilobyte. Sure, you can pack more in with fine print, and obviously additional data is required to represent the styling, but the actual text is about 1 kb.

If you've sent 20 MB, then that is 20,000x more data than what was displayed on the screen.

Worse still, an uncompressed 4K still image is only 23.7 megabytes. At some point you might be better off doing "server side rendering" with a GPU instead of sending more JavaScript!


> "server side rendering" with a GPU instead of sending more JavaScript

Some 7~10 years ago I remember I saw somewhere (maybe here on HN) a website which did exactly this: you gave it a URL, it downloaded the webpage with all its resources, rendered and screenshotted it (probably in headless Chrome or something), and compared the size of the PNG screenshot versus the size of the webpage with all its resources.

For many popular websites, the PNG screenshot of a page was indeed several times smaller than the webpage itself!


I read epubs, and they're mostly HTML and CSS files, zipped. The whole book usually comes in under a MB if there aren't a lot of big pictures. Then you come across a website, and for just an article you have to download tens of MBs. Disable JavaScript and the website is broken.


If your server renders the image as text we'll be right back down towards a kilobyte again. See https://www.brow.sh/


Soo.. there should be a standardized web API for page content. And suddenly... gopher (with embedded media/widgets).


Shouldn’t HTTP compression reap most of the benefits of this approach for bigger pages?



Surely you're aware of gzip encoding on the wire for http right?


Sure, would be interesting to know how it would fare against purpose-made compression under real world conditions still...


gzip is fast. And it was made for real world conditions.


Brotli is a better example, as it was literally purpose made for HTML/CSS/JS. It is now supported basically everywhere for HTTP compression, and uses a huge custom dictionary (about 120KB) that was trained on simulated web traffic.

You can even swap in your own shared dictionary with a HTTP header, but trying to make your own dictionary specific to your content is a fool’s errand, you’ll never amortize the cost in total bits.

What you CAN do with shared dictionaries though, is delta updates.

https://developer.chrome.com/blog/shared-dictionary-compress... https://news.ycombinator.com/item?id=39615198


Is it supported by curl or any browser or no?


Every browser, curl yes.


False dichotomy, with what is likely extreme hyperbole on the JS side. Are there actual sites that ship 20 MB, or even 5 MB or more, of frameworks? One can fit a lot of useful functionality in 100 KB or less of JS, especially minified and gzipped.


Well, I'm working right now so let me check our daily "productivity" sites (with an adblocker installed):

  - Google Mail: Inbox is ~18MB (~6MB Compressed). Of that, 2.5MB is CSS (!) and the rest is mostly JS
  - Google Calendar: 30% lower, but more or less the same proportions
  - Confluence: Home is ~32MB (~5MB Comp.). There's easily 20MB of Javascript and at least 5MB of JSON. 
  - Jira: Home is ~35MB (~7MB compressed). I see more than 25MB of Javascript
  - Google Cloud Console: 30MB (~7MB Comp.). I see at least 16MB of JS
  - AWS Console: 18MB (~4MB Comp.). I think it's at least 12MB of JS
  - New Relic: 14MB (~3MB Comp.). 11MB of JS.
    This is funny because even being way more data heavy than the rest, its weight is way lower.
  - Wiz: 23MB (~6MB Comp.) 10MB of JS and 10MB of CSS
  - Slack: 60MB (~13MB Compressed). Of that, 48MB of JS. No joke.


I sometimes wish I could spare the time just to tear into something like that Slack number and figure out what it is all doing in there.

Javascript should even generally be fairly efficient in terms of bytes/capability. Run a basic minimizer on it and compress it and you should be looking at something approaching optimal for what is being done. For instance, a variable reference can amortize down to less than one byte, unlike compiled code where it ends up 8 bytes (64 bits) at the drop of a hat. Imagine how much assembler "a.b=c.d(e)" can compile into, in what is likely represented in less compressed space than a single 64-bit integer in a compiled language.

Yet it still seems like we need 3 megabytes of minified, compressed Javascript on the modern web just to clear our throats. It's kind of bizarre, really.


JS developers have had this idea of "1 function = 1 library" for a really long time, along with "NEVER REIMPLEMENT ANYTHING". So they will go and import a library instead of writing a 5-line function, because that's somehow more maintainable in their minds.

Then of course every library is allowed to pin its own dependencies. So you can have 15 different versions of the same thing, so they can change API at will.

I poked around some electron applications.

I've found .h files from openssl, executables for other operating systems, megabytes of large image files that were for some example webpage, in the documentation of one project. They literally have no idea what's in there at all.


That's a good question. I just launched Slack and took a look. Basically: it's doing everything. There's no specialization whatsoever. It's like a desktop app you redownload on every "boot".

You talk about minification. The JS isn't minified much. Variable names are single letter, but property names and more aren't renamed, formatting isn't removed. I guess the minifier can't touch property names because it doesn't know what might get turned into JSON or not.

There's plenty of logging and span tracing strings as well. Lots of code like this:

            _n.meta = {
                name: "createThunk",
                key: "createThunkaddEphemeralMessageSideEffectHandler",
                description: "addEphemeralMessageSideEffect side effect handler"
            };
The JS is completely generic. In many places there are if statements that branch on all languages Slack was translated into. I see checks in there for whether localStorage exists, even though the browser told the server what version it is when the page was loaded. There are many checks and branches for experiments, whether the company is in trial mode, whether the code is executing in Electron, whether this is GovSlack. These combinations could have been compiled server side to a more minimal set of modules but perhaps it's too hard to do that with their JS setup.

Everything appears compiled using a coroutines framework, which adds some bloat. Not sure why they aren't using native async/await but maybe it's related to not being specialized based on execution environment.

Shooting from the hip, the learnings I'd take from this are:

1. There's a ton of low hanging fruit. A language toolchain that was more static and had more insight into what was being done where could minify much more aggressively.

2. Frameworks that could compile and optimize with way more server-side constants would strip away a lot of stuff.

3. Encoding logs/span labels as message numbers+interpolated strings would help a lot. Of course the code has to be debuggable but hopefully, not on every single user's computer.

4. Demand loading of features could surely be more aggressive.

But Slack is very popular and successful without all that, so they're probably right not to over-focus on this stuff. Especially for corporate users on corporate networks does anyone really care? Their competition is Teams after all.


This is mind blowing to me. I expect that the majority of any application will be the assets and content. And megabytes of CSS is something I can't imagine. Not the least for what it implies about the DOM structure of the site. Just, what!? Wow.


Holy crap, that's too much. And this is with an adblocker, i.e. the best-case scenario.


I just tried some websites:

    - https://web.whatsapp.com 11.12MB compressed / 26.17MB real.
    - https://www.arstechnica.com 8.82MB compressed / 16.92MB real.
    - https://www.reddit.com 2.33MB compressed / 5.22 MB real.
    - https://www.trello.com (logged in) 2.50MB compressed / 10.40MB real.
    - https://www.notion.so (logged out) 5.20MB compressed / 11.65MB real.
    - https://www.notion.so (logged in) 19.21MB compressed / 34.97MB real.


Well, in TFA, if you re-read the section labeled "Detailed, Real-world Example" yes, that is exactly what the person was encountering. So no hyperbole at all actually.


I agree with adding very little JavaScript, say the 1kB https://instant.page/, to make it snappier.


I'm getting almost 2MB (5MB uncompressed) just for a google search.


Getting at least “n”kb of html with content in it that you can look at in the interim is better than getting the same amount of framework code.

SPA’s also have a terrible habit of not behaving well after being left alone for a while. Nothing like coming back to a blank page and having it try to redownload the world to show you 3kb of text, because we stopped running the VM a week ago.


Here’s something JS apps get wrong: in JS, the first link you click is what navigates you. In the browser, clicking a second link cancels the first one and navigates to the second one.

GitHub annoys the fuck out of me with this.


Yeah, right. GitHub migrated from serving static pages to displaying everything dynamically, and it’s basically unusable nowadays. Unbelievably long load times, frustratingly unresponsive, and that’s on my top-spec M1 MacBook Pro connected to a router with a fiber connection.

Let’s not kid ourselves: no matter how many fancy features, how much splitting or optimizing, whatever you do, JS webapps may be an upgrade for developers, but they’re a huge downgrade for users in all respects.


Every time I click a link on GitHub and watch their _stupid_ SPA “my internal loading bar is better than yours” animation, I despair.

It’s never faster than simply reloading the page. I don’t know what they were thinking, but they shouldn’t have.


I have an instance of Forgejo and it’s so snappy. Granted, I’m the only user, but the server has only 2GB of RAM and 2 vcores, with other services present.

On the other hand, GitLab doesn’t work with JS disabled.


Between massacring the UX and copilot I've more or less stopped engaging with github. I got tempted the other day to comment on an issue and it turns out the brain trust over at Microsoft broke threaded comment replies. They still haven't fixed keyboard navigation in their bullshit text widget.

I could put up with the glacial performance if it actually worked in the first place, but apparently adding whiz bang "AI" features is the only thing that matters these days.

The whole thing smacks of a rewrite so someone could get a bonus and/or promotion.


Surprisingly my experience of GitLab is even worse! How's yours? BitBucket wasn't much better from memory either. Seems like most commercial offerings in this space suck.


I've been using Sourcehut. I respect Drew's commitment to open source, but I think that a lot of the UX misses the mark. For most things I really don't want an email based work flow and some pieces feel a bit disjointed. Overall though it has most of the features I want, and dramatically less bullshit than Github.


Loading an entire page with cached pictures is more or less instant, connection wise, though.


In my experience page weight isn't usually the biggest issue. On unreliable connections you'll often get decent bandwidth when you can get through. It's applications that expect to be able to make multiple HTTP requests sequentially, and that don't deal well with some succeeding and some failing (or just network failures in general), that are the most problematic.

If I can retry a failed network request, that's fine. If I have to restart the entire flow when I get a failure, that's unusable.
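Retrying a single request is cheap to add at the fetch layer; a sketch of the kind of thing I mean (retry count and backoff numbers picked arbitrarily):

  // Retry one request with exponential backoff instead of failing the whole flow.
  async function fetchWithRetry(url, options = {}, attempts = 4) {
    for (let i = 0; i < attempts; i++) {
      try {
        const res = await fetch(url, options);
        if (res.ok || res.status < 500) return res;  // only retry server errors
      } catch (err) {
        // network error: fall through and retry
      }
      if (i < attempts - 1) {
        await new Promise((r) => setTimeout(r, 1000 * 2 ** i));  // 1s, 2s, 4s, ...
      }
    }
    throw new Error('Request failed after ' + attempts + ' attempts: ' + url);
  }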


When used well, JS will improve the experience especially for high-latency low bandwidth users. Not doing full page refreshes for example, or not loading all data at once.

So no, "no JS at all" is not "by far the lightest weight" in many cases. This is just uncritically repeating dogma. Even 5K to 20K of JS can significantly increase performance.


I always ask people to give examples of real-world SPAs where JS is "used well", and nobody has been able to give me one.


FastMail is pretty good; that's my go-to example.

However, you don't need to go full SPA. "No JS at all" and "SPA" are not the only options that exist. See my other comment: https://news.ycombinator.com/item?id=40541555

Sites like Hacker News, Stack Overflow, old.reddit.com, and many more greatly benefit from JS. I made GoatCounter tons faster with JS as well: rendering 8 charts on the server can be slow. It uses a "hybrid approach" where it renders only the first one on the server, sends the HTML, and then sends the rest later over a websocket. That gives the best of both: fast initial load without too much waiting, and most of the time you don't even notice the rest loads later.


Very true, Javascript was never meant to be mandatory for web pages.

Two of the lighter options right now though seem to be things like alpinejs, htmx, etc. Basic building blocks where / if needed.


No JS can actually increase roundtrips in some cases, and that's a problem if you're latency-bound and not necessarily speed-bound.

Imagine a Reddit or HN style UI with upvote and downvote buttons on each comment. If you have no JS, you have to reload the page every time one of the buttons is clicked. This takes a lot of time and a lot of packets.

If you have an offline-first SPA, you can queue the upvotes up and send them to the server when possible, with no impact on the UI. If you do this well, you can even make them survive prolonged internet dropouts (think being on a subway). Just save all incomplete voting actions to local storage, and then try re-submitting them when you get internet access.
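A sketch of that queue (the storage key, endpoint, and UI helper are all invented):

  const KEY = 'pendingVotes';                        // arbitrary localStorage key

  function queueVote(vote) {
    const pending = JSON.parse(localStorage.getItem(KEY) || '[]');
    pending.push(vote);
    localStorage.setItem(KEY, JSON.stringify(pending));
    updateUiOptimistically(vote);                    // hypothetical UI helper
    flushVotes();                                    // try right away; harmless if offline
  }

  async function flushVotes() {
    const pending = JSON.parse(localStorage.getItem(KEY) || '[]');
    if (pending.length === 0) return;
    try {
      await fetch('/api/votes', {                    // hypothetical endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(pending),
      });
      localStorage.setItem(KEY, '[]');               // clear only once the server has them
    } catch (err) {
      // still offline; the 'online' handler below will retry
    }
  }

  window.addEventListener('online', flushVotes);     // e.g. when the subway surfaces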


It's not always the application itself per se. It's the various/numerous marketing, analytics or (sometimes) ad-serving scripts. These third-party vendors aren't often performance-minded. They could be. They should be.


And the insistence on pushing everything into JS instead of just serving the content. So you’ve got to wait for the skeleton to download, then the JS, which’ll take its sweet time, just to then (usually blindly) make half a dozen _more_ requests back out, to grab JSON, which it’ll then convert into HTML and eventually show you. Eventually.


Yup. There's definitely too much unnecessary complexity in tech and too much over-design in presentation. Applications, I understand. Interactions and experience can get complicated and nuanced. But serving plain ol' content? To a small screen? Why has that been made into rocket science?


Well grabbing json isn't that bad.

I made a CLI for ultimateguitar (https://packages.debian.org/sid/ultimateultimateguitar) that works by grabbing the json :D


It's not so much good vs not-so-bad vs bad. It's more necessary vs unnecessary. There's also "just because you can, doesn't mean you should."


I live in a well-connected city, but my work only pays for virtual machines on another continent, so most of my projects end up "fast" but latency-bound. It's been an interesting exercise in minimizing pointless roundtrips in a technology that expects you to use them for everything.


Tried multiple VPNs in China and finally rolled my own obfuscation layer for Wireshark. A quick search revealed there are multiple similar projects on GitHub, but I guess the problem is once they get some visibility, they don't work that well anymore. I'm still getting between 1 and 10mbit/s (mostly depending on time of day) and pretty much no connectivity issues.


Wireguard?


Haha yes, thanks. I used Wireshark extensively the past days to debug a weird http/2 issue so I guess that messed me up a bit ;)


I do that too when looking stuff up.


Tbh, developers just need to test their site with existing tools or just try leaving the office. My cellular data reception in Germany in a major city sucks in a lot of spots. I experience sites not loading or breaking every single day.


developers shouldn't be given those ultra performant machines. They can have a performant build server :D


>A lot of this resonates. I'm not in Antarctica, I'm in Beijing, but still struggle with the internet.

Not even that, with outer space travel, we all need to build for very slow internet and long latency. Devs do need to time-travel back to 2005.


I'm sure this is not what you meant but made me lol anyways: sv techbros would sooner plan for "outer space internet" than give a shit about the billions of people with bad internet and/or a phone older than 5 years.


Sounds like what would benefit you is an HTMX approach to the web.


What about plain HTML & CSS for all the websites where this approach is sufficient? Then apply HTMX or any other approach for the few websites that are and need to be dynamic.


That is exactly what htmx is and does. Everything is rendered server side, and the sections of the page that need to be dynamic and respond to clicks to fetch more data get some added attributes.


I see two differences: (1) the software stack on the server side and (2) I guess there is JS to be sent to the client side for HTMX support(?). Both those things make a difference.


HTMX is about 10kB compressed and very rarely changes, which means it can stay in your cache for a very long time.


I'm embedded so I don't know much about web stuff, but sometimes I create dashboards to monitor services just for our team, so thanks for introducing me to htmx. I do think HTML+CSS should be used for anything that is a document or static for longer than a typical view lasts. Arxiv is leaning towards HTML+CSS vs LaTeX in acknowledgement that paper is no longer how "papers" are read. And on the other end, eBay works really well with no JS right up until you get to an item's page, where it breaks. If eBay can work without JS, almost anything that isn't monitoring and visualizing constant data (the last few minutes of a bid, or telemetry from an embedded sensor) can work without JS. I don't understand how amazon.com has gotten so slow and clunky, for instance.

I have been using wasm and webgpu for visualization, partly to offload any burden from the embedded device being monitored, but that could always be a third machine. Htmx says it supports websockets; is there a good way to have it eat a stream and plot data as telemetry, or is it time for a new tool?


You would have to replace the whole graph every time. That probably works if it updates once per minute, but for anything more frequent it might be time to look at some small JS plot library to update the graph.
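If you do reach for a bit of JS, the websocket side is tiny; drawChart below stands in for whatever plot library you pick, and the URL and message format are assumed:

  // Keep a rolling window of samples from a telemetry websocket and redraw on each message.
  const MAX_POINTS = 600;                            // e.g. 10 minutes at 1 Hz
  const points = [];
  const ws = new WebSocket('wss://example.org/telemetry');   // placeholder URL

  ws.onmessage = (event) => {
    const sample = JSON.parse(event.data);           // assumed shape: { t: 1717..., value: 42.1 }
    points.push(sample);
    if (points.length > MAX_POINTS) points.shift();
    drawChart(points);                               // stand-in for your plot library call
  };

  ws.onclose = () => setTimeout(() => location.reload(), 5000);   // crude reconnect for the sketch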


It sounds like GP would benefit from satellite internet bypassing the firewall, but I don't know how hard the Chinese government works to crack down on that loophole.


I hear you on frontend-only react. But hopefully the newer React Server Components are helping? They just send HTML over the wire (right?)


The problem isn't in what is being sent over the wire - it's in the request lifecycle.

When it comes to static HTML, the browser will just slowly grind along, showing the user what it is doing. It'll incrementally render the response as it comes in. Can't download CSS or images? No big deal, you can still read text. Timeouts? Not a thing.

Even if your Javascript framework is rendering HTML chunks on the server, it's still essentially hijacking the entire request. You'll have some button in your app, which fires off a request when clicked. But it's now up to the individual developer to properly implement things like progress bars/spinners, timeouts, retries, and all the rest the browser normally handles for you.

They never get this right. Often you're stuck with an app which will give absolutely zero feedback on user action, only updating the UI when the response has been received. Request failed? Sorry, gotta F5 that app because you're now stuck in an invalid state!
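Just to illustrate how much the browser used to do for free, here's the minimum a click handler has to hand-roll per request (the UI helpers here are hypothetical):

  // Progress indication, a timeout, and a recoverable error state: all things the browser
  // gives you on a normal navigation, now reimplemented per request.
  async function handleClick(url, options = {}) {
    showSpinner();                                   // hypothetical UI helpers throughout
    const controller = new AbortController();
    const timeout = setTimeout(() => controller.abort(), 15000);   // 15s cap
    try {
      const res = await fetch(url, { ...options, signal: controller.signal });
      if (!res.ok) throw new Error('HTTP ' + res.status);
      applyUpdate(await res.json());
    } catch (err) {
      showRetryBanner(err);                          // leave the UI in a state the user can recover from
    } finally {
      clearTimeout(timeout);
      hideSpinner();
    }
  }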


Yep. I’m a JS dev who gets offended when people complain about JS-sites being slower because there’s zero technical reason why interactions should be slower. I honestly suspect a large part of it is that people don’t expect clicking a button to take 300ms and so they feel like the website must be poorly programmed. Whereas if they click a link and it takes 300ms to load a new version of the page they have no ill-will towards the developer because they’re used to 300ms page loads. Both interactions take 300ms but one uses the browser’s native loading UI and the other uses the webpage’s custom loading UI, making the webpage feel slow.

This isn’t to exonerate SPAs, but I don’t think it helps to talk about it as a “JavaScript” problem because it’s really a user experience problem.


Yes, server-rendering definitely helps, though I have suspicions about its compiled outputs still being very heavy. There's also a lot of CSS frameworks that have an inline-first paradigm meaning there's no saving for the browser in downloading a single stylesheet. But I'm not sure about that.


Yes, though server side rendering is anything but a new thing in the React world. NextJS, Remix, Astro and many other frameworks and approaches exist (and have done so for at least five years) to make sure pages are small and efficient to load.


The amount of complexity to generate HTML/JS is a little staggering sometimes for the majority of simple use cases.

Using Facebook level architectures for actually pretty basic needs can be like hitting an ant-sized problem with a sledgehammer and wondering why the sledgehammer is so heavy and awkward to swing for little things.


Devs only build for the requirements they are given.

You want performance? Then include it in the requirements and give it the necessary time budget in a project.


Eh, I'm a few miles from NYC and have the misfortune of being a comcast/xfinity customer and my packetloss to my webserver is sometimes so bad it takes a full minute to load pages.

I take that time to clean a little, make a coffee, you know sometimes you gotta take a break and breathe. Life has gotten too fast and too busy and we all need a few reminders to slow down and enjoy the view. Thanks xfinity!


> It all ends up meaning that, even if I get a connection, it's not stable

I feel you, but the experience can actually be very good if you invest a bit of time looking at Shadowsocks/V2Ray and building your own infra.


Chrome dev tools offer a "slow 3G" and a "fast 3G". Slow 3G?

With a fresh cache on "slow 3G", my site _works_, but has 5-8 second page loads. Would you consider that usable/sufficient, or pretty awful?


It depends on whether you are ok with 1/3 to 2/3rds of your visitors bouncing due to loading times, and losing 3 to 5x on conversion rate, depending on sources...


No need to be sarcastic. What page load speed at "slow 3G" speeds do you believe is necessary to avoid that?

(I work for a non-profit cultural heritage organization, we don't really have "conversions")


I didn't mean this sarcastically, it is a decision and may not apply to all situations.

You can see these kinds of differences with just a few seconds' difference. Ideally I aim to stay under 2s, even on the slowest connection type; 2s is already very long for a user to wait, and many will not.

Non profits are tricky. You could see volunteer sign ups and donations as conversions. I manage a non profit site as well and unfortunately I don't have a good solution that is both fast and approachable for our staff to use, so we had to make that compromise as well.


Is tor not viable?


Or maybe, just get rid of the firewall. I am all for nimble tech, but enabling the Chinese government is not very high on my to-do list.


Please understand that the Chinese government wants to block "outside" web services for Chinese residents, and Chinese residents want to access those services. So if a service itself decides to deny access from China, it's actually helping the Chinese government.


Whether you like it or not, over 15% of the world's population lives in China.


Are you a citizen of China, or move there for work/education/research?

Anyway, this is very unrelated, but I'm in the USA and have been trying to sign up for the official learning center for CAXA 3D Solid Modeling (I believe it's the same program as IronCAD, but CAXA 3D in China seems to have 1000x more educational videos and Training on the software) and I can't for the life of me figure out how to get the WeChat/SMS login system they use to work to be able to access the training videos. Is it just impossible for a USA phone number to receive direct SMS website messages from a mainland China website to establish accounts? Seems like every website uses SMS message verification instead of letting me sign up with an email.


I guess they should fix their government, then.


What an incredibly naive and dismissive thing to say.


It isn't naive, and isn't dismissive.

The problem is the CCP.

The only fix is for the people to rise up against them.

This doesn't even have to be violent. Most of the former Soviet Bloc governments fell without any bloodshed.

What's the alternative? Wait for Xi to "make his mark on history" in the same way that Putin is doing in Ukraine because it's "naive and dismissive" to even talk about unseating him?


It is always so funny to read Americans or Western Europeans saying "just overthrow your dictator bro". Usually told by people who never faced any political violence, or any violence for that matter.

I was born and live in the ex-Soviet country, and stating that Soviet governments fell without any bloodshed is a proof of ignorance.


By 2017 Xi Jinping already had six failed assassination attempts against him, which prompted him to perform a large-scale purge within the ranks of the CCP.

If it was all that easy, it would have been done a long time ago.


> Most of the former Soviet Bloc governments fell without any bloodshed.

That was Gorbachev. Most leaders of any country would roll tanks.


Gorbachev sent the tanks rolling in Lithuania (https://en.wikipedia.org/wiki/January_Events).


And uncensored websites that function through the great firewall would help organize that government fixing.


I mean, you're not wrong. But if you happen to not be in a position to overthrow the government, maybe the next best thing can be a more realistic approach.


Having a lot of experience commuting on underground public transport (intermittent, congested), and living/working in Australia (remote), I can safely say that most services are terrible for people without "ideal" network conditions.

On the London Underground it's particularly noticeable that most apps are terrible at handling network that comes and goes every ~2 minutes (between stops), and which takes ~15s to connect to each AP as a train with 500 people on it all try to connect at the same time.

In Australia you're just 200ms from everything most of the time. That might not seem like much, but it really highlights which apps trip up on the N+1 request problem.

The only app that I am always impressed with is WhatsApp. It's always the first app to start working after a reconnect, the last to get any traffic through before a disconnect, and even with the latency, calls feel pretty fast.


> In Australia you're just 200ms from everything most of the time. (...)

> The only app that I am always impressed with is WhatsApp. It's always the first app to start working after a reconnect, the last to get any traffic through before a disconnect, and even with the latency, calls feel pretty fast.

The 200ms is telling.

I bet that WhatsApp is one of the rare services you use which actually deployed servers to Australia. To me, 200ms is a telltale sign of intercontinental traffic.

Most global companies deploy only to at most three regions:

* the US (us-east, us-central, us-east+us-east)

* Europe (west-europe),

* and somewhat rarely far-east (either us-west or Japan)

This means that places such as south Africa, south America, and of course Australia typically have to pull data from one of these regions, which means latencies of at least 200ms due to physics.

Australia is particularly hit because, even with dedicated deployments in their theoretical catchment area, often these servers are actually located on an entirely separate continent (west-us or Japan), and thus users do experience the performance impact of having packets cross half the globe.


> I bet that WhatsApp is one of the rare services you use which actually deployed servers to Australia. To me, 200ms is a telltale sign of intercontinental traffic.

So, I used to work at WhatsApp. And we got this kind of praise when we only had servers in Reston, Virginia (not at aws us-east1, but in the same neighborhood). Nowadays, Facebook is most likely terminating connections in Australia, but messaging most likely goes through another continent. Calling within Australia should stay local though (either p2p or through a nearby relay).

There's lots of things WhatsApp does to improve experience on low quality networks that other services don't (even when we worked in the same buildings and told them they should consider things!)

In no particular order:

0) offline first, phone is the source of truth, although there's multi-device now. You don't need to be online to read messages you have, or to write messages to be sent whenever you're online. Email used to work like this for everyone; and it was no big deal to grab mail once in a while, read it and reply, and then send in a batch. Online messaging is great, if you can, but for things like being on a commuter train where connectivity ebbs and flows, it's nice to pick up messages when you can.

a) hardcode fallback IPs for when DNS doesn't work (not if); a rough sketch of the idea is at the end of this comment

b) setup "0rtt" fast resume, so you can start getting messages on the second round trip. This is part of noise pipes or whatever they're called, and tls 1.3

c) do reasonable-ish things to work with MTU. In the old days, FreeBSD reflected the client MSS back to it, which helps when there's a tunnel like PPPoE and it only modifies outgoing syns and not incoming syn+ack. Linux never did that, and afaik, FreeBSD took it out. Behind Facebook infrastructure, they just hardcode the mss for i think 1480 MTU (you can/should check with tcpdump). I did some limited testing, and really the best results come from monitoring for /24's with bad behavior (it's pretty easy, if you look for it --- never got any large packets and packet gaps are a multiple of MSS - space for tcp timestamps) and then sending back client - 20 to those; you could also just always send back client - 20. I think Android finally started doing pMTUD blackhole detection stuff a couple years back, Apple has been doing it really well for longer. Path MTU Discovery is still an issue, and anything you can do to make it happier is good.

d) connect in the background to exchange messages when possible. Don't post notifications unless the message content is on the device. Don't be one of those apps that can only load messages from the network when the app is in the foreground, because the user might not have connectivity then

e) prioritize messages over telemetry. Don't measure everything, only measure things when you know what you'll do with the numbers. Everybody hates telemetry, but it can be super useful as a developer. But if you've got giant telemetry packs to upload, that's bad by itself, and if you do them before you get messages in and out, you're failing the user.

f) pay attention to how big things are on the wire. Not everything needs to get shrunk as much as possible, but login needs to be very tight, and message sending should be too. IMHO, http and json and xml are too bulky for those, but are ok for multimedia because the payload is big so framing doesn't matter as much, and they're ok for low volume services because they're low volume.
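For (a), the shape of the idea in Node-ish terms (the addresses and port are placeholders, and a real client would also need TLS and a retry policy):

  const dns = require('dns');
  const net = require('net');

  const FALLBACK_IPS = ['203.0.113.10', '203.0.113.11'];   // placeholder addresses

  function connectToChat(host, port, onSocket) {
    dns.resolve4(host, (err, addrs) => {
      // If DNS is broken (captive portal, flaky resolver), fall back to known-good IPs.
      const candidates = (err || !addrs || addrs.length === 0) ? FALLBACK_IPS : addrs;
      tryNext(candidates, 0);
    });

    function tryNext(candidates, i) {
      if (i >= candidates.length) return onSocket(new Error('all endpoints failed'));
      const socket = net.connect({ host: candidates[i], port, timeout: 5000 });
      const onError = () => tryNext(candidates, i + 1);
      socket.once('error', onError);
      socket.once('timeout', () => socket.destroy(new Error('connect timeout')));  // emits 'error'
      socket.once('connect', () => {
        socket.removeListener('error', onError);     // later errors are the caller's problem
        onSocket(null, socket);
      });
    }
  }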


WhatsApp is (or was) using XMPP for the chat part too, right?

When I was IT person on a research ship, WhatsApp was a nice easy one to get working with our "50+ people sharing two 256kbps uplinks" internet. Big part of that was being able to QoS prioritise the XMPP traffic which WhatsApp was a big part of.

Not having to come up with filters for HTTPS for IP ranges belonging to general-use CDNs that managed to hit the right blocks used by that app, was a definite boon. That, and the fact XMPP was nice and lightweight.

As far as I know google cloud messaging (GCN? GCM? firebase? Play notifications? Notifications by Google? Google Play Android Notifications Service?) also did/does use XMPP, so we often had the bizarre and infuriating very fast notifications _where sometimes the content was in the notification_ but when you clicked on it, other apps would fail to load it due to the congestion and latency and hardcoded timeouts TFA mentions.. argh.

But WhatsApp pretty much always worked, as long as the ship had an active WAN connection.... And that kept us all happy, because we could reach our families.


> WhatsApp is (or was) using XMPP for the chat part too, right?

It's not exactly XMPP, it started with XMPP, but XML is big, so it's tokenized (some details are published in the European Market Access documentation), and there's no need for interop with standard XMPP clients, so login sequence is I think way different.

But it runs on port 5222? by default (with fallbacks to port 443 and 80).

I think GCM or whatever it's called today is plain XMPP (including, optionally, on the server to server side), and runs on ports 5228-5230. Not sure what protocol apple push is, but they use port 5223 which is affiliated with xmpp over tls.

So I think using a non-443 port was helpful for your QoS? But being available on port 443 is helpful for getting through blanket firewall rules. AOL used to run AIM on all the ports, which is even better at getting through firewalls.


Yes - a thousand yeses.

I once got asked "what was a life changing company/product" and my answer was WhatsApp - to slightly bemused looks.

WhatsApp connected the world for free. Obviously they weren't the first to try, but when my (very globally distributed) family picked up WhatsApp in '09/'10 we knew we were onto something different. Being able to stay in touch with my brother halfway across the world in realtime was very special. Nothing else at the time really competed. SMS was expensive and had latency. Email felt clunky and oddly formal - email clients don't feel "chatty". MSN was crap on mobile and you both had to be online. Ditto for Skype. For calls we even used to do this odd VOIP bridge where you would each call an endpoint for cheap international phone calls.

Meanwhile in 2012, I was able to install WhatsApp on my mum's old Nokia Symbian feature phone and use WhatsApp on a pay-as-you-go SIM plan in Singapore, communicating over WAP. The data consumption was so low I basically survived 2 months on maybe 1-2 top-ups. Compare that with the other day when I turned on roaming on my phone (so I could connect to Singtel to BUY a roaming package) and my phone passively fetched ~50+ MB in seconds, and I was hit with 400 SGD of data charges (I was able to get them refunded).

I am very grateful to all the work and thought WhatsApp put into building an affordable global resilient communication network and I hope every one of the people involved got the payout they deserve.


> Email felt clunky and oddly formal - email clients don't feel "chatty".

Now (in 2024) have you tried Delta Chat?


No I haven't - it looks interesting thanks for sharing.

I did once think about whether a client could abstract over IMAP to build a WhatsApp-like UI/UX, so clearly other people have thought the same.

Although I will confess I'm no longer really looking for a new chat experience...

How is the latency for Delta Chat?


> prioritize messages over telemetry

This is a big one that makes low-bandwidth connections unusable in a lot of apps. The deluge of ad/tracking/telemetry SDKs' requests all being fired in parallel with the main business-logic requests makes them all saturate the slow pipe and usually leads to all of them timing out. By being third-party SDKs they may not even give you control of the underlying network requests nor the ability to buffer/delay/cache those requests.

One advantage of being Facebook in this case is that they're the masters of spyware and are unlikely to need to embed third-party spyware, so they can blend tracking/telemetry traffic within their business logic traffic and apply prioritization, including buffering any telemetry and sending it during less critical times.


> I used to work at WhatsApp..

Do you know why there is a 4 device limit? I run into this limit quite a bit, because I have a lot more devices.

And... why is there WhatsApp for most commonly used devices, but not iPads?


> Why is there WhatsApp for most commonly used devices, but not iPads?

I was frustrated by this a while back, so I asked the PMs. Basically when investing engineering effort WhatsApp prioritises the overall number of users connected, and supporting iPads doesn't really move that metric, because (a) the vast majority of iPad owners also own a smartphone, and (b) iPads are pretty rare outside of wealthy western cities.


I've been gone too long for accurate answers, but I can guess.

For iPad, I think it's like the sibling notes; expected use is very low, so it didn't justify the engineering cost while I was there. But I see some signs it might happen eventually [1]; WhatsApp for Android Tablets wasn't a thing when I was there either, but it is now.

For the four device limit, there are a few things going on IMHO. Synchronization is hard, and the more devices are playing, the harder it is. Independent devices make it easier in some ways, because the user's devices don't have to be online together to communicate (like when WhatsApp Web was essentially a remote control for your phone), but it does mean that all of your communication partner's devices have to work harder, and the servers have to work harder, too.

Four devices cover your phone, a desktop at home and at work, and a laptop; but really, most users only have a phone. Allowing more devices makes it more likely that you'll lose track of one, or not use it for long enough that it's lost sync, etc.

WhatsApp has usually focused on product features that benefit the most users, and more than 4 devices isn't going to benefit many people, and 4 is plenty for internal use (phone, prod build, dev build, home computer). I'm sure they've got metrics of how many devices are used, and if there's a lot of 4 device users and enough requests, it's a #define somewhere.

[1] https://www.macworld.com/article/668638/how-to-get-whatsapp-...


Yeah, it's very, very noticeable that WhatsApp is architected in a way that makes the experience great in all kinds of poor-connectivity scenarios, in a way that most other software just... isn't.


I kinda chuckled that you left out most of Asia.

Most global companies (at least the US-based ones) have deployed in India, where I'm at right now. I suppose a billion people online is too big of a market to ignore (or not; I really don't know). Or maybe there are services that I'm completely unaware of that aren't in India.

Internet's pretty fast as well. Much faster than a certain conspicuous European country you'd expect to have fast internet ;)


Part of the problem seems to be the cost of peering in Australia:

https://blog.cloudflare.com/bandwidth-costs-around-the-world


WhatsApp has a massive audience in developing countries where it's normal for people to have slower internet and much slower devices. That perspective being so embedded in their development goals certainly has given WhatsApp good reason to be the leading messaging platform in many countries around the world


It works remarkably well when your phone runs out of data and you get capped at 8 kbps. Even voice calls work smoothly.


LOL 8kbps. Damn. That takes me back. I built the first version of one of the world's largest music streaming sites on a 9.6kbps connection.

I was working from home (we had no offices yet) and my cable Internet got cut off. My only back up was a serial cable to a 2G Nokia 9000i. I had to re-encode a chunk of the music catalog at 8kbps so I could test it from home before I pushed the code to production.

Psychoacoustic compression is a miracle.


Nokia 9000i, so you had to work on CSD (which is usually billed per-minute, like dial-up), not even GPRS. How much did that cost you? :P

BTW, an interesting thing is that some/most carriers allow you to use CSD/HSCSD over 3G these days, and you can establish data CSD connection between two phone numbers, yielding essentially a dedicated L2 pipe which isn't routed over internet. Can have much lower latency and jitter if that's what you need. Some specialized telemetry is still using that, however as 3G is slowly getting phased out, it will probably have to change.


God, the cost was probably horrid, but I was connecting in, setting tasks running and logging out. This was late 1999 in the UK, so per-minute prices were high. Also, these were Windows servers, so I had to sluggishly RDP into them, no nice low-bandwidth terminals.


Even wealthy countries will have dead zones (Toronto subway until recently, and like 90% of the landmass), and at least in Canada, “running out of data” and just having none left (or it being extremely expensive) was relatively common until about the last year or two when things got competitive (finally!).

Still have an entire territory where everything is satellite fed (Nunavut), including its capital.


Wow. I didn't know that Nunavut is entirely satellite-fed. That's very interesting to know, thanks. Do you have some more info, though? What kind of satellite - geostationary, LEO? Also, which constellation has the most share of traffic from Nunavut?


Unsure if other telcos have their own setups, but:

> Northwestel, one of biggest internet service providers in the North, said it provides broadband service for all Nunavut communities using Telesat's Telestar 19 VANTAGE high-throughput satellite. After the satellite was deployed in July 2018, Northwestel said it would significantly improve broadband connectivity in the territory, increasing speeds to 15 megabits per second.

https://www.ctvnews.ca/sci-tech/efforts-underway-to-improve-...

Interestingly, the same satellite (but different transponders) probably supplies internet to a good chunk of transatlantic voyages between US/Can and Europe: https://www.telesat.com/wp-content/uploads/2022/11/Telstar-1...


It's not only the services themselves. I have a very slow mobile connection, and one thing that bothered me immensely is downloading images in the browser: how is it that when I go to a .jpg URL to view an image in the browser, it takes way longer and sometimes times out, compared to hopping over to termux and running wget? I had this problem with both Firefox and Chrome-based browsers. Note that even the wget download usually takes 10-30 seconds on my mobile connection.


Too many services do stupid image transcoding today. While the URL says jpg, the service decides that because your browser supports WebP, what you really must have wanted was a WebP. It'll then either transcode, or just send you WebP data for the image, or send you a redirect. This is rarely what you actually want.

With wget it sends you the source you actually requested and doesn't try to get clever (stupid). Google likes WebP so that means everyone needs to join the WebP cargo cult even if it means transcoding a lossy format to another lossy format.


You can try going into proxy settings and setting to "none" instead of autodetect. Also, the dns server used by the browser could be different (and slower).


Browsers usually try to multiplex things, sometimes even the same image if the server supports "get specific byte range" or whatever.

There may be a setting to turn a browser back into a dumb wget visual displayer.


I have the same issue with nearly every static asset.


I guess in London you get wifi only at stops; it's the same in Berlin. In Helsinki the wifi connection is available inside the trains and in the stations, so you never get a connection loss when moving. I never understood the decision in Berlin to do this: why not just provide internet inside the train?

And yeah, most of the internet works very badly when you drop the network all the time...


WiFi at a stop is as easy as putting up a few wireless routers, it's a bit more complex than at home but the same general idea.

Wifi inside the trains involves much more work, and to get them to ALSO be seamless across the entire setup - even harder. Easily 10x or 100x the cost.

It's sad, because the Internet shouldn't be that bad when the network drops all the time; it should just be slower as it waits to send good data.


I think they just put a wire in the tunnel.


Yes, that's the best way which is often used. A "leaky cable" aka "leaky feeder", to be particular.


Berlin did not have mobile connections inside the tunnels until very recently (this year, I believe). This included the trains not being connected to any outside network. Thus wifi on the subway was useless to implement.


They did if you were on o2, that's why I'm still with Aldi Talk (they use the o2 network); they've had LTE through the entire network for a while now. The new thing is 5G for everyone.


Despite Berlin's general lack of parity with modern technology, I've never actually had a problem with internet access across the ubahn network in the past decade. I noticed that certain carriers used to have very different availability when travelling and so switched to a better one, but I was always surprised at being able to handle mobile data whilst underground.


Really? I don't even get consistent internet on the Ringbahn. There are lots of holes in the coverage in Berlin.

Which provider are you with? Vodafone is still dead in large parts of the U-Bahn, but I know that one of them works much better.


Wow! I was in Berlin last week and kept losing connection... like all the time. I use 3 with a Swedish plan. In Sweden, it literally never drops, not on trains, not on metro, not on faraway mountains... it works everywhere.


I used to have spotty coverage underground with Vodafone; when I switched to Telekom, internet suddenly magically worked underground on the routes I used.

I believe someone published a map of the data coverage of different providers on the berlin ubahn, but probably outdated now


Yeah, admittedly this year I've also started experiencing holes on the ringbahn (strangely and consistently around frankfurter allee), but the ubahn has been fine.

I'm with sim.de which I believe is essentially an O2 reseller (apn references o2)


Gesundbrunnen too.


I was in Berlin earlier this month and the cellular connections underground were quite good now. So maybe this is less of a problem?


It's provider-specific


Not any more? https://unternehmen.bvg.de/pressemitteilung/grossprojekt-erf...

Summary: since 2024-05-06, users of all networks also get LTE in the U-Bahn thanks to a project between BVG and Telefónica (not surprising that Telefónica deployed the infra, because they had the best U-Bahn LTE coverage beforehand)


Yes, right now it's mostly just wifi at stations. However, they're deploying 4G/5G coverage in the tunnels and expect 80% coverage by the end of 2024 [1].

So… you can expect apps developed by engineers in London to get much worse on slow internet in 2025. :-)

[1]: https://tfl.gov.uk/campaign/station-wifi


The London Underground not having any connectivity for decades after other metro systems got it showed only that high connectivity during a commute isn't necessary.


London fails to provide a lot of essentials.


Of which the need for status updates and short video isn't one.


I travel a lot. Slow internet is pretty common. Also, right now my mobile data ran out and I'm capped at 8 kbps.

Websites that are Just Text On A Page should load fast, but many don't. Hacker News is blazing fast, but Google's API docs never load.

The worst problem is that most UIs fail to account for slow requests. Buttons feel broken. Things that really shouldn't need megabytes of data to load still take minutes to load or just fail. Google Maps' entire UI is broken.

I wish that developers spent more time designing and testing for slow internet. Instead we get data hungry websites that only work great on fast company laptops with fast internet.

---

On a related note, I run a website for a living, and moving to a static site generators was one of the best productivity moves I've made.

Instead of the latency of a CMS permeating everything I do, I edit text files at blazing speed, even when fully offline. I just push changes once I'm back online. It's a game changer.


Google used to be good about slow connections. Using Gmail on the school computers back in the day, the site would load so slowly that it would detect this and load a basic HTML version instead.

Nowadays I download a 500MB Google Maps cache on my phone and it's like there's no point. Everything still has to fetch and pop in.


One additional benefit of static sites, which I learned the hard way, is that you're mostly immune to attacks.

I have a domain that's currently marked as "dangerous" because I didn't use the latest version of Wordpress.


I had a client that I set up with a static site generator. Sadly the client changed their FTP password to something insecure and someone FTP'd in and added a tiny piece of code to every HTML file!


> Google's API docs never load

I used to work on the team that served those docs. Due to some unfortunate technical decisions made in the name of making docs dynamic/interactive they are almost entirely uncached. Basically every request you send hits an AppEngine app which runs Python code to send you back the HTML.

So even though it looks like it should be fast, it’s not.


>Websites that are Just Text On A Page should load fast, but many don't. Hacker News is blazing fast, but Google's API docs never load.

Things aren't always that simple.

I'm in the UK, and my ping time to news.ycombinator.com is 147ms - presumably because it's not using a CDN and is hosted in the USA.

cloud.google.com on the other hand has an 8ms ping time.

So yes, Hacker News is a simple, low-JS page - but there can be other factors that make it feel slow for users in some places. This is despite me being in a privileged situation, having an XGS-PON fibre connection providing symmetric 8Gbps speeds.


HN loads quickly for me _despite_ the 147 ms. I guess partially because it doesn't need 20 roundtrips to send useful content to me.

At some point, I wrote a webapp (with one specific, limited function, of course) and optimized it to the point where loading it required one 27 kB request. And then turned up the cwnd somewhat, so that it could load in a single RTT :-) Doesn't matter if you're in Australia then, really.


I have experience with moving a webpage with a global audience from being served from random US locations (east/west/Texas, but no targeting) and pretty unbloated, to being served everywhere with geodns and twice the page weight... Load times were about the same before and after. If we could have kept the low bloat, I expect we would have seen a noticeable improvement in load times (but it wasn't important enough to fight over)


I gave a fellow who'd just come off the ice a ride while he was hitchhiking. He was saying that the blog author was somewhat resented by others, because his blog posts, as amazing as they are, tended to hog what limited bandwidth they already had while the images uploaded; but he was given priority because the administration realised the PR value of it.

Which I thought ties into the discussion about slow internet nicely.


I was wondering about the practicalities indeed. Not everyone knows when their OS or applications have decided it is now a great time to update. You'll have a phone in your pocket that is unnecessarily using all the bandwidth it can get its hands on. Or maybe you're using the phone but just don't realise that watching a 720p video, while barely functional, also means the person trying to load a video after you cannot watch even 480p anymore (you might not notice because you've got a buffer, and they'll give up before their buffer is filled enough to start playing).

It seems as though there should be accounting, so you at least know what % of traffic went to you in the last hour (and a reference value of bandwidth_available divided by connected_users, so you know what % would have been your share if everyone had equal need of it). If not that, then a system that deprioritises everyone unless they've punched the button that says "yes, I'm aware what bandwidth I'm using in the next [X≤24] hour(s) and actually need it, thank you", which sets the QoS priority for your MAC/IP address back to normal.


This kind of scenario screams for local-first applications and solutions, which is the reason the Internet was created in the first place [1][2]. People have been duped by Salesforce's misleading "no software" advertising slogan, which goes against the very foundation and spirit of the Internet. For most of the Internet's life, starting back in 1969, Mbps speeds have been the anomaly, not the norm, and its first killer application, email (arguably still the best Internet application), is local-first [3]. Ironically, the culprit application the author was lamenting in the article is a messaging app.

[1] Local-first software: You own your data, in spite of the cloud:

https://www.inkandswitch.com/local-first/

[2] Local-first Software:

https://localfirstweb.dev/

[3] Leonard Kleinrock: Mr. Internet:

https://www.latimes.com/opinion/la-oe-morrison-use24-2009oct...


So I've hacked a lot on networking things over the years and have spent time getting my own "slow internet" cases working. Nothing as interesting as McMurdo by far but I've chatted and watched YouTube videos on international flights, trains through the middle of nowhere, crappy rural hotels, and through tunnels.

If you have access to a general-purpose computing device (and the power for it, since these tend to be power hungry) and are willing to roll your own, my suggestion is to use NNCP [1]. NNCP can take data, chunk it, then send it. It also comes with a sync protocol that uses Noise (though I can't remember if this enables 0RTT) over TCP (no TLS needed, so only 1.5 RTTs spent establishing the connection) and sends chunks, retrying failed chunks along the way.

NNCP supports feeding data as stdin to a remote program. I wrote a YouTube downloader, a Slack bot, a Telegram bot, and a Discord bot that read incoming data and interact with the appropriate services. On the local machine I have a local Matrix (Dendrite) server and bot running which sends data to the appropriate remote service via NNCP. You'll still want to hope (or experiment to ensure) that the MTU/MSS along your path is as low as possible to support frequent TCP-level retries, but this setup has never really failed me wherever I go and lets me consume media and chat.
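
The glue is mostly just nncp-exec; from memory it looks roughly like this (check the NNCP docs for the exact config syntax; the "ytdl" handle is something I defined myself, not built in):

  # queue a request for the remote node; stdin becomes the job body
  echo "https://www.youtube.com/watch?v=XXXXXXXXXXX" | nncp-exec remote ytdl

  # nncp-call / nncp-caller move the queued packets whenever a link is up;
  # on the remote side the "ytdl" handle maps to a yt-dlp wrapper configured
  # in nncp.hjson, and the result comes back via nncp-file.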

The most annoying thing on an international flight is that the NNCP endpoint isn't geographically distributed and depending on the route your packets end up taking to the endpoint, this could add a lot of latency and jitter. I try to locate my NNCP endpoint near my destination but based on the flight's WiFi the actual path may be terrible. NNCP now has Yggdrasil support which may ameliorate this (and help control MTU issues) but I've never tried Ygg under these conditions.

[1]: http://www.nncpgo.org/


This sounds fascinating. Do you have some articles describing your setup?


Hah no, but maybe I should. The reason I haven't is that most of my work is just glue code. I use yt-dlp to do Youtube downloads, make use of the Discord, Slack and Telegram APIs to access those services. I run NNCP and the bots in systemd units, though at this point I should probably bake all of these into a VM and just bring it up on whichever cloud instance I want to act as ingress. Cloud IPs stay static as long as the box itself stays up so you don't need to deal with DNS either. John Goerzen has a bunch of articles about using NNCP [1] that I do recommend interested folks look into but given the popularity of my post maybe I should write an article on my setup.

FWIW I think it's fine that major services do not work under these conditions, though I wish messaging apps did. Both WhatsApp and Telegram IME are well tuned for poor network conditions and do take a lot of these issues into account (a former WA engineer comments in this thread and you can see their attention to detail). Complaining about these things a lot is sort of like eating out at restaurants and complaining about how much sodium and fat goes into the dishes: restaurants have to turn a profit, and catering to niche dietary needs just isn't enough for them to survive. You can always cook at home and get the macros you want. But for you to "cook" your own software you need access to APIs, and I'm glad Telegram, Slack, and Discord make this fairly easy. For YouTube, yt-dlp does the heavy lifting, but I wish it were easier, at least for Premium subscribers, to access YouTube via API.

I find Slack to be the absolute worst offender networking-wise. I have no idea how, now that Slack is owned by Salesforce, the app experience can continue to be so crappy on network usage. It's obvious that management there does not prioritize the experience under non-ideal conditions in any way possible. Their app's usage of networks is almost shameful in how bad it is.

[1]: https://www.complete.org/nncp/


I had a similar experience as the author on a boat in the south pacific. Starlink was available but often wasn't used because of its high power usage (60+ watts). So we got local SIM cards instead which provided 4G internet in some locations and EDGE (2G) in others.

EDGE by itself isn't too bad on paper - you get a couple dozen kilobits per second. In reality, it was much worse. I ran into apps with short timeouts that would have worked just fine, if the authors had taken into account that loading can take minutes instead of milliseconds.

Low bandwidth, high latency connections need to be part of the regular testing of software. For Linux, there's netem (https://wiki.linuxfoundation.org/networking/netem) that will let you do this.
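
For example, something like this (run as root) turns a Linux box's egress into a rough stand-in for that EDGE link:

  # ~300ms delay with 50ms jitter, 1% loss, and a ~50 kbit/s cap on outgoing traffic
  tc qdisc add dev eth0 root netem delay 300ms 50ms loss 1% rate 50kbit
  # remove it again when you're done
  tc qdisc del dev eth0 root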

An issue that the anonymous blog author didn't have was metered connections. Doing OS or even app upgrades was pretty much out of the question for cost reasons. Luckily, every few weeks or so, we got to a location with an unmetered connection to perform such things. But we got very familiar with the various operating systems' ways to mark connections as metered/unmetered, disable all automatic updates, and save precious bandwidth.


The South Pacific should be very sunny. I guess that you didn't have enough solar panels to provide 60+ watts. I am genuinely surprised.

And "local SIM cards" implies that you set foot on (is)lands to buy said SIM cards. Where did you only get 2G in the 2020s? I cannot believe any of this is still left in the South Pacific.


> Where did you only get 2G in the 2020s?

My previous smartphone supported 4G/3G/Edge, but for some reason the 4G didn't work. At all, ever, anywhere (not a provider/subscription or OS settings issue, and WiFi was fine).

In my country 3G was turned off a while ago to free up spectrum. So it fell back to Edge all the time.

That phone died recently. I'm temporarily using an older phone which also supports 4G/3G/Edge, and where the 4G bit works. Except... in many places where I hang out (rural / countryside) 4G coverage is spotty or non-existent. So it also falls back to Edge most of the time.

Just the other day (while on WiFi) I installed Dolphin as a lightweight browser alternative. Out in the countryside, it doesn't work ("no connection"), even though Firefox works fine there.

Apps won't download unless on WiFi. Not even if you're patient: downloads break somewhere, don't resume properly, or what's downloaded doesn't install because the download was corrupted. None of these issues over WiFi. Same with some websites: roundtrips take too long, server drops the connection, images don't load, etc etc.

Bottom line: app developers or online services don't (seem to) care about slow connections.

But here's the thing: for the average person in this world, fast mobile connections are still the exception, not the norm. Big city / developed country / 4G or 5G base stations 'everywhere' doesn't apply to a large % of the world's population (who do own smartphones these days, even if low-spec ones).

Note that some low-tier mobile plans also cap connection speeds. Read: a slow connection even if there's 4G/5G coverage. There's a reason internet cafés are still a thing around the world.


I live in a developed country with 4G/5G everywhere and it's still no better than the 3G era I remember. Modern apps and sites have gobbled up the spare bandwidth, so the general UX feels the same to the user in terms of latency. On top of that, there are frequent connection dropouts even with the device claiming a decent connection to the tower. On mobile internet, 4G often can't bring enough speed to load a modern junked-up news or recipe site, sometimes in any amount of time.


In the Marquesas and Tuamotus, you don't see a lot of 4G reception, no matter what Vini's pretty map claims.

Re: Sunny - there's quite a bit of cloud cover and other devices onboard like the water maker and fridge (more important than Starlink!) also need a lot of power.


> Low bandwith, high latency connections need to be part of the regular testing of software.

One size does not fit all. It would be a waste of time and effort to architect (or redesign) an app just because a residual subset of potential users might find themselves on a boat in the middle of the Pacific.

Let's keep things in perspective: some projects even skip testing WebApps on more than one browser because they deem that wasteful and an unjustified expense, even though it's trivial to include them in a test matrix, and that's a UI-only concern.


Websites regularly break because I don't have perfect network coverage on my phone every single day. In a lot of places, I don't even have decent reception. This is in Germany, in and around a major city.

Why do you think this only applies to people on a boat?


> Websites regularly break because I don't have perfect network coverage on my phone every single day.

Indeed, that's true. However, the number of users who go through similar experiences is quite low, and even those who do are always an F5 away from circumventing the issue.

I repeat: even supporting a browser other than the latest N releases of Chrome is a hard sell to some companies. Typically the test matrix is limited to N versions of Chrome and the latest release of Safari when Apple products are supported. If budgets don't stretch even to cover the basics, of course even rarer edge cases, such as a user accessing a service through a crappy network, will be far down the list of concerns.


it's not a total redesign, it's just raising a timeout from 30 to 3000


I still think engineering for slow internet is really important, and massively underappreciated by most software developers, but... LEO systems (especially Starlink) essentially solve the core problems now. I did an Arctic transit (Alaska to Norway) in September and October of 2023, and we could make FaceTime video calls from the ship, way above the Arctic Circle, despite cloud cover, being quite far from land, and ice. This was at the same time OP was in Antarctica. Whatever that constraint was, it's just a matter of contracting for the service and getting terminals to the sites. The polar coverage is relatively sparse, but still plenty, due to the extraordinarily low population.

https://satellitemap.space/


> I still think engineering for slow internet is really important, and massively under appreciated by most software developers, but ... LEO systems (like Starlink, especially StarLink) essentially solve the core problems now.

I don't think that this is a valid assessment of the underlying problem.

Slow internet means many things, and one of them is connection problems. In connection-oriented protocols like TCP this means slowness induced by drop of packets, and in fire-and-forget protocols like UDP this means your messages don't get through. This means that slowness might take multiple forms, such as low data rates or moments of high throughput followed by momentary connection drops.

One solid approach to deal with slow networks is supporting offline mode, where all data pushes and pulls are designed as transactions that take place asynchronously, and data pushes are cached locally to be retried whenever possible. This brings additional requirements such as systems having to support versioning and conflict resolution.

Naturally, these requirements translate into additional UI requirements, such as support for manually syncing/refreshing, displaying network status, toggling actions that are meaningless when the network is down, relying on eager loading to remain usable while offline, etc.
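
A minimal sketch of that outbox idea, in Python (illustrative only; the endpoint is hypothetical, and a real app would persist the queue to disk and do per-data-type conflict resolution):

  import json, time, urllib.request, urllib.error

  class Outbox:
      """Queue writes locally and retry them whenever the network cooperates."""

      def __init__(self, endpoint):
          self.endpoint = endpoint   # e.g. "https://example.com/sync" (made up)
          self.pending = []          # a real app would use an on-disk journal

      def push(self, record):
          record = dict(record, client_ts=time.time())  # version info for later conflict resolution
          self.pending.append(record)
          self.flush()

      def flush(self):
          remaining = []
          for record in self.pending:
              req = urllib.request.Request(
                  self.endpoint,
                  data=json.dumps(record).encode(),
                  headers={"Content-Type": "application/json"},
              )
              try:
                  urllib.request.urlopen(req, timeout=10)
              except (urllib.error.URLError, TimeoutError):
                  remaining.append(record)   # keep it and try again on the next flush
          self.pending = remaining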


> I don't think that this is a valid assessment of the underlying problem.

Inmarsat is an insecure, 2 Mbps (at best) connection with satellites at 22236 miles above Earth and a latency of about 900-1100 ms.

Starlink is a secure, 100 Mbps (typical) connection with satellites at 342 miles above Earth and a latency of about 25 ms.

Odds of getting a video link on Inmarsat are low, and even if you do, it's potato quality. Source - have been using these systems operationally since the 1990s.


I'd say these days it's more common to deploy in ap-southeast-1 (Singapore) rather than Japan to cover most of APAC.


Delay/disruption tolerant networking (DTN) seeks to address these kind of problems, using alternative techniques and protocols: store-and-forward, Bundle protocols and Licklider Transmission Protocol. Interesting stuff, enjoy!


There’s a diner in SF I frequent. I usually sit 15 feet from the door, on a busy retail corridor, with Verizon premium network access. My iPhone XS reports two bars of LTE but there’s never enough throughput for DNS to resolve. Same at my dentist’s office. I hope to live in a post-slow-internet world one day, but that is still many years away.

(The XS does have an Intel modem, known to be inferior to the Qualcomm flagship of the era)


I think this is a tough one because a lot of bands have been repurposed for 5G and an XS doesn’t support any of those.


I get 400 Mbps down standing at the door of that same diner. My understanding is that 4G bands are repurposed for 5G in rough proportion to the usage of 4G vs 5G devices at that tower, plus there’s some way to use a band for both. In any case I was having these indoor performance issues back in 2019. I’m pretty sure it’s an Intel issue, and any Qualcomm modem would be fine.


I see this in my french city, there's a particular spot on my commute where my phone (mediatek) will report 2 bars of 5G but speeds will actually be around 3G. I've also noticed other people on the tram having their videos buffer at that spot, so it's not just me. The carriers do not care, of course.

I think there's just some of these areas where operational conditions make the towers break in some specific way.


It's very much not an overloading issue. This always happens, even at 3 AM.


I mean, they're not breaking, they're just overloaded. Solution is generally to add more towers, but that's expensive.


What do we pay them for if not to build out our telecom towers?


How do LEO satellite help me when a commuter train full of people connecting to the same AP enters the station I'm in? I live in one of the most densely populated places on Earth, chock-full of 5G antennas and wifi stations. Yet I still feel it when poorly engineered websites trip up on slow/intermittent connections.


Pole doesn't have Starlink. McMurdo does. There are reasons.

Polar coverage from GEO satellites is limited because of how close to the horizon GEO satellites are from Pole. Pole uses old GEO satellites which are low on fuel and have relatively large inclinations... then you can talk to them for ~6 hours per 24.

Schedule: https://www.usap.gov/technology/1935/


Idealistic! I think a lot of countries are going to block starlink in the future by interfering with the signals, much like the success some countries are having interfering so heavily with GPS. Their governments won't want uncensored web, or an American company being the gateway to the internet. They'll maintain whatever territorial networks they have now and the speed question is still relevant.

Also the number of people worldwide whose only access to the internet is a $100 android phone with older software and limited CPU should be considered


Even if people want to / are allowed to, I'm trying to imagine how well starlink could plausibly function if 2 billion people switched from their sketchy terrestrial service to starlink.

As a luxury product used by a few people, maybe it "solves" the problem, but I don't think this is a very scalable solution.


Slow Internet isn't just remote places, it also crops up in heavily populated urban areas. It's sad that you had better connectivity above the Arctic circle than the typical connectivity with hotel WiFi. Bad connectivity also happens with cellular connections all over the place.


doesn't take much stress to make starlink exacerbate packet loss levels way above docsis. it's ok for off-grid but not for the majority.


Starlink has its own networking issues thanks to a lot of latency jitter and 0.5% or more packet loss. See the discussion from last month: https://news.ycombinator.com/item?id=40384959

The biggest issue for Starlink at the poles is, as you say, very sparse coverage. Also I suspect Starlink has to usually relay polar packets between satellites, not just a simple bent pipe relaying to a ground station.


What ship were you on and was it the northwest passage? We haven't had good luck north of 80 degrees with starlink.


FYI, Space Norway will launch two satellites this summer on a Falcon 9 that will be going in a HEO orbit, among the payloads on the satellites is a Viasat/Inmarsat Ka-band payload which will provide coverage north of 80 degrees. Latency will probably be GEO+, but coverage is coverage I guess. :-)


LEO internet won't be practically usable for another decade for loads of us who are not living in a country the US DOD favors.


Not really the point of your post, but that sounds like a really cool trip. What were you doing up there?


IETF draft proposal to extend HTTP for efficient state synchronization, which could improve UX on slow networks, https://news.ycombinator.com/item?id=40480016

  The Braid Protocol allows multiple synchronization algorithms to interoperate over a common network protocol, which any synchronizer's network messages can be translated into.. The current Braid specification extends HTTP with two dimensions of synchronization:

  Level 0: Today's HTTP
  Level 1: Subscriptions with Push Updates
  Level 2: P2P Consistency (Patches, Versions, Merges)

  Even though today's synchronizers use different protocols, their network messages convey the same types of information: versions in time, locations in space, and patches to regions of space across spans of time. The composition of any set of patches forms a mathematical structure called a braid—the forks, mergers, and re-orderings of space over time.
Hope springs eternal!


Yes, please! Even more layers of needlessly complex crap will definitely improve things!


> needlessly complex

The optional Braid extension can _reduce_ complexity for offline-first apps, e.g. relative to WebDAV, https://news.ycombinator.com/item?id=40482610

  You might be surprised at just how elegantly HTTP extends into a full-featured synchronization protocol. A key to this elegance is the Merge-Type: this is the abstraction that allows a single synchronization algorithm to merge across multiple data types.

  As an application programmer, you will specify both the data types of your variables (e.g. int, string, bool) and also the merge-types (e.g. "this merges as a bank account balance, or a LWW unique ID, or a collaborative text field"). This is all the application programmer needs to specify. The rest of the synchronization algorithm gets automated by middleware libraries that the programmer can just use and rely upon, like his compiler, and web browser.

  I'd encourage you to check out the Braid spec, and notice how much we can do with how little. This is because HTTP already has almost everything we need. Compare this with the WebDAV spec, for instance, which tries to define versioning on top of HTTP, and you'll see how monstrous the result becomes. Example here: 
https://news.ycombinator.com/item?id=40481003


This sounds suspiciously like Matrix. Does it require buy-in from user agents, or will it benefit existing browsers once implemented?


From https://braid.org/

  Braid is backwards-compatible with today's web, works in today's browsers, and is easy to add to existing web applications.. You can use Braid features in Chrome with the Braid-Chrome extension.
Demo of Statebus+Braid sync on existing browsers: https://stateb.us/#demos


Grumpy take: More complex technology will not fix a business-social problem. In fact, you have to go out of your way to make things this shitty. It’s not hard to build things with few round trips and less bloat; it’s much easier. The bloat is there for completely different reasons.

Sometimes the bloat is unnoticeable on juicy machines and fast internet close to the DC. You can simulate that easily, but it requires the company to care. Generally, ad-tech and friends cares very little about small cohorts of users. In fact, the only reason they care about end users at all is because they generate revenue for their actual customers, ie the advertisers.


> Generally, ad-tech and friends cares very little about small cohorts of users.

Sure, and it will keep being that way. But if this gets improved at the transport layer, seems like a win.

As an analogy, if buses are late because roads are bumpy and drivers are lousy, fixing the bumpy road may help, even if drivers don't change their behavior.


> But if this gets improved at the transport layer, seems like a win.

What do you mean? TCP and HTTP are already designed for slow links with packet loss; it’s old, reliable tech from before modern connectivity. You just have to not pull in thousands of modules in the npm dep tree and add 50 microservices’ worth of bloatware, ads and client-side “telemetry”. You set your cache-control headers and ETags, and for large downloads you’ll want range requests. Perhaps some lightweight client-side retry logic in the case of PWAs. In extreme cases like Antarctica, maybe you’d tune some TCP kernel params on the client to reduce RTTs under packet loss. There is nothing major missing from the standard decades-old toolbox.
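
For static assets, the response headers that make the biggest difference on a lossy link are all bog-standard HTTP, e.g.:

  HTTP/1.1 200 OK
  Cache-Control: public, max-age=31536000, immutable
  ETag: "5e8c1a2b"
  Accept-Ranges: bytes
  Content-Encoding: gzip
  Vary: Accept-Encoding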

Of course it’s not optimal, the web isn’t perfect for offline hybrid apps. But for standard things like reading the news, sending email, chatting, you’ll be fine.


> fixing the bumpy road may help

It really wouldn't. Lousy drivers are a way thinner bottleneck than the roads.

But it will improve the services where the drivers are good.

If the protocol is actually any good (its goals by themselves already make me suspicious it won't be), the well-designed web-apps out there can become even better designed. But it absolutely won't improve the situation people are complaining about.


>As an analogy, if buses are late because roads are bumpy and drivers are lousy, fixing the bumpy road may help, even if drivers don't change their behavior

No. It will make things worse, because now lousy drivers are no longer constrained by the bumpy road, so they will become even lousier.

Case in point: software is shittier than ever, despite the 100x increase in computer performance since the 90s.

