A lot of this resonates. I'm not in Antarctica, I'm in Beijing, but I still struggle with the internet. Being behind the Great Firewall means using creative approaches. VPNs only sometimes work, and each leaves a signature that the firewall's heuristics and ML can eventually catch onto. Even state-mandated ones are 'gently' limited at times of political sensitivity. It all ends up meaning that, even if I get a connection, it's not stable, and it's so painful to sink precious packets into pointless web-app-react-crap roundtrips.
I feel like some devs need to time-travel back to 2005 or something and develop for that era in order to learn how to build things nimbly. In the absence of time travel, if people could just learn to open dev tools and use the network throttling feature: set it to 3G and see whether their web app is resilient. Please!
Amen to this. And give them a mobile cell plan with 1GB of data per month.
I've seen some web sites with 250MB payloads on the home page due to ads and pre-loading videos.
I work with parolees who get free government cell phones and then burn through the 3GB/mo of data within three days. Then they can't apply for jobs, get bus times, rent a bike, top up their subway card, get directions.
"But all the cheap front-end talent is in thick client frameworks, telemetry indicates most revenue conversions are from users on 5G, our MVP works for 80% of our target user base, and all we need to do is make back our VC's investment plus enough to cash out on our IPO exit strategy, plus other reasons not to care" — self-identified serial entrepreneur, probably
Having an ad blocker (Firefox mobile works with uBlock Origin) and completely deactivating the loading of images and videos can get you quite far on a limited connection.
It allows JavaScript on the original site's domain, but turns it off for external domains, then lets you selectively turn it back on, and remembers what you've selected. Turning off JavaScript cuts out a lot of both additional downloading and CPU cycles. For impatient people it's tedious, but for minimalists it's heaven.
I strongly suspect that the venn diagram of people that know how to minimize data usage, and indigent people being given free cell phones with limited data has precious little overlap.
That makes little sense. If it was a deliberate plan, it would be much more effective (and cheaper) to not provide the cellphones and data plan in the first place.
Exactly. It's easy to attribute malice where incompetence is the answer. The root problem is that there is no feedback loop: the government agency funding the program isn't properly looking at the product being delivered at the other end, isn't tracking the effects on the end customer, and isn't putting measures in place to stop the contractors from taking advantage of the program by delivering only the barest minimum.
It's not deliberate by the government. And the situation is getting better. Basically some of the telcos are being a bit greedy, giving users only 3GB/mo of data and spending all their profits on paying people in the hoods $20 a pop to sign people up.
Recently I've started to see some contracts with vastly more data, including some unlimiteds.
Just put them on a train during work hours! We have really good coverage here but there's congestion and frequent random dropouts, and a lot of apps just don't plan for that at all.
Yeah - and do they use it? Does it let them experience the joy of wanting just a little text information, but having to load loads of other stuff first while the connection times out? I'm afraid that to get the full experience, they actually need to have a bad connection.
My first education was in industrial design, where designers usually take pride in avoiding needless complexity and material waste.
Even when I build web services I try to do so with as few moving parts as needed and with heavy reliance on trusted and standardized solutions. If you see any addition of complexity as a cost that needs to be weighed against the benefits you hope it brings, you will automatically end up with a lean and fast application.
I lived in Shoreditch for 7 years and most of my flats had barely 3G internet speeds. The last one had windows that incidentally acted like a Faraday cage.
I always test my projects with throttled bandwidth, largely because (just like with a11y) following good practices results in better UX for all users, not just those with poor connectivity.
Edit: Another often missed opportunity is building SPAs as offline-first.
>> Another often missed opportunity is building SPAs as offline-first.
You are going to get so many blank stares at many shops building web apps when suggesting things like this. This kind of consideration doesn't even enter into the minds of many developers in 2024. Few of the available resources in 2024 address it that well for developers coming up in the industry.
Back in the early-2000s, I recall these kinds of things being an active discussion point even with work placement students. Now that focus seems to have shifted to developer experience with less consideration on the user. Should developer experience ever weigh higher than user experience?
>Should developer experience ever weigh higher than user experience?
Developer experience is user experience. However, in a normative sense, I operate such that developer suffering is preferable to user suffering for getting any arbitrary task done.
The irony for me is that I got into React because I thought that we could finally move to offline-first SPAs. Current trends seem to be going in the opposite direction.
SPAs and "engineering for slow internet" usually don't belong together. The giant bundles usually guarantee slow first paint, and the incremental rendering/loading usually guarantees a lot of network chatter that randomly breaks the page when one of the requests times out. Most web applications are fundamentally online. For these, consider what inspires more confidence when you're in a train on a hotspot: an old school HTML forms page (like HN), or a page with a lot of React grey placeholders and loading spinners scattered throughout? I guess my point is that while you can take a lot of careful time and work to make an SPA work offline-first, as a pattern it tends to encourage the bloat and flakiness that makes things bad on slow internet.
Oh, London is notorious for having... questionable internet speeds in certain areas. It's good if you live in a new build flat/work in a recently constructed office building, or you own your own home in a place OpenReach have already gotten to, but if you live in an apartment building/work in an office building more than 5 or so years old?
Yeah, there's a decent chance you'll be stuck with crappy internet as a result. I still remember quite a few of my employers getting frustrated that fibre internet wasn't available for the building they were renting office space in, despite them running a tech company that really needed a good internet connection.
We design for slow internet; React is one of the better options for it, with SSR, code splitting and HTTP/2 push, mixed in with more offline-friendly clients like Tauri. You can also deploy very near people if you work “on the edge”.
I’m not necessarily disagreeing with your overall point, but modern JS is actually rather good at dealing with slow internet for server-client “applications”. It’s not necessarily easy to do, and there are almost no online resources that you can base your projects on if you’re a Google/GPT programmer. Part of this is because of the ocean of terrible JS resources online, but a big part of it is also that the organisations which work like this aren’t sharing. We have zero public resources for the way we work, as an example, because why would we hand that info to our competition?
Very sad. I use http/2 push on my website to push the CSS if there’s no same-origin referrer. It saves a full roundtrip which can be pretty significant on high latency connections. The html+css is less than 14kb so it can all be sent on the first roundtrip as it’s generally within TCP’s initial congestion window of about 10*1400.
The only other alternative is to send the CSS inline, but that doesn’t work as well for caching for future page loads.
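Roughly, the idea looks like this with Node's built-in http2 module (the hostname, cert paths and file names are placeholders, and this is a sketch of the approach rather than the real setup; it only matters where client and server still support push):

    const http2 = require('http2');
    const fs = require('fs');

    const server = http2.createSecureServer({
      key: fs.readFileSync('key.pem'),
      cert: fs.readFileSync('cert.pem'),
    });

    server.on('stream', (stream, headers) => {
      // Only the front page is handled here, to keep the sketch short.
      if (headers[':path'] !== '/') {
        stream.respond({ ':status': 404 });
        stream.end();
        return;
      }

      // No same-origin referrer => probably a cold cache, so push the CSS
      // alongside the HTML and save a round trip.
      const referrer = headers['referer'] || '';
      const sameOrigin = referrer.startsWith('https://example.com/');

      if (!sameOrigin && stream.pushAllowed) {
        stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
          if (err) return;
          pushStream.respondWithFile('style.css', { 'content-type': 'text/css' });
        });
      }
      stream.respondWithFile('index.html', { 'content-type': 'text/html' });
    });

    server.listen(443);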
103 Early Hints is not nearly as useful, as it doesn’t actually save a round trip; it only works around long request processing time on the server. Also, most web frameworks will have a very hard time supporting early hints, because it doesn’t fit in the normal request->response cycle, so I doubt it’s going to get much adoption.
Also it would be nice to be able to somehow push 30x redirects to avoid more round trips.
If you're behind an overloaded geosynchronous satellite then no JS at all just moves the pain around. At least once it's loaded a JS-heavy app will respond to most mouse clicks and scrolls quickly. If there's no JS then every single click will go back to the server and reload the entire page, even if all that's needed is to open a small popup or reload a single word of text.
This makes perfect sense in theory and yet it's the opposite of my experience in practice. I don't know how, but SPA websites are pretty much always much more laggy than just plain HTML, even if there are a lot of page loads.
It often is that way, but it's not for technical reasons. They're just poorly written. A lot of apps are written by inexperienced teams under time pressure and that's what you're seeing. Such teams are unlikely to choose plain server-side rendering because it's not the trendy thing to do. But SPAs absolutely can be done well. For simple apps (HN is a good example) you won't get too much benefit, but for more highly interactive apps it's a much better experience than going via the server every time (setting filters on a shopping website would be a good example).
Yep. In SPAs with good architecture, you only need to load the page once, which is obviously weighed down by the libraries, but largely is as heavy or light as you make it. Everything else should be super minimal API calls. It's especially useful in data-focused apps that require a lot of small interactions. Imagine implementing something like spreadsheet functionality using forms and requests and no JavaScript, as others are suggesting all sites should be: productivity would be terrible, not only because you'd need to reload the page for trivial actions that should trade a bit of JSON back and forth, but also because users would throw their devices out the window before they got any work done. You can also queue and batch changes in a situation like that, so the requests are not only comparatively tiny, you can use fewer of them. That said, most sites definitely should not be SPAs. Use the right tool for the job.
> which is obviously weighed down by the libraries, but largely is as heavy or light as you make it
One thing which surprised me at a recent job was that even what I consider to be a large bundle size (2MB) didn't have much of an effect on page load time. I was going to look into bundle splitting (because that included things like a charting library that was only used in a small subsection of the app), but in the end I didn't bother because page loads were fast (~600ms) without it.
What did make a huge difference was cutting down the number of HTTP requests that the app made on load (and making sure that they weren't serialised). Our app was originally doing auth by communicating with Firebase Auth directly from the client, and that was terrible for performance because that request was quite slow (most of a second!) and blocked everything else. I created an all-in-one auth endpoint that would check the user's auth and send back initial user and app configuration data in one ~50ms request, and suddenly the app was fast.
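The pattern, roughly (a hedged sketch of the same idea; endpoint names and response shape are invented): one combined bootstrap call instead of several serialized ones, and anything independent fired in parallel.

    async function bootstrapApp() {
      // One round trip returns auth state, user profile and app config together,
      // instead of auth -> user -> config each waiting on the previous one.
      const res = await fetch('/api/bootstrap', { credentials: 'include' });
      if (!res.ok) throw new Error(`bootstrap failed: ${res.status}`);
      const { user, config } = await res.json();

      // Whatever else the first screen needs goes out in parallel, not serially.
      const [notifications, flags] = await Promise.all([
        fetch('/api/notifications').then((r) => r.json()),
        fetch('/api/flags').then((r) => r.json()),
      ]);

      return { user, config, notifications, flags };
    }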
In many cases, like satellite Internet access or spotty mobile service, for sure. But if you have low bandwidth and fast response times, that 2MB is murder and the big pile o' requests is NBD. If you have slow response times but good throughput, the 2MB is NBD but the requests are murder.
An extreme and outdated example, but back when cable modems first became available, online FPS players were astonished to see how much better the ping times were for many dial up players. If you were downloading a floppy disk of information, the cable modem user would obviously blow them away, but their round trip time sucked!
Like if you're on a totally reliable but low throughput LTE connection, the requests are NBD but the download is terrible. If you're on spotty 5g service, it's probably the opposite. If you're on, like, a heavily deprioritized MVNO with a slower device, they both super suck.
It's not like optimization is free though, which is why it's important to have a solid UX research phase to get data on who is going to use it, and what their use case is.
My experience agrees with this comment – I’m not sure why web browsers seem to frequently get hung up on only some HTTP requests, unrelated to the actual network conditions. I.e., in the browser the HTTP request is timing out or in a blocked state and hasn’t even reached the network layer when this occurs. (Not sure if I should be pointing the finger here at the browser or the underlying OS.) When testing slow / stalled loading issues, the browser itself is frequently one of the culprits. Still, the issue I am referring to further reinforces the article and the sentiments on this HN thread: cut down on the number of requests / bloat, and this issue too can be avoided.
If the request itself hasn't reached the network layer but is having a networky-feeling hang, I'd look into DNS. It's network dependent but handled by the system, so it wouldn't show up in your web app requests. I'm sure there's a way to profile this directly, but unless I had to do it all the time I'd probably just fire up Wireshark.
Chrome has a built-in, hard-coded limit of six (6) concurrent connections per host (over HTTP/1.1). Once you have that many in flight, any subsequent requests will be kept in queue.
Now take a good, hard look at the number of individual resources your application's page includes. Every tracker, analytics crapware, etc. gets in that queue. So do all the requests they generate. And the software you wrote is even slower to load because marketing insisted that they must have their packages loading at the top of the page.
Can you point me to a decently complex front end app, written by a small team, that is well written? I’ve seen one, Linear, but I’m interested to see more
An SPA I use not infrequently is the online catalog on https://segor.de (it's a small store for electronics components). When you open it, it downloads the entire catalog, some tens of MB of I-guess-it-predates-JSON, and then all navigation and filtering is local and very fast.
Having written a fair amount of SPA and similar I can confirm that it is actually possible to just write some JavaScript that does fairly complicated jobs without the whole thing ballooning into the MB space. I should say that I could write a fairly feature-rich chat-app in say 500 kB of JS, then minified and compressed it would be more like 50 kB on the wire.
How my "colleagues" manage to get to 20 MB is a bit of mystery.
> How my "colleagues" manage to get to 20 MB is a bit of mystery.
More often than not (and wittingly or not) it is effectively by using javascript to build a browser-inside-the-browser, Russian doll style, for the purposes of tracking users' behavior and undermining privacy.
Modern "javascript frameworks" do this all by default with just a few clicks.
There's quite some space between "100% no JS" and "full SPA"; many applications are mostly backend template-driven, but use JS async loads for some things where it makes sense. The vote buttons on Hacker News are a good example.
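Something in this spirit (selectors, class names and the request method are made up, not HN's actual code): the link still works as a normal page load without JS, and JS upgrades it to a small background request.

    document.querySelectorAll('a.votelink').forEach((link) => {
      link.addEventListener('click', async (event) => {
        event.preventDefault();           // without JS this stays a normal link
        link.classList.add('voted');      // optimistic UI update
        try {
          const res = await fetch(link.href, { method: 'POST', credentials: 'include' });
          if (!res.ok) throw new Error(`vote failed: ${res.status}`);
        } catch {
          link.classList.remove('voted'); // roll back so the user can try again
        }
      });
    });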
I agree a lot of full SPAs are poorly done, but some do work well. FastMail is an example of a SPA done well.
The reason many SPAs are slower is just latency; traditional template-driven is:
- You send request
- Backend takes time to process that
- Sends all the data in one go.
- Browser renders it.
But full SPA is:
- You send request
- You get a stub template which loads JS.
- You load the JS.
- You parse the JS.
- You send some number of requests to get some JSON data, this can be anything from 1 to 10 depending on how it was written. Sometimes it's even serial (e.g. request 1 needs to complete, then uses some part of that to send request 2, and then that needs to finish).
- Your JS parses that and converts it to HTML.
- It injects that in your DOM.
- Browser renders it.
There are ways to make that faster, but many don't.
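One of the simpler fixes, for what it's worth: have the server embed the initial data in the HTML it already sends, so the first render needs no extra JSON round trips at all. Names below are illustrative, and renderApp stands in for whatever the app's render entry point is.

    // Server-rendered page contains something like:
    //   <script id="initial-data" type="application/json">{"user":{"name":"ada"},"items":[]}</script>
    const el = document.getElementById('initial-data');
    const initialState = el ? JSON.parse(el.textContent) : null;

    if (initialState) {
      renderApp(initialState);            // first paint without touching the network
    } else {
      // Fallback: one request for everything instead of several chained ones.
      fetch('/api/initial-data')
        .then((r) => r.json())
        .then(renderApp);
    }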
However, getting 6.4KB of data (just tested on my blog) or 60KB of data (a git.sr.ht repository with a README.md and a PNG) is way better than getting 20MB of frameworks in the first place.
Yes. It's inexcusable that text and images and video pulls in megabytes of dependencies from dozens of domains. It's wasteful on every front: network, battery, and it's also SLOW.
Yeah, but your blog is not a full featured chat system with integrated audio and video calling, strapped on top of a document format.
There are a few architectural/policy problems in web browsers that cause this kind of expansion:
1. Browsers can update large binaries asynchronously (=instant from the user's perspective) but this feature is only very recently available to web apps via obscure caching headers and most people don't know it exists yet/frameworks don't use it.
2. Large download sizes tend to come from frameworks that are featureful and thus widely used. Browsers could allow them to be cached but don't because they're over-aggressive at shutting down theoretical privacy problems, i.e. the browser is afraid that if one site learns you used another site that uses React, that's a privacy leak. A reasonable solution would be to let HTTP responses opt in to being put in the global cache rather than a partitioned cache, that way sites could share frameworks and they'd stay hot in the cache and not have to be downloaded. But browsers compete to satisfy a very noisy minority of people obsessed with "privacy" in the abstract, and don't want to do anything that could kick up a fuss. So every site gets a partitioned cache and things are slow.
3. Browsers often ignore trends in web development. React style vdom diffing could be offered by browsers themselves, where it'd be faster and shipped with browser updates, but it isn't so lots of websites ship it themselves over and over. I think the SCIter embedded browser actually does do this. CSS is a very inefficient way to represent styling logic which is why web devs write dialects like sass that are more compact, but browsers don't adopt it.
I think at some pretty foundational level the way this stuff works architecturally is wrong. The web needs a much more modular approach and most JS libraries should be handled more like libraries are in desktop apps. The browser is basically an OS already anyway.
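On point 1: I'm guessing the headers meant are things like stale-while-revalidate, which gives exactly that "serve instantly from cache, fetch the update in the background" behaviour. A minimal Node sketch, assuming that's the intent (paths and numbers are arbitrary):

    const http = require('http');
    const fs = require('fs');

    http.createServer((req, res) => {
      if (req.url === '/app.bundle.js') {
        res.writeHead(200, {
          'Content-Type': 'application/javascript',
          // Fresh for 5 minutes; after that, serve the stale copy instantly and
          // revalidate in the background for up to a day.
          'Cache-Control': 'max-age=300, stale-while-revalidate=86400',
        });
        fs.createReadStream('./dist/app.bundle.js').pipe(res);
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);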
> CSS is a very inefficient way to represent styling logic which is why web devs write dialects like sass that are more compact, but browsers don't adopt it.
I don't know exactly which features you are referring to, but you may have noticed that CSS has adopted native nesting, very similarly to Sass, but few sites actually use it. Functions and mixins are similar compactness/convenience topics being worked on by the CSSWG.
I hadn't noticed and I guess this is part of the problem. Sorry this post turned into a bit of a rant but I wrote it now.
When it was decided that HTML shouldn't be versioned anymore it became impossible for anyone who isn't a full time and very conscientious web dev to keep up. Versions are a signal, they say "pay attention please, here is a nice blog post telling you the most important things you need to know". If once a year there was a new version of HTML I could take the time to spend thirty minutes reading what's new and feel like I'm at least aware of what I should learn next. But I'm not a full time web dev, the web platform changes constantly, sometimes changes appear and then get rolled back, and everyone has long since plastered over the core with transpilers and other layers anyway. Additionally there doesn't seem to be any concept of deprecating stuff, so it all just piles up like a mound of high school homework that never shrinks.
It's one of the reasons I've come to really dislike CSS and HTML in general (no offense to your work, it's not the browser implementations that are painful). Every time I try to work out how to get a particular effect it turns out that there's now five different alternatives, and because HTML isn't versioned and web pages / search results aren't strongly dated, it can be tough to even figure out what the modern way to do it is at all. Dev tools just make you even more confused because you start typing what you think you remember and now discover there are a dozen properties with very similar names, none of which seem to have any effect. Mistakes don't yield errors, it just silently does either nothing or the wrong thing. Everything turns into trial-and-error, plus fixing mobile always seems to break desktop or vice-versa for reasons that are hard to understand.
Oh and then there's magic like Tailwind. Gah.
I've been writing HTML since before CSS existed, but feel like CSS has become basically non-discoverable by this point. It's understandable why neither Jetpack Compose nor SwiftUI decided to adopt it, even whilst being heavily inspired by React. The CSS dialect in JavaFX I find much easier to understand than web CSS, partly because it's smaller and partly because it doesn't try to handle layout. The way it interacts with components is also more logical.
The privacy issues aren't just hypothetical, but that aside, that caching model unfortunately doesn't mesh well with modern webdev. It requires all dependencies to be shipped in full (no tree shaking to include only the needed functions), and separately as individual files... and for people to actually stick to the same versions of dependencies.
I also wonder how much saving is still possible with a more efficient HTML/CSS/JS binary representation. Text is low-tech and all, but it still hurts to waste so many octets on such a relatively small set of possible symbols.
Applies to all formal languages actually. 2^(8x20x10^6) ~= 2x10^48164799 is such a ridiculously large space...
The generalisation of this concept is what I like to call the "kilobyte" rule.
A typical web page of text on a screen is about a kilobyte. Sure, you can pack more in with fine print, and obviously additional data is required to represent the styling, but the actual text is about 1 kb.
If you've sent 20 MB, then that is 20,000x more data than what was displayed on the screen.
Worse still, an uncompressed 4K still image (3840 × 2160 pixels × 3 bytes per pixel) is only about 23.7 megabytes. At some point you might be better off doing "server side rendering" with a GPU instead of sending more JavaScript!
> "server side rendering" with a GPU instead of sending more JavaScript
Some 7~10 years ago I remember seeing somewhere (maybe here on HN) a website which did exactly this: you gave it a URL, it downloaded the webpage with all its resources, rendered and screenshotted it (probably in headless Chrome or something), and compared the size of the PNG screenshot versus the size of the webpage with all its resources.
For many popular websites, the PNG screenshot of a page was indeed several times smaller than the webpage itself!
I read epubs, and they’re mostly html and css files zipped. The whole book usually comes under a MB if there’s not a lot of big pictures. Then you come across a website and for just an article you have to download tens of MBs. Disable JavaScript and the website is broken.
Brotli is a better example, as it was literally purpose made for HTML/CSS/JS. It is now supported basically everywhere for HTTP compression, and uses a huge custom dictionary (about 120KB) that was trained on simulated web traffic.
You can even swap in your own shared dictionary with an HTTP header, but trying to make your own dictionary specific to your content is a fool’s errand; you’ll never amortize the cost in total bits.
What you CAN do with shared dictionaries though, is delta updates.
False dichotomy, with what is likely extreme hyperbole on the JS side. Are there actual sites that ship 20 MB, or even 5 MB or more, of frameworks? One can fit a lot of useful functionality in 100 KB or less of JS, especially minified and gzipped.
Well, I'm working right now so let me check our daily "productivity" sites (with an adblocker installed):
- Google Mail: Inbox is ~18MB (~6MB Compressed). Of that, 2.5MB is CSS (!) and the rest is mostly JS
- Google Calendar: 30% lower, but more or less the same proportions
- Confluence: Home is ~32MB (~5MB Comp.). There's easily 20MB of Javascript and at least 5MB of JSON.
- Jira: Home is ~35MB (~7MB compressed). I see more than 25MB of Javascript
- Google Cloud Console: 30MB (~7MB Comp.). I see at least 16MB of JS
- AWS Console: 18MB (~4MB Comp.). I think it's at least 12MB of JS
- New Relic: 14MB (~3MB Comp.). 11MB of JS.
This is funny because, even though it's way more data-heavy than the rest, its weight is way lower.
- Wiz: 23MB (~6MB Comp.) 10MB of JS and 10MB of CSS
- Slack: 60MB (~13MB Compressed). Of that, 48MB of JS. No joke.
I sometimes wish I could spare the time just to tear into something like that Slack number and figure out what it is all doing in there.
JavaScript should generally even be fairly efficient in terms of bytes per capability. Run a basic minifier on it and compress it and you should be looking at something approaching optimal for what is being done. For instance, a variable reference can amortize down to less than one byte, unlike compiled code where it ends up as 8 bytes (64 bits) at the drop of a hat. Imagine how much assembler "a.b=c.d(e)" can compile into, while it is likely represented in less compressed space than a single 64-bit integer in a compiled language.
Yet it still seems like we need 3 megabytes of minified, compressed Javascript on the modern web just to clear our throats. It's kind of bizarre, really.
JS developers have had this idea of "1 function = 1 library" for a really long time, along with "NEVER REIMPLEMENT ANYTHING". So they will go and import a library instead of writing a 5-line function, because that's somehow more maintainable in their minds.
Then of course every library is allowed to pin its own dependencies. So you can have 15 different versions of the same thing, so they can change API at will.
I poked around some electron applications.
I've found .h files from openssl, executables for other operating systems, megabytes of large image files that were for some example webpage, in the documentation of one project. They literally have no idea what's in there at all.
That's a good question. I just launched Slack and took a look. Basically: it's doing everything. There's no specialization whatsoever. It's like a desktop app you redownload on every "boot".
You talk about minification. The JS isn't minified much. Variable names are single letter, but property names and more aren't renamed, formatting isn't removed. I guess the minifier can't touch property names because it doesn't know what might get turned into JSON or not.
There's plenty of logging and span tracing strings as well. Lots of code like this:
The JS is completely generic. In many places there are if statements that branch on all languages Slack was translated into. I see checks in there for whether localStorage exists, even though the browser told the server what version it is when the page was loaded. There are many checks and branches for experiments, whether the company is in trial mode, whether the code is executing in Electron, whether this is GovSlack. These combinations could have been compiled server side to a more minimal set of modules but perhaps it's too hard to do that with their JS setup.
Everything appears compiled using a coroutines framework, which adds some bloat. Not sure why they aren't using native async/await but maybe it's related to not being specialized based on execution environment.
Shooting from the hip, the learnings I'd take from this are:
1. There's a ton of low hanging fruit. A language toolchain that was more static and had more insight into what was being done where could minify much more aggressively.
2. Frameworks that could compile and optimize with way more server-side constants would strip away a lot of stuff.
3. Encoding logs/span labels as message numbers+interpolated strings would help a lot. Of course the code has to be debuggable but hopefully, not on every single user's computer.
4. Demand loading of features could surely be more aggressive.
But Slack is very popular and successful without all that, so they're probably right not to over-focus on this stuff. Especially for corporate users on corporate networks, does anyone really care? Their competition is Teams, after all.
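On point 1, a hedged example of what "minify more aggressively" can mean in practice: Terser can mangle property names too, but only safely when you opt in with a naming convention, precisely because it can't tell which names end up as JSON on the wire. A sketch (file names arbitrary):

    const fs = require('fs');
    const { minify } = require('terser');

    const source = fs.readFileSync('bundle.js', 'utf8');
    minify(source, {
      mangle: {
        // Only rename properties following a "_" prefix convention, so anything
        // that might be serialized to JSON keeps its original name.
        properties: { regex: /^_/ },
      },
      format: { comments: false },
    }).then(({ code }) => fs.writeFileSync('bundle.min.js', code));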
This is mind blowing to me. I expect that the majority of any application will be the assets and content. And megabytes of CSS is something I can't imagine. Not the least for what it implies about the DOM structure of the site. Just, what!? Wow.
Well, in TFA, if you re-read the section labeled "Detailed, Real-world Example" yes, that is exactly what the person was encountering. So no hyperbole at all actually.
Getting at least “n”kb of html with content in it that you can look at in the interim is better than getting the same amount of framework code.
SPAs also have a terrible habit of not behaving well after being left alone for a while. Nothing like coming back to a blank page and having it try to redownload the world to show you 3kb of text, because we stopped running the VM a week ago.
Here’s something that’s not true in JS: the first link you click navigates you. In the browser, clicking a second link cancels the first one and navigates to the second one.
Yeah, right. GitHub migrated from serving static sites to displaying everything dynamically and it’s basically unusable nowadays. Unbelievably long load times, frustratingly unresponsive, and that’s on my top spec m1 MacBook Pro connected to a router with fiber connection.
Let’s not kid ourselves: no matter how many fancy features, how much splitting and optimizing, whatever you do, JS webapps may be an upgrade for developers, but they’re a huge downgrade for users in all respects.
Between massacring the UX and copilot I've more or less stopped engaging with github. I got tempted the other day to comment on an issue and it turns out the brain trust over at Microsoft broke threaded comment replies. They still haven't fixed keyboard navigation in their bullshit text widget.
I could put up with the glacial performance if it actually worked in the first place, but apparently adding whiz bang "AI" features is the only thing that matters these days.
The whole thing smacks of a rewrite so someone could get a bonus and/or promotion.
Surprisingly my experience of GitLab is even worse! How's yours? BitBucket wasn't much better from memory either. Seems like most commercial offerings in this spaces suck.
I've been using Sourcehut. I respect Drew's commitment to open source, but I think that a lot of the UX misses the mark. For most things I really don't want an email based work flow and some pieces feel a bit disjointed. Overall though it has most of the features I want, and dramatically less bullshit than Github.
In my experience page weight isn't usually the biggest issue. On unreliable connections you'll often get decent bandwidth when you can get through. It's applications that expect to be able to make multiple HTTP requests sequentially, and don't deal well with some succeeding and some failing (or with network failures in general), that are the most problematic.
If I can retry a failed network request, that's fine. If I have to restart the entire flow when I get a failure, that's unusable.
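A retry wrapper is tiny to write, which makes it all the more frustrating when apps don't have one. A minimal sketch (the retry count and backoff numbers are arbitrary):

    async function fetchWithRetry(url, options = {}, retries = 3) {
      for (let attempt = 0; ; attempt++) {
        try {
          const res = await fetch(url, options);
          if (res.ok || attempt >= retries) return res;
        } catch (err) {
          if (attempt >= retries) throw err;   // give up only after the last attempt
        }
        // Exponential backoff: 1s, 2s, 4s, ...
        await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
      }
    }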
When used well, JS will improve the experience especially for high-latency low bandwidth users. Not doing full page refreshes for example, or not loading all data at once.
So no, "no JS at all" is not "by far the lightest weight" in many cases. This is just uncritically repeating dogma. Even 5K to 20K of JS can significantly increase performance.
Sites like Hacker News, Stack Overflow, old.reddit.com, and many more greatly benefit from JS. I made GoatCounter tons faster with JS as well: rendering 8 charts on the server can be slow. It uses a "hybrid approach" where it renders only the first one on the server, sends the HTML, and then sends the rest later over a websocket. That gives the best of both: fast initial load without too much waiting, and most of the time you don't even notice the rest loads later.
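The client side of that hybrid idea is roughly this (a generic sketch, not GoatCounter's actual code; the endpoint and element IDs are invented): the first chart ships as HTML, and the server streams the remaining ones as rendered fragments afterwards.

    const ws = new WebSocket(`wss://${location.host}/charts`);
    ws.addEventListener('message', (event) => {
      // The server renders each remaining chart to an HTML fragment and sends
      // it tagged with the placeholder it belongs to.
      const { id, html } = JSON.parse(event.data);
      const placeholder = document.getElementById(id);
      if (placeholder) placeholder.innerHTML = html;
    });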
No JS can actually increase roundtrips in some cases, and that's a problem if you're latency-bound and not necessarily speed-bound.
Imagine a Reddit or HN style UI with upvote and downvote buttons on each comment. If you have no JS, you have to reload the page every time one of the buttons is clicked. This takes a lot of time and a lot of packets.
If you have an offline-first SPA, you can queue the upvotes up and send them to the server when possible, with no impact on the UI. If you do this well, you can even make them survive prolonged internet dropouts (think being on a subway). Just save all incomplete voting actions to local storage, and then try re-submitting them when you get internet access.
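A bare-bones sketch of that queue-and-replay idea (the storage key and endpoint are invented):

    function queueVote(commentId, direction) {
      const queue = JSON.parse(localStorage.getItem('pendingVotes') || '[]');
      queue.push({ commentId, direction, at: Date.now() });
      localStorage.setItem('pendingVotes', JSON.stringify(queue));
      flushVotes();                         // try immediately; harmless if offline
    }

    async function flushVotes() {
      const queue = JSON.parse(localStorage.getItem('pendingVotes') || '[]');
      while (queue.length) {
        try {
          await fetch('/api/vote', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(queue[0]),
          });
          queue.shift();                    // only drop a vote once the server has it
          localStorage.setItem('pendingVotes', JSON.stringify(queue));
        } catch {
          break;                            // still offline; retry on the next 'online' event
        }
      }
    }

    window.addEventListener('online', flushVotes);   // e.g. coming out of the subway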
It's not always the application itself per se. It's the various / numerous marketing, analytics or (sometimes) ad-serving scripts. These third-party vendors aren't often performance-minded. They could be. They should be.
And the insistence on pushing everything into JS instead of just serving the content. So you’ve got to wait for the skeleton to dl, then the JS, which’ll take its sweet time, just to then (usually blindly) make half a dozen _more_ requests back out, to grab JSON, which it’ll then convert into HTML and eventually show you. Eventually.
Yup. There's definitely too much unnecessary complexity in tech and too much over-design in presentation. Applications, I understand. Interactions and experience can get complicated and nuanced. But serving plain ol' content? To a small screen? Why has that been made into rocket science?
I live in a well-connected city, but my work only pays for virtual machines based on another continent, so most of my projects end up "fast" but latency-bound. It's been an interesting exercise in minimizing pointless roundtrips in a technology that expects you to use them for everything.
Tried multiple VPNs in China and finally rolled my own obfuscation layer for WireGuard. A quick search revealed there are multiple similar projects on GitHub, but I guess the problem is once they get some visibility, they don't work that well anymore. I'm still getting between 1 and 10 Mbit/s (mostly depending on time of day) and pretty much no connectivity issues.
Tbh, developers just need to test their site with existing tools or just try leaving the office. My cellular data reception in Germany in a major city sucks in a lot of spots. I experience sites not loading or breaking every single day.
I'm sure this is not what you meant but made me lol anyways: sv techbros would sooner plan for "outer space internet" than give a shit about the billions of people with bad internet and/or a phone older than 5 years.
What about plain HTML & CSS for all the websites where this approach is sufficient? Then apply HTMX or any other approach for the few websites that are and need to be dynamic.
That is exactly what htmx is and does. Everything is rendered server side, and the sections of the page that need to be dynamic and respond to clicks to fetch more data have some added attributes.
I see two differences: (1) the software stack on the server side and (2) I guess there is JS to be sent to the client side for HTMX support(?). Both those things make a difference.
I'm embedded, so I don't know much about web stuff, but sometimes I create dashboards to monitor services just for our team; thanks for introducing me to htmx. I do think HTML+CSS should be used for anything that is a document or static for longer than a typical view lasts. Arxiv is leaning towards HTML+CSS vs LaTeX in acknowledgement that paper is no longer how "papers" are read. And on the other end, eBay works really well with no JS right up until you get to an item's page, where it breaks. If eBay can work without JS, almost anything that isn't monitoring and visualizing a constant stream of data (the last few minutes of a bid, or telemetry from an embedded sensor) can work without JS. I don't understand how amazon.com has gotten so slow and clunky, for instance.
I have been using wasm and webgpu for visualization, partly to offload any burden from the embedded device to be monitored, but that could always be a third machine. Htmx says it supports websockets; is there a good way to have it eat a stream and plot telemetry data as it arrives, or is it time for a new tool?
You would have to replace the whole graph every time. That probably works if it updates once per minute, but more often than that it might be time to look at some small JS plot library to update the graph.
It sounds like GP would benefit from satellite internet bypassing the firewall, but I don't know how hard the Chinese government works to crack down on that loophole.
The problem isn't in what is being sent over the wire - it's in the request lifecycle.
When it comes to static HTML, the browser will just slowly grind along, showing the user what it is doing. It'll incrementally render the response as it comes in. Can't download CSS or images? No big deal, you can still read text. Timeouts? Not a thing.
Even if your JavaScript framework is rendering HTML chunks on the server, it's still essentially hijacking the entire request. You'll have some button in your app which fires off a request when clicked. But it's now up to the individual developer to properly implement things like progress bars/spinners, timeouts, retries, and all the rest the browser normally handles for you.
They never get this right. Often you're stuck with an app which will give absolutely zero feedback on user action, only updating the UI when the response has been received. Request failed? Sorry, gotta F5 that app because you're now stuck in an invalid state!
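The bare minimum to match what the browser gives you for free on a plain form post looks something like this (the element IDs, endpoint, and showRetryBanner helper are stand-ins): visible progress, a real timeout, and a failure state the user can recover from without losing their work.

    async function saveWithFeedback(payload) {
      const spinner = document.getElementById('spinner');
      spinner.hidden = false;                              // immediate feedback on click
      const controller = new AbortController();
      const timeout = setTimeout(() => controller.abort(), 15000);  // don't hang forever

      try {
        const res = await fetch('/api/save', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(payload),
          signal: controller.signal,
        });
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
      } catch (err) {
        showRetryBanner(err);                              // keep state; let the user retry
      } finally {
        clearTimeout(timeout);
        spinner.hidden = true;
      }
    }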
Yep. I’m a JS dev who gets offended when people complain about JS-sites being slower because there’s zero technical reason why interactions should be slower. I honestly suspect a large part of it is that people don’t expect clicking a button to take 300ms and so they feel like the website must be poorly programmed. Whereas if they click a link and it takes 300ms to load a new version of the page they have no ill-will towards the developer because they’re used to 300ms page loads. Both interactions take 300ms but one uses the browser’s native loading UI and the other uses the webpage’s custom loading UI, making the webpage feel slow.
This isn’t to exonerate SPAs, but I don’t think it helps to talk about it as a “JavaScript” problem because it’s really a user experience problem.
Yes, server-rendering definitely helps, though I have suspicions about its compiled outputs still being very heavy. There's also a lot of CSS frameworks that have an inline-first paradigm meaning there's no saving for the browser in downloading a single stylesheet. But I'm not sure about that.
Yes, though server side rendering is everything but a new thing in the react world. NextJS, Remix, Astro and many other frameworks and approaches exist (and have done so for at least five years) to make sure pages are small and efficient to load.
The amount of complexity to generate HTML/JS is a little staggering sometimes for the majority of simple use cases.
Using Facebook level architectures for actually pretty basic needs can be like hitting an ant-sized problem with a sledgehammer and wondering why the sledgehammer is so heavy and awkward to swing for little things.
Eh, I'm a few miles from NYC and have the misfortune of being a comcast/xfinity customer and my packetloss to my webserver is sometimes so bad it takes a full minute to load pages.
I take that time to clean a little, make a coffee, you know sometimes you gotta take a break and breathe. Life has gotten too fast and too busy and we all need a few reminders to slow down and enjoy the view. Thanks xfinity!
It depends on whether you are ok with 1/3 to 2/3rds of your visitors bouncing due to loading times and a 3 to 5x drop in conversion rate, depending on traffic sources...
I didn't mean this sarcastically, it is a decision and may not apply to all situations.
You can see these kinds of differences with just a few seconds difference, ideally I aim to stay under 2s, even on the slowest connection type. 2s is already very long for a user to wait and many will not.
Non profits are tricky. You could see volunteer sign ups and donations as conversions. I manage a non profit site as well and unfortunately I don't have a good solution that is both fast and approachable for our staff to use, so we had to make that compromise as well.
Please understand that the Chinese government wants to block "outside" web services for Chinese residents, and Chinese residents want to access those services. So if a service itself decides to deny access from China, it's actually helping the Chinese government.
Are you a citizen of China, or move there for work/education/research?
Anyway, this is very unrelated, but I'm in the USA and have been trying to sign up for the official learning center for CAXA 3D Solid Modeling (I believe it's the same program as IronCAD, but CAXA 3D in China seems to have 1000x more educational videos and Training on the software) and I can't for the life of me figure out how to get the WeChat/SMS login system they use to work to be able to access the training videos. Is it just impossible for a USA phone number to receive direct SMS website messages from a mainland China website to establish accounts? Seems like every website uses SMS message verification instead of letting me sign up with an email.
The only fix is for the people to rise up against them.
This doesn't even have to be violent. Most of the former Soviet Bloc governments fell without any bloodshed.
What's the alternative? Wait for Xi to "make his mark on history" in the same way that Putin is doing in Ukraine because it's "naive and dismissive" to even talk about unseating him?
It is always so funny to read Americans or Western Europeans saying "just overthrow your dictator bro". Usually told by people who never faced any political violence, or any violence for that matter.
I was born in and live in an ex-Soviet country, and stating that the Soviet governments fell without any bloodshed is proof of ignorance.
By 2017 Xi Jinping already had six failed assassination attempts against him, which prompted him to perform a large-scale purge within the ranks of the CCP.
If it was all that easy, it would have been done a long time ago.
I mean, you're not wrong. But if you happen to not be in a position to overthrow the government, maybe the next best thing can be a more realistic approach.
Having a lot of experience commuting on underground public transport (intermittent, congested), and living/working in Australia (remote), I can safely say that most services are terrible for people without "ideal" network conditions.
On the London Underground it's particularly noticeable that most apps are terrible at handling network that comes and goes every ~2 minutes (between stops), and which takes ~15s to connect to each AP as a train with 500 people on it all try to connect at the same time.
In Australia you're just 200ms from everything most of the time. That might not seem like much, but it really highlights which apps trip up on the N+1 request problem.
The only app that I am always impressed with is WhatsApp. It's always the first app to start working after a reconnect, the last to get any traffic through before a disconnect, and even with the latency, calls feel pretty fast.
> In Australia you're just 200ms from everything most of the time. (...)
> The only app that I am always impressed with is WhatsApp. It's always the first app to start working after a reconnect, the last to get any traffic through before a disconnect, and even with the latency, calls feel pretty fast.
The 200ms is telling.
I bet that WhatsApp is one of the rare services you use which actually deployed servers to Australia. To me, 200ms is a telltale sign of intercontinental traffic.
Most global companies deploy only to at most three regions:
* the US (us-east, us-central, or us-east+us-west),
* Europe (west-europe),
* and somewhat rarely the far east (either us-west or Japan).
This means that places such as South Africa, South America, and of course Australia typically have to pull data from one of these regions, which means latencies of at least 200ms due to physics.
Australia is particularly hit because, even with dedicated deployments in their theoretical catchment area, these servers are often actually located on an entirely separate continent (us-west or Japan), and thus users do experience the performance impact of having packets cross half the globe.
> I bet that WhatsApp is one of the rare services you use which actually deployed servers to Australia. To me, 200ms is a telltale sign of intercontinental traffic.
So, I used to work at WhatsApp. And we got this kind of praise when we only had servers in Reston, Virginia (not at aws us-east1, but in the same neighborhood). Nowadays, Facebook is most likely terminating connections in Australia, but messaging most likely goes through another continent. Calling within Australia should stay local though (either p2p or through a nearby relay).
There's lots of things WhatsApp does to improve experience on low quality networks that other services don't (even when we worked in the same buildings and told them they should consider things!)
In no particular order:
0) offline first, phone is the source of truth, although there's multi-device now. You don't need to be online to read messages you have, or to write messages to be sent whenever you're online. Email used to work like this for everyone; and it was no big deal to grab mail once in a while, read it and reply, and then send in a batch. Online messaging is great, if you can, but for things like being on a commuter train where connectivity ebbs and flows, it's nice to pick up messages when you can.
a) hardcode fallback IPs for when DNS doesn't work (not if); see the sketch after this list
b) setup "0rtt" fast resume, so you can start getting messages on the second round trip. This is part of Noise pipes or whatever they're called, and TLS 1.3
c) do reasonable-ish things to work with MTU. In the old days, FreeBSD reflected the client MSS back to it, which helps when there's a tunnel like PPPoE and it only modifies outgoing syns and not incoming syn+ack. Linux never did that, and afaik, FreeBSD took it out. Behind Facebook infrastructure, they just hardcode the mss for i think 1480 MTU (you can/should check with tcpdump). I did some limited testing, and really the best results come from monitoring for /24's with bad behavior (it's pretty easy, if you look for it --- never got any large packets and packet gaps are a multiple of MSS - space for tcp timestamps) and then sending back client - 20 to those; you could also just always send back client - 20. I think Android finally started doing pMTUD blackhole detection stuff a couple years back, Apple has been doing it really well for longer. Path MTU Discovery is still an issue, and anything you can do to make it happier is good.
d) connect in the background to exchange messages when possible. Don't post notifications unless the message content is on the device. Don't be one of those apps that can only load messages from the network when the app is in the foreground, because the user might not have connectivity then
e) prioritize messages over telemetry. Don't measure everything, only measure things when you know what you'll do with the numbers. Everybody hates telemetry, but it can be super useful as a developer. But if you've got giant telemetry packs to upload, that's bad by itself, and if you do them before you get messages in and out, you're failing the user.
f) pay attention to how big things are on the wire. Not everything needs to get shrunk as much as possible, but login needs to be very tight, and message sending should be too. IMHO, http and json and xml are too bulky for those, but are ok for multimedia because the payload is big so framing doesn't matter as much, and they're ok for low volume services because they're low volume.
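To make (a) concrete, here's a hedged sketch in Node-ish JS (the hostname, port and IPs are placeholders from documentation ranges, not real endpoints): resolve normally, but fall back to a baked-in IP list so a broken resolver doesn't look like a dead service.

    const dns = require('dns').promises;
    const net = require('net');

    const FALLBACK_IPS = ['203.0.113.10', '203.0.113.11'];  // placeholder addresses

    async function connectToChatServer() {
      let addresses;
      try {
        addresses = (await dns.lookup('chat.example.com', { all: true })).map((a) => a.address);
      } catch {
        addresses = FALLBACK_IPS;           // DNS is broken; the service probably isn't
      }
      for (const address of addresses) {
        try {
          return await new Promise((resolve, reject) => {
            const socket = net.connect({ host: address, port: 5222, timeout: 5000 },
              () => resolve(socket));
            socket.on('error', reject);
            socket.on('timeout', () => { socket.destroy(); reject(new Error('timeout')); });
          });
        } catch { /* try the next address */ }
      }
      throw new Error('all endpoints unreachable');
    }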
WhatsApp is (or was) using XMPP for the chat part too, right?
When I was IT person on a research ship, WhatsApp was a nice easy one to get working with our "50+ people sharing two 256kbps uplinks" internet. Big part of that was being able to QoS prioritise the XMPP traffic which WhatsApp was a big part of.
Not having to come up with filters for HTTPS for IP ranges belonging to general-use CDNs that managed to hit the right blocks used by that app, was a definite boon. That, and the fact XMPP was nice and lightweight.
As far as I know google cloud messaging (GCN? GCM? firebase? Play notifications? Notifications by Google? Google Play Android Notifications Service?) also did/does use XMPP, so we often had the bizarre and infuriating very fast notifications _where sometimes the content was in the notification_ but when you clicked on it, other apps would fail to load it due to the congestion and latency and hardcoded timeouts TFA mentions.. argh.
But WhatsApp pretty much always worked, as long as the ship had an active WAN connection.... And that kept us all happy, because we could reach our families.
> WhatsApp is (or was) using XMPP for the chat part too, right?
It's not exactly XMPP. It started with XMPP, but XML is big, so it's tokenized (some details are published in the European Market Access documentation), and there's no need for interop with standard XMPP clients, so the login sequence is, I think, way different.
But it runs on port 5222 by default, I think (with fallbacks to ports 443 and 80).
I think GCM or whatever it's called today is plain XMPP (including, optionally, on the server to server side), and runs on ports 5228-5230. Not sure what protocol apple push is, but they use port 5223 which is affiliated with xmpp over tls.
So I think using a non-443 port was helpful for your QoS? But being available on port 443 is helpful for getting through blanket firewall rules. AOL used to run AIM on all the ports, which is even better at getting through firewalls.
I once got asked "what was a life changing company/product" and my answer was WhatsApp - to slightly bemused looks.
WhatsApp connected the world for free. Obviously they weren't the first to try, but when my (very globally distributed) family picked up WhatsApp in '09/'10 we knew we were onto something different. Being able to stay in touch with my brother halfway across the world in real time was very special. Nothing else at the time really competed. SMS was expensive and had latency. Email felt clunky and oddly formal - email clients don't feel "chatty". MSN was crap on mobile and you both had to be online. Ditto for Skype. For calls we even used to do this odd VOIP bridge where you would each call an endpoint for cheap international phone calls.
Meanwhile in 2012, I was able to install WhatsApp on my mum's old Nokia Symbian feature phone, use WhatsApp on a pay-as-you-go sim plan in Singapore communicating over WAP. The data consumption was so low I basically survived 2 months on maybe 1-2 top ups. Compare that with the other day where I turned on roaming on my phone (so I could connect to Singtel to BUY a roaming package) and my phone passively fetched ~50+ MB in seconds and I was hit with 400SGD of data charges (I was able to get them refunded)
I am very grateful to all the work and thought WhatsApp put into building an affordable global resilient communication network and I hope every one of the people involved got the payout they deserve.
This is a big one that makes low-bandwidth connections unusable in a lot of apps. The deluge of ad/tracking/telemetry SDKs' requests all being fired in parallel with the main business-logic requests makes them all saturate the slow pipe and usually leads to all of them timing out. By being third-party SDKs they may not even give you control of the underlying network requests nor the ability to buffer/delay/cache those requests.
One advantage of being Facebook in this case is that they're the masters of spyware and are unlikely to need to embed third-party spyware, so they can blend tracking/telemetry traffic within their business logic traffic and apply prioritization, including buffering any telemetry and sending it during less critical times.
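If you do control the telemetry yourself, deferring it isn't much code. A sketch of the "messages first, telemetry later" idea (the endpoint is invented): buffer events and only flush them when the browser is idle or the page is going away, via sendBeacon so they never compete with foreground requests.

    const telemetryBuffer = [];
    let idleFlushScheduled = false;

    function track(event) {
      telemetryBuffer.push({ ...event, at: Date.now() });
      // Flush later, when the browser reports it has nothing better to do.
      if (!idleFlushScheduled && 'requestIdleCallback' in window) {
        idleFlushScheduled = true;
        requestIdleCallback(() => { idleFlushScheduled = false; flushTelemetry(); },
          { timeout: 60000 });
      }
    }

    function flushTelemetry() {
      if (telemetryBuffer.length === 0) return;
      const payload = JSON.stringify(telemetryBuffer.splice(0));
      navigator.sendBeacon('/telemetry', payload);   // fire-and-forget, low priority
    }

    // Also flush as the page is being hidden or closed, so nothing is lost.
    document.addEventListener('visibilitychange', () => {
      if (document.visibilityState === 'hidden') flushTelemetry();
    });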
> Why is there is WhatsApp for most commonly used devices, but iPads?
I was frustrated by this a while back, so I asked the PMs. Basically when investing engineering effort WhatsApp prioritises the overall number of users connected, and supporting iPads doesn't really move that metric, because (a) the vast majority of iPad owners also own a smartphone, and (b) iPads are pretty rare outside of wealthy western cities.
I've been gone too long for accurate answers, but I can guess.
For iPad, I think it's like the sibling notes; expected use is very low, so it didn't justify the engineering cost while I was there. But I see some signs it might happen eventually [1]; WhatsApp for Android Tablets wasn't a thing when I was there either, but it is now.
For the four device limit, there's a few things going on IMHO. Synchronization is hard and the more devices are playing, the harder it is. Independent devices makes it easier in some ways because the user devices don't have to be online together to communicate (like when whatsapp web was essentially a remote control for your phone), but it does mean that all of your communications partner's devices have to work harder and the servers have to work harder, too.
Four devices cover your phone, a desktop at home and at work, and a laptop; but really most users only have a phone. Allowing more devices makes it more likely that you'll lose track of one, or not use it for long enough that it's lost sync, etc.
WhatsApp has usually focused on product features that benefit the most users, and more than 4 devices isn't going to benefit many people, and 4 is plenty for internal use (phone, prod build, dev build, home computer). I'm sure they've got metrics of how many devices are used, and if there's a lot of 4 device users and enough requests, it's a #define somewhere.
Yeah, it's very very noticeable that WhatsApp is architected in a way that makes experience great for all kind of poor connectivity scenarios that most other software just... isn't.
Most global companies (at least the US-based ones) have deployed in India, where I'm at right now. I suppose a billion people online is too big of a market to ignore (or not; I really don't know). Or there are services that I'm completely unaware of that's not in India.
Internet's pretty fast as well. Much faster than a certain conspicuous European country you'd expect to have fast internet ;)
WhatsApp has a massive audience in developing countries where it's normal for people to have slower internet and much slower devices. That perspective being so embedded in their development goals certainly has given WhatsApp good reason to be the leading messaging platform in many countries around the world
LOL 8kbps. Damn. That takes me back. I built the first version of one of the world's largest music streaming sites on a 9.6kbps connection.
I was working from home (we had no offices yet) and my cable Internet got cut off. My only back up was a serial cable to a 2G Nokia 9000i. I had to re-encode a chunk of the music catalog at 8kbps so I could test it from home before I pushed the code to production.
Nokia 9000i, so you had to work on CSD (which is usually billed per-minute, like dial-up), not even GPRS. How much did that cost you? :P
BTW, an interesting thing is that some/most carriers allow you to use CSD/HSCSD over 3G these days, and you can establish a data CSD connection between two phone numbers, yielding essentially a dedicated L2 pipe which isn't routed over the internet. It can have much lower latency and jitter, if that's what you need. Some specialized telemetry still uses that; however, as 3G is slowly getting phased out, it will probably have to change.
God, the cost was probably horrid, but I was connecting in, setting tasks running and logging out. This was late 1999 in the UK, so per-minute prices were high. Also, these were Windows servers, so I had to sluggishly RDP into them, no nice low-bandwidth terminals.
Even wealthy countries will have dead zones (Toronto subway until recently, and like 90% of the landmass), and at least in Canada, “running out of data” and just having none left (or it being extremely expensive) was relatively common until about the last year or two when things got competitive (finally!).
Still have an entire territory where everything is satellite fed (Nunavut), including its capital.
Wow. I didn't know that Nunavut is entirely satellite fed. That's very interesting to know, thanks. Do you have some more info, though? What kind of satellite - geostationary, LEO? Also, which constellation carries the most traffic from Nunavut?
Unsure if other telcos have their own setups, but:
> Northwestel, one of the biggest internet service providers in the North, said it provides broadband service for all Nunavut communities using Telesat's Telstar 19 VANTAGE high-throughput satellite. After the satellite was deployed in July 2018, Northwestel said it would significantly improve broadband connectivity in the territory, increasing speeds to 15 megabits per second.
It's not only the services themselves. I have a very slow mobile connection, and one thing that bothered me immensely is downloading images in the browser:
How is it that when I go to a .jpg URL to view an image in the browser, it takes way longer, and sometimes times out, compared to hopping over to Termux and running wget?
I had this problem with both Firefox and Chrome-based browsers.
Note that even the wget download usually takes 10-30 seconds on my mobile connection.
Too many services do stupid image transcoding today. While the URL says jpg, the service will decide that because your browser supports WebP, what you really must have wanted was a WebP. It'll then either transcode on the fly, send you WebP data for the image, or send you a redirect. This is rarely what you actually want.
With wget it sends you the source you actually requested and doesn't try to get clever (stupid). Google likes WebP so that means everyone needs to join the WebP cargo cult even if it means transcoding a lossy format to another lossy format.
You can try going into proxy settings and setting to "none" instead of autodetect. Also, the dns server used by the browser could be different (and slower).
I guess in London you get wifi only at stops; it's the same in Berlin. In Helsinki the wifi connection is available inside the trains and in the stations, so you never lose the connection while moving. I never understood the decision in Berlin to do this; why not just provide internet inside the trains...
And yeah, most of the internet works very badly when you drop the network all the time...
WiFi at a stop is as easy as putting up a few wireless routers, it's a bit more complex than at home but the same general idea.
Wifi inside the trains involves much more work, and to get them to ALSO be seamless across the entire setup - even harder. Easily 10x or 100x the cost.
It's sad, because the Internet shouldn't be that bad when the network drops all the time; it should just be slower as it waits to send good data.
Berlin did not have mobile connections inside the tunnels until very recently (this year, I believe). This included the trains not being connected to any outside network. Thus wifi on the subway was useless to implement.
They did if you were on o2, that's why I'm still with Aldi Talk (they use the o2 network); they've had LTE through the entire network for a while now. The new thing is 5G for everyone.
Despite Berlin's general lack of parity with modern technology, I've never actually had a problem with internet access across the ubahn network in the past decade. I noticed that certain carriers used to have very different availability when travelling and so switched to a better one, but I was always surprised at being able to handle mobile data whilst underground.
Wow! I was in Berlin last week and kept losing connection... like all the time. I use 3 with a Swedish plan. In Sweden, it literally never drops, not on trains, not on metro, not on faraway mountains... it works everywhere.
I used to have spotty coverage underground with Vodafone; when I switched to Telekom, internet suddenly magically worked underground on the routes I used.
I believe someone published a map of the data coverage of different providers on the berlin ubahn, but probably outdated now
Yeah, admittedly this year I've also started experiencing holes on the ringbahn (strangely and consistently around frankfurter allee), but the ubahn has been fine.
I'm with sim.de which I believe is essentially an O2 reseller (apn references o2)
Summary: since 2024-05-06, users of all networks also get LTE in the U-Bahn thanks to a project between BVG and Telefónica (not surprising that Telefónica deployed the infra, since they had the best U-Bahn LTE coverage beforehand)
Yes, right now it's mostly just wifi at stations only. However, they're deploying 4G/5G coverage in the tunnels and expect 80% coverage by the end of 2024 [1].
So… you can expect apps developed by engineers in London to get much worse on slow internet in 2025. :-)
The London Underground not having any connectivity for decades after other metro systems showed only that high connectivity during a commute isn't necessary.
I travel a lot. Slow internet is pretty common. Also, right now my mobile data ran out and I'm capped at 8 kbps.
Websites that are Just Text On A Page should load fast, but many don't. Hacker News is blazing fast, but Google's API docs never load.
The worst problem is that most UIs fail to account for slow requests. Buttons feel broken. Things that really shouldn't need megabytes of data to load still take minutes to load or just fail. Google Maps' entire UI is broken.
I wish that developers spent more time designing and testing for slow internet. Instead we get data hungry websites that only work great on fast company laptops with fast internet.
---
On a related note, I run a website for a living, and moving to a static site generator was one of the best productivity moves I've made.
Instead of the latency of a CMS permeating everything I do, I edit text files at blazing speed, even when fully offline. I just push changes once I'm back online. It's a game changer.
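For the curious, the build step really is this small. A minimal sketch, assuming a hypothetical content/ and dist/ layout and the `marked` package for Markdown rendering (swap in whatever generator or renderer you actually use):

    // build.ts — minimal static-site build sketch.
    // content/*.md in, dist/*.html out; template is deliberately bare.
    import { readdir, readFile, writeFile, mkdir } from "node:fs/promises";
    import { marked } from "marked"; // assumed Markdown renderer

    const template = (title: string, body: string) =>
      `<!doctype html><meta charset="utf-8"><title>${title}</title>${body}`;

    async function build() {
      await mkdir("dist", { recursive: true });
      for (const name of await readdir("content")) {
        if (!name.endsWith(".md")) continue;
        const md = await readFile(`content/${name}`, "utf8");
        const html = template(name.replace(/\.md$/, ""), await marked.parse(md));
        await writeFile(`dist/${name.replace(/\.md$/, ".html")}`, html);
      }
    }

    build(); // runs entirely offline; push/rsync the dist/ folder once back online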
Google used to be good about slow apps. When I used Gmail on the school computers back in the day, the site would load so slowly that it would detect this and load a basic HTML version instead.
Nowadays I download a 500 MB Google Maps cache on my phone and it's like there's no point: everything still has to fetch and pop in.
I had a client that I set up with a static site generator. Sadly the client changed their FTP password to something insecure and someone FTP'd in and added a tiny piece of code to every HTML file!
I used to work on the team that served those docs. Due to some unfortunate technical decisions made in the name of making docs dynamic/interactive they are almost entirely uncached. Basically every request you send hits an AppEngine app which runs Python code to send you back the HTML.
So even though it looks like it should be fast, it’s not.
>Websites that are Just Text On A Page should load fast, but many don't. Hacker News is blazing fast, but Google's API docs never load.
Things aren't always that simple.
I'm in the UK, and my ping time to news.ycombinator.com is 147ms - presumably because it's not using a CDN and is hosted in the USA.
cloud.google.com on the other hand has an 8ms ping time.
So yes, Hacker News is a simple, low-JS page - but there can be other factors that make it feel slow for users in some places.
This is despite me being in a privileged situation, having an XGS-PON fibre connection providing symmetric 8Gbps speeds.
HN loads quickly for me _despite_ the 147 ms. I guess partially because it doesn't need 20 roundtrips to send useful content to me.
At some point, I wrote a webapp (with one specific, limited function, of course) and optimized it to the point where loading it required one 27 kB request. And then turned up the cwnd somewhat, so that it could load in a single RTT :-) Doesn't really matter if you're in Australia then, really.
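To illustrate the shape of it (a sketch, not the original code; file names are made up): inline everything into one HTML response so the whole app arrives in a single request. The initcwnd tweak itself is a server-side route/kernel setting, outside the app code.

    // serve.ts — serve a self-contained app as one small response.
    // Hypothetical file names; the point is: one request, everything inlined.
    import { createServer } from "node:http";
    import { readFileSync } from "node:fs";

    const page = `<!doctype html><meta charset="utf-8">
    <style>${readFileSync("app.css", "utf8")}</style>
    <body><div id="app"></div>
    <script>${readFileSync("app.js", "utf8")}</script>`;

    createServer((req, res) => {
      res.writeHead(200, {
        "Content-Type": "text/html; charset=utf-8",
        // Let repeat visits skip the network entirely.
        "Cache-Control": "public, max-age=86400",
      });
      res.end(page);
    }).listen(8080);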
I have experience taking a webpage with a global audience from being served from random US locations (east/west/Texas, no targeting) and pretty unbloated, to being served everywhere with GeoDNS and twice the page weight... Load times were about the same before and after. If we could have kept the low bloat, I expect we would have seen a noticeable improvement in load times (but it wasn't important enough to fight over).
I gave a ride to a fellow who'd just come off the ice and was hitchhiking. He said the blog author was somewhat resented by others, because his blog posts, as amazing as they are, tended to hog what limited bandwidth they already had while the images uploaded; but he was given priority because the administration realised the PR value of it.
Which I thought ties into the discussion about slow internet nicely.
I was wondering about the practicalities indeed. Not everyone knows when their OS or applications decide it's now a great time to update. You'll have a phone in your pocket that is unnecessarily using all the bandwidth it can get its hands on. Or maybe you're using the phone, but just don't realise that watching a 720p video, while barely functional, also means the person trying to load a video after you can't watch even 480p anymore (you might not notice, because you've got buffer and they'll give up before their buffer fills enough to start playing).
It seems as though there should at least be accounting, so you know what % of traffic went to you in the last hour (and a reference value of bandwidth_available divided by connected_users, so you know what % would have been your share if everyone had equal need of it). If not that, then a system that deprioritises everyone unless they've punched the button that says "yes, I'm aware what bandwidth I'm using in the next [X≤24] hour(s) and actually need it, thank you", which sets the QoS priority for your MAC/IP address back to normal.
This kind of scenario screams for local-first applications and solutions, and it's the reason why the Internet was created in the first place [1][2]. People have been duped by Salesforce's misleading "no software" advertising slogan, which goes against the very foundation and spirit of the Internet. For most of the Internet's life, going back to 1969, megabit speeds have been the anomaly, not the norm, and its first killer application, email messaging (arguably still the best Internet application), is local-first [3]. Ironically, the culprit application that the author was lamenting in the article is a messaging app.
[1] Local-first software: You own your data, in spite of the cloud:
So I've hacked a lot on networking things over the years and have spent time getting my own "slow internet" cases working. Nothing as interesting as McMurdo by far but I've chatted and watched YouTube videos on international flights, trains through the middle of nowhere, crappy rural hotels, and through tunnels.
If you have access/the power (since these tend to be power hungry) to a general-purpose computing device and are willing to roll your own, my suggestion is to use NNCP [1]. NNCP can take data, chunk it, then send it. It also comes with a sync protocol that uses Noise (though I can't remember if this enables 0-RTT) over TCP (no TLS needed, so only 1.5 RTT spent establishing the connection) and sends chunks, retrying failed chunks along the way.
NNCP supports feeding data as stdin to a remote program. I wrote a YouTube downloader, a Slack bot, a Telegram bot, and a Discord bot that read incoming data and interact with the appropriate services. On the local machine I have a local Matrix (Dendrite) server and a bot running which sends data to the appropriate remote service via NNCP. You'll still want to hope (or experiment such) that MTU/MSS along your path is as low as possible to support frequent TCP-level retries, but this setup has never really failed me wherever I go and lets me consume media and chat.
The most annoying thing on an international flight is that the NNCP endpoint isn't geographically distributed and depending on the route your packets end up taking to the endpoint, this could add a lot of latency and jitter. I try to locate my NNCP endpoint near my destination but based on the flight's WiFi the actual path may be terrible. NNCP now has Yggdrasil support which may ameliorate this (and help control MTU issues) but I've never tried Ygg under these conditions.
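For anyone curious what the glue looks like, here's a rough sketch of the remote handler: it reads a URL from stdin, downloads with yt-dlp, and queues the result back over NNCP. The spool path, node name, and the exact nncp-file invocation are assumptions about a setup like mine; adjust for your own NNCP config.

    // ytdl-handler.ts — invoked on the remote node (e.g. via nncp-exec).
    // Reads a video URL from stdin, downloads it, queues the file back home.
    import { execFile } from "node:child_process";
    import { promisify } from "node:util";
    import { mkdtempSync, readdirSync } from "node:fs";
    import { tmpdir } from "node:os";
    import { join } from "node:path";

    const run = promisify(execFile);

    function readStdin(): Promise<string> {
      return new Promise((resolve) => {
        let data = "";
        process.stdin.on("data", (c) => (data += c));
        process.stdin.on("end", () => resolve(data));
      });
    }

    async function main() {
      const url = (await readStdin()).trim();
      const dir = mkdtempSync(join(tmpdir(), "ytdl-"));
      // Constrain the format so the file is small enough to ship over a slow link.
      await run("yt-dlp", ["-f", "worst[ext=mp4]", "-o", join(dir, "%(id)s.%(ext)s"), url]);
      for (const f of readdirSync(dir)) {
        // Assumed invocation/node name: queue the file for the node called "laptop".
        await run("nncp-file", [join(dir, f), "laptop:"]);
      }
    }

    main().catch((e) => { console.error(e); process.exit(1); });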
Hah no, but maybe I should. The reason I haven't is that most of my work is just glue code. I use yt-dlp to do Youtube downloads, make use of the Discord, Slack and Telegram APIs to access those services. I run NNCP and the bots in systemd units, though at this point I should probably bake all of these into a VM and just bring it up on whichever cloud instance I want to act as ingress. Cloud IPs stay static as long as the box itself stays up so you don't need to deal with DNS either. John Goerzen has a bunch of articles about using NNCP [1] that I do recommend interested folks look into but given the popularity of my post maybe I should write an article on my setup.
FWIW I think it's fine that major services do not work under these conditions, though I wish messaging apps did. Both WhatsApp and Telegram IME are well tuned for poor network conditions and do take a lot of these issues into account (a former WA engineer comments in this thread and you can see their attention to detail.) Complaining about these things a lot is sort of like eating out at restaurants and complaining about how much sodium and fat goes into the dishes: restaurants have to turn a profit, and catering to niche dietary needs just isn't enough for them to survive. You can always cook at home and get the macros you want. But to "cook" your own software you need access to APIs, and I'm glad Telegram, Slack, and Discord make this fairly easy. For YouTube, yt-dlp does the heavy lifting, but I wish it were easier, at least for Premium subscribers, to access YouTube via API.
I find Slack to be the absolute worst offender networking-wise. I have no idea how, now that Slack is owned by Salesforce, the app experience can continue to be so crappy on network usage. It's obvious that management there does not prioritize the experience under non-ideal conditions in any way possible. Their app's usage of networks is almost shameful in how bad it is.
I had a similar experience as the author on a boat in the south pacific. Starlink was available but often wasn't used because of its high power usage (60+ watts). So we got local SIM cards instead which provided 4G internet in some locations and EDGE (2G) in others.
EDGE by itself isn't too bad on paper - you get a couple dozen kilobits per second. In reality, it was much worse. I ran into apps with short timeouts that would have worked just fine, if the authors had taken into account that loading can take minutes instead of milliseconds.
An issue that the anonymous blog author didn't have was metered connections. Doing OS or even app upgrades was pretty much out of the question for cost reasons. Luckily, every few weeks or so, we got to a location with an unmetered connection to perform such things. But we got very familiar with the various operating systems' ways to mark connections as metered/unmetered, disable all automatic updates, and save precious bandwidth.
The South Pacific should be very sunny. I guess that you didn't have enough solar panels to provide 60+ watts. I am genuinely surprised.
And "local SIM cards" implies that you set foot on (is)lands to buy said SIM cards. Where did you only get 2G in the 2020s? I cannot believe any of this is still left in the South Pacific.
My previous smartphone supported 4G/3G/Edge, but for some reason the 4G didn't work. At all, ever, anywhere (not a provider/subscription or OS settings issue, and WiFi was fine).
In my country 3G was turned off a while ago to free up spectrum. So it fell back to Edge all the time.
That phone died recently. I'm temporarily using an older phone which also supports 4G/3G/Edge, and where the 4G bit works. Except... in many places where I hang out (rural / countryside) 4G coverage is spotty or non-existent. So it also falls back to Edge most of the time.
Just the other day (while on WiFi) I installed Dolphin as a lightweight browser alternative. Out in the countryside, it doesn't work ("no connection"), even though Firefox works fine there.
Apps won't download unless on WiFi. Not even if you're patient: downloads break somewhere, don't resume properly, or what's downloaded doesn't install because the download was corrupted. None of these issues over WiFi. Same with some websites: roundtrips take too long, server drops the connection, images don't load, etc etc.
Bottom line: app developers or online services don't (seem to) care about slow connections.
But here's the thing: for the average person in this world, fast mobile connections are still the exception, not the norm. Big city / developed country / 4G or 5G base stations 'everywhere' doesn't apply to a large % of the world's population (who do own smartphones these days, even if low-spec ones).
Note that some low-tier mobile plans also cap connection speeds. Read: a slow connection even if there's 4G/5G coverage. There's a reason internet cafés are still a thing around the world.
I live in a developed country with 4G/5G everywhere and it's still no better than the 3G era I remember. Modern apps and sites have gobbled up the spare bandwidth, so the general UX feels the same to the user in terms of latency. On top of that there are frequent connection dropouts, even with the device claiming a decent connection to the tower. On mobile internet, 4G often can't load a modern junked-up news or recipe site in any reasonable amount of time, and sometimes not at all.
In the Marquesas and Tuamotus, you don't see a lot of 4G reception, no matter what Vini's pretty map claims.
Re: Sunny - there's quite a bit of cloud cover and other devices onboard like the water maker and fridge (more important than Starlink!) also need a lot of power.
> Low bandwidth, high latency connections need to be part of the regular testing of software.
One size does not fit all. It would be a waste of time and effort to architect (or redesign) an app just because a residual subset of potential users might find themselves on a boat in the middle of the Pacific.
Let's keep things in perspective: some projects even skip testing WebApps on more than one browser because they deem that wasteful and an unjustified expense, even though it's trivial to include them in a test matrix, and that's UI-only work.
Websites regularly break because I don't have perfect network coverage on my phone every single day. In a lot of places, I don't even have decent reception. This is in Germany, in and around a major city.
Why do you think this only applies to people on a boat?
> Websites regularly break because I don't have perfect network coverage on my phone every single day.
Indeed, that's true. However, the number of users that go through similar experiences is quite low, and even those who do are always an F5 away from circumventing the issue.
I repeat: even supporting a browser other than the latest N releases of Chrome is a hard sell to some companies. Typically the test matrix is limited to N versions of Chrome and the latest release of Safari when Apple products are supported. If budgets don't stretch even to cover the basics, of course that even rarer edge cases such as a user accessing a service through a crappy network will be far from the list of concerns.
I still think engineering for slow internet is really important, and massively under appreciated by most software developers, but ... LEO systems (like Starlink, especially StarLink) essentially solve the core problems now. I did an Arctic transit (Alaska to Norway) in September and October of 2023, and we could make FaceTime video calls from the ship, way above the Arctic Circle, despite cloud cover, being quite far from land, and ice. This was at the same time OP was in Antarctica. Whatever that constraint was, it's just contracting for the service and getting terminals to the sites. The polar coverage is relatively sparse, but still plenty, due to the extraordinarily low population.
> I still think engineering for slow internet is really important, and massively under appreciated by most software developers, but ... LEO systems (like Starlink, especially StarLink) essentially solve the core problems now.
I don't think that this is a valid assessment of the underlying problem.
Slow internet means many things, and one of them is connection problems. In connection-oriented protocols like TCP this means slowness induced by drop of packets, and in fire-and-forget protocols like UDP this means your messages don't get through. This means that slowness might take multiple forms, such as low data rates or moments of high throughput followed by momentary connection drops.
One solid approach to deal with slow networks is supporting offline mode, where all data pushes and pulls are designed as transactions that take place asynchronously, and data pushes are cached locally to be retried whenever possible. This brings additional requirements such as systems having to support versioning and conflict resolution.
Naturally, these requirements permeate into additional UI requirements, such as support for manual syncing/refreshing, displaying network status, toggling off actions that are meaningless when the network is down, relying on eager loading to remain usable while offline, etc.
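A minimal sketch of the "data pushes cached locally and retried" part, in the browser. The endpoint and storage keys are made up, and versioning/conflict resolution is left out, as noted above:

    // offline-queue.ts — queue writes locally, retry when the network cooperates.
    type Pending = { id: string; url: string; body: unknown; attempts: number };

    const KEY = "pending-writes";
    const load = (): Pending[] => JSON.parse(localStorage.getItem(KEY) ?? "[]");
    const save = (q: Pending[]) => localStorage.setItem(KEY, JSON.stringify(q));

    export function enqueue(url: string, body: unknown) {
      save([...load(), { id: crypto.randomUUID(), url, body, attempts: 0 }]);
      void flush();
    }

    export async function flush() {
      for (const item of load()) {
        try {
          const res = await fetch(item.url, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(item.body),
          });
          if (!res.ok) throw new Error(`HTTP ${res.status}`);
          save(load().filter((p) => p.id !== item.id)); // acknowledged, drop it
        } catch {
          item.attempts++;
          save(load().map((p) => (p.id === item.id ? item : p)));
          // Back off and retry later rather than hammering a dying link.
          setTimeout(flush, Math.min(60_000, 2 ** item.attempts * 1000));
          return;
        }
      }
    }

    window.addEventListener("online", () => void flush()); // retry on reconnect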
> I don't think that this is a valid assessment of the underlying problem.
Inmarsat is an insecure, 2 Mbps (at best) connection with satellites at 22236 miles above Earth and a latency of about 900-1100 ms.
Starlink is a secure, 100 Mbps (typical) connection with satellites at 342 miles above Earth and a latency of about 25 ms.
Odds of getting a video link on Inmarsat are low, and even if you do, it's potato quality. Source - have been using these systems operationally since the 1990s.
Delay/disruption tolerant networking (DTN) seeks to address these kinds of problems, using alternative techniques and protocols: store-and-forward, Bundle protocols and the Licklider Transmission Protocol. Interesting stuff, enjoy!
There's a diner in SF I frequent. I usually sit 15 feet from the door, on a busy retail corridor, with Verizon premium network access. My iPhone XS reports two bars of LTE, but there's never enough throughput for DNS to resolve. Same at my dentist's office. I hope to live in a post-slow-internet world one day, but that is still many years away.
(The XS does have an Intel modem, known to be inferior to the Qualcomm flagship of the era)
I get 400 Mbps down standing at the door of that same diner. My understanding is that 4G bands are repurposed for 5G in rough proportion to the usage of 4G vs 5G devices at that tower, plus there’s some way to use a band for both. In any case I was having these indoor performance issues back in 2019. I’m pretty sure it’s an Intel issue, and any Qualcomm modem would be fine.
I see this in my french city, there's a particular spot on my commute where my phone (mediatek) will report 2 bars of 5G but speeds will actually be around 3G. I've also noticed other people on the tram having their videos buffer at that spot, so it's not just me. The carriers do not care, of course.
I think there are just some areas where operational conditions make the towers break in some specific way.
How do LEO satellite help me when a commuter train full of people connecting to the same AP enters the station I'm in? I live in one of the most densely populated places on Earth, chock-full of 5G antennas and wifi stations. Yet I still feel it when poorly engineered websites trip up on slow/intermittent connections.
Pole doesn't have Starlink. McMurdo does. There are reasons.
Polar coverage from GEO satellites is limited because of how close to the horizon GEO satellites sit as seen from Pole. Pole uses old GEO satellites which are low on fuel and have relatively large inclinations... so you can only talk to them for ~6 hours out of every 24.
Idealistic! I think a lot of countries are going to block starlink in the future by interfering with the signals, much like the success some countries are having interfering so heavily with GPS. Their governments won't want uncensored web, or an American company being the gateway to the internet. They'll maintain whatever territorial networks they have now and the speed question is still relevant.
Also the number of people worldwide whose only access to the internet is a $100 android phone with older software and limited CPU should be considered
Even if people want to / are allowed to, I'm trying to imagine how well starlink could plausibly function if 2 billion people switched from their sketchy terrestrial service to starlink.
As a luxury product used by a few people, maybe it "solves" the problem, but I don't think this is a very scalable solution.
Slow Internet isn't just remote places, it also crops up in heavily populated urban areas. It's sad that you had better connectivity above the Arctic circle than the typical connectivity with hotel WiFi. Bad connectivity also happens with cellular connections all over the place.
Starlink has its own networking issues thanks to a lot of latency jitter and 0.5% or more packet loss. See the discussion from last month: https://news.ycombinator.com/item?id=40384959
The biggest issue for Starlink at the poles is, as you say, very sparse coverage. Also I suspect Starlink has to usually relay polar packets between satellites, not just a simple bent pipe relaying to a ground station.
FYI, Space Norway will launch two satellites this summer on a Falcon 9 that will be going in a HEO orbit, among the payloads on the satellites is a Viasat/Inmarsat Ka-band payload which will provide coverage north of 80 degrees. Latency will probably be GEO+, but coverage is coverage I guess. :-)
The Braid Protocol allows multiple synchronization algorithms to interoperate over a common network protocol, which any synchronizer's network messages can be translated into. The current Braid specification extends HTTP with two dimensions of synchronization:
Level 0: Today's HTTP
Level 1: Subscriptions with Push Updates
Level 2: P2P Consistency (Patches, Versions, Merges)
Even though today's synchronizers use different protocols, their network messages convey the same types of information: versions in time, locations in space, and patches to regions of space across spans of time. The composition of any set of patches forms a mathematical structure called a braid—the forks, mergers, and re-orderings of space over time.
You might be surprised at just how elegantly HTTP extends into a full-featured synchronization protocol. A key to this elegance is the Merge-Type: this is the abstraction that allows a single synchronization algorithm to merge across multiple data types.
As an application programmer, you will specify both the data types of your variables (e.g. int, string, bool) and also the merge-types (e.g. "this merges as a bank account balance, or a LWW unique ID, or a collaborative text field"). This is all the application programmer needs to specify. The rest of the synchronization algorithm gets automated by middleware libraries that the programmer can just use and rely upon, like his compiler, and web browser.
I'd encourage you to check out the Braid spec, and notice how much we can do with how little. This is because HTTP already has almost everything we need. Compare this with the WebDAV spec, for instance, which tries to define versioning on top of HTTP, and you'll see how monstrous the result becomes. Example here:
Braid is backwards-compatible with today's web, works in today's browsers, and is easy to add to existing web applications. You can use Braid features in Chrome with the Braid-Chrome extension.
Grump take: More complex technology will not fix a business-social problem. In fact, you have to go out of your way to make things this shitty. It’s not hard to build things with few round trips and less bloat, it’s much easier. The bloat is there for completely different reasons.
Sometimes the bloat is unnoticeable on juicy machines and fast internet close to the DC. You can simulate that easily, but it requires the company to care. Generally, ad-tech and friends cares very little about small cohorts of users. In fact, the only reason they care about end users at all is because they generate revenue for their actual customers, ie the advertisers.
> Generally, ad-tech and friends cares very little about small cohorts of users.
Sure, and it will keep being that way. But if this gets improved at the transport layer, seems like a win.
As an analogy, if buses are late because roads are bumpy and drivers are lousy, fixing the bumpy road may help, even if drivers don't change their behavior.
> But if this gets improved at the transport layer, seems like a win.
What do you mean? TCP and HTTP is already designed for slow links with packet loss, it’s old reliable tech from before modern connectivity. You just have to not pull in thousands of modules in the npm dep tree and add 50 microservice bloatware, ads and client side “telemetry”. You set your cache-control headers and etags, and for large downloads you’ll want range requests. Perhaps some lightweight client side retry logic in case of PWAs. In extreme cases like Antarctica maybe you’d tune some tcp kernel params on the client to reduce RTTs under packet loss. There is nothing major missing from the standard decades old toolbox.
Of course it’s not optimal, the web isn’t perfect for offline hybrid apps. But for standard things like reading the news, sending email, chatting, you’ll be fine.
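To make one piece of that toolbox concrete, here's a sketch of resuming a large download with HTTP Range requests against any server that honours Range. A real version would also verify total length or a checksum; error handling is simplified.

    // resume-download.ts — resumable download via Range requests (Node 18+ fetch).
    import { createWriteStream, statSync, existsSync } from "node:fs";

    async function download(url: string, dest: string) {
      for (;;) {
        const have = existsSync(dest) ? statSync(dest).size : 0;
        const res = await fetch(url, {
          headers: have > 0 ? { Range: `bytes=${have}-` } : {},
        });
        // A 416 here usually means the file is already complete.
        if (have > 0 && res.status !== 206) throw new Error("server did not honour Range");
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        const out = createWriteStream(dest, { flags: "a" }); // append past what we have
        try {
          for await (const chunk of res.body as any) out.write(chunk);
          out.end();
          return; // done (verify size/checksum in a real version)
        } catch {
          out.end(); // connection dropped mid-stream: loop around and resume
        }
      }
    }

    download("https://example.com/big.iso", "big.iso"); // placeholder URL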
It really wouldn't. Lousy drivers are a way thinner bottleneck than the roads.
But it will improve the services where the drivers are good.
If the protocol is actually any good (its goals by themselves already make me suspicious it won't be), the well-designed web-apps out there can become even better designed. But it absolutely won't improve the situation people are complaining about.
>As an analogy, if buses are late because roads are bumpy and drivers are lousy, fixing the bumpy road may help, even if drivers don't change their behavior
No. It will make things worse, because now lousy drivers are no longer constrained by the bumpy road, so they will become even lousier.
Case in point: software is shittier than ever, despite the 100x increase in computer performance since the 90s.
Those of us working on apps, websites, etc, need to remember that there are lots of people out there that are not connected to the fast Wi-Fi or fibre connections we have.
Here in the UK, some networks have started shutting down 3G. Some have 2G as a low-energy fallback, but we're supposed to use 4G/5G now. The problem is that 4G is not available everywhere yet; some areas until recently only had good 3G signal. So I've been dropping to 2G/EDGE more often than I'd like, and a lot of stuff just stops working. A lot of apps are just not tested in slow, high latency, high packet loss scenarios.
3G devices should still work over 2G. It's much slower, but it works and should do so until well into 2030 in the UK.
The problem with 3G as I understand it is that it uses more power and is less efficient than 4G/5G. They're starting to re-deploy the 3G bands as 4G/5G, so the other Gs will eventually benefit from this shutdown.
The lower-bandwidth connections get completely saturated by modern phones with modern data allowances. Back in the day I had 500MB a month on 3G, for instance. I can use that in a few minutes these days.
Here in the USA a great number of networks will drop back to 2G when their data plan runs out. And most poor people are on really low data limits, so they spend most of the month on 2G.
I had a similar problem on a ship where many users shared a 2M VSAT Internet connection. A few tricks made the Internet less painful:
- block Windows Update by answering DNS queries for Microsoft update endpoints with NXDOMAIN.
- use a captive portal to limit user session duration, so that unattended devices won't consume bandwidth.
- with freebsd dummynet, pfSense can share bandwidth equally among users. It can also share bandwidth by weight among groups. It helps.
- inside the Arctic circle, the geosynchronous satellites are very low on the horizon and were blocked frequently when the ship turned. I was able to read the ship's gyro and the available satellites from the VSAT controller and generate a plot showing the satellite blockage. It was so popular that everyone was using it to forecast the next satellite coming online.
This is why I find it dreadful that evangelists here are heavily promoting live$whatever technology where every local state change requires at least one server roundtrip, or “browsers support esm now, bundling is a thing of the past!” etc. You don’t need to be at Antarctica to feel the latencies caused by the waterfall of roundtrips, or roundtrip on every click, as long as you’re a mere 200ms from the server, or in a heavily congested place.
It is not just bandwidth or latency, and it's not just Antarctica. Not everywhere in the world do you have the best connectivity. Even with not-so-bad connectivity, you may have environmental interference, shared use, or just be far from the wifi router. You may have a browser running on a not-so-powerful CPU, with other things chewing up the processor or the available memory, so heavy JS sites may suffer or not work at all. You don't know what is on the other side; putting high requirements there may make your solution unfit for a lot of situations.
Things should be improving (sometimes fast, sometimes slowly) in that direction, but still is not something guaranteed everywhere, or at least in every place that your application is intended or needed to run. And there may be even setbacks in that road.
I run into this daily on my phone. Where I live, it's hilly, the network is usually saturated, my speeds are usually crap, and some sites more complicated than HN sometimes cannot load at all without timing out.
Mosh and NNCP will help a lot, but you need a good sysadmin to set NNCP up as the mail MUA/MTA backend to spool everything efficiently.
NNCP is an expert-level skill, but it lets your data be sent over very unreliable channels.
Also, relying on proprietary OSes is not recommended. Apple and iOS are disasters to work with in remote, isolated places. No wonder no one in Europe uses Apple for any serious work except iOS development; most science and engineering setups will use anything else as a backend.
Most offline-friendly distros, like Ubuntu or Trisquel, have methods for downloading software packages for offline installs.
On chat: Slack and Zoom are disasters. Any SIP or Jabber client with VoIP support will be far more reliable, as it can use several different protocols that need far less bandwidth (OPUS for audio) without having to download tons of JS; even if you cached the web app, it will still download tons of crap in the background.
And, again, distros like Debian have a full offline DVD/Blu-ray pack, which ironically can be better if you get it by mail. Or you can just use the downloaded/stored ISO files with
apt-cdrom add -m /path/to/your/file.iso
This way everything from Debian can be installed without even having an Internet connection.
Well, statistically average end-user internet connection in Europe is much faster than in the US. Maybe outside some places like most of western Germany, but these are an exception. Europe has really good bandwidth speeds, overall.
I absolutely agree with the rest, though, including the part saying any "serious" software will have such features (and better support in general), and I second the examples you gave.
The USA is huge and sparse, but Spain is the same; it's like USA.rar. Crowded coasts, lots of mountains, and a rough interior almost as empty as Lapland, modulo Madrid.
So, yes, you can have the same phone signal issues.
This is increasingly often the case. Also, don't forget that modern WISP equipment allows for 100Mbps+ speeds for a price next to nothing (Ubiquiti, MikroTik).
As a web developer I actually resisted much faster internet for ages.
Until 2022 I had a rock-solid, never-failed 7 megabit/s-ish down, 640k up connection and I found it very easy to build sites that others describe as blazing fast.
This was really slow by the standards of much of the UK population, even by 2015.
So all I had to do was make it fast for me.
A change of provider for practical reasons gave me an ADSL2+ connection that is ten times faster; still arguably slower than a lot of residential broadband in the UK, but no longer so helpfully slow.
So now I test speed on mobile; even in the south east of England it is not that difficult to find poor mobile broadband. And when it’s poor, it’s poor in arguably more varied ways.
As a web developer you can just throttle your connection in developer tools though, no self-limiting required. But nobody does that in big corporations building most of the sites needed by people with slow connections.
Yeah, though it doesn’t quite capture all of the experience of working with slower broadband.
For example if you have a website that is meant to be used alongside a video call or while watching video, it’s difficult to really simulate all of that “feel”.
Using a link that is slow in practice is an invaluable experience.
In my experience, browsers limit speeds in a way that's kind of nice and stable. You tell them to stick to 100kbps and they'll give you 100kbps: packet loss, jitter, throughput, it's all collapsed into a single, rather stable number. It's like a 250kbps fiber optic connection that just happens to be very long.
In my experience, real-life slow internet isn't like that. Packet loss numbers jump around, jitter shifts second by second, speeds vary wildly, and packets arrive out of order more than in order. Plus, with satellites, the local router sends fake TCP acknowledgements to hide the slow data transfer, so the browser thinks it's connected while the traffic is still half a second away.
There are software tools to limit connectivity in a more realistic way, often using VMs, but they're not used as often as the nice browser speed limiter.
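If you don't want a VM, even a small TCP proxy that injects jitter and random stalls gets you closer to the real thing than the built-in throttle. A rough sketch (upstream host/port are placeholders); it still won't reproduce packet-level loss or reordering the way tc/netem or a proper WAN emulator does:

    // flaky-proxy.ts — relay localhost:8888 to an upstream with variable delay.
    import { createServer, connect, Socket } from "node:net";

    const UPSTREAM = { host: "127.0.0.1", port: 8080 };
    const jitter = () => 100 + Math.random() * 900;  // 100–1000 ms per chunk
    const stalled = () => Math.random() < 0.05;      // 5% chance of a long stall

    createServer((client) => {
      const upstream = connect(UPSTREAM);
      const relay = (from: Socket, to: Socket) => {
        from.on("data", (chunk) => {
          from.pause();                              // one chunk in flight, order preserved
          const delay = jitter() + (stalled() ? 5000 : 0);
          setTimeout(() => { to.write(chunk); from.resume(); }, delay);
        });
        from.on("end", () => setTimeout(() => to.end(), 7000)); // after any delayed chunks
        from.on("error", () => to.destroy());
      };
      relay(client, upstream);
      relay(upstream, client);
    }).listen(8888);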
Good points, but it would still be a major step forward if websites start handling browser-simulated 3G well. Right now the typical webshit used by regular people more often than not ranges from barely usable to completely unusable on browser-simulated 3G, let alone browser-simulated 2G or real world bad connections. As a first step, make your site work well on, say, 200ms and 1Mbps.
One problem is that developers have the best hardware and Internet because "it's their job", so they are completely biased. A bit like rich people tend to not understand what it means to be poor.
The other problem is that nobody in the software industry gives a damn. Everyone wants to make shiny apps with the latest shiny tech. Try to mention optimizing for slow hardware/Internet and look at the faces of your colleagues, behind their brand new M3.
I worked in a company with some remote colleagues in Africa. There were projects that they could literally not build, because it would require downloading tens of GB of docker crap multiple times a week for no apparent reason. The solution was to not have those colleagues work on those projects. Nobody even considered that maybe there was something to fix somewhere.
70%+ of the web is putting text on screen and responding to user interactions, 25%+ is spyware and advertising, and the last 5% are cool applications. How complicated should that really be?
This is a good example of why I gave up a career as a JavaScript developer after 15 years. I got tired of fighting stupid, but even stupid woefully unqualified people need to make 6 figures spinning their wheels to justify their existence.
From where I'm from (Southeast Asia), slow internet is common in provincial and remote areas. It's like the OP's experience in South Pole but slower.
That's why I always cringe at these fancy-looking UI cross-platform apps since I know they will never work in a remote environment. Also, that is why offline support is very important. I only use Apple Notes and Things 3, both work tremendously in such remote settings.
Imagine your notes or to-do list (ahem, Basecamp) not loading because it needs an internet connection.
What's sad is that the app-style setup on phones SHOULD be perfect for this - you download the app when you DO have a good connection, and then when you're out on the slow/intermittent connection the ONLY thing the app is sending is the new data needed.
Instead almost all apps are just a bad web browser that goes to one webpage.
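A sketch of what that model looks like: the UI ships in the app bundle, data lives locally, and the network only carries deltas. The /api/changes endpoint and field names here are made up:

    // delta-sync.ts — fetch only records changed since the last sync.
    type Item = { id: string; updatedAt: string; payload: unknown };

    async function sync(baseUrl: string) {
      const since = localStorage.getItem("lastSync") ?? "1970-01-01T00:00:00Z";
      const res = await fetch(`${baseUrl}/api/changes?since=${encodeURIComponent(since)}`);
      if (!res.ok) return; // offline or flaky: keep showing cached data, retry later

      const changes: Item[] = await res.json();
      const cache: Record<string, Item> = JSON.parse(localStorage.getItem("items") ?? "{}");
      for (const item of changes) cache[item.id] = item;   // merge the delta
      localStorage.setItem("items", JSON.stringify(cache));
      localStorage.setItem("lastSync", new Date().toISOString());
      return Object.values(cache);                         // render from local data
    }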
This is why we need more incremental rendering[1] (or "streaming"). This pattern became somewhat of a lost art in the era of SPAs — it's been possible since HTTP/1.1 via chunked transfer encoding, allowing servers to start sending a response without knowing the total length.
With this technique, the server can break a page load down into smaller chunks of UI and progressively stream them to the client as they become available. No more waiting for the entire page to load, especially in poor network conditions like the author experienced from Antarctica.
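A bare-bones sketch of the idea in Node (the slow comments() call is just a stand-in for whatever part of the page is expensive):

    // streaming.ts — send the useful part immediately, stream the slow part later.
    // Node uses Transfer-Encoding: chunked automatically when there is no Content-Length.
    import { createServer } from "node:http";

    const comments = () =>
      new Promise<string>((resolve) =>
        setTimeout(() => resolve("<ul><li>first!</li></ul>"), 3000)); // pretend this is slow

    createServer(async (req, res) => {
      res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
      // The article is visible after one round trip, even on a terrible link.
      res.write("<!doctype html><meta charset='utf-8'><article>The content you came for.</article>");
      res.write("<section id='comments'>");
      res.write(await comments()); // arrives whenever it's ready
      res.end("</section>");
    }).listen(8080);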
I remember doing this with ASP v3 pages back in the day on a content site. It made it easy to dump whatever HTML had already been generated before continuing on to the heavier, but much less important, comments section below.
I would highly recommend not only testing on slow network connections, but also on slow computers, tablets and smartphones. At least in my case there was some low hanging fruit that immediately improved the experience on these slow devices which I would have never noticed had I not tested on slower machines.
McMurdo has Starlink. South Pole doesn't, but not due to technical reasons on Starlink's side. From what I understand, when they tested at Pole they noticed interference with some of the science experiments. It's possible they will engineer around that at some point, but for now Starlink is a low priority compared to ensuring the science goes on.
I forget the exact distance, but it's something like 5 miles from Pole that they ask traversing groups to turn off their Starlink.
That is to say, the Starlink terminals radiate enough EM to mess with the sensitive sensors at the South Pole, which is fascinating, since they're supposed to have passed compliance testing showing they don't emit too much of it. But the South Pole has a different definition of "too much", it seems.
They're being obtuse. What "it's likely the IT landscape has shifted" actually means is "they got Starlink and their connection is fast now, and I know this for certain but I want to downplay it as much as possible because I'm trying to make a point".
Or they could be making a joke about how quickly trends shift in IT. It's like how people joke (or at least used to joke) that you'd get a dozen new JavaScript frameworks daily.
What hope is there for "engineering for slow internet", when people engineered applications for "fast internet" back when all we had was "slow internet"?
Nice thought in theory but unnecessarily gives false hope.
Confluence/Jira sometimes needs to download 20 megabytes in order to show a page with only text and some icons. I have a friend who tells me that their company ran two Jira instances, one for developers and one for the rest of the company, because it was that dead slow.
I've lost all faith already that this will change for better.
Thank you! I am in the East coast US, and consistently find web sites and internet-connected applications are too slow. If I am at home, they are probably fine, but on mobile internet? Coffee shops etc? Traveling? No!
No excuses! It is easier than ever to build fast, interactive websites, now that modern, native Javascript includes so many niceties.
Using JS dependencies is a minefield. At work, where I give less of a fuck, a dev recently brought in Material UI and Plotly. My god.
Engineering for slow CPUs next. No matter how fast our machines get these days, it's just never enough for the memory/CPU/battery hungry essential apps and operating systems we use nowadays.
If you're targeting the general population (so: a chat service, banking app, utility app, ...), you should be targeting the majority of users, not just those on new flagships. All the testing should be done on the cheapest smartphone you could buy in a supermarket two years ago (because, well, that's what "grandmas" use). Then downgrade the connection to 3G, or maybe even EDGE speeds (this can be simulated on network devices), and the app/service should still work.
Somehow it seems that devs get only the best new flagships, optimize the software for those, and forget about the rest... I understand that for a 3D shooter or something, but a banking app, for example, should work on older devices too!
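This kind of check is even automatable in CI. A sketch using Puppeteer and a Chrome DevTools Protocol session: the throughput numbers and the time budget are arbitrary, and the exact CDP-session helper differs a bit between Puppeteer versions, so treat this as a starting point:

    // slow-check.ts — load a page under emulated bad network + slow CPU, fail if too slow.
    import puppeteer from "puppeteer";

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();

      const cdp = await page.createCDPSession(); // older versions: page.target().createCDPSession()
      await cdp.send("Network.emulateNetworkConditions", {
        offline: false,
        latency: 400,                         // ms round trip
        downloadThroughput: (400 * 1024) / 8, // ~400 kbit/s
        uploadThroughput: (100 * 1024) / 8,
      });
      await page.emulateCPUThrottling(6);     // roughly "cheap phone"

      const start = Date.now();
      await page.goto("https://example.com", { waitUntil: "networkidle2", timeout: 120_000 });
      const elapsed = Date.now() - start;
      console.log(`loaded in ${elapsed} ms`);
      if (elapsed > 30_000) process.exitCode = 1; // arbitrary budget

      await browser.close();
    })();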
Not yet officially launched, but I’m working on a no-bloat, no-tracking, no-JS… blogging platform, powered by a drag/drop markdown file: https://lmno.lol
Blogs can be read from just about any device (or your favourite terminal). My blog, as an example: https://lmno.lol/alvaro
Yes - name and shame. Slack is INFURIATING on intermittent connectivity. That is simply not good enough for a product whose primary value is communication.
Anyone who has tried to use Slack:
- in the countryside with patchy connection
- abroad
- in China
- on the London Underground
Can attest to how poor and buggy Slack is on bad internet.
These aren't weird edge cases - London is a major tech hub. Remote workers and open source communities rely on Slack around the world.
China is the second largest economy in the world, with a population of about 1.4B (incidentally, Slack is blocked there, at least it was when I was last there, but even on a VPN it was weird and buggy).
How are these kinds of metrics not tracked by their product teams? Why isn't WhatsApp the gold standard now for message delivery, replicated everywhere?
Neither email nor WhatsApp have the weird consistency issues Slack has with simply sending a message with dodgy internet. Not to mention the unreliable and sometimes user-hostile client state management when Slack can't phone home which can sometimes lead to lost work or inability to see old messages you literally were able to see until you tried to interact with stuff.
Slack additionally decides to hard-reload itself, seemingly without reason.
I work on the road (from a train / parking lot / etc) for five or six hours per week. My T-Mobile plan is grandfathered in, so I can't "upgrade" to a plan that allows full-speed tethering without considerably impacting my monthly bill.
Realistically, I hit around 1.5Mbps down. When Slack reloads itself, I have to stop _everything else_ that I'm doing, immediately, and give Slack full usage of my available bandwidth. Often times, it means taking my phone out of my pocket, and holding it up near the ceiling of the train, which (I've confirmed in Wireshark) reduces my packet loss. Even then, it takes two or three tries just to get Slack to load.
I wonder if you could stick your own root CA into your OS's certificate store and then MitM the connections Slack makes, respond "no, don't update" with Burp Suite, and cache with Squid to alleviate the problem.
Slack web downloads 40MB of Javascript. The macOS Slack client, that I guess should have all that stuff already, downloads 10MB of stuff just by starting it (and going directly to a private text only chat).
I doubt I'll ever work at a place that uses Telegram, but yes, it's clear that resilient message delivery is a solved problem nowadays, and Slack is still hopeless at the single most important feature of its product. Now that WhatsApp also has groups, there's even less of an excuse for Slack to perform so badly.
You got all the same answers I did, which helps me determine how good my sleuthing skills are. I used exclusively strings, either API routes, error codes, or version/build numbers.
I've also found that the AWS and Azure consoles behave this way. While not listed in the blog post, they load JavaScript bundles in the tens of megabytes, and must have a hard-coded timeout that fails the entire load if that JavaScript hasn't been downloaded inside of a few minutes.
To Amazon's credit, my ability to load the AWS console has improved considerably in recent months, but I can't say the same for Azure.
My experience is that Slack worked great last winter, when the broadband satellite was up. When it's down, folks use an IRC-style client to cope with the very limited & expensive bandwidth from Iridium.
Querying an exact match of a few strings on Google shows me that Slack is the very first example given in the blog post. For additional confirmation, the "6-byte message" screenshot lists an xoxc token and rich_text object, both of which you will frequently encounter in the Slack API. To be honest, I was expecting it to be Jira at first since I was unaware of Slack's size.
Searching for an exact match of "PRL_ERR_WEB_PORTAL_UNEXPECTED" gives away Parallels as the first example of a hard-coded HTTPS timeout.
All of that sounds like torrent-based updaters/downloaders should be the absolute killer app for environments like that.
Infinitely resumable, never loses progress, remains completely unfazed by timeouts, connection loss, etc. - plus the ability to share received update data between multiple devices peer-to-peer.
It takes a "special" skill level to develop web applications in JS for low-bandwidth connections. It takes time, because frameworks and libraries are not built for this. There are very few libraries and frameworks in JS that are optimized for low-bandwidth connections, so it requires programming applications from scratch.
I went through such a process. It took me two weeks, versus two hours using jQuery.
What's odd is that I have the exact opposite: I find it uncomfortable to use megabytes of libraries with hundreds of dependencies when I can spend 30 seconds and discover the html5 native way of doing something. But my applications look butt ugly. They're fast and privacy preserving, but also it requires a special skill, from my point of view, to make them look good (be it design skills or knowing how to use these frameworks)
I'm thinking of UI elements that are often half-implemented in JavaScript but also exist natively nowadays, such as sliders. What kind of thing are you thinking of?
Great post. One thing though. Maybe the engineers were misguided, but it's possible they were trying to mitigate slowloris attacks, which are annoying to deal with and hard to separate from users who are just sending data at a really slow pace. Having had to mitigate these attacks before, we usually do a global timeout on the backend. Maybe different, but definitely a possibility.
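For what it's worth, Node's built-in HTTP server already lets you separate the two concerns: be strict about slowloris-style dribbled headers without killing legitimately slow bodies. A sketch (the numbers are only illustrative):

    // timeouts.ts — distinguish "client is dribbling bytes to tie up a socket"
    // from "client is just on a slow link".
    import { createServer } from "node:http";

    const server = createServer((req, res) => {
      res.end("ok");
    });

    server.headersTimeout = 15_000;   // headers must arrive reasonably fast (slowloris defence)
    server.requestTimeout = 600_000;  // but give the body/response ten minutes end to end
    server.keepAliveTimeout = 5_000;  // don't hold idle sockets forever

    server.listen(8080);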
It doesn't take much to slow down RDP over TCP (especially when port forwarding through SSH).
I did find mention of increasing the cache¹ and lowering the refresh rate to 4 fps² (avoiding unnecessary animations), but I still feel the need for a server-side QUIC proxy that is less pushy based on network conditions. There is a red team project that has the protocol parsed out in Python³ instead of all the ActiveX control clients.
A fascinating article and I need to revisit this when I have more time. So, all I'd say now is that there's way too much emphasis on GUI. Also, check out some of the web sites on https://1mb.club - it's amazing what can be achieved in less than 1 MB of HTML ...
It enabled you to simulate network slow-downs, packet-loss, packet corruption, packet reordering and more. It was so critical in testing our highly network sensitive software.
At the beginning of learning to code in 2017 I was living in the Caribbean. I had to get the Xcode command line tools installed, <1 GB of download. Something trivial to get done in the US. It took me several days of hunting down an internet connection that was fast and stable enough to handle it, since most internet in the country was cell based. The solution was a friendly computer repair shop with a decent hardwired connection that let me leave the computer for a few hours to download.
In a place like that there are some sites that just don’t work sometimes because it is clear that the devs have never thought about what happens when a significant portion of your requests fail.
For the last three years, Rekka Bellum and Devine Lu Linvega have sailed the Pacific Ocean on Pino, a sailboat turned mobile studio, making videogames, art, and music with their own homegrown software. Off-grid for long stretches, they've built a custom OS for their needs: https://100r.co/site/uxn.html
I currently develop applications which are used on machines with very spotty connections and speeds which are still calculated in Baud. We have to hand-write compression protocols to optimize for our use. Any updates/installs over network are out of the question.
It's a great lesson in the importance of edge computing, but it also provides some harsh truths about the current way we produce software. We cannot afford to deliver a spotty product. Getting new updates out to all parties takes a prohibitively long time. This is hard for new people or outsiders giving courses to grok, and makes most modern devops practices useless to us.
An example of a program that's atrocious about unreliable connectivity is `git` -- it has no way to resume downloads, and will abort all progress if it fails mid-transfer.
The only way I've found to reliably check out a git repository over an unreliable link is to check it out somewhere reliable and `rsync` the .git directory over.
Usually `git clone --depth 1 URL` works; then you can incrementally deepen it (e.g. with `git fetch --deepen=100`, or `git fetch --unshallow` once you're on a better link).
This does cause extra load on the servers, but if it's that big a problem for them, they can write the incremental patches themselves.
(I suspect that the "dumb http" transport are also incremental if you squint hard enough at them, but I've never had reason to investigate that closely)
I cringe whenever I think how blazing fast things could be today if only we hadn't bloated the web by 1000x.
In the dial-up era things used to feel unworldly fast merely by getting access to something like 10M Ethernet. Now mobile connections are way, way faster than physical connections in the 90's, but web pages aren't a few KB anymore; they're a few MB at minimum.
It takes four seconds and 2.5 MB to load my local meteorological institute's weather page, which changes no more often than maybe once an hour and could be cached and served as a static base page in a few dozen milliseconds (i.e. instantly). A modern connection that's plenty capable of supporting all my remote work and development over a VPN and interactive shells without any lag can't make modern web pages load any faster, because of the amount of data and the required processing/execution of a million lines of javascript imported from a number of sources, with the handshake delays of new connections implied, for each page load.
A weather page from 2004 served exactly the same information as a weather page from 2024, and that information is everything required to get a sufficient glimpse of today's weather. One web page could be fixed but there are billions of URIs that load poorly. The overall user experience hasn't improved much, if at all. Yes, you can stream 4K video without any problems which reveals how fast things actually are today but you won't see it when browsing common pages -- I'd actually like to say web pages have only gone slower despite the improvements in bandwidth and processing power.
When many pages still had mobile versions it was occasionally a very welcome alternative. Either the mobile version was so crappy you wanted to use the desktop version on your phone, or it was so good you wanted to predominantly load the mobile version even on desktop.
I'd love to see an information internet where things like weather data, news articles, forum posts, etc. would be downloadable as snippets of plaintext, presumably intended to be machine readable, and "web" would actually be a www site that builds a presentation and UI for loading and viewing these snippets. You could use whichever "web" you want but you would still ultimately see the same information. This would disconnect information sources from the presentation which I think is the reason web sites started considering "browser" a programmable platform, thus taking away user control and each site bloating their pages each individually, leaving no choice for the user but maybe some 3rd party monkeyscripts or forced CSS rules.
If the end user could always choose the presentation, the user would be greatly empowered compared to the current state of affairs, where web users are being tamed into mere receivers or consumers of information, not unlike passive TV viewers.
One of my first professional software projects, as an intern, was a tool for simulating this type of latency. I modeled it as a set of pipe objects that you could chain together with command line arguments. There was one that would add a fixed delay, another that would introduce random dropped packets, a tee component in case you wanted to send traffic to another port as well, etc.
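(Not that tool, but for anyone who wants to reproduce similar conditions today without writing custom code, Linux's netem qdisc can approximate it -- the interface name and numbers below are placeholders:)

```
# Add 300 ms of delay (±50 ms jitter) and 2% packet loss on eth0:
sudo tc qdisc add dev eth0 root netem delay 300ms 50ms loss 2%
# Optionally cap bandwidth too, e.g. to roughly 64 kbit/s:
sudo tc qdisc change dev eth0 root netem delay 300ms 50ms loss 2% rate 64kbit
# Remove it when done:
sudo tc qdisc del dev eth0 root
```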
It's interesting that these are exactly the sort of conditions that the internet protocols were designed for. A typical suite of internet software from the '90s would handle them easily.
One key difference is that client software was installed locally, which (in modern terms) decouples UI from content. As the article points out, the actual data you're dealing with is often measured in bytes. An email reader or AOL Instant Messenger would only have to deal with that data (plus headers and basic login) instead of having to download an entire web app. And since the protocols didn't change often, there was no need to update software every few weeks (or months, or even years).
Another key difference, which is less relevant today, is that more data came from servers on the local network. Email and Usenet were both designed to do bulk data transfer between servers and then let users download their individual data off of their local server. As I recall, email servers can spend several days making delivery attempts before giving up.
> From my berthing room at the South Pole, it was about 750 milliseconds
I'm currently on a moving cruise ship in the Mediterranean with a Starlink connection. A latency of 300-500 ms seems to be normal, although bandwidth is tolerable at 2-4 Mbps during the day with hundreds of passengers using it. At night it gets better. But latency can still be frustrating.
How do you use WhatsApp over BitTorrent? And how do you update your macOS over BitTorrent?
The author clearly says that downloaders elaborate enough to deal with slow connections (e.g. "download in a browser") were fine. The problem is that modern apps don't let you download the file the way you want, they just expect you to have a fast internet connection.
No, I just meant whether he tried to get a large file (a Linux ISO) with BitTorrent, which should be reliable in theory.
BitTorrent has webseed support, so you can use Apple's direct CDN URLs to create a torrent file to download.
archive.org still uses this technique, and AWS S3 used to do this back when it had torrent support.
There's a website that does just that: it creates a torrent file from any direct web URL.
I didn't know that was possible! Thanks so much, I've got a torrent client but with so few things using bittorrent these days, it feels like innovation went backwards and it's now one-shot http downloads or bust. This will be helpful :)
I can't believe I had never heard about webseeds [1] before!
Do I understand correctly that one can make a torrent from any HTTP-available file (without even owning the file) and start downloading and sharing with BitTorrent? It's incredible!
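Roughly yes, with the caveat that whoever builds the torrent still needs to fetch the file once to compute the piece hashes (which is what those web services do server-side). A hedged sketch using mktorrent, assuming its -w/--web-seed option; the URLs and tracker are placeholders:

```
# One-time step, done somewhere with a decent link: fetch the file and hash it
# into a torrent that lists the original HTTP URL as a web seed:
curl -O https://example.com/big-image.iso
mktorrent -a udp://tracker.example.org:1337/announce \
          -w https://example.com/big-image.iso \
          -o big-image.iso.torrent big-image.iso
# Any client that understands web seeds can then pull pieces straight from the
# CDN, verify them piece by piece, and resume after every disconnect.
```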
I'm curious whether using these sites through a remote desktop session would be a much better experience. Keep a computer running at home, connect to it from Antarctica, and do what you gotta do. I don't know how well VNC/Windows Remote Desktop cope with such poor network conditions, though.
This topic resonates with me, because I'm currently building a horrible marketing static page with images and videos that top 150MB, prior to optimization. It causes me psychic pain to think about pushing that over the wire to people that might have data caps. Not my call, though...
Yeah, I'm talking about Antarctica. I was really thinking about going there, just to see how it is. But the lack of a proper net connection puts me off.
No different from regional Australia, which ALWAYS has some crippling addiction, whether it's alcohol, domestic violence or something else. Not the best environment.
Great post. I was asked this question in an interview which I completely bombed, where the interviewer wanted me to think of flaky networks while designing an image upload system. I spoke about things like chunking, but didn't cover timeouts, variable chunk size and also just sizing up the network conditions and then adjusting those parameters.
Not to mention having a good UX and explaining to the customer what's going on, helping with session resumption. I regret it. Couldn't make it through :(
The presumption that "every user" has 20 ms latency, 200 Mbps bandwidth, and no data cap is fundamentally inconsiderate of everyone else: people at great distances, on congested local links, or dealing with accessibility constraints.
This problem will return with a vengeance once humans occupy the Moon and Mars.
PSA: Please optimize your website, web apps for caching and efficiency, and offer slow/graceful fallback versions instead of 8K fullscreen video as your homepage for all users.
Back in 1999, while at MP3.com, it was common for people outside of engineering to complain about speed when they were at home. In the office we had 1G symmetric (that was a lot 25 years ago!) between the office and the primary DC. I tried to explain that the large graphics some people wanted, and the heavy videos, didn't work great over dial-up or slow cable modem connections. Surely the servers are misconfigured!
Big tech probably loses millions of customers from developing countries because of their bloated frontend engineering. Not everyone has an M3 with gigabit fiber.
My favorite setup when working remotely with slow internet is the following:
- Build tasks & large data operations run on my home server via SSH
- Markdown notes, Notion being unusable
- 100% CLI apps, which are usually the lightest and most efficient
As I was reading this I realised that everything here had already been solved - by torrents.
Everything is split into chunks. Downloads and uploads happen whenever connections can happen. If someone local has a copy, they can seed it to you without you needing an external connection. You can have a cache server that downloads important stuff for everyone.
I don't think any of our apps are built with slow connections in mind at all.
Most of our web libraries and frameworks are indeed quite bloated (with features of convenience); downloading 20 MB of JS and 50 MB of content in total to render a page is insane when you think about it. We'd need to be able to turn off most images or visual elements and focus purely on the elements and their functionality, except where displaying an image is critical to the function (and even then offer a low-quality version with a smaller file size). That means things like web-safe fonts that are already present in the browser/OS, and most likely no libraries like React or Vue either (maybe Preact or Svelte).
We'd need to allow for really long request (say, fetch) timeout values, maybe even chosen based on connection quality: if a user on a really fast connection has a request that suddenly hangs and takes upwards of a minute, something has probably gone wrong and it makes sense to fail that request, whereas for a user in a remote area all requests are similarly slow -- assuming the server doesn't mind connections that linger around for a long time at low speeds.
We'd also need to allow configuring an arbitrary HTTP cache/proxy for any site visited and file requested (say, store up to 1 TB on some local server keyed by file hash and return the same content to any user who requests it), but things don't usually work that way because of privacy/security concerns (nowadays different sites even download duplicate copies of the same files due to changes in the browsers: https://www.peakhour.io/blog/cache-partitioning-firefox-chro... ). Maybe even for any web request the OS might want to make, like system updates -- basically a full-on MitM for the whole system.
Speaking of which, no more Electron or large software packages. Only native software with Win32/WPF or maybe something like GTK/Qt -- but nowadays it seems like even phone apps, not just desktop software, often don't use the system GUI frameworks and instead ship a bunch of visual fluff, which might look nice and work well but also takes up a bunch of space.
I don't think there are incentives out there to guide us towards a world like that, which doesn't quite make sense to me. Lightweight websites should lead to better customer/user retention, but in practice that doesn't seem like something that anyone is optimizing for - ads everywhere, numerous tracking scripts, even autoplay videos, for everything from news sites to e-commerce shops.
People who do optimize for that sort of stuff seem to be part of a smaller, niche enthusiast community (which is still nice to see), like:
Admittedly, even I'm guilty of bloating my homepage from ~150 KB to ~600 KB due to wanting to use a custom set of fonts (that I host myself); even WOFF2 didn't save me there.
I agree with the overall take by OP, but I find this point quite problematic:
> If you have the ability to measure whether bytes are flowing, and they are, leave them alone, no matter how slow. Perhaps show some UI indicating what is happening.
Allowing this makes for an easy DDoS attack: an attacker can simply keep thousands of connections open.
Closing after 10-60 s of complete inactivity, not shipping JS bloatware, and allowing range/ETag requests should go a long way, though. The issue is people setting fixed timeouts per request, which isn't appropriate for large transfers.
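On the client side, curl can already behave this way if you ask it to: give up only on genuine inactivity and resume with range requests. A sketch (standard curl flags; --retry-all-errors needs a reasonably recent curl, and retry/resume behavior varies a bit across versions):

```
# Retry up to 20 times, resume from where the last attempt stopped (-C -),
# and only abort if throughput stays below 1 byte/s for 60 s straight:
curl -C - -O --retry 20 --retry-all-errors \
     --speed-limit 1 --speed-time 60 \
     https://example.com/large-file.tar.zst
```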
It's funny how similar the problems that affect a workstation in Antarctica are to those of designing a robust mobile app.
I personally think all apps benefit from being less reliant on a stable internet connection and that's why there's a growing local-first movement and why I'm working on Triplit[1].
> a lot of the end-user impact is caused by web and app engineering which fails to take slow/intermittent links into consideration.
Technology today is developed by, and for, privileged people. Has been that way for a while. Ever since you had to upgrade your computer in order to read the news, there has been a slow, steady slog of increasing resource use and conspicuous consumption.
I remember using the 9600 baud modem to get online and do the most basic network transactions. It felt blazing fast, because it was just some lines of text being sent. I remember the 2.5KBps modem, allowing me to stream pictures and text in a new World Wide Web. I remember the 5KBps modem making it possible to download an entire movie! (It took 4 days, and you had to find special software to multiplex and resume cancelled downloads, because a fax on the dialup line killed the connection) I remember movies growing to the size of CDROMs, and later DVDROMs, so those who could afford these newer devices could fit the newer movies, and those who couldn't afford them, didn't. I remember the insane jump from 5KBps to 1.5Mbps, when the future arrived. Spending days torrenting hundreds of songs to impress the cool kids at school, burning them CDs, movies, compiling whole libraries of media [hey, 15 year olds can't afford retail prices!].
I remember when my poor friends couldn't use the brand new ride-sharing services Uber and Lyft because you had to have an expensive new smartphone to hail them. They'd instead have to call and then pay for a full fare taxi, assuming one would stop for them in the poor neighborhood, or wait an hour and a half to catch the bus. I remember when I had to finally ditch my gaming laptop, with the world-class video card you could've done crypto mining on, because opening more than 5 browser tabs would churn the CPU and hard-drive, max out the RAM, and crash the browser. I remember having to upgrade my operating system, because it could no longer run a new enough browser, that was now required to load most web pages. I remember buying smartphone after smartphone after smartphone - not because the previous one stopped working, but because more apps required more cpu and more memory and more storage. I remember trying to download and run a chat app on my local machine, and running out of memory, because the chat app had an embedded web browser. I remember running out of my data cap on my cell phone because some app decided it wanted to stream a load of data as if it was just unlimited. I remember running out of space on my smartphone because 70% of the space was being used just to store the Operating System files.
I'm not complaining, though. It's just how the world works. Humanity grows and consumes ever more resources. The people at the top demand a newer, better cake, and they get it; everyone else picks up the crumbs, until they too get something resembling cake. I sure ate my share. Lately I try to eat as little cake as possible. Doesn't change the world, but does make me feel better. Almost like the cake is a lie.
Telegram messenger is fantastic. It works over a GPRS (AKA 2.5G) connection. I love sailing, and the moment we see a nearby island and get a data connection, Telegram immediately starts working. WhatsApp tries, but only actually works over 3G.
I assume it was considered, but I don’t see it mentioned: Would it be a terrible idea to use a cloud computer and Remote Desktop/VNC to it? Your slow internet only needs to stream the compressed pixels to your thin client.
Try using the internet on an "exchange only line". Technically it's broadband but its speeds are still dialup tier. I know several streets in my city that still have these connections.
And ever since the 3G shutdown in the UK, phones often fall back to GPRS and EDGE connections (2G), as 2G is not scheduled to shut down in the UK until 2033. I know several apps that are too slow to work in such conditions, because they are developed by people who use the latest 5G links in urban locations instead of testing them in rural and suburban areas with large amounts of trees.
So I have some experience with this, because I wrote the non-Flash speed test for Google Fiber. I also have experience with this by virtue of being from Australia. Let me explain.
So Google Fiber needed a pure-JS speed test for installers to verify connections. Installers were issued Chromebooks, which don't support Flash, and the Ookla Speedtest at the time used Flash. There are actually good reasons for this.
It turns out figuring out the maximum capacity of a network link is a nontrivial problem. You can crash the browser with too much traffic (or just slow down your reported result). You can easily under-report speed by not sending enough traffic. You have to weigh packet sizes with throughput. You need to stop browsers trying to be helpful by caching things (by ignoring caching headers). There's a long list.
So I did end up with a pure-JS speed test that could actually run at up to ~8.5 Gbps on a 10GbE link to a MacBook (external 10GbE controller over TB3).
You learn just how super-sensitive throughput is to latency because of TCP throttling. This is a known and longstanding problem, which is why Google invested in newer congestion-control schemes like BBR [1]. Anyway, adding 100 ms of latency to a 1GbE connection would drop the measured throughput from ~920-930 Mbps to a fraction of that. It's been a few years so I don't remember the exact numbers, but even with adjustments I recall the drop-off being something like 50-90%.
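For a back-of-the-envelope sense of why: untuned TCP can only keep roughly one window of data in flight per round trip, so throughput tops out at about window / RTT. A 64 KB window over a 100 ms RTT gives 64 KB / 0.1 s ≈ 5 Mbit/s, no matter how fat the pipe is; to actually fill 1 Gbit/s at 100 ms you need the full bandwidth-delay product, about 1 Gbit/s × 0.1 s ≈ 12.5 MB, in flight. Add any packet loss and classic loss-based congestion control cuts the window sharply, which lines up with the 50-90% drop described above.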
The author here talks about satellite internet to Antarctica that isn't always available. That is indeed a cool application, but you don't need to go to that extreme. You have this throughput problem even in Australia, because pure distance pretty much gives you a 100 ms minimum latency in some parts and there's literally nothing you can do about it.
It's actually amazing how much breaks or just plain sucks at that kind of latency. Networked applications are clearly not designed for it and have never been tested on it. This is a general problem with apps: some have never been tested in non-perfect internet conditions. Just the other day I was using one of the Citi Bike apps and it would hang trying to do some TCP query, and every now and again pop up "Connection timed out" to the user.
That should never happen. This is the lazy dev's way of just giving up, of catching an exception and fatalling. I wish more people would actually test their experience when there was 100ms latency or if there was just random 2% packet loss. Standard TCP congestion control simply doesn't handle packet loss in a way that's desirable or appropriate to modern network conditions.
Some of these web apps are from very profitable or big companies and that drives me insane because they have more than enough funding to do things right.
Take Home Depot for example. Loading their website in a mobile browser is soooooooooo slow. The rendering is atrocious, with elements jumping all over the place. You click on one thing and it ends up activating a completely different element, then you have to wait for whatever you just clicked to load and jump all over the place again. Very frustrating! Inside their stores it's even worse! I asked one of their workers for help locating an item one day, and they pulled up their in-store app. That too was slower than molasses and janky, so we ended up standing there chatting for several minutes while waiting for it to load.
Definitely agree with the article that engineers should be more aware of scenarios where those interacting with the systems they build have slow internet.
Another thing I think people should think about is scenarios with intermittent connectivity where there is literally no internet for periods ranging from minutes to days.
Sadly in both these regards I believe we're utterly screwed.
Even the Offline First and Local First movements who you'd think would handle these issues in at least a semi-intelligent manner don't actually practice what they preach.
Look at Automerge or frankly the vast majority of the other projects that came out of those movements. Logically you'd think they have offline documentation that allows people to study them in a Local First fashion. Sadly that's not the case. The hypocrisy is truly a marvel to behold. You'd think that if they can get hard stuff like CRDTs right they'd get simple stuff right like actually providing offline / local first docs in a trivial to obtain way. Again sadly not.
Again at this point the jokes are frankly writing themselves. Like bro make it possible for people to follow your advice.
Also if you directly state or indirectly insinuate that your tool is ANY/ALL OF Local First, or Open Source, or Free As In Freedom you better have offline docs.
If you don't have offline docs your users and collaborators don't have Freedom 1. If you can't exercise Freedom 1 you are severely hampered in your ability to exercise Freedoms 0, 2, or 3 for any nontrivial FOSS system.
The purpose of broadband is so that nothing has any latency when you're not on dial-up.
Except for user-initiated mega downloads, which occur at the maximum rate the hardware can handle, at bandwidth costs anybody can afford. Just like with dial-up, why settle for less instantaneous performance than the hardware can deliver? Just so lesser geeks can participate?
The hallmark of insufficient web "engineering" has always been the complete failure to test on dial-up before deploying.
I've been looking into how to survive low-bandwidth/high-latency/frequently-disconnecting internet, because we may need to save money at home and temporarily switch to an unlimited 2G-speed (64 kbit) cell data plan as our primary internet for a few months. (Red Pocket's $7/mo. 1GB GSMA plan, paid annually.) Not only is Red Pocket notorious for throttling bandwidth and horrible with latency, we also live far from the closest cell tower, so our reception is poor and frequently drops. A one-two-three punch for a bad internet experience.
This is probably about as close to experiencing an Antarctic data satellite connection within the continental US as is possible, and would be even worse than a 56k modem: while the bandwidth is roughly the same, Red Pocket users often report latencies in excess of 1000 ms, whereas 56k modem latency was typically under 200 ms. The frequent disconnects (because we're far from the tower) would simulate Antarctic conditions as well.
So how do internet-connected people living in 2024 survive such an atrocity?
First, we would lean on SSH and CLI tools as much as possible: Brow.sh and the Carbonyl browser for the web, Mutt for email. Mosh to keep the SSH session alive and reconnect automatically when the link comes back -- even if your IP changes. (Mosh is incredible.) Some terminals have a zoom feature (Ctrl + +/- in kitty), which may help if some web page element is just too difficult to see at its blocky resolution. See also the Carbonyl --bitmap and --zoom flags:
https://github.com/fathyb/carbonyl/releases/tag/v0.0.3
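In case it helps anyone setting this up, the mosh invocation is almost identical to plain ssh (the host name is a placeholder; the remote end just needs mosh-server installed):

```
# mosh runs over UDP, echoes keystrokes locally, and survives IP changes
# and hours-long drops:
mosh user@myserver.example.com
# A non-standard SSH port or server path can be passed through:
mosh --ssh="ssh -p 2222" --server=/usr/bin/mosh-server user@myserver.example.com
```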
The youtube-dl command can fetch tiny versions of a video on a server (even audio-only, if we don't care about the video portion, such as with long-form interviews), then transcode them on the server to the smallest tolerable bitrate. rsync them home overnight and watch them the next day. rsync can resume a partial transfer after a failure (wrap it in a small loop to retry; see the sketch below): rsync --partial --progress --compress --compress-choice=zlib --compress-level=9 --rsh=ssh user@host:remote_file local_file
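A sketch of such a retry loop (host and file names are placeholders); rsync's --partial keeps whatever already arrived, so each retry picks up roughly where the last attempt died:

```
# rsync itself won't retry, but a small loop turns it into a
# resumable, retry-until-done overnight transfer:
until rsync --partial --progress -z -e ssh \
      user@myserver.example.com:remote_file local_file; do
  echo "link dropped, retrying in 30 seconds..." >&2
  sleep 30
done
```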
This needs a remote Linux server, but ServerHunter lists servers adequate for the task for less than $3/mo. Focus on RAM; most any plan has sufficient disk space and speed, bandwidth, and CPU for what you're doing. You're not serving large content to many demanding customers, and in my personal testing even 1GB of RAM was enough -- but obviously more is better. Many companies only charge for data egress, which should be smaller than ingress: data ingress to the server is usually free, and when you download large amounts of data to the server and then compress it, the egress is much smaller. Choose companies whose domain is at least a year old; I've had a fly-by-night company take my money and run. (PayPal reimbursed me.)
To download large files, try first compressing them; Although, depending on the original file, compression can either give you no advantage or even _increase_ the file size. Try different compression methods; bzip2, zpaq, xz, lzma, etc. ls -l the before and after to see if it helped any.
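A quick way to audition the usual suspects on one file and keep whichever wins (assumes the standard gzip/bzip2/xz/zstd CLIs are installed; "bigfile" is a placeholder):

```
# Compress a copy with each tool at a high level, then compare sizes:
for c in gzip bzip2 xz zstd; do
  "$c" -9 -c bigfile > "bigfile.$c"
done
ls -lS bigfile bigfile.*   # smallest wins; keep the original if nothing beats it
```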
Also, we'd rely on phone apps for things like Gmail/Facebook/Messenger/WhatsApp/Telegram/Twitter/Discord/etc. This saves re-downloading JavaScript on each visit. While there are Windows/Linux desktop clients for these, they are usually only a thin layer over a browser, which means they still download JavaScript and large images on every use. But a phone app has everything it needs all the time -- until the next app update, of course. You can temporarily disable app updates, but I wouldn't do that for long. The good part about app updates is that they tolerate slow links and frequent disconnects. And we often take our phones with us to high-bandwidth locations, so we can do app updates there and disable them over cell data. The article said that on some days Antarctica has better bandwidth available, so you could hold updates until conditions improve.
Same with Windows or Linux OS auto-updates; Disable them until bandwidth conditions improve. (But you're probably already doing that.) Edit: After reading the article again, yes they definitely are doing that, and are still having problems. Bummer.
Installing Bluestacks is an option, for the above reason. Using phone apps on the desktop that don't need to download JavaScript on every use could bridge the gap somewhat. Disable its auto-updates as well.
Gmail can be set up for IMAP or POP3. Try both; POP should give a better experience, but by default it removes messages on the server, so look for that setting to disable it if you want to keep them there. I believe you can disable downloading attachments and images automatically, which probably works better for IMAP? Not sure if POP can handle this. Doing it this way means you don't need to load the web interface any time you check mail, and you'll only be reading the text content of emails. It should therefore work through Mutt as well.
Then there's browser tricks for local rendering: Disable graphic images and disable media autoplay. uBlock Origin to reduce overall download size. Turn off JavaScript until truly needed. Pain in the butt, but it works.
In Windows, set your network connection to metered. This should help. (And back in the US, we can set our phones to low bandwidth mode--but that would only help users on cell networks; I'm sure the south pole lacks cell reception.)
You're not going to be gaming with large downloads and patches, nor will you be competing with low-latency gamers around the globe. But not all games need these. Play against the computer instead of other people. Play only pre-downloaded or DVD-based games. Or go back to the 90s and set up a LAN party and play against one another in the same room. We love Unreal Tournament 1999 GOTY; Plays blisteringly fast on any computer, is cheap to buy on DVD or Steam, has no in-game paid-for upgrades, has oodles and oodles of maps for download, and gameplay still feels fresh even 25 years later. Card games and offline phone games are king in low bandwidth environments.
You're probably not going to make many audio or video calls, so learn to send MP4s or MP3s back and forth by email. Use the lowest tolerable bitrate. A 6 Kbps Opus / 32 Kbps AV1 video is about 5 KB/sec and is clear enough to follow along, and would certainly be acceptable for a video message. A 0.5 Kbps Opus-encoded audio file is surprisingly understandable, if not very enjoyable; to get the message across, it works. 6 Kbps Opus audio sounds like telephone quality to my ears.
https://heavydeck.net/post/64k-is-enough-for-video/
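For reference, a hedged ffmpeg sketch of that kind of encode (file names are placeholders; the exact codecs and bitrates need tuning, and AV1 at these settings encodes slowly -- low-resolution H.264 is a reasonable fallback):

```
# ~32 kbit/s AV1 video + ~6 kbit/s mono Opus audio in a WebM container:
ffmpeg -i input.mp4 -vf scale=-2:240,fps=15 \
       -c:v libaom-av1 -b:v 32k -cpu-used 8 \
       -c:a libopus -b:a 6k -ac 1 \
       message.webm

# Audio-only "voice note" at telephone-ish quality:
ffmpeg -i input.mp4 -vn -c:a libopus -b:a 6k -ac 1 message.opus
```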
I don't know if podcast apps have a low-bandwidth option, but it's worth investigating. Apparently the Player FM Pro app compresses pods on their servers before sending.
Finally, there's the possibility of running a remote desktop session: Something like RDP over SSH, VNC over SSH, NoMachine's NX, TeamViewer, etc. Can give clearer detail than brow.sh and can sometimes be faster than browsing locally. I tested it by intentionally limiting my desktop's speed to 5KB/sec and while it wasn't fun, it got the job done. Took about ten seconds to paint a page on a 1GB VM, but that can often be faster than rendering a JavaScript-heavy page locally. Seems to be a good compromise for certain situations when the rest of the options fail you.
Typing is too laggy (5 seconds before characters appear), but one way around that is to type into a file via SSH, cat the file on the server, and paste that into the web form. Or write your notes to a file on your local system and copy+paste them in directly. I've found that even a server with 1GB of RAM is sufficient for this, as long as you're only looking at one web page at a time -- no multiple tabs. I ran XFCE4 on Ubuntu 22.04 and Google's official Chrome package, resized the connection to 1024x768 (which I consider the minimum for useful browsing), changed the quality settings to lowest, color depth to lowest, etc., and disabled audio and printing.
To read large pages offline, print to PDF on the server inside RDP/VNC and download that with rsync.
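For the VNC-over-SSH variant, the tunnel is a one-liner; -C adds compression, which helps a lot at these speeds (host and ports are placeholders):

```
# Forward the remote VNC display (:1 -> port 5901) through a compressed SSH tunnel:
ssh -C -L 5901:localhost:5901 user@myserver.example.com
# Then point the local VNC viewer at localhost:5901 and drop its quality settings.
```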
The combination of all of these should help us get through a season of needing to save money, and it may very well save the day in Antarctica, as well.
I was reminded that Google Cloud offers a free-forever 1GB RAM server with 1GB of network egress. In the US, additional egress costs $0.02/GB. A server sending 5 KB/s of data every hour of every day for 30 days will cost you about 25 cents a month :-)
Why do writers like this feel so entitled to engineering effort from companies? Maybe companies don’t want to plow millions into microoptimising their sites so a handful of people in Antarctica can access them, when the vast majority of their clients can use their sites just fine.
The author puts a lot of effort into emphasizing that it's not just "a handful of people in Antarctica" facing such issues, but quite a noticeable percentage of global population with unstable or otherwise weird connectivity. The internet shouldn't be gatekept from people behind such limitations and reserved for the convenient "target audience" of companies, whoever that might be - especially when solutions to these problems are largely trivial (as presented in the article) and don't require that much "engineering effort" for companies of that scale, since they are already half-implemented, just not exposed to users.
People should not be limited from employing already existing infrastructure to overcome their edge-case troubles just because that infrastructure is not exposed due to it being unnecessary to the "majority of the clients".
For example, the audience of a site might be in southern Africa, with the same bad connectivity, but the EU/UN site developers are in the north, so they don't care, and the consequence is poor program adoption that never gets blamed on the real cause. Or you might be doing business with a coffee producer, and now your spiffy ERP is missing data because it's too much effort for them to update the orders, so your procurement team has to hire an intern to do it over the phone, costing you an extra $2k a month. Or, more likely for the crowd here, you're losing clients left and right because your lazy sysadmin blocked entire countries' IP ranges after seeing a single DoS wave from there.
If you do not want to write software that works well in the antarctic mission, just don't sell to them. Government contracts are pretty lucrative though.