
Since I temporarily have HN's attention with this side blog of mine, can I suggest one simple tweak:

Please, please, if your site requires AJAX to work at all, then retry failed AJAX queries. Use exponential backoff or whatever, but don't let a single failed AJAX query leave the page unusable.
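Even a naive wrapper goes a long way. A rough sketch in TypeScript, assuming fetch - the retry count and delays here are made-up numbers, tune them for your users:

  // Retry on network errors and 5xx responses, with exponential backoff,
  // instead of giving up on the first failure.
  async function fetchWithRetry(
    url: string,
    init?: RequestInit,
    maxRetries = 5,
    baseDelayMs = 1000,
  ): Promise<Response> {
    for (let attempt = 0; ; attempt++) {
      try {
        const res = await fetch(url, init);
        if (res.status < 500) return res;      // success, or a "real" client error
        if (attempt >= maxRetries) return res; // out of retries, surface it
      } catch (err) {
        if (attempt >= maxRetries) throw err;  // network error, out of retries
      }
      // exponential backoff: wait 1s, 2s, 4s, ... before the next attempt
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }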

This happens all the freaking time when I'm on dialup, and there's nothing more annoying than filling out a form or series of forms only to have the submit button break, because it used AJAX to do a sanity check and threw an exception when the server timed out after some absurdly short (by dialup standards) period while the client was still sending the request.




Just as an anecdote: your blog site is kind of broken on smaller screens (tablet). The useless right sidebar covers almost half of the screen and overlaps the content (only a bit more than half of it is visible).

And then there are many, many users worldwide who use a smartphone (small screen) and connect over GPRS or EDGE (as slow as a modem). Bonus points when overblown sites with dozens of ads and trackers crash your browser on a mobile device because they ate all your RAM for lunch. Every additional JS file is a pain. Tip: keep the total JS file size below 300KB.


And it doesn't happen only on dialup either, but also on tethered and mobile connections while commuting - when most people are using their mobile devices to browse around.


Yeah, found this out the hard way with customers complaining that they hit save, got an error code "0", and then couldn't save again because we had disabled the button after submitting.

Side rant: can I take a moment to call out WHATWG for deciding to specify that all networking errors in XmlHttpRequest get status: 0 and absolutely no explanation anywhere in the response object (see https://fetch.spec.whatwg.org/#concept-network-error)? It makes it an absolute nightmare to diagnose problems and support our users. I suppose in a world of fail whales and cute cat pics, leaving the user in the dark as to why something broke is now standard practice, but at least in those cases there's something server-side to let techs know what's going on. Instead I get random calls from users complaining about this, and the best I've got is that they tried to leave the page while the form was still saving (this triggers the error handler too, since it's the exact same code) or that their internet connection dropped ever so briefly - because the next words out of their mouths are "but I can view other websites".
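For illustration, this is roughly all the client-side error handler ever has to work with (a sketch; the endpoint is hypothetical):

  const xhr = new XMLHttpRequest();
  xhr.open("POST", "/save"); // hypothetical endpoint
  xhr.onerror = () => {
    console.log(xhr.status);     // 0  -- the same for every kind of network error
    console.log(xhr.statusText); // "" -- and no explanation anywhere else
    // Timed out? DNS failure? Connection dropped mid-request? Can't tell from here.
  };
  xhr.send(new FormData());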


  can I take a moment to call out
  WHATWG for deciding to specify 
  that all networking errors in 
  XmlHttpRequest get status: 0
I think this is one of those times where people should be ignoring the spec.

Do the w3c specs have anything to say on the matter?


According to w3c:

> The error flag indicates some type of network error or request abortion

and in their spec for "network errors" at https://www.w3.org/TR/2012/WD-XMLHttpRequest-20120117/#reque... , unless you're using it in synchronous mode, it just sets the error flag. There's an onerror callback, but it doesn't appear to get any more information about what went wrong than the WHATWG version.


It's not just an easy JS fix, though. It complicates things quite a lot on the server side. Your timed-out request could have reached the server already, so the state would be different now. From the server's perspective it's not clear whether it was a retry or another request (and some actions are meant to be repeatable).

Now that I think about it, generating some request UUID and passing it to the server could allow it to quickly skip duplicates (it would also need to cache responses so it can resend them).
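A rough sketch of the client side (the header name is just an example, and crypto.randomUUID needs a reasonably modern browser):

  // Generate one UUID per logical action and reuse it on every retry,
  // so the server can recognize duplicates of the same submission.
  async function submitWithRequestId(url: string, body: unknown, maxRetries = 3): Promise<Response> {
    const requestId = crypto.randomUUID(); // same ID for all retries of this action
    for (let attempt = 0; ; attempt++) {
      try {
        return await fetch(url, {
          method: "POST",
          headers: { "Content-Type": "application/json", "X-Request-Id": requestId },
          body: JSON.stringify(body),
        });
      } catch (err) {
        if (attempt >= maxRetries) throw err;
        await new Promise(r => setTimeout(r, 1000 * 2 ** attempt)); // back off, then retry
      }
    }
  }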

I'm also curious how often this problem occurs statistically (not just in your case, but for an average user of popular sites).


Mind you, we're talking about "sanity checks" here, not actions: presumably these are XHR GETs.

In HTTP-semantic terms, you're always allowed to retry GETs, and they're always supposed to be idempotent (or not change state at all, but that's a lost battle.)

There's no reason that the browser shouldn't default to automatically retrying (with back-off) GET XHRs, save for how many sites are built without a proper understanding of HTTP semantics.


Wouldn't it be easier to add a retry=true to the request, at least if you're implementing exponential backoff in an existing codebase? Then you just hash the request (minus the retry field) in your cache layer and send the saved response again. You'll need a little extra code to update timestamps on the client side, but the server side can be implemented as middleware in most frameworks. Expire the cache after a reasonable total timeout and make the client do a new read if they try to retry after the cache has cleared it.


I think you need a client-side UUID instead of just retry=true for requests that can be repeated with the same params.

E.g. /increment?retry=true, and the server gets 3 of them - how many times should it increment? Is it just that the user still hasn't received the first response, or should we increment 3 times because it went send/fail/retry/ok 3 times on the client side?

But with this UUID it can still easily be implemented as middleware.
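Something like this, as a framework-agnostic sketch (the names and the in-memory cache are illustrative; a real version would want a shared store with expiry):

  type Handler = (body: unknown) => Promise<unknown>;

  // Request IDs we've already handled, mapped to their responses.
  const seen = new Map<string, Promise<unknown>>();

  function dedupeByRequestId(handler: Handler) {
    return (requestId: string, body: unknown): Promise<unknown> => {
      const cached = seen.get(requestId);
      if (cached) return cached;            // duplicate: replay the original response
      const result = handler(body);         // first time we've seen this ID: do the work
      seen.set(requestId, result);
      setTimeout(() => seen.delete(requestId), 5 * 60_000); // expire after ~5 minutes
      return result;
    };
  }

With that, three /increment calls carrying the same UUID run the handler once, and the two duplicates just get the cached response back.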


but you shouldn't use a GET request for incrementing, right? do people (or popular frameworks' defaults) really do that? frequently?


The OP was talking about having to refill the form; I assumed those weren't submitted using GET.


This is at the TCP/HTTP layer, not app logic. Your machine and the servers you're connected to don't have the right settings for dialup.

Turn up retries on TCP. Increase timeouts on the servers. Etc...


I've had this happen on a cell phone modem! My (large) image uploads to an online flow-chart SaaS started failing. I opened up the Chrome dev tools, and it turned out they all failed at exactly 20.0 seconds -- obviously a timeout issue. I had a maddening support experience in which the support tech kept insisting that he couldn't reproduce the problem from his (high-bandwidth) end. I asked him to please just ask an engineer to raise the 20-second timeout, but he stonewalled me. So frustrating.


Then again, as a user I actually find a perpetual loader spinner animation far more annoying than a simple "oh noes server is under heavy load please come back later".


I'm curious what you think about AMP. Do you think it is a good solution for this? (Not implying that AMP solved this problem, but rather the idea of some company acting as an intermediary to serve you the page correctly, instead of relying on the website's owner.)


AMP isn't a solution for anything. Google should NOT be 'fixing' the web for anyone.


I don't disagree with you, but there are so many bad websites that something should be done. AMP is a really bad solution to a real problem. Walled-garden pages like Facebook pages for companies and the like are equally bad in my opinion. Some time ago I had the idea that, maybe, the website layout could be controlled by the user, and only data are controlled by the website's owner. The web would become boring, I know, but the alternative is just as bad: broken websites.


> Some time ago I had the idea that, maybe, the website layout could be controlled by the user, and only data are controlled by the website's owner. The web would become boring, I know,

(disclaimer: my web developer days were a lifetime ago)

wasn't this basically the idea with CSS (and user CSS), "semantic" HTML and JS as progressive enhancement? those were pretty exciting ideas (IMHO), and with modern CSS there's no reason they shouldn't look good either... what ever happened with that?


> I don't disagree with you, but there are so many bad websites that something should be done.

Yeah, we should not be visiting those sites. Give our attention and currency to the sites that care about their users.

I guess one good thing about AMP is now I know what sites to do my best not to use.


As a third world Google user, I appreciate that they save me a lot of frustration and money with AMP.

I suppose it'd be great if pampered first-worlders get to disable it for themselves.


My point is that the developers of the sites that are so slow should be figuring out why and not leaning on the crutch that is AMP.


The problem AMP is trying to fix is very real. It's just a shitty power-grabbing fix.


[flagged]


"It hurts when I move my arm." - "Don't move your arm then."

No one who uses dial-up wants to use dial-up.


It's still possible to be reasonably productive on dial-up if you're working in a screen or tmux session over ssh.

But the modern web is hopeless on slow connections. Some sites are pretty bad even at standard cable internet speeds. Site developers who work with gigabit fiber connections should try their pages at 10 Mbps or slower and at 3G speeds, out of courtesy to the real world.


With mosh instead of plain SSH, working with a crappy connection becomes pretty fluid.

The copper cable bringing ADSL to my house broke and it is prohibitively expensive to fix. I haven't had Internet connectivity this bad since 1999, but I can still work alright with mosh + tmux. Browsing the web isn't very nice, though.


10 Mbps? I celebrate anything over 100 KB/s. This is anecdotal, but in Uganda (third world) most personal connections (MiFi/modem) are way slower than that.


Your mobile phone == dial-up. Just less reliable, and it drops more packets.


At my home, mobile is much faster than my cable modem, although latency is variable.

I regularly pull 60/30 speeds on LTE. Broadband is 50/5. :)


In 1999 with a BTCellnet brick and WAP, I agree, but at no point in the almost 20 years since then has it been as slow as dial-up in any country I have visited. Maybe I am just lucky, but it seems far-fetched.


How do you define "fast"? Yes, most of the time you (the anecdotal you) probably have LTE coverage everywhere, but that coverage is not universal even in high-tech places like Silicon Valley, and it can vary wildly when just moving around for a bit.

As I said - if you actually look at the data in aggregate, you'll notice that people love to use their mobile devices for reading the web on commutes. Millions and millions of people devote their morning/afternoon commute time to reading the internet on their mobile devices, and mobile coverage in those conditions is nowhere close to "universal LTE without packet loss". A lot of cities (heck, even Caltrain in SV) have spotty commuter-line coverage, especially when cells get loaded due to a large number of clients. Those people will then get a crappy experience if you just assume everyone has LTE.

Is anecdotal gut feeling really how you design and optimize your software for end users? :/


The throughput might be better than dial-up, but latency is worse, reliability can be awful (especially when you're between two cell base stations), and the connection can go down for a minute or more at any time.

In particular, the latency can make your browsing experience rival dial up.


Maybe you've never left the cities. I just took a 3000-4000 mile long train trip across the country. I'd estimate that I had cell phone coverage for less than 1% of the trip.


Well, on second thought let's say less than 5%. 1% would only be about an hour of that trip.


I ran into plenty of 2G service in Germany and Belgium last year. Perhaps T-Mobile just has horrible service, but I was frequently on slow 3G at best, especially west of Cologne. It made 4G/LTE really nice when I found it, though :)

yes, 2G isn't dial-up modem speed, but it isn't useful either



