In fact, right now YouTube loads far quicker than it has for the last seven to ten days, when it would take ages to load any YouTube page.
When we proposed adding a daily reboot to cron, the tech lead (who had encouraged the practices that led to this low-quality software) retorted that "this is not Windows, it doesn't need constant reboots", completely missing the point that merely using Linux doesn't make you a developer of reliable software.
We had a Linux server with a network driver issue that caused a kernel oops at long uptimes (about once a week). This one was particularly nasty because the oops would disable the network interface, meaning you couldn't ssh in to diagnose the fault (or, more likely, reboot), which meant you had to drive to the premises and kick the server in the guts.
Of course, Murphy's law ensured that the server would fail at the worst possible time. Late Friday, weekends, your mom's birthday, etc. Not fun at all.
The solution was to write a cron job to reboot every day at ~4:30AM. Stupid? Yes. But we all agreed that it sure beat the alternative (driving, kicking, sobbing).
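For reference, that kind of workaround amounts to a one-line crontab entry. The exact minute and shutdown command below are illustrative, not the original:

```shell
# Illustrative crontab entry (run `crontab -e` as root and add the line):
#
#   30 4 * * *  /sbin/shutdown -r now    # reboot daily at 4:30 AM
#
# Fields are: minute, hour, day-of-month, month, day-of-week, command.
```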
The driver was eventually fixed by the vendor, and this "hack" became unnecessary.
I am an economist turned "data scientist" (hate that name...), since I learned to program over the past 5 years.
At a macro consultancy firm where I worked, everybody lost it when I suggested that we move our manually downloaded data from a bunch of Excel spreadsheets into a proper database (28 years of macroeconomic data) so that we could programmatically extract data for the online reports we sent/hosted. They said I was being lazy...
The modem would randomly seem to keel over with some unknown fault, causing my internet speed to drop from 300Mbps down to 0.25Mbps; a ping to Google.com, for instance, would spike from 5ms to 1900ms (or more).
Curiously, the upload speed would stay pegged at 30Mbps, however!
After a few days of this happening, I picked up a Chinese "Smart Switch" that ran OpenWRT and set up a small shell script to simply ping Google.com, then power-cycle the modem if the average ping exceeded a certain threshold (I think 100ms?).
It would also record the exact date and time and log that, so I could try and correlate the issue. Unfortunately it seemed to be utterly random, without any rhyme or reason.
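A minimal reconstruction of such a watchdog might look like this. The host, threshold, log path, and power-cycle hook are my assumptions, not the original script:

```shell
#!/bin/sh
# Hypothetical sketch of the watchdog described above; intended to be run
# every few minutes from cron on the OpenWRT device.

HOST=google.com
THRESHOLD=100                   # ms: cycle the modem above this average RTT
LOG=/tmp/modem-watchdog.log     # placeholder log path

# Extract the integer average RTT from a ping summary line, e.g.
#   round-trip min/avg/max = 4.8/15.2/26.1 ms          (BusyBox/OpenWRT)
#   rtt min/avg/max/mdev = 4.8/15.2/26.1/3.0 ms        (GNU ping)
avg_ms() {
    awk -F' = ' '/min\/avg\/max/ { split($2, a, "/"); printf "%d\n", a[2] }'
}

# True if the average is missing (ping failed outright) or above threshold.
should_cycle() {
    [ -z "$1" ] || [ "$1" -gt "$THRESHOLD" ]
}

avg=$(ping -c 5 "$HOST" 2>/dev/null | avg_ms)
if should_cycle "$avg"; then
    # Record date and time so the failures can be correlated later.
    echo "$(date '+%Y-%m-%d %H:%M:%S') avg=${avg:-unreachable}ms, cycling modem" >> "$LOG"
    # Placeholder: replace with whatever actually toggles the modem's power
    # in your setup (e.g. flipping a PoE or smart-plug port on the switch).
    # cycle_modem_power
fi
```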
Since I worked at Comcast at the time, I tried to gather more data on the issue internally, eventually writing a report that totaled around 10 pages.
From what I gathered: there were no physical signal deviations when the device would "hang"; the device responded normally to SNMP requests, etc., and everything on Comcast's side appeared normal. The device had some internal software fault that was causing the problems (a kernel bug, perhaps?).
I contacted Motorola/Arris for support, was advised that the warranty specifically excludes software faults (!), and was then kindly recommended to "upgrade" to the newer SB6190 model.
Unfortunately, being a cable modem, its firmware is completely controlled by the ISP. Since there were only ~25,000 SB6183s on Comcast's network at the time, and even fewer on the speed tier I had, there was not enough data to report the issue back to Motorola/Arris through Comcast.
Eventually, roughly 6 months later, a software update was pushed out which corrected the issue.
Sometimes the hammer approach is the only solution.
The only way to get it into a known good state when it shit the bed was to reboot the machine and then let it sort itself out.
We rapidly moved away from using it.
Funnily enough I'm in a similar situation at the new job with Jasper Reports, god damn if that thing isn't everything bad about Enterprise Java(TM).
Yes... but no software should require the Linux OS to reboot unless you're running custom or known buggy kernel modules, or you've triggered a spiral of death through swap usage. If your program is misbehaving, kill the program and restart it.
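"Kill the program and restart it" is usually just a supervisor policy. A hypothetical systemd unit fragment (the service name and binary path are placeholders, not from the thread):

```shell
# Hypothetical fragment of /etc/systemd/system/myapp.service;
# myapp and /usr/local/bin/myapp are placeholders.
#
#   [Service]
#   ExecStart=/usr/local/bin/myapp
#   Restart=on-failure     # restart the process, not the whole OS
#   RestartSec=5           # wait 5 seconds between restart attempts
```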
Restarting the OS daily is like noticing that your car uses a lot of gas, and deciding that every time you fill it up you'll get an oil change too. You might need to fill up a lot, but the oil change is overkill and doesn't really affect the situation one way or the other.
In general you want to avoid synchronous loads of JS assets, because depending on how the server serving the asset hangs, it can cause the webpage to hang as well. For example, if the server responds with a 404 right away, there's no problem. But if the server doesn't respond and leaves the connection open, the browser will just wait for the maximum timeout.
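A non-blocking load with your own timeout avoids that failure mode. This is a sketch; the function name, timeout handling, and callback are my own, not anything from the thread:

```javascript
// Hypothetical non-blocking script loader: HTML parsing continues while
// the request is outstanding, and we give up after our own timeout
// instead of waiting for the browser's maximum.
function loadScript(url, timeoutMs, onFail) {
  const s = document.createElement("script");
  s.src = url;
  s.async = true; // never blocks the parser, unlike a plain <script src>
  const timer = setTimeout(() => { s.remove(); onFail(); }, timeoutMs);
  s.onload = () => clearTimeout(timer);
  s.onerror = () => { clearTimeout(timer); onFail(); };
  document.head.appendChild(s);
}
```

A hung CDN then degrades the page (the onFail path runs) instead of freezing it.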
Shouldn't a goal be to mitigate the number of possible failures which can bring down your site by reducing the number of single points of failure?
1. If you're still using HTTP 1.x, sharding assets across origins lets the browser load them in parallel (if set up correctly). Browsers generally open only about 6 connections per origin, and sharding is a way to get around that limit.
2. A library like jQuery is so popular, and so often served from Google's CDN, that chances are a user already has it in their local cache from when they downloaded it on some other site.
That said, yes - the downside is more surface area that might go down.
Which of these versions do you have cached?
3.2.1, 3.2.0, 3.1.1, 3.1.0, 3.0.0, 2.2.4, 2.2.3, 2.2.2, 2.2.1, 2.2.0, 2.1.4, 2.1.3, 2.1.1, 2.1.0, 2.0.3, 2.0.2, 2.0.1, 2.0.0, 1.12.4, 1.12.3, 1.12.2, 1.12.1, 1.12.0, 1.11.3, 1.11.2, 1.11.1, 1.11.0, 1.10.2, 1.10.1, 1.10.0, 1.9.1, 1.9.0, 1.8.3, 1.8.2, 1.8.1, 1.8.0, 1.7.2, 1.7.1, 1.7.0, 1.6.4, 1.6.3, 1.6.2, 1.6.1, 1.6.0, 1.5.2, 1.5.1, 1.5.0, 1.4.4, 1.4.3, 1.4.2, 1.4.1, 1.4.0, 1.3.2, 1.3.1, 1.3.0, 1.2.6, 1.2.3
As an actual answer: it would vary, proportional to the size of the window between the releases listed here: https://en.wikipedia.org/wiki/JQuery#Release_history
I'm sure a fair number of people serve jQuery from local storage. The chance that the user already has it cached is a non-zero benefit, no matter how insignificant you may think it is.
That narrows your suggested problem down dramatically.
The cost of storing a library is very low compared with the cost of GETing it over the network, so even a fairly low chance of already having the library cached can make the expected value worthwhile.
Weighing that against the chance of downtime is a bit more complicated, admittedly.
This seems to only apply to Chrome, whereas Firefox will happily download everything as fast as possible.
I know this because I recently fixed a bug where Chrome was taking so long to download images that other resources on the page were timing out. No problem in Firefox.
Not sure why it was phrased that way but...isn't everybody?
I know that HTTP/2 is released and browsers support it, but I'm fairly certain that next to nobody is actually doing anything with it.
I also use HTTP/2 at work, and on every personal project. It's supported by every browser, and comes with a slew of benefits. It's usually trivial to set up, if you want to give it a shot.
A few extra ms in initial download isn't so bad compared to having your site be completely inaccessible for reasons outside your control.
Or would you do something in the browser to fetch the local one in case of failure?
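One widely used pattern does exactly that: test for the library's global right after the CDN script tag and fall back to a self-hosted copy. A sketch (the function name and the local path are placeholders):

```javascript
// Call immediately after the CDN <script> tag for jQuery. If the CDN
// load failed (or was blocked), window.jQuery is undefined, so we pull
// a copy from our own origin instead. "/js/jquery.min.js" is a
// placeholder path to a self-hosted copy.
function ensureJQuery() {
  if (!window.jQuery) {
    document.write('<script src="/js/jquery.min.js"><\/script>');
  }
}
```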
I think Google is pretty good about engineering out global single points of failure; all updates are rolled out to only a fraction of users/machines at a time, etc.
Edit: Everything working fine for me again.
Or maybe Samsung's announcement of a 'fold out' phone?
Or the Ted Cruz news?
It's a busy morning.
9/12/2017 @ 10:27 AM +MST (Time services reported back up according to status page)
That's why the smart money defaults to 18.104.22.168...
I did hear YouTube was having 503 errors.
North Korean cyber attack anyone?