Here's a random idea that might have more potential: create an adblocker browser plugin that also colors URLs based on how slow they are expected to load, e.g., smoothly from blue to red. The scores could be centrally calculated for the top N URLs on the web (or perhaps, an estimate based on the top M domain names and other signals) and downloaded to the client (so no privacy issues). People will very quickly learn to associate red URLs with the feeling "ugh, this page is taking forever". So long as the metric was reasonably robust to gaming, websites would face a greater pressure to cut the bloat. And yet, it's still ultimately feedback determined by a user's revealed preferences, based on what they think is worth waiting how long for, rather than a developer's guess about what's reasonable.
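A minimal sketch of what the colouring half could look like in a content script, assuming the plugin has already downloaded a host-to-score table from some central service (every name and number below is made up):

// Hypothetical sketch: tint links from blue (fast) to red (slow) using a
// precomputed slowness score between 0 and 1, shipped to the client in bulk.
const slownessScores = { "slow.example": 0.9, "fast.example": 0.1 };

function colourForUrl(url) {
  const host = new URL(url).hostname;
  const score = slownessScores[host] ?? 0; // unknown hosts default to "fast"
  const red = Math.round(255 * score);
  const blue = Math.round(255 * (1 - score));
  return `rgb(${red}, 0, ${blue})`;
}

// Tint every link on the current page.
for (const link of document.querySelectorAll("a[href]")) {
  link.style.color = colourForUrl(link.href);
}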
Edit: Joking aside, that throttling feature is a nice easy way to let a dev team, or business counterpart, see what their site is like for, say, a customer with a low-end DSL connection: https://developers.google.com/web/tools/chrome-devtools/prof...
# Bandwidth throttling: limit port 80 to 15 KByte/s
sudo ipfw pipe 1 config bw 15KByte/s
sudo ipfw add 1 pipe 1 src-port 80
# Bandwidth throttling: disable
sudo ipfw delete 1
More useful stuff here as well, please star the repo :)
The new method uses dummynet and pf but isn't reliable and I've never got it to work consistently, despite trying for hours and hours.
The only method that works reliably on recent versions of OS X is the free Network Link Conditioner. It is absolutely bulletproof.
Edited to add: Network Link Conditioner seems to use pf and dummynet under the hood; you can see the rules appear. But there's an interaction with the nlcd daemon that I don't understand yet. I want to do protocol-specific bandwidth throttling and I've not got that to work with nlcd interfering. But if you can live with throttling all traffic on the box, NLC works a treat.
That doesn't reflect responsible, lightweight-minded development. Hell, load a couple of airline website home pages... I don't think any of them load under 400 KB of JS, and that's for the homepage alone. Let alone the number of individual assets being requested (less of an issue once HTTP/2 takes hold, but still).
When we tracked conversions, the best-converting page on the site was one made specifically for that purpose: it was 100 KB and loaded instantly. Two images and lots of text. They still insist on slow, beautiful pages elsewhere instead of making them convert as well.
That's just Bootstrap, not even counting the sheer number of jQuery UI bits floating around, and heaven forbid you see both in a project... and all the "bugs" where the input on one page doesn't match the others. Sigh.
Web browsers have more or less become mini operating systems, running elaborate virtual machines. There's way too much complexity for everyone involved — from web devs and browser devs to the users and the people who maintain the standards, then there's the devs who have to make native clients for the web apps — just to deliver products that don't have half the power of OS-native software. Everyone has to keep reinventing the wheel, like with WebAssembly, to fix problems that don't have to be there in the first place, not anymore:
Thanks to smartphones, people are already familiar with the modern concept of the standalone app; why not just make downloading an OS-native binary as easy as typing in a web address, on every OS?
Say I press Cmd+Space on a Mac and type "Facebook" into Spotlight: it immediately begins downloading the native OS X Facebook app. The UI would be described in a format that can be incrementally downloaded, so the experience remains identical to browsing the web, except with full access to the OS's features, like an icon in the Dock and notifications and everything.
TL;DR: Instead of investing resources in browser development, Android/iOS/OSX/Windows should just work on a better standard mechanism to deliver native apps instead.
This is a backward-looking argument which ignores the unique benefits of the web which made it inevitable that it would evolve into an application platform, regardless of how tortured the results may feel.
The web is the first truly cross-platform development environment. It is not controlled by a single vendor, and anyone implementing a new computing device must support the web (just stop for a second and consider from a historical perspective what a monumental accomplishment that is). Furthermore, it allows casual access of content and applications without any installation requirement. It comes with a reasonable security model baked-in, which, while imperfect, gets far more attention than most OS vendor sandboxing schemes. Last but not least, the web's primitive is a simple page, which is far more useful than an app as a primitive—for every app someone installs they probably visit 100 web pages for information that they would never consider installing an app for.
I agree that the web is sort of abused as an application platform; the problem is that there is no central planning method that will achieve its benefits in a more app-oriented fashion. No company has the power to create a standard for binary app deliverables that will have anywhere near the reach of the web. And even if one could consolidate the power and mastermind such a thing, I feel like it would run squarely into Gall's Law and have twice as many warts as the web.
No it isn't. Not even close. It's maybe the first cross-platform development "environment" of which millennials are widely aware. But it's only an "environment" in the most ecological sense -- it's a collection of ugly hacks, each building upon the other, with the sort of complexity and incomprehensibility and interdependency of organisms you'd expect to find in a dung heap.
"Last but not least, the web's primitive is a simple page, which is far more useful than an app as a primitive"
For whom, exactly? You're just begging the question.
I'll grant you that "installing" an app is more burdensome for users than browsing to a web page, but the amount of developer time spent (badly) shoe-horning UI development problems (that we solved in the 90s) into the "page" metaphor is mind-boggling. In retrospect, the Java applet approach seems like a missed opportunity.
The proper reaction to something like React, for example, should be shame, not pride. We've finally managed to kludge together something vaguely resembling the UI development platform we had in Windows 3, but with less consistency, greater resource consumption, and at the expense of everything that made the web good in the first place. And for what reason? It's not as if these "pages" work as webpages anymore.
A proper "application development environment" for the web would be something that discards the page model entirely, and replaces it with a set of open components that resembles what we've had for decades in the world of desktop application development.
Alan Kay has expressed the same feeling.
PS to downvoters: if you have not used a proper interface designer such as Qt or Delphi, then you don't know what we mean. Please watch some videos and decide whether the state of the art (Angular and React) is what we should be using in 2016.
"The web is the first truly cross-platform development environment."
...and then talked a bit about how it's open (yeah, ok, sure), and then you said it's not really a good application development environment (obviously).
I'm saying, your entire premise is wrong: it isn't an application development environment, any more than a box of legos is a "housing development environment". People have built houses out of legos, but that doesn't make "lego" a building material. It's a big, messy, nasty hack.
The fact that it's "open" is a non-sequitur response to "it's the wrong tool for the job", which is what the OP (and I, and elviejo) are arguing. It's also not a legitimate response to argue that any re-thinking of the model has to come from a company, or otherwise not be "open".
The reason that web apps happened is because web apps started as a hack. That doesn't mean we can't change the paradigm, but to do that, we have to stop defending the current model.
(Realistically, the reason I'm getting downvoted probably has more to do with my willingness to call out React as a pile of garbage than with the substance of the greater argument. C'est la vie...it's actually pretty amusing to watch the comment fluctuate between -3 and +3...)
Here's the crux of our disagreement. You believe that the web is such a broken application platform that it is possible to convince enough vendors and people to get behind a better solution. However, I (despite your presumptuous implication that I'm a millennial), have been around long enough to know that will never happen. Web standards will continue iterating, and companies will continue building apps on the web; even the most powerful app platforms today, such as iOS and Android, for all their market power cannot stop this force. The reason is because it's a platform that works. The man-millennia behind the web cannot be reproduced and focused into a single organized effort. You might as well argue that we replace Linux with Plan 9: it doesn't matter how much passion you have and how sound your technical argument is, Linux, like the web, is entrenched. It's gone beyond the agency of individual humans and organizations to become an emergent effect.
That's not to say that the web might not some day be supplanted by something better, but it won't come because of angry engineers wringing their hands about how terrible the web is. It will come from something unexpected that solves a different problem, but in a much simpler and more elegant way, and over time it will be the thin edge of the wedge where it evolves and develops into a web killer.
Maybe I'm just cynical and lack vision, perhaps you can go start a movement to prove me wrong. I'll happily eat my hat and rejoice at your accomplishments when that time comes.
"That's not to say that the web might not some be supplanted by something better..."
Whoever wrote the first paragraph of your comment should get in touch with the person who wrote the second paragraph.
OK, seriously, though, let's summarize:
1) Person says "web development sucks, here's why: $REASONS"
2) You reply: "it's the only truly cross-platform development environment ever"
3) I (and others) reply: "no, it really isn't. it isn't even a development environment, by any reasonable measure."
Now you're putting words in my mouth about convincing vendors and starting movements. I'm not trying to start a revolution here, just trying to counter the notion that we can't do any better than the pile of junk we've adopted. You don't have to love your captors!
I have no idea if someone will come up with a revolutionary, grand unified solution tomorrow, but I know that this process starts with the acknowledgement that what we have sucks, and that we have lots of examples of better solutions to work from. Hell...just having a well-defined set of 1995-era UI components defined as a standard would be a quantum leap forward in terms of application development.
Declaring the web "not even a development environment" is just absolutist rhetoric that can in no way further the conversation. If you define "development environment" as a traditional GUI toolkit then you're just creating a tautology to satisfy your own outrage.
So.. app stores?
It has already begun. The most popular webapps (Facebook, Twitter etc.) already have native clients in Android and iOS. I believe the majority of people already prefer and use the native FB/Twitter apps more often than accessing the FB/Twitter websites. So it's already obvious that native apps must be more convenient.
Right now, however, app stores are a little clumsier to navigate than browsers. For a website:
• you have to open the browser,
• type in the address OR
• use a web search if you don't know the exact address.
But for apps:
• you have to open the app store,
• search for the app,
• potentially filter through unofficial third-party software,
• download the app, possibly after entering your credentials,
• navigate to the app icon,
• authorize any security permissions on startup (in the case of Android or badly-designed iOS apps.)
We just need the Big Three (Apple/Google/Microsoft) to actively acknowledge that app stores can supplant the-web-as-application-platform, and remove some of those hurdles.
Ideally an app store would be akin to searching for a website on Google.com (or duckduckgo.com) with a maximum of one extra click or tap between you and the app.
Apps should also be incrementally downloadable so they're immediately available for use just like a website, and Apple already has begun taking steps toward that with App Thinning.
Ultimately there's no reason why the OS and native apps shouldn't behave just like a web browser, because if web browsers keep advancing and evolving they WILL eventually become the OS, and the end result will be the same to what I'm suggesting anyway.
Currently though, both the native OS side and the web side exist in a state of neither-here-nor-there, considering how most people actually use their devices.
Bad stuff seems to win because it's more evolutionarily adapted than well thought out stuff. This happens to hold for programming languages too.
What's truly sad is that some people would rather be abstractly right while producing nothing of value than work with the dominant paradigm and introduce useful concepts to it.
This did not happen thanks to Unix; if anything, you'd probably have to be grateful to Microsoft and Apple for introducing OSes that were end-user-usable. There's a reason the "year of Linux on Desktop" never happened and is always one year from now.
The point of The Unix-Haters Handbook, which also applies very much to the modern web, is that the so-called "advancement" didn't really bring anything new. It reinvented old things - things we knew how to do right - but in a broken way, full of half-assed hacks that got fossilized because everything else depends on them.
(Also, don't blame the Lisp community for the fact that companies reinvented half of Lisp in XML. Rather ask yourself why most programmers think the history of programming is a linear progression of power from Assembler and C, and why they remain ignorant of anything that happened before ~1985.)
This is Steve Yegge's understanding of the Lisp community, and I should clarify that I don't think the XML monstrosities we all work with are "all their fault", but that, on the whole, the Lisp community and enterprise coders were mutually antagonistic.
And I really do recommend The Unix-Haters Handbook. Funny thing is - over a decade ago, when I was acquainting myself with the Linux world (after many years of DOS and Windows experience), I noticed and complained about various things that felt wrong or even asinine. Gradually I got convinced by people I considered smarter than me that those things are not bugs but features, that they're how a Good Operating System works, etc. Only now do I realize that my intuition back then was right, but I got Stockholm-syndromed into accepting the insanity. Like most of the world. The sad thing is, there were better solutions in the past, which once again shows how IT is probably the only industry that's totally ignorant of its own history and constantly running in circles.
The real crime of UHH is that it merely hates; it does not instruct. When we do find valid criticisms, there is no suggestion for how to fix things, or how other OSes are better at the same role. I've resigned myself to reading the whole thing, but for all the authors' complaints about not learning anything from history, one can only feel like they have themselves to blame.
A common defensive mechanism among people with outdated skills is to try to delegitimize new frameworks and technologies in the hopes of convincing the broader community not to use things they don't know.
I'm inferring this from your arguments being driven by analogies and insinuations rather than concrete critique. It's not my intention to attack you personally, but an aggressively dismissive attitude towards unfamiliar concepts should be properly contextualized.
As for React, isn't it more likely that you don't know React very well, have never looked at its internals, and in general don't feel like you have the time or ability to learn much about modern web development?
If you build a few projects with React and still dislike it, good! Your critiques will be a lot more valid and useful at that point, whereas right now...yeah.
The web is a joke of an app platform. Those of us who have wider experience of different kinds of programming see some web devs struggling with this concept and conclude, I think quite reasonably, that the only plausible explanation is lack of experience. This is not due to "outdated skills" - I daresay everyone criticising the web as a platform in this thread has, in fact, written web apps. It's the opposite problem. It's to do with developers who haven't got the experience of older technologies having nothing to compare it to, so "web 2.2" gets compared to "web 2.0" and it looks like progress.
And in case you're tempted to dismiss me too, I recently tried to write an app using Polymer. It sucked. The entire experience was godawful from beginning to end. Luckily the app in question didn't have to be a web app, so I started over with a real widget toolkit and got results that were significantly better in half the time.
I would be interested in a detailed explanation of why websockets are a "dumb hack." Duplex streams much more closely map to what web apps actually need to do than dealing with an HTTP request-response cycle. In what way is streaming a hack and making requests that were originally designed to serve up new pages not a hack?
> My point was that you can't create a proper application
> development environment that is both an open and defacto
> standard the way the web is.
In addition to the countless examples posted in this thread, I would argue that if it's nearly impossible to create your own implementation of a platform or standard from scratch, then it's not really open in a practical sense. Who cares if the specs are available if it takes dozens or hundreds of man years to deliver a passable implementation?
That's what every practical environment is. Only the environments which never get used remain "pristine" in the architectural sense, because you cannot fundamentally re-architect an ecology once you have actual users beyond the development team and a few toy test loads.
I think a big part of the problem is that web developers have forgotten (or never learned) about a lot of the ui innovation that has already been done for native platform development.
I blame the page / html / dom model for this. It has forced generations of web developers to figure out clever (or not) workarounds, to the point that they actually think they are innovating when they arrive at the point Qt was at years ago.
Name one then instead of this patronizing BS.
Two seconds of googling will find you dozens.
And web browsers are different how, exactly?
(Other than the fact that "the web" is a mish-mash of hundreds of different "standards" with varying levels of mutual compatibility and standardization, of course.)
One might imagine that after these competing and incompatible native apps become a headache for cross-platform pursuits, a new platform will emerge that provides a uniform toolset for developing (mostly) platform-independent applications.
Perhaps this toolset will utilize a declarative system for specifying the user interface, and a scripting system that is JIT'd on each platform.
I think you're on to something...it could be huge... Heh.
Could be. Sad that it isn't. Think how awesome it would be if app developers actually cared about interoperability instead of trying to grab the whole pie for themselves while giving you a hefty dose of ads in return. This is mostly the fault of developers, but the platform itself could help a lot if it was more end-user programmable. You'd have at least a chance to force different apps to talk to each other.
I think people confuse ideas with implementations. The web is a pretty reasonable implementation of the idea "let's build a hypertext platform". It is not at all a reasonable implementation of the idea "let's build an app platform" which is why in the markets where reasonable distribution platforms exist (mobile) HTML5 got its ass kicked by professionally designed platforms like iOS and Android.
My point is that even with the web the developers are still going to make the native clients, so either the web has to become good enough for the need for native apps to disappear eventually, and the browser becomes the OS, or native apps become convenient enough to completely replace webapps.
Of course if the browser becomes the OS then the end result would be the same as the suggestion in my original post.
When Lars Bak and Kasper Lund launched Dart, I found it sad that they weren't bolder: leave CSS and the DOM alone, and create an alternative Content-Type. So you can choose to Accept 'application/magic-bytecode' before text/html, if your client supports it. Sadly, we ended up with WebAssembly, which, from the few talks I've seen, appears to cater only to graphics/game developers, with no support for dynamic or OO languages.
 Or in Dart lingo, a VM snapshot.
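A rough sketch of that kind of content negotiation from the client side; the "application/magic-bytecode" type is the made-up one from above, and everything else is just illustrative fetch() usage:

// Hypothetical: prefer a bytecode snapshot, fall back to ordinary HTML.
const response = await fetch("https://app.example.com/", {
  headers: { Accept: "application/magic-bytecode, text/html;q=0.9" },
});

const type = response.headers.get("Content-Type") || "";
if (type.includes("application/magic-bytecode")) {
  const snapshot = await response.arrayBuffer(); // hand off to the client's VM
} else {
  const html = await response.text(); // render as a normal page
}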
Go doesn't have generics, some hate it, some love it. But it took a strong stand on that point.
I wish. No, web browsers have become massively bloated operating systems. And since they didn't intend to, they are terrible at it. You have little to no control over anything.
I mean the UI is undeniably smoother, and they can seamlessly hook into the OS notification system, better multitasking (for example, I see separate entries for each native app in the OS's task switcher, but have to go through the extra step of switching into a browser and then its tabs for webapps), energy saving, and everything else.
And there's the problem. I don't use any of those four sites regularly, but I have visited all of them. Hyperlinks provide for that, and they (a) don't exist or at best would be awkward in a native app (not that webapps handle them well to begin with, though all of the above do allow for them at least) and (b) work between apps, platforms, and what-have-you. If I got a link to some image, <160 character sentence, comment thread, or song and was prompted to download an app, I would probably not view that content instead.
We're either looking at making individual pages maintain binaries for the platforms they support (implying support of only those platforms that make sense to the site) or some kind of compilation framework running on the local machine.
Native delivery of a monolithic browser based on an open-source codebase is a solved problem. Trying to do the same with a website, using current techniques, would cause issues both for their current workflow and for my expectations as a user that websites don't currently have.
I'm not saying that it's impossible to do, I'm just saying that it's not a good fit for current trends in web development, and I'm not convinced that it would be great for the users either.
For example, there have been so many attempts at powerful layouts: everyone knew 10 years ago that we needed proper layout solutions (flexbox or whatever), and now we have grid frameworks plus years of cruft on older CSS workarounds that still have to be supported. They keep adding features here and there to sort of address lots of problems; individually those features might be cheaper, but the overall cost of implementing them, both for browsers and for us regular developers, is much higher.
Here are a couple of the things I want from the web, and quite a few of them are there already, if not in ideal forms: powerful layout that's simple enough to use; the concept of a webpage as a bundle (HTTP/2? all your resources together); making partial rendering (the ajaxified page) a natural concept; even making UI/markup delivery separate from content delivery (you can do that with all sorts of libraries, but I think it should be at the core); and security concepts that are easier to implement (CSRF protection, URL tampering, etc.).
One of the ideas I had is that browsers build a new engine that does the right things from the start, and hopefully that's a much lighter engine: if you serve new pages they are really fast, and if you serve old pages there is an optional transpiler of sorts that translates them to the new format on the fly. It won't be terribly good to start with, so it's optional, but essentially the old version is frozen and more people start to only use the new engine (with the transpiler).
Perhaps rather than native apps what we need is the return of gopher. I think that's what Apple's trying to do with Apple News.
In a way, this is why I like doing more and more things from inside Emacs. I get a consistent interface that I control, and that is much more powerful than what each app or website I'd otherwise use can offer. Hell, it's a better interface than any popular OS has.
That's exactly what Windows 10 does.
When it comes down to it, for so long many front-end guys only cared about how it looked, and backend guys didn't care at all, because it's "just the UI, not the real code".
We're finally at a point where real front-end development is starting to matter. I honestly didn't see much of this before about 3-5 years ago... which coincides with Node and npm taking over a lot of mindshare. There's still a lot of bad, but as the graph shows, there's room to make it better.
I think that is orthogonal to bloat. Sure, a complex app will always have more to load and compute than a static page with one blog post on it, but that doesn't mean an app can't be bloated on top of that, just like pages with just a single blog post on them can be bloated.
https://news.ycombinator.com/item?id=11548816 - The average size of Web pages is now the average size of a Doom install
(To make this comment not entirely frivolous, does anyone remember the "bloatware hall of shame", or whatever it was called? I couldn't find it or anything decent like it, sadly. How about something like it for websites?)
Congratulations, you've invented ActiveX.
Epic malware vector.
Web technologies can already do most of what you are proposing, including notifications. There are some performance issues, but they are well on their way to being fixed.
What we would need, if the browser is to become a platform for actual productivity tools and not shiny toys, is a decent persistent storage interface - one that would be controlled by users, not by applications, and that could be browsed, monitored. And most importantly, one that would be reliable. And then, on top of that, a stronger split between what's on-line and what's off-line. Because some data and some tasks should really not be done through a network.
The problem with the web is that the developer experience is nightmarish. The fact that native apps don't suffer XSS should be a hint about where to start looking, but it's really just a house of horrors in there.
It turns out that this technology already exists in a much better form. It's called the cache. The problem is that almost everyone hosts their own version of jQuery. If everyone simply linked the "canonical" version of jQuery (the CDN link is right on their site), then requiring jQuery would be effectively free because it would be in everyone's cache.
Also, the cache is supported by all browsers, with an elegant fallback. Instead of having to manually check whether your user's browser has the resource you want preloaded, you just link the URL and the best option will automatically be used.
TL;DR: Rather than turning this into a political issue, stop bundling resources; modern protocols and intelligent parallel loading allow using the cache to solve this problem.
It's not, though. I ran this experiment when I tried to get Google Search to adopt JQuery (back in 2010). About 13% of visits (then) hit Google with a clean cache. This is Google Search, which at the time was the most visited website in the world, and it was using the Google CDN version of JQuery, which at the time was what the JQuery homepage recommended.
The situation is likely worse now, with the rise of mobile. When I did some testing on mobile browsing performance in early 2014, there were some instances where two pageviews was enough to make a page fall out of cache.
I'd encourage you to go to chrome://view-http-cache/ and take a look at what's actually in your cache. Mine has about 18 hours worth of pages. The vast majority is filled up with ad-tracking garbage and Facebook videos. It also doesn't help that every Wordpress blog has its own copy of JQuery (WordPress is a significant fraction of the web), or for that matter that DoubleClick has a cache-busting parameter on all their JS so they can include the referer. There's sort of a cache-poisoning effect where every site that chooses not to use a CDN for JQuery etc. makes the CDN less effective for sites that do choose to.
There is the problem then, and the solution? I for one don't make bloated sites willy-nilly; I suck at what I do but at least I do love to fiddle and tweak for the sake of it, not because anyone else might even notice; and I like that in websites and prefer to visit those, too. Clean, no-BS, no-hype "actual websites". So I'd be rather annoyed if my browser brought along some more stuff I don't need just because the web is now a marketing machine and people need to deploy their oh-so-important landing pages with stock photos and stock text and stock product in 3 seconds. It was fine before that, and I think a web with hardly any money to be made in it would still work fine; it would still develop. The main difference is that it would be mostly developed by people who you'd have to pay to stay away, instead of the other way around. I genuinely feel we're cheating ourselves out of the information age we could have, that is, one with informed humans.
On top of that, while everyone uses jQuery, everyone uses a different version of it (say, 1.5.1, 1.5.2, ... hundreds of different versions in total, probably).
Also, for my sites I have a fallback to a local copy of the script. This allows me to do completely local development and remain up if the public CDN goes down (or gets compromised), with a small (usually) performance impact.
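For what it's worth, the usual shape of that fallback (shared CDN first, self-hosted copy if it doesn't arrive) is something like the following; paths and versions are placeholders:

<!-- Load jQuery from the shared CDN; if it fails (CDN down, blocked, offline
     development), fall back to a copy served from our own origin. -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.2/jquery.min.js"></script>
<script>
  window.jQuery || document.write('<script src="/js/jquery-1.12.2.min.js"><\/script>');
</script>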
You only gain performance if the browser already has a cached version of this specific version on this specific CDN.
If you don't - you end up losing performance, because now an additional DNS lookup needs to be performed, and an additional TCP connection needs to be opened.
Here are a few reasons people choose to avoid CDNized versions of JS libraries.
This is a 6 year old post, but it raises some valid concerns:
And really, if you are using one of the major libraries and a major CDN (Google, jQuery, etc.), over time your users will end up having the stuff in the cache, either from you or from other sites having used the same library version and CDN.
I suppose someone has done a study on the spread of libraries and CDNs among users, so that you could figure out the chance that a user coming to your site will have a specific library cached. There's this http://www.stevesouders.com/blog/2013/03/18/http-archive-jqu... but it is 3 years old; really, this information would need to be maintained at least annually to tell you what the top CDN for a library is.
So you use the one that has both, but one is not canonical, which means more cache misses. That doesn't even count the fact that there are different versions of each library, each with its own uses and distribution, and the common-CDN approach becomes far less valuable.
In the end, you're better off composing micro-frameworks and building it yourself. Though this takes effort... React + Redux with max compression in a simple webpack project seems to take about 65 KB for me, before actually adding much to the project. Which isn't bad at all... if I can keep the rest of the project under 250 KB, that's less than the CSS + webfonts. It's still half a MB though... just the same, it's way better than a lot of sites manage, even with CDNs.
The question then is how likely are they to have done that in regards to your particular cdn and version of the library.
I agree that a lot of possible CDNs, versions and so forth decreases the value of the common-CDN approach, but there are at least some libraries that have a canonical CDN (jQuery, for example), and not using that is essentially being the selfish player in a game-theory-style game.
Since I don't know of any long-running tracking of CDN usage that allows you to predict how many people who visit your site are likely to have a popular library in their cache, it's really difficult to talk about it meaningfully (I know there are one-off evaluations done at single points in time, but that's not really helpful).
Anyway it's my belief that widespread refusal to use CDN versions of popular libraries is of course beneficial in the short run for the individual site but detrimental in the long run for a large number of sites.
Since HTTPS needs an extra round trip to start up, it's now even more important not to CDN your libraries. The average bandwidth of a user is only going to go up, and their connection latency will remain the same.
If you are making a SaaS product that businesses want, using CDNs also makes it hard to offer an enterprise on-site version, as they want the software to have no external dependencies.
If the user making the request is in Australia, for example, and your web server is in the US, the user is going to be able to complete many round trip requests to the local CDN pop in Australia in the time it takes to make a single request to your server in the US.
Latency is one of the main reasons TO use a CDN. A CDN's entire business model depends on making sure they have reliable and low latency connections to end users. They peer with multiple providers in multiple regions, to make sure links aren't congested and requests are routed efficiently.
Unless you are going to run datacenters all around the world, you aren't going to beat a CDN in latency.
If you are using a CDN for images/video, then yes, you would have savings from using a CDN since your users will have to nail up a connection to your CDN anyways.
Then again a fair number of the users for the site I'm currently working on have high latency connections (800ms+), so it might be distorting my view somewhat.
As for adoption, that is very much a chicken and egg problem.
DNS resolution time is a pretty significant impact for a lot of sites.
The benefits of hosting say Google Fonts, Font Awesome, jQuery, etc. all with KeyCDN is that I can take better advantage of parallelism if I have one single HTTP/2 connection. Not to mention I have full control over my assets to implement caching (cache-control), expire headers, etags, easier purging, and the ability to host my own scripts.
<script src="jQuery-1.12.2.min.js" authoritative-cache-provider="https://ajax.googleapis.com/ajax/libs/jquery/1.12.2/jquery.m... sha-256="31be012d5df7152ae6495decff603040b3cfb949f1d5cf0bf5498e9fc117d546"></script>
Would this cause more problems than it would solve? I'm assuming disk access is faster than network access.
I'm concerned about people like me who use noscript selectively. How easy is it to create a malicious file that matches the checksum of a known file?
I'd say not easy at all, practically impossible.
SHA-256? Very, very, very, very hard. I don't believe there are any known attacks for collisions for SHA-256.
Of course this would need a proposal or something but it would be interesting to consider.
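For illustration, here is roughly the check such an attribute implies, done by hand with the SubtleCrypto API; the digest is the hex value from the example tag above, and the loader itself is purely hypothetical:

// Hypothetical loader: fetch a script, verify its SHA-256 against a pinned
// hex digest, and only inject it into the page if the hashes match.
const expectedHex = "31be012d5df7152ae6495decff603040b3cfb949f1d5cf0bf5498e9fc117d546";

async function loadPinnedScript(url, expected) {
  const body = await (await fetch(url)).arrayBuffer();
  const digest = await crypto.subtle.digest("SHA-256", body);
  const hex = Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, "0"))
    .join("");
  if (hex !== expected) throw new Error("hash mismatch, refusing to run " + url);
  const script = document.createElement("script");
  script.textContent = new TextDecoder().decode(body);
  document.head.appendChild(script);
}

loadPinnedScript("jQuery-1.12.2.min.js", expectedHex);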
Also available on *nix
As others have pointed out, it's quite difficult. But here's another way to think about it: if hash collisions become easy in popular libraries, the whole internet will be broken and nobody will be thinking about this particular exploit.
Servers won't be able to reliably update. Keys won't be able to be checked against fingerprints. Trivial hash collisions will be chaos. Fortunately, we seem to have hit a stride of fairly sound hash methods in terms of collision freedom.
<script src="jQuery-1.12.2.min.js" sha-256="31be012d5df7152ae6495decff603040b3cfb949f1d5cf0bf5498e9fc117d546"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.2/jquery.min.js" sha-256="31be012d5df7152ae6495decff603040b3cfb949f1d5cf0bf5498e9fc117d546"></script>
You're right in that the first one you had, with just the sha-256, would be pretty much equivalent to what I had, especially given that HN readers have resoundingly supported the idea that it is non-trivial to create a malicious file with the same hash as our script file. I was simply trying to be cautious and retain some control for the web application (even if the extra sense of security is misplaced).
This is the use case I'm trying to protect by adding a new "canonical" reference that the web application decides. As others in this thread have said, it is very unlikely that someone will be able to craft a malicious script with the same hash as what I already have. The reason I still stand by including both is firstly compatibility (I hope browsers can simply ignore the sha-256 hash and the authorized cache links if they don't know what to do with it).
As a noscript user, I do not want to trust y0l0swagg3r cdn (just giving an example, please forgive me if this is your company name). NoScript blocks everything other than a select whitelist. If the CDN happens to be blocked, my website should still continue to function loading the script from my server.
My motivation here was to allow perhaps even smaller companies to sort of pool their common files into their own cdn? <script src="jimaca.js" authoritative-cache-provider="https://cdn.jimacajs.example.com/v1/12/34/jimaca.js""></scri... I also want to avoid a situation where Microsoft can come to me and tell me that I can't name my js files microsoft.js or something. The chances of an accidental collision are apparently very close to zero so I agree with you that there is room for improvement. (:
This is definitely not an RFC or anything formal. I am just a student and in no position to actually effect any change or even make a formal proposal.
SHA + CDN url list (for whitelisting/reliability purposes - public/trusted, and then private for reliability) would be ideal.
- jquery.com becomes a central point of failure and attack;
- jquery gets to be the biggest tracker of all time;
- the cache does not stay forever. Actually, with pages weighing 3 MB every time they load, after 100 clicks (not much) I have invalidated 300 MB of cache. If Firefox allows 2 GB of cache (a lot for just one app), then by the end of the day all of my cache has been busted.
So create one massive target that needs to be breached to access massive numbers of websites around the world?
Imagine if every Windows PC ran code from a single web page every time it started up. Now imagine if anything could be put in that code and it would be run. How big of a target would that be?
While there are cases where the performance is worth using a CDN, there are plenty of reasons to not want to run foreign code.
(Now maybe we could add some security, like generating a hash of the code on the CDN and matching it with a value provided by the website and only running the code if the hashes matched. But there are still business risks even with that.)
See https://developer.mozilla.org/en-US/docs/Web/Security/Subres... though it isn't universally supported yet.
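Subresource Integrity as it actually shipped looks like the snippet below; note that the digest is base64-encoded rather than hex, so the value here is only a placeholder:

<!-- The browser fetches the file, hashes it, and refuses to execute it if the
     digest doesn't match; crossorigin is needed for cross-origin scripts. -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.2/jquery.min.js"
        integrity="sha256-BASE64_DIGEST_PLACEHOLDER"
        crossorigin="anonymous"></script>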
There's no reason to hit the webserver with an If-modified when the libraries already include their version in the path.
There's a Firefox plugin that does just that! Decentraleyes - https://addons.mozilla.org/en-US/firefox/addon/decentraleyes...
Although mostly for privacy reasons, so Google doesn't know all the sites you visit.
If JS absolutely needed jQuery in order to, say, select an element, the way C needs a library to output to the console, then sure, you may have an argument here.
Thankfully node.js is bringing this clusterfuck to the desktop too.
The dependencies we are talking about are transparent to the user, save only for some download performance issues which are pretty minimal, if not overstated in many cases, compared to the former issues faced by native applications.
This is still a shame if you're maintaining a heavyweight alternative to jQuery or React which is far too obscure to be a consideration for browser vendors, but it's a big boon for users, especially users on slow or metered connections that download several different version numbers of jQuery from several different CDNs every day.
If you like, there could be a mutually agreed-upon standard repository that browser vendors routinely update from.
Sure, your less popular experimental library won't be in the list, that's what "<script src=..." is for.
It probably won't happen but it is hard to defend the position that it is a bad idea, I think.
But I agree, overall.
For instance, your site might pull it in from cdnjs, while I have the one from ajax.googleapis.com already cached. I still have to go fetch your selected version.
jquery-2.2.3.min.js is only 86KB for me. For the amount of functionality it adds, sure seems like a sweet deal.
Part of the problem (IMHO) is the growing requirement for larger, alpha-channel-using image formats like PNG, which then need to be expressed in a Retina format - I mean, stuff looks terrible on my Retina MacBook Pro when it isn't properly Retina formatted. (Here's looking at you, Unity3D... even the current beta, which supports Retina displays, is extremely unstable, and half the graphics (even stupid buttons in the UI) are still SD...)
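For the Retina point specifically, one mitigation is letting the browser choose between 1x and 2x assets so non-Retina users don't pay for the heavier file; the filenames here are made up:

<!-- Browsers that understand srcset pick the density-appropriate file;
     everything else falls back to the plain src attribute. -->
<img src="icon.png"
     srcset="icon.png 1x, icon@2x.png 2x"
     alt="toolbar icon">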
With bandwidth costs declining and baseline RAM increasing, is there a particular reason a Web application should be much smaller than a typical desktop application? We have caches for a reason.
What if browsers shipped with a repository instead of a cache? Download once, stay forever.
If any browser vendor implemented my proposal, their users would switch to another browser. If it was an addon, only a handful of people would use it so there would be no impact. If it was done on the network level of a corporate business, users would complain. Still, one can dream :)
It's often suggested that we'll solve poverty by providing education, and given that the internet provides a unique opportunity to learn, surely we benefit as a species by ensuring there isn't a rich web and a poor web? This risk is compounded by rich people being in a better position to create better learning opportunities.
Much like there's a drive toward being aware of the accessibility of your website (colour-blindness check tools, Facebook's image analysis for alt-text tags, etc.), we should be thinking about delivery onto slower networks in poorer countries.
Give some kind of reminder to both users and developers about how slow their sites are. Those with the slowest websites probably won't like it too much initially, but it's going to be better for all of us in the long term.
Also, browsers could default to http://downforeveryoneorjustme.com/ for sites that loaded too slowly. AdBlock Plus and NoScript speed loads greatly. Maybe browsers could do triage on sites that load too slowly. Perhaps switch to reader mode or whatever.
Also, it doesn't work other places.
Users already respond to page load time. There's extensive evidence to support this.
Thirty-five times! Apollo software got us to the moon. Doom wasted millions of man-hours on a video game.
My point of course is that these comparisons are not actually that illuminating.
Are web pages much heavier than they need to be? Yes. This presentation very capably talks about that problem:
Does comparing web pages to Doom help understand or improve the situation? No, not any more than comparing Doom to Apollo memory size helps us understand the difference between a video game and a history-altering exploration.
What about the question "do web pages work any better than they did in 2007?", when we were using full page reloads and server-side logic instead of JavaScript tricks.
I see so much basic brokenness on the web today, from the back button not working to horribly overloaded news websites with mystery-meat mobile navigation, that I find myself wondering what we have really achieved in the last 9 years. This stuff used to work.
Thaaaaaaaat's nonsense. I had relatively high-res CRTs (1600x1200) in the late 90s and early 2000s.
My father and I were able to get by with Netscape Navigator and Firefox for quite awhile as well.
> "Sorry, the Argos Internet site cannot currently be viewed using
Netscape 6 or other browsers with the same rendering engine.
> In the meantime, please use a different web browser or call
0870 600 2020 to order items from the Argos catalogue."
> Sorry, I think I'll shop elsewhere until you get it fixed...
Argos was sniffing the user agent. I think people tried changing the user agent, and it worked fine.
This kind of thing wasn't rare, even in 2003.
And it is not rare now either. Nothing has changed in that regard. Things have just gotten massively slower, use insane amounts of CPU, and are less functional.
Client-side logic, done right, is much improved over a server-side solution.
My wired desktop gets DNS responses from 188.8.131.52 nearly as if it were in my network, in way under 10 ms, ping responses in 2 ms or so. Accessing websites hosted in e.g. Korea takes >100 ms.
Add a congested wireless connection somewhere (WLAN or mobile network) and you can add another few hundred ms. And neither cross-continent nor congested wireless latency is going to go away.
I said audio. I provided this as a counterexample to the stated thesis of your post. There exist things that can be done over a network such that latency is not an issue. I am obviously not pulling data over a cross-continental link.
FWIW, the protocols I write at work can do a full data pull - a couple thousand elements and growing - in under a half second end to end. I don't know of any HTML/Web based protocols that can even get close to that over localhost.
So yeah - we know the Web is an utter pig. My point is that it probably doesn't have to be.
The article was specifically about web page payload size. My comment was comparing UX of dynamic client-side logic vs full round trips.
You must have replied to the wrong comment, I would hope.
Well, to be honest, Episode 1 and Episode 2 of Doom takes place on Phobos and Deimos, so you could say Apollo software got us to the moon but Doom got us to Mars :)
One Hundred Years of Solitude
The Count of Monte Cristo
Amazon.com: Online Shopping for Electronics, Apparel, Computers, Books, DVDs & more
Keep in mind that the AGC was a necessary but not sufficient piece of hardware for navigating to the moon, and was extremely special-purpose. NASA had several big (for the time) mainframes that
1) calculated the approximation tables that were stored in AGC ROM (each mission required a new table because the relative positions of the earth, sun and moon were different)
2) reduced soundings from earth-based radars to periodically update the AGC's concept of its position.
3) other things that I've forgotten
In other words, the AGC required the assistance of a ground-based computer with dozens of megabytes of RAM and hundreds of megabytes of storage. That will fit on your phone quite easily, but let's not minimize the requirements for celestial navigation.
What the shuttle did was much more complex because it was an unstable aircraft that required many "frames per second" applied to the control surfaces to keep it stable during reentry and landing.
Not long after that, libraries such as Prototype and JQuery were becoming popular and these were all many times bigger than my 3d engine before you even started coding the app.
IMO, Doom and Web pages are remarkably close in terms of purpose and required assets, and the comparison is apt. Especially when you can play Doom on a web page...
How many millions of man-hours did the Apollo project waste on a PR stunt?
Or to put it more elegantly: "Stop there! Your theory is confined to that which is seen; it takes no account of that which is not seen."
True, but if you're talking opportunity costs then I see Apollo, and the space race in general, as a great success story. They took the political atmosphere of nationalism, paranoia, one-upmanship, costly signalling, etc. and funnelled some of it into exploration, science and engineering at otherwise unthinkable levels.
If it weren't for the space race, it's likely the majority of those resources would have been poured into armaments, military-industrial churn, espionage, corruption/lobbying, (proxy) wars, etc.
Sounds like a bargain to me.
The problem with this type of attitude is that discovery doesn't work like this. Incremental improvements can sometimes work this way, but big discoveries do not. If there had been a mandate to "find a way to communicate without wires" I'm going to guess that it would not have gotten very far. Instead, this came about as a side effect of pure science research.
That said, I do take chriswarbo's point that it could have easily instead been even more baroque weapons or proxy wars, as well as yours and manaskarekar's about the uncertainty inherent in counterfactuals. I just wanted to make the point that just finding some positives is not enough; you need to look at opportunity costs. If we both look at them and come to different conclusions, that's life, but at least we agree on the basis of measurement.
It's definitely debatable and hard to gauge. I just thought I'd throw in the link to show the other side of the argument.
It takes a while to load (not really, 100ms + download), but after that it is silky smooth due to client side rendering (faster than downloading more html) and caching.
On the most demanding page, heap usage is a little more than 20 MB.
Sure, there are a lot of websites which are slow and huge memory hogs. But that goes for many native apps as well.
Meanwhile, how fast Google Docs loads depends entirely on the speed of my internet connection at the time. Good luck even opening it at all if your connection is crappy, flaky, if any of the ISPs between you and Google have congestion issues, or if there's a transient latency problem in one of the dozens of server pools that makes up an app like Docs.
(But the different domains actually help speed up loading with HTTP, because they help parallelise transfers.)
https://github.com/filamentgroup/loadCSS#recommended-usage-p... has a rather nice way to load CSS asynchronously in browsers which support rel=preload.
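The general shape of that rel=preload trick is something like the snippet below (a sketch, not the repo's exact code; the stylesheet name is a placeholder):

<!-- Fetch the CSS without blocking render, then switch it on once loaded;
     the noscript fallback covers users with JavaScript disabled. -->
<link rel="preload" href="site.css" as="style"
      onload="this.onload=null; this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="site.css"></noscript>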