> If you're just starting out with a new web application, it should probably be an SPA.
Your reasoning for this seems to be performance (reloading assets), but IMHO the only good reason for using a single-page app is when your application requires a high level of interactivity.
In nearly every case where an existing app I know and love transitions to a single-page app (seemingly just for the sake of transitioning to a single-page app), performance and usability have suffered. For example, I cannot comprehend why Reddit chose a single-page app for their new mobile site.
It's a lot harder to get a single-page app right than a traditional app which uses all the usability advantages baked into the standard web.
For websites, you should always use progressive enhancement - there is no reason why you couldn't obtain the same performance gains by progressively enhancing your site with reload-less navigation. That's what AJAX and the HTML5 History API are for.
Especially don't forget that no, not all your clients support JS. And there's no reason why they should need to, for a website.
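For illustration, here's a minimal sketch of the kind of reload-less enhancement I mean. It's full of assumptions - the `data-enhance` attribute and the `#content` container are made-up conventions - and plain links keep working if JS never runs:

```javascript
// Progressive enhancement: intercept clicks on opted-in links, swap the main
// content, and update the URL with the History API. Without JS (or on an old
// browser), the same <a href> links fall back to normal full-page loads.
document.addEventListener('click', function (event) {
  if (!window.history.pushState || !window.fetch) return; // no support: do nothing
  var link = event.target.closest && event.target.closest('a[data-enhance]');
  if (!link) return;

  event.preventDefault();
  fetch(link.href)
    .then(function (response) { return response.text(); })
    .then(function (html) {
      // Assumes the server wraps the main content in an element with id="content".
      var doc = new DOMParser().parseFromString(html, 'text/html');
      document.getElementById('content').innerHTML =
        doc.getElementById('content').innerHTML;
      history.pushState({ url: link.href }, '', link.href);
    })
    .catch(function () { window.location.href = link.href; }); // on failure, do a full load
});
```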
Sure there is, if you care about interactivity, responsiveness, and general user experience. The fact that not all potential clients support JS is probably irrelevant, since that number is probably incredibly small. You might as well say that not all potential clients have access to the Internet.
And all of these can be provided just fine using progressive enhancement. This is _not_ an argument for SPAs.
> The fact that not all potential clients support JS is probably irrelevant, since that number is probably incredibly small.
Your use of the term 'probably' suggests to me that you have not run any actual metrics. Here's a short list of some examples:
- Bots and spiders
- Low-powered mobile devices
- Privacy-conscious users who use NoScript
- Terminal-based browser users
- Literally every single one of your users until the .js blob has completed downloading
Requiring JS for a website is not okay, end of story. We all collectively understood this 10 years ago, and I'm not really sure where along the way this understanding has gotten lost.
EDIT: Forgot that HN has a neutered version of Markdown. Fixed.
What I really meant is that if you have a significant non-JS-supporting visitor base, you almost certainly already know it. There is a huge class of web apps that simply do not have to worry about it. If you know who your customers/visitors are, then you should already know how important it is to support non-JS browsers.
> Requiring JS for a website is not okay, end of story.
> We all collectively understood this 10 years ago
Things were a lot different 10 years ago, and perhaps your reluctance to update your understanding less than once per decade might explain some of your views. You may as well be saying "making a store that can only be accessed over the Internet is not okay, end of story, we all collectively understood this 25 years ago."
I always think it's interesting when people bring CSS into this, because it's actually a very strong counter to the general argument.
By design a user agent is well within its rights to completely ignore any stylesheets attached to a page, and the idea was always that this should be completely OK, partly because you have no idea what the UA's capabilities are, up to and including whether or not it's doing any kind of visual rendering at all.
This isn't a news flash to everyone, but even those already aware of it seem to think the main reason you'd do this is ADA accommodations, and that those are some kind of minority afterthought. My experience is that this is misguided -- accommodations for unusual visitors are important, but I think the biggest benefit might be that the simpler you make things for the client, the less complicated the engineering tends to be, even if you can't import your Java-imitation-of-Smalltalk-inspired application paradigm of choice.
And "too bad if you didn't have JS!" position seems to basically boil down to the idea that serving a custom client that consumes JSON instead of HTML as the media type for a given URI is Real Progress™.
Keep things as simple as possible. Don't require JS unless what the application needs to do can't be done without it.
The problem is that arguments from both sides are often generalized and it's not necessarily wrong. If you have a wide audience e-commerce enterprise you should probably keep JS use as limited as possible. If you build tools for programmers or highly dynamic projects, JS everywhere is probably fine.
When it comes to CSS, generally most apps will continue to work even if the CSS is simply ignored.
Because that was one big point of CSS. Separation of presentation from content and function.
It's funny. That kind of separation of concerns is something developers talk about valuing, but the SPA/webapp craze erodes the user-facing aspect of it, even while developers are very proud to demonstrate they're thinking hard about which specific kind of separated-concern architecture they're working with well away from the boundary where an outside user or UA would care.
A forum, for example, isn't highly interactive, whereas an online spreadsheet editor is.
I was referring specifically to websites, not webapps.
That analogy makes absolutely no sense, nor does your comparison to CSS. JS is an entirely different class of dependency than either CSS or web browsers.
JS is Turing-complete, has a significantly bigger attack surface (both in terms of vulnerabilities and in terms of tracking), is much, much harder to optimize for a low-powered device than HTML and CSS, and so on.
Further, the web browser is a necessity to view something on the web. It is well understood, it is easy to optimize, and it is widely deployed. JS is no such 'required dependency' - and you should not make it one, when it can be done just fine without.
> Things were a lot different 10 years ago, and perhaps your reluctance to update your understanding less than once per decade might explain some of your views. You may as well be saying "making a store that can only be accessed over the Internet is not okay, end of story, we all collectively understood this 25 years ago."
The support/compatibility landscape for JS has not changed in those 10 years. The types of non-JS clients are still the same, the reasons for progressive enhancement are still the same. The only thing that has changed is standardization.
"Old" is not the same as "obsoleted". Unless you can come up with a concrete reason as to why this knowledge has been obsoleted throughout the past 10 years, it still applies.
And I'm getting a little tired of these poorly fitting analogies, you're using them as a crutch.
I disagree with several claims you make later in that comment, but since I think I've addressed our primary disagreement I will leave it at that.
There's motherfuckingwebsite.com, but the tone of that site is very adversarial; I think one that lays out the reasoning in a noncombative way would be more successful at reaching some of these people.
There's also this post, that goes into it somewhat: http://allinthehead.com/retro/367/why-is-progressive-enhance... - also, Brad Frost has some posts about this.
You have a failure of understanding here.
There is a difference between a web site and a web app (although there is blur in the middle, obviously). A web site (e.g. this site, a newspaper, a blog, a search page, a job application form, etc.) should not need JS - and frankly, if you put it in there then you're over-engineering things. Meanwhile, web apps like Google Docs/Office 365, a webmail client, etc. are clearly going to need specific JS.
Unless Stallman is in your target market that's probably something you don't need to consider for any longer than fifteen seconds.
This is how Google sees your web site.
But it still boils down to this: The easier you make it for Google to get at the text of your web site, the better for your page rank.
A simple web page (no-js) in a browser covers these three things to a highly acceptable standard. It's when you start to add the other junk that these three become compromised.
Interactivity: Hyperlinks work great from HTML 1.0 onwards
General User Experience: The basic web page user experience is EXCELLENT for web pages because it is incredibly basic. Even date pickers can consist of 3 select drop-downs (or even a text box; there are plenty of human-date-to-ISO-8601 converters out there)
Graceful fallbacks tend to be maintained as a separate version, and neglected over time. Progressive enhancement means taking a basic page and adding snazzy functionality to it - eg. ajaxified page loads. The latter is what you want.
I even take advantage of this on my blog by rigging a field in the comments section to not be visible for users but be visible for spammer bots, so that they will fill it out and the software can auto-reject it.
Works very well.
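A rough sketch of how that can look on the server side (Express-style; the field name `website`, the `saveComment` helper, and the body-parsing middleware are all assumptions):

```javascript
// The form contains an extra field, hidden from humans with CSS (e.g. moved
// off-screen), that real visitors never fill in. Anything in it means "bot".
app.post('/comments', function (req, res) {
  if (req.body.website) {               // honeypot field was filled: auto-reject
    return res.status(400).send('Comment rejected.');
  }
  saveComment(req.body);                // hypothetical persistence helper
  res.redirect('/comments');
});
```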
See here: http://bradfrost.com/blog/post/fuck-you/
> I even take advantage of this on my blog by rigging a field in the comments section to not be visible for users but be visible for spammer bots, so that they will fill it out and the software can auto-reject it.
I ask only because I'm puzzled as to the purpose of your reply.
You get a much more limited range of languages and libraries to work with. You get to use overcomplicated build and deployment processes with ever-changing tools. You get to reinvent the wheel if you do want to use things like URI-based routing and browser history in a sensible way. In many cases you are going to need most of the same back-end infrastructure to supply the underlying data anyway.
Also, it's tough to argue the SPA approach is significantly more efficient if it's being compared with a traditional web app built using a server-side framework where switching contexts requests exactly one uncached HTML file, no uncached CSS or JS in many cases, and any specific resources for the new page that you would have had to download anyway.
Of course some web apps are sufficiently interactive that you do need to move more of the code client-side, and beyond a certain point you might find it's then easier to do everything there instead of splitting responsibilities. I'm not saying everything should be done server-side; I'm saying different choices work for different projects and it is unwise to assume that SPA will be a good choice for all new projects.
If you're just starting out, chances are you are not (or should not be) making any sort of application where the performance increases from operating as a SPA will even be noticeable compared to a standard server app.
Plus, I'd argue that you won't really understand what a SPA adds (or takes away) unless you are thoroughly familiar with the traditional model.
Finally, at the end of the day traditional apps are just a lot easier to put together even compared to the latest SPA frameworks, especially if your server side tech is something like Ruby or C#. A beginner will be better served by getting something nice up quickly, before attempting to do it the 'purist' way and possibly getting discouraged by the difficulty.
They should but not the way they are making it.
It should use server-side rendering for the initial page (not just the home page but any page), and mostly change content through AJAX when you navigate between pages.
SPAs are hard, especially when it comes to usability. One of the biggest issues I see with SPAs is going back. The browser handles back history pretty well for non-SPAs; replicating similar behavior in JS is not easy.
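To sketch what "replicating it" involves (assuming the same hypothetical `#content` container as in the earlier sketch, and a site that navigates via pushState):

```javascript
// Without a popstate handler, pushState-based navigation changes the URL on
// "back" but leaves the old content on screen - the broken back button users notice.
window.addEventListener('popstate', function (event) {
  var url = (event.state && event.state.url) || window.location.href;
  fetch(url)
    .then(function (response) { return response.text(); })
    .then(function (html) {
      var doc = new DOMParser().parseFromString(html, 'text/html');
      document.getElementById('content').innerHTML =
        doc.getElementById('content').innerHTML;
    });
});
```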
A Reddit SPA should be like this, with server-side rendering: http://reddit.premii.com/
You should instead have a security audit with people who have experience in security, so they can help you identify where and why your system is vulnerable. If no one on your team or at your company does, then hire a consultant.
Security is a hairy issue, and no single blog post/article is going to distill the nuances down in an easy to digest manner.
Instead of throwing money at the problem, you can instead choose to teach yourself more about the subject. We maintain a curated list on Github for people interested in learning about application security for this very reason.
But if you're a company and your operating budget is in the millions of dollars hire a security consultant!
True. You don't need to hire consultants to perform a security audit. Ask HN and Security Stack Exchange are good free alternatives to get critiques on your approach.
It is easy to write that, and on the face of it, it's hard to argue against.
The trouble is, those audits and consultants don't come cheap, and if you're new at web apps and working on your first one that no-one has ever heard of yet, there is little really essential that you wouldn't pick up by investing the same time in reading the usual beginners' guides to security online. It's all risk management, and if you even make that effort you'll already be a significantly harder target than many established sites.
As a corporate lawyer once told me when I was getting the very first contract drawn up for a new business, for a simple supplier relationship, he could certainly charge me five figures and write an extensive document protecting the business against every conceivable threat he could imagine involving that supplier, but until the business had actual revenues worth protecting and the deal with that particular supplier was worth a lot more than the legal fees, he wouldn't advise doing it.
It's clearly a logical failure to suggest heeding the author's advice would result in a catastrophic security breach.
Not paying attention to security by reason of "I've done a little better than nothing at all" feels like willful negligence.
(edit: this is an explanation of what "middle-brow dismissal" is)
Security is never perfect, and as security people we know that there is a tradeoff between security and user convenience.
We don't advocate letting The Perfect be the Enemy of the Good when it comes to security, but by the same token we want you to implement security properly if you do it.
Don't try to do it yourself.
I challenge you to point out specific suggestions in this article which are wrong or misleading, or to point out glaring omissions.
Encrypting passwords is not hashing them (I think this was fixed after publishing due to comments below)
OAuth is not for authentication
SPA is not suited to all or even most websites, and is far from being 'king' in any sense.
CDNs have pros and cons, they don't suit everyone.
Localisation does not mean serving assets closer to home, but translating stuff.
Nothing better than SSL? TLS
If OAuth is not for authentication, someone better tell Google: "Google APIs use the OAuth 2.0 protocol for authentication and authorization." 
TLS is basically just the newest version of SSL. The name was changed for legal reasons. So it is an understandable oversight 
The others aren't security related, so I didn't address them.
The special property that encryption functions have compared to hashing functions isn't that it is extremely difficult to discover the source, but rather almost the reverse -- that for every encryption function there exists a function (decryption function) by which you can recover the unique source.
Hashing functions in general do not have an inverse function: while you might be able to recover several possible sources from them (and this might be easy or difficult), you cannot recover the single source, because the space of inputs is larger than the space of outputs, so there can be no unique mapping from outputs back to inputs that would generate them.
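A small illustration of the difference using Node's built-in crypto module (plain SHA-256 is shown only to make the one-way point; for passwords you'd use a slow, salted scheme like bcrypt or scrypt, as mentioned elsewhere in the thread):

```javascript
const crypto = require('crypto');

// Encryption is invertible: with the key (and IV) you recover the exact plaintext.
const key = crypto.randomBytes(32);
const iv = crypto.randomBytes(16);
const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
const encrypted = Buffer.concat([cipher.update('hunter2', 'utf8'), cipher.final()]);
const decipher = crypto.createDecipheriv('aes-256-cbc', key, iv);
const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]).toString('utf8');
// decrypted === 'hunter2'

// A hash is one-way: nothing recovers 'hunter2' from the digest; you can only
// hash a candidate input again and compare the results.
const digest = crypto.createHash('sha256').update('hunter2').digest('hex');
```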
There's a fundamental difference between storing a password so that you can read it again (encrypt implies this), and storing it so that you can only verify it, not read it (hash). But a broader criticism of the article is that it is far too sweeping in its judgements based on scant knowledge of the topic - the little mistakes are just indications of that.
It's fine to be a beginner asking questions and the mistakes are not really so important, but it's not really useful to attempt a definitive summary of a field which you know very little about.
You can use OAuth for authentication, but its specific purpose is authorization.
Google has a separate product for Sign In:
Alan has a web application that shows you all funny tweets. In order to see those tweets you must first create an account.
You pick a username, you enter a password. That combination is attached to Alan WebApp UserID: 12345
Every time you log in with that username and password combination, you get back Alan WebApp UserID 12345.
You click the "Login with Google" button.
It redirects you to say "Do you want to associate your Google account with Alan's Web App?"
You click yes.
Google ID: XYZZY is returned. That id is tied to Alan WebApp UserID 12345.
The next time you go to login, Google returns "This is Google ID: XYZZY". Alan WebApp finds the association XYZZY with Alan WebApp 12345.
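In code, that association might look something like this (framework-agnostic sketch; `findUserByGoogleId` and `createUser` are hypothetical data-access helpers):

```javascript
// Map the provider's stable identifier (the "XYZZY" above) to a local account.
async function handleGoogleSignIn(googleProfile) {
  let user = await findUserByGoogleId(googleProfile.id);
  if (!user) {
    // First sign-in: create the local account and store the association.
    user = await createUser({
      googleId: googleProfile.id,
      name: googleProfile.displayName
    });
  }
  return user; // e.g. Alan WebApp UserID 12345
}
```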
That is an inaccurate statement.
Those sites allow you to log in with your GitHub/Facebook/Google accounts. That isn't OAuth. Those sites also use OAuth in order to let 3rd-party applications access the user's data stored on that system.
Take this scenario:
Alan has a service that finds funny tweets.
cpitman wants to use Alan's service, to find his funny tweets.
No OAuth Example:
cpitman gives Alan service his Twitter Username and Password.
Alan service logs into Twitter, and pulls twitter data.
OAuth Example:
Alan service opens a request to Twitter asking for twitter data for cpitman
Alan service redirects cpitman to Twitter
Twitter notifies cpitman that Alan Service wants to access twitter data
Twitter passes back a token
Alan service uses the token to access cpitman's twitter data.
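That last step is the point of the whole flow: Alan's service holds a scoped, revocable token, never cpitman's password. A hedged sketch of what using it looks like (the endpoint is a placeholder, not the real Twitter API):

```javascript
// Use the delegated access token to fetch the user's data on their behalf.
async function fetchUserTweets(accessToken) {
  const response = await fetch('https://api.example.com/tweets', {
    headers: { Authorization: 'Bearer ' + accessToken }
  });
  if (!response.ok) throw new Error('Upstream API error: ' + response.status);
  return response.json();
}
```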
(beginner here, trying to understand why not use OAuth for Identification/Authentication)
I am guilty of not editing that one comment to hang my other comment off of.
The end result of better code is far better than a magical appliance.
brb compiling linux to JS to render my blog post.
I had the impression that it was supposed to be a simple, bloat-free blogging platform.
here's a compressed version: https://www.dropbox.com/s/bw606t7znouxpj1/photo-141847963101...
That is ironic on so many levels.
I mean there is even a section on "UX: Bandwidth"...
Maybe the author should brush up on image compression best practices and consider adding a subsection on images and other media.
EDIT: Realized my previous wording was probably a bit too harsh considering the author is still relatively new to web development.
4k is all the rage now on mobile.
You've just taken a chunk out of someone's download quota with that nice background!
The thing that everybody seems to overlook here: this has serious legal consequences.
You are demanding of your users that they agree to a set of TOS from a third party, one that does not have either their or your best interests at heart, and that could have rather disturbing things in its TOS - such as permission to track you using widgets on third-party sites.
Not to mention the inability to remove an account with a third-party service without breaking their authentication to your site as well.
Always, always offer an independent login method as well - whether it be username/password, a provider-independent key authentication solution, or anything else.
> When storing passwords, salt and hash them first, using an existing, widely used crypto library.
"Widely used" in and of itself is a poor metric. Use scrypt or bcrypt. The latter has a 72 character input limit, which is a problem for some passphrases, as anything after 72 characters is silently truncated.
A CDN seems nice in theory. Reality is:
- Does the browser have the library cached?
- Is the library cached from the CDN that I'm using?
- The browser is making more HTTP requests, which can sometimes take more time than downloading the library would.
I agree that using CDNs is a good speed boost. I'm trying to figure out whether hoping for a library cache hit outweighs the cost of a library cache miss.
This rarely works in practice. The URLs to these shared libraries vary: multiple shared services, multiple version numbers, HTTPS vs. HTTP. The net result is that the probability that someone visiting your site has a copy of the exact same resource referenced via the exact same URL is very low.
With the overhead of having to do a DNS lookup, a TCP connection, and TCP slow start, it's rarely worth it. Just concat/minify into your own block of JS served from your own systems. Shared JS hosts/CDNs are a terrible and annoying hack, all in an attempt to save 50KB or so.
Also can you provide any stats/citation that cache hit probability on first request is in fact very low?
I have more recent data on this that is not yet published, from my day-to-day work at a web performance company. The picture has improved somewhat, but not significantly.
You are correct, "Overheads of fetching library from a CDN are applicable to the first request" The problem is, because of the fragmentation, every website is asking you to hit a different URL, so every request is a "first request". You aren't leveraging the browser cache.
Most sites are already serving you site-specific JS anyway over a warm connection (even more so with HTTP/2), so there is even less benefit to going to a 3rd-party host to potentially avoid downloading a few dozen kilobytes. Couple that with the security implications of injecting 3rd-party code into your page, and it's just plain silly and wasteful to do this for a modern website.
Also, I was talking about subsequent requests from the same client.
This sentence should be a big clue: "We usually average around 15,000 hits per second to our CDN with 99.8% of those being cache hits."
"We" in that sentence is Kris Borchers speaking collectively about the jQuery foundation, talking a MaxCDN interviewer. But he is not talking about the browser cache. He can't be, because jQuery, or MaxCDN for that matter has no idea what the "hit rate" of a browser cache is.
Example: If I go to 1.example.com, which links to maxcdn.com/jquery.js, and then later I go to site 2.example.com, which links to the same maxcdn.com/jquery.js file, my browser doesn't send any requests! That is the entire point of far-future caching! I was able to use the version of jquery that was in my browser cache. However MaxCDN, or jQuery for that matter, have no idea this hit took place.
By the same token, if I go to 1.example.com, which links to maxcdn.com/jquery.js, and then later I go to site 2.example.com, which links to a different URL like maxcdn.com/master/jquery.js, my browser has a cache miss. /master/jquery.js is not in my browser's cache, I've never been there. MaxCDN, or jQuery for that matter, have no idea that I requested something different then before.
CDN cache hit rate has nothing to do with browser caches. In fact, people who are not you being able to detect whether something is in your browser cache is a massive security problem. See my talk at BlackHat in 2007, many of Jeremiah Grossman's talks at BlackHat (2006, 2007, 2009), or go all the way back to Ed Felten's work on using timing side channels against browser caches.
In the industry, "99.8%" cache hit on a CDN's edge server means that 99.8% of the time the edge server can handle the request, instead of the request having to go all the way to the origin. They have no way of knowing how often a random person on the internet loads a random file from their browser cache.
A browser can be told to revalidate files, asking the server to return the content conditionally using the "If-Modified-Since" and "If-None-Match" headers. This way, the server will return 304 with an empty body if the file has not changed, or 200 and the file if it is new or has changed.
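A minimal sketch of the server side of that revalidation (Express-style, ETag variant; `currentAssetEtag` and `assetContents` are hypothetical helpers):

```javascript
app.get('/assets/app.js', function (req, res) {
  const etag = currentAssetEtag();      // e.g. a hash of the current file contents
  if (req.headers['if-none-match'] === etag) {
    return res.status(304).end();       // unchanged: empty body, browser reuses its cached copy
  }
  res.set('ETag', etag);
  res.type('application/javascript').send(assetContents());
});
```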
"Our CDN is a huge part of the jQuery ecosystem. We usually average around 15,000 hits per second to our CDN with 99.8% of those being cache hits. This provides much better performance for those using our CDN as their visitors can use a cached copy of jQuery and not have to download it, thus decreasing load time."
Somehow because of that I assumed that they had analysis done to understand browser caching rates. My bad.
EDIT: Huh, funny thing. What exactly is the origin server for the CDN jQuery library when the request URI is https://code.jquery.com/jquery-2.1.4.min.js ?
What would be the point of going to the origin server at all if versioned jQuery libraries are static and do not change? Edge locations are, for all intents and purposes, an origin server. I think that the sibling comment may be more accurate in its assumption: the 99.8% cache hits are most probably 200 vs. 304 responses.
END OF EDIT
Usage on top 10k websites:

- Google JS CDN is used on 23.5%
- jQuery CDN is used on 4%
- CDNJS is used on 4%
- jsDelivr is used on 0.5%
- OSSCdn is used on 2%
Assuming the sets of websites that use a particular JS CDN are disjoint from those using a competing CDN, we can estimate total JS CDN use at 30% of the top 10k websites, plus literally millions of websites scattered around the internet.
As JS library popularity follows a power-law distribution and library cache headers are set for a year or longer, I would suggest that the probability of the top 100 JS libraries already being cached in a browser is really high.
Statistical data hints that JS CDNs are in fact quite efficient at reaching their goals, but certainly doesn't prove anything conclusively.
This is terrible advice. Don't do this. Remember what happened when Adobe did this?
"When storing passwords, encrypt them first, using an existing, widely used crypto library. If you can get away with it, outsource identity management to Facebook / GitHub / Twitter / etc. and just use an OAuth flow."
Can you elaborate on what's so "nope" about that advice? Are you saying one shouldn't encrypt passwords?
If you use a batteries-included web-framework, this is already done for you. If you do not, you better understand the tradeoff of redeveloping those parts.
You should either store only the salted hash value, or outsource the identity management to a third party who knows not to store the users passwords. :)
This is something I've seen a lot of developers act elitist about, and it's always rubbed me the wrong way.
It's like the gun nuts that flip out when someone calls it an assault rifle or a clip instead of a magazine.
What can you do, people like showing off how "smart" they are.
And if you want the user to be able to perform sensitive operations (edit their personal details, for example) then you'll have to ask for an OTP or email verification every time. These methods tend to be higher friction than a password box.
Sure you don't want to constantly bug the user but not every site needs to do that. Especially for sporadically-used sites, "receiving email" could be less of a pain than keeping track of passwords.
A session can be long-lived without being indefinite. We might decide that any authenticated site visit within the last week is new enough not to repeat the passwordless process, or we might say two weeks or a month or whatever.
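With express-session, for instance, that policy is roughly one configuration block (a sketch, not a complete setup; the one-week window just mirrors the example above):

```javascript
const session = require('express-session');

app.use(session({
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  rolling: true,                                  // any visit inside the window renews the expiry
  cookie: { maxAge: 7 * 24 * 60 * 60 * 1000 }     // one week
}));
```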
OAuth isn't identity management, it's for authorization.
Each of those platforms does provide its own identity management, but that isn't OAuth.
Personally, I still prefer Persona's privacy-oriented approach to id management, but since Mozilla stopped pushing it, development has slowed quite a bit and widespread adoption will probably never happen.
2. Security best practices are "open for interpretation."
Login with FB, Google, GitHub, Twitter, etc. are different systems, separate from OAuth.
So, what would you use instead?
Can you explain this to me? How would Google be able to access my service's data?
> If you can get away with it, outsource identity management to Facebook / GitHub / Twitter / etc. and just use an OAuth flow.
Questionable advice. At the very least neither of these two are some kind of automatic "best practice" everyone should just follow.
> it can be helpful to rename all those user.email vars to u.e to reduce your file size
Maybe even append an HMAC signature to that parameter, covering the user IP and a timestamp. Might be overkill, but still, be careful with craftable redirects; they might become a vulnerability one day.
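Something along these lines, as a sketch (Node crypto; which fields you sign and the freshness window are policy choices, not anything from the article):

```javascript
const crypto = require('crypto');

function signRedirect(url, ip, secret) {
  const ts = Date.now();
  const sig = crypto.createHmac('sha256', secret)
    .update(url + '|' + ip + '|' + ts)
    .digest('hex');
  return { url, ts, sig };
}

function verifyRedirect(url, ip, ts, sig, secret, maxAgeMs) {
  const expected = crypto.createHmac('sha256', secret)
    .update(url + '|' + ip + '|' + ts)
    .digest('hex');
  const fresh = Date.now() - Number(ts) < maxAgeMs;
  return fresh &&
    sig.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(sig), Buffer.from(expected)); // constant-time compare
}
```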
... well, no. Technically you don't have to. But you almost certainly should.
If anything the advice should be inverted by replacing 'mobile' with 'desktop'
So, most searches, then.
Note, I'm absolutely not saying that a big screen desktop experience isn't inherently better for all but the simplest apps, there's very high probability it is because it's more ergonomic hardware with far more screen real estate.
However since most people want to, and do use mobile in preference to desktop for a huge proportion of tasks now, the design philosophy in most cases needs to flip from 'full functionality for desktop, scale down gracefully for mobile' to 'full functionality for mobile, scale up for desktop, taking advantage of the extra UX potential where possible'.
It's actually a far more optimistic, and creative, approach. Make it, then enhance it for the less popular use case rather than make it, then degrade it for the less popular use case.
If I'm making a Web IDE or a Web Photoshop, it's very unlikely I'll be able to fit all of the functionality that's needed into a tiny mobile screen, and it's also unlikely I'll be able to get it to perform well. And you know what? That's totally fine, because if my demographic is gonna be people with 1920x1200 monitors on a powerful desktop machines, it'll work great. I'll build an amazing experience for desktop, because that's my target demographic.
A lot of enterprise applications are impossible to scale down to mobile as well, due to the sheer amount of customizability and information they provide. I don't know of many enterprise applications that support both mobile and desktop. If you want to support mobile for an enterprise app, you're better off designing a separate mobile variant of your application. This assumes you have the resources to do so, and that there's sufficient interest from your customers such that the decision to have a mobile variant makes sense.
Here's the thing, building a sophisticated application that works well on a tiny phone and scales all the way up to a 30'' monitor is not feasible at all for a lot of teams. I'd challenge you to show me a good example of a sophisticated app (e.g. along the lines of a Web IDE or Web Photoshop) that will scale nicely from a tiny mobile screen all the way up to an awesome 30'' display.
ALWAYS USE CRYPTOGRAPHY for communication! Simply doing HTTP to HTTPS redirects is not sufficient. The origin request must be via HTTPS. Also make sure the app is properly validating the HTTPS connection.
Sorry I had to shout, but I'm growing tired of downloading the latest cool app that is marketed as secure only to find that it doesn't use HTTPS and as a result I can hijack the application UI to ask users for things like their password, credit-card number, etc., all without them having any way to tell if they are being asked by some bad guy.
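For an Express app, enforcing this is roughly one middleware (a sketch; behind a reverse proxy you'd also need `app.set('trust proxy', 1)` for `req.secure` to be meaningful):

```javascript
app.use(function (req, res, next) {
  if (!req.secure) {
    // Redirecting helps, but the first plain-HTTP request is still interceptable...
    return res.redirect(301, 'https://' + req.headers.host + req.originalUrl);
  }
  // ...which is why HSTS matters: browsers that have seen it refuse plain HTTP next time.
  res.set('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  next();
});
```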
1. Use a widely-accepted framework.
2. Implement your application using that framework's methods.
Why a beginner would implement even 1/3 of this list manually is beyond me.
If you're a student or are serious about learning web development (and want to focus on developing in JS), it would make a lot of sense to dedicate your time to actually learning Node and Express, figuring out all of these hairy details and 'manually' implementing the items in Venantius' list.
Or don't figure out the hairy details, because many of his items have proven and documented solutions in the Node context, and learning how to properly use bcrypt and passport isn't too difficult. These libs are a good middle-ground between low-level details and something more out of the box.
I'm curious, why is this good? Sure, sending an email to them so they confirm they have the correct email, but what is the benefit of the verification step? Is it to prevent them from proceeding in case they got the wrong email? It would be nice if this was justified in the article.
I would also add that changing a password should send an email to the account holder to notify them. Then, when changing the email address, the old email address should be notified. This is so a hijacked account can be detected by the account owner.
I don't know much about web development, but shouldn't those resources get cached? Isn't the disadvantage of SPAs that you are unable to link to / share a specific piece of content?
Actually, this is achievable with push states, so isn't a strong argument against single page apps.
I think the problem with SPAs is that they exacerbate memory leaks, since they don't have the typical 'reset' of a browser page load to clear them. Also, a lot of SPAs re-implement browser functionality like scrollbars and the back button without proper cross-browser testing - let alone usability testing.
Conceptually, there's nothing wrong with SPAs, but many of the implementations are shoddy at best with no clear advantage gained.
Server-side, yes; what he means is that you only need to load the content as a client now, not the layout.
> Isn't the disadvantage of SPAs that you are unable to link to / share a specific piece of content?
> Forms: When submitting a form, the user should receive some feedback on the submission. If submitting doesn't send the user to a different page, there should be a popup or alert of some sort that lets them know if the submission succeeded or failed.
I signed up for an Oracle MOOC the other day and got an obscure "ORA-XXXXX" error and had no idea if I should do anything or if my form submission worked. My suggestion would be to chaos monkey your forms because it seems that whatever can go wrong can. Make it so that even if there is an error the user is informed of what is going on and if there's something they can do about it.
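A sketch of the minimum I'd expect on the client (the `#signup` form and `#form-status` element are assumptions):

```javascript
document.getElementById('signup').addEventListener('submit', async function (event) {
  event.preventDefault();
  const status = document.getElementById('form-status');
  try {
    const response = await fetch(this.action, { method: 'POST', body: new FormData(this) });
    status.textContent = response.ok
      ? 'Thanks, your submission was received.'
      : 'Something went wrong (' + response.status + '). Please try again.';
  } catch (err) {
    status.textContent = 'Could not reach the server. Check your connection and try again.';
  }
});
```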
Why is it better to be specific?
Has anyone built a lasting stand-alone business that relies on Facebook, et al for identity management?
Yes there is. It's called Transport Layer Security (TLS).
Smells like an information disclosure highway. I usually 404 all requests that hit "unauthorized" content.
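Roughly like this (Express-style sketch; `findDocument`, `userCanRead`, and the `req.user` set by some auth middleware are all assumptions):

```javascript
// Return 404 whether the resource is missing or merely forbidden, so probes
// can't tell "exists but not yours" apart from "doesn't exist".
app.get('/documents/:id', async function (req, res) {
  const doc = await findDocument(req.params.id);
  if (!doc || !userCanRead(req.user, doc)) {
    return res.status(404).send('Not Found');
  }
  res.json(doc);
});
```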