Things to Know When Making a Web Application in 2015 (venanti.us)
297 points by venantius 656 days ago | hide | past | web | 177 comments | favorite



First of all, thanks for the nice writeup. I hate that comments tend to home in on nitpicking, but so it goes. My apologies in advance.

> If you're just starting out with a new web application, it should probably be an SPA.

Your reasoning for this seems to be performance (reloading assets), but IMHO the only good reason for using a single-page app is when your application requires a high level of interactivity.

In nearly every case where an existing app I know and love transitions to a single-page app (seemingly just for the sake of transitioning to a single-page app), performance and usability have suffered. For example, I cannot comprehend why Reddit chose a single-page app for their new mobile site.

It's a lot harder to get a single-page app right than a traditional app which uses all the usability advantages baked into the standard web.


I fully agree with this. SPAs are for webapps, not websites.

For websites, you should always use progressive enhancement - there is no reason why you couldn't obtain the same performance gains by progressively enhancing your site with reload-less navigation. That's what AJAX and the HTML5 History API are for.
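As a sketch of what that can look like (the class name, URLs, and `X-Partial` header here are all hypothetical): a plain link keeps working without JS, while a small script upgrades it to reload-less navigation.

```html
<a href="/articles/42" class="enhance">Read article</a>
<main id="content">Server-rendered content</main>

<script>
  // Only runs if JS is available; otherwise the link works normally.
  document.querySelectorAll('a.enhance').forEach(function (link) {
    link.addEventListener('click', function (e) {
      e.preventDefault();
      fetch(link.href, { headers: { 'X-Partial': '1' } })
        .then(function (res) { return res.text(); })
        .then(function (html) {
          document.getElementById('content').innerHTML = html;
          history.pushState(null, '', link.href); // keep the URL shareable
        });
    });
  });
</script>
```

A matching `popstate` handler would also be needed so the back button restores previous content.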

Especially don't forget that no, not all your clients support JS. And there's no reason why they should need to, for a website.


> Especially don't forget that no, not all your clients support JS. And there's no reason why they should need to, for a website.

Sure there is, if you care about interactivity, responsiveness, and general user experience. The fact that not all potential clients support JS is probably irrelevant, since that number is probably incredibly small. You might as well say that not all potential clients have access to the Internet.


> Sure there is, if you care about interactivity, responsiveness, and general user experience.

And all of these can be provided just fine using progressive enhancement. This is _not_ an argument for SPAs.

> The fact that not all potential clients support JS is probably irrelevant, since that number is probably incredibly small.

Your use of the term 'probably' suggests to me that you have not run any actual metrics. Here's a short list of some examples:

- Bots and spiders

- Low-powered mobile devices

- Privacy-conscious users who use NoScript

- Terminal-based browser users

- Literally every single one of your users until the .js blob has completed downloading

- ...

Requiring JS for a website is not okay, end of story. We all collectively understood this 10 years ago, and I'm not really sure where along the way this understanding has gotten lost.

EDIT: Forgot that HN has a neutered version of Markdown. Fixed.


> Your use of the term 'probably' suggests to me that you have not run any actual metrics.

What I really meant is that if you have a significant non-JS-supporting visitor base, you almost certainly already know it. There is a huge class of web apps that simply do not have to worry about it. If you know who your customers/visitors are, then you should already know how important it is to support non-JS browsers.

> Requiring JS for a website is not okay, end of story.

It's really not the end of story, and it's not your call except for your own web apps. JavaScript is an integral part of the platform that is the web, just like HTML and CSS. You might as well make the argument that "requiring a web browser for your app is not okay, end of story."

> We all collectively understood this 10 years ago

Things were a lot different 10 years ago, and perhaps your reluctance to update your understanding less than once per decade might explain some of your views. You may as well be saying "making a store that can only be accessed over the Internet is not okay, end of story, we all collectively understood this 25 years ago."


> JavaScript is an integral part of the platform that is the web, just like HTML and CSS

I always think it's interesting when people bring CSS into this, because it's actually a very strong counter to the general argument.

By design a user agent is well within its rights to completely ignore any stylesheets attached to a page, and the idea was always that this should be completely OK, partly because you have no idea what the UA's capabilities are, up to and including whether or not it's doing any kind of visual rendering at all.

This isn't a news flash to everyone, but even those already aware of it seem to think the main reason you'd do this is ADA accommodations, and that those are some kind of minority afterthought. My experience is that this is misguided -- accommodating unusual visitors is important, but I think the biggest benefit might be that the simpler you make things for the client, the less complicated the engineering tends to be, even if you can't import your Java-imitation-of-Smalltalk-inspired application paradigm of choice.

And the "too bad if you didn't have JS!" position seems to basically boil down to the idea that serving a custom client that consumes JSON instead of HTML as the media type for a given URI is Real Progress™.

Keep things as simple as possible. Don't require JS unless what the application needs to do can't be done without it.


You are correct all the way.

The problem is that arguments from both sides are often generalized and it's not necessarily wrong. If you have a wide audience e-commerce enterprise you should probably keep JS use as limited as possible. If you build tools for programmers or highly dynamic projects, JS everywhere is probably fine.


If you build tools for programmers, give me some way of interacting with it other than running your JS in my browser. Otherwise, I'm likely to find it horribly awkward to integrate with my preferred workflows, and I'm unlikely to use your project.


> By design a user agent is well within its rights to completely ignore any stylesheets attached to a page, and the idea was always that this should be completely OK, partly because you have no idea what the UA's capabilities are, up to and including whether or not it's doing any kind of visual rendering at all.

I think that is just an outdated idea that does not apply to highly interactive, long-lived web applications. The user agent is well within its rights to ignore anything it wants, of course, but the user agent is not owed anything by the server. Without CSS or JavaScript, the app may just not work.


> Without CSS or JavaScript, the app may just not work.

When it comes to CSS, generally most apps will continue to work even if the CSS is simply ignored.

Because that was one big point of CSS. Separation of presentation from content and function.

It's funny. That kind of separation of concerns is something developers talk about valuing, but the SPA/webapp craze erodes the user-facing aspect of it, even while developers are very proud to demonstrate they're thinking hard about which specific kind of separated-concern architecture they're working with well away from the boundary where an outside user or UA would care.


Yes, highly interactive applications are an exception. But most things people build on the web (except for games) aren't highly interactive.

A forum, for example, isn't highly interactive, whereas an online spreadsheet editor is.


> It's really not the end of story, and it's not your call except for your own web apps. JavaScript is an integral part of the platform that is the web, just like HTML and CSS. You might as well make the argument that "requiring a web browser for your app is not okay, end of story."

The comparison between JavaScript and a browser is very weak, as a web app absolutely requires a browser while JavaScript is most certainly not required. But, that aside, requiring JavaScript with absolutely no workarounds is a very bad idea.

Consider people who are sight impaired. Screen readers and heavy JavaScript really are not friends, but there is legislation in many countries that makes this basic kind of accessibility a legal necessity.

> Things were a lot different 10 years ago, and perhaps your reluctance to update your understanding less than once per decade might explain some of your views. You may as well be saying "making a store that can only be accessed over the Internet is not okay, end of story, we all collectively understood this 25 years ago."

First, this is rude. Second, your comparison between JavaScript and click-only commerce is even weaker than your previous comparison between a web browser and JavaScript.

But, those points aside, the web was a lot different ten years ago. Back then, I could browse the web with JavaScript turned off and rarely came across sites that didn't work for me. When I did, they always politely had a <noscript> that told me the site needed JS. Today, my experience is the exact opposite, and basic accessibility has faded.

Change is not necessarily positive - no website should require JavaScript. And, I'd argue that most applications should work without JavaScript.


Screen readers and JavaScript get along just fine if a site is coded well. An SPA can be perfectly accessible. The Section 508 refresh is well underway and will essentially be WCAG 2.0 AA. [1]

[1]: http://www.w3.org/WAI/WCAG20/quickref/


> What I really meant is that if you have a significant non-JS-supporting visitor base, you almost certainly already know it. There is a huge class of web apps that simply do not have to worry about it. If you know who your customers/visitors are, then you should already know how important it is to support non-JS browsers.

I was referring specifically to websites, not webapps.

> It's really not the end of story, and it's not your call except for your own web apps. JavaScript is an integral part of the platform that is the web, just like HTML and CSS. You might as well make the argument that "requiring a web browser for your app is not okay, end of story."

That analogy makes absolutely no sense, nor does your comparison to CSS. JS is an entirely different class of dependency than either CSS or web browsers.

JS is Turing-complete, has a significantly bigger attack surface (both in terms of vulnerabilities and in terms of tracking), is much, much harder to optimize for a low-powered device than HTML and CSS, and so on.

Further, the web browser is a necessity to view something on the web. It is well understood, it is easy to optimize, and it is widely deployed. JS is no such 'required dependency' - and you should not make it one, when it can be done just fine without.

> Things were a lot different 10 years ago, and perhaps your reluctance to update your understanding less than once per decade might explain some of your views. You may as well be saying "making a store that can only be accessed over the Internet is not okay, end of story, we all collectively understood this 25 years ago."

The support/compatibility landscape for JS has not changed in those 10 years. The types of non-JS clients are still the same, the reasons for progressive enhancement are still the same. The only thing that has changed is standardization.

"Old" is not the same as "obsoleted". Unless you can come up with a concrete reason as to why this knowledge has been obsoleted throughout the past 10 years, it still applies.

And I'm getting a little tired of these poorly fitting analogies, you're using them as a crutch.


> I was referring specifically to websites, not webapps.

Okay, then yes, we were using terms differently. I consider web apps to be a subset of web sites (which is the standard usage as far as I knew), and thus I thought you were claiming that JavaScript should not be a requirement for any web site, including web apps. But you're using "web site" to mean web sites which aren't web apps, and in that case, I do agree.

I disagree with several claims you make later in that comment, but since I think I've addressed our primary disagreement I will leave it at that.


Thank you for making these points; I'm just disappointed at how often they need repeating for some people. Do you know of a website that explains the reasoning behind not relying 100% on JavaScript for your website?

There's motherfuckingwebsite.com, but the tone of that site is very adversarial; I think one that lays out the reasoning in a noncombative way would be more successful at reaching some of these people.


Hmm. None in particular, though you could just look around for some of the older resources on progressive enhancement - that's what it's called, and it used to be a recommended practice everywhere.

There's also this post, which goes into it somewhat: http://allinthehead.com/retro/367/why-is-progressive-enhance... - also, Brad Frost has some posts about this.


> huge class of web apps

You have a failure of understanding here.

There is a difference between a web site and a web app (although there is blur in the middle, obviously). A web site (e.g. this site, a newspaper, a blog, a search page, a job application form, etc.) should not need JS - and frankly, if you put it in there then you're over-engineering things. Meanwhile, web apps like Google Docs/Office 365, a webmail client, etc. are clearly going to need specific JS.


Come on, really? Some of those are legit (although manageable if you do things right), but how many web apps truly need to worry about accommodating terminal-based browser users?

Unless Stallman is in your target market that's probably something you don't need to consider for any longer than fifteen seconds.


There is a simple argument for making sure that a web site looks good in a terminal based browser (note: web site, not web app).

This is how Google sees your web site.

Sure, there are lots of metadata enhancements, and Google is probably much smarter than it was 10 years ago at extracting data from your JS-garbled web page (as we know, they even try to interpret JavaScript to some extent).

But it still boils down to this: The easier you make it for Google to get at the text of your web site, the better for your page rank.


You'd be surprised how commonly used terminal browsers are.


You should check your stats some time and see how many of your users fail to load your JS, rather than are capable of loading your JS. You might be surprised, especially if you have lots of mobile traffic.


Just out of curiosity, what's the standard way to gather that data? I guess you could put a non-JS request (like an image) and a JS request (like XMLHttpRequest) on each page, and compare numbers.


A `<noscript>` tag seems most intuitive to me, perhaps paired with some server-side Google Analytics.
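As a sketch (paths are made up): the plain pixel fires for everyone, the `<noscript>` one only for non-JS visitors, and the beacon only when JS actually runs, so comparing the counts in your server logs gives a rough non-JS share.

```html
<img src="/pixel.gif" alt="" width="1" height="1">
<noscript>
  <img src="/noscript-pixel.gif" alt="" width="1" height="1">
</noscript>
<script>
  // Fires only once scripts have downloaded and executed.
  if (navigator.sendBeacon) navigator.sendBeacon('/js-pixel.gif');
</script>
```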


Also, many corporate people still have IE forcibly installed on their work laptops. SPAs do generally break in IE.


"if you care about interactivity, responsiveness, and general user experience"

A simple web page (no-js) in a browser covers these three things to a highly acceptable standard. It's when you start to add the other junk that these three become compromised.

For example:

Interactivity: Hyperlinks work great from HTML 1.0 onwards

Responsiveness: Without additional items like JavaScript or even CSS, web pages are incredibly responsive.

General User Experience: The basic web page user experience is EXCELLENT for web pages because it is incredibly basic. Even date pickers can consist of 3 select drop downs (or even a text box; there are plenty of human-date-to-ISO-3306 converters out there)
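As a sketch, wiring three select drop-downs to an ISO 8601 date string (presumably what was meant by ISO-3306) takes only a small helper; the function name here is made up:

```javascript
// Combine year/month/day select values into an ISO 8601 date string.
// Accepts strings or numbers and zero-pads as needed.
function toIsoDate(year, month, day) {
  const y = String(year).padStart(4, '0');
  const m = String(month).padStart(2, '0');
  const d = String(day).padStart(2, '0');
  return `${y}-${m}-${d}`;
}
```

e.g. `toIsoDate('2015', '7', '4')` gives `'2015-07-04'`.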


What is ISO-3306?


LOL, that's the MySQL port! I'm an idiot.


He probably meant ISO 8601.


I think he means that it's best to offer both variants with a graceful fallback for no-js users. This doesn't require sacrificing interactivity, responsiveness or UX for users with JS enabled.


"Graceful fallback" is what came before progressive enhancement, and is considered obsoleted for good reason.

Graceful fallbacks tend to be maintained as a separate version, and neglected over time. Progressive enhancement means taking a basic page and adding snazzy functionality to it - e.g. ajaxified page loads. The latter is what you want.


It does, however, require a potentially large amount of work for a SPA. Granted, server-side rendering for JavaScript libraries like Ember and React is doable now.


The first page load can be entirely server-side generated. Once loaded on the client, you can check (using Modernizr or something similar) whether the user's browser supports the features you need for an SPA; if so, you can replace the server-generated HTML structure with a client-side version. From there, all of your server and client communication proceeds via your favorite SPA framework.


Maybe not all my clients support JavaScript, but substantially all I care about do.

I even take advantage of this on my blog by rigging a field in the comments section to not be visible for users but be visible for spammer bots, so that they will fill it out and the software can auto-reject it.

Works very well.


> Maybe not all my clients support JavaScript, but substantially all I care about do.

See here: http://bradfrost.com/blog/post/fuck-you/

> I even take advantage of this on my blog by rigging a field in the comments section to not be visible for users but be visible for spammer bots, so that they will fill it out and the software can auto-reject it.

That is a reverse CAPTCHA; it works very well, and it has absolutely nothing to do with JavaScript. They are typically implemented using CSS.
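A minimal sketch of such a CSS honeypot (field names are invented); the server rejects any submission where the hidden field is non-empty:

```html
<form method="post" action="/comments">
  <label for="comment">Comment</label>
  <textarea id="comment" name="comment"></textarea>

  <!-- Hidden from humans by CSS; naive bots fill it in anyway. -->
  <div style="position: absolute; left: -9999px;" aria-hidden="true">
    <label for="website">Leave this field empty</label>
    <input id="website" name="website" type="text" tabindex="-1" autocomplete="off">
  </div>

  <button type="submit">Post comment</button>
</form>
```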


Actually I think you are right that this particular one uses CSS - but I may have to upgrade to JS if the spammers get past it.


I enable JS for sites I feel are valuable to me. The first impression is always without JS. If navigation buttons don't work without JS, I just leave.


If you self-select into the group that isn't cared about, then do you expect your leaving to be cared about?

I ask only because I'm puzzled as to the purpose of your reply.


I am too small a group to be cared about. But in my opinion, only poorly designed websites use JS for layout or navigation. And the way back to the search engine is always just one mouse gesture away. JS = code written by someone with not necessarily my best interests in mind, running on my computer.


Honest question - What are you trying to prove by not enabling JS by default?


I think it's a matter of preventing links to Facebook, LinkedIn, and god knows what other services; and possibly saving some bandwidth, CPU time, and RAM.


I doubt she's trying to prove anything. It is probably rather the peace of mind of not having ads stabbing you in the eyes, subscribe popups appearing as you scroll, things wiggling around, or images fading up in a popup instead of loading directly - plus protection against exploits and saved bandwidth, just to name a few benefits.


Aside from the fact that JavaScript is often poorly written, I generally don't want untrusted code running on my computer.


I didn't even use an SPA for the webapp on my main work project. I wrote a lightweight base and send down the page-specific stuff on top of it (so in a way it's lots of little SPAs; not sure if that model has a name), which means I get most of the advantages with few of the downsides.


The title and article are also for web apps, not web sites.


This was one of the points that stood out for me in the article too, also because I strongly disagreed with it. There is nothing inherently wonderful about doing everything client-side.

You get a much more limited range of languages and libraries to work with. You get to use overcomplicated build and deployment processes with ever-changing tools. You get to reinvent the wheel if you do want to use things like URI-based routing and browser history in a sensible way. In many cases you are going to need most of the same back-end infrastructure to supply the underlying data anyway.

Also, it's tough to argue the SPA approach is significantly more efficient if it's being compared with a traditional web app built using a server-side framework where switching contexts requests exactly one uncached HTML file, no uncached CSS or JS in many cases, and any specific resources for the new page that you would have had to download anyway.

Of course some web apps are sufficiently interactive that you do need to move more of the code client-side, and beyond a certain point you might find it's then easier to do everything there instead of splitting responsibilities. I'm not saying everything should be done server-side; I'm saying different choices work for different projects and it is unwise to assume that SPA will be a good choice for all new projects.


This is totally true.

If you're just starting out, chances are you are not (or should not be) making any sort of application where the performance increase from operating as an SPA will even be noticeable compared to a standard server app.

Plus, I'd argue that you won't really understand what a SPA adds (or takes away) unless you are thoroughly familiar with the traditional model.

Finally, at the end of the day traditional apps are just a lot easier to put together even compared to the latest SPA frameworks, especially if your server side tech is something like Ruby or C#. A beginner will be better served by getting something nice up quickly, before attempting to do it the 'purist' way and possibly getting discouraged by the difficulty.


> I cannot comprehend why Reddit chose a single-page app for their new mobile site

They should, just not the way they are making it.

It should server-side render the initial page (not just the home page but any page), and mostly change content through AJAX as you navigate between pages.

SPAs are hard, especially when it comes to usability. One of the biggest issues I see with SPAs is going back: browsers handle back history pretty well for non-SPAs, and replicating similar behavior in JS is not easy.

A Reddit SPA should be like this, with server-side rendering: http://reddit.premii.com/


If you need a backend, starting with an SPA has one strong point: decoupling. You can leave your backend unmodified and start writing that native iOS, Android, or desktop client when you need it.


I often find many ajaxy effects on websites don't work. For example, when I click to expand a comment on Quora, it often fails, and I find it much quicker to just open it in a new page.


If you're new to web application development and security, don't blindly follow the advice of someone else who is also new to web application security.

You should instead have a security audit done with people who have experience in security, so they can help you identify where and why your system is vulnerable. If no one on your team or at your company does, then hire a consultant.

Security is a hairy issue, and no single blog post/article is going to distill the nuances down in an easy to digest manner.


If you are a business, then definitely yes. But the average self-taught developer will not have the resources available to hire a security consultant.

Instead of throwing money at the problem, you can choose to teach yourself more about the subject. We maintain a curated list on GitHub for people interested in learning about application security for this very reason.

https://github.com/paragonie/awesome-appsec

But if you're a company and your operating budget is in the millions of dollars, hire a security consultant!


> If you are a business, then definitely yes. But the average self-taught developer will not have the resources available to hire a security consultant.

True. You don't need to hire consultants to perform a security audit. Ask HN and Security Stack Exchange are good free alternatives to get critiques on your approach.


If you build something open source and it gets incredibly popular, security researchers will also probably come to you. This creates its own problems, of course. (Can't have problems without PR.)


> You should instead have a security audit done with people who have experience in security, so they can help you identify where and why your system is vulnerable. If no one on your team or at your company does, then hire a consultant.

It is easy to write that, and on the face of it, it's hard to argue against.

The trouble is, those audits and consultants don't come cheap, and if you're new at web apps and working on your first one that no-one has ever heard of yet, there is little really essential that you wouldn't find investing the same time reading the usual beginners' guides to security on-line. It's all risk management, and if you even make that effort you'll already be a significantly harder target than many established sites.

As a corporate lawyer once told me when I was getting the very first contract drawn up for a new business, for a simple supplier relationship, he could certainly charge me five figures and write an extensive document protecting the business against every conceivable threat he could imagine involving that supplier, but until the business had actual revenues worth protecting and the deal with that particular supplier was worth a lot more than the legal fees, he wouldn't advise doing it.


Security is never perfect. It is a deterrent, not impenetrable prevention. So sure, to security people, it is never good enough. To everyone else, an easy-to-digest blog post might give them food for thought that would make their work one step better than it was before, resulting in security that is still flawed, but better. So why not just accept the post for what it is - some basic advice to do that one better step.


> So why not just accept the post for what it is - some basic advice to do that one better step.

http://www.nytimes.com/2015/07/10/us/office-of-personnel-man...


Do you honestly feel that the work of beginning web developers falls into the same risk management quadrant as a major governmental database of personal information?


Look up "medium-brow dismissal"


I did. Wasn't sure what I was looking for.

https://www.google.com/search?q=medium-brow+dismissal&ie=utf...

It's clearly a logical failure to suggest that heeding the author's advice would result in a catastrophic security breach.

Not paying attention to security by reason of "I've done a little better than nothing at all" feels like willful negligence.


https://news.ycombinator.com/item?id=5072224

(edit: this is an explanation of what "middle-brow dismissal" is)


If you're going to do something, do it right.

Security is never perfect, and we security people know that there is a tradeoff between security and users.

We don't advocate letting The Perfect be the Enemy of the Good when it comes to security, but by the same token, we want you to implement security properly if you do it.


Since the security advice in the article is bad, this is more a case of the wildly incorrect being the enemy of the reader who takes the advice. Somewhat different.


"Right" is subjective. There is always "good enough for right now, with the tools available, and the budget in hand".


This is something we help with a lot at Tinfoil (https://www.tinfoilsecurity.com). You can read our blog for useful tips and info, but we always recommend actually running our web application scans against your app to look for vulnerabilities. Is it as good as having 'tptacek or someone else from Matasano look at it as a human? Not quite, since humans have more ingenuity. Is it better than reading a blog post and trying to follow 'best practices'? Infinitely.

Don't try to do it yourself.


In general, you may be right, but the security suggestions in this particular post are the same I hear from people "who have experience in security." Also, they often encourage readers to basically go out and find the thing everyone says is the best thing (i.e. "When storing passwords, salt and hash them first, using an existing, widely used crypto library.")

I challenge you to point out specific suggestions in this article which are wrong or misleading, or to point out glaring omissions.


There are quite a few misleading or incorrect suggestions, to pick a few:

Encrypting passwords is not hashing (I think this was fixed after publishing due to comments below)

OAuth is not for authentication

SPAs are not suited to all or even most websites, and are far from being 'king' in any sense.

CDNs have pros and cons, they don't suit everyone.

Localisation does not mean serving assets closer to home; it means translating stuff.

Nothing better than SSL? TLS


Most (all?) encryption functions are also hash functions, they're just special hash functions with the extra property of making it extremely difficult to discover the source. (edit: I realized after posting this that this item is incorrect with regard to the ciphertext, which obviously changes in length in relation to the length of the source, unlike the output of a hash function, which is a fixed length)

If OAuth is not for authentication, someone better tell Google: "Google APIs use the OAuth 2.0 protocol for authentication and authorization." [1]

TLS is basically just the newest version of SSL. The name was changed for legal reasons, so it is an understandable oversight. [2]

The others aren't security related, so I didn't address them.

[1] https://developers.google.com/identity/protocols/OAuth2

[2] http://security.stackexchange.com/questions/5126/whats-the-d...


> Most (all?) encryption functions are also hash functions, they're just special hash functions with the extra property of making it extremely difficult to discover the source.

The special property that encryption functions have compared to hashing functions isn't that it is extremely difficult to discover the source, but rather almost the reverse -- that for every encryption function there exists a function (decryption function) by which you can recover the unique source.

Hashing functions in general do not have an inverse function: while you might be able to recover several possible sources from them (and this might be easy or difficult), you cannot recover the single source, because the space of inputs is larger than the space of outputs, so there can be no unique mapping from outputs back to inputs that would generate them.


> Most (all?) encryption functions are also hash functions, they're just special hash functions with the extra property of making it extremely difficult to discover the source.

There's a fundamental difference between storing a password so that you can read it again (encrypt implies this), and storing it so that you can only verify it, not read it (hash). But a broader criticism of the article is that it is far too sweeping in its judgements based on scant knowledge of the topic - the little mistakes are just indications of that.

It's fine to be a beginner asking questions and the mistakes are not really so important, but it's not really useful to attempt a definitive summary of a field which you know very little about.


OAuth isn't for Identification.

You can use OAuth for authentication, but its specific purpose is authorization.

Google has a separate product for Sign In:

https://developers.google.com/identity/sign-in/web/


Can you explain why OAuth is not for authentication? What does it not do that you expect an authentication system to do? What is fundamentally wrong with every site that allows me to sign in with a github/google/facebook account (via OAuth)?


Contrast that with the following scenario.

Alan has a web application that shows you all funny tweets. In order to see those tweets you must first create an account.

Username/Pass

You pick a username, you enter a password. That combination is attached to Alan WebApp UserID: 12345

Every time you log in with that username and password combination, you get back Alan WebApp UserID 12345.

Google Login

You click the "Login with Google" button.

It redirects you to say "Do you want to associate your Google account with Alan's Web App?"

You click yes.

Google ID: XYZZY is returned. That id is tied to Alan WebApp UserID 12345.

The next time you go to login, Google returns "This is Google ID: XYZZY". Alan WebApp finds the association XYZZY with Alan WebApp 12345.


> What is fundamentally wrong with every site that allows me to sign in with a github/google/facebook account (via OAuth)?

That is an inaccurate statement.

Those sites allow you to log in with your GitHub/Facebook/Google accounts. That isn't OAuth. Those sites also use OAuth in order to let 3rd-party applications access the user's data stored on that system.

Take this Scenario

Alan has a service that finds funny tweets. cpitman wants to use Alan's service, to find his funny tweets.

No OAuth Example:

cpitman gives Alan service his Twitter Username and Password.

Alan service logs into Twitter, and pulls twitter data.

With OAuth:

Alan service opens a request to Twitter asking for twitter data for cpitman

Alan service redirects cpitman to Twitter

Twitter notifies cpitman that Alan Service wants to access twitter data

cpitman agrees

Twitter passes back a token

Alan service uses token to access cpitman twitter data.
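The redirect step of the flow above can be sketched as the start of an OAuth 2.0 authorization-code grant. Everything here is hypothetical — the endpoint, client ID, and scope are made up for illustration:

```python
from urllib.parse import urlencode
import secrets

# Hypothetical endpoint and client credentials, for illustration only.
AUTHORIZE_ENDPOINT = "https://twitter.example/oauth/authorize"
CLIENT_ID = "alan-funny-tweets"

def build_authorize_url(redirect_uri, scope):
    """Build the URL Alan's service redirects cpitman to.  `state` is a
    random nonce the service stores and later compares on the callback,
    tying the response to this request (CSRF defence)."""
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",   # ask for an authorization code
        "client_id": CLIENT_ID,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,
    }
    return AUTHORIZE_ENDPOINT + "?" + urlencode(params), state

url, state = build_authorize_url("https://alan.example/callback", "read:tweets")
assert url.startswith(AUTHORIZE_ENDPOINT + "?")
```

After cpitman agrees, Twitter redirects back to `redirect_uri` with a code, which Alan's service exchanges server-to-server for the access token used in the last step.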


but the email ID he used to register at Twitter is also one of resources associated with his account and can be accessed as Twitter Data?

(beginner here, trying to understand why not use OAuth for Identification/Authentication)


People usually use OpenID for that bit and OAuth for the authorisation to use the third party APIs as the customer. There's nothing horribly wrong with third-party signin if it suits you and for smaller projects however it does limit your relationship with customers and tie you in to third party services which might be charged for or shut down at any time, so it's not ideal for many websites. It depends on your requirements.


My first comment on this article was pointing out that Facebook Login et al isn't OAuth.

I am guilty of not editing that one comment to hang my other comment off of.


If you can afford it, buy a proven security solution. For example use an IBM Datapower or ISAM appliance (or similar from F5). Enterprises will choose something like this to secure their many internal web applications.


Having worked with such solutions before... the pricing can come in higher than having a professional actually review your code and improve it.

The end result of better code is far better than a magical appliance.


I wouldn't take any advice on web dev from someone whose simple blog looks like this http://i.imgur.com/uHi0g0Z.png

brb compiling linux to JS to render my blog post.


The footer clearly says "Proudly published with Ghost." It's not his own doing.


That raises the question... why is Ghost loading all this?

I had the impression that it was supposed to be a simple, bloat-free blogging platform.


This is a bit of a pet peeve of mine, but that banner image is 10 megabytes; it can be compressed down to 2 MB without any perceptible loss of quality. Heck, it could probably be shrunk further if you can accept a bit more loss, because most of the image is blurry and noisy anyway.

Here's a compressed version: https://www.dropbox.com/s/bw606t7znouxpj1/photo-141847963101...


Good catch.

That is ironic on so many levels.

I mean there is even a section on "UX: Bandwidth"...

Maybe the author should brush up on image compression best practices and consider adding a subsection on images and other media.

EDIT: Realized my previous wording was probably a bit too harsh considering the author is still relatively new to web development.


Hard not to catch it when it's loaded in progressively on a 20 Mbps connection :P. Websites are way too fat these days.


10 meg?

4k is all the rage now on mobile.

You've just taken a chunk out of someone's download quota with that nice background!


I've been thinking of developing a proxy that compresses responses and forwards them to you, specifically to handle sites not optimized for mobile. There are already solutions like this but I think a self hosted version is what people need.


This one has been around a long time:

http://www.khelekore.org/rabbit/


Thanks! That will definitely help. I was thinking of using Squid proxy and writing some custom handlers, but this helps a lot.


Ironically the original banner image doesn't even load for me on iOS Safari.


> If you can get away with it, outsource identity management to Facebook / GitHub / Twitter / etc. and just use an OAuth flow.

The thing that everybody seems to overlook here: this has serious legal consequences.

You are demanding of your users that they agree to a set of TOS from a third party, that does not have either their or your best interests at heart, and that could have rather disturbing things in their TOS - such as permission to track you using widgets on third-party sites.

Not to mention the inability to remove an account with a third-party service without breaking their authentication to your site as well.

Always, always offer an independent login method as well - whether it be username/password, a provider-independent key authentication solution, or anything else.

> When storing passwords, salt and hash them first, using an existing, widely used crypto library.

"Widely used" in and of itself is a poor metric. Use scrypt or bcrypt. The latter has a 72 character input limit, which is a problem for some passphrases, as anything after 72 characters is silently truncated.
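For illustration, here is a salted hashing sketch using Python's stdlib `hashlib.scrypt`. The cost parameters (`n`, `r`, `p`) and the 16-byte salt are illustrative assumptions; in production, use a maintained password-hashing library and tune the cost factors for your hardware:

```python
import hashlib, hmac, os

# Illustrative scrypt cost parameters -- tune for your own hardware.
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1)

def hash_password(password: str) -> bytes:
    """Return salt || scrypt digest.  A fresh random salt per password
    defeats precomputed rainbow tables."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    """Recompute the digest with the stored salt and compare in
    constant time -- the password itself is never stored."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, digest)

stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", stored)
assert not verify_password("hunter2", stored)
```

Note that this stores only the salt and digest, so the original password can be verified but never read back — the distinction the thread is arguing about.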


Question about JavaScript and CDN for mobile devices. Should I use a CDN for standard libraries or should I just concat and minify all my JavaScript?

The concat and minify seems better as that reduces the JavaScript libraries and code load to a single HTTP request.

A CDN seems nice in theory. Reality is: Does the browser have the library cached? Is the library cached from the CDN that I'm using? The browser is making more HTTP requests, which sometimes takes more time to request than to download the library.

I agree that using CDNs is a good speed boost. I'm trying to figure out if hoping for a library cache hit outweighs the cost of a library cache miss.


Just to clarify: general CDNs tend to be a good idea if you are having latency issues.

Standard libraries for major JavaScript projects, all served from a single shared CDN (like Google, MaxCDN, cdnjs, etc.), also tend to be called "CDNs", but this is a little confusing. Yes, these shared files are often stored on a CDN, but that's not the major benefit of these shared hosts. The main benefit is supposed to be that, if everyone references the same copy of jQuery on one of these shared hosts, then when visitors hit other sites, jQuery will already be in their browser cache.

This rarely works in practice. The URLs to these shared libraries are fragmented: multiple shared services, multiple version numbers, HTTPS vs. HTTP. The net result is that the probability that someone visiting your site has a copy of the exact same resource referenced via the exact same URL is very low.

With the overhead of having to do a DNS lookup, a TCP connection, and TCP slow start, it's rarely worth it. Just concat/minify into your own block of JS served from your own systems. Shared JS hosts/CDNs are a terrible and annoying hack, all in an attempt to save 50KB or so.


Ugh, I will never understand this reasoning. The overheads of fetching a library from a CDN apply only to the first request. Why do you consider this to be an important factor?

Also can you provide any stats/citation that cache hit probability on first request is in fact very low?


First, research on hit factor:

https://zoompf.com/blog/2010/01/should-you-use-javascript-li... http://statichtml.com/2011/google-ajax-libraries-caching.htm... https://github.com/h5bp/html5-boilerplate/pull/1327#issuecom...

I have more recent data on this that is not yet published from my day-to-day work at a web performance company. The picture has improved somewhat, but not significantly.

You are correct that "overheads of fetching a library from a CDN are applicable to the first request." The problem is that, because of the fragmentation, every website is asking you to hit a different URL, so every request is a "first request". You aren't leveraging the browser cache.

Most sites are already serving you site-specific JS anyway over a warm connection (even more so with HTTP/2), so there is even less benefit to going to a 3rd-party host to potentially avoid downloading a few dozen kilobytes. Couple that with the security implications of injecting 3rd-party code into your page, and it's just plain silly and wasteful to do this for a modern website.


jQuery CDN cache hit rate is 99.8%[0], Google CDN numbers should be comparable. So yes, you are leveraging browser cache for most popular libraries.

Also I was talking from the subsequent requests from the same client.

[0]https://www.maxcdn.com/blog/maxscale-jquery/


You are confusing browser cache hits with a CDN/edge server cache hit. jQuery, or MaxCDN for that matter, has no idea what the "hit rate" of a browser cache is.

This sentence should be a big clue: "We usually average around 15,000 hits per second to our CDN with 99.8% of those being cache hits."

"We" in that sentence is Kris Borchers speaking collectively about the jQuery Foundation, talking to a MaxCDN interviewer. But he is not talking about the browser cache. He can't be, because jQuery, or MaxCDN for that matter, has no idea what the "hit rate" of a browser cache is.

Example: If I go to 1.example.com, which links to maxcdn.com/jquery.js, and then later I go to site 2.example.com, which links to the same maxcdn.com/jquery.js file, my browser doesn't send any requests! That is the entire point of far-future caching! I was able to use the version of jquery that was in my browser cache. However MaxCDN, or jQuery for that matter, have no idea this hit took place.

By the same token, if I go to 1.example.com, which links to maxcdn.com/jquery.js, and then later I go to site 2.example.com, which links to a different URL like maxcdn.com/master/jquery.js, my browser has a cache miss. /master/jquery.js is not in my browser's cache; I've never been there. MaxCDN, or jQuery for that matter, has no idea that I requested something different than before.
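The URL-keyed behavior described above can be modeled with a toy sketch — a browser cache is, to a first approximation, a map from the exact URL to a stored response, so the "same" jQuery at two different URLs is two separate entries:

```python
# Toy model of a browser cache: entries are keyed by the exact URL.
cache = {}

def fetch(url):
    """Return (body, from_cache)."""
    if url in cache:
        return cache[url], True          # cache hit: no request is sent
    body = f"<contents of {url}>"        # stand-in for a network fetch
    cache[url] = body
    return body, False

_, hit = fetch("https://maxcdn.example/jquery.js")
assert not hit                            # first visit: miss
_, hit = fetch("https://maxcdn.example/jquery.js")
assert hit                                # same URL from another site: hit
_, hit = fetch("https://maxcdn.example/master/jquery.js")
assert not hit                            # identical bytes, different URL: miss
```

This is why URL fragmentation (versions, hosts, schemes) defeats the shared-library cache argument: the cache key is the URL, not the file contents.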

CDN cache hit rate has nothing to do with browser caches. In fact, people who are not you being able to detect whether something is in your browser cache is a massive security problem. See my talk at BlackHat in 2007, many of Jeremiah Grossman's talks at BlackHat (2006, 2007, 2009), or go all the way back to Ed Felten's work on using timing side channels against browser caches.

In the industry, "99.8%" cache hit on a CDN's edge server means that 99.8% of the time the edge server can handle the request, instead of the request having to go all the way to the origin. They have no way of knowing how often a random person on the internet loads a random file from their browser cache.

This whole thing proves my point: calling shared, common, publicly hosted copies of popular JS libraries "CDNs" or "JavaScript CDNs" just confuses people. CDNs are about reducing latency. Shared JS libraries are about trying to avoid requests altogether by leveraging the browser cache, and they are largely ineffective.


Maybe they are talking about 200 vs 304 caches.

A browser can be told to revalidate files, asking the server whether the content has changed using the "If-Modified-Since" and "If-None-Match" headers. This way, the server will return 304 and empty content if the file has not changed, or 200 and the file if it is new or has changed.
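A sketch of that conditional-GET logic on the server side (simplified — real servers parse dates and weak/strong validators rather than comparing raw strings):

```python
def respond(request_headers, etag, last_modified, body):
    """Minimal conditional-GET decision: return (status, body).
    If the client's cached validators still match, answer 304 with
    no body, so only headers cross the wire."""
    if request_headers.get("If-None-Match") == etag:
        return 304, b""
    if request_headers.get("If-Modified-Since") == last_modified:
        return 304, b""
    return 200, body   # validators stale or absent: send the full file

# Revalidation with a matching ETag: empty 304 response.
status, body = respond({"If-None-Match": '"v1"'}, '"v1"',
                       "Tue, 07 Jul 2015 00:00:00 GMT", b"jquery source")
assert status == 304 and body == b""

# Stale ETag: full 200 response.
status, body = respond({"If-None-Match": '"v0"'}, '"v1"',
                       "Tue, 07 Jul 2015 00:00:00 GMT", b"jquery source")
assert status == 200 and body == b"jquery source"
```

Note a 304 still costs a round trip, which is why far-future caching (no request at all) is the behavior the "shared CDN" argument actually depends on.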


You are right, I was confusing browser cache with CDN cache hit. In their interview they state that:

"Our CDN is a huge part of the jQuery ecosystem. We usually average around 15,000 hits per second to our CDN with 99.8% of those being cache hits. This provides much better performance for those using our CDN as their visitors can use a cached copy of jQuery and not have to download it, thus decreasing load time."

Somehow because of that I assumed that they had analysis done to understand browser caching rates. My bad.

EDIT: Huh, funny thing. What exactly is the origin server for the CDN jQuery library when the request URI is https://code.jquery.com/jquery-2.1.4.min.js ?

What would be the point for going to origin server at all if versioned jquery libraries are static and do not change? Edge locations are for all intents and purposes an origin server. I think that the sibling comment may be more accurate in its assumption: 99.8% cache hit most probably are 200 vs 304 responses.

END OF EDIT

Nevertheless, I've spent more time researching the issue of a random person loading a JavaScript library from the browser cache.

Usage on top-10k websites: Google JS CDN is used on 23.5% [1], jQuery CDN on 4% [4], CDNJS on 4% [2], jsDelivr on 0.5% [3], OSS CDN on 2% [5].

Supposedly, the set of websites that use a particular JS CDN is disjoint from the set using a competing CDN. Thus we can estimate total JS CDN use at 30% of the top 10k websites, plus literally millions of websites scattered around the internet.

As JS library popularity follows a power-law distribution, and library cache headers are set for a year or longer, I would suggest that the probability of the top 100 JS libraries already being cached in a browser is really high.

Statistical data hints that JS cdns are in fact quite efficient at reaching their goals, but certainly doesn't prove anything conclusively.

[0]https://www.maxcdn.com/blog/maxscale-jquery/

[1]https://trends.builtwith.com/cdn/AJAX-Libraries-API

[2]https://trends.builtwith.com/cdn/CDN-JS

[3]https://trends.builtwith.com/cdn/jsDelivr

[4]https://trends.builtwith.com/cdn/jQuery-CDN

[5]https://trends.builtwith.com/cdn/OSS-CDN


Come on, man. 5 year old data? 20-30% of top websites are currently using Google's CDN, so it seems like you're wrong about the picture not having improved significantly. You really think people don't have DNS for ajax.googleapis.com already? And HTTPS is available (and default), so you really think somebody is gonna hack Google to serve you bad JS? You also conveniently ignore the benefits of domain sharding and the fact that the CDN will serve the files faster and with lower latency than almost any setup. And that HTTP/2 mitigates the cost of not concatenating your scripts.


CDN is a way to go unless you have some very specific circumstances, like increased security requirements or lack of CDN edge location near the majority of your users.

jQuery CDN has something like 99.8% cache hits. And even if the browser doesn't have the library cached, it will have it cached on all subsequent requests. Additional roundtrips will be needed on first page load only. Take into consideration that as soon as you make even a small change to your JS files, the whole minified and bundled JavaScript will need to be redownloaded.


Also, pulling anything from a CDN basically means that the CDN operator (or anyone who manages to hack it) can spy on or alter communication between your users and your server.


>When storing passwords, encrypt them

Nopenopenopenopenope!

This is terrible advice. Don't do this. Remember what happened when Adobe did this?


I suspect, given the reference to sending verification emails, that hashing was what was intended here. As with the use of identity instead of authorization. To be clear, encryption implies you can retrieve the stored value later, while hashing is intended to be one-way.


I've fixed the language in question. Thanks all for the catch here.


The full quote is:

"When storing passwords, encrypt them first, using an existing, widely used crypto library. If you can get away with it, outsource identity management to Facebook / GitHub / Twitter / etc. and just use an OAuth flow."

Can you elaborate on what's so "nope" about that advice? Are you saying one shouldn't encrypt passwords?


Of course one shouldn't encrypt them; one should salt and hash them, with a cryptographically secure hash such as bcrypt or scrypt.

If you use a batteries-included web framework, this is already done for you. If you do not, you had better understand the tradeoffs of redeveloping those parts.


I imagine the OP probably meant that and simply wrote the wrong thing in the post. I probably wouldn't have noticed it was the wrong wording if not for this comment chain.


The only acceptable advice in this situation is "use bcrypt". Vague stuff about "hashing or encrypting" is not good enough.


Wait, what about "scrypt"? Maybe the only acceptable advice changed in the last hour? :-)


Bcrypt is a very easy to use hashing tool that exists in all popular languages. It is the best choice, and the easiest to implement.


And if you're salting them yourself, you're doing things wrong; use a good library that takes care of these little crypto details for you.


That is correct. You should not store passwords in any form ... even encrypted.

You should either store only the salted hash value, or outsource identity management to a third party who knows not to store the user's passwords. :)


I think they're assuming it can be decrypted instead of one-way.


Encryption isn't encryption if it's one-way.


Thanks, I was trying to rephrase it, similar to people using IM in this thread but being off-base with its actual meaning.


Encryption means encrypting information which can later be decrypted. If you are talking about a one-way transform of data that you can never retrieve the original information from the result of the transformation, that is called hashing. We hash passwords, not encrypt them.


I think the OP meant "hash them" with something like bcrypt.


Which raises the question: should you follow web application advice regarding security from someone who mistakenly uses the word "encrypt" when they (actually or unintentionally) mean "hash"?


It's an easy slip of the tongue, either if you don't know very much, or if you've spent too much time reading how the bcrypt hash works internally - https://www.usenix.org/legacy/events/usenix99/provos/provos_...


Yeah, yeah, I think I would. Someone's credibility as a programmer isn't destroyed in my mind because they say encrypt to describe hashing, especially if they are in fact, hashing and not encrypting and understand why.

This is something I've seen a lot of developers act elitist about, and it's always rubbed me the wrong way.


It's the same in everything. I'd say passwords are "encrypted" in my systems (even though they're salted/hashed).

It's like the gun nuts that flip out when someone calls it an assault rifle or a clip instead of a magazine.

What can you do, people like showing off how "smart" they are.


Technical jargon is a precise language because it communicates precise concepts. People who do not use it correctly likely have serious misapprehensions and their advice is automatically suspect. Excusing the misuse helps no one.


I hope that at least some services will eventually consider that for sites that aren't storing valuable data, Passwordless (i.e. emailed, etc. one-time token) and long-lived session tokens are better than even touching passwords.
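A minimal sketch of such a passwordless flow, assuming an in-memory store and a hypothetical 15-minute validity window (a real service would persist the token hashes in a database and email the token as a login link):

```python
import hashlib, secrets, time

TOKEN_TTL = 15 * 60   # seconds a login link stays valid -- an assumption
pending = {}          # token-hash -> (email, expiry); a DB table in real life

def issue_login_token(email):
    """Create a one-time token to email to the user.  Only its hash is
    stored, so a leaked database doesn't leak usable login links."""
    token = secrets.token_urlsafe(32)
    key = hashlib.sha256(token.encode()).hexdigest()
    pending[key] = (email, time.time() + TOKEN_TTL)
    return token

def redeem_login_token(token):
    """Return the email if the token is valid and unexpired, else None.
    The entry is removed either way, making the token single-use."""
    key = hashlib.sha256(token.encode()).hexdigest()
    email, expiry = pending.pop(key, (None, 0))
    if email is not None and time.time() < expiry:
        return email
    return None

t = issue_login_token("user@example.com")
assert redeem_login_token(t) == "user@example.com"
assert redeem_login_token(t) is None   # second use fails
```

On successful redemption the service would then set its usual session cookie, so passwords never enter the picture.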


Going passwordless with long-lived sessions requires more complex session management though. If you don't time-out sessions then you increase the cumulative probability of a live session eventually being hijacked through XSS, MITM (coffee shop, rogue wifi), or malware etc.

And if you want the user to be able to perform sensitive operations (edit their personal details for example) then you'll have to ask for a OTP or email verification every time. These methods tend to be higher friction than a password box.


I'm not sure I see the XSS vuln, or rather, a site might have an XSS vuln and long sessions would make it worse, but I don't see long sessions causing XSS. MitM would be possible without TLS, but not with it. Malware is always a threat, but if it can read cookies it might be able to read cached passwords etc. too.

Sure you don't want to constantly bug the user but not every site needs to do that. Especially for sporadically-used sites, "receiving email" could be less of a pain than keeping track of passwords.

A session can be long-lived without being indefinite. We might decide that any authenticated site visit within the last week is new enough not to repeat the passwordless process, or we might say two weeks or a month or whatever.


Are there actually any sites that do this? It is somewhat interesting.


> If you can get away with it, outsource identity management to Facebook / GitHub / Twitter / etc. and just use an OAuth flow.

OAuth isn't identity management, it's for authorization.

Each of those platforms does provide its own identity management, but that isn't OAuth.


The OAuth based OpenID Connect is for identity management.

http://openid.net/connect/

Personally, I still prefer Persona's privacy-oriented approach to id management, but since Mozilla stopped pushing it, development has slowed quite a bit and widespread adoption will probably never happen.

https://www.mozilla.org/en-US/persona/


I took that to mean use both identity management as well as OAuth.


1. Why use OAuth unless you want to grant 3rd parties access to your service's data, on behalf of your customers?

2. Security best practices are "open for interpretation."


While OAuth isn't "for" authentication, everyone uses it that way by "authorizing" access to "view your email address" which is as good as authenticating your email address.


Can you link to implementations that use OAuth in such a manner?

Login with FB, Google, GitHub, Twitter, etc. are different systems, separate from OAuth.


GitHub How To: https://developer.github.com/guides/basics-of-authentication... And the OpenID Connect standard (essentially OAuth V2 + identity service): http://openid.net/connect/


> Why use OAuth unless you want to grant 3rd parties access to your services data, on behalf of your customers?

So, what would you use instead?


> Why use OAuth unless you want to grant 3rd parties access to your services data, on behalf of your customers?

Can you explain this to me? How would Google be able to access my service's data?


I think he may have confused OAuth with OpenID (which are often used in a complementary fashion).


> All assets - Use a CDN

> If you can get away with it, outsource identity management to Facebook / GitHub / Twitter / etc. and just use an OAuth flow.

Questionable advice. At the very least neither of these two are some kind of automatic "best practice" everyone should just follow.

> it can be helpful to rename all those user.email vars to u.e to reduce your file size

Or maybe you should use less JavaScript, so the length of your variable names does not matter.


One thing to note is the login redirect. Please be sure that the redirect parameter is a local URI, and don't redirect the user to another site.

Maybe even append an HMAC signature to that parameter with the user's IP and a timestamp. Might be overkill, but still, be careful with craftable redirects; they might become a vulnerability one day.
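One way to sketch that HMAC idea — the secret, field layout, and 5-minute freshness window here are all illustrative assumptions, not a spec:

```python
import hashlib, hmac, time

SECRET = b"server-side secret"  # placeholder; load from config in practice

def sign_redirect(url, ip):
    """Sign url|ip|timestamp so the redirect target can't be swapped."""
    ts = str(int(time.time()))
    mac = hmac.new(SECRET, f"{url}|{ip}|{ts}".encode(), hashlib.sha256).hexdigest()
    return url, ts, mac

def check_redirect(url, ip, ts, mac, max_age=300):
    """Recompute the MAC and require both a match and freshness."""
    expected = hmac.new(SECRET, f"{url}|{ip}|{ts}".encode(), hashlib.sha256).hexdigest()
    fresh = time.time() - int(ts) <= max_age
    return fresh and hmac.compare_digest(mac, expected)

url, ts, mac = sign_redirect("/dashboard", "203.0.113.7")
assert check_redirect("/dashboard", "203.0.113.7", ts, mac)
assert not check_redirect("/evil", "203.0.113.7", ts, mac)       # tampered URL
assert not check_redirect("/dashboard", "198.51.100.1", ts, mac) # different IP
```

Binding to the IP can break for users behind mobile carriers or proxies, which is part of why this may be overkill for many sites.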


True, open redirects can cause serious security issues. Do not have a route that simply does the equivalent of `return redirect(params["url"])`, either at login or anywhere else.
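A minimal allow-list check for local redirects might look like this sketch — the backslash check is an extra guard against browsers that treat `/\` like `//`:

```python
from urllib.parse import urlparse

def is_local_redirect(target):
    """Accept only same-site paths like '/account'; reject absolute URLs,
    scheme-relative '//evil.example' tricks, and backslash variants."""
    if not target.startswith("/") or target.startswith("//"):
        return False
    parsed = urlparse(target)
    return parsed.scheme == "" and parsed.netloc == "" and "\\" not in target

assert is_local_redirect("/account/settings")
assert not is_local_redirect("https://evil.example/phish")
assert not is_local_redirect("//evil.example/phish")
assert not is_local_redirect("/\\evil.example")
```

Only if this check passes should the login flow honor the `?next=` / `?url=` parameter; otherwise fall back to a fixed landing page.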


"You don't have to develop for mobile..."

... well, no. Technically you don't have to. But you almost certainly should.


Was surprised to see this too, considering the article's title is Things to Know When Making a Web Application in 2015

If anything the advice should be inverted by replacing 'mobile' with 'desktop'


Yes, even more so now, since it hurts your Google page rank to not make your app mobile-friendly.


I believe it only hurts your google rank for searches conducted on a mobile device.

So, most searches, then.


In 2015, my primary use of web apps "for mobile" is to find the link that says "full site".


That's exactly why the inverted advice needs to be heeded, so that that sucky experience, which does still occur daily with a lot of sites and apps, goes away by default.

Note, I'm absolutely not saying that a big screen desktop experience isn't inherently better for all but the simplest apps, there's very high probability it is because it's more ergonomic hardware with far more screen real estate.

However since most people want to, and do use mobile in preference to desktop for a huge proportion of tasks now, the design philosophy in most cases needs to flip from 'full functionality for desktop, scale down gracefully for mobile' to 'full functionality for mobile, scale up for desktop, taking advantage of the extra UX potential where possible'.

It's actually a far more optimistic, and creative, approach. Make it, then enhance it for the less popular use case rather than make it, then degrade it for the less popular use case.


I strongly disagree. There's tons of people that talk about how everything should be mobile first, but that's just not feasible for a lot of web applications. Stop trying to shove mobile into everything.

If I'm making a Web IDE or a Web Photoshop, it's very unlikely I'll be able to fit all of the functionality that's needed into a tiny mobile screen, and it's also unlikely I'll be able to get it to perform well. And you know what? That's totally fine, because if my demographic is gonna be people with 1920x1200 monitors on powerful desktop machines, it'll work great. I'll build an amazing experience for desktop, because that's my target demographic.

A lot of enterprise applications are impossible to scale down to mobile as well, due to the sheer amount of customizability and information they provide. I don't know of many enterprise applications that support both mobile and desktop. If you want to support mobile for an enterprise app, you're better off designing a separate mobile variant of your application. This assumes you have the resources to do so, and that there's sufficient interest from your customers such that the decision to have a mobile variant makes sense.

Here's the thing, building a sophisticated application that works well on a tiny phone and scales all the way up to a 30'' monitor is not feasible at all for a lot of teams. I'd challenge you to show me a good example of a sophisticated app (e.g. along the lines of a Web IDE or Web Photoshop) that will scale nicely from a tiny mobile screen all the way up to an awesome 30'' display.


As a web application developer in 2015+, I would argue that developing with mobile in mind should be required, or at least taken into consideration. At bare minimum, have a pre-deployment test: is my app unusable / does this look terrible on the most popular iPhone/Android?


And that it passes Google's mobile check, so it doesn't get penalized in SERPs.


For mobile apps that use a WebView and/or have the capability to execute JavaScript or any other language provided by a network-available resource, I'd like to add:

ALWAYS USE CRYPTOGRAPHY for communication! Simply doing HTTP-to-HTTPS redirects is not sufficient. The original request must be via HTTPS. Also make sure the app is properly validating the HTTPS connection.

Sorry I had to shout, but I'm growing tired of downloading the latest cool app that is marketed as secure only to find that it doesn't use HTTPS and as a result I can hijack the application UI to ask users for things like their password, credit-card number, etc., all without them having any way to tell if they are being asked by some bad guy.


How to make a reasonably decent webapp in 2015 without having to worry about bcrypt and open redirects and such:

1. Use a widely-accepted framework.

2. Implement your application using that framework's methods.

Why a beginner would implement even 1/3 of this list manually is beyond me.


I think this is more of a question of what kind of project or team you are working on, not one of experience in web development. Because it seems that you're suggesting beginners use Sails or Meteor (if focusing on JS), which are great and allow for rapid prototyping, but they and other 'high-level frameworks' that implement these methods for you tend to be very opinionated with important details of developing for the web abstracted away.

If you're a student or are serious about learning web development (and want to focus on developing in JS), it would make a lot of sense to dedicate your time to actually learning Node and Express, figuring out all of these hairy details and 'manually' implementing the items in Venantius' list.

Or don't figure out the hairy details, because many of his items have proven and documented solutions in the Node context, and learning how to properly use bcrypt and passport isn't too difficult. These libs are a good middle-ground between low-level details and something more out of the box.


>> When users sign up, you should e-mail them with a link that they need to follow to confirm their email

I'm curious, why is this good? Sure, sending an email to them so they confirm they have the correct email, but what is the benefit of the verification step? Is it to prevent them from proceeding in case they got the wrong email? It would be nice if this was justified in the article.

I would also add, that changing a password should send an email to the account holder to notify them. Then when changing the email address, the old email address should be notified. This is so a hijacked account can be detected by the account owner.


This may not be the writer's reason, but I tend to get people's e-mail accidentally. One time someone signed up an iTunes account with my email, then kept requesting new verification emails. Most of these automated emails do not have a "this isn't me" link, since they assume that the person who signed up and the person getting the email are the same.


You need to verify that they actually own (or at least have access to) that email address, otherwise all sorts of shenanigans could be had.


Primarily, verified email is the way everyone does password resets.


> The key advantage to an SPA is fewer full page loads - you only load resources as you need them, and you don't re-load the same resources over and over.

I don't know much about web development, but shouldn't those resources get cached? Isn't the disadvantage of SPAs that you are unable to link to / share a specific piece of content?


> Isn't the disadvantage of SPAs that you are unable to link to / share a specific piece of content?

Actually, this is achievable with push states, so isn't a strong argument against single page apps.

I think the problem with SPAs is that they exacerbate memory leaks, since they don't have the typical 'reset' of a browser page load to clear them. Also, a lot of SPAs re-implement browser functionality like scrollbars and the back button without proper cross-browser testing - let alone usability testing.

Conceptually, there's nothing wrong with SPAs, but many of the implementations are shoddy at best with no clear advantage gained.


> but shouldn't those resources get cached?

Server-side, yes. What he means is that as a client you now only need to load the content, not the layout.

> Isn't the disadvantage of SPAs that you are unable to link to / share a specific piece of content?

If it's done well, no, since you can dynamically update the URL with JavaScript.


One big omission from this list: gzip. Before you ever think about uglify, make sure you're gzipping your textual assets.
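A quick illustration of why: markup and scripts are highly repetitive, so gzip routinely shrinks textual assets dramatically — a sketch with made-up page content:

```python
import gzip

# Repetitive text, standing in for typical HTML/CSS/JS.
page = ("<div class='comment'>lorem ipsum dolor sit amet</div>\n" * 200).encode()
compressed = gzip.compress(page)

# For input this repetitive, the saving is enormous.
assert len(compressed) < len(page) // 5
assert gzip.decompress(compressed) == page
```

In practice you enable this in the server config (e.g. nginx `gzip on;` or Apache `mod_deflate`) rather than compressing by hand, and it stacks with minification rather than replacing it.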


Rails has most of this out of the box. Use Rails :)


I think, even Django also has most of this stuff.


I like this list.

> Forms: When submitting a form, the user should receive some feedback on the submission. If submitting doesn't send the user to a different page, there should be a popup or alert of some sort that lets them know if the submission succeeded or failed.

I signed up for an Oracle MOOC the other day and got an obscure "ORA-XXXXX" error, with no idea whether I should do anything or if my form submission had worked. My suggestion would be to chaos monkey your forms, because it seems that whatever can go wrong will. Make it so that even when there's an error, the user is informed of what is going on and whether there's something they can do about it.
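One cheap way to guarantee that: funnel every submission result through a single function that turns whatever the server returned into something actionable, so a raw error code can never reach the user. A sketch (the messages and function name are my own):

```javascript
// Map an HTTP status from a form submission to user-facing feedback,
// so internal error codes like "ORA-XXXXX" never leak into the UI.
function submissionMessage(status) {
  if (status >= 200 && status < 300) return 'Saved, thanks!';
  if (status === 400) return 'Please check the highlighted fields and try again.';
  if (status >= 500) return 'Something went wrong on our end. Please retry in a minute.';
  return 'Submission failed (HTTP ' + status + '). Please try again.';
}
```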


> Avoid lazy transition calculations, and if you must use them, be sure to use specific properties (e.g., "transition: opacity 250ms ease-in" as opposed to "transition: all 250ms ease-in")

Why is it better to be specific?


I was surprised at the lack of mention of SVG. The biggest change to my working habits (apart from working in an SPA) is that every non-photographic image I use is now SVG.


Don't be surprised. See this: https://news.ycombinator.com/item?id=9866461


>If you can get away with it, outsource identity management to Facebook / GitHub / Twitter / etc. and just use an OAuth flow.

Has anyone built a lasting stand-alone business that relies on Facebook, et al for identity management?


Tinder? It was Facebook-only when I tried it. Valued at > $1bn.

http://www.businessinsider.com/jmp-securities-analyst-note-o...


When using an SPA, validate the CORS origin instead of allowing *.
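Concretely: check the request's Origin header against an explicit allowlist, and echo it back in Access-Control-Allow-Origin only on a match, rather than sending a wildcard. A sketch (the origins and function name are placeholders for your own deployment):

```javascript
// Only origins on this allowlist may make cross-origin requests.
const ALLOWED_ORIGINS = new Set([
  'https://app.example.com',
  'https://admin.example.com',
]);

// Return the value to send in Access-Control-Allow-Origin for a given
// request Origin header, or null to omit the header (denying CORS).
function corsOriginFor(requestOrigin) {
  return ALLOWED_ORIGINS.has(requestOrigin) ? requestOrigin : null;
}
```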


Internationalization?


>For all of its problems with certificates, there's still nothing better than SSL.

Yes there is. It's called Transport Layer Security (TLS).


web2py is a batteries-included framework that has all of this and much more done for you, tested and proven over many years.


I was thinking of making a web app for my college project. Can you please help me with some inspiration for my project? :P


Hey, uncss supports dynamically added stylesheets too (via running it through PhantomJS).


> sent to a page where they can log in, and after that should be redirected to the page they were originally trying to access (assuming, of course, that they're authorized to do so).

Smells like an information disclosure highway. I usually 404 all requests that hit "unauthorized" content.


> Confirm emails

Why?


It's for authentication. It's kinda like: if you (the email provider) know this person, then we (the developers) can trust that we know this person too.



