HTML5 Features you need to know (daker.me)
267 points by daker on May 25, 2013 | 73 comments

A useful list. I was surprised to find that I'd only heard of one of these.

The pattern attribute for regular expressions looks like a great addition. Obligatory comment that "no more...server side code to check if the user's input is a valid email" is clearly terrible advice. Always validate on the server.

I should point out it's generally not a good idea to try to use regex on an email address. It's surprisingly complex and very easy to miss something. I'd rely on type="email" and send a verification link server-side.

What out the WHATWG's take on it: http://www.whatwg.org/specs/web-apps/current-work/multipage/...

Specifically, they've redefined the format of email addresses in a way that's much simpler and is actually amenable to regular expressions. "This requirement is a willful violation of RFC 5322, which defines a syntax for e-mail addresses that is simultaneously too strict (before the "@" character), too vague (after the "@" character), and too lax (allowing comments, whitespace characters, and quoted strings in manners unfamiliar to most users) to be of practical use here."

The spec even includes a Perl-compatible regex as an example. :)
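For reference, the JavaScript-compatible version of that pattern (quoted from memory here — check the spec for the canonical form) behaves sensibly on the common cases:

```javascript
// The WHATWG "valid e-mail address" pattern (from memory; verify against the spec).
// Note how it is deliberately simpler than RFC 5322: no comments, no quoted strings.
var emailPattern = /^[a-zA-Z0-9.!#$%&'*+\/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$/;

emailPattern.test('user+tag@sub.example.com'); // true -- + tags are accepted
emailPattern.test('joe@');                     // false -- nothing after the @
emailPattern.test('"quoted"@example.com');     // false -- RFC 5322 quoting is gone
```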

s/What out/Check out/ # How the hell did I miss that typo? :(

You can edit your comments on HN, right?

Not after an hour.

You know, I really think at this point that the random historical crap that's technically legal in an email address should be treated the same as if you walked into a restaurant and tried to order in Latin. It's time to retire a lot of it and tell people to move on.

That doesn't change the fact that a lot of validators disallow common, everyday things like + tags or even . characters. The only good way to be sure is to check that the address contains an @ SOMEWHERE other than the first or last character (.+@.+, maybe), and then send a validation e-mail.

This regexp

    [^ @]*@[^ @]*
will validate strings like `@example.com` and `joe@` and even a single `@`. Probably

    [^ @]+@[^ @]+
is what he wants, plus server side validation (that is the classic "click link to validate address" email).

Forgot the SOL/EOL markers; the above will still match a substring, e.g. xxx@xxx@xxx.com. Try:

    ^[^ @]+@[^ @]+$
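A quick sketch of the difference the anchors make (sample strings are made up, just to illustrate):

```javascript
// Without anchors, the regex happily matches a substring of invalid input.
var loose = /[^ @]+@[^ @]+/;
var anchored = /^[^ @]+@[^ @]+$/;

loose.test('xxx@xxx@xxx.com');    // true -- matched the "xxx@xxx" substring
anchored.test('xxx@xxx@xxx.com'); // false -- the whole string must conform
anchored.test('joe@example.com'); // true
anchored.test('@example.com');    // false -- nothing before the @
```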

That said, HTML5 has something even better if the input is expected to be an email address: input@type=email

See http://www.w3.org/TR/html-markup/input.email.html for more details.

Thanks for the suggestion, I will check that :)

The good thing is that the regex will be the same. It's also pretty trivial to write a polyfill to cover old browsers. I've found it's really nice to define that in one place and then use it everywhere. Users on newer browsers get the HTML5 validation, older ones the JavaScript fallback, and everyone gets a second round of checking before it hits the database.
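A minimal sketch of what that shared piece might look like (function names are hypothetical). One subtlety worth encoding: HTML5 treats the pattern attribute as implicitly anchored, compiling it as ^(?:pattern)$, so the fallback should do the same to stay consistent with native behavior:

```javascript
// Validate a value against an HTML5 pattern attribute, mirroring the
// implicit ^(?:...)$ anchoring that the browser applies.
function matchesPattern(pattern, value) {
  return new RegExp('^(?:' + pattern + ')$').test(value);
}

// Fallback hook for browsers without native pattern support (sketch).
function installPatternFallback(form) {
  form.addEventListener('submit', function (e) {
    Array.prototype.forEach.call(form.querySelectorAll('input[pattern]'), function (input) {
      if (input.value && !matchesPattern(input.getAttribute('pattern'), input.value)) {
        e.preventDefault();               // block submission, like native validation
        input.className += ' invalid';    // hook for styling the bad field
      }
    });
  });
}
```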

No, don't validate email addresses with regex. It's more complex than that.

The pattern attribute can also be used to control which keyboard your smartphone shows when a field receives focus, e.g. numeric only.
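For example, iOS Safari keys the numeric keypad off this exact pattern (a sketch; the behavior varies by browser and version):

```html
<!-- iOS Safari shows the numeric keypad for pattern="[0-9]*" -->
<input type="text" pattern="[0-9]*" name="zip">

<!-- type=tel and type=number also bring up number-friendly keyboards -->
<input type="tel" name="phone">
```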

Thanks for your suggestion, I agree with you on that: the server-side validation is _necessary_.

Pattern is super useful, but not for type=email, which already guarantees a syntax check. Use it with type=text for special fields like serial numbers, SKUs, and credit card numbers (for which type=number is not appropriate).

dns-prefetch and prerender are not "HTML5". They're experimental proposals from a couple of browser vendors.

Is there any reason that:

  <link rel="dns-prefetch" href="//fonts.googleapis.com">
is using a protocol-relative URL, or any URL at all? Since we're controlling a DNS lookup, we would only care about the FQDN.

Based on the spec ( http://www.w3.org/TR/html5/document-metadata.html#the-link-e... ) requiring a valid URL ( http://www.w3.org/TR/html5/infrastructure.html#valid-url ), it would interpret it as a relative link to the path "fonts.googleapis.com" if you didn't specify '//' to indicate it was protocol-relative. I don't agree that the spec should make an exception when rel is "dns-prefetch" and treat all href values as FQDNs or similar.

For that matter, I'm not sure how much this feature actually helps. I know there is a cost to a lot of DNS lookups, but having the dns-prefetch in the header along with the actual link to the CSS likely won't help much. It might help a bit with script includes at the bottom of the page, though.

Also, a lot of the other bits aren't supported on IE 8/9, which is optimistically 1/3 to 1/2 of the users on a lot of websites. It's worth doing, but that doesn't mean you can or should forgo server-side checks/code. I find the client-centric frameworks people are coming up with that do no server-side input validation more dangerous than a lot of older frameworks. It's like SQL injection issues all over again.

Is the dns-prefetch only useful for things that are not already linked/included on the page? (such as redirect chains... t.co links might DNS prefetch the domain at the end of the chain)

Or is the benefit that you can start the DNS resolution whilst receiving the <head>, for the JavaScript resources only included near the </body>? (being a very slight performance benefit)

Edit: And I suddenly thought... for SSL pages, would dns-prefetching leak some information to an observer about the composition of the content or the linked resources on a page, by grouping such DNS requests into a short burst at the same time as the SSL request?

Correct on both counts. Pre-fetching DNS names for origins not on the page is, indeed, a nice use case; redirects are a good example. Similarly, embedding a <link rel=dns-prefetch> in the head will trigger an early DNS lookup, which can deliver big latency savings -- you would be surprised how slow DNS lookups are in practice.
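So a head along these lines (cdn.example.com is a made-up host, purely for illustration) lets the lookups overlap with the rest of the download:

```html
<head>
  <!-- DNS lookups start as soon as the browser parses these hints... -->
  <link rel="dns-prefetch" href="//fonts.googleapis.com">
  <link rel="dns-prefetch" href="//cdn.example.com">

  <!-- ...so by the time these resources are requested, resolution is (hopefully) done -->
  <link rel="stylesheet" href="//fonts.googleapis.com/css?family=Open+Sans">
</head>
```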

Last but not least, I'm not aware of any logic to group DNS-prefetch hints or similar strategies when running over TLS.

I tried implementing it for my blog's pagination, but I definitely don't see the zero-second latency Google reports in the video, and it doesn't seem any faster at all.

Can you tell me if it’s actually working from using the pagination at the bottom of the front page, or the pagination inside the permapages: http://pygm.us/1e9KHCYt. (Link fixed.)

You can see if pre-rendering is being triggered in chrome://net-internals#prerendering.

See: http://www.igvita.com/posa/high-performance-networking-in-go...

And "yes": https://www.evernote.com/shard/s1/sh/0f3592b8-d72c-40f7-9bbf...

Any idea why the prerendering fails?



Thanks for the help, by the way. :)

On forcing PDFs to download: please don't. Most often I'd rather view it in the browser and then decide whether to save.

More than the "force download" aspect, it's mostly useful for specifying a filename when linking to a download where the filename would be strange or non-existent, like a data: URL or some remote resource you don't completely control.
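A data: URL is the clearest case, since without the attribute there is no filename at all (a sketch; the payload here is made up):

```html
<!-- Without download=, this link has no sensible filename to save under -->
<a href="data:text/csv;charset=utf-8,name%2Cscore%0Aalice%2C10"
   download="scores.csv">Export as CSV</a>
```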

I'm pleasantly surprised to find out it now works correctly on Firefox. It was Chrome only last time I checked.

This was just an example :)

Sure, but one cannot stress enough the importance of relevant and correct examples. May I suggest something like "<a href='verybiglogoftheday/' download='log-2013-12-11.txt'>"?

Yes, look at my example (http://jsbin.com/utugex/1/edit): the href points to an .html file, so normally the browser would display the content of the page, but with the download attribute the browser is forced to download it as "myfile.pdf".

The regular expression pattern checking is pretty awesome. I'll disagree, though, where they say you don't need any more server-side validations. You can easily remove or change the pattern value in the DOM. Which begs the question: what's the integrity of client-side testing if it can be bypassed so easily?

Anything client-side is never about security or consistency, since no matter what you do, anybody can always send you any data they want.

Client-side validity checking is about the user experience: if the user is doing something wrong, you can guide them in the right direction without any server interaction, and server interaction is always slower than one line of client-side code.

Good point about user experience. Are there any tools you use specifically to test UX that aren't manual?

You can use Selenium. The second link is an example of using it within the Play Framework (scroll to the bottom of the page).

[1] http://code.google.com/p/selenium/?redir=1
[2] http://www.playframework.com/documentation/2.1.1/ScalaFuncti...

Sorry, I don't know of such tools.

Yes, you are right; you really need the server-side part to prevent bad things.

Client side validation is about UX and server load, not security.

The service/business tier should never trust the integrity of the front-end or the data it's sending.

Bounds checking and validation on the server will never go away...

The prefetch demo is obnoxiously misleading. If it is definitely the link you'll be loading, I doubt you will wait upwards of 7 seconds before clicking it, in which case it will not be 0 seconds of loading. Then there is the equally likely case: the site was confident it was what you wanted, but it was wrong, in which case you just downloaded a ton of garbage you didn't want. (Can't wait for ad servers to preload their assets for you...)

And this is also somewhat amusing, since if I recall, Firefox has boasted 0-second reloads when you hit "back" for a long time now. Does Chrome support that yet?

Prefetch is an option which you can opt out of, and I think you are mixing up prefetch and prerender. Just to let you know, Firefox has actually supported prefetching since... wait for it... 2003.

Worth reading: https://developer.mozilla.org/en-US/docs/Link_prefetching_FA...

You are correct that I was using prefetch where I meant prerender.

And... I believe I stand by my complaint regarding it.

AWESOME POINTS! I spend a lot of time on Html5Rocks.com and I studied HTML5, but I never even heard of some of these features... Made a new wrinkle in my brain ;)

> I studied HTML5, but I never even heard of some of these features

That's because they aren't "HTML5" yet.

This should make pagination really, really interesting. I just need to add some checks to ensure I don't force users to prefetch a huge chunk of data, of course.

Yes this is a good use case.

Must the prefetch/prerender tags be present at first load, or can they be added dynamically with JavaScript? Pagination is a good example, since a news site might want to load the second page of an article only after the user gets far enough in. Single-page sites are another good use case.

Note that those prefetching links are just hints to the browser.
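Since they are only hints, injecting one later should be harmless even on browsers that ignore it. A sketch (the helper name and URL are made up), with the document passed in so it is easy to test in isolation:

```javascript
// Append a resource hint to the document head (sketch; helper name hypothetical).
function addHint(doc, rel, url) {
  var link = doc.createElement('link');
  link.rel = rel;   // 'prefetch', 'prerender', or 'dns-prefetch'
  link.href = url;
  doc.head.appendChild(link);
  return link;
}

// e.g. once the reader scrolls near the end of page 1:
// addHint(document, 'prerender', '/article/long-read?page=2');
```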

Are these all working right now in iOS/Firefox/Chrome/IE?

Definitely not all. Here's the support list for two of the features:

Download Attribute - http://caniuse.com/download
Datalist - http://caniuse.com/datalist

It would be neat if caniuse had an embeddable widget showing % support at the time of reading the post, so I could know right away instead of looking each feature up.

You'd also have the advantage of the article always being relevant.

It does: just add "/embed" to the URL. E.g. http://caniuse.com/datalist/embed

Exactly what I wanted to ask. Link prefetching seems super useful.


Scroll down for a list of browsers (and versions) that support prefetch, prerender, and other hints.

The RegEx input checker on the client-side is not a replacement for server-side validation. It doesn't prevent someone from sending requests to the server with bad data, so don't forget to make sure that all requests have valid data.

What it does is help the interface to be more responsive so the user doesn't have to wait for a round-trip before finding out that they forgot to put in an email address / other valid input.

You can also just prefetch an image or file, say a background or large image that is on another or the next page:

<link rel="prefetch" href="http://davidwalsh.name/wp-content/themes/walshbook3/images/s... />

from http://davidwalsh.name/html5-prefetch

I think you still need to use "prerender" for Chrome - and "prefetch" for Firefox - for it to work.



Yep. Prefetch != prerendering. Prefetch only fetches the single resource and is meant as a "post load" hint, whereas prerendering fetches the entire page and all of its assets.


Does prerendering execute JavaScript or only load assets?

Thanks for the clarifications.

But isn't it really a matter of time before the `prefetch` and `prerender` hints get ignored? Some rogue sites are just going to mark all their links with `prerender`. This is almost akin to `meta keywords`, which were abused for a long time until all search engines started ignoring them.

There's probably a reason most browsers have been so slow to adopt it, especially Opera. Perhaps the reason Chrome is the only browser supporting it is that Google can leverage its web crawlers to scan for malicious use of the attributes.

I'm disappointed that DNS prefetching does not specify the record type (A, MX, TLSA etc), thus limiting utility solely to protocols that use nothing but A records.

Oversights like these subsequently constrain other developments, such as the case for basing HTTP 2 on SRV records, using DANE to authenticate certificates and so forth.

Another important feature: virtual file systems.

1. upload something at http://html5-demos.appspot.com/static/dnd/all_types_of_impor...

2. see files list here filesystem:http://html5-demos.appspot.com/temporary/

3. wat?

explained here


As someone who has worked with the HTML5 FS API before, I can tell you its async API design is a PITA to write against.

Would be nice to have a small table showing which browsers/versions support each of these features ..

What I would like to know is if you made that scrolling time-left thing; it looks neat.

This was immediately distracting to me and annoyed me very fast. Maybe it's because I'm reading on a mobile device and "x mins remaining" took up ~1/4 of my screen every time I scrolled. What value exactly does it add? It's likely to be wrong, and it just got in my way.

Thanks for the suggestion, I'll try to fix this :)

Datalist is really awesome. Autocomplete with no CSS or JS? Sounds good to me.
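For anyone who hasn't seen it, the whole thing is two elements (values here are just illustrative):

```html
<input list="browsers" name="browser">
<datalist id="browsers">
  <option value="Chrome">
  <option value="Firefox">
  <option value="Internet Explorer">
  <option value="Opera">
</datalist>
```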

While it is nice to have, datalists currently lack common functionality and are no real substitute for fully featured autocompletes. For example, datalist only matches the beginning of the values: typing "soft" into a datalist will not match "Microsoft".

There are some other limitations discussed here: http://msdn.microsoft.com/en-us/magazine/dn133614.aspx

Hugh's point is incredibly important. You don't really want one hundred thousand <option> elements in your DOM as your "auto-complete" solution.

To be honest, datalists are pretty worthless for every situation where I've needed to implement auto-complete functionality in projects, simply because the universe of possible entries has always been huge.

I just noticed Firefox has implemented fuzzy matching: typing "soft" into a datalist will now match "Microsoft".

And maybe you will be able to style it with CSS in the future.

Very useful list. Will try it and report back. Thanks!

You're welcome :)
