Enough with the JavaScript already (slideshare.net)
402 points by gulbrandr on July 15, 2013 | 221 comments



Before "js everything", it wasn't just a plain simple pure web. It was flash and gifs and "dynamic html" that would break in half of the browsers, and you'd show a special version of your site saying "we're not paid enough to support your browser, go get another one".

Clients were never reasonable in their demands, nor did most site owners have good, simple taste and care about efficiency, nor did half of the internet put user experience above everything else.

Saying that the use of JS has become heavy-handed is cool and fun, but any serious discussion should acknowledge why we are in this situation in the first place. I'm not sure it's so much worse than 5 years ago; at least we can read most things on mobile devices.

As a side point, I've done projects on ultra-weak platforms where every bit of JS had to be hand-written to save speed and memory. I wouldn't try to do the same on some cookie-cutter corporate site with a 600k slideshow loading on the front page, where having easily replaceable components and tried-and-true pieces to test on the 20 combinations of browsers is far more critical than cutting 20k of compressed script.


I think this is very true. When Java launched I re-did the Sun home page as an applet. Bill Joy suggested that in the future HTML would be gone and each web page would be its own Java applet. He was mistaken, of course :-) But as I've watched the emergence of 99%-JS pages I am reminded of his insight. His reasoning was pretty straightforward: the reason PDF (and that thing Imagen/Xerox did (DDF?)) existed was that conveying the document creator's intent to the consumer over a fungible medium like computers required that the document be a program describing what it looked like, leaving it up to the end node to interpret that program and do its best to recreate it for the end user. That was something ASCII could never do. So web pages, especially interactive ones, were destined to be "programs" rather than some form of semantic markup structure.

I have come to appreciate that this is a pretty profound concept (well, for me anyway :-), having the end result sent as generalized instructions for reconstruction, rather than as an external representation of the constructed object.


It's funny, the whole Java applet thing was actually a better solution than JS in a lot of ways (shipping compiled/compressed bytecode with a security manager), just way, way ahead of its time, and with a terrible windowing/drawing toolkit that meant it would never be adopted.


I think that on the mobile side it actually went the way you describe. In Japan, Docomo offered an open Java platform (free to use, free to install, no gatekeeper for standard apps). Mobile HTML was only usable for dead-simple things, there was no JS of course, and any service with mildly complex things to do or show was better implemented as its own app. It sounds terrible, but the user experience wasn't that bad. The terrible-toolkit part was solved by Docomo shipping its own UI toolkit (no J2ME compatibility, but it was so much more usable), and I think at some point there was a way to launch an app from the browser without installing it, but I'm not sure my memory serves me well.

The choice of Java was made for security, of course, and I never heard of any serious breach in 10 years of following the mobile tech news.

We get the same phenomenon, I guess, with the "go to the mobile app" redirects on websites that don't want to maintain x optimized versions of the same service.


The first HTTP 1.1 implementation, CL-HTTP, was written in ANSI Common Lisp.

> Clients were never reasonable in their demands, nor did most site owners have good, simple taste and care about efficiency, nor did half of the internet put user experience above everything else.

My web browser today uses more memory than my computer had in 1996. The web isn't efficient today at all.


"at least we can read most things on mobile devices"

True, but we don't need JavaScript for that. I think that's what the article is about. We can make cool stuff with less JavaScript, or none at all.


It's not really an either-or thing. You can go too far with client-side rendering. But progressive enhancement can only take you so far in delivering an interactive experience - it's telling that his example is a simple tabs implementation - of course that's easy to do without much JS.

That's why I like KnockoutJS - it makes it easy to sprinkle data-bound, rich interactive UI onto the pieces of your page that need it and leave the rest as normal server-side rendered content. It doesn't force you to go whole-hog client side.

One key area of performance where JS-rendered UI helps a lot is in customizing an otherwise uniformly served (and cached) page. 95% of the page is uniform for everybody, so render that server-side and cache the heck out of it (Varnish or whatever). Then bind the pieces of the UI after page load and customize them based on the user - their login status, their location, etc.
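A rough sketch of what that looks like with Knockout (the endpoint and element ID here are made up):

    // Server renders and caches the page shell; after load, fetch the
    // user-specific bits and bind only the elements that need them.
    var headerViewModel = {
        userName: ko.observable(''),
        isLoggedIn: ko.observable(false)
    };

    $(function () {
        // Hypothetical endpoint returning the current user's session info.
        $.getJSON('/api/session', function (session) {
            headerViewModel.userName(session.name);
            headerViewModel.isLoggedIn(session.loggedIn);
        });
        // Bind only the header region; the rest of the cached page is untouched.
        ko.applyBindings(headerViewModel, document.getElementById('user-header'));
    });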


Angular is also quite good about allowing you to enhance only parts of server-rendered pages.


have you actually done this? I was wondering if this is a feasible and good way to use knockout!


Yeah, there's a good description in passing of using a framework as an "island of riches" here:

http://blog.stevensanderson.com/2012/08/01/rich-javascript-a...

We use this technique at food52.com - e.g. this content page is cached uniformly, with the login system, recipe-saving widget, and comment system all done in KO:

http://food52.com/recipes/22888-yotam-ottolenghi-sami-tamimi...

More interactive/sophisticated uses of KO are coming soon in our new shop for the cart functionality.


2013 - I stopped paying attention to every new framework or MV-whatever that comes out. What matters is the product and end user experience, and every single decision on how to go about making that product should be made with that thought in mind.

For me, it's not even javascript anymore. Objective-C and Java rule 2013 since web apps are not as pleasant to use on mobile.


> For me, it's not even javascript anymore. Objective-C and Java rule 2013 since web apps are not as pleasant to use on mobile.

I think this is bad. Not because the technology isn't good, but because small companies (or idealistic organizations, or others with a small budget) will have to implement useful apps for several platforms. Perhaps they cannot afford it (or think it's too much hassle, or don't care).

E.g., say that there is a product made to make life easier for people with a certain disease. Perhaps some users will have to switch devices in order to use it. And maybe there are other apps they need which are available only for their old device. (And don't forget that there are other platforms than iOS and Android as well, including desktop.)

Web apps (whether run on the web or natively) solve this problem, even though I agree that JavaScript, with its myriad ways to shoot yourself in the foot, is not ideal.

Like another commenter pointed out, there are options for compiling to multiple platforms. But it often doesn't seem to happen that way, and perhaps it's not always possible or feasible.


"Objective-C and Java rule 2013 since web apps are not as pleasant to use on mobile."

Or, with some MVVM, Xamarin, and some C# or F#... write once, compile for all mobile platforms, never once having to touch the nastiness of Obj-C or Java.


What's nasty about Obj-C?


The syntax and the APIs.


I understand that the Smalltalk-like syntax of Objective-C can be difficult to understand at first glance; however, once you've worked with it a bit, the syntax becomes quite expressive -- each argument is labeled; something like using named arguments in Ruby, but it's not optional.

Not sure what you mean about the APIs being nasty. Can you elucidate?



Could you go further? What, in your opinion, is wrong with NSString? How would you fix it?


I understand not liking the syntax until you get used to it (I find it wordy, but very expressive), but what's wrong with the APIs? Apple's documentation is pretty good and the APIs have lots of features, especially with blocks. I also like that collections are immutable and the programmer must explicitly ask for the mutable version. Cocoa etc. is quite large, but once the programmer knows what's available, many problems that are hard in other frameworks are trivial when coding for OS X/iOS.

Java is, well, Java. One of Java's strongest points is the extensive set of collections available. Add in Apache Commons and Java is easy enough to use.


I'd rather use those 5 MB for my application.


Save 5 MB and condemn yourself/your team/your company to the hell of maintaining multiple code bases that all do basically the same thing, but subtly differently, and are written in different languages.

Sounds like a dumb ass decision.

It's the sort of logic that makes people pick C or C++ when a managed language would have been more appropriate.


I'm curious how well these tools work. Having a single unified code base and supporting all (desired) platforms is every developer's dream. I'm skeptical because supporting multiple operating systems can be a headache even for desktop apps.

I remember Facebook originally used HTML5 to support all platforms. This did work, but they became unsatisfied with the performance and ended up rewriting both the iOS and Android apps in native code.[1] The killer quote:

"I think the biggest mistake we made as a company was betting on HTML5 instead of native," Zuckerberg said...[2]

Granted, this was a different situation. They were using HTML5, and it looks like Xamarin compiles to native code. You mentioned slightly larger executable sizes, so I'm curious about the performance.

1: https://www.facebook.com/notes/facebook-engineering/under-th...

2: http://www.computerworld.com/s/article/9234695/Facebook_debu...


What is extremely telling about this whole ordeal is how awful the Facebook app on iOS is now. Their biggest issue (they claimed) with their HTML5 version was adding new elements as a user scrolled. The DOM operations were said to be both costly and to leak memory like crazy. Sencha Labs showed that they were able to accomplish it just fine.

Where I feel they should place some of the blame is on the backend architecture. Perhaps the app doesn't scale well. Perhaps they should research what the proper amount of data is to send to any given device. Implement more of a lazy approach. They have some very brilliant people working there. I find it hard to believe that HTML5 can be blamed totally for their app's poor performance.


It works well. It takes advantage of Portable Class Libraries (PCLs) in .NET, which can target just about any platform, including the Xbox.

Basically all your UI logic sits within these PCLs, and the only stuff you have to write/design for each platform is the actual screen layouts. Then you just set up the data bindings back to your models.

It's a shame this is news to people on here. It seems like the real innovation that goes on in .NET and Mono land is largely ignored or just swept under the carpet by the hipster community.

The larger executables are because it has to embed the Mono VM and some base class libraries. I've not really looked into performance versus Dalvik, probably because I've not yet encountered any show-stopping performance issues.


How do you write UI code to make apps look and feel correct across platforms? Does Xamarin already support the API changes in iOS 7?



Executable size was cited as the number 1 reason for uninstalling applications from the Play Store at Google I/O 2013.

Plus, Mono does not save you from writing the UI code multiple times anyway.


Yes it does. Look at the MvvmCross project.

Also, I'm struggling to verify that claim re: executable size - can you provide a link? Either way, the average app size on Android seems to be around 3 MB. A typical Xamarin-compiled app is around 4 to 4.5 MB.


> Yes it does. Look at the MvvmCross project.

This is a cross-platform implementation of the MVVM pattern; you still need to write the platform-specific views.

And to be honest, given my experience on .NET enterprise projects, MVVM brings back J2EE 1.4 memories.

> Also struggling to verify that claim re. executable size, can you provide a link?

Someone mentioned it to me on a Reddit discussion.

We have the policy to only use vendor supported languages, to minimize support issues and take advantage of performance.

I should add that the mobile apps I was involved with were games.


I wonder if there'll ever be an alternative to Javascript. I'm not talking about those things that eventually get translated to Javascript, I'm talking about a native platform well thought-through and based on a typed language that isn't a mess. Yeah, I know: it's not Javascript that's broken, it's the DOM. I'd argue that both should be replaced by something else, otherwise the future will be 90% native mobile apps, which isn't a bad prospect if you can afford to develop and iterate for two different leading platforms.


>Yeah, I know: it's not Javascript that's broken, it's the DOM.

No, JavaScript itself is a ridiculous language. Every time I have to deal with it I grit my teeth. Right now I'm dealing with dates. Luckily JS has a built-in Date class! Which, of course, does nearly nothing. If you call "getDay" you get a zero-indexed number representing the day of the week. So since there are no formatting functions to print out dates, how do you get the month-day number? Oh, right, you call getDate...

If I didn't know better I would think this abomination were created by someone who thought the secret of PHP's success was being horribly defined.


Try this: date.toLocaleDateString("en-US", {month: "numeric", day: "numeric"});


I want a specific format for our proprietary systems. In C# I can just say date.ToString("dd.MM.yyyy") or whatever I want.


Granted that's a bit tidier than doing this in JS:

var myProprietaryDateFunction = function(dt) { return dt.getDate()+'.'+dt.getMonth()+'.'+dt.getFullYear(); };

But in JS, you can pass that function around like a village bicycle - when I think back to my C# days, it makes me wonder how I ever did without functions as first class objects and a slew of other really great things about JS. A lot of those things are brainbangers at first, to be sure - but when you finally grasp them (for me, at least) you start to see that the things that make JS "ugly" to the novice are the same things that make it powerful and elegant in the hands of a master. To each his own - and there are things I do really miss about C# from time to time... but for me, I am all too happy to trade the tidier date.Format() for the more powerful underlying functional constructs any day.


when I think back to my C# days, it makes me wonder how I ever did without functions as first class objects and a slew of other really great things about JS.

C# has delegates, events, lambda functions, properties, and async functions as language constructs. This is as "first class" as it gets. It also has LINQ.


You forgot to add 1 to dt.getMonth(), because months index 0-11 :)

(and be careful not to accidentally concatenate that 1 instead of add)


uh, this won't have leading zeroes like DD.MM.YYYY format would have
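Something like this would cover both nits (just a sketch):

    var myProprietaryDateFunction = function (dt) {
        // Zero-pad to two digits.
        var pad = function (n) { return (n < 10 ? '0' : '') + n; };
        // getMonth() is 0-11, so add 1 before formatting.
        return pad(dt.getDate()) + '.' + pad(dt.getMonth() + 1) + '.' + dt.getFullYear();
    };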


The mistake, perhaps, is that people (us) are trying to shoehorn an application into a Document Object Model - it's in the name, for Christ's sake! Document! Not Application Object Model!


Not just the DOM, HTTP as well. We're shoehorning state into a stateless system, and inventing new crap like WebSockets to overcome limitations of a system designed for document retrieval. The web has turned into the biggest hack ever.


Like how TCP is a hacked-together impression of reliable transfer over an unreliable medium?

Nothing wrong with layering approaches, but the problem is knowing where to place the borders in order to give clarity at every level.


Bigger than x86? I doubt it. It's fine for things to evolve: so long as the complexity is compartmentalised we can forget about it and move on with our lives.


I don't see the point of WebSockets. Why not just let the browser make normal outbound TCP connections?


I think the main points in favor of WebSocket vs. unrestricted TCP sockets are:

1. Support for a browser-appropriate security model (origin-based)

2. Not requiring extra work to pass through HTTP-friendly (and everything-else-hostile) firewalls.


I assume you would apply the same origin policy to the connections.

The firewall point is good, although I don't understand why you would want to block general TCP connections but not WebSockets.


A lot of corporate environments prevent you from connecting to anything but ports 80 and 443. WebSockets are the only way for you to multiplex your TCP-like connections over port 80.
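For what it's worth, the client side is about as simple as it gets (a sketch; the URL is made up):

    // Connect over port 80/443 like any other web traffic, then exchange messages.
    var socket = new WebSocket('ws://example.com/updates');
    socket.onopen = function () {
        socket.send('subscribe:prices');
    };
    socket.onmessage = function (event) {
        console.log('received', event.data);
    };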


Right, but I assume they block those other ports for a reason. What is different about websocket that makes that reason no longer apply?


Because: we want a message-based protocol, and we want to make it (comparatively) hard to launch a DDoS attack using visitors to a website.


Document = application interface. It's a great way to abstract the back end from the front end. Unless of course you mush your business logic in with your interface code.


This!


Objective-C is really nice. I don't like Apple, but their development tools are actually fairly good, and the whole API in iOS is, if not necessarily a beautiful piece of engineering (which is subjective), at least thoroughly-documented and easy enough to learn and use. Android is terribly documented and very messy to develop on.

But yeah, it sucks that you can't do cross-platform development unless you're willing to be tied to a web platform, which tends to result in an inferior user experience.


> Android is terribly documented and very messy to develop on.

I keep hearing this FUD, but I have no idea where it's coming from. I've never had any problems with the Android documentation. The most annoying part of Android development is probably Eclipse, but I'd rather have a resource intensive IDE that's cross platform than one that only runs on OS X.

In fact, the Android platform has some amazing opportunities for those who are willing to look past the mainstream views on it. Because it's ignored by so many companies/developers, all you have to do is pick an iOS app that is popular and clone it - you are pretty much guaranteed to get lots of users.


> I keep hearing this FUD, but I have no idea where it's coming from. I've never had any problems with the Android documentation.

Let's take just one example from literally hundreds that are scattered all over the API reference alone. The delete() method in SQLiteDatabase -- arguably, a mature part of the Android API, considering it's been there ever since version 1 and is something that almost every application uses. Here: https://developer.android.com/reference/android/database/sql..., java.lang.String, java.lang.String[])

The method takes three arguments, but only two are documented. The description of the whereClause states that "Passing null will delete all rows", while the function description says "To remove all rows and get a count pass "1" as the whereClause". I presume the difference between the two cases is that passing null as the whereClause will delete all rows without giving a count of how many, but that's really poor taste in describing what the function returns.

This simply isn't OK. It's barely enough for internal use, where you'd probably Skype the guy who wrote it, ask for clarification, and kindly ask that they fix it when/if they have time (or you fix it yourself if possible), but this is very far away from what you want from a serious framework. Let's not even bring iOS into this. Look at Qt -- which optimistically would have, what, 10% of the users Android has, and a lot fewer developers -- and their documentation is at the very least complete.

Edit:

> The most annoying part of Android development is probably Eclipse, but I'd rather have a resource intensive IDE that's cross platform than one that only runs on OS X

It's definitely Eclipse, but in its description don't forget "unstable" and "still lacking a decent UI builder".


>> "The most annoying part of Android development is probably Eclipse"

Hopefully Android Studio[1] will be the solution to that problem.

>> "I keep hearing this FUD, but I have no idea where it's coming from. I've never had any problems with the Android documentation."

I don't think the Android documentation is bad but I find iOS documentation much better. It might just be because I'm more used to the iOS docs (5 years experience vs. 1 year on Android).

[1] http://developer.android.com/sdk/installing/studio.html


Coming from a world of Java (Java SE of no particular platform and Android), I find the iOS documentation surprisingly hard to navigate. In the Java world, it's common for documentation on a class to contain at least a page of introductory material to the class including its purpose, its function, major caveats, some example code, etc. The iOS documentation splits all of that up between Getting Started articles, the class documentation, and separately-downloadable sample code projects. It's all there if you know where to look, but it's inconveniently spread out.


Maybe it is just because I have more experience with the iOS documentation that I find it superior, then.


http://www.xamarin.com compiles C# to native apps for Android, iOS and Windows Phone.


Your best bet on Android is to download the source and use that as your documentation.


Dart ( http://www.dartlang.org ) is an attempt by Google. Of course, probably no browser except Chrome (and Opera?) will ever have the Dart VM built in, but Dart also compiles to JavaScript for those other browsers.


An issue with Dart is that it does not support IE 8. IE 8 still ships with Windows 7, and it is quite popular.

It's not reasonable to throw away all IE 8 traffic just to get Dart's features.


IE8 usage is under 8% so it's possible for some people/organizations to stop supporting it. In another year, I would expect market share to be under 5%.

http://gs.statcounter.com/#browser_version-ww-daily-20130615...

IE10 has more usage. Hopefully, Win 8.1 is successful for Microsoft and XP usage drops significantly.


There can be many markets where IE usage is much higher than the worldwide average. At my previous job, IE as a whole had ~60% share. You can't just decide to ditch a browser because of averages; you need to look at your own data.


Corporate Intranet: within epsilon of 100% IE 8, at the moment. My current job and the last one always had some outdated version of Internet Exploder as the desktop standard (which I ignored for running anything other than in-house junk -- no way I'm taking MSIE out into the wild)


Does Dart support Chrome version 2 [0] or Firefox 3.0.8 [1]? IE8 stems from the same era - March 2009 [2].

[0] https://en.wikipedia.org/wiki/Google_Chrome#Release_history [1] http://en.wikipedia.org/wiki/Firefox_release_history [2] http://en.wikipedia.org/wiki/Internet_Explorer_8

It's not reasonable to support an outdated browser version with an inferior Javascript engine, or to single out Internet Explorer as a browser for which older versions should still be supported, imho.

(disclaimer: I'm building a webapp that has to work in IE 8)


I wouldn't mind if Dart didn't support older versions of Chrome or Firefox, because that's a much different situation from IE 8.

Most people who are running IE 8 are random non-technical folks who are using the browser that came with their OS. They don't even know what "internet explorer" is.

There is probably a small subset of people who have Firefox 3.x because their brother's uncle's nephew installed it once, like 5 years ago, but I honestly don't mind losing these people as potential customers because it's such a ridiculously small percentage. I also feel like these are the type of people who would be more than capable of upgrading if they kept seeing messages like "your browser is old as time itself, upgrade or find someone who knows how to do it for you!" through friend/family assistance.

I can't justify throwing away almost 10% market share (IE 8) for Dart, and the idea of supporting multiple versions of the app is just too much work for too little return.


Absolutely. Write in whatever language you like, compile to asm.js - that way you're running in the browser, at about half native speed.


"Write in whatever language you want" - For languages that need a VM (most garbage collected languages for example) you would need to port the VM to asm.js. That's a job in itself. Then when a user hits your site they would need to upload all the asm.js for the VM at least the first time (think about that on a sketchy mobile connection). On top of that the performance of C++ to asm.js is about half native. The performance of e.g. a ruby interpreter running in a VM that was ported to asm.js could be a long way off native. The performance of e.g. Ruby might be acceptable on a server where you can throw more horsepower at it. But on a client? Client code is the most performance critical there is. The user expects instant feedback when they swype. Even if there was a native Ruby VM (for example) on the client it still might not meet your performance requirements not to mind one that runs in asm.js. It was the movement back to the server side with web programming that enabled the diversity of language use in the 90s. On the client the same old restrictions apply i.e. we would need something to make those VM's faster than their native versions, not slower.


Asm.js is not an alternative to JavaScript, though. It is JavaScript, just a really mangled and ugly subset of it.
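For a sense of what that subset looks like, a hand-written toy module might be something like this (a sketch; real asm.js is normally emitted by a compiler such as Emscripten):

    function MiniModule(stdlib, foreign, heap) {
        "use asm";
        function add(x, y) {
            x = x | 0;           // declare x as int32 via coercion
            y = y | 0;           // declare y as int32
            return (x + y) | 0;  // int32 return type
        }
        return { add: add };
    }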

cliveowen was clearly talking about "an alternative to Javascript" and "not [...] those things that eventually get translated to Javascript". Anything targeting Asm.js is merely targeting JavaScript.

The "half native speed" claim is quite dubious, at best. Even if it were realistic, that's still quite a horrible decrease in performance relative to native code. It's not as bad as the typically much worse disparity, but it's still not good at all.


Talk about JavaScript being replaced wholesale should be regarded as therapy. Therapy to dull the realisation that JavaScript is now immortal because it gained such massive reach and entrenchment. It's the web counterpart to C the immortal.


Asm.js is a non-starter for most of the high-level languages out there due to its lack of GC.


> at about half native speed.

Which may well be the right trade-off for many apps.


What's wrong with the DOM? (Sorry if this is a stupid question)


What's wrong is what the DOM is and what it isn't. The DOM is the Document Object Model data structure along with the parametrized routines that operate on that data structure. It exposes every way in which the apparent simplicity of XML is far from the reality. Then the operations that it provides do little to manage the complexity. They are only relatively "primitive" operations, like getters and setters for the data structure, plus a few things like "getElementsByTagName". That gets a set of elements from which you then should or must exclude what you don't want. If you want a convenient wrapper for the DOM API, you have to find or build one.
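For example, even a simple "give me all external links" ends up as manual filtering with the raw DOM (a rough sketch):

    // Raw DOM: fetch everything, then exclude what you don't want by hand.
    var links = document.getElementsByTagName('a');
    var external = [];
    for (var i = 0; i < links.length; i++) {
        if (links[i].hostname !== window.location.hostname) {
            external.push(links[i]);
        }
    }
    // With a wrapper like jQuery the same intent collapses to one expression:
    // $('a').filter(function () { return this.hostname !== location.hostname; });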


I don't necessarily disagree with you that there are problems with the DOM API. However, would you rather that higher-level APIs were designed by standards committees, or would you rather they were worked on and "de facto" standardized in the wild - the situation we have now? The situation we have now is a case of worse-is-better, I think.


Crockford has some very interesting things to say about it:

http://yuiblog.com/blog/2006/10/20/video-crockford-domtheory...


Nothing wrong. But I guess DOM manipulation can be painful sometimes. And if you are going to develop a web app, you'll have to use the DOM.


Google was planning some interesting experiments with JS and the DOM - they're hinted at in the Blink announcements, but I can't remember seeing any more about them yet.


I develop applications that skip the browser(s) and HTML. But it's not totally cross platform. But then again I don't develop for Joe Public most of the time either....


Isn't this the objective of Dart?


JavaScript is a tool. Once a tool becomes trendy, idiots will always abuse it. It's not JavaScript's fault people are bad at web design and development. If it wasn't fucked-up JavaScript these people were contacting you about, it'd be something else - be glad you have a job.


That does not, however, preclude the possibility that the tool is in fact a bad tool, and that it is, bizarrely, the only tool we have.

They say poorly skilled people blame their tools, and that highly skilled people who know their tools well will know how to use them well. That being said, see that circular saw over there that will occasionally bounce and cut off its user's fingers? I'm not gonna use it, no matter how skilled I am.

I for one am hoping for a replacement for JS (that isn't Dart, which feels like JS Patched). I've had more than just a hairy experience with JS lately.


Seriously, your analogy is absurd. JS has a few rough edges that any decent editor will warn you about, and that you automatically know to avoid after doing a lot of it for even a couple of months.

This complaining that JS is unusable is just BS and whining by people who are simply shying away from something they don't know.

There are bigger things that can bite even experienced developers, like memory leaks and bloat, but that has little to do with JS, since you can fall into those pitfalls in any language.

I'm not attached to JS and have pretty much switched to CoffeeScript. I like CS better but that doesn't mean JS is anywhere near as bad as you make it out to be.


Coding JS is 95% of my day job and I'm 100% sure that it is the worst language I have ever used. All languages have flaws but JS's are serious, inexcusable and onerous.


He didn't say it's unusable, he said it's a bad tool...and it is. That's why a whole industry was born to work around it by transpiling to it.

It's a bit ridiculous to say that anyone who complains about JS doesn't know it.


But JS isn't just a tool. No programming language is.

JS is more a workshop that happens to contain, amongst its stations, a less-than-safe circular saw. And you can absolutely, productively use the shop without using the saw, or by using the saw, if necessary, with additional safety precautions.

And because some people use the saw willy-nilly, you're saying we should throw out the whole shop?


I agree that it's a bad tool in many ways, but even if the language (and environments) were beautiful, the problem at hand is often the way that people use it.

In this case, a poor craftsman uses his tools to do the things that the browser already does for you.

I've seen some JS that reinvents lots of what the browser rendering engine should handle (recalculating & reflowing heights of elements every time something was added or subtracted from the DOM, for example). Expand this kind of thinking to an entire project, and you start to find yourself in the kind of mess described in the slides.

If/when Dart replaces JS, some of the enforced structure may help prevent badly organized or buggy code, but it won't fix poor assumptions of what concerns scripting should and should not handle.


Fortunately, JS is not the only tool we have. JS is a relatively fine compile target, and with asm.js, a pretty fast one.

So pick your favorite among CoffeeScript, TypeScript, Dart, GorillaScript, Elm, ClojureScript, etc, or try compiling your favorite language using LLVM.

Let a compiler take care of all the numerous rough edges raw JS has.


I have never really understood this. JavaScript is a programming language, and the only reason people built languages that compile to it is because they felt they needed something better, but this has always seemed incredibly hacky to me.


Also, FunScript (F# to JS). http://funscript.info/


also Haxe.


> JS is a relatively fine compile target ...

No, it's not.

> ... and with asm.js, a pretty fast one.

And asm.js demonstrates why it's not, because asm.js isn't JavaScript. It's a strictly defined ASCII-encoded bytecode that happens to be representable using a subset of valid JavaScript.

At which point, one must ask, what bizarro-world engineering justification do we have for using a JavaScript subset as a first-order bytecode format? Why couldn't the silly JS bytecode format be a second-tier target for legacy browsers that don't support a proper format?

On top of which, why are we willing to throw away 2x+ performance (in the best case)? Is the iOS/Mac App Store not successful enough for us, such that we absolutely refuse to try something other than adding more JavaScript to every problem we face with web app deployment?


asm.js is JavaScript. It executes according to the semantics specified in ECMA-262.

The 2x performance numbers for OdinMonkey are not "best case": they include compilation time and will certainly improve (they are better now already).

Regarding having a "real IR", throwing away backwards compatibility for surface syntax doesn't work on the Web. It was tried, with XHTML 2.0 for example. It failed.


> asm.js is JavaScript. It executes according to the semantics specified in ECMA-262.

No, it's a strictly defined text-encoded bytecode that happens to be representable using a subset of valid JavaScript. If you deviate from the standard using valid JavaScript, you lose the gains.

Calling it "JavaScript" is just a semantic game. You can't output arbitrary but fully 100% standards-compliant JavaScript from a compiler and expect asm.js to do anything meaningful.

> Regarding having a "real IR", throwing away backwards compatibility for surface syntax doesn't work on the Web. It was tried, with XHTML 2.0 for example. It failed.

The irony is that these things fail because of the people who wish to maintain the status quo, and then those same people point to the failure as justification for maintaining the status quo.

It's not like you folks at Mozilla couldn't get support for a "real IR" from Google/Chrome -- that's half the market right there. In fact, the actual problem is that Google could never get support from you.


> If you deviate from the standard using valid JavaScript, you lose the gains.

That's true for lots of JavaScript optimizations. JS optimization is all about speculation that the more dynamic features won't be used. Try adding calls to "eval" within a JavaScript function in any modern JS engine and watch its performance drop by an order of magnitude. Does that make functions that don't use "eval" no longer JavaScript? After all, adding a call to the standardized function "eval" negates the performance benefits of "eval"-less JS.
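A contrived illustration of that point (a sketch; actual slowdowns vary by engine):

    // Identical bodies, but the mere presence of eval in the second function
    // prevents the engine from applying many of its usual optimizations.
    function fast(a, b) {
        return a + b;
    }

    function slow(a, b) {
        eval("");   // does nothing useful, still forces a slow path
        return a + b;
    }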

asm.js is just this principle writ large.

> Calling it "JavaScript" is just a semantic game.

No, it means that asm.js is backwards compatible. That is not a game; that is the entire point. That is why asm.js worked in Chrome (with good performance even!) from day one.

> It's not like you folks at Mozilla couldn't get support for a "real IR" from Google/Chrome -- that's half the market right there. In fact, the actual problem is that Google could never get support from you.

Because PNaCl is not a good idea for Web content. People have this idea that Mozilla knows PNaCl is "better" than asm.js, but Mozilla wants to stick to JS out of some sort of pride or NIH syndrome. This is not the case. Backwards compatibility is the main advantage of asm.js, of course. But there are also many others: LLVM IR is a compiler IR and was not designed for this; asm.js is smaller when gzipped than LLVM bitcode; asm.js compiles faster than PNaCl; asm.js can reuse the JavaScript infrastructure, leading to a smaller, simpler browser; asm.js does not have the Pepper API which reimplements all of the Web APIs in underspecified ways.


Maybe it's time to go back to the HotJava[1] approach and use Java bytecode as the canonical "bytecode for the Web".

OK, to be fair, the browser itself doesn't really need to be written in Java. But other than JavaScript (and maybe Flash, I suppose), Java bytecode probably has the most penetration as a mechanism for delivering "programs" over the web. Maybe we should just embrace it...

[1]: http://en.wikipedia.org/wiki/HotJava


People don't use javascript because it's a beautiful, well thought out language. They use it because it runs everywhere.


Here's the painful part: Douglas Crockford wrote a book called JavaScript: The Good Parts.

If JavaScript were limited to the Good Parts alone, it would be quite close to a beautiful, well-thought-out language.


What's stopping you from only using the "good parts" in your own code? How is JS a bad tool? It runs reasonably fast, works everywhere, is a small, simple language that's surprisingly powerful. What if Eich had been influenced by C instead of Lisp when creating JS?

All things considered, it could have been way worse than it is, and the truth is JavaScript lets you get in there and do good stuff. Not sure why we are still hating on this environment.


I think a lot of experienced developers are only using the good parts. It's difficult to learn, but it can indeed be a very beautiful language when you use it with modules/JSLint/JSHint.


On our SVN repo (team of 20 JS devs) I've added JSHint checking in a pre-commit hook. Devs who don't follow the rules literally cannot commit code. It was much less of a big deal than you would think.
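The git equivalent is a few lines of Node (just a sketch - with SVN you'd do the same check server-side with svnlook, but the idea is identical; it assumes the jshint CLI is on the PATH):

    #!/usr/bin/env node
    // Save as .git/hooks/pre-commit and make it executable.
    var execSync = require('child_process').execSync;

    try {
        var staged = execSync('git diff --cached --name-only --diff-filter=ACM')
            .toString()
            .split('\n')
            .filter(function (f) { return /\.js$/.test(f); });
        if (staged.length) {
            execSync('jshint ' + staged.join(' '), { stdio: 'inherit' });
        }
    } catch (e) {
        console.error('jshint failed - commit aborted.');
        process.exit(1);
    }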


Agreed... I only had to do a couple tweaks to my jshint rules (I prefer comma first, and a few other things)... but it wasn't hard to get used to at all.

Testing is another point... having JS tests can help a lot, though my opinions on TDD aren't as strong as many people's.


What kind of modules do you mean? There are so many kinds in JavaScript.



Yeah, he did. As a JavaScript developer I wouldn't limit myself that much, though - he considers using ++ a sign of bad code.


To be fair, Crockford gives his reasons [1] and the replacement is literally += 1 instead of ++. Given the very low cost of the change, and that it is an optional rule in jslint, I have no problem with it.

[1] See here for more discussion and links to Crockford explaining the reasons: http://stackoverflow.com/questions/971312/why-avoid-incremen...


True, but for me the issue is that with += I have to double-check that it is only adding one (and not skipping any) to understand the loop, whereas ++ is something I have seen so many times that my brain reads it in a different way.


Did you read the Good Parts? If so, read it again.


I used JS because it was the only option. I'm using it now because it's beautiful and fun.


I know you are just making a point with the circular saw metaphor, but for those wondering here is some good info on how to use a circular saw as safely as possible:

http://www.docstoc.com/docs/87338746/CIRCULAR-SAW-SAFETY-AND...

http://carpenterbooks.com/userFiles/556/frame_table_mw_pdf_2...

Even still, as the safety guide states, "Be careful, making one small mistake with a circular saw could be the last thing you ever do in your life"


Yeah, JavaScript has problems; so do all programming languages. JS has some particularly egregious ones, but the abuses and problems the slides complain about are not a symptom of a defect in JavaScript. They would be the same problems with any other language being used by incompetents.


I like to think of it this way: some languages allow you to shoot yourself in the foot more easily than others, given the context and environment.

Abuses and problems precisely show Javascript's defects. The more easily abusable a language is, the more defective it is.

A lot can be said about familiarity with the language. Many examples on wtfjs.com boil down to (mis)understanding the language itself. Let's call this the cognitive overhead of a language - the number of corner cases you have to keep in your head about it.

Surely a language with high cognitive overhead is more abusable than languages with low cognitive overhead.

Incompetent Python, C, or even Scheme programmers wouldn't be able to shoot themselves in the foot (by which I mean getting unexpected results - even counting undefined behaviour) as much as incompetent JavaScript programmers. That's my beef. I currently have no way of empirically proving that, but my gut is leaning that way.
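A few of the classic corner cases of that kind (all real JS behaviour):

    typeof NaN   // "number"
    [] + []      // ""
    [] + {}      // "[object Object]"
    "5" - 1      // 4
    "5" + 1      // "51"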


You are 100% wrong. C allows you to shoot yourself in the foot much more easily and much worse than JavaScript. JS is a far more "safe" language to code in.

No pointers, no memory allocation. Yeah JS isn't typed but the problems that you get into with that are nothing by comparison.

And again, I'm not saying JS doesn't have problems; it does. But the problems this presentation is complaining about are not results of flaws in JS; they are results of incompetent developers.

So he's complaining about the wrong thing by complaining about JS.

It should be titled "I wish incompetent people wouldn't try to do things." And we'd all agree but then consultants like him would be out of a job.


FYI, Nicholas Zakas, the person who gave this presentation (which I attended) is a well-known JavaScript expert. He's definitely not complaining about JavaScript as such, just horrible misuses of it.


> No pointers, no memory allocation.

You mean no manual pointer management, no manual memory allocation, right? Right?

> Yeah JS isn't typed but the problems that you get into with that are nothing by comparison.

JS is typed. Every language is typed. It just isn't statically typed. Big difference.

>You are 100% wrong.

Strong words for someone who gets a lot of basic stuff wrong in a single paragraph.


That's semantics. He didn't get the "basic stuff wrong in a single paragraph" at all. By not having to worry about pointers or memory allocation at the point of coding, it is entirely reasonable to say that JS the language has none. Yes, of course the interpreter does that; it goes without saying. JS is weakly typed and no type declarations are needed when writing it, so it is said that JS isn't typed. Your comment is akin to that of a grammar pedant, not addressing the argument, just the manner in which the points were made.


>> No pointers, no memory allocation.

There are definitely both of those things, they just aren't explicit in the same way.[1]

[1]: http://point.davidglasser.net/2013/06/27/surprising-javascri...


The parent probably shouldn't have included C and Python in the same list but if you apply the principle of charity[1] and choose Python as the point of comparison rather than C, then his argument is somewhat stronger.

[1] http://en.wikipedia.org/wiki/Principle_of_charity


You speak very confidently about a language you already admitted you refuse to use. And your heuristic of "the more abusable a language is, the more defective it is" is completely nonsensical.


I don't think the article is knocking JS; it is knocking the overuse/misuse of JS on the client side. So, yes, any other language would have the same problem, but to take this as another "JS suckzzz" article misses the point.


It makes me shudder to think what things would be like if they hadn't included closures in Javascript. Things could have been a hell of a lot worse!


Ironically, if they hadn't, we'd probably be using something much nicer by now. Closures are one of the couple of features that made idioms for sane JS programming possible.


We are all abusing technologies originally developed for hypertext documents (HTTP, HTML, CSS, JavaScript) to build applications. If you started today on a set of standards for handling what people currently build on the web, proposing those technologies would be completely ridiculous, so ill-suited are they for this job. And JavaScript is probably the worst of all; it is designed badly enough that there isn't even a standard way of defining modules or classes, so you end up connecting shit structured in a hundred different ways.


Amen!

The fact that it's still insanely easier to write desktop apps, using one or two languages and a layout manager vs. dealing with two decades of WTF! web programming makes me sad.


See CommonJS and/or AMD combined with a build tool like Grunt...

There are component options around, mostly pretty new, for building modular JS and packaging it for use in the browser as a single download. RequireJS in particular goes a long way towards helping with browser development. AMD lends itself more towards the browser, but there are build tools for CommonJS-style modules as well.
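For anyone who hasn't seen the AMD style, a module ends up looking roughly like this (a sketch; the 'jquery' path would come from your RequireJS config):

    // A minimal AMD module: dependencies are declared up front and injected.
    define(['jquery'], function ($) {
        function highlight(selector) {
            $(selector).addClass('highlight');
        }
        return { highlight: highlight };
    });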

If I were starting today, I'd probably have a reduced subset of what HTML is, with extension points for form inputs. The issue is that extensibility/modularity, skinnability, and centralized authority are points of contention for application building. I really liked Silverlight as a concept; I thought the package system was well thought out.

What I really didn't care as much for is how verbose XAML is. I can say much the same about Flex+ActionScript. The problem is that neither of these formats was open enough for browser vendors to simply build in support for them as a specification.


I think the problem is that the original technologies were bad to begin with, and the development was ad hoc and without foresight. Just look at the history of the img tag (http://diveintohtml5.info/past.html). The DOM was tacked on later; JS was developed in a couple of weeks, with Java-like syntax bolted on for marketing purposes. And CSS is the worst. Why anyone considers cascading to be a feature is beyond me. I mean, what were they thinking, what was going through the person's head when they decided "yep, cascading, that's what we need, forget namespacing, variables, or expressions, we need cascading". Oh boy.


Java is a tool too. But the years and years of people inflicting bad Java desktop UIs on the world might have been avoided if we had been more critical of it rather than going with what's cool and hip and new.

I think in a few years time, we'll look back on the current trend of client-side-all-the-things just as we now look back at Flash intro pages, pop-up ads and all that shit.


Nobody has ever liked Swing GUIs; everyone with a little bit of taste disliked them, was constantly looking to change their theme, and it was really awful for a developer to try to create a decent user interface with.


Not only idiots - it seems everyone is so obsessed with creating Ajax navigation, rich controls, and one-page apps. The one reason I never open Google Plus is because everything takes ages to load, with their fucking JavaScript in everything.

Why can't sites be like Stack Overflow, which only uses JS where necessary?


> $LANGUAGE is a tool. Once it becomes trendy idiots will always abuse a tool. It's not $LANGUAGE's fault people are bad at web design and development. If it wasn't fucked up $LANGUAGE these people were contacting you about it'd be something else, be glad you have a job.


The same comparisons were made when Flash was being misused. It doesn't take much to muddy an ecosystem. There were certainly other problems with Flash, but it's unlikely that best practices for JS will filter through to most developers.


Another presentation reaching similar conclusions, but with different arguments, that I recently saw on Reddit (not sure if it made it to HN):

http://www.infoq.com/presentations/web-development-technique...

http://www.reddit.com/r/programming/comments/1eiykw/web_deve...

(back to me...)

I wonder if the pendulum is finally swinging back, to working with the affordances of the web, instead of fighting them. I hope so.

This might have an interesting relationship to the rise of mobile. It's still not possible to rival native apps with HTML apps on mobile (a controversial assertion, but one that is 'trending' too) -- so if you ARE building a web app, you've got to actually have reasons to prefer web apps. Even if that's just cost/speed of development. But if you've actually chosen to build a web app, maybe you're more likely to want to work within the web instead of fighting it. If you don't like the architecture of the web, you could have just written a native app instead.


One can do a great many things with disciplined Javascript, and I shudder to think what Twitter was trying to do on their front-end that caused a 5x increase in load time compared to server-side template-rendering.

That being said, I would very much welcome a high-performance alternative to Javascript that also runs in any browser -- something in the spirit of C or Java, which could be embedded in Javascript and vice versa.


> That being said, I would very much welcome a high-performance alternative to Javascript that also runs in any browser -- something in the spirit of C or Java, which could be embedded in Javascript and vice versa.

Possible steps in that direction:

- Google Native Client: run native code in a sandbox [1]

- asm.js: A strict subset of JavaScript that can be optimized to native or near-native speed [2]

I much prefer the asm.js approach. I agree that there's a need for NaCl, but if we look at the browser as a sandboxed virtual machine that everyone has, we should all agree on a bytecode specification for that virtual machine. We pick JavaScript "because it is there". Once all browsers speak native asm.js, then you can compile your client-side code in whatever language you want, even C/C++ (see Emscripten for example). [3][4]

[1] https://developers.google.com/native-client/

[2] https://blog.mozilla.org/mbest/2013/06/25/asm-js-its-really-...

[3] https://github.com/kripken/emscripten

[4] http://games.slashdot.org/story/13/03/28/2113234/emscripten-...


I agree - but moreover, I'd like to see a single, cohesive replacement for the whole web stack. One standardized language to handle styling, scripting, server-side code, and the document (in fact, let's just get rid of the notion of a document - we are building dynamic applications now, not documents). I understand that one reason we have separate languages is security, but I think this could be handled even better with special permissions (like Unix file permissions, but for code functionality/access).


Client- and server-side code is separated in literally every application that uses a network, because they must necessarily be run on different computers. In most applications they can be the same language (C, C++, Java, etc.), but they have to be separate programs. I think what you're mostly complaining about is the JS lock-in on the browser side, which is a legitimate complaint, but a different one.

re:HTML - What's the difference between a dynamic document and a dynamic application, really? With JS and CSS3, HTML is unrecognizable - pair it with something like Backbone.Marionette, and the only thing HTML is doing is defining your display in a structured way, more like the XML definitions of an Android view than an old-school document.


Basically, what Twitter did was load a bare wireframe that loaded JS, and then that made a request to the server to get the content. If you cut out the second request, you've instantly saved the latency before that second request is even made, as well as the extra roundtrip time.


You should check out asm.js. With it, a new universal language might not strictly be necessary. It's a really interesting project.


All the includes are insane as well. Run Noscript or any other JS blocker and visit a few big sites and you'll see that you end up running JS from half the Internet.

I'm joking of course, but some sites have dozens of includes from other sites, advertisers, CDNs, etc.


I installed Ghostery but did not set up any filters; with just that, it shows how many external resources are loaded and from which external parties. Some sites have like twenty external dependencies and multiple analytics-gathering scripts (which may get embedded via advertisers' iframes).


I work in advertising. I have seen Ghostery rack up close to 300 trackers before. Piggybacked pixels... they're everywhere


How does one even collect the data from 300 trackers?


Your 300 affiliate marketing partners run software from their affiliate network which automatically analyzes their tracking pixel server logs to determine how far the traffic they steered made it through your purchase process based on the pixels they requested (unique for each page) and the unique ID in their cookies. Then, their affiliate network collects the sales commissions and pays out the affiliate partners.


Ah, gotcha, that makes more sense. I was like, 300 individual sites with individual tracking on what page, what the, how the, etc. Thanks for the clarification!


TechCrunch is the worst I've seen. 13 for me. Any other higher ones?


How many trackers are detected by Ghostery on individual articles on these sites?

Boing Boing has 12.

The Atlantic has 13.

Huffington Post has 16.

The Onion's AV Club has 18.

Wired has 19.


Media sites are painful to visit with so many libraries and embedded widgets. Those pages load much faster after using NoScript to selectively load JS. Wish there were a built-in repository where people could share whitelists.


A few points of contention... First, it doesn't seem like TFA is opposed to JS itself, as the linkbait title would suggest, but to the overhead of some sites/libraries.

I think that jQuery is probably a bit larger than it may need to be. Most of this is to work around edge cases or missing features in supported browsers. I think that the biggest issue may well be cost(s), and trust. It would be entirely possible to have a jQuery-like framework that brings in only those shims it needs as part of loading from a central source. Unfortunately, that has costs in terms of both maintenance and deployment/CDN. It's probably not worth it.

Second, you are getting a lot of unused features with most frameworks (like jQuery); however, this can be mitigated by using a common CDN, where caching helps a lot. Using the Google or MS CDN for jQuery is a no-brainer for a public-facing application. I think that jQuery is too useful to just be replaced with one-off components.

As to jQuery UI, when you compare what it does with other toolkits, it's actually very impressive. Just look at the load size of the JS for Bootstrap, for example... and Bootstrap doesn't do all that jQuery UI does.

More and more frameworks have checkbox build options to give more fine-grained builds specific to your needs with less overhead. Also, as pointed out in a few slides, you can load certain scripts and features on demand or after page load.

For example, ALL my scripts tend to be at the bottom, before the closing body tag (unless it's a single-page application). Even then, the analytics scripts are last... IMHO the page being served to the user is the most important thing; it should be mostly functional without JS. And in the larger scheme of things, analytics scripts are pretty low in the pecking order... when you have 10k users an hour, missing 2-3 analytics loads is no big deal.
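The post-load part can be as simple as injecting the script tag after the load event (a sketch; the path is made up):

    // Defer non-essential scripts (analytics, widgets) until the page has rendered.
    window.addEventListener('load', function () {
        var s = document.createElement('script');
        s.src = '/js/analytics.js';   // hypothetical analytics bundle
        s.async = true;
        document.body.appendChild(s);
    });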


Enough with the no-context-attached slides already.

Seriously, if you don't have a recording of the talk, or a transcript, at least provide an article or something.


I will never understand the utility of posting slides from a talk online without the audio.


One of the main reasons I come to HN is to learn how other people feel about certain things, predominantly from the comments section. Else it's just me in my cave.

It's obvious what point the slides were making.

The important part is that it said enough to inspire discussion and that it's qualified by its recency (posted last month) and by the experience of the person that gave the talk (front-end guy at Box).


It's useful for those who attended the talk. You generally don't need the audio, just the highlights from the slides.


I found the slides to be quite informative.


I agree, but I definitely see the grandparent's point. These slide decks are annoying as hell. They're inconvenient for quoting, reading on-the-go, and they can leave out a whole lot of context. This is a trend that really needs to go.


I don't know if I should laugh or cry: I had to temporarily disable NoScript for both slideshare.net and slidesharecdn.net to read these slides.


Is it ironic that this slideshow complains about Javascript but requires it to be viewed?


no, it's not


this slideshow doesn't complain about JS, it's giving you suggestions on how to use it well.


JavaScript assumes a "web browser". What happens when we're not using a "web browser"?

A few days ago, I was actually downvoted for even suggesting that a user could disable JavaScript and that this might reduce her vulnerability to exploits. I'm always fascinated by the strength of the bias in favor of JavaScript.

I'm guessing that so many developers are now so heavily invested in JavaScript that if it were to become less popular they believe they would suffer somehow. They will thus defend this language with fervor. That's my guess.

Days ago we saw Dan Bricklin, who is no stranger to a world without a web browser and is responsible for the app that literally launched the PC into the mainstream, put in his plug for JavaScript. But we also learned he's written entire spreadsheet applications in JavaScript. It appears he's heavily invested in this language. It stands to reason he would defend its use.

On this thread someone mentioned that Bill Joy thought Java applets would power the web. Not surprising considering his company was responsible for Java, and he has called James Gosling, the father of Java, his favorite programmer.

I think when we look at JavaScript we need to ask ourselves who stands to gain the most from it. My belief is that it benefits developers more than users. It's aesthetically pleasing to most developers, but more importantly, programming in JavaScript requires less work than using a language with manual memory management that does not expect to be run inside another application (a "web browser"). JavaScript boosts productivity.

Users, I believe, do not see the same benefits. (e.g. I have seen Marissa Mayer while at Google state how important speed is to users. We might accept that speed is one benefit that users would recognize.)

Because the love for .js is so strong and criticism of it is not well received, I won't go into any more detail. But suffice it to say, if there are problems with using JavaScript, I believe it is not developers who would suffer the most from them. I believe it is users who would bear the burden.


Well, now that developers can do crazy stuff with JavaScript, they will never use it responsibly; they will show every trick they know to the user, because they want to or because their clients want it.

I remember the time when 100 KB pages were considered too heavy. Now devs don't even bother optimizing images and use 500 KB PNG logos...

It's not unusual today to see 2 MB homepages...


There is a place for JavaScript, but it feels like it's being heavily overused.

Anyone who browses with JavaScript disabled (using, for example, the NoScript browser add-on) will be aware of how many sites, even those with mostly text content, fail to load without JavaScript.

Google's blogger/blogspot service is one of the worst offenders. Here's an example: the official Android blog from Google. The page simply won't load with Javascript disabled. Once it is enabled, you have a page of mostly text. This is simply bad web practice in my opinion.

http://officialandroid.blogspot.co.uk/


Slide 19 seems broken for me.

I remember from last time someone posted this that slide 19 was like slide 18 but with all of the features over in the JS box. In this version, both slides are identical.


Many sites already use Google's hosted JavaScript libraries for jQuery, Prototype, etc...

Could Google reduce load time in Chrome by building these into the browser, so that any time they see an include pointing to the Google CDN, they just skip it and let the pre-included library take its place?


This shouldn't be necessary. Google's CDN (afaik) sets proper cache headers so that the first time you download a specific version of a library, your browser should cache it until it expires. (correct me if I am wrong)


A non-negligible downside is that it would give some libraries an "unfair advantage", which might harm innovation.


The number and versions of these libraries is just immense.

A lot of what the early versions of these libraries used to do is now handled by upgraded versions of javascript.

Adding JQuery etc. to the browser is a short term fix, that would become a problem further down the track.


I agree about the short term fix thing.

What I meant was "_Would_ it actually save any time to do this?", not "They should do it."

I am curious if there would be any speed gains, not just in data transfer but in javascript initialization and execution.

Also, it would be an interesting "aspirational algorithm" if they only did this for the best/latest versions of these libraries, encouraging developers to stay up-to-date.


I like this in principle. I think it should be abstracted out to a simple package system or something, though. When you go to a site that requires jQuery, your browser checks the SHA-256 hash or something against its canonical list of published packages. If one is available it installs that one, or uses the one already installed. If the hash doesn't match, it goes ahead and downloads the one from the site.
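
Purely as a sketch of that idea (nothing like this exists in any browser today; the names and digest below are made up): the page would declare the hash of the library it wants, and the browser would substitute a pre-installed copy whenever the hash is one it knows.

    // Hypothetical browser-side lookup table: content hash -> bundled copy.
    var installedPackages = {
      'sha256-PLACEHOLDER': 'builtin://jquery/1.10.2/jquery.min.js'
    };

    // Decide whether to skip the network for a declared script.
    function resolveScript(declaredHash, siteUrl) {
      if (installedPackages[declaredHash]) {
        return installedPackages[declaredHash]; // use the local, pre-parsed copy
      }
      return siteUrl; // unknown hash: download from the site as usual
    }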

However, is transfer time really the biggest issue here?


Great idea, why limit it to just Chrome? Of course, implementing something like this would require a number of additional updates.

And this still doesn't directly address the issue due to custom, large scripts for site-specific functionality.


In every situation that you allow your engineers to develop for the technology instead of for their customers, this is going to occur.

Does the customer care that the latest MVVM JS tool is being used? Not unless your customers are only other developers. The customer cares about getting whatever widget you are selling them quickly and with as little thought as possible on their side to consume it.


Why store state in DOM? Isn't that bad practice?

I remember reading that DOM access is the slowest part of JS [needs verification]. So you want to trade performance for a few dozen kilobytes of assets that can be cached?


DOM access is the slowest part of JS in the same way that Windows API access is the slowest part of a C++ app on Windows.

"DOM access" covers everything from "WebGL calls" to "stroke a path in canvas" to "hey, redo the layout of the whole page" to "hey, store this string in a database" to "add an attribute to this element", depending on who you're talking to. Some of these are slow (redoing the layout of the whole page). Some are not too bad (e.g. in a current Firefox on Mac typical WebGl calls are about 2x slower than a corresponding GL call from a C program last I measured).

Now obviously storing data in a DOM attribute is slower than storing it in a JS property, because there is more overhead: DOM attributes can dispatch mutation notifications, can affect styling, etc. If you don't need any of those things, you might be paying some cost that you don't need to pay. This typically starts to matter once you're doing a _lot_ of attribute sets, though. A typical attribute set in a modern browser is in the <100ns range on modern laptop hardware.
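
To make the difference concrete, here are the two kinds of writes being compared (illustration only, not a benchmark; the element id is made up):

    var el = document.getElementById('counter'); // assumes such an element exists

    // DOM attribute write: can dispatch mutation notifications, match
    // attribute selectors in CSS, and so on.
    el.setAttribute('data-count', '42');

    // Plain JS property write: just a slot on the wrapper object,
    // with none of that extra machinery.
    el._count = 42;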


I'm not exactly sure what you're talking about, but mobile phones have awful bandwidth and awful browsers without enough memory to cache everything they could.

In the context of websites, you also want the experience of new users to be the best it can be.

Basically, caching is nice, but it only works well for applications that users visit often and that don't depend on links going viral, such as Gmail. If you rely on caching to provide an acceptable user experience when your traffic does come from links going viral (such as Twitter), then you're screwed. Twitter has a mobile-optimized web page that's much, much lighter than their desktop version, and they did it for a good reason ;-)


Let me rephrase then: storing state in the DOM may be OK for small pages with some insignificant JS on top to provide progressive enhancement, but from a web-app point of view it's totally unfeasible, and saving a few KB is IMHO not worth it.


Because then you can create a fallback if JavaScript isn't enabled: use a server-side language to render the page in the same state the JavaScript would have produced.
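
A small sketch of what that fallback looks like in practice (markup, ids, and class names are made up): the server renders the state into the page, and the script just toggles the same representation, so the page is already correct with JS disabled.

    // The server has already rendered, e.g.:
    //   <li class="todo done">Buy milk</li>
    // The script reads and writes that same markup instead of keeping its own model.
    var list = document.getElementById('todo-list');
    list.addEventListener('click', function (e) {
      if (e.target.tagName === 'LI') {
        e.target.classList.toggle('done');
      }
    });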


Too bad you have to pay $399 to watch the videos.

http://velocityconf.com/velocity2013/public/sv/q/479

I realize it costs money to run a conference but that seems excessive.



Awesome, thanks.


This deck is really hard to flip through on an iPhone. Nothing a little JavaScript couldn't fix.


I've often thought touchscreen devices would be complemented well by a set of arrow keys. (On a desktop, you can flip through the slides with the left and right arrow keys.)


Airbnb's Rendr seemed to be a nice solution to this problem

https://github.com/airbnb/rendr


JavaScript itself has warts, but they are easily overcome. I've found that the biggest hurdle for me, someone who's written native desktop applications, is the libraries. I still don't understand the obsession with MV* "design patterns" in GUI code.

It's made worse by the fact that the model we have to base our GUIs on is built around hierarchical documents.


Those bloated GUI libraries are part of the problem, in my opinion.

HTML is a great document definition language in the right hands but like JavaScript, its poor use can create a nightmare. You can fix this by learning to write clear and simple semantic HTML but if you have to work on legacy code that doesn't really help.


Though I would like to see the talk in its fullness, and I am a full-on Javascript fanboy, I have to agree with the core thesis. In 99% of cases, there's no reason to do all rendering client-side, especially on content-centric sites. And devs do sometimes get too dependent on kitchen-sink libraries, rather than rolling their own native JS to address their own specific need.


On many sites, the (cached) load of 500k of gzipped JS is dwarfed by the amount of HTML, CSS, and images loaded.


I'm really perplexed by one of the central complaints in this slide deck. JS load time is just not a serious problem in my experience. The use of CDNs and the browser cache has made loading scripts almost irrelevant in terms of page performance. It's a problem for first-time visitors with a cold cache, and then it amounts to an additional 1-2 seconds of load time once and only once, on their first visit. I really don't get bent out of shape by that. I'd be worried if that 1-2 second overhead were incurred on every single page load for every single user; that's just not what happens, though.


I see one problem:

- An entire generation of programmers may have no knowledge of how the computer works


That's not a problem; it's an opportunity for those who do, and there will always be those who do.


No?

If that were to happen, it would be a problem with the programmers more than anyone else.


With JavaScript, there is no need to know how the computer works, so why would anyone learn (deeply) about it?


You act as though Javascript will take over the programming world -- I can confidently say that it won't


Mind you, it will... (and I'm not pro-JavaScript).


What I don't get is the sites that load ALL their content using JavaScript. Sure, I see the point in loading more content if the user scrolls down to the bottom of the page, but why would you do that straight away?


Funny thing is, I use JavaScript libs like Ext JS because they don't impose HTML/CSS on me.

I can write the whole application in JavaScript and only have to mess with the DOM and its ugly friends when in trouble.


Doesn't matter; they use it under the hood. And the fact that you have little control over how Ext JS actually uses the DOM will not make your app faster...


That's a two-edged sword.

If I rely on an API which abstracts everything, I benefit from every performance improvement they make "under the hood".

If they are better with performance than me, I win. If I'm better with performance than them, I lose.


The reality is... JavaScript is here to stay. So stop your complaining; that's such a waste of time. Rather than complain, champion better solutions. Be bold and advance a new and more efficient agenda.


Just because it's badly abused doesn't mean it's bad per se. Let's say you build your SOA: JSON API + mobile app. At that point, if you plan to build a web app to browse your content, it's very hard to render pages server-side without code duplication/collision with the API. If you go with the full JS MVC approach, you can reuse your API and just worry about the templating and event binding. Maybe this approach doesn't yet scale to the size of Twitter, but I hope the web moves towards fat-client optimization.
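
A stripped-down sketch of that approach (the endpoint, field names, and element id are made up): the browser client calls the same JSON API the mobile app uses and only handles templating and event binding.

    // Fetch from the shared JSON API, then render on the client.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/posts'); // hypothetical endpoint
    xhr.onload = function () {
      var posts = JSON.parse(xhr.responseText);
      var html = posts.map(function (p) {
        // Real code should escape p.title / p.summary before inserting them.
        return '<article><h2>' + p.title + '</h2><p>' + p.summary + '</p></article>';
      }).join('');
      document.getElementById('content').innerHTML = html;
    };
    xhr.send();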


A lot of JS usage in the web browser is as a tool to manipulate the DOM. A bigger part of the problem, I think, is the DOM API. Another big part is, of course, the inconsistencies between browsers. Because of those two problems, our client-side code becomes complex. Replacing JS with "$LANGUAGE_X" wouldn't help much. If the DOM API were much better and web browsers behaved the same way, things would be very different.
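
A trivial example of what that means in practice: the same one-liner a library gives you, spelled out against the raw DOM API.

    // Raw DOM API:
    var note = document.createElement('div');
    note.className = 'alert';
    note.appendChild(document.createTextNode('Saved.'));
    document.body.appendChild(note);

    // With jQuery loaded, the equivalent is a single call:
    // $('<div class="alert">Saved.</div>').appendTo('body');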


There is a very important reason client-side JS apps are appearing: managing stateful apps over a stateless protocol means spending about half of your time and complexity budget dealing with state transfer. JS clients bring us back to the days of true application programming. They use REST and HATEOAS in a very natural way that most app servers have not. Maybe it's time to let the server side handle what it's good at?


If I had time to get something up and running where the HTML could be rendered on both client and server using the same code base in the early stages of a project, then I would.

It all depends on the project of course but currently here is the order in which I tend to do things:

1. Write an API

2. Write web/mobile apps to consume it

3. Optimize by pre-rendering the HTML when needed, if the project becomes popular enough or if it really needs to be crawled.


I think the summary is more helpful than looking at the slides. Does anyone have a link to the video of this talk?


I love how the tech community promotes something until it's overused, and only AFTER everything breaks do we THINK about how/when we should use a technology...


I don't always read Hackernews, but when I do I read it with this - http://nojs.herokuapp.com


It's flat design, or should I say no design. Prettier than HN.


Did anyone else think this was pedantic?


Very nice, but unfortunately it seems we lose a lot of information from just reading the slides...


This slideshow lost all credibility when it said to put your analytics in the <head> tags. Sure, some analytics providers might recommend you do that, but it is wrong and only serves to:

A) Slow your site down
B) Introduce a single point of failure into your webpage

All scripts should be loaded either async or at the end of the DOM.
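
For reference, the pattern most analytics vendors actually recommend is an asynchronous loader, which can sit anywhere in the page without blocking parsing or becoming a single point of failure (the tracker URL below is a placeholder):

    (function () {
      var tracker = document.createElement('script');
      tracker.async = true; // never blocks HTML parsing; a slow host can't hang the page
      tracker.src = 'https://stats.example.com/tracker.js'; // placeholder URL
      var s = document.getElementsByTagName('script')[0];
      s.parentNode.insertBefore(tracker, s);
    }());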


If you load the analytics script anywhere other than the head, you potentially miss user input while the page is loading. For a slow-loading page, this could make you lose out on useful data. I realize that this argument is somewhat circular.


You should avoid including script tags in the head because they introduce single points of failure in your app. Losing a few ms of 'analytics' time is better than providing your users with a broken site because your analytics scripts (or any other hosted JS files) didn't load and waited 5-15s to time out (depending on the browser).

Steve Souders gives a great talk on this: http://www.stevesouders.com/blog/2010/06/01/frontend-spof/


Ironically, the slides are hosted on a JS implementation of presentation software.


Who's the superman?


If you use it the right way, there are no problems.


Amusingly, this article was submitted two weeks ago and got no comments.

https://news.ycombinator.com/item?id=5973914


Bad timing?


Store state in the DOM? Really?


For the sort of small-scale interactions Zakas was talking about, that's a perfectly reasonable thing to do.
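
For instance (ids and attribute names made up), the kind of small-scale state in question is something like a collapsible panel whose open/closed flag simply lives on the element:

    var panel = document.getElementById('details-panel');
    var toggle = document.getElementById('details-toggle');

    toggle.addEventListener('click', function () {
      // The state is read from and written back to the DOM itself.
      var open = panel.getAttribute('data-open') === 'true';
      panel.setAttribute('data-open', String(!open));
      panel.style.display = open ? 'none' : 'block';
    });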


Bird gotta fly, fish gotta swim, hater gotta hate. I love JS!


OK, who clicked through all 84 slides?


[deleted]


This was clearly a talk you were supposed to watch, not attempt to read the slide deck through later. Many of my talks are like that, and to avoid people complaining online "your slides suck" even though the people at the talk usually were in agreement that "that was an amazing talk", I always go out of my way to use presentation technologies that are as random and esoteric as possible to make certain that when I am asked "can we have a copy of your slides" the answer is "I guess, but it will be really hard and of no use to you".


Martin Fowler refuses to publish the slides of his talks because they only make sense in the context of his talks. Seems to be a good decision.


Nicholas Zakas is a famous JS dev who worked at Yahoo. And the slides have nothing to do with a rant against JS, if that's why you hate them. That's the context.


To say it in the words of @fat: hi haters https://pbs.twimg.com/media/A_G3NghCIAEAGbM.jpg:large



