
Near-native performance is a very good thing, especially in fields such as game development.

However, it's a pity that Mozilla doesn't address the main problem with the web platform: the lack of a good platform and set of APIs, something like what Java or .NET provides. There are a large number of good JavaScript applications (Google Docs, GMail, etc.), but they were produced through heroic efforts by the teams that created them. The same effort could have produced hundreds of other useful applications if JavaScript were replaced by something more adequate.




Look around you at the Web: all the top sites use JS heavily and without "heroic efforts" compared to Java (dead on the client) or .NET (WPF is dead too).

If the old big-OO-frameworks really conferred such huge fitness advantages over JS, you would not see them dead on the client. They would have been used, and their plugins would have been supported better by the plugin vendors.

JS these days has a lot going for it, including IntelliJ-style IDEs (Cloud 9 offers one) and not-too-OO-or-huge frameworks.

/be


>Look around you at the Web: all the top sites use JS heavily and without "heroic efforts" compared to Java (dead on the client) or .NET (WPF is dead too).

Java isn't used in the browser because of its awful user experience and bad security (I personally have disabled Java in my browser). JavaScript applications simply look much better and behave much more smoothly, which matters more on the web than ease of development or performance. Web applications are much more readily adopted than desktop applications, and that's why we have to build them; it's simple economic incentive. If we create a desktop application, the user base we can sell to will be much smaller than if the same application were provided on the web. I'd love to use technologies that make me more productive than JavaScript, like JavaFX and Silverlight, or at least Flash. However, a large percentage of users have browsers that don't support them, and even where supported they provide an inferior user experience. I (and many other people) use JavaScript not because it's great but because it's the only platform that is universally supported by browsers and provides a superior user experience. That's the only reason to use it.

> without "heroic efforts" compared to Java (dead on the client) or .NET (WPF is dead too).

Most of the top sites are quite trivial in code complexity; they are more complex in design and UI experience than in code. There are, of course, complex web applications, like Google Docs, Cloud9, GMail, and Google Reader, but they were created with definitely heroic efforts, and they don't approach the complexity of the top desktop applications. Where's the web-based Mathematica, 3DS Max, AutoCAD, or IntelliJ? When will web-based office applications have the performance of MS Office?

As an indicator of how complex these web applications are, look at how few web frameworks provide non-trivial collections like HashSet, HashMap, TreeMap, etc. Only the following frameworks support them: Closure Tools, GWT, and Dart. Most of the popular JS frameworks used by the top sites don't include them.

>JS these days has a lot going for it, including IntelliJ-style IDEs (Cloud 9 offers one) and not-too-OO-or-huge frameworks.

I actually work at JetBrains (the creator of IntelliJ), and I can say that Cloud9 provides an IDE experience from the '90s. The only meaningful features it supports are code completion and error highlighting. It has no refactorings, no find-usages, and none of many other smart features. I think JavaScript is to blame: I feel that features that complex are near impossible to implement for a language such as JavaScript (because of its lack of a static type system).


Quick reply (thanks for the well-formatted cited text!).

* Java didn't have the bad security rep until relatively recently. Java had nice-looking UX in the 90s (Netscape bought Netcode on this basis), much nicer than Web content. Didn't help.

* Web != Desktop. Large desktop apps are the wrong paradigm on the web. You won't see a Web-based Mathematica rewritten by hand in HTML/JS/etc. You will see Emscripten-compiled 3DS Max (see my blog on OTOY for more). The reasons behind these outcomes should be clear. They have little to do with JS lacking Java's big-OO features.

* Large mutable-state collection libraries are an anti-pattern. Functional structures, when hashes and arrays do not suffice (and even there), are the future, for scaling and parallel hardware wins.
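The contrast with mutable collection libraries can be made concrete. A minimal sketch (hypothetical `cart` state, not from any particular framework) of the functional style: updates return fresh frozen values instead of mutating in place, so old snapshots stay valid for concurrent readers.

```javascript
// Hypothetical application state, frozen so it cannot be mutated.
const cart = Object.freeze({ items: Object.freeze(["book"]), total: 10 });

function addItem(state, item, price) {
  // Build a new structure rather than mutating the old one;
  // previous snapshots remain valid (useful for undo, workers, etc.).
  return Object.freeze({
    items: Object.freeze([...state.items, item]),
    total: state.total + price,
  });
}

const cart2 = addItem(cart, "pen", 2);
console.log(cart.items.length, cart2.items.length); // old state unchanged
```

Plain objects and arrays plus `Object.freeze` get you most of the way here; no HashMap/TreeMap library is involved.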

* Conway's Law still applies. Too often, bloated OO code is an artifact of the organization(s) that produced it. This applies even to open source (Mozilla's Gecko C++ code; we fight it all the time, including via JS). It definitely applies to Google (e.g., gmail, Dart at launch). Perhaps there's no other way to create such code, and we need such programs as constituted. I question both assumptions.

* Glad you brought up refactoring. It is doable in JS IDEs with modern, aggressive static analysis. See not only TypeScript but also Marijn Haverbeke's Tern and work by Ben Livshits, et al., at MSR. But automated refactoring is not as much in demand among Web developers I know, who do it by hand and who in general avoid the big-OO "Kingdom of Nouns" approach that motivates auto-refactoring.

In sum, if the web ever becomes big-OO as Java and .NET fans might like, I fear it will die the same death those platforms have on the client side. Another example: AS3 in Flash, also moribund. These systems (even ignoring single-vendor conflicts) were too static.

The Web is not the desktop. Client JS-based code can be fatter or thinner as needed, but it is not as constrained as in static languages and their runtimes. Distribution, mobility, full-stack/end-to-end (Node.js) options, offline operation, multi-party and after-the-fact add-on and mash-up architectures, social and commercial benefits of the Web (not just of the Internet) -- all these change the game from the old desktop paradigm.

JS has co-evolved with the Web, while the big-OO systems have not. This might still end up in a bad place, but so far I don't see it. JS can be evolved far more easily than it can be replaced.

/be


>* Web != Desktop. Large desktop apps are the wrong paradigm on the web. You won't see a Web-based Mathematica rewritten by hand in HTML/JS/etc. You will see Emscripten-compiled 3DS Max (see my blog on OTOY for more). The reasons behind these outcomes should be clear. They have little to do with JS lacking Java's big-OO features.

I'm actually not defending big-OO features (I think '90s-style big OO is obsolete). I like a mix of OO and functional programming and the results it confers on code (see, for example, Reactive Extensions: it's easy to learn, expressive, and compact). The feature I miss in JavaScript, and which platforms such as the JVM and .NET have, is ease of maintaining code, mainly through a sound type system and languages designed with tooling in mind.

>* Glad you brought up refactoring. It is doable in JS IDEs with modern, aggressive static analysis. See not only TypeScript but also Marijn Haverbeke's Tern and work by Ben Livshits, et al., at MSR.

The problem with algorithms similar to Tern's is that they work well only until we use the reflective capabilities of the language. However, most libraries do use them, and as soon as that happens, algorithms such as Tern's infer the useless type Object.
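A small illustration of the kind of reflective code meant here (the `extend` helper is hypothetical, but many libraries ship something like it): property names exist only at runtime, so a purely static analyzer cannot say what the result contains.

```javascript
// A tiny "mixin" helper of the sort many JS libraries use (hypothetical name).
function extend(target, source) {
  for (const key in source) {
    target[key] = source[key]; // property names are only known at runtime
  }
  return target;
}

const widget = extend({}, { render: () => "<div/>", visible: true });
// A static analyzer sees extend's result as a bag of unknown properties,
// so widget.render would be typed as a useless Object without runtime data.
console.log(widget.render());
```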

>But automated refactoring is not as much in demand among Web developers I know, who do it by hand and who in general avoid the big-OO "Kingdom of Nouns" approach that motivates auto-refactoring.

There are refactorings that are useful in any language. My favorite is rename; I usually can't come up with a good name on the first attempt. Others are extract/inline method (extract/inline variable is easy to implement for JavaScript).
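For concreteness, a before/after sketch of extract method in plain JS (invented example, not from any codebase): the transformation is mechanical, which is exactly why tool support matters, but it's also what devs do by hand.

```javascript
// Before: the total is computed inline.
function invoiceBefore(items) {
  let total = 0;
  for (const it of items) total += it.price * it.qty;
  return { items, total };
}

// After "extract method": the loop becomes a named, reusable function.
function sumTotal(items) {
  let total = 0;
  for (const it of items) total += it.price * it.qty;
  return total;
}

function invoiceAfter(items) {
  return { items, total: sumTotal(items) };
}
```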

Another maintainability-related feature is navigation: go-to-definition and find-usages. Unfortunately, the language's dynamism makes them imprecise, and code maintenance becomes a nightmare, especially once you have more than 30 KLOC. You have to recheck everything manually, which is very error-prone. Tests can help, but they also require substantial effort.


Tern's static analysis is based loosely on SpiderMonkey's type inference, which does well with most JS libraries.

Yes, some overloaded octopus methods fall back on Object. What helps the SpiderMonkey type-inference-driven JIT is online live-data profiling, as Marijn notes. This may be the crucial difference.

However, new algorithms such as CFA2 promise more precision even without runtime feedback.

And I suggest you are missing the bigger picture: TypeScript, Dart, et al., require (unsound) type annotations, a tax on all programmers, in hope of gaining better tooling of the kind you work on.

Is this a good trade? Users will vote with their fingers provided the tools show up. In big orgs (Google, where Closure is still used to preprocess JS) they may, but in general, no.

Renaming is just not high-frequency enough, from what I hear, to motivate JS devs to swallow type annotations.

/be


>And I suggest you are missing the bigger picture: TypeScript, Dart, et al., require (unsound) type annotations, a tax on all programmers, in hope of gaining better tooling of the kind you work on.

In many cases types can be inferred. ML is able to infer almost all types in a program (though the algorithm requires that the language lack subtyping). Haskell has very good type inference despite a richer type system (you declare very few types). Both have strong static type systems and don't tax developers by requiring every type to be declared. The inference algorithms Haskell uses are complicated, but they can be implemented.


I know about ML and Haskell but let's be realistic. Neither is anywhere near ready to embed in a browser or mix into a future version of JS.

We worked in the context of ES4 on gradual typing -- not just inference (as you imply, H-M is fragile) -- to cope with the dynamic code loading inherent in the client side of the Web. Gradual typing is a research program, nowhere near ready for prime time.

Unsound systems such as TypeScript and Dart are good for warnings but nothing is guaranteed at runtime.

A more modular approach such as Typed Racket could work, but again: Research, and TR requires modules and contracts of a Scheme-ish kind. JS is just getting modules in ES6.

Anyway, your point of reference was more practical systems such as Java and .NET but these do require too much annotation, even with 'var' in C#. Or so JS developers tell me.

/be


"JS can be evolved far more easily than it can be replaced." - this sums up everything :)


> HashSets, HashMaps, TreeMaps etc.

As a note, the built-in JS object type is a hash map; although it has the annoying property of coercing all keys to strings, it still suffices for most uses of maps and sets.
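A quick sketch of both the idiom and its string-key wrinkle:

```javascript
const m = {};        // plain object used as a map
m[1] = "one";        // the key is coerced to the string "1"
console.log(m["1"]); // "one" -- numeric and string keys collide

// A set of strings is just an object with truthy values:
const seen = {};
seen["alice"] = true;
console.log("alice" in seen); // true
```

The key coercion (and collisions with inherited names like `"toString"`) is what collection libraries and the ES6 built-ins work around.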


Indeed. See also JSON.

ES6 brings Map, Set, WeakMap, and WeakSet. First three are already prototyped in Firefox and (under a flag) Chrome.
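Where already prototyped, the new built-ins remove the string-coercion limitation entirely; a minimal sketch:

```javascript
const m = new Map();
const key = { id: 1 };    // any value can be a key, not just strings
m.set(key, "payload");
m.set(1, "number one");
m.set("1", "string one"); // no coercion: 1 and "1" stay distinct

console.log(m.get(key));  // "payload"
console.log(m.size);      // 3

const s = new Set(["a", "a", "b"]);
console.log(s.size);      // 2 -- duplicates collapse
```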

/be



