Even with a case of Red Bull, a bottle of Adderall, and a heart full of courage, I doubt most of us could come close to that level of productivity. We'd get bogged down in the details. I know I would.
Moreover, there were no social media sites like Twitter, Facebook, etc., at that time :)
But arguably (Doug Crockford may have argued this) the whole process required JS to make more out of fewer, stronger primitives (first-class functions, prototypes). I know I didn't have time for much else, as I said at the ICFP 2005 keynote.
As I told Peter Seibel in "Coders at Work", besides lack of time, I couldn't add anything like (Pythonic, dynamic) classes. That would have encroached on Batman-Java; can't have JS-Robin-the-boy-hostage getting too big for the Netscape/Sun-1995-era-batcave.
An analogous situation is your average first-year PhD candidate. Initially, making some sort of contribution to the field feels overwhelming and almost impossible. But once you've spent a year or two reading papers and having coffee with the leaders in the field, everything comes together. That same PhD student starts to churn out quality papers every 6 months or so.
I think back to what it was like watching my dad program in Scheme when I was in high school. I got the same curious and overwhelming feeling then as I do now when working on certain areas of distributed systems. There's no reason to believe that the barriers that grownups face are any less insurmountable than the ones children do. :)
Brendan's house and my nightclub thank us for selling out early and often.
Once you have bit operators, you can do bit arithmetic. So yes, it's a bit slower than with pure integer types, but so what? The JS engines were something like 100 times slower until recently, and nobody really, really cared until Google did V8.
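To make the point concrete: JS's bitwise operators coerce their operands to 32-bit integers, so int32 arithmetic (including wraparound) is already expressible in the all-doubles language, at the cost of a conversion on each operation:

```javascript
// Bitwise operators coerce operands to int32 (via ToInt32),
// so integer semantics are available even though every JS
// number is nominally an IEEE-754 double.
var x = 2147483647 | 0;   // already in int32 range: 2147483647
var y = (x + 1) | 0;      // wraps around to -2147483648

// Fractions are truncated toward zero by the coercion:
var t = 3.7 | 0;          // 3
var u = -3.7 | 0;         // -3
```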
I really think Brendan was right in his decision and I don't get what jwz would like, except to maybe have explicit types in the language, like, hm, JScript.NET or ActionScript.
But even there as far as I know ints are 32 bits. 64-bit OS use is still not so common, so at the time the languages were introduced it was the best engineering decision.
jwz misses this time.
Tamarin and SquirrelFish were benchmark-battling before Chrome was publicly released. I think the current focus on performance was sparked by the CSS selector engine battle between the major JS frameworks.
It's still a question of engineering trade-offs, certainly not as clear-cut as he presents it. Given the growing importance of small devices, the best direction is probably the one that most often uses less battery power, no matter what a programmer's ideals may be.
bignums (among alternative solutions) fix this bug.
bignums also are not necessarily a whole lot slower than doubles, since you can optimize to fixnums for common cases. JITs can do pretty well. It's not as if double is so fast, even with SSE4, that int doesn't still win.
But the main bug to fix is the rounding or powers-of-five inexpressiveness issue. It's a real usability problem.
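The usability problem is easy to demonstrate. This is standard IEEE-754 behavior, not anything implementation-specific: 1/10 and 1/5 have a factor of 5 in the denominator, and binary floating point can only represent fractions whose denominators are powers of two.

```javascript
// 0.1 and 0.2 have no exact binary representation, so their
// sum is not exactly 0.3.
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// The usual workaround is rounding at display time:
console.log((0.1 + 0.2).toFixed(2));  // "0.30"
```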
I'm biased, as I actively use floating-point calculations, and I don't know of a convenient representation that would be as fast as the one directly supported by hardware -- you can really get one FP addition per processor cycle(!) on modern x86 processors, potentially more on arrays. That is really as fast as ints (not counting the additional adders and shifters that exist for address calculations and can be used in parallel). But mobile CPUs should be considered too in something as widely used as JS. Anyway, from my perspective, fast indexable arrays are missing from a lot of modern languages, and I think AS3 did something about that.
And I believe bignums are lists of integers, so still not enough for .1 + .2; I believe for that we'd need rationals or decimal FP?
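The rational idea can be sketched in a few lines of plain JS. This is a toy (small integers only, no bignum backing, so it overflows where a real ratnum library would not), but it shows why rationals handle the .1 + .2 case exactly:

```javascript
// Exact rational arithmetic: represent numbers as integer
// numerator/denominator pairs and reduce by the GCD.
function gcd(a, b) { return b === 0 ? a : gcd(b, a % b); }

function ratAdd(a, b) {  // a, b are {n: numerator, d: denominator}
  var n = a.n * b.d + b.n * a.d;
  var d = a.d * b.d;
  var g = gcd(Math.abs(n), d);
  return { n: n / g, d: d / g };
}

// 1/10 + 2/10 is exactly 3/10 -- no rounding anywhere.
var sum = ratAdd({ n: 1, d: 10 }, { n: 2, d: 10 });
```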
Mobile is much worse. We disable SoftFP in SpiderMonkey by requiring modern-enough ARM, but Adobe can't in Tamarin (we both use the Nanojit back end). Really slow. Even real FP is not nearly the same as on SSE4.x.
Indeed bignum is a big-int format, but since there's no finite precision limit, you can do as someone in jwz's blog comments suggested, and use milli-cents or whatever for currencies, and never suffer rounding. But you do have to scale.
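The scaling idea can be sketched with ordinary JS numbers (a toy, exact only while values stay within the 2^53 integer range a double can hold; a real bignum would remove even that limit):

```javascript
// Represent currency in integer milli-cents: $1.00 = 100000.
function toMilliCents(dollars, cents) {
  return dollars * 100000 + cents * 1000;
}

var price = toMilliCents(19, 99);      // $19.99 -> 1999000, exactly
var tax   = Math.round(price * 0.07);  // round once, back to an integer
var total = price + tax;               // exact integer arithmetic
// total / 100000 gives dollars for display
```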
EDIT: so to be clear, I agree that bignums don't solve the .1 + .2 problem. IBM favored IEEE754r to handle that case, but no one could agree on the exact integration or worthiness of that finite-precision decimal format.
Sam Tobin-Hochstadt's "fast, precise, rational: pick two" conclusion (cited by me in jwz's LJ in reply to someone pushing ratnums as all three) still holds. I think a case can be made for bignums on this two-out-of-three basis, but you'd still want double and you might even want decimal.
This need for several numeric types led us to work on value types, so library authors can extend the language with new numeric types (including operator and literal support), and the TC39 committee is not the bottleneck and the one-size-fits-nothing-well decider.
In particular, "long double" on x86 can represent 64 bit integers without loss as well as being floating-point (64 bit mantissa, 15 bit exponent, 1 bit sign). Downside is that it's quite large: 16 bytes.
(GWT emulates longs.)
Better would be ... Moon then struck the student 100000000000000000-10-1000000000000000000+10 times. The student was enlightened
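The arithmetic behind the koan is easy to reproduce: once values exceed 2^53, adjacent doubles are more than 1 apart, so subtracting a small number can silently round away (the exact result depends on the rounding, so only the non-zero outcome is asserted here):

```javascript
// Above 2^53, doubles can no longer represent every integer,
// so algebraically-zero expressions stop being zero.
var big = 100000000000000000;    // 1e17, well above 2^53
var r = (big - 10) - big + 10;   // algebraically 0
console.log(r !== 0);            // true: the -10 was rounded
```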
There was some disagreement in the replies to my comment, but the general consensus seemed to be that JS is very suitable for use as an object code.
But this article makes me wonder. Increasingly, we're compiling HLLs into an object code that only kinda-sorta has integer arithmetic. Is this a good idea? I'm dubious, to say the least. Certainly this property of JS puts some nontrivial constraints on the design of an HLL that can be efficiently compiled into JS.
In any case, interesting post.
A couple side notes...
ActionScript is the inverse case, an abominable language that manages to wreak incredible damage on something pretty good (ECMAScript) in surprisingly few steps -- mainly, as far as I can tell, by trying to turn it into something "proper" a.k.a. Java, a fate that JS itself was fortunately forced to avoid -- but it provides access to one lower level than JS does, and this is a big deal for some kinds of programs. We're probably going to use Flash (when available) as a computational accelerator for this reason.
Second, our results suggest that the real gamechanger here is V8. I know that most of the benchmarks out there show Tamarin and other VMs somewhat competitive with V8. Not in our world. I'm talking a couple of orders of magnitude difference. It's astonishing. If everyone would just use Chrome, we would have no performance problems at all.
Can you please give some specific examples? I'm curious. I think you mention yourself that you can get faster execution with it, so that's the positive side. What's the damage?
Not annotating the loop control variable lets two things happen: 1) everything runs on the FPU, which with SSE has a lot of bandwidth in parallel with the integer units, which can still handle addressing and known-int chores; 2) tracing JITs can speculate, and type-inferring JITs can infer, that the loop control fits in an int, and all evaluation and storage can use the int domain.
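A sketch of the contrast in plain JS (the `|0` coercion keeps values provably in int32 range; whether any given engine exploits that as a hint is engine-specific, so treat the second form as illustrative, not as a portable optimization):

```javascript
// Untyped loop: a tracing or type-inferring JIT can observe that
// i only ever holds small integers and compile int arithmetic.
function sum(arr) {
  var total = 0;
  for (var i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
}

// Explicit |0 coercions keep every intermediate value in int32
// range, making the int-ness visible in the source itself.
function sumInt(arr) {
  var total = 0;
  for (var i = 0; (i | 0) < (arr.length | 0); i = (i + 1) | 0) {
    total = (total + (arr[i] | 0)) | 0;
  }
  return total;
}
```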
Converting from double to int32 or uint32 in JS is far from a simple truncate or round, although it entails floor. From ECMA-262 5th Edition:
9.5 ToInt32: (Signed 32 Bit Integer)
The abstract operation ToInt32 converts its argument to one of 2^32 integer values in the range −2^31 through 2^31−1, inclusive. This abstract operation functions as follows:
1. Let number be the result of calling ToNumber on the input argument.
2. If number is NaN, +0, −0, +∞, or −∞, return +0.
3. Let posInt be sign(number) * floor(abs(number)).
4. Let int32bit be posInt modulo 2^32; that is, a finite integer value k of Number type with positive sign and less than 2^32 in magnitude such that the mathematical difference of posInt and k is mathematically an integer multiple of 2^32.
5. If int32bit is greater than or equal to 2^31, return int32bit − 2^32, otherwise return int32bit.
NOTE Given the above definition of ToInt32:
• The ToInt32 abstract operation is idempotent: if applied to a result that it produced, the second application leaves that value unchanged.
• ToInt32(ToUint32(x)) is equal to ToInt32(x) for all values of x. (It is to preserve this latter property that +∞ and −∞ are mapped to +0.)
• ToInt32 maps −0 to +0.
--- end snip ---
Note that this path is rare in code that does not use shift or bitwise-logical operators or certain built-in functions. See https://bugzilla.mozilla.org/show_bug.cgi?id=597814.
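The spec steps above translate almost line-for-line into JS itself. This is a readable sketch of the abstract operation, not how engines implement it:

```javascript
// Direct transcription of ECMA-262 5th Edition, 9.5 ToInt32.
function toInt32(x) {
  var number = Number(x);                      // step 1: ToNumber
  if (isNaN(number) || number === 0 ||         // step 2 (0 covers +0/-0)
      number === Infinity || number === -Infinity) {
    return 0;
  }
  var posInt = (number < 0 ? -1 : 1) * Math.floor(Math.abs(number)); // step 3
  var int32bit = posInt % 4294967296;          // step 4: modulo 2^32...
  if (int32bit < 0) int32bit += 4294967296;    // ...but JS % keeps the sign,
                                               // and the spec's k is positive
  return int32bit >= 2147483648                // step 5
    ? int32bit - 4294967296
    : int32bit;
}

// Agrees with the built-in coercion, e.g.:
// toInt32(4294967296.7) and (4294967296.7 | 0) are both 0
```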
Even the performance is misleading. It comes from one thing, the static typing, and having played that card there doesn't seem to be anywhere else for them to go. AVM2 bytecode is very interesting; if you try to optimize it you get counterintuitive results: it either stays the same or gets slower. (The one exception is the fast memory opcodes that were added for the abortive Alchemy project, but that's another story.)
The proper point of comparison is V8. Adobe had an eternity of a head start. They just bet on the wrong horse. Had they understood the problem more deeply they could have grabbed the V8 guys before Google even thought about it. In that case their head start might have turned into total dominance of the browser runtime. The biggest advantage Flash has -- 95%+ market penetration -- is one of the most valuable assets on the internet. (Is any single asset more valuable? I mean executables, as in IE counts but google.com doesn't.) Imagine if Adobe had done V8 before Chrome existed. The few users who didn't have Flash would have had to install it just to make web apps usable. People might not have bothered for a 2x speedup, but a 1000x speedup? I think so.
Edit: to be clear, I'm not talking about (heaven forbid) Flash apps. I'm saying that Flash could have become the standard runtime for web apps. They were on the VM performance track years before anyone else. But it's easy to see that they weren't thinking about this at all, because the facilities for communicating between Flash and the browser are unbelievably poor. (They actually marshal all calls into XML messages! Grrrrrrraaaaaaagh.)
Pity, as the rest of us then can't learn anything without the examples of the errors.
> I'm not talking about (heaven forbid) Flash apps. I'm saying that Flash could have become the standard runtime for web apps.
And how were they supposed to achieve that with only Flash, which runs the way it runs? How were they to make a whole browser that would be widely accepted? What do you think they actually should have done?
What I said: make the fastest JS VM in the world years before others got started and make it cheap to call into from the browser. Then web apps could have used Flash's JS if it were available and just run more slowly if it weren't. Since V8 runs far faster than AS3 (in any measurement I've done), this was obviously technically possible. Had Adobe done this, Flash's extraordinary ubiquity would have meant we had access to something like V8 in nearly every browser out there, right down to IE6. Instead, what they produced was a dead end -- faster than the old JS implementations but not as fast as the new ones -- and a monstrosity called ExternalInterface that makes interaction between JS and AVM2 too slow for all but the most expensive computations.
> ExternalInterface that makes interaction between JS and AVM2
So how were they to have a JS VM without those overheads, without making a full browser? Note that Google doesn't manage that either; they just have the thing that inserts the whole Chrome window in the IE frame.
(The examples or specifics for each of your claims are still missing.)
The idea was for Tamarin to be as fast as V8 on untyped JS, and (via the Flash vector) distributed and integrated with all IE versions via the COM ActiveScripting interfaces to the native DOM and browser objects.
This was harder than it sounds. Tamarin was not fast on untyped code and the work to make it so never happened. We at Mozilla paid Mark Hammond to do the COM integration with IE, but he ran into global object API mismatches and couldn't make progress without help from Adobe. ES4 failed and Adobe departed.
ChromeFrame is really a super-ScreamingMonkey, but will it get distribution on the scale that Flash has? If it did, I bet authors would target it. With IE9 there's less need for it, but IE8 on Windows XP seems likely to be the "new IE6".
I suspect something different will happen from the best laid plans of Alex et al. for ChromeFrame, just as our ScreamingMonkey dreams died. Either Microsoft will manage to move people off of XP, or other browsers will convert most XP users to switch from IE8, or something I can't foresee, but not ChromeFrame ubiquity, will happen.
Google seems to be working toward making the browser into a sort of X server, and they're attacking the problem from both ends: an SDK with a community around it in a widely-used language (GWT), and a sandboxed, platform-agnostic runtime which will allow a wide variety of languages/ecosystems to run upon it. With the exception of bleeding-edge games, whatever remaining value native applications hold for 95% of users will be entirely eroded. We can also observe Google's web printing initiative as yet another means of decoupling most people's computing needs from any particular full-fledged operating system.
What I don't understand is how Google will manage to get this scheme adopted as a standard. I believe Mozilla said they aren't in favor of it. And Microsoft certainly won't play along until they risk having no browser marketshare without it. End users don't care that their computing experience involves DOM and JS -- they just care that it's secure and works well. As a developer, I'd love the freedom of building apps in a language of my choice (presuming it can compile down to LLVM and run in the browser/Native Client sandbox).
Do people on HN think I'm nuts? Is the endgame of the web a universal application "player" as Microsoft has surely feared for over a decade? What is there to lose in such a scheme? One downside for Google would be that machine-readable content is required for search. But, search is so important for app/site developers that all involved would be willing to make accommodations (see Google's published scheme for allowing GWT/AJAX-y apps to be made crawlable).
What I do not respect are blanket, unqualified, absolutist statements and fundamentalisms. I understand that you may have found yourself frustrated at points while writing Java in the past. I would like to hear about these and understand them better. But I'd also encourage you to avoid such totalizing statements, as, for a variety of reasons, many may have very good reasons for enjoying things you do not.
> but its core elegance has led to increasingly
> sophisticated and increasingly powerful uses
> of the language
> is likely to have considerably advanced.
On the language evolution front: the http://wiki.ecmascript.org/doku.php?id=harmony:proposals features are very likely to be in the next edition, by end of 2013. Some are already implemented in Firefox.
http://wiki.ecmascript.org/doku.php?id=strawman:strawman contains the full laundry list of possible additions, but among those, http://wiki.ecmascript.org/doku.php?id=strawman:simple_modul... is worth calling out. The module system is likely to be a major feature of the next version of the standard.
That was then (1999-2004). Since Firefox restarted browser competition in 2004, then with Safari, the iPhone, etc., and since 2008 with Google Chrome, which clearly provoked major work in IE9, things are moving again, and standards bodies (still dysfunctional in some hard-to-fix ways) are more balanced than ever in terms of vendor representation.
So yeah, the next ten years seem likely to be different from the last ten.
Not only due to browser competition, but also from the Bell's Law device shift to always-connected, instant-on mobile and away from desktop, indicated above via "Safari, the iPhone, etc."
Some fear this shift means non-interoperable, siloed apps and closed app-stores will dominate, but my money is still on the Web. The Web can evolve to have better apps and stores too, provided browser and Web platform markets remain competitive.
It does not fill some of the big gaps left in JS (no module system, that's coming in Harmony), but it helps and it got the standards committee back together (sans Adobe).
Uh, any web app? Gmail? Google Maps? The one I'm working on? Surely we don't need to make a list.
That's incorrect. The language didn't need to change much. The two things that have changed are so major they couldn't be majorer: (1) it took people 10 years to actually figure out what they had in JS; (2) the implementations needed to catch up. While 1 may be more or less done, 2 is still in full swing. This will enable further innovation. It's not even clear we need major changes to the language itself. I'd be much more excited if the VMs were opened up to apps.
JQuery and node.js alone are perfect examples.
It has been rare that Google has introduced some piece of technology to the world that, even if it first seemed to be some one-off oddity, was not part of some grander plan which had yet to emerge. Take GOOG411 as an example -- it was a cheap way to get mileage on voice recognition tools for Android. Take Google Maps, Android, and GOOG411 all in isolation, and the company seems a bit nutty. But once it became clear that they were all part of a strategy for location-based advertising, everything began to make sense.
Ow. Ow. Ow.
At least the bignum library will be easier to write in JS than in C...