
Before speculating too much about "a bytecode standard", etc., it would probably be helpful to understand virtual machines and instruction sets.

I know web programmers generally aren't big on assembly or writing virtual machines, but the design and implementation of an instruction set (the collection of bytecodes) predisposes a processor/VM to certain operations. A VM for an OO language is going to have bytecodes (and other infrastructure) for doing fast method lookup, because performance will really hurt otherwise. A functional or logic language VM will probably have tail-call optimization, perhaps opcodes specific to pattern matching / unification, and a different style of garbage collector.
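To make "bytecodes for fast method lookup" concrete, here's a toy sketch (entirely invented, not any real VM's format) of a dispatch loop where a dedicated CALL_METHOD opcode carries a monomorphic inline cache:

```javascript
// Invented opcode; the point is that a dedicated "call method"
// instruction gives the VM somewhere to hang a lookup cache.
const CALL_METHOD = 0;

function run(code, receiver) {
  let result;
  for (const ins of code) {
    switch (ins.op) {
      case CALL_METHOD:
        // Fast path: reuse the cached lookup if the receiver's
        // "shape" (approximated here by its constructor) matches.
        if (ins.cacheClass === receiver.constructor) {
          result = ins.cacheFn.call(receiver);
        } else {
          const fn = receiver[ins.name];   // slow path: full lookup
          ins.cacheClass = receiver.constructor;
          ins.cacheFn = fn;
          result = fn.call(receiver);
        }
        break;
    }
  }
  return result;
}

class Point { norm() { return 5; } }
const code = [{ op: CALL_METHOD, name: "norm" }];
run(code, new Point()); // first call takes the slow path, fills the cache
run(code, new Point()); // second call hits the fast path
```

Because the lookup lives in a dedicated opcode, the cache has an obvious home; with only generic "load property" primitives, the VM has to work much harder to specialize the same pattern.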

Compare the JVM to the Lua or Erlang virtual machines; think about the issues people run into when trying to port languages that aren't like Java to the JVM. Unless people are very deliberate about making a really general instruction set, a "bytecode standard" informed by Javascript could be similarly awkward for languages that aren't just incremental improvements on Javascript. Besides, you can't optimize for everything.

There are a LOT of details I'm glossing over (e.g. sandboxing/security concerns, RISC vs. CISC, the DOM), but I've been meaning to point this out since I read someone saying, "Why do people keep writing more VMs? Why don't we just use the JVM for everything and move on?" It's not that easy.

Another issue is that when you create a virtual machine you basically tie down your potential for optimization to what the opcodes can do, instead of optimizing for the language itself. This is a major problem with Java.

For example, Smalltalk has an advantage over Java in that VM designers can change the opcodes as needed to get better performance. That's not possible with Java, because the JVM spec fixes the bytecode set.

The future of the language may be compromised by the decisions you make in the VM. For example, think of the problems Java has had in fully supporting 64-bit software -- most of them rooted in decisions made when 32-bit processors were the norm.

All true. But I think the potential benefits are real, and I don't necessarily think it's a bad thing if the JS VM were specialized for JS. Standardizing the bytecodes could allow a looser coupling between browsers and JS. It could also allow people to play with JS optimizations and augmentations without having to touch the VM internals themselves.

All of this makes me think that if I was a VM researcher, I'd seriously consider going in this direction. And not being a VM researcher makes me think maybe I should be one.

I agree with you. Typical web devs probably don't have enough context about virtual machine implementation to understand the trade-offs, though.

I think having a bytecode standard is totally possible. WebKit's JS engine, Nitro/SquirrelFish, already has bytecode. The spec is here: http://webkit.org/specs/squirrelfish-bytecode.html

Nitro follows a lot of LuaJIT's methods and techniques, some of which are outlined here by Mike Pall: http://article.gmane.org/gmane.comp.lang.lua.general/58908 Some were adopted by Nitro: http://webkit.org/blog/189/announcing-squirrelfish/ Nitro does js-code -> bytecode -> JIT optimization.

However, the more useful question is how long it takes to convert js-code -> bytecode and how much that boosts overall performance. It seems to be very small compared to execution times. https://lists.webkit.org/pipermail/squirrelfish-dev/2009-May...

By contrast, V8 compiles directly to machine code, while still following JVM/JIT techniques. More details from Lars Bak here: http://channel9.msdn.com/Shows/Going+Deep/Expert-to-Expert-E... and here: http://www.youtube.com/watch?v=hWhMKalEicY V8 also does some less traditional things like snapshotting, hidden classes, etc., which give incremental performance boosts. If there were a bytecode standard, some V8 techniques might not be applicable, but it should be possible to maintain the performance boost.
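As a rough illustration of the hidden-classes point (the function names are mine, and this is a heuristic sketch, not a description of V8 internals): objects built with the same property order can share one hidden class, so property access stays monomorphic and cacheable.

```javascript
// Both functions build { x, y } points, but a hidden-class VM
// treats them very differently (illustrative only; names invented).

function makePointFast(x, y) {
  // Same property order on every call, so every instance can share
  // one hidden class and property loads can be cached by offset.
  return { x: x, y: y };
}

function makePointSlow(x, y, flip) {
  // Property order depends on a runtime flag, so instances end up
  // with different hidden classes and call sites go polymorphic.
  const p = {};
  if (flip) { p.y = y; p.x = x; }
  else      { p.x = x; p.y = y; }
  return p;
}
```

Both produce semantically identical objects; only the shape history differs, which is exactly the kind of optimization a bytecode standard would neither mandate nor forbid.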

I would love to see bytecode become a standard so that people could stick to whatever language they're comfortable with. For instance, I like both JS and Python, but if there were an option, I would stick to Python for all my needs.

Sure, so it'd be hard to come up with a bytecode standard that worked really well for anything you might want to run on it - but you could probably come up with a reasonable bytecode / VM that would work reasonably well for most things.

For example, imagine that there was no threat of being sued by Oracle, etc. You could just use the JVM - that already has lots of things that compile to it which work reasonably well. I'm not arguing that we should use the JVM, but only that something like the JVM seems to work reasonably well.

Anyway - having a way that's OK or reasonable to run (say) Ruby in the browser is a lot better than the current situation where there is no such way (without using proprietary stuff).

It probably has less to do with a bytecode 'standard' than with a set of standard libraries for doing things outside of the browser (file manipulation, etc.). Browsers could simply not support those libraries, while standalone JavaScript VM developers could. That way, 'the future of JavaScript' wouldn't be tied down to a specific bytecode implementation, and people could pick and choose the VM they want to run their JS in, based on what sort of optimizations they need.

One of the largest requirements here would probably be a method of linking against/using C libraries, and also a standard for 'import/#include/etc' statements.

Maybe I'm being naive here though. Feel free to correct me.

http://code.google.com/p/nativeclient/ may answer your need. Hopefully someday something like this will be available cross-browser...

I'm not talking about pulling C libraries into the browser. I'm talking about expanding the use of the JavaScript language outside of the browser (beyond even something like Node.js).

Unless I'm completely misunderstanding NativeClient, that's not what it's about.

Don't both merge at some point?

Right, it's the same reason we still have ARM despite the success of x86, or why some hardware vendors bundle FPGAs rather than rely on GPUs or custom ASICs. The instruction set is absolutely vital for performance in a given domain.

One way to avoid the problem is to make the bytecodes low level enough. You don't add bytecodes for method lookup or unification. You add bytecodes on top of which these can be implemented efficiently.

The downside is that if the bytecodes are low-level, the chances of different languages interoperating easily are small. Take x86 assembly: Python doesn't automatically interoperate with Ruby. But take MSIL, and they do far more easily.
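The parent's "low level enough" idea can be sketched like this (opcode names invented): instead of a high-level CALL_METHOD instruction, the VM exposes primitives like GET_PROP and CALL, and each language's compiler builds its own lookup convention on top of them.

```javascript
// A minimal stack machine with low-level primitives only
// (invented opcodes, not a real ISA).
const PUSH = 0, GET_PROP = 1, CALL = 2;

function run(code, stack) {
  for (const ins of code) {
    switch (ins.op) {
      case PUSH:
        stack.push(ins.value);
        break;
      case GET_PROP: {
        const obj = stack.pop();
        stack.push(obj[ins.name]);
        break;
      }
      case CALL: {
        const fn = stack.pop();
        const self = stack.pop();
        stack.push(fn.call(self));
        break;
      }
    }
  }
  return stack.pop();
}

// "obj.greet()" compiled to low-level form: push obj twice (once as
// the receiver, once as the lookup target), then GET_PROP + CALL.
const obj = { greet() { return "hi"; } };
const code = [
  { op: PUSH, value: obj },
  { op: PUSH, value: obj },
  { op: GET_PROP, name: "greet" },
  { op: CALL },
];
run(code, []); // → "hi"
```

Method dispatch here is just a compilation pattern rather than a VM feature, which is what makes the instruction set language-neutral -- and also what makes cross-language calling conventions a matter of mere convention.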
