I wish Google would stay on board with JS since they have the engineering power to do a lot in this area, but to me it seems that after V8 they've kind of abandoned JS in favor of Dart (as opposed to supporting asm.js). For instance, I just did the "try anyway" in Chrome 26.0 on Linux and everything crashed. Did anyone get it working in Chrome?
- Chrome currently crashes, but the issue is expected to be resolved by the Chrome team soon.
Crashed my whole system
Chromium 25.0.1364.160 Ubuntu 12.04
Hardest crash I've experienced on this new setup.
Basically, each font in each size uses some handles in a renderer. At 10000 handles, the tab renderer dies. They have a font cache but never clear it.
When enough render processes together use too many GDI handles, the whole Windows desktop breaks down.
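If I'm reading the bug right, a minimal repro would be something along these lines (purely a hypothetical sketch - I'm assuming canvas text at distinct sizes goes through the same per-size font cache):

    // Hypothetical repro: draw text at thousands of distinct font sizes.
    // If the renderer caches one GDI font object per (font, size) pair and
    // never evicts, handle usage grows until the per-process limit is hit.
    var canvas = document.createElement('canvas');
    var ctx = canvas.getContext('2d');

    for (var size = 1; size < 12000; size += 0.5) {
      ctx.font = size.toFixed(1) + 'px Arial'; // a new (font, size) combination each time
      ctx.fillText('x', 0, 16);                // forces the renderer to realize the font
    }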
I've never once had an application in Linux crash the whole OS - or at least not so long as I've had physical access to the machine (I've had rogue database requests brick a server before because they took down sshd - but that's a different story)
In fact on the laptop I'm on now, the parent link crashed Firefox. But it OOM'ed and got killed before I even noticed there was a problem (and that's on a beefy desktop environment with compositing enabled too)
But to play devil's advocate here, both parents of your comment are not that far off.
Ubuntu is more prone to crashes because it invites the user to install much more closed-source and proprietary code, by design. So they're not being that idiotic or flamebait-y.
Where did you get that idea? Just look at these graphs:
Apparently, they are still trying to make V8 faster. V8's graph isn't flat and there are also very recent bumps.
>As opposed to supporting asm.js
For example: https://gist.github.com/calvinmetcalf/5473022
That's the main thing, but not the only thing.
WebGL is also very portable, more than other flavors of GL. Huge effort has gone into that. As a result there is a much better chance a WebGL app will give the same output on different browsers/OSes/GPUs (and if it does not, that's a bug that should be fixed).
WebGL is also more secure than OpenGL, since it was designed to run in a web browser, which executes untrusted code from the web.
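As a rough illustration of the portability point, even just getting a context has needed a fallback across browsers; something like this is the usual feature-detect (a generic sketch, not this demo's actual code):

    // Sketch: create a WebGL context the portable way. Some browsers only
    // exposed the prefixed "experimental-webgl" context name.
    function getWebGLContext(canvas) {
      var gl = canvas.getContext('webgl') ||
               canvas.getContext('experimental-webgl');
      if (!gl) {
        throw new Error('WebGL not supported on this browser/GPU combination');
      }
      return gl;
    }

    var gl = getWebGLContext(document.createElement('canvas'));
    console.log(gl.getParameter(gl.VERSION)); // e.g. "WebGL 1.0 (OpenGL ES 2.0 ...)"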
Haven't they announced that they have no plans to integrate the Dart VM with the Blink rendering engine?
You make it sound like they're the only ones doing this kind of thing. Anybody remember this?
I'm not a fan of DRM, but there's nothing preventing anyone from downloading the game and selling/giving away copies with ease.
Actually, WebGL games would be a lot more secure from piracy, because they would be online, and you could easily stop 99% of the piracy by requiring a login for the game. It's basically like the Diablo 3 model, only better. Because Diablo 3 should be easier to crack and play on private servers (not sure if even that has happened yet).
Making an "online game" instead of a native "PC game" is the best way to stop most of the piracy.
In addition to that, just cracking the Diablo 3 client isn't enough, because a lot of the game logic runs exclusively on the server. For example, even with tens of thousands of test runs against a specific monster, you still won't know the correct item drop probability table, because that table is never sent to the client - only the result of the server-side dice roll, checked against the server-side loot table.
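Roughly, the drop logic looks something like this server-side (a made-up sketch, obviously not Blizzard's actual code):

    // Hypothetical server-side loot roll: the table never leaves the server,
    // so the client can only ever observe individual roll results.
    var LOOT_TABLE = [                       // server-only data
      { item: 'legendary_sword', chance: 0.001 },
      { item: 'rare_ring',       chance: 0.02  },
      { item: 'gold_pile',       chance: 0.5   }
    ];

    function rollLoot() {
      var drops = [];
      for (var i = 0; i < LOOT_TABLE.length; i++) {
        if (Math.random() < LOOT_TABLE[i].chance) {
          drops.push(LOOT_TABLE[i].item);
        }
      }
      return drops;                          // only this result is sent to the client
    }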
Moot point. Any client software can and will be cracked, regardless of how difficult it is to crack. Which is mtgx's point - online games are easier to secure because you can put logic and validation on a remote server. No DRM needed. Your example of Diablo 3 only proves his point. It's not the DRM that makes Diablo 3 hard to crack - it's the server side logic and validation.
Piracy, while a legitimate problem, is a relatively minor one. The bigger problems are, first, visibility, and second, quality. Getting to the point where someone bothers to pirate your stuff, and the majority of people that download it install it and play through it, is a mark of success - of doing something right.
And with a sizable proportion of the gaming community in their 30s, you'd be surprised how many people have a desire to support the studios. We learned our lessons with Shadow of the Colossus, Psychonauts and the fate of Black Isle.
On the other hand, the VM cool kids of today might have an issue.
Comparatively, this runs a little sluggishly on a MacBookPro8,2 (early 2011).
> "This is Unreal Engine"
- a subset of it
> "with physics"
- did not notice any falling kickable boxes and such
> "cloth simulation, particles, light & glare effects"
- impressive, but you'll get half as much of that as you would via native code
- what's the difference between downloading one or two specific browsers and making a build of each with a Flash player bootstrapped?
> "Just yesterday you couldn't draw a circle on a 2d canvas at 30fps"
- so instead of pushing to make a universal VM, they decided to use a dynamic prototype-OOP language just because it happened to be the most common one - not very impressive.
Firefox has a separate ahead-of-time compiler for asm.js that isn't a JS VM.
And it only works in Firefox, and even then it only runs really well in Firefox Nightly.
Those points aside, though, this is pretty amazing. I fully expect multiple engines to target HTML5 in the same way Unity/Unreal/etc were cross-compiling for the Flash runtime. It's just not quite there yet...
(And it's good that demos like this exist to put pressure on browsers to fully support them).
Edit: UE3 was chosen because it's known stable and optimized tech. This isn't the pinnacle of what can be done, far from it. But neither us (Mozilla) nor Epic wanted to be working with code that was still under active development for a next generation engine while simultaneously trying to port it to a new experimental platform. One step at a time! :p
And on top of missing some things from UE3, Citadel is designed specifically for limited devices.
While still a very impressive feat, let's not confuse this with running GoW or The Samaritan demo in the browser.
And it's using the exact same code as this demo. You literally use the same compiled JS, and just swap in a different set of game assets.
Thanks! (also thanks for JS, it is neat!)
I have a gaming desktop at home that runs all new games at 50+ FPS, but for some reason looking at graphics that a late-era PS2/early PS3 game would have _in a browser_ is more impressive than running a game like Crysis 3 for the first time.
My theory is that Mozilla and Google are in a race to develop the most advanced HTML5 engines because they know that if a killer HTML5 game or app gets developed in their browser, everyone who wants to use the app will switch to them while the other implementations try to catch up.
It will also prevent new browsers from trying to pop up, unless they simply fork and try to keep up with one of the two major engines. Google and Mozilla know they can outpace any competitors that are trying to innovate in the web space by having the "latest" HTML5 features integrated. For example, unless Microsoft pours a bunch of money into it, or Internet Explorer forks Mozilla's or Google's engine, they can pretty much count that competitor out.
Is this supporting an open web?
What's the problem with focusing on open source plugin virtual machines that can work in every browser regardless of that browser's version?
> How does having a separate implementation of a specification help us build a safer, more open web?
> Is this supporting an open web?
Exactly, an open source standardized VM would be ideal, then you would have a choice of languages as well.
I wouldn't be able to play a fast paced shooter like this, but I could see it being more than tolerable for playing a slow paced role playing game or something in the browser, with the added ease of connecting with other players, possibly MMORPG-style.
On a side note, it'd be nice to make it capture the mouse instead of having to drag it.
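The Pointer Lock API should cover that; something along these lines (sketch - turnCamera is just a made-up hook for whatever the game's look code is):

    // Capture the mouse on click and use relative movement deltas,
    // instead of requiring the player to drag.
    var canvas = document.getElementById('canvas'); // assuming the demo's canvas id

    canvas.addEventListener('click', function () {
      canvas.requestPointerLock();                  // older builds need vendor prefixes
    });

    document.addEventListener('mousemove', function (e) {
      if (document.pointerLockElement === canvas) {
        // movementX/movementY are relative deltas while the pointer is locked
        turnCamera(e.movementX, e.movementY);       // hypothetical game hook
      }
    });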
The demo itself ran halfway OK, but clicking on 'Benchmark' (which I had to guess was in the upper pulldown) immediately froze everything for me (Firefox on Kubuntu).
Still: I do appreciate a demo that even kinda-sorta works on gnu/linux.
It is actually both downloading and compiling during that time, and probably spending far more time in the download.
It would be nicer if the text said "retrieving code" as opposed to "compiling code" at that point.
edit: looks like the demo site was just updated, the message is better now :)
It is very refreshing to see something cutting edge work on linux.
I'm using Firefox 20 on Ubuntu 12.04/amd64 and everything works flawlessly.
As for the demo, I'm stunned. It's incredible.
Still, it's pretty amazing.
OS Browser Avg. FPS Min FPS
Win8 Fx 20 26 16
Win8 Fx Nightly 46 34
Ubuntu 13.04 Fx 20 17 16
Also, the textures are not fully detailed when looking directly at the floor and walls.
If this is supposed to be a full quality benchmark (a la Unigine Heaven when it came out), it needs to improve.
However, it still has poor graphics, with low quality textures, low/nonexistent anisotropic filtering and no AA or bad AA.
Also just noticed that stuff like bracers that should be rounded (using GL 4 hardware tessellation) is not and has visible polygonal edges.
Not really a great showing as a demo.
Is anybody able to get it running on android? The nightly has both asm and webgl but is showing up as unsupported (and not just doing UA sniffing).
There are some known bugs that cause us to use way too much transient memory while loading. Fixes incoming.
edit: although there is no path finding
It is faster with asm.js optimizations, but as other comments mention, it also runs well (depending on CPU/GPU) even without such optimizations, in browsers that have no special asm.js optimizations whatsoever.
Even when it is treated in a special way, it still uses the same parser and same backend and optimizations (IonMonkey) as the Firefox JS engine uses for all JS.
> We don't call everything that runs on .NET C# nor everything that runs on the JVM Java.
Note that asm.js-like code is nothing new. It's been generated for years now by compilers like Emscripten and Mandreel, and Firefox and Chrome (and likely others) have been optimizing for it, for example Google added a Mandreel benchmark to Octane. (The only thing new with asm.js is that there is a formal typesystem which makes it simple to make sure you emit proper code, and simple to verify you are receiving proper code; also, while developing the type system some bugs in how emscripten generates code were found and resolved.)
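For reference, "proper code" here just means modules of this shape, where the type annotations (|0 for int, + for double) are exactly what the validator checks (a minimal hand-written example, not Emscripten output):

    function SumModule(stdlib, foreign, heap) {
      "use asm";                      // opts in to validation in engines that support it
      var HEAP32 = new stdlib.Int32Array(heap);

      function sum(n) {
        n = n | 0;                    // parameter declared as int
        var i = 0;
        var total = 0;
        for (i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
          total = (total + (HEAP32[(i << 2) >> 2] | 0)) | 0;
        }
        return total | 0;             // return type declared as int
      }

      return { sum: sum };
    }

In a browser with no asm.js support this is still just a plain JS function and runs as such; in one that validates it, it can be compiled ahead of time.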
Actually, that's not true at all. That's the plan for asm.js, but this demo doesn't run in anything other than very new Firefox builds, AFAICT. It crashes Chrome, and IE doesn't support WebGL (yet).
It works for example in Firefox 20, the current stable release, which has no special asm.js optimizations.
To correct your analogy: it'd be like using a subset of C# as the bytecode rather than a language. It's still actual C#, it's not just .NET bytecode, but it's being used differently.
I have a lot of respect for Egorov, and I agree with his views on this matter pretty much entirely.
The sole benefit of embedding a bytecode in a language is that you get the side effect that it can run anywhere that language runs. But in general, this implies performance penalties, especially when the language is relatively slow to begin with.
But in this case, this demo is made usable (ie: the impressive part) by writing a compiler specifically for the bytecode itself. So it works well in spite of the fact that it's a bytecode embedded in JS, not because of it. The really cool things about this demo are precisely 2 things:
1. asm.js makes running C++ code that would render to OpenGL in the browser a possibility, and with relatively good performance. Kudos to Mozilla.
2. It runs on browsers that don't know about asm.js, but it's effectively an emulated machine, and it's slow.
> 2. It runs on browsers that don't know about asm.js, but it's effectively an emulated machine, and it's slow.
Do you have benchmark numbers to support that? In my experience, asm.js code is quite fast even without special asm.js optimizations. It depends on the benchmark obviously, but look at
Many of those benchmarks are very fast in browsers without special asm.js optimizations.
All they need to do to be fast on asm.js code is to optimize typed array operations and basic math, and those are things browsers have been doing for a long time. Google even added a Mandreel benchmark to Octane for this reason.
Emscripten and Mandreel output, with or without asm.js, tends to be quite fast, generally faster than handwritten code. asm.js is often faster than that, because it's easier to optimize, even without special optimizations for it. Those special optimizations can help even more, but they are not necessary for it to run, nor are things "slow/emulated" without them.
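To make the "generally faster than handwritten code" point concrete, the difference is roughly between these two styles (a toy example of my own, not taken from any benchmark):

    // Handwritten-style JS: objects, property lookups, allocation per call.
    function lengthsIdiomatic(points) {
      return points.map(function (p) {
        return Math.sqrt(p.x * p.x + p.y * p.y);
      });
    }

    // Emscripten/Mandreel-style JS: a flat typed-array "heap" and plain math.
    // This is the shape of code engines have been optimizing for years,
    // asm.js or not.
    function lengthsCompiled(heapF64, count) {
      var out = new Float64Array(count);
      for (var i = 0; i < count; i++) {
        var x = heapF64[2 * i];
        var y = heapF64[2 * i + 1];
        out[i] = Math.sqrt(x * x + y * y);
      }
      return out;
    }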
Yes, the next slide shows a factor of around 4x slowdown between running with asm.js optimizations and without them. But even 4x slower than asm.js is quite fast; it's enough to run the Epic Citadel demo, for example (try it on a version of Firefox without those optimizations, like the stable release - as others reported in the comments, it runs OK). A lot of the work in Epic Citadel is on the GPU anyhow, say it's about half, so the difference is then only something like 2x.
2x is not that much, we have similar differences on the web anyhow because of CPU differences, JIT differences, etc. That's within the expected range of variance.
Also, asm.js code is faster even without special optimizations compared to handwritten JS (see for example http://blog.j15r.com/blog/2011/12/15/Box2D_as_a_Measure_of_R... and http://blog.j15r.com/blog/2013/04/25/Box2d_Revisited ). So even without those optimizations it is worthwhile.
The problem is that it's still dynamic, and at any point a null can come along and force you to throw away a JITted function, so you have to have these checks everywhere just in case. Further, ECMA compliance does not require JIT compilation, which means that expecting performance, especially on the web, is dubious at best. I can see merit to expecting performance metrics in something like Node.js, since it assumes V8, but the web does not have one JS engine, and the standard does not require such performance metrics.
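To illustrate the kind of check I mean (toy example; the exact heuristics vary by engine):

    function area(rect) {
      return rect.w * rect.h;
    }

    // After thousands of calls with objects of the same shape, the JIT emits
    // fast code specialized for that shape...
    for (var i = 0; i < 100000; i++) {
      area({ w: i, h: 2 });
    }

    // ...but nothing in the language stops this call, so the optimized code
    // either guards every access or gets thrown away (deoptimized) here.
    try { area(null); } catch (e) { /* TypeError - the fast path can't assume it away */ }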
The "benefits" of building a bytecode in that's represented in some subset of JS are essentially red herrings since they're effectively lies and the downsides are a lot more significant, IMO. Instead, we should be focusing on actually building a bytecode for the web such that all implementations are expected to have certain performance metrics.
You only need to stream it to the user once.
And running a site on a given domain isn't hard.
Unless you're streaming one single massive multi-gigabyte blob, instead of different asset packs, there's no reason they wouldn't be possible.
The main difference is the optimizations currently in Firefox Nightly. They'll be in a stable release in just a few months or less.