I wish Google would stay on board with JS, since they have the engineering power to do a lot in this area, but it seems to me that after V8 they've kind of abandoned JS in favor of Dart (as opposed to supporting asm.js). For instance, I just hit "try anyway" in Chrome 26.0 on Linux and everything crashed. Did anyone get it working in Chrome?
Chromium has been a little dodgy for me lately too. Lots of memory leaks and whatnot. I've had to kill its parent process on a number of occasions (but even then, there's no reason a browser should take out the whole OS).
Basically, each font in each size uses some handles in a renderer. At 10000 handles, the tab renderer dies. They have a font cache but never clear it.
When enough render processes together use too many GDI handles, the whole Windows desktop breaks down.
I've never once had an application on Linux crash the whole OS, or at least not as long as I've had physical access to the machine (I've had rogue database requests brick a server before because they took down sshd, but that's a different story).
In fact, on the laptop I'm on now, the parent link crashed Firefox, but it OOM'ed and got killed before I even noticed there was a problem (and that's on a beefy desktop environment with compositing enabled, too).
You would think, but I've noticed that on HN specifically, the comment pointing out that a stupid argument is forming (if it gets in early enough) tends to be both highly upvoted and the end of the argument. One reason I like this community.
The only amazing stuff is the distribution mechanism really. Other stuff is just plain old OpenGL but in your browser this time. It is not like it has some crazy support for raytraced voxels or something.
> The only amazing stuff is the distribution mechanism really.
That's the main thing, but not the only thing.
WebGL is also very portable, more than other flavors of GL. Huge effort has gone into that. As a result there is a much better chance a WebGL app will give the same output on different browsers/OSes/GPUs (and if it does not, that's a bug that should be fixed).
WebGL is also more secure than OpenGL, since it was designed to run in a web browser, which executes untrusted code from the web.
I personally love that OpenGL is 'plain old' haha. Not to criticise your comment at all - but as someone who's been forced to work with OpenGL 3.3-4 quite a lot recently ... it's pretty incredible at times.
This is so awesome that I never want to see another WebGL demo again. This one proves it; you can make awesome games in WebGL. From now on I only want to read about non-demo WebGL games that are in development with a real release date.
Unlike the current situation with piracy, where pirated copies are there for all to see and grab, often mere hours before the official release.
Piracy, while a legitimate problem, is a relatively minor one. The first problem is visibility, the second is quality. Getting to the point where someone bothers to pirate your stuff, and where the majority of people who download it actually install it and play through it, is a mark of success: it means you're doing something right.
And with a sizable proportion of the gaming community in their 30s, you'd be surprised how many people want to support the studios. We learned our lessons with Shadow of the Colossus, Psychonauts, and the fate of Black Isle.
I hate how DRM is being thrown around as a "turn-key" solution against piracy, because it's simply wrong. What's stopping anyone from selling copies of heavily DRM'ed - but cracked - games right now? Nothing.
Actually, WebGL games would be a lot more secure from piracy, because they would be online, and you could easily stop 99% of the piracy by requiring a login for the game. It's basically like the Diablo 3 model, only better. Because Diablo 3 should be easier to crack and play on private servers (not sure if even that has happened yet).
Making an "online game" instead of a native "PC game" is the best way to stop most of the piracy.
In addition to that, just cracking the Diablo 3 client isn't enough, because a lot of the game logic runs exclusively on the server. For example, even with tens of thousands of test runs against a specific monster, you still won't know the correct item-drop probability table, because that table is never sent to the client; only the result of the server-side dice roll, checked against the server-side loot table, is.
Moot point. Any client software can and will be cracked, regardless of how difficult it is to crack. Which is mtgx's point - online games are easier to secure because you can put logic and validation on a remote server. No DRM needed. Your example of Diablo 3 only proves his point. It's not the DRM that makes Diablo 3 hard to crack - it's the server side logic and validation.
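To illustrate the point about server-side logic, here's a hypothetical sketch (invented names and numbers, not Blizzard's actual code) of a server-authoritative loot roll: the drop table never leaves the server, so no amount of client inspection reveals the odds.

```javascript
// Hypothetical server-side loot roll: LOOT_TABLE lives only on the
// server, so a cracked client still can't learn the probabilities.
var LOOT_TABLE = [
  { item: "gold",      weight: 70 },
  { item: "sword",     weight: 25 },
  { item: "legendary", weight: 5 }
];

function rollLoot(rng) {  // rng: () => number in [0, 1); injectable for tests
  var total = 0, i;
  for (i = 0; i < LOOT_TABLE.length; i++) total += LOOT_TABLE[i].weight;
  var roll = rng() * total;
  for (i = 0; i < LOOT_TABLE.length; i++) {
    roll -= LOOT_TABLE[i].weight;
    if (roll < 0) return LOOT_TABLE[i].item;  // the client only ever sees this
  }
  return LOOT_TABLE[LOOT_TABLE.length - 1].item;
}
```

The client receives just the resulting item name; the weights stay server-side.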
[some negative rant below to offset the hype - sorry]
> "This is Unreal Engine"
- a subset of it
> "with physics"
- did not notice any falling kickable boxes and such
> "cloth simulation, particles, light & glare effects"
- impressive, but there will be half as much of it as via native code
- what's the difference between asking users to download one or two specific browsers and making a build for each with a Flash player bootstrapped?
> "Just yesterday you couldn't draw a circle on a 2d canvas at 30fps"
- so instead of pushing for a universal VM, they decided to use a dynamic, prototype-based OO language just because it happened to be the most common one; not very impressive.
And it only works in Firefox, and even then only very well in Firefox Nightly.
Those points aside, though, this is pretty amazing. I fully expect multiple engines to target HTML5 in the same way Unity/Unreal/etc were cross-compiling for the Flash runtime. It's just not quite there yet...
(And it's good that demos like this exist to put pressure on browsers to fully support them).
As Brendan said, it's UE3, all of it. See the video at http://www.youtube.com/watch?v=BV32Cs_CMqo (second half of video) for the Sanctuary UT3 map, complete with bots, shooting, etc. Once the engine was ported, we threw random UE maps at it, and it worked fine.
Edit: UE3 was chosen because it's known, stable, optimized tech. This isn't the pinnacle of what can be done, far from it. But neither we (Mozilla) nor Epic wanted to be working with code still under active development for a next-generation engine while simultaneously trying to port it to a new experimental platform. One step at a time! :p
It's really not. It's the UE3 mobile engine, which, while compiled from the same C++ source, has enough stripped out during compilation that it can't really be considered 'full UE3'; custom shaders, for instance, are not supported.
And on top of missing some features from UE3, Citadel is designed specifically for limited devices.
While still a very impressive feat, let's not confuse this with running GoW or The Samaritan demo in the browser.
As mentioned in other comments, we also ran other UE3 games, like Sanctuary. We demoed that in a booth at GDC last month where people could play it. That's a full UE3 desktop game with bots, AI, normal FPS mouse control (not tablet-like), etc. etc. You can see it in action in the 2nd half of this video
Ran this on a laptop on Firefox 20 (stable) with Bumblebee and it's surprisingly impressive and smooth.
I have a gaming desktop at home that runs all new games at 50+ FPS, but for some reason looking at graphics that a late-era PS2/early PS3 game would have _in a browser_ is more impressive than running a game like Crysis 3 for the first time.
My theory is that Mozilla and Google are in a race to develop the most advanced HTML5 engines because they know that if a killer HTML5 game or app gets developed for their browser, everyone who wants to use it will switch to them while the other implementations try to catch up.
It will also prevent new browsers from popping up, unless they simply fork one of the two major engines and try to keep up. Google and Mozilla know they can outpace any competitor trying to innovate in the web space by having the "latest" HTML5 features integrated. For example, unless Microsoft pours a bunch of money into it, or Internet Explorer forks Mozilla's or Google's engine, they can pretty much count that competitor out.
Is this supporting an open web?
What's the problem with focusing on open source plugin virtual machines that can work in every browser regardless of that browser's version?
> How does having a separate implementation of a specification help us build a safer, more open web?
This ran surprisingly well on my 3+ year old Asus laptop with integrated graphics. By surprisingly well, I mean I doubt I ever got over 30 fps (probably 15 fps on average) but there were little to no hitches when loading new areas.
I wouldn't be able to play a fast paced shooter like this, but I could see it being more than tolerable for playing a slow paced role playing game or something in the browser, with the added ease of connecting with other players, possibly MMORPG-style.
On a side note, it'd be nice to make it capture the mouse instead of having to drag it.
Demo itself ran halfway ok, clicking on 'Benchmark' (which I had to guess was in the upper pulldown) immediately froze everything for me (Firefox on Kubuntu).
Still: I do appreciate a demo that even kinda-sorta works on gnu/linux.
I've found it works almost flawlessly on Arch Linux 64-bit with the standard Firefox package. Only one minor issue with the flag in the wind going all over the place, but that's probably the fault of the open-source Radeon driver.
It is very refreshing to see something cutting edge work on linux.
Some stats from the benchmark running Windows 8 and Ubuntu 13.04 on Asus Zenbook UX21A (Intel HD Graphics 4000, Intel Core i7 3517U, 1920x1080):
OS            Browser     Avg. FPS  Min FPS
Win8          Fx 20       26        16
Win8          Fx Nightly  46        34
Ubuntu 13.04  Fx 20       17        16
For me those numbers say that if asm.js doesn't catch on in other browsers, browser-based games just won't fly. Also, graphics drivers on Linux still suck, and that's why I still need to dual-boot.
Definitely awesome, still a while before this is usable in the market (across many browsers) and load times seem longer than Unity/Flash. I can't wait until browser support of emscripten/asm.js is better.
On Firefox 20 on Windows, on my 2500K with a 7970, it seems CPU-bound and only achieves 43 fps in the benchmark at 1920x1200, while showing some clear artifacts from missing or insufficient texture anisotropy on the floor, along with antialiasing issues.
Also, the textures are not fully detailed when looking directly at the floor and walls.
If this is supposed to be a full-quality benchmark (à la Unigine Heaven when it came out), it needs to improve.
It is faster with asm.js optimizations, but as other comments mention, it also runs well (depending on CPU/GPU) even without such optimizations, in browsers that have no special asm.js optimizations whatsoever.
Even when it is treated in a special way, it still uses the same parser and same backend and optimizations (IonMonkey) as the Firefox JS engine uses for all JS.
> We don't call everything that runs on .NET C# nor everything that runs on the JVM Java.
Note that asm.js-like code is nothing new. It's been generated for years now by compilers like Emscripten and Mandreel, and Firefox and Chrome (and likely others) have been optimizing for it, for example Google added a Mandreel benchmark to Octane. (The only thing new with asm.js is that there is a formal typesystem which makes it simple to make sure you emit proper code, and simple to verify you are receiving proper code; also, while developing the type system some bugs in how emscripten generates code were found and resolved.)
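To make the "formal typesystem" concrete, here is a minimal illustrative sketch (not code from the demo) of what asm.js-style code looks like: the `"use asm"` pragma plus coercions like `x|0` (int) and `+x` (double) are what the validator checks, and as plain JS it runs in any engine.

```javascript
// Minimal asm.js-style module sketch. In a validating engine this can be
// ahead-of-time compiled; everywhere else it just runs as ordinary JS.
function MiniAsm(stdlib) {
  "use asm";
  var imul = stdlib.Math.imul;   // import from the standard library object
  function square(x) {
    x = x | 0;                   // parameter declared as int via coercion
    return imul(x, x) | 0;       // result coerced back to int
  }
  return { square: square };
}

var mod = MiniAsm(globalThis);   // "linking" is just a function call
console.log(mod.square(7));      // 49
```

The same source is valid input for both an asm.js validator and a plain JS engine, which is the whole trick.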
"It is faster with asm.js optimizations, but as other comments mention, it also runs well (depending on CPU/GPU) even without such optimizations, in browsers that have no special asm.js optimizations whatsoever."
Actually, that's not true at all. That's the plan for asm.js, but this demo doesn't run in anything other than very new Firefox builds, AFAICT. It crashes chrome, IE doesn't support WebGL (yet).
To correct your analogy: it'd be like using a subset of C# as the bytecode rather than a language. It's still actual C#, it's not just .NET bytecode, but it's being used differently.
I have a lot of respect for Egorov, and I agree with his views on this matter almost entirely.
The sole benefit of embedding a bytecode in a language is that you get the side effect that it can run anywhere that language runs. But in general, this implies performance penalties, especially when the language is relatively slow to begin with.
But in this case, the demo is made usable (i.e., the impressive part) by writing a compiler specifically for the bytecode itself. So it works well in spite of the fact that it's a bytecode embedded in JS, not because of it. The really cool things about this demo are precisely these two:
1. asm.js makes running C++ code that would render to OpenGL in the browser a possibility, and with relatively good performance. Kudos to Mozilla.
2. It runs on browsers that don't know about asm.js, but it's effectively an emulated machine, and it's slow.
Many of those benchmarks are very fast in browsers without special asm.js optimizations.
All they need to do to be fast on asm.js code is to optimize typed array operations and basic math, and those are things browsers have been doing for a long time. Google even added a Mandreel benchmark to Octane for this reason.
Emscripten and Mandreel output, with or without asm.js, tends to be quite fast, generally faster than handwritten code. asm.js is often faster than that, because it's easier to optimize, even without special optimizations for it. Those special optimizations can help even more, but they are not necessary for it to run, nor are things "slow/emulated" without them.
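For reference, a simplified sketch of the typed-array "heap" pattern such compilers emit (real Emscripten output adds multiple views and alignment handling): all of the compiled program's memory is one ArrayBuffer, and pointer arithmetic becomes shifted indices into views over it, which is exactly the kind of operation JITs have long optimized.

```javascript
// Simplified sketch of compiled-C++ memory in JS: one ArrayBuffer is the
// whole address space, accessed through typed-array views.
var buffer = new ArrayBuffer(64 * 1024);
var HEAP32 = new Int32Array(buffer);

function store32(ptr, value) {
  HEAP32[ptr >> 2] = value | 0;  // ptr is a byte address; >>2 indexes 4-byte ints
}
function load32(ptr) {
  return HEAP32[ptr >> 2] | 0;
}

store32(16, 1234);
console.log(load32(16)); // 1234
```

Because every access is a plain integer load or store on a typed array, an engine doesn't need asm.js-specific machinery to run this fast.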
That link is misleading in the context you present it. Those are microbenchmarks, which a JIT can optimize relatively well. But if you look at the very next slide at http://kripken.github.io/mloc_emscripten_talk/#/28 you will see that for a larger application, non-optimized asm.js performs abysmally, as does JS in general. A performance penalty of ~1000% of native is what you should expect for a nontrivial JS application, and asm.js does not do much (if anything) to alleviate that unless you run odinmonkey.
I was purposefully linking to that slide + the one after it.
Yes, there is a factor of around 4x slowdown between asm.js optimizations and without them, on the next slide. But even 4x slower than asm.js is quite fast, it's enough to run the Epic Citadel demo for example (try it on a version of firefox without those optimizations, like stable release - as others reported in comments, it runs ok). A lot of the work in Epic Citadel is on the GPU anyhow, say it's about half, so the difference is then only something like 2x.
2x is not that much, we have similar differences on the web anyhow because of CPU differences, JIT differences, etc. That's within the expected range of variance.
It's faster than "handwritten JS" because it's using a subset of JS that's already tuned to near the peak level of performance that you can achieve in JITting JS, which is tightly looped arithmetic with everything inlined working on SMIs. In V8, that's ~2.5x slower than the equivalent C code with optimizations. That's basically best case.
The problem is that it's still dynamic, and at any point a null can come along and force you to throw away a JITted function, so you have to have these checks everywhere just in case. Further, ECMA compliance does not require JIT compilation, which means to expect performance, especially on the web, is dubious at best. I can see merit to expecting performance metrics in something like Node.JS since it assumes V8, but the web does not have 1 JS engine, and the standard does not require such performance metrics.
The "benefits" of building a bytecode that's represented in some subset of JS are essentially red herrings, since they're effectively lies, and the downsides are a lot more significant, IMO. Instead, we should be focusing on actually building a bytecode for the web, such that all implementations are expected to meet certain performance metrics.
>One concern for this type of product is that there's no concept of "installing" a webapp. The website will need to stream all game resources to the client for each user. I would wager that's one reason the graphics are poor: all of what you see was probably generated from <100MB of content.
As the demo says, it's recommended to try it in Firefox Nightly. The demo starts up and runs much faster there thanks to optimizations that are not yet in the stable release (which is what you're running, I think).