"When I tried to think of a project suitable for learning JavaScript, a terrain flyover demo came to mind."
Me too! But then I re-establish contact with the reality of my actual abilities and create a button which, when pressed, prints "Hello, world!" to an alert window.
Haha thanks :) I've done these things in the past many times, only on different platforms. The real kudos go to the browser guys (especially at Mozilla and Google) for all the superb work they've done on WebGL and JavaScript in general. Learning JavaScript and some of the intricacies of developing for the web has never been more fun! :)
Yes, they are generated offline. It's the most time consuming step when generating the terrain.
Doing the lighting and shadow generation in real-time (during tile initialization) is a very interesting problem! The great advantage is the bandwidth reduction and the ability to move the sun of course. A Web Worker can (probably should) be used for that purpose. The problem is that in order to correctly light a tile, you need access to its adjacent tiles as well (since peaks near the border on their side may be casting shadows on our tile).
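A rough sketch of how that hand-off might look, purely for illustration (the names here -- lightingWorker, tileCache -- are made up, not from the demo):

    // main thread: ship the tile plus its neighbours to a worker,
    // since peaks across the border can cast shadows onto our tile
    var lightingWorker = new Worker('lighting-worker.js');
    lightingWorker.onmessage = function (e) {
      tileCache[e.data.tileId].lightmap = e.data.lightmap;  // apply the result
    };
    // heights is a Float32Array; transferring its buffer avoids a copy
    lightingWorker.postMessage(
      { tileId: tileId, heights: heights, neighbours: neighbourHeights },
      [heights.buffer]
    );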
All in all, very interesting and certainly doable!
Cool demo. What caught my attention was this line in the description:
>The terrain is procedural, generated offline by a Delphi program (Delphi is a modern version of Pascal).
Nowadays I very rarely see new projects use Object Pascal or Delphi. This is somewhat disappointing since I think it really is a fine language that could be a viable alternative to C++ and Java due to its combination of high performance and clean syntax [1]. But then again, C# is very popular and it is pretty much the new Delphi.
Does anyone here use Object Pascal in their current projects?
[1] At least in theory. I'm not necessarily talking about the particulars of the Borland/CodeGear/Embarcadero implementations, which have not always been perfect since Anders Hejlsberg left for Microsoft.
I used Delphi for a 16-bit Windows game ages ago. However I skipped all of the UI builder stuff (which is the primary reason you'd want to use it in the first place) and coded directly against the Windows API, with the help of some inline asm.
So why not just use C? There were good reasons. In 1992 C/C++ compilers on the PC were slow -- pre-compiled headers were still mysterious -- and Delphi's handwritten compiler and integrated linker made the edit/compile/run cycle very very fast. It made small, fast, completely stand-alone (no MSVCRT.DLL or anything) binaries -- a feature which still supposedly makes it popular with malware authors.
The UI builder was excellent and perfect for knocking out one-off tools like the terrain generator described above. As long as you stuck to Windows, of course.
I don't know if I would necessarily go back though. While I sometimes miss some of Pascal's features like ranged types and sets (and bounds checking on arrays) the performance advantages of the specific Borland implementation have become less important, and there are other contenders that cover a bigger problem space than the delta between Pascal and C.
Thank you, and I completely agree! I hold the VCL in especially high regard. As for Delphi, I think it doesn't get nearly the mindshare it deserves. As a side note, the CAD software we make at my company is written entirely in Delphi (with some bits in assembly):
I think there's a lot of low profile software out there written in Delphi which doesn't get mentioned because it's not a reason to buy, and may be a reason not to buy. There's a nice windows installer tool I occasionally use written in Delphi.
I was very impressed when I found out that Pascal supports subrange types (that is, types with custom, user-defined ranges). I had only seen them in VHDL / Ada before, where I found them to be very useful.
After using Delphi professionally for some years, I pretty much hated it. Some of my least favorite features are the automatic and locale-dependent string/WideString conversions, and the multiple memory models: interfaces (IUnknown) use reference counting, TComponent is owner-based, and lots of other classes require manual memory management.
On the other hand, the object system has some nice features. I miss AfterConstruction and BeforeDestruction in other languages, and I like virtual constructors.
I did a course on Object Pascal in 2002 (just before that college, like many others, switched to Java). A few months ago, I had a look at Ada, and found it pleasantly similar to Object Pascal. If you've played with Pascal, but not with Ada, I recommend having a look :-) E.g.:
Actually Go's use of := is almost anti-Pascal: it means something rather different, but similar enough to be confusing, and so is likely to confuse anybody who's used to the vastly more common meaning of := in Algol-family languages (Pascal, Ada, etc.).
... and then there's Go's screwed up version of Pascal-style declaration syntax...
[a link to my earlier rant on this subject: http://news.ycombinator.com/item?id=4520104 (TL;DR: Go: "var foo, bar int"; Pascal: "var foo, bar : int"; the colon vastly improves readability, and has no real drawback)]
Er, ok, it can slightly mitigate the problem in a few cases. Even to the extent that it works, though, this is a very fragile "solution" -- (1) declarations often occur alone, (2) multiple declarations can have similarly sized variable names, meaning there's no big whitespace chunk to act as a separator, and (3) one shouldn't have to run one's code through a code formatter, or use awkward formatting practices, to get basic readability...
Simply following standard practice (established over decades) and including a colon, on the other hand, would have made all declarations more readable and more familiar, at no real cost.
Really, some of Go's syntax decisions are completely baffling...
[Sure there are lots of crazy computer languages around, but these guys really should have known better—and anything they do is much more likely to have an impact than most random languages, so it'd be nice if they could take a bit more care...]
This looks great and that is a huge amount of effort. I have to ask about what I think is the elephant in the room: Surely you must have noticed that all your mountains are chopped off at the exact same altitude? Wouldn't the scene look way better if you just increased the maximum, with basically no effort?
It's as though you spent a month painstakingly mixing 64 channels of crystal clear audio, but, right at the end, threw your hands up and clipped the final mix into oblivion.
Very good question! Actually, I can't increase the maximum without going to two bytes per elevation. All of the (top) plateaus are at a height equivalent to 255.
Q: OK, why not put them a bit lower than that, with some variety between peaks?
A: I already have. If you take a closer look, you will notice that there is another layer of flat surfaces, lower than the top.
Q: I'm not convinced. Why only two layers of 'flatness', one at the top, another a bit lower?
A: In the end, it's all about the dynamic range that you have to work with. When using a single byte, there are only 256 distinct height values. The key point is to understand that these values must not differ by much (i.e. they cannot be scaled by large values), since this would affect the appearance of the rest of the terrain (think very, very sharp, unnatural triangles everywhere). On the other hand, the scale factor must be large enough to allow for distinct terrain 'features', avoiding the appearance of a deflated terrain. Two layers of flatness, safely away from each other, was the best compromise.
Q: I'm still not convinced. Just vary the top layer by a small amount between peaks.
A: Using a small value wouldn't make much of a difference. If the amount was large enough, the distinction between the two 'flatness' layers would be lost and the terrain would lose that specific character that it currently has.
Going to two bytes per elevation (and thus being able to use a small scale factor) would allow me to keep the style intact (the look of some specific geological process that has formed the terrain), while varying the peaks and keeping the rest of the terrain smooth.
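To put rough numbers on the dynamic-range point (the values below are illustrative only; the actual scale factor used in the demo isn't stated):

    // One byte per sample: 256 levels, world height = byteValue * scale.
    var scale = 8;                           // assumed, for illustration
    var step8 = 1 * scale;                   // smallest possible step: 8 units
    // Two bytes per sample: 65536 levels covering the same summit height,
    // so the scale (and hence the smallest step) shrinks dramatically:
    var scale16 = (255 * scale) / 65535;     // ~0.031
    var step16 = 1 * scale16;                // ~0.031 units between adjacent levels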
I hope this made some sense. However, you're right, it is noticeable! I just didn't think it detracted that much from the overall feel, while the single byte still had its advantages, so I went with it.
Could you add a cheap procedural function in the rendering pipeline to get more depth variety (past the 255 limit)? You might avoid adding the byte that way, though I'd guess it depends on how much of the pipeline is locked into that limit.
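Something along these lines, say -- purely illustrative, since I don't know how the demo's shaders are structured (the attribute/uniform names are made up):

    // Vertex shader: add a cheap hash-based offset wherever the height
    // sits at the 255 ceiling, so the clipped peaks get some variety.
    var peakVariationGLSL = [
      'attribute vec3 aPosition;',
      'uniform mat4 uMvp;',
      'float hash(vec2 p) {',                // cheap pseudo-random per vertex
      '  return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);',
      '}',
      'void main() {',
      '  vec3 pos = aPosition;',
      '  if (pos.y >= 254.5) { pos.y += 24.0 * hash(pos.xz); }',
      '  gl_Position = uMvp * vec4(pos, 1.0);',
      '}'
    ].join('\n');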
FYI, directly working with WebGL code is painful just because of the amount of basic infrastructure that has to be built before anything can be done. I strongly suggest using an existing framework like three.js as a starting point and going from there. Three.js is complete enough that a basic scene can be created quickly and extensible enough that complex scenes and shading can be created by extending it with custom objects.
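To give a sense of the difference, a complete three.js scene is only a handful of lines (the classic spinning-cube smoke test, using the API of the three.js versions current as I write this):

    var scene = new THREE.Scene();
    var camera = new THREE.PerspectiveCamera(
        75, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.z = 3;
    var renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);
    var cube = new THREE.Mesh(new THREE.CubeGeometry(1, 1, 1),
                              new THREE.MeshNormalMaterial());
    scene.add(cube);
    (function animate() {
      requestAnimationFrame(animate);
      cube.rotation.y += 0.01;
      renderer.render(scene, camera);
    })();

The equivalent raw WebGL version would need shader compilation, buffer setup, and matrix code before a single triangle appeared.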
For example, I was working on creating a WebGL setup for playing around with the Oculus Rift and used three.js to create http://sxp.me/rift/. It's mostly three.js code, but I added a custom camera object which handles the offscreen rendering and distortion required for the Rift. It was much faster and easier than starting from scratch which was what I originally did before giving up.
I agree! Three.js is excellent and already a de facto standard for developing in 3D. However, some points:
1. As the demo stands, it doesn't really need a 3D engine. Including one (any one) wouldn't help with anything, except for maybe skipping some (minuscule) setup code. Surprising, but true. Things definitely change as soon as you wish to render a flying ship though.
2. Learning has been the main reason for this demo. There's no better way to learn than to take apart or build something from scratch. In fact, I've built a 3D engine as well during this time (not used in this demo at all), along with a 3D model viewer and an application I hope to turn into a start-up some day. If I'd been developing a 3D game (for example), with the express purpose of releasing it as soon as possible, I'd certainly have used Three.js!
And last but not least:
3. When I decided to get into web development, about one and a half years ago, I didn't know about Three.js! :) (I found out quite soon though)
That looks really nice. A lot of WebGL demos these days don't really show the full extent of what WebGL can do, so to my eyes they're basically equivalent to something like VRML (anyone remember that?). Of course WebGL/OpenGL and VRML are totally different, but you get the idea. Yeah, I know there's stuff like Quake in WebGL, but it's not really the same thing, because Quake was designed for less powerful machines.
I'm still waiting for someone to make a fully fledged game in WebGL because I can totally see it happening.
Thank you. In fact, I may add a ship flying and shooting missiles, as soon as I get some free time. This will require the implementation of a shadow algorithm (you'll need a way to gauge the ship's height) and a particle system (for the smoke trails and explosions). I've already done that in an old desktop implementation I've kept somewhere.
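(For anyone curious, the classic cheap height gauge is a blob shadow: sample the heightmap under the ship and draw a dark decal there, so the gap between ship and shadow reads as altitude. A sketch with made-up names, not necessarily what I did:)

    var ground = terrainHeightAt(ship.x, ship.z);     // bilinear heightmap lookup
    var altitude = ship.y - ground;
    drawShadowDecal(ship.x, ground + 0.01, ship.z,    // just above the surface
                    blobRadiusFor(altitude));         // smaller/fainter when higher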
Looks nice, but runs at an awfully slow 4-5 fps on my Ubuntu 64-bit system. Is the Radeon HD 4200 too weak for WebGL, or is there something you need to configure?
Sounds about right. I get up to 11 fps on Ubuntu 64-bit on an AMD E-350/Radeon HD 6310 system, fullscreen at 1366x768, with the Catalyst drivers and the GUI hidden. A quick search for benchmarks seems to indicate that the HD 6310 is about twice as fast as the HD 4200 ( http://www.notebookcheck.com/ATI-Radeon-HD-4200.20492.0.html ). You may gain a few fps by switching to the proprietary drivers if you don't use them yet, but I wouldn't expect wonders.
[Edit:] And I see up to 13 fps if I fly away from the sun; the author seems to be right when he writes "Calculating sun visibility the geometric way (by casting rays through the terrain and the clouds) can be expensive if not done carefully."
A retina display has to render 4x the pixels of a regular display, so the slowdown isn't too surprising. Same reason why drawing operations were actually faster on the iPhone 3GS than the 4 after the latter came out.
I played with it more, and the UI windows seem to make a huge difference: with them all open it drops to around ~25-30 fps or so. I probably had only a couple open to get the original 40 fps number. With all the windows closed, the Air actually gets right up to 60 fps. I'm leaving everything at the default settings, enabling 'flying mode' and just leaving it flying straight.
I also tried it on my 'mac' desktop, which is an i7 920 @ 4.1GHz with an NVIDIA 660 Ti running OS X, also using Chrome. With all windows open it only does ~40-45 fps, but pegs at 60 fps again with them closed.
Resolution probably makes a big difference also; on both machines I have the Chrome window maximized, but the desktop has 1920 x 1200 monitors vs. the Air's 1440 x 900 screen. I'm not sure how WebGL handles retina resolutions; it's also possible the NVIDIA GPU isn't kicking in for some reason, assuming you're on the 15" rMBP.
Runs well on my machine (32-bit Ubuntu, NVidia drivers, Firefox). You shouldn't need to configure anything for it to be fast in general (if it's blacklisted, it won't run at all), but non-NVidia drivers on Linux can be slower in some cases. Do other WebGL demos run ok for you?
Gentoo 64-bit here, Radeon HD 4350 and managed to get 20-30 fps depending. (git versions of Mesa, libdrm, xf86-video-ati, and a recent kernel, all of which are probably required to get best HW rendering, unfortunately).
60 fps here, Ubuntu 64, NVIDIA GTX 660. Cool demo, thanks. Most people won't have video hardware this good, though, so I think 3D in the browser is still several years off for the everyday user.
Thank you! I didn't have time to implement image-space occlusion queries, plus I wasn't sure of the level of support they have on mobile hardware (not that this demo will run on anything mobile for the time being :)). I'm shooting a grid of rays through the terrain instead (see relevant screenshots), using various techniques to accelerate that.
The result is used for modulating the intensity of both the lens flares layer and the glare layer.
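In sketch form, the idea is simply this (the function and object names below are stand-ins for the accelerated ray caster, not the demo's actual code):

    var visible = 0;
    for (var i = 0; i < sunSamplePoints.length; i++) {    // e.g. an 8x8 grid on the sun disc
      if (!rayHitsTerrainOrClouds(cameraPos, sunSamplePoints[i])) visible++;
    }
    var sunVisibility = visible / sunSamplePoints.length; // 0..1
    lensFlareLayer.intensity = sunVisibility;             // modulate both layers
    glareLayer.intensity = sunVisibility;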
Maybe he meant videography. A game engine is more like a virtual video camera anyways (depending on who you ask...some people think it should be more like a person's eye, with no lens/camera artifacts).
Yeah. Personally, lens flare drives me nuts. It's like instagram - you can show me a perfect picture, but instead you're applying a can't-see-shit filter.
It's actually an interesting debate: is it more believable, from the perspective of immersion, that A) the player is a character actually in the world that they're navigating, or B) that they're controlling a character with a video camera?
If you frame it that way, I think B is more believable, because it accounts for the television/monitor, the controller, and the general fact that the player doesn't experience any of the physical repercussions of his/her actions. If B was what was really happening, then of course there would be camera effects, and of course I would need a monitor to view the actions from the camera, and of course I would need a controller. If A was really happening, then why would I need any of those things? So A is a bigger jump to immersion.
Lens flare in photography and videography tends to be seen as a defect and something to be avoided. That's why lenses use expensive chemistry for anti-reflective coatings.
When lens flare is used it should be a conscious choice.
Lens flare tends to be reserved for scenes in space. And most of these scenes are not real, but created in a computer, and the flare is added to create "realness". Well, that's fine. Sometimes it works, sometimes it's mocked. ('NEEDS MOAR LENS FLARE' has some useful web search results.)
But, for games, I tend to like lens flare, and it tends to help immersion. (If used carefully.) I have no idea why. It's probably a good idea to allow users to turn it on or off.
Interesting point. I would pick B, because a game's virtual camera doesn't necessarily stay in the head of your player avatar the whole game; it can shift behind, beside, or above it, as well as fly around the scene cinematically.
If someone is going to start to nitpick about the unreality of B versus A, then they ought to also address the lack of dynamic focus and changing depth of field -- things our visual system handles so automatically that we are unaware they are even happening, and that CG doesn't do.
The alternative is a proper HDR lighting system that correctly handles differently-lit scenes and exposure changes.
The Source engine does a great job with this, but it can really get in the way of gameplay. Bad "bloom" can really hurt your ability to aim at something when emerging from a dark corridor into bright daylight; depending on the game, this may be a feature -- but it is generally unfun.
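(The usual fix is to ease exposure toward the scene's average luminance over time instead of snapping it, so emerging into daylight brightens gradually. A hedged sketch, with purely illustrative numbers:)

    function updateExposure(exposure, avgLuminance, dt) {
      var target = 0.5 / Math.max(avgLuminance, 1e-4);  // key value over luminance
      var speed = 1.5;                                  // adaptation rate per second
      return exposure + (target - exposure) * Math.min(1, speed * dt);
    }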
Outstanding work.
I noticed that resizing my browser window actually changes my viewport size. I expected it to keep the same viewport resolution but shrink/stretch/distort the view as I resized the browser. I'm not sure what the implications are, but it was a fun surprise. I also noticed that the framerate scales very nicely with the changing viewport resolution as I resize the browser. Very impressive.
That was the behavior in the early development stages. I wanted to include both behaviors for educational purposes (it is very instructive to be able to observe what happens when you change the rendering window and how the frustum adapts - or not).
The scaling in frame rate has to do with the amount of terrain patches that have to be displayed. You can see the same scaling by changing the frustum viewing angle (i.e. without resizing the browser window).
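(For anyone implementing this: the usual WebGL resize handling is just a few lines. This isn't the demo's code; the sketch assumes a fullscreen canvas and glMatrix 2.x for the projection matrix:)

    window.addEventListener('resize', function () {
      canvas.width = canvas.clientWidth;     // match the drawing buffer to the CSS size
      canvas.height = canvas.clientHeight;
      gl.viewport(0, 0, canvas.width, canvas.height);
      mat4.perspective(projection, fovY, canvas.width / canvas.height, 0.1, 20000.0);
    });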
There is only one vertex buffer per patch that contains all of the vertices. Index buffers are being used, all of them pre-generated of course and uploaded to the graphics card. Rendering a patch then becomes a simple matter of selecting the right index buffer (or index buffers, since stitching requires more than one drawing call).
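In sketch form (names illustrative, not the demo's actual code):

    // One shared vertex buffer; many small, pre-generated index buffers.
    gl.bindBuffer(gl.ARRAY_BUFFER, patchVertexBuffer);
    var ib = indexBuffers[lodLevel][stitchMask];       // built once, at startup
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ib.buffer);
    gl.drawElements(gl.TRIANGLES, ib.count, gl.UNSIGNED_SHORT, 0);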
Thank you :) I'm glad that you liked the write-up as well. Yes, I've received many requests for the source code. I'll clean it up and make it public as soon as I find some time.
Woah, I just realized that you draw the entire GUI by yourself.
Don't get me wrong, that's pretty awesome in itself; I've done things like that myself. But I'd like to suggest (or perhaps inquire) why you wouldn't just overlay some HTML?
When I started development, I didn't know that DOM controls could overlap, let alone lie over a WebGL canvas :) I had zero HTML (and general web development) knowledge at the time. Since I have some experience doing this kind of stuff, instead of diving into the scary DOM details, I took the path of least resistance for me (and began implementing a GUI on a nice, clean drawing surface).
Joking aside, it didn't start as a fully-fledged UI. Some scrolling text strings to show a message log first, the ability to move them out of the way later, then implementing a proper window container for them, then introducing the concept of a 'widget', then getting really excited and thinking about a layout system to be able to present multiple widgets per window... that's the way it usually goes :) The GUI turned out a lot more demanding than the terrain engine itself, both in development time as well as rendering load.
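(For what it's worth, the DOM overlay you suggest really is just absolute positioning -- a minimal sketch:)

    // Any absolutely-positioned element stacks on top of the canvas.
    var panel = document.createElement('div');
    panel.textContent = 'Message log goes here';
    panel.style.position = 'absolute';
    panel.style.top = '10px';
    panel.style.left = '10px';
    panel.style.zIndex = '1';                // above the WebGL canvas
    document.body.appendChild(panel);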
Every time I click on a WebGL demo I think to myself... here comes my CPU fan, and my lap is about to get hot! Seriously though, this demo makes me realize how much programming knowledge I lack. It is totally fascinating. Where would be a good place to start learning this type of programming?
For learning, you can try game development sites, graphics programming books, online Geomipmapping tutorials (for terrain stuff), forum discussions when facing problems... However, the most important thing is general programming experience, I guess.
I remember though that Learning WebGL (http://learningwebgl.com/blog/) was very useful when I began learning things specifically for the web!
That's very cool. Nicely polished, that's a lot of hard work and effort. Congrats!
I can relate to your story of building the UI framework yourself. That's kinda what I ended up doing myself too, and I've learned a lot of fundamentals through that approach.
Let me take this opportunity to plug a 3D terrain I made in Flash about three years ago: http://kosmosnimki.ru/3d. It uses actual satellite images and heightmaps.
Incredibly impressive. I, like you, began my first endeavor into learning JS by building a GUI. Unlike you, my resulting product was ugly as sin. This is seriously awesome work.
You can accelerate by pressing the down arrow key and even overwhelm the tile prefetcher by pressing the 'O' key and accelerating even further (this is not a very publicized fact due to bandwidth concerns :)).
Can't say for sure, because I didn't make it all in one go. The first time I opened the browser with the intent of developing for the web was a year and a half ago. During that time, I've built some other things as well (a 3D graphics engine, a CAD-like 3D model viewer to test the engine with and an application I'd like to turn into a start-up someday :)).
If I had to guess, I'd say that the terrain took me 3 to 4 months, the GUI a bit more than that, plus two more for really polishing the demo. Hard to say though, since everything was done in parallel, with even a few dead periods in-between.
Thank you! I wish I'd been more thorough while I was developing everything and kept a journal. The progression is very interesting. I do have older versions though; I may use them to illustrate some points in future blog posts (when I figure out how to set up a blog :)).